Managed triplets improve perovskite LEDs

Researchers in Japan have found that the performance of light-emitting diodes (LEDs) made from quasi-2D perovskites depends on the behaviour of excitons within the material. This advance in our understanding could make it easier to develop more efficient perovskite-based optoelectronic devices.

2D and quasi-2D perovskites are often touted as an alternative to silicon in optoelectronics. 2D perovskites are made up of stacked sheets of alternating organic and inorganic layers. Quasi-2D perovskites are slightly different in that they contain small regions where organic and inorganic materials alternate in all directions, as is the case in 3D perovskites. The quasi-2D versions also incorporate two different types of organic material: one for the 3D domains and one for the 2D organic sheets between them.

In the new study, researchers led by Chihaya Adachi of Kyushu University’s Center for Organic Photonics and Electronics Research studied how changes to the composition of the organic sheets affect the efficiency of LEDs made from these quasi-2D perovskites. The perovskites in question have an A2Bn−1CnX3n+1 structure, where A is phenylethylammonium (PEA) or 1-naphthylmethylamine (NMA), B is methylammonium (MA) or formamidinium (FA), C is lead or tin and X is chlorine, bromine or iodine. In particular, Adachi and colleagues investigated quasi-2D formamidinium lead bromide perovskites incorporating PEA or NMA (PEA2/NMA2FAn−1PbnBr3n+1).

Organic vs inorganic

As 2D and quasi-2D perovskites contain a combination of inorganic and organic components, there has been much discussion about whether their semiconducting properties are better described by models for inorganics or organics. In organic semiconductors, electrically injected charges come together to form a bound, energetic state called an exciton. This exciton may exist in a singlet state, which has no net spin because the contributing electron spins point in opposite directions, or in a triplet state, in which the spins point in the same direction. In either case, the energy in the exciton can then be released as light via a process known as radiative recombination. However, triplets generally have a lower energy than singlets, and because their radiative decay to the ground state is spin-forbidden, they emit far less light.

After other researchers reported the existence of excitons in some perovskites, Adachi’s team decided to look at the material from an organic semiconductor perspective. One key property of organic LEDs is that the triplet energies of periphery materials (that is, materials other than the active light-emitting ones) are higher than that of the emitting layer itself. This means that when radiative recombination occurs, the energy from the triplet is transferred from the periphery to the emitting layer, rather than being lost.

To see whether their quasi-2D perovskite behaves in a similar fashion, the researchers examined how the triplet energy of one of the organic components (PEA or NMA) in the perovskite LED affects how much light it emits. They found that when NMA is used and the triplet exciton energy is low, light-emitting performance significantly drops. They also found that replacing the NMA with PEA, which has a higher triplet energy, improves performance.

Diverted triplet energy is not lost

“This, along with other studies on the energy dynamics in films and devices, convinced us that [in the NMA device] energy in triplets was being lost to an organic component that does not emit light,” explains study lead author Chuanjiang Qin. In the PEA device, in contrast, the researchers see a pattern similar to that in organic LEDs, with the energy from triplet states ending up in the lower-energy light-emitting part of the material. “Quenching of triplets by the organic is thus a major energy loss path that we must take into account when thinking about applications,” Qin concludes.

The researchers also found that the energy gap between the singlet and triplet states in their PEA-based perovskite is small, at just 0.02 eV. This means that triplets can “upconvert” into singlets by absorbing a small amount of (thermal) energy from the environment. The number of light-emitting singlets can thus be increased. “Converting all the triplets in the material into singlets would mean that all the energy available could be harvested for light emission,” Qin tells Physics World. “Indeed, the external quantum and current efficiencies of our green (527 nm) devices reached 12.4% and 52.1 cd A−1, respectively.”
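
For context, this gap is smaller than the thermal energy available at room temperature, which is why such upconversion is plausible. The snippet below is a back-of-the-envelope check of our own, not a calculation from the paper:

```python
import math

# Illustrative check (not from the paper): thermal energy at room temperature
# comfortably exceeds the reported 0.02 eV singlet-triplet gap.
k_B = 8.617e-5   # Boltzmann constant in eV/K
T = 300          # room temperature in kelvin
gap = 0.02       # reported singlet-triplet gap in eV

kT = k_B * T     # about 0.026 eV
print(f"kT = {kT:.3f} eV, Boltzmann factor exp(-gap/kT) = {math.exp(-gap / kT):.2f}")
```

With the Boltzmann factor close to 0.5, thermally driven triplet-to-singlet upconversion is energetically within easy reach.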

Favourable properties

Perovskites of this type can be fabricated from low-cost starting materials in simple solution-based processes, and their high colour purity and performance make them one of the most promising thin-film LED materials available. Another favourable property is that charge carriers (electrons and holes) diffuse through them quickly and over long distances. Overall, the light-emitting efficiencies of diodes made from these materials have skyrocketed from an initial 0.1% (in 2014) to nearly 20% recently, bringing them on a par with established technologies. Researchers would like to continue improving this figure, but to do this they need to find out what exactly influences this efficiency – especially for green light emission, which is a key colour for displays and other applications.

The results from this study provide us with a better picture of how to achieve highly efficient LEDs based on perovskites, Qin says. “These materials show great promise for solar cells, transistors and other electronics devices, so a better understanding of their physics will help in the future development of these applications.”

The work is detailed in Nature Photonics.

Researchers discover an origin of dendrite growth in lithium-metal batteries

Three pioneers of lithium-ion batteries shared the 2019 Nobel Prize for Chemistry, but after four decades of research the technology’s implementation has not been without obstacles. Lithium metal, an ideal anode material thanks to its high energy density, was used in the earliest lithium battery prototypes. However, this design was shelved because of serious safety issues: needle-like structures called dendrites form around the anode, leading to short-circuiting and electrolyte depletion.

Now, researchers at the Pacific Northwest National Laboratory, led by Chongmin Wang, have made significant strides in understanding lithium dendrite formation. Using in situ growth studies, they gained valuable insights for the development of safe lithium batteries.

Ion conductivity

During battery operation, the anode reacts with the surrounding electrolyte to form a porous solid electrolyte interphase layer that encases the anode. The properties of the interphase layer have a strong influence on lithium dendrite formation and morphology. Wang and colleagues used correlative transmission electron microscopy and atomic force microscopy to monitor dendrite growth under different environmental conditions and external stressors. They discovered that dendrite formation is inhibited when the interphase layer has high ionic conductivity: lithium ions are less prone to deposit as dendrites if the layer conducts them rapidly.

“Slow movement of ions in the interphase layer leads to localized deposition and then needle-shaped growth,” said Wang. “Our work directly proves the correlation between dendrite morphology and its growth environment.”

Poisoned electrolyte

These real-time observations were made in a gaseous environment, but real-world batteries and cells employ liquid electrolytes. The researchers verified their ion conductivity hypothesis in such liquid cells. When they “poisoned” the organic liquid electrolyte with ethylene carbonate, the resulting solid electrolyte interphase layer possessed lower ionic conductivity, and dendrite formation was observed once again.

Wang and colleagues propose two strategies to increase the safety levels of lithium-metal batteries. The first is to tailor the electrolyte such that only decomposition products with high ion mobility are formed in the solid electrolyte interphase. The second is to use a stiff barrier to suppress the elongation of dendrites.

The team plans to further optimize the chemical properties of the electrolyte to solve the dendrite problem once and for all. Nevertheless, they admit that the realization of rechargeable lithium-metal batteries is much more complex than eliminating the dendrite problem.

“The lithium battery is a system and all its components are interdependent,” says team member Yang He. “Changing the electrolyte to prevent dendrite growth can inadvertently alter other aspects of the battery such as the cathode-electrolyte interactions. But our work provides a guideline for screening different electrolyte recipes.”

Preventing dendrite growth would be a significant step towards harnessing the full energy density and capacity of pure lithium, to realize green, efficient, and most importantly, safer batteries.

The research is described in Nature Nanotechnology.

City heat hits the poor hardest

The strength of the urban heat island (UHI) effect varies both between and within cities, and is usually greatest in poorer neighbourhoods, say researchers in the US and Singapore. Tirthankar Chakraborty, at Yale School of Forestry and Environmental Studies, and colleagues combined census data and satellite imagery to map the relationship between the UHI and income in 25 cities around the world. They found that low socioeconomic status often goes hand in hand with a greater density of built-up structures and an absence of vegetation – two factors that amplify the UHI effect. The result means that poorer residents could be more vulnerable to health problems that are exacerbated by heat stress.

The impact of industrial civilization on the climate is usually discussed in terms of the planet-wide changes caused by greenhouse-gas emissions. But as a growing proportion of the world’s population resides in towns and cities – more than two-thirds by 2050 – increasing numbers of people are experiencing another, more local effect.

The UHI effect arises because built-up areas, dominated by concrete and tarmac, absorb and release solar radiation differently from the countryside. When urban environments, with their artificial surfaces, replace trees and other vegetation, they also lose the benefit of evaporative cooling, which moderates the temperature in rural landscapes. Combined with the heat produced directly by human activity, from transportation or air-conditioning, for example, this means that cities are often a few degrees warmer than their surroundings. While this can be a benefit in winter, when it mitigates extreme cold snaps, it can have major adverse effects in summer by intensifying heatwaves.

“Because of the way heat stress works, incremental increases in heat exposure (both acute and chronic) can lead to disproportionately worse health outcomes for urban residents, especially those in hot and humid climates,” says Chakraborty.

To map the severity of the effect, Chakraborty and colleagues used measurements of the land surface temperature based on satellite observations of emitted infrared radiation. The measurements covered 25 cities from a range of climatic zones, with each city divided into between 12 (Berlin, Germany) and 297 (Detroit, US) discrete neighbourhoods. The average income for each neighbourhood came mainly from census and survey data compiled for the Urban Environment and Social Inclusion Index.

The researchers compared the daytime and night-time temperatures in each neighbourhood with a local reference value, which they took to be the average from the non-urban areas within each city’s boundary. The terrain composing these non-urban areas included vegetation of various types, bare soil and permanent snowpack. They also used the ratio of reflected red and near-infrared light to define the vegetation index for each neighbourhood, and the ratio of near-infrared to short-wave infrared radiation as a proxy for built-up density.
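
To make the quantities concrete, the sketch below computes per-neighbourhood indices of the kind described above. The index formulas shown (normalized difference vegetation and built-up indices) are common choices and the numbers are placeholders; the study’s exact definitions may differ:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: a standard vegetation proxy."""
    return (nir - red) / (nir + red)

def built_up_index(nir, swir):
    """Normalized difference built-up index: a common proxy for built-up density."""
    return (swir - nir) / (swir + nir)

def uhi_intensity(neighbourhood_lst, rural_lst):
    """Surface UHI: neighbourhood temperature minus the non-urban reference average."""
    return neighbourhood_lst - np.mean(rural_lst)

# Placeholder reflectances and land-surface temperatures for three neighbourhoods
red = np.array([0.12, 0.09, 0.15])
nir = np.array([0.30, 0.45, 0.22])
swir = np.array([0.35, 0.25, 0.40])
lst = np.array([305.0, 302.5, 307.0])              # kelvin
rural_reference = np.array([300.0, 301.0, 299.5])  # non-urban pixels within the city boundary

print(ndvi(red, nir))
print(built_up_index(nir, swir))
print(uhi_intensity(lst, rural_reference))
```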

Chakraborty and colleagues found that in 18 of the 25 cities studied – including both poorer and richer cities – the strongest UHI effect was seen in low-income neighbourhoods. Variation in vegetation coverage between neighbourhoods was usually the cause, but in coastal cities there was another factor: high-income areas are often found near the waterfront, while low-income areas are in the hinterland, away from the sea’s moderating effect.

“From a policy perspective, the UHI is critical because, unlike background temperature, UHI is manageable,” says Angel Hsu of Yale-NUS College in Singapore. This management will be even more important in the future, because not only is the urban fraction of the world’s population increasing, but, according to Hsu, for some cities, the UHI effect itself is becoming stronger.

Chakraborty and colleagues reported their findings in Environmental Research Letters.

Cells thrive in blood-based bioink

A bioink made from human platelet derivatives and plant-based cellulose nanocrystals (CNCs) can form 3D printed structures with exceptional cell-sustaining properties. Researchers at the 3B’s Research Group at the University of Minho and at the University of Oklahoma report that stem cells encapsulated in the new nanocomposite bioink during printing thrive and proliferate without the need for animal-derived serum supplements. The result points to a way of producing highly biocompatible, “xeno-free” artificial tissues for medical or research purposes (Biofabrication 10.1088/1758-5090/ab33e8).

Currently, a wide range of techniques and bioinks are being applied to the goal of 3D-printing artificial tissues and organs. Although each approach differs in terms of the deposition method and the mix of cells, biomaterial and bioactive molecules that compose the bioink, what they all have in common is that they are constrained by the “biofabrication window”.

“The biofabrication window concept refers to the compromises that have traditionally been made to design bioinks with reasonable print fidelity while maintaining cytocompatibility,” the authors tell Physics World. “For example, high shape-fidelity 3D constructs can be achieved with high polymer concentrations or crosslink densities, but these will produce a dense network that limits cell migration, growth and differentiation.”

As well as inhibiting cell proliferation through their dense protein networks, typically these structures also fail to reproduce the fibrous architecture of natural tissues. As an alternative, some researchers have turned to scaffolds of decellularized extracellular matrix: naturally grown animal tissues from which the cells have been stripped away. The problem with this approach is that some of the original biological material can remain on the resulting constructs, as well as residues of the chemicals used to remove the cells.

The bioink that the team has developed is based on human platelet lysate (PL) – a widely available mixture of proteins and growth factors obtained from blood – meaning that there is no risk of adverse reactions to foreign biological material. In their method, a PL solution is extruded from one barrel of the printer, while a combination of aldehyde-functionalized CNCs and thrombin – an enzyme that triggers blood clotting – is extruded from the other. When the two solutions meet at the print nozzle, a “coagulation cascade” is initiated, yielding a hydrogel formed from fibrin proteins crosslinked and stabilized by the functionalized CNCs.

Constructs printed in this way possess a protein network that reproduces the multiscale structure of natural tissues, making them especially benign environments for cells. “Remarkably, unlike currently available systems, this nanocomposite bioink can self-support 3D cell cultures in serum-free conditions, promoting fast cellular densification and remodelling of the printed constructs,” say the researchers.

With such a high degree of cytocompatibility, it is no surprise then that the team’s PL-based bioink, which they call HUink, compromises on shape fidelity. Both the PL and CNC components of HUink exhibit low viscosities individually, and even after the slow process of polymerization and crosslinking (10–20 min), the resulting hydrogel is not strong enough to support freestanding structures.

HUink composition and bioprinting

While this is good news for cells suspended in the PL solution – low shear stresses during printing mean that they survive the process unharmed – it means that the standard layer-by-layer bioprinting approach cannot be used. Instead, the researchers embed 3D structures in a supporting bath of agarose, which they wash away once the hydrogel has cured.

As promising as the results may be, the researchers are realistic about the immediate applications of the research. “Despite the hype, we are still far from bioprinting functional organs for transplantation or regeneration in vivo,” they say. “The development of an advanced bioink (i.e. a biologically and biomechanically complex material) will certainly be a technological challenge in the coming years. In our opinion, the next step will be adding a layer of biological complexity to the HUink system. For example, the incorporation of different cell types or bioactive cues to explore a specific type of engineered tissue (e.g. bone or tendon).”

Putting physics in plain language

Radiation therapy has been a mainstay of cancer treatment for more than a century. In recent years, a newer method – one that uses beams of protons instead of X-rays to target and destroy cancerous tissue – has emerged. This method is known as proton therapy, and it has some important advantages over conventional radiotherapy. In particular, proton beams can be tuned so that they deposit nearly all of their energy at the tumour, avoiding surrounding healthy tissue – thereby both minimizing complications and allowing for higher tumour dose. The downside is that the accelerators used to create proton beams are massive, expensive and require specialist knowledge to install and operate. For more than a decade, Thia Keppel has worked to overcome these barriers, using her expertise in nuclear physics and business to help start proton-therapy centres around the world.

How did you become interested in physics?

I went to a small liberal arts college where the focus is on philosophy (St John’s College in Annapolis, Maryland, US), and at some point I got a bit frustrated. We would discuss deep questions at length in class, and I would think, “There’s got to be a way to answer some of this, rather than just discourse, logic, opinion. Can’t we test something?” Physics seemed to be a place where people were striving to provide concrete answers to big questions, so I looked for summer internships in physics, and to my surprise – because of course they had actual physics students applying too – I got one.

My internship was with a group of plasma physicists who had created a model of the solar magnetic flux cycle, where motion of plasma within the Sun causes it to flip its north and south magnetic poles every 11 years or so. They wanted an “artsy” person to make a movie based on their model so that people could visualize the plasma motion that causes sunspots and makes the Sun flip its poles. Nowadays you could do this with Flash animation on a regular PC, but back then it required a specialized computer system. I had to open the box and get the system running, learn how to make movies with it, input observatory data and create a movie – and I loved it. I liked learning the physics, I liked being sent off on my own, and it turned out I even liked the programming.

My other path into physics is that I worked a bit on cars for fun. That’s a legacy from my father, who built performance cars in a local garage and raced them. I like experimental hardware of nearly any kind. Nowadays this is mostly particle detection systems, but I do still take my car out on the track sometimes.

How did you get involved in proton therapy?

I did my PhD research in nuclear physics at the Stanford Linear Accelerator Center (SLAC), and afterwards came to what is now the Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Virginia to continue my career in nuclear science. One night, I was working late on a scintillating-fibre particle detector, and I realized that a colleague in the lab across from me was building the same type of detector – but for a project in medical instrumentation, not nuclear or particle physics. We started working together, and a few years later I founded something called the Center for Advanced Medical Instrumentation at Hampton University, which is located about 10 miles from Jefferson Lab, and where I held a joint faculty appointment.

I directed the centre for a few years. Our idea was to bring nuclear physics detection techniques into medical applications, and we patented more than a dozen technologies. I’m particularly proud that around three-quarters of those patents were licensed to private industry. We made an effort to work with physicians from the get-go, so that we had reasonable confidence that what we were developing had a substantial chance to make it to medical device manufacturers and ultimately to the clinic.

A few years into that effort, our local medical school asked whether we’d be interested in working with them to start a graduate medical-physics programme. This initiative led us to shift our technology development focus away from diagnostic applications and towards treatment, because that’s where most of the medical-physics jobs are and we were training graduate students. Around that time, someone approached Hampton University’s president about proton therapy. He called me in and, to make a long story short, we realized that between the university and the lab and the local healthcare community, we had the resources and know-how to build our own centre. It ended up being a $200m project. We mutated; our little medical instrumentation centre turned into one of the two largest proton-therapy centres in the world.

What did you do next?

I directed the centre from ground-breaking up to a couple of years into clinical operations, but directing it in the operations era for me wasn’t as much fun as building, instrumenting and commissioning it to prepare for that era. So, when Jefferson Lab had an opening for the leader of one of the four experimental halls, I decided to switch over. Around that same time, a colleague and I also set up a consulting company to help other institutions start their own proton-therapy centres. There are a lot of choices and calculations that need to be made during the start-up phase in terms of equipment, shielding and calibrations that a traditionally trained clinical medical physicist doesn’t necessarily know how to make. This is where we help out, coming alongside the construction and/or clinical teams to ease the transition from design to successful operations. So far, I’ve helped to start 16 proton-therapy centres.

You’ve been awarded a lectureship by the American Physical Society for promoting applications of physics. Can you tell me more about that work?

When I got interested in medical applications of nuclear physics, I found that language was a big barrier to understanding. If you talk to a physician, they may tell you that you have a subcutaneous haematoma, and you need to be able to hear, “Oh, I have a bruise.” There’s a wealth of compellingly interesting stuff going on in medicine, but it needs to be translated. Similarly, I won’t be terribly successful talking to a physician about quantum chromodynamics without explaining that it’s the fundamental theory of how nuclei hold together. Clear translation is requisite to foster a successful flow of communication.

I think that my discourse-based philosophy education has been a help in learning to express ideas clearly and succinctly to people, and I also was able to hone my communication skills when I was running the proton-therapy centre. If you’re going to irradiate people, you must explain carefully and well why that’s a beneficial thing – or at least, why it might be better than traditional radiotherapy or other options for treating their tumour. Once you’re used to explaining things in plain language to potential patients or the public, you can give the same talk in a boardroom.

Do you have any advice for physics students?

This is something that’s particularly near to my heart lately, because my daughter is studying physics. What I said to her that I think all students need to hear is, basically, “Physics is a difficult odyssey, so you really need to love it. Maybe you won’t love it every day, but it should generally be so compellingly fascinating that it feels worth your effort.” If you do love it, stick with it. Buckle down and get through your classes, because after you graduate, you’ll get to decide what to do. There’s a universe of amazing options out there for you.

Maybe you’re a theorist who wants to delve into multidimensional field theory. Or maybe you’re an experimentalist who wants to detect dark matter. Or maybe you want to build the first desktop quantum computer. Whatever it is that you enjoy, you’ll be able to pursue it. Don’t get too focused on the standard academic career path. A lot of people go into industry after they get their bachelor’s physics degree, and a lot of others go all the way through to a PhD but then take their knowledge into an industrial or clinical setting. There are fundamental laws yet to be discovered and applied, many interesting puzzles to solve, new instruments to devise, and a wide variety of career paths to take you to them.

Neural networks extract information from sparse datasets

How did you get the idea for your company?

I was chatting to a materials science PhD student in a pub a few years ago, and he started telling me about some mathematical problems his group was facing. They were trying to use neural networks to predict the properties of new materials as a function of their composition, and I showed them how to use a tool called a covariance matrix to calculate the overall probability that a new material will satisfy various requirements – strength, cost, density and so on – at once. By doing that, we were able to design several new metal alloys, which are now being tested by Rolls-Royce.
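
One way to picture that covariance-matrix step is as a probability estimate over correlated property predictions. The sketch below uses a Monte Carlo estimate with made-up means, covariances and requirement thresholds; it illustrates the idea rather than reproducing the group’s method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Predicted mean and covariance for (strength in MPa, cost in $/kg, density in g/cm^3).
# These numbers are purely illustrative.
mean = np.array([950.0, 28.0, 8.1])
cov = np.array([
    [900.0, -5.0,  0.3 ],
    [ -5.0,  4.0,  0.05],
    [  0.3,  0.05, 0.01],
])

# Requirements: strength > 900 MPa, cost < 30 $/kg, density < 8.3 g/cm^3
samples = rng.multivariate_normal(mean, cov, size=100_000)
meets_all = (samples[:, 0] > 900) & (samples[:, 1] < 30) & (samples[:, 2] < 8.3)
print(f"Estimated probability of meeting every requirement: {meets_all.mean():.2f}")
```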

At that point, I began to investigate ways of getting even deeper insights into material properties. Certain physical laws, like the fact that electrical conductivity is proportional to thermal conductivity, or that the tensile strength of a material is roughly one-third of its hardness, are very powerful for predicting how a material will behave. However, because we set up our neural networks to always extrapolate from composition to property, we weren’t exploiting property–property correlations. So I changed the algorithm so that the neural network could capture that additional information, and we used it to design materials that can be used in a 3D printing process called direct metal deposition. We only had 10 experimental data points for how well materials could be 3D printed, but we were able to take that small amount of data and merge it with the huge database of how weldable different alloys are, which is an analogous property. The resulting extrapolations guided our design of new materials.

What happened next?

The direct metal deposition project exposed me to the idea that there might be new opportunities in merging sparse databases (like the one for 3D printability) with full ones (like the one for weldability), so the next step was to develop a much more comprehensive method for doing that. The mathematical inspiration for this method comes from many-body quantum mechanics, where something called the Dyson formula is used to calculate the Green’s function for an interacting particle in terms of the Green’s function for a non-interacting particle and a self-energy term that captures the effect of one particle interacting with another. We’re able to make an analogy in which the Green’s function of an interacting particle is like a prediction of a full material property, while the Green’s function of a non-interacting particle is like an “empty” data cell, for which we just make a naïve guess about what the value might be. Then our neural networks use the quantity we know to guide the extrapolation of the quantity we don’t. This enables us to merge experimental datasets, which are sparse, with some first-principles computer simulations and molecular dynamics simulations, which are complete.
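
In spirit, though not in implementation, the procedure resembles the sketch below: start from a naive guess for every empty cell, then correct it using a correlated, fully measured property. The data are synthetic, the property names echo the earlier weldability/printability example, and the correction here is a simple linear fit rather than Intellegens’ neural network:

```python
import numpy as np

rng = np.random.default_rng(1)
n_alloys = 200

# Fully measured property (analogous to the weldability database)
weldability = rng.normal(0.0, 1.0, n_alloys)
# "True" sparsely measured property (analogous to 3D printability), correlated with it
printability_true = 0.8 * weldability + rng.normal(0.0, 0.3, n_alloys)

# Only ten alloys have actually been measured for printability
measured = np.zeros(n_alloys, dtype=bool)
measured[rng.choice(n_alloys, size=10, replace=False)] = True

# Step 1: naive guess for every empty cell (the "non-interacting" starting point)
estimate = np.full(n_alloys, printability_true[measured].mean())
estimate[measured] = printability_true[measured]

# Step 2: correct the guess using the property-property correlation (the "self-energy")
A = np.column_stack([weldability[measured], np.ones(measured.sum())])
slope, intercept = np.linalg.lstsq(A, printability_true[measured], rcond=None)[0]
estimate[~measured] = slope * weldability[~measured] + intercept

rmse = np.sqrt(np.mean((estimate[~measured] - printability_true[~measured]) ** 2))
print(f"RMSE of corrected guesses on the unmeasured alloys: {rmse:.2f}")
```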

We also noticed that there is often a lot of information hidden in the “noise” within data. Again, we know this from many-body physics, from the physics of critical phenomena that occur in low-temperature solid-state systems, and from renormalization group theory, where the large-scale fluctuations in one physical quantity can be related to the mean expectation value of a different physical quantity. Physicists have developed a lot of maths to capture that knowledge, and if I port that across to our neural network, we can use the uncertainty in one quantity to tell us the mean value of another. That’s been helpful for interpreting microstructures and phase behaviour in materials.

These techniques have many possible uses, and although I’ve worked on a few of them in my capacity as a researcher at the University of Cambridge – collaborating first with Rolls-Royce, and later with Samsung to design new battery materials and BP to design new lubricants – I eventually decided that I needed to form a spin-out company to really drive them forward.

What was the spin-out process like?

Initially, I was put in touch with Cambridge Enterprise, which is the university’s commercialization arm. They introduced me to several local business angels. I took each of them out to dinner, worked out what they thought the opportunities were and tried to understand what they’d be like to work with, and eventually selected an angel called Graham Snudden. Working with Graham helped me to understand our business plan, and he also introduced me to a former employee of his, Ben Pellegrini, who became my co-founder and the CEO of our spin-out, Intellegens. Ben had experience of working at smaller companies, and he had worked in software, which is a complementary area to my own skillset and absolutely core to our business strategy.

Ben Pellegrini: When I first met Gareth, he was running the algorithm through a terminal prompt on the university computer centre. He was always very enthusiastic and very bright, and I could see that there was real interest and value in what he was doing, but it was hard – I had to meet him a few times before I understood that when he was moving data around, he was generating interesting results. The big question was how to transform this tool from something that a specialized user can engage with at the command line into something your average engineer or scientist in a clinical lab or materials company can use. That’s a challenge I enjoy.

How did you get funding?

BP: For the first six months, I was based in my kitchen and Gareth was doing work for Intellegens in the evenings. Then we got some money from Innovate UK to get us going with a proof-of-concept project, plus a little bit of money from Cambridge Enterprise and from Graham, who (as Gareth mentioned) is a local angel investor. We’ve also been quite lucky in that we can run consultancy-style projects to generate income as we’re going along.

What are some of the projects you’ve worked on?

GC: We’ve been pushing hard on the problem of designing new drugs. The basic question is, if you inject a drug into a patient, which proteins will react to it? Does the drug activate them or inhibit them? There are about 10,000 proteins in the body and about 10 million drugs that you can test, so if you imagine a huge matrix where each column is a different protein and each row is a different drug, the dataset is only about 0.05% complete, because it’s impossible to conduct experimental tests on that many drug-protein combinations. It’s the ultimate sparse dataset.

However, we do have information about the chemical structure of every drug and every protein. That’s a complete dataset. Our goal is to marry the complete dataset of chemical knowledge to the sparse dataset of protein activity and use it to predict the activities of proteins. We can do this by taking advantage of protein-to-protein correlations and protein-to-drug-chemical-structure correlations. It’s very similar to what we were doing with materials for 3D printing, where weldability is a complete dataset and 3D printability is a sparse dataset.

The business has now moved to the stage of licensing machine learning as a product. For drug discovery, Alchemite is marketed through Optibrium, and there has already been enthusiastic take-up by Big Pharma. For materials discovery, Intellegens is licensing a full-stack solution direct to the customer, with the first sales now complete.

BP: We’re also talking to people who work on infrastructure, trying to understand gaps in maintaining things like bridges or equipment. In a transport network, for example, you may or may not have data on relevant factors such as weather, geography, topology, road composition and pedestrian use at specific points in the network, so you end up with very big, sparse datasets. We’re working on patient analytics as well, trying to predict optimum treatment profiles from sparse sets of historical patient data. Again, we may or may not have the same data available for all patients, but we have a combination of data points, and trying to learn from all the data points we have seems to give us an edge in suggesting possible routes of treatment.

I would like to point out, though, that there’s a lot of hype around artificial intelligence (AI) and deep learning at the moment, and that’s a double-edged sword for us. It’s getting us a lot of interest, but we have a special – maybe even unique – academically driven toolset that solves problems in a new way, and that can sometimes get lost in the noise about AI-based voice recognition or image recognition.

How is your technology different?

The main differentiator is our ability to train models from incomplete data. The usual methods for training an AI or a neural network require lots of good-quality training data to make good models for future predictions. In contrast, the driver for our algorithm is that we don’t have enough data for an AI to learn the correlations and build a model on its own. I think that’s our unique selling point. Everyone talks about “big data”, and you sometimes hear people complain about it – “Oh, I’ve got big data, I’ve got too much data to deal with.” But when you home in on a specific use case and look at it in a certain way, you realize that in fact, their problem is that they don’t have enough data, and they never will. At that point, we can say, well, given that you haven’t got enough data, we can use our technology to learn from the data you have, and use that information to help you make the best decisions.

What’s the most surprising thing you’ve learned from starting Intellegens?

BP: This is the first time I’ve worked closely with academics, which has been interesting (in a good way). I’d worked in software start-ups before, so I was used to dealing with experienced software people who are familiar with the tools and processes of commercial software. Academic software sometimes needs a bit more finessing to get it into a commercially stable product, in terms of source control, release management and documentation. It might sound like quite boring stuff, but if you’re going to be selling a product and supporting it, it becomes critical.

GC: I was surprised to learn that the process of getting contracts depends so much on word of mouth. I give talks at conferences, potential customers come up to me afterward, and then one customer introduces us to the next one, like stepping stones.

I also didn’t fully understand the different reasons why people might want to engage with a business like ours. Some people really want to bring in the latest technology to give their company a competitive advantage. Others want to be associated with using a technique that’s right at the bleeding edge. And some are interested in working with entrepreneurs because they personally want to buy in to the adventure and the excitement of a smaller company.

  • Gareth Conduit is a Royal Society University Research Fellow at the University of Cambridge, UK, and the chief technology officer at Intellegens, e-mail gjc29@cam.ac.uk. Ben Pellegrini is the chief executive officer at Intellegens

The physics of lawn sprinklers, the hazy mist of the Standard Model, careers in medical physics

Here in Bristol the climate is ideal for a carefree lawn – it rarely gets very hot and we usually get enough rain to make it through the summer without the need for a lawn sprinkler. As a result, I miss the hiss of sprinklers that were part of my youthful summers in Canada. In Wired, the physicist Rhett Allain looks at the fascinating physics of lawn sprinklers – and the optical illusions they can create – in “The mesmerizing science of garden sprinklers”.

Unfortunately, Allain does not look at the physics of impact sprinklers (see above image), which used to fascinate me as a child with their seemingly chaotic behaviour.

If I had a penny for every time I wrote a phrase like “…this new experiment could provide tantalizing hints of what physics lies beyond the Standard Model” I would be a pound or two richer. In “The once and present Standard Model of elementary particle physics” James Wells of the University of Michigan looks into the “hazy mist” of the model since its birth with the discovery of charm in 1974 (Wells’ definition, not mine).

One chapter is called “Facts, mysteries and myths”, which advocates constructing a myth for how neutrinos obtain mass and describes the cosmological constant, dark matter, baryogenesis, and inflation as four “mysteries of the cosmos”.

If you have given up on improving the Standard Model, you might fancy becoming a medical physicist. There is a nice article in Symmetry that asks five former particle physicists why they chose that career path and what they do now as medical physicists.  You can read more in “Transitions into medical physics” by Catherine Steffel.

Ultrasound device creates an audio, visual and tactile 3D display

An ultrasound-powered 3D visual display that can also produce audible sound and holograms that you can touch has been unveiled by researchers at the University of Sussex. The team used the display to produce 3D images such as a torus knot, a globe, a smiley face and letters, as well as a dynamic countdown of levitating numbers.

The display is a type of sonic tractor beam, which uses ultrasound transducers to create acoustic holograms that can trap and manipulate objects in mid-air. The Sussex device uses two arrays of 256 speakers to levitate a single polystyrene bead, which traces out 3D images in mid-air while illuminated by coloured LEDs. The bead can move at speeds of almost 9 m/s (in the vertical direction), which is so fast that an image is drawn in less than 0.1 s. This creates the illusion of a single 3D image in much the same way as a cathode-ray tube creates a 2D image in an old television by rapidly scanning an electron beam across a phosphor screen.

“Our new technology takes inspiration from old TVs,” explains Ryuji Hirayama of the University of Sussex. “Our prototype does the same using a coloured particle that can move so quickly anywhere in 3D space that the naked eye sees a volumetric image in mid-air.”

Amplitude and phase

The display creates sound by vibrating the polystyrene bead at audible frequencies. This is possible because the device uses different elements of the ultrasound signal for levitation and vibration. Sussex’s Sriram Subramanian explains that the ultrasound phase information is used to create the levitation traps, which means that the amplitude is free to be used for other applications. In the Sussex display, amplitude modulation is used to generate audible sound.
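
The division of labour between phase and amplitude can be sketched in a few lines. Everything below, from the carrier frequency to the element phases and the audio tone, is a placeholder chosen for illustration rather than taken from the Sussex device:

```python
import numpy as np

fs = 1_000_000       # sample rate in Hz (placeholder)
f_carrier = 40_000   # ultrasound carrier frequency in Hz, typical for levitation arrays
f_audio = 440        # audible tone impressed on the amplitude, in Hz
t = np.arange(0, 0.01, 1 / fs)

# Per-element phases that would come from solving for an acoustic trap (placeholders here)
element_phases = np.linspace(0, 2 * np.pi, 256, endpoint=False)

# Shared amplitude envelope: a shallow modulation at an audible frequency
envelope = 1.0 + 0.2 * np.sin(2 * np.pi * f_audio * t)

# Drive signal for each element: the phase sets the trap, the amplitude carries the sound
signals = envelope * np.sin(2 * np.pi * f_carrier * t + element_phases[:, None])
print(signals.shape)  # (elements, samples)
```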

To add tactile feedback, the device creates a secondary set of acoustic holograms, which produce enough pressure for you to feel them. But this does have an impact on the performance of the device, which switches between levitation and tactile holograms, producing levitation traps 75% of the time and tactile feedback 25% of the time. When the device produces just a visual display, it can move the polystyrene particle horizontally at speeds of 3.75 m/s, but this drops to 2.5 m/s when tactile feedback and audio are added.

Subramanian says that the most significant part of this work is the speed at which they can now move a levitated object. Previous displays held particles for a few milliseconds in each new position before moving them again — but the new device keeps them moving all the time. “The particle is always accelerating and that is how we get the speed,” Subramanian explains.

Harry Potter theme park

Subramanian believes that one of the most obvious applications of the display is at a theme park, using the example of a Harry Potter experience. “As a kid you can walk up to the system, you can hold your hand out and you can start feeling a magic spell in your hand. And then there is a fireball that is bubbling in front of you. You can have a very magical experience,” he says.

But there are other non-display applications for these techniques. For example, the researchers show that the device can be used to manipulate liquids and Subramanian says that this could, for example, be used to create 3D printers that manipulate different liquids simultaneously, to print a multi-material object. “We haven’t tested these things, but I think they are future projections of what we could do,” he explains.

Tatsuki Fushimi of the University of Bristol, who earlier this year unveiled a similar display without the tactile and audio elements, says that he is very impressed by the work.

“The future of acoustophoretic volumetric displays is very bright and this [research is] a significant step towards turning this science fiction idea into reality,” Fushimi says. “They were successful in enlarging the screen size of the display by increasing the number of ultrasonic emitters (we used 60 whereas their setup used 512). This means that the particle can be displaced along a larger region of space and with a greater velocity. There are many things to be done before commercialization, but it is exciting to think about the future of acoustophoretic volumetric displays, and I am sure that interesting applications will emerge as we further improve the performance of these devices.”

The display is described in Nature.

Ultrafast 3D ultrasound wins journal citations prize

A research paper describing an ultrasound imaging technique that can produce ultrafast 3D videos has won its authors the 2019 Physics in Medicine & Biology (PMB) citations prize. This annual prize recognizes the PMB paper that received the most citations in the preceding five years.

The paper, 3D ultrafast ultrasound imaging in vivo, was written by researchers from Physics for Medicine, formerly Institut Langevin (ESPCI ParisTech, CNRS, INSERM and PSL Research University) in France. The winning study, which also won the Roberts Prize for the best paper published in PMB in 2014, describes the first implementation of a novel ultrasound technique that produces 3D videos at thousands of frames per second.

The researchers achieved this high imaging rate by extending their previous work on 2D ultrahigh-frame-rate ultrasound imaging to three dimensions. To do this, they used diverging or plane waves emitted from a sparse virtual array located behind the probe. They designed a customized portable ultrasound system that samples 1024 independent channels and drives a 32×32 matrix-array probe. Graphics processing units were employed to speed the processing of the backscattered signals.
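
The per-voxel processing that those graphics processing units accelerate is, at heart, delay-and-sum beamforming. The sketch below shows the idea for a single plane wave and a 32×32 matrix array using synthetic radio-frequency data; it is a simplified illustration, not the authors’ implementation:

```python
import numpy as np

c = 1540.0      # speed of sound in soft tissue, m/s
fs = 10e6       # radio-frequency sampling rate, Hz
n_x = n_y = 32  # 32 x 32 matrix-array probe
pitch = 0.3e-3  # element pitch, m (placeholder)

# Element positions in the z = 0 plane
ex, ey = np.meshgrid((np.arange(n_x) - n_x / 2) * pitch,
                     (np.arange(n_y) - n_y / 2) * pitch, indexing="ij")
elements = np.column_stack([ex.ravel(), ey.ravel(), np.zeros(n_x * n_y)])

# Synthetic channel data: (elements, samples); in practice this comes from the probe
rf = np.random.default_rng(2).normal(size=(elements.shape[0], 2048))

def beamform_voxel(voxel):
    """Delay-and-sum one voxel for a plane wave transmitted along +z."""
    t_tx = voxel[2] / c                                  # plane-wave transmit delay
    t_rx = np.linalg.norm(elements - voxel, axis=1) / c  # per-element receive delays
    idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()         # coherent sum over all elements

# Beamform a short line of voxels at 30 mm depth
voxels = [np.array([x, 0.0, 30e-3]) for x in np.linspace(-5e-3, 5e-3, 11)]
image_line = np.array([beamform_voxel(v) for v in voxels])
print(image_line.shape)
```

Repeating this sum for every voxel in a 3D grid, and for every transmitted plane or diverging wave, is what makes GPU acceleration essential at thousands of volumes per second.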

The 3D ultrafast ultrasound system achieved high contrast and resolution. Lead author Mathieu Pernot and colleagues demonstrated its use for several potential applications, including 3D mapping of stiffness and tissue motion, as well as the first real-time imaging of blood flowing through the chambers of a human heart.

Rapid progress

In the years since the paper was published, the field of 3D ultrafast ultrasound imaging has progressed rapidly. Pernot’s team and other groups have used the technique for applications including imaging cardiac blood flow and tissue in the human heart, myocardial fibre imaging, coronary flow imaging and functional brain imaging in animals.

“3D ultrafast imaging remains today a research tool, but the miniaturization of the technology is progressing rapidly and cost-effective solutions are emerging,” says Pernot. “Clinical systems could become available in the next few years.”

As for why the paper attracted so many citations, Pernot suggests that it introduced a transition from ultrasound being perceived as a low-tech imaging modality with high operator dependency to a flexible tool for imaging entire organs with high spatial and temporal resolutions.

“This is a new paradigm for ultrasound imaging,” he says. “3D ultrafast imaging can provide, in quasi-real time, quantitative parameters such as myocardial stiffness or functional connectivity of the brain, which remain challenging to image with other modalities.”

The PMB citations prize is marked with the presentation of the Rotblat medal, named in honour of Sir Joseph Rotblat, PMB’s second and longest-serving editor. “We feel very honoured and proud to receive the Rotblat medal,” Pernot tells Physics World. “Our team, Physics for Medicine, has been pursuing the development of new imaging and therapeutic modalities for many years with the support of our institutions and funding organisations such as the ERC and the ANR. The Rotblat Medal is an important recognition of our efforts to achieve these goals at the highest scientific level.”

  • The winner of the 2019 Physics in Medicine & Biology citations prize is: 3D ultrafast ultrasound imaging in vivo by Jean Provost, Clement Papadacci, Juan Esteban Arango, Marion Imbault, Mathias Fink, Jean-Luc Gennisson, Mickael Tanter and Mathieu Pernot Phys. Med. Biol. 59 L1

Nanowire circuits allow for transparent and flexible LED screens

Researchers in China have fabricated transparent and flexible LED screens using a simple, low-cost manufacturing process based on silver nanowires. Liu Yang and colleagues at Zhejiang University say their technique is an improvement on existing screens, which are too opaque for some applications and can be brittle when deposited on flexible substrates. Their technology could soon bring diverse new capabilities to displays built into the walls and windows of modern buildings.

In recent years, transparent LED screens have become a focus of efforts to make flexible video displays using substrates like glass and clear plastic. Such screens are made from networks of highly transparent conductive circuits that connect their constituent LEDs together. For screens measuring a metre or more, either fluorine-doped tin oxide or indium tin oxide is typically used to construct the circuits. However, networks of this type suffer from several shortcomings, including complex and expensive manufacturing processes as well as brittleness and a lack of transparency.

In order to design an effective alternative, Yang’s team needed to fabricate a network of wires that was dense enough to distribute electric current throughout the screen, but also sparse enough to preserve transparency. This led them to silver nanowires, which have excellent optical transmittance, electrical conductance, and mechanical flexibility. To manufacture their nanowires, Yang and colleagues first coated plastic and glass substrates with sacrificial masks, etched with networks of straight lines. After treatment in a specialized solution, these lines became stickier than the rest of the substrates. A further spray-coating process led to silver nanowires forming only along these sticky lines, creating an intricate network.

Using this technique, the researchers fabricated a series of 25 cm-long, transparent and highly uniform conductive strips on both types of substrate. Through experiments, they showed that these strips possessed both high optical transmittance and low electrical resistance, making them superior to previous tin oxide-based materials. In addition, they demonstrated a screen hosting a silver nanowire circuit as long as 1.2 m, enabling it to emit red, green and blue light under varying biases, as in conventional displays. Finally, they showed that when the circuit was deposited onto a polymer substrate, its performance remained stable even when bent to a radius of 15 mm, confirming its flexibility.

Thanks to these advantages, Yang’s team believes their technology could eventually replace tin oxide-based circuits in transparent display applications. The next steps in their research will include developing coatings to protect circuits from the surrounding environment; enhancing substrate adhesion; and sandwiching circuits between substrates for better protection and maintenance. With these improvements in place, the technology shows significant promise in allowing for widespread and practical smart displays.

The team report their findings in Optical Materials Express.
