Is quantum tech an industry or a community?

A woman standing next to a mannequin head wearing a cap full of magnetic sensors

The question in this headline has been on my mind since Friday, when I travelled to London for the annual UK National Quantum Technologies Showcase. The showcase is now in its fifth year, and I’m reliably informed that the first such event – which took place not long after the UK government released an initial £270m tranche of funding for R&D in applied quantum science – had a definite “community” flavour, with lots of exciting ideas but little in the way of anything concrete, never mind actual commercial products.

Now, however, the line between industry and community is starting to blur. Most exhibitors in the QE2 conference centre were showing off devices, rather than proposals or preliminary data. Take Elena Boto, a research fellow at the University of Nottingham. She and her colleagues have developed a wearable magnetoencephalography (MEG) scanner that uses optical magnetometers rather than bulky and expensive superconducting sensors to monitor magnetic activity in the brain. Their technology makes it possible to build an MEG device that fits over the head like a cycling helmet, and can scan people while they are moving. That’s useful for kids and adults with medical conditions that mean they cannot keep still, and also for studying the brains of people while they are doing motion-based tasks.

Two men standing and smiling in front of a receiver and holding a metal frame containing a prototype single photon source

Another intriguing technology at the showcase, this time in the field of quantum communications, was the receiver for a satellite-based quantum key distribution (QKD) network. The receiver was built by members of the University of Bristol’s Quantum Engineering Technology Labs, and Siddarth Joshi, a senior research associate on the project, told me that the satellite part of this new secure-communications network is due to be launched in 2-3 years.

The quantum sensing sector is also seeing some notable advances. The device in the photo below is a quantum gravimeter made by M Squared Lasers, and it uses cold rubidium atoms to detect small changes in the local gravitational field. The M Squared team developed it in partnership with physicists at the University of Birmingham, and shortly before the showcase, they loaded it onto a barge and sent it around London’s waterways to see how it performed in the field. Sensors of this type could help utility companies detect underground pipes and other types of infrastructure without the need to dig, saving money and time while also making life easier for people who live or commute near sites undergoing maintenance.

A man standing next to a large white box labelled "quantum gravimeter"

But for all the progress on show in the crowded exhibit hall, there is still a big gap between exhibition-worthy gadgets and saleable products. Despite considerable progress, most of the devices on display fell squarely in the former category. M Squared’s chief executive, Graeme Malcolm, told me that his Glasgow-based firm plans to test its gravimeter with Shell, the multinational oil and gas company, to help refine its business model. Similarly, Diviya Devani, a systems engineer at Teledyne e2v, told me that her company will begin trialling its miniature atomic clock with the defence firm Leonardo sometime in 2020. But near-commercial devices like these were very much the exception, not the rule, and the demographics of the exhibit hall reflected this. Around three-quarters of the booths were showcasing the work of researchers in university or government labs, not private companies.

A woman standing next to a podium with an oblong box (the atomic clock) on top of it

The question of how to build a profitable industry on top of a successful research base came up repeatedly in a series of panel discussions at the showcase. For David Delpy, a veteran science administrator and current honorary treasurer at the Institute of Physics (which publishes Physics World), the missing ingredient is a “supply chain” of people with the right skills. “If you have an industry you need the equivalent of technicians and apprentices,” Delpy told the audience. “The EPSRC [Engineering and Physical Sciences Research Council] cannot fund that.”

Another panellist, Paul Warburton, an engineer and nanoelectronics expert at University College London, highlighted the need for more diversity in the field – not only in terms of race and gender, but also in disciplinary backgrounds. In Warburton’s view, the nascent quantum technologies industry would benefit from having more engineers and computer scientists, and perhaps fewer physicists. “All the science has been done in quantum technologies,” he declared. “The challenges for the next 5-10 years are all engineering problems.”

To my mind, though, the most perceptive suggestion came from Mike Muller. As the former chief technology officer of ARM Holdings, the semiconductor and software firm he co-founded in 1990 and which was sold for £24.3bn in 2016, Muller is a self-described outsider in the quantum world. He based his advice, instead, on the rise of exascale computing – that is, supercomputers that can perform 10¹⁸ calculations per second. Building computers in this class required R&D in several fields, including fundamental physics. But instead of simply handing out grants to researchers, Muller said that the US government also acted as the first customer for their innovations. “Customers are wonderful things for any technology,” he observed. Because developers needed to deliver concrete advances to a paying customer, he explained, they had to make compromises rather than simply making their product better in the abstract. What is more, he added, “The act of buying something encouraged lots of people to work together.”

Over the next few years – and thanks to hefty injections of cash in several countries, not just the UK – I expect to see more quantum technologies move out of the lab and into the marketplace. For now, though, my verdict is that the field of quantum technologies exists in a superposition of industry and community, with some of the characteristics of each.

 

Optical fibre device can detect micro-pollutants in real time

Researchers in Germany have developed a novel laser and optical fibre-based technique for measuring micrometre-sized airborne pollutants. They say that their method has the potential to offer real-time, in situ measurement and identification of this ultra-fine particulate matter, which can penetrate deep into the lungs, causing serious health problems (Optics Express 10.1364/OE.27.034496).

Exposure to airborne pollutants that measure less than 2.5 µm (known as PM2.5) has been linked to cardiovascular and respiratory disease, as well as cancer. According to the World Health Organization, these particles – which are around 30 times smaller than the diameter of a human hair – are responsible for an estimated 4.2 million premature deaths every year.

There are many techniques available for measuring airborne pollutants, but they all have drawbacks and most are not very good at monitoring the really small particles, Shangran Xie of the Max Planck Institute for the Science of Light tells Physics World. He adds that most options involve sampling the air and then taking it back to the lab for analysis.

Xie and his colleagues develop hollow-core photonic crystal fibres and work on applications for them, such as wavelength conversion and opto-mechanics. They wondered if some of this fibre technology could find use in pollution monitoring.

“We don’t really work in this area of environmental monitoring, but it just occurred to us that the hollow-core fibre we work with is pretty much ideal, because you can have a cheap laser that focuses its light right into the hollow core and then, through optical forces, any particle that wanders by gets trapped by the laser and pushed into the hollow core,” explains Philip Russell, director of the Max Planck Institute for the Science of Light.

As the laser light and the particle – which is trapped by the laser due to its tiny size – travel through the hollow-core photonic crystal fibre, some of the light is scattered. By measuring the changes in the transmitted light and the time it takes a particle to transit the fibre, the researchers are able to extract useful information about the particles, such as their size and refractive index. And from this data, they can identify them.

Particle analysis

Russell explains that we know what most of the pollutants present in the air are, and we know their refractive indices, so identifying individual particles from this information isn’t too difficult.
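That lookup can be pictured with a few lines of code: once a particle’s refractive index has been fitted from the scattering signal, it is matched against a table of known materials. Here is a minimal sketch in Python – the table entries and the tolerance are illustrative values of mine, not numbers from the paper:

```python
# Hedged sketch: match a fitted refractive index against a table of
# candidate particle types. All values here are illustrative.

KNOWN_PARTICLES = {
    "silica":      1.45,   # approximate refractive index
    "polystyrene": 1.59,
    "soot":        1.95,   # strongly absorbing; treated crudely here
}

def identify(measured_n: float, tolerance: float = 0.05) -> str:
    """Return the candidate whose refractive index is closest to the
    measured value, or 'unknown' if nothing lies within the tolerance."""
    best, best_err = "unknown", tolerance
    for name, n in KNOWN_PARTICLES.items():
        err = abs(n - measured_n)
        if err < best_err:
            best, best_err = name, err
    return best

# Example: a particle with a fitted refractive index of 1.46
print(identify(1.46))  # -> 'silica'
```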

The researchers tested the technique using a range of polystyrene and silica particles, and found that the device could successfully identify and measure different particles, including silica particles that were 0.99 µm in diameter.

Due to the way in which it works, Russell claims that the device should be very robust. The particle gets pushed out the other end of the fibre, so it doesn’t cause any issues by getting stuck, he explains. “The system in principle should run for ever,” he adds. “You have to filter out the large particles, of course, before you actually test the air, as they would mess up the system.”

Xie says that although they have tested materials besides silica and polystyrene, the researchers need to conduct more work on non-transparent particles and real-world examples of pollutants. The team have filed a patent on this technique and plan to develop a prototype for monitoring air pollution outside of the lab.

Porous polymer could help regulate heat and light in buildings

New coating materials that could help cool buildings in the summer, and then change their optical and thermal properties in the winter to keep the same buildings warm, have been created by researchers in the US. The polymer-based materials could also allow daylight to illuminate building interiors.

The energy use of a building can be reduced by coating its exterior with a smart material that reflects sunlight and emits heat in the summer – but can then be switched in the winter to absorb sunlight and be a poor emitter of heat. The challenge for material scientists is to create practical materials that fit this bill.

Now, Yuan Yang and colleagues at Columbia University School of Engineering and Applied Science in New York City have created such materials using porous polymer coatings (PPCs). These are synthetic materials that can adsorb and desorb a wide range of compounds. Their unique properties – including interconnected pore structures, large surface areas, and small pore sizes – make them suitable for many industrial applications. In particular, the variable optical and structural properties of PPCs have found wide uses in coatings for optical and thermal management.

Radiative cooling

PPCs that reflect sunlight and are good emitters of infrared radiation (heat) have proven effective for radiative cooling. On the other hand, PPCs that reflect sunlight but are thermally transparent (are poor infrared emitters) have found use as covering layers for radiative coolers.

The optical properties of PPCs can be tuned by controlling the amount of moisture present in the pores of the material. A similar effect occurs in paper – dry paper has pores filled with air that scatter and reflect light, making the surface appear white. When wet, however, paper is translucent, because the pores are filled with water. This change is a result of the close match between the refractive indices of the material and water, which reduces light scattering within paper.

The newly developed PPCs use this effect to switch between opaque and translucent states for solar radiation. Yang and colleagues have also extended the concept to long-wavelength infrared radiation (LWIR) to modulate the heat radiated by objects.

Similar refractive indices

This exploits the closeness of the refractive indices of fluoropolymers (about 1.4) and typical alcohols (about 1.38). Specifically, white poly(vinylidene fluoride-co-hexafluoropropene) PPCs become transparent upon wetting with isopropanol. This results in large transmittance changes for solar radiation (a ΔTsol of about 0.74, or more than a factor-of-six increase in transmittance) and visible light (a ΔTvis of about 0.80). Similar changes were also observed for other PPCs, such as polytetrafluoroethene, ethyl cellulose and polyethylene (PE).
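The index-matching argument can be checked with the normal-incidence Fresnel reflectance at a single pore interface, R = ((n1 − n2)/(n1 + n2))². A quick back-of-envelope sketch using the indices quoted above (my own check, not a calculation from the paper):

```python
# Back-of-envelope check: normal-incidence Fresnel reflectance at a
# pore interface for a dry (air-filled) versus wet (alcohol-filled) PPC.

def reflectance(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance at an n1/n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_polymer = 1.40   # typical fluoropolymer
n_air     = 1.00
n_alcohol = 1.38   # typical alcohol, e.g. isopropanol

print(f"dry pore: R = {reflectance(n_air, n_polymer):.4f}")      # ~0.028
print(f"wet pore: R = {reflectance(n_alcohol, n_polymer):.6f}")  # ~5e-5
```

Per interface the reflectance drops by a factor of roughly 500, which is why a coating containing many thousands of pore walls switches from scattering white to nearly transparent.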

Furthermore, the research group showed a decrease in the transmittance of LWIR when thermally transparent PE PPCs were wetted with IR-emissive alcohols. The contrasting transmittance changes for PE PPCs (a ΔTLWIR of about –0.64 against a ΔTsol of about 0.33) suggest potential applications in dynamic switches that could, in effect, turn an icehouse into a greenhouse. The observed optical switching was achieved in about 1 min and remained unchanged even after 100 wet–dry cycles.

The research is described in Joule.

 

An electric car future?

There has been a boom in electric vehicle (EV) take-up. Some see this as a wonderful development, removing dirty fossil-fuelled cars from the roads; others, however, are not so sure. Not all of the criticisms that have emerged may be fair: the carbon debt associated with EV and battery manufacture may be high but, it is argued, it will be paid off quickly if EVs run on green power.

Doing the emission sums is complicated, as the answer depends on what energy sources are used for charging as well as for EV and battery manufacture. But overall there are clear emission savings. That’s good news, although it’s not all good news: EVs won’t reduce road congestion, the need for more roads or the demand for more parking space.

The EV boom will also create demand for more power (some say 20% more), which may create problems for balancing and even running the power grid, especially given that peak EV charging times coincide with peak heating demand in winter. There are other mobility options too, including improved public transport with trains, buses and trams powered by green electricity or hydrogen. These might be seen as more socially and environmentally beneficial, particularly given the overall need to reduce demand for travel.

While EVs may not be the best bet, they do have a role to play, and with longer-range versions now emerging, expansion of their use seems inevitable. A recent report by the UK Energy Research Centre suggested that, as part of a wider programme of technical fixes and demand reduction in all transport sectors (including aviation), EVs could help reduce UK transport emissions by around 25% by 2050. This percentage would rise to 50% if lifestyle changes were also made.

Giving back to the grid

The spread of EVs may also open up opportunities to provide storage support for renewables via the so-called “vehicle-to-grid” (V2G) option. In this case, EVs’ batteries could be used to balance the grid and its use of variable renewables. EVs would be charged from the mains supply at home or elsewhere, and at times their batteries could provide a source of power when there are shortages on the grid.

There could be significant advantages from using vehicle-to-grid and associated home-based smart power and storage systems

Some see V2G as a way to convert cars from being an environmental problem into part of the clean-energy solution that would enable variable renewables to spread. V2G would also enable EV owners to earn some income from “renting out” their batteries. Obviously, V2G is only viable where there are grids, and in many parts of the world that is not the case. Where there are grids, however, V2G must overcome real or perceived inconvenience issues. In the worst case, for example, car owners would not be happy to have their EV batteries drained flat when there was a power shortfall on the grid.

However, if drain-limit protections are built in, then the potential battery capacity from millions of EVs linked to the grid could act as a vast distributed grid-stabilizing system. Such a system could also have smart-charging controls, so that EV charging happens only when there is a surplus on the grid, and not at times when home heating demand is high. One way to achieve that would be to charge higher prices for input power at peak demand times. Incentives would presumably also be offered to EV owners for permitting V2G power withdrawals.
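As a toy illustration of the logic involved, the sketch below decides, for each hour, whether a plugged-in EV should charge, discharge to the grid or idle, based on a grid-balance signal and a protected drain floor. This is a hypothetical control loop of my own, not a description of any real V2G product; all thresholds are invented:

```python
# Hypothetical V2G control sketch: charge when the grid has surplus
# power, discharge (down to a protected floor) when it is short, and
# idle otherwise. All thresholds are illustrative.

DRAIN_FLOOR_KWH = 30.0   # never discharge below this (owner protection)
CAPACITY_KWH    = 60.0
RATE_KW         = 7.0    # charger power; one time step = one hour

def v2g_step(battery_kwh: float, grid_balance_kw: float) -> tuple[str, float]:
    """Return (action, new battery level) for one hour."""
    if grid_balance_kw > 0 and battery_kwh < CAPACITY_KWH:
        # Surplus on the grid: absorb it by charging.
        return "charge", min(CAPACITY_KWH, battery_kwh + RATE_KW)
    if grid_balance_kw < 0 and battery_kwh > DRAIN_FLOOR_KWH:
        # Shortfall: feed power back, but respect the drain floor.
        return "discharge", max(DRAIN_FLOOR_KWH, battery_kwh - RATE_KW)
    return "idle", battery_kwh

# Example: an evening shortfall followed by an overnight surplus
level = 45.0
for balance in (-2.0, -2.0, +3.0, +3.0):
    action, level = v2g_step(level, balance)
    print(f"grid {balance:+.0f} kW -> {action:9s} battery {level:.0f} kWh")
```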

There are other timing limits to what EVs and V2G can offer. When EVs return home, typically in the early evening, their batteries would be mostly discharged, so they would be of no use for meeting peak power demand. And although EV batteries could be charged directly using domestic photovoltaic solar, that would only be possible in the daytime, when cars are most likely to be away from the home.

However, from a national energy-system viewpoint, there could be significant advantages from using V2G and associated home-based smart power and storage systems. A report from Imperial College London and OVO Energy – a Bristol-based energy-supply company – found that the use of residential flexible technologies, such as smart EV charging and V2G along with smart electric heating and in-home battery storage, could save the UK energy system £6.9bn a year. That would be equivalent to a £256 saving on the average household energy bill each year. The scenario relied on the uptake of 25 million EVs and 21 million electric-heating units in the UK by 2040, which the report says is ambitious but achievable.

Safety first  

Whether autonomous self-driving cars take off remains to be seen. Safety is a key issue, with some high-profile and fatal accidents having occurred with early versions. Not everyone is convinced that driverless cars can negotiate the complex, chaotic real world. However, it is claimed that self-driving EVs will reduce congestion, since they can use road space more efficiently, while electronically hailed taxi variants can avoid the need for parking at destinations.

Some say that user mileage may increase with self-drive cars, and that we may even see autonomous cars left driving around instead of seeking a parking space. All of this would add to the air-pollution problems caused by the micro-particle dust released by rubber-wheeled vehicles on roads. As EVs are often heavier than conventional cars (with heavy batteries instead of petrol tanks), they may even make this worse. It is also worth noting that there may be materials limits to the current push to EVs, including the availability of key rare-earth elements and lithium for batteries. Nevertheless, EV and low-carbon vehicle technology is developing rapidly, with other new battery ideas emerging and new fuels being tested.

The bottom line, however, is whether battery-powered cars are the best use of green power. And a more radical question still is whether it is time to start moving away from cars altogether.

PennPET Explorer acquires first human images

PennPET team

Researchers at the University of Pennsylvania have published the first clinical images acquired using the prototype PennPET Explorer, the second of two large axial field-of-view (FOV) whole-body PET imagers developed by the US-based EXPLORER Consortium. The proof-of-concept studies showed that the PennPET Explorer can produce higher-quality images in shorter times and offer far greater versatility than state-of-the-art commercial PET scanners (J. Nucl. Med. 10.2967/jnumed.119.231845).

The PennPET Explorer is based on a digital silicon photomultiplier developed by Philips. The prototype configuration has three rings – each consisting of 18 detector modules – and an axial FOV of 64 cm. The scanner has a spatial resolution of 4.0 mm and time-of-flight resolution of 250 ps – leading to state-of-the-art imaging performance.
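To put the 250 ps figure in context, a time-of-flight measurement localizes each annihilation along its line of response to within Δx = cΔt/2. A quick back-of-envelope calculation (mine, not the paper’s):

```python
# Back-of-envelope: positional uncertainty along the line of response
# implied by a 250 ps coincidence timing resolution.

C = 3.0e8      # speed of light, m/s
dt = 250e-12   # timing resolution, s

dx = C * dt / 2   # factor of 2: the timing difference locates the midpoint
print(f"localization along the line of response: {dx * 100:.2f} cm")  # ~3.75 cm
```

Constraining each event to within a few centimetres along its line of response boosts the effective sensitivity of image reconstruction, which is part of what makes the scanner’s imaging performance state of the art.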

The Perelman School of Medicine researchers are continuing to develop the scanner’s design, increasing the number of rings from three to six, thereby expanding the axial FOV to 140 cm, as well as integrating an in-line CT scanner. They are basing their designs on established technologies and manufacturing procedures, to ensure that the PennPET Explorer can be developed into a commercial modality in the future.

PennPET Explorer

Extended axial FOV PET scanners offer many advantages over existing PET systems. The PennPET Explorer, with its high sensitivity of 55 kcps/MBq, enables dynamic whole-body imaging with high temporal resolution, as well as imaging with decreased radiotracer dose or shorter scan times, without compromising image quality.

Whole-body coverage also permits kinetic analysis of lesions beyond a standard axial FOV, ensuring the inclusion of large vascular structures for input functions. Finally, the scanner can image a wider range of isotopes, including longer-lived radiotracers that allow study of slower biological processes and applications such as cell tracking. These capabilities could expand PET utilization to study diseases and medical conditions not currently interrogated with PET.

Initial clinical tests

Senior author Joel Karp and colleagues conducted their first human studies with three groups of individuals to test various capabilities of the PennPET Explorer. Participants included five healthy volunteers, three clinical patients, and two research subjects participating in PET studies using investigational radiotracers.

Healthy volunteers underwent 18F-FDG scans with a commercial PET/CT scanner, followed by PennPET scans from head to abdomen. To simulate shorter scans, the researchers subsampled the scan data. Comparison of 16-min scans showed superior image quality in the PennPET scan, while a subsampled 2-min PennPET image showed quality comparable to, if not better than, the 16-min clinical scan.
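Such subsampling is possible because PET data are typically recorded in list mode, as a time-stamped stream of coincidence events, so a shorter scan can be emulated by keeping only the events from a chosen window. A schematic sketch – the event format here is invented for illustration and is not the Penn group’s actual pipeline:

```python
# Schematic: emulate a 2-minute scan from a 16-minute list-mode
# acquisition by keeping only events time-stamped in the first 2 min.

def subsample(events, window_s: float = 120.0):
    """Keep coincidence events whose timestamp falls inside the window."""
    return [e for e in events if e["t"] <= window_s]

# Hypothetical acquisition: one event every 5 s for 16 minutes
acquisition = [{"t": t, "lor": None} for t in range(0, 960, 5)]
short_scan = subsample(acquisition)
print(len(acquisition), "events ->", len(short_scan), "events")
```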

To demonstrate the potential for dynamic whole-body imaging, two healthy subjects received bolus injections of 18F-FDG during an hour of dynamic imaging on the PennPET Explorer. The researchers also performed delayed scans, up to 10 half-lives after 18F-FDG injection, to study late kinetics and the ability to image at low activity.
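Ten half-lives is an extreme delay for 18F, whose half-life is about 110 min: the activity falls by a factor of 2¹⁰, roughly 1000, some 18 hours after injection. A quick check:

```python
# Quick check: activity remaining after 10 half-lives of F-18.

HALF_LIFE_MIN = 109.8   # F-18 half-life, minutes
n = 10                  # number of half-lives

fraction = 0.5 ** n
print(f"elapsed: {n * HALF_LIFE_MIN / 60:.1f} h")   # ~18.3 h
print(f"remaining activity: {fraction:.2e}")        # ~9.77e-04
```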

Dynamic study

In a patient with metastatic colon cancer, 18F-FDG PET images acquired on the PennPET Explorer showed more accurate delineation of disease than those taken using a standard PET scanner. This included more conspicuous perihepatic disease and an FDG-avid lymph node only visible on the PennPET image.

The team also examined a patient with metastatic neuroendocrine cancer involved in a 68Ga-DOTATATE PET study. A PennPET image using one-fifth the activity of the clinical PET scan showed comparable diagnostic quality to the clinical image, suggesting that the PennPET Explorer could offer a more practical option for this high-cost radiotracer.

Of the two participants enrolled in separate research studies, one was imaged on the PennPET Explorer, from the vertex to the lower abdomen, 2 hr after injection of 18F-NOS, an imaging agent that targets inflammation. The PennPET-acquired images revealed unexpected ocular uptake, which was excluded from the FOV of the standard research scan.

Another study imaged 18F-FTP, an agent for the dopamine D3 receptor, in the upper abdomen after the patient ate a fatty meal to stimulate gallbladder emptying. Images showed mild gallbladder emptying over time, highlighting potential uses for the PennPET Explorer in dosimetry studies.

“Our clinical studies with the prototype PennPET Explorer demonstrated excellent image quality and potential for imaging with lower activity and shorter scan duration, as well as demonstrating the potential for very delayed imaging and the measurement of multi-organ kinetics,” wrote the authors.

Karp tells Physics World that the completed scanner is being moved to another imaging facility in the School of Medicine and will be operational in early 2020.

“Our focus will be on expanding the research PET programme here by leveraging its unique capabilities. We are planning to undertake many possible studies that can take advantage of the larger axial FOV,” he explains. “We will also be performing the performance and compliance tests to enable us to submit a 510(k) application to the FDA. FDA compliance will enable us to eventually perform a wider variety of clinical studies, including paediatric studies with our colleagues at the Children’s Hospital of Philadelphia.”

Supernovae and earthbound chemical explosions are surprisingly similar

Despite their enormous differences in size, type Ia supernovae and chemical explosions on Earth detonate via similar physical mechanisms – according to a team led by Alexei Poludnenko at Texas A&M University and the University of Connecticut. The researchers uncovered the similarity by comparing computer simulations of stellar explosions with observations of chemical detonations. Their discovery could provide new insights into the poorly understood processes behind supernova explosions.

A type Ia supernova (SNIa) is thought to occur when a white-dwarf star in a binary system acquires enough material from its companion star to reach a critical mass — at which point a runaway thermonuclear fusion reaction causes the star to explode. Beyond this basic description, however, a more complete understanding of the mechanics involved in SNIa explosions is lacking – and several different theoretical models of SNIa remain equally plausible.

Most theories of SNIa assume that the huge explosion is triggered by the formation of a supersonic detonation wave that causes the thermonuclear burning of all stellar material it encounters as it expands. Yet the processes that initiate detonation are poorly understood in unconfined systems like white-dwarf interiors. These systems are extremely difficult to describe using numerical models because they involve physics that occurs over a wide range of scales. Ultimately, this limits the predictive power of existing SNIa models.

Subsonic to supersonic

Poludnenko and colleagues sought to overcome this challenge by focussing on similarities between the thermonuclear combustion waves characteristic of SNIa, and the chemical combustion waves found in Earth-based explosions. In both cases, a transition occurs from deflagration (where combustion moves at subsonic speeds) to detonation (combustion moving at supersonic speeds).

This transition is believed to be driven by high-intensity turbulence in both chemical explosions and SNIa. To study the similarities, the team first developed a theory of how turbulence drives the transition from deflagration to detonation, which they then confirmed using experimental data from hydrogen explosions. Then they used the theory to develop a computer simulation of how the same process would occur in an SNIa – and looked for parallels between the stellar and chemical explosions.

Their study has provided important insights into how thermonuclear combustion waves in SNIa make the transition to detonation. Ultimately, this could make it far easier for astrophysicists to predict how the dynamics of the stellar explosions will unfold over time.

The research is described in Science.

Committee urges Ireland to join CERN particle-physics lab

Ireland should “immediately” begin negotiations to become an associate member of the CERN particle-physics laboratory near Geneva. That is according to a report by a cross-party Irish parliamentary committee, which concludes that Ireland’s absence from CERN could impact its ability to attract high-tech companies and goes against the country’s claim to be a “knowledge economy”.

CERN currently has 23 member states and eight nations that are associate members. Ireland, however, is one of only three European countries that do not have a formal agreement with CERN. Becoming a member allows a participating country’s scientists to have access to formal training and become CERN staff members, while giving local companies access to CERN contracts. CERN membership is also thought to increase collaboration with other nations and boost the number of students studying science and engineering subjects in the country.

We have made a cross-party view — now the government needs to come on board

James Lawless

About 20 Irish companies have contracts with CERN, but the lab prioritizes companies from its member countries, which, according to the report, places Irish businesses at a competitive disadvantage. Experts told the committee that membership would also increase Ireland’s ability to secure funding from the European Union, with projects supported by the Horizon 2020 programme that were linked to CERN garnering a 35% success rate — well above the average of 12%.

Joining forces

In 2014, the Institute of Physics (IOP) in Ireland published a report – The Case for Irish Membership of the European Laboratory for Particle Physics – CERN – in which it recommended full membership, but noted that associate membership would allow Ireland to first evaluate the returns before making a decision about full membership. Full membership would cost Ireland around €12.5m per year, but associate membership could be obtained for 10% of that cost (around €1.25m per year). The latest report agrees with the IOP conclusion, noting that Ireland should first become an associate member and then, after three years, carry out a full cost-benefit analysis to assess whether to become a full member.

Ronan McNulty, a particle physicist from University College Dublin who gave evidence to the committee, describes the report as pragmatic, sensible and cautious. “We fully endorse it and we hope the minister will act upon it now,” he says, adding that he has “been plugging away and trying to make this happen for 30 years”.

Indeed, Ireland’s five-year R&D strategy – Innovation 2020 – published in 2015, identified four international research bodies that Ireland would benefit from joining. These were CERN; the European Southern Observatory; ELIXIR, which manages data from life-sciences research; and the LOFAR radio telescope network. Ireland has joined them all, bar CERN.

At the launch of the report on 13 November, parliamentary deputy and committee member James Lawless praised the academic community for making such a clear-cut case for CERN membership. “We have a final step to climb,” he added. “We have made a cross-party view — now the government needs to come on board.”

Physicist Emmanuel Tsesmelis, who is head of relations with associate members and non-member states at CERN, says that the lab is “pleased to see the positive recommendations” in the report and that it “stands by to provide any assistance required for the next actions”.

Quantum-technology programmes in UK, China and Russia are described by top physicists

Quantum-technology initiatives in Russia, China and the UK are the subjects of three different articles in the journal Quantum Science and Technology, which has put together a special Focus on Quantum Science and Technology Initiatives Around the World.

The UK report is by Peter Knight and Ian Walmsley of Imperial College London and describes how more than £1bn in government and industry funding has been committed to developing quantum technologies in the period 2014-2024. The UK’s National Quantum Technology Programme and its research and skills hubs at universities throughout the country are also described.

Russia’s efforts to develop quantum technologies are described in a report by 14 physicists at the Russian Quantum Center in Moscow and several other institutes. The authors point out that the country is in the process of adopting a five-year Russian Quantum Technologies Roadmap as part of the nation’s Digital Economy National Program. The final budget for the quantum technologies portion of this initiative is expected to be about €1bn.

A report written by five physicists at the University of Science and Technology of China, including Jian-Wei Pan, points out that China has spent nearly $1bn on quantum-technology research and development over the past decade. This includes the Micius quantum-communications satellite, which was launched in 2016.

The three reports follow on from reviews of the state of quantum technologies in Canada, the EU, Japan, the US and Australia, which were published earlier this year.

Elevator pitches: how to get the most out of those awkward networking events

Networking event

I was recently at a networking event, balancing a glass of wine in one hand and a plate of mini samosas and onion bhajis in the other, when someone came up to talk to me. Initially, they sounded impressive but then spent the next five minutes droning on entirely about themselves, barely pausing for breath. Rather than ask any questions myself, which would have been polite, I simply exchanged contact details and escaped by pretending my drink needed a top-up. My fellow guest had clearly forgotten the old adage of talking and listening using the same proportions as the one mouth and two ears we’re born with.

The incident got me thinking about the merits of concise communication and why the “elevator pitch” – a pithy, 30-second summary to sell yourself or your product – is so important in many social and business situations. Interestingly, the term elevator pitch originates not from the brief time you spend standing next to a potential customer in an office lift, but from the first demonstration of an elevator with a safety brake system. It dates back to 1852, when the American inventor Elisha Otis created a locking system that would catch and secure a plummeting elevator whose hoisting ropes had failed.

Unable to find much interest in his innovation, Otis organized a demonstration at the 1853 World’s Fair in New York. Standing in an elevator, he instructed an assistant to sever the hoisting ropes and, to the audience’s astonishment, the lift didn’t plummet. Instead, it moved barely a few centimetres before the safety brake engaged. Otis’ innovation paved the way for humans to ride in elevators without fear and, whether or not the story’s true, it’s one to remember if you ever really are stuck in a lift and can’t think how to break an embarrassing silence.

Networking events should be about finding the right people to talk to – and that’s why you need a good elevator pitch

But back to those networking events. To me, they aren’t about finding someone to simply pass the time with – they should be about finding the right people to talk to. If you’re lucky, you might strike gold with the first person you meet. More often than not, though, you’ll have to move around the room to find someone useful to you. And that’s why you need a good elevator pitch. You need to be able to deliver a short, positive and punchy introduction to yourself and then ask your guest about themselves.

If they respond with a similarly punchy elevator pitch, you’ll soon know if you have – or haven’t – got something of mutual interest to discuss. But even if the person’s pitch doesn’t appear to be of interest, their work may become relevant in the future, so I always share contact details; you never know where life will take you. So swap business cards, say it was good to meet, and thank them for their time. Networking events can be awkward, but I find this strategy works well.

Pitch perfect

An elevator pitch is a bit like an advert or the cover of a book: it should help you decide whether to buy the product, not to tell you everything that’s in it. Like any form of marketing, a good elevator pitch will help you turn people into potential customers of you or your product. And if that seems all a bit grubby for you as a physicist, remember that it’s what you do already when writing your CV. It’s essentially a mini-elevator pitch.

You might think a CV is all about dates and facts, making it hard to get wrong. However, I was recently talking to a friend who’s done some amazing things in his life and was surprised to hear he was struggling to get an interview. When I looked at his CV, I immediately saw the problem. It was five pages long, printed in eight-point font and basically dull. After I helped my friend focus on his desired role, the CV was trimmed to a page. Remember, a CV is designed to get a job interview – not the job itself – so don’t turn it into a book.

Selling yourself isn’t hard; you just need a little preparation. Similarly, if you’ve got just a minute to pitch a business idea to an investor or a product to a customer, start by defining the problem you’re trying to solve. If your product or service doesn’t solve a problem that potential customers have, you don’t have a viable business model.

Ideally, you should be able to describe the problem you’re solving in no more than a sentence or two. You’ll need to define exactly who has the problem you’re solving and give an estimate of the size of your target market. You’ll also have to describe the competition. Every business or technology has competitors and it’s naive to ignore them or pretend they don’t exist. When cars were new, the competition wasn’t other cars but horses. Car manufacturers had to explain why combustion engines were better than just giving customers faster horses.

With the problem spelled out, now you can describe your solution. You don’t need to explain every aspect of your technology – if someone’s really interested, they’ll ask. And think about what’s next. Are you looking for financial investment? Or are you just after help? Tell your new contact what they need to know to secure you that next meeting or introduction.

Even if you’re in academia, a good elevator pitch is vital to explaining what you do. It’s the thinking behind the Institute of Physics’ Three Minute Wonder competition, which challenges researchers or project team members in the UK and Ireland to explain their work in three minutes flat. Once you’ve got a good elevator pitch down to a tee, those networking events might not seem so bad after all – and the only thing you’ll have to worry about is not dropping your onion bhajis.

AI dermatology tool needs more diverse skin types in its training datasets

An artificial intelligence (AI)-powered dermatology algorithm that can identify a range of skin conditions doesn’t work effectively on black skin. That’s the finding of researchers in Uganda and Sweden, who tested the software on adults in Uganda. The study highlights the potential risks of AI-based healthcare and the need to ensure diversity in datasets used to train algorithms, the authors explain (bioRxiv 10.1101/826057).

Skin Image Search is an AI app that helps people identify skin conditions. The user uploads a photograph of the problem area and the app suggests the three most likely skin diseases. The latest version of the AI algorithm, released in June this year, has an accuracy of about 80% for the top three suggestions, while its top suggestion is around 45% accurate, according to First Derm, the company that developed the platform. The company also runs a telemedicine service, where uploaded images are assessed by a qualified dermatologist.

The convolutional neural network that powers the AI-based app was trained using more than 300,000 photographs of skin diseases collected by this service, ranging from inflammatory conditions, such as acne and psoriasis, to skin cancers.

Founder and CEO of First Derm, Alexander Börve, tells Physics World that he reached out to his co-authors on the paper for their help to conduct the research as he knew that images of black skin only make up about 5–10% of the company’s database. According to Börve, the majority of images in the database (70%) are from the US, 15% are from the UK and 5% are from Sweden, with the rest coming from all over the world.

In Uganda, researchers at The Medical Concierge Group tested the app on 123 photographs of skin diseases collected by a local telemedicine company. All images were from adults with Fitzpatrick skin type 6 (dark brown to black), 62% of whom were female and 38% male.

Skin Image Search

The team used an older version of the app, launched in April 2018, that suggests five – rather than three – possible skin conditions, with an accuracy of 70%. On the Uganda data, however, the AI was only able to place the correct skin condition in its top five suggestions for 17% of the images. It failed to return any correct suggestions for 102 of the 123 photographs.
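The figures quoted here are top-k accuracies: a case counts as a hit if the true diagnosis appears anywhere among the model’s k highest-ranked suggestions. A minimal sketch of the metric, using made-up labels rather than the study’s data:

```python
# Minimal top-k accuracy sketch. A case counts as a hit when the true
# diagnosis appears among the model's k top-ranked suggestions.

def top_k_accuracy(truths, suggestions, k: int = 5) -> float:
    hits = sum(t in s[:k] for t, s in zip(truths, suggestions))
    return hits / len(truths)

# Illustrative example: 2 hits out of 3 cases -> 0.67
truths = ["dermatitis", "tinea", "psoriasis"]
suggestions = [
    ["dermatitis", "eczema", "acne", "tinea", "urticaria"],
    ["alopecia", "psoriasis", "vitiligo", "acne", "eczema"],  # miss
    ["psoriasis", "dermatitis", "eczema", "acne", "tinea"],
]
print(f"top-5 accuracy: {top_k_accuracy(truths, suggestions):.2f}")
```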

There was also a marked difference in performance across skin diseases. The app worked well for dermatitis, identifying it as the most likely condition – giving it the top spot in its suggestions – with an accuracy of 80%. But while fungal conditions were the most common in the images, the software failed to identify any of them, achieving an accuracy of 0% for its top five suggestions. The study authors also note that the app performed slightly better on females than males.

Börve says that the research will serve as a benchmark for the company, so that it can make sure that it improves with future updates. Going forward, he says that the company will be working to increase the diversity of images that the AI is trained on. “Basically, when we train these neural networks, we need at least 500 images per skin disease and when it comes to black skin, we also need 500 images of that skin disease and that skin type,” he explains.

Disease presentation is also an issue, Börve says. In Africa, due to issues with healthcare access, patients are often first seen by a doctor when their problem is at a more advanced stage than in Europe and the US. This means that skin diseases may show a different, more advanced presentation than the AI has been trained on.

Börve says that this could be part of the problem with fungal diseases. As fungal conditions progress, he explains, they can impact hair growth and cause the skin to become white, and the AI struggles with this, misidentifying conditions as hair loss or psoriasis.

“This is the problem with artificial intelligence: with the development of these systems, you need correct data and you need a lot of data representing the different diseases,” Börve says.
