Studies suggest that human error is responsible for over 90% of the 1.25 million deaths that occur each year globally due to car accidents. Therefore, improving driver safety is one of the biggest incentives for increasing the autonomy of vehicles. But this brave new world of autonomous driving is not without its own risks – as Andrew Glester discovers in the August episode of the Physics World Stories podcast.
To learn about how automated features can reduce human error, Glester catches up with Siddartha Khastgir, the head of Verification & Validation of Connected and Autonomous Vehicles at WMG, University of Warwick, UK. Khastgir describes the form that fully automated vehicles might take, and explains why it is a myth that these vehicles could provide absolute safety without human intervention.
Cars today already have a degree of autonomy, such as parking-assist systems that use ultrasonic sensors. This autonomy is increasing every year, as sensors and other hardware can monitor a car’s state and create dynamic maps of its surroundings. But these systems bring a new threat – opportunities for hackers to access cars remotely. To learn about these emerging risks, Glester speaks with Simon Parkinson, a computer scientist who leads the Centre for Cyber Security at the University of Huddersfield, UK.
Find out more about the cyber threat posed to autonomous vehicles in this feature by Stephen Ornes, originally published in the August edition of Physics World, a special issue on the physics of cars.
The first robust computational model of so-called “strange” metals has revealed that they are in fact a new state of matter. The model, developed by physicists from Cornell University and the Flatiron Institute, both in New York, could advance our understanding of other correlated quantum materials, such as high-temperature superconductors and even black holes.
Strange metals get their name from the peculiar behaviour of their electrons. Unlike electrons in ordinary metals, which travel freely with few interactions and little resistance, electrons in strange metals move sluggishly and in a restricted fashion. They also dissipate energy at the fastest possible rate allowed by the fundamental laws of quantum mechanics. In this sense, strange metals lie somewhere between metals and insulators, which have strongly-interacting electrons that occupy fixed positions.
But their strangeness doesn’t end there. For more than 30 years, researchers have puzzled over the fact that strange metals can become superconductors when cooled below a certain (relatively high) critical temperature. While models of this behaviour have existed for a while, they could not be accurately solved because the electrons are entangled, meaning that they cannot be treated as individual particles. What is more, any real material will have an enormous number of electrons, making exact solutions impossible.
Transition between an ordinary metal and an insulator
A team of researchers led by Eun-Ah Kim of Cornell’s Department of Physics has now solved this problem using a combination of two methods. First, they used a “quantum embedding” technique based on an approach developed in the 1990s by team member Antoine Georges of the Flatiron Institute’s Center for Computational Quantum Physics (CCQ). Here, the researchers performed detailed calculations on a few atoms in the system, rather than on the whole quantum system. They then used a quantum Monte Carlo algorithm, which relies on random sampling of the atoms, to solve the model of strange metals down to absolute zero.
A key feature of the resulting model, Kim explains, is that when the electrons’ kinetic energy is low, and their position therefore becomes more fixed, the system enters a so-called spin glass insulator state. In this state, random interactions between electron spins do not allow individual spins to point in the same direction as the spins of their neighbours, leading to “frustration” and a random alignment of spins.
Conversely, when kinetic energy is high, the electrons move freely, and the system enters a weakly-interacting state known as a Fermi liquid. In this state, large numbers of electrons can be modelled as non-interacting quasiparticles, each with an effective mass that exceeds that of a free electron. The Fermi-liquid model is good at predicting the properties of conventional metals – including the fact that, at low temperatures, the resistance of a conventional metal scales with the square of its temperature.
Kim points out that this scaling does not apply to strange metals. Instead, their electrical resistance depends linearly on temperature, with the Planck and Boltzmann constants appearing as scaling factors. For this reason, strange metals are sometimes known as non-Fermi liquids or, alternatively, as Planckian metals.
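The “fastest possible dissipation” mentioned earlier corresponds to the standard Planckian scattering time τ = ħ/(k_B T), which is where the Planck and Boltzmann constants enter as scaling factors. A quick back-of-envelope calculation (this is the textbook definition, not a number from the paper itself):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def planckian_time(temperature_k: float) -> float:
    """Shortest scattering time allowed by the Planckian bound, tau = hbar / (k_B * T)."""
    return HBAR / (K_B * temperature_k)

# At room temperature the Planckian timescale is a few tens of femtoseconds:
print(planckian_time(300))  # ~2.5e-14 s
```

The linear-in-temperature resistance follows because the scattering rate 1/τ, and hence the resistivity, grows in direct proportion to T.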
A “reluctant” metal
By adjusting the ratio between the kinetic energy and interaction energy of the electrons in their model, the researchers were able to drive their model system to the verge of a transition between an ordinary metal and an interaction-driven insulator. At this point, strange metals appeared as a new state of matter, bordering these two known phases.
“We found there is a whole region in the phase space that is exhibiting Planckian behaviour rather than belonging to either of the two phases that we’re transitioning between,” Kim says. In this quantum spin liquid state, the electrons are not completely locked down, but they are also not completely free either, she adds – making the material metallic, but only “reluctantly” so.
The new result, detailed in PNAS, could shed fresh light on the physics of high-temperature superconductors, which are related to Planckian metals in that their electrical resistance does not vary as expected with temperature either. More surprisingly, it could also have links to astrophysics. Like strange metals, black holes have properties (such as the length of time a black hole “resonates” after merging with another black hole) that depend only on temperature and the Planck and Boltzmann constants. “The fact that you find this same scaling across all these different systems is fascinating,” adds team member Olivier Parcollet of the CCQ.
Late one night I found myself on a Pinterest board trying to understand the appeal of isomalt for candy-making and edible cake decorations.
Why? A Twitter post announcing the latest research out of an interdisciplinary collaboration had caught my attention: complex networks of blood vessels can be 3D printed using isomalt, a sugar substitute usually used for making candies and decorative edible sculptures. Such work would open avenues for 3D printing personalized organs, reducing the human organ transplant shortage.
“Each year, thousands of patients die simply waiting for donor organs to become available,” says Jordan Miller, assistant professor at Rice University and the principal investigator leading the study. “These critical medical challenges motivate us every day to apply engineering principles to biology to build tissues and organs that can one day solve this challenge. Our own field of biomanufacturing, though still in its infancy, is rapidly gaining capabilities we couldn’t imagine a decade ago.”
Simply put, the researchers wanted to develop complicated vascular systems that can sustain engineered tissues and organs. And as it turns out, as the researchers explain in their Nature Biomedical Engineering paper, this is a really complex problem.
3D printing the lifeblood of tissues
The role of blood vessels is deceptively simple: to deliver oxygen and nutrients to tissues and carry away some waste. But blood vessels are an intricate network of incredibly extensive branching – they even grow in response to certain signals – all of which researchers must consider when 3D printing functioning tissues.
Conventional 3D printing methods aren’t suitable for replicating these delicate vascular webs. Take extrusion 3D printing, for example, in which structures are formed by melting strands of material that flow through a nozzle. Similar to the balsawood bridge you built in physics class, blood vessels created with this method will deform or collapse under their own weight unless supported by more 3D-printed material. Vessel networks that can be made with extrusion printing are therefore limited.
As if this wasn’t enough, 3D-printed blood vessels also need to keep engineered tissues alive. Any 3D-printed tissue will need to be chock-full of living cells to function, and vessels need to deliver oxygen and nutrients to the furthest reaches of engineered tissue. Because extrusion 3D printing cannot easily create dense networks of blood vessels, this task is also difficult.
Such engineering challenges required the researchers to rethink the way in which they 3D print tissues and blood vessels.
Printing with sugar
The researchers realized that they could apply a 3D printing technique already widely used for polymers and metals to build 3D-printed blood vessels that overcome these restrictions. The technique, called selective laser-sintering, uses a focused laser to melt and fuse small powder grains into pre-defined shapes. They found that these powdered grains could be sugars, and after much testing, they 3D printed a sugar template with extensive branching and unsupported overhangs using an isomalt mixture. The team collaborated with the Nervous System design studio to create the blood vessel network shape designs.
A sample of blood vessel templates 3D printed using a special blend of powdered sugars. (Courtesy: Brandon Martin/Rice University)
Next, they created a tissue by filling the space around the 3D-printed sugar template with a cell-infused gel. When this cell-infused gel solidified, the researchers dissolved the template using water and flushed it away, leaving empty channels primed to deliver oxygen and nutrients to cells.
It takes only five minutes to generate vascular tissues using this approach, compared with hours or days for conventional 3D printing techniques. This gives researchers time to send liquids containing oxygen and nutrients through the vessels and learn more about how cells function when supplied by these vessels.
Keeping cells alive with laser-welded sugar channels
“A major focus for this paper was to build spatial ‘maps’ of how cells function based on their position relative to the 3D-printed vascular network, to quantify how effective or efficient a particular vascular design is and better replicate what we see in the body,” says lead author Ian Kinstlinger, a graduate student in Miller’s lab at Rice University.
Co-authors Sarah Saxton and Ian Kinstlinger successfully begin their studies of how well 3D-printed blood vessels sustain cells. (Courtesy: Kelly Stevens/University of Washington, Seattle)
The researchers’ 3D printing technique, developed in collaboration with researchers at the University of Washington, Seattle, allowed them to control the engineered tissue environment, including tissue composition, cell types and blood flow, to better understand how cells retain their function over time. They demonstrated that their 3D-printed blood vessels could keep large and dense engineered liver tissues alive outside the body for at least a week.
What’s next for the tech
The researchers had to overcome numerous hardware and software engineering challenges for this work to become reality.
“The sugar materials presented major fabrication challenges and required creative solutions, like developing a new kind of powder dispenser and laser control system,” says Kinstlinger. “There were virtually no blueprints for do-it-yourself, open-source selectively laser-sintered systems, and nobody had successfully shown selective laser-sintering with sugars.”
Currently, the researchers are working on a high-precision laser control system that will help them 3D print even smaller vessels (they can currently print vessels around 300 microns in size) so that vessels reach every piece of engineered tissue and keep cells alive.
There’s still much to learn from a biology perspective, including how various types of cells respond to environments engineered with this method.
And, armed with advances in biology research and their 3D printing technique, the researchers hope to extend the size and lifetime of engineered tissues and deploy the technology more widely. The 3D printing technique might also be useful to create medical imaging test devices like tissue phantoms, for example.
“Our long-term vision is an end to the donor organ shortage and replacement of damaged organs with new manufactured ones that are perfectly matched to each patient in need,” says Miller. “I am hopeful that efforts like ours will yield manufactured organs for human patients within my lifetime.”
As for me? I need to pay an online visit to a local bakery and partake in some sweet treats…for science’s sake.
We can’t see them. In fact, we often can’t hear them either. But mechanical vibrations, whether sound or ultrasound, play a key role in modern cars. They’re perhaps most familiar in the form of those ultrasonic sensors that safely guide us into and out of tight spaces when we park or drive off. But more importantly, the vast majority of internal-combustion engines that power today’s vehicles use an adaptation of an ultrasonic transducer invented by the French physicist Paul Langevin in 1916.
Over the last 30 years about 1.5 billion road vehicles have been fitted with fuel-injection systems, of which roughly a third contain the Langevin-inspired devices. Relying on the tiny mechanical vibrations of piezoelectric crystals, they inject precise, computer-controlled amounts of fuel into an engine’s cylinders. Indeed, fuel-injection systems have been so successful that cars, lorries and trucks are no longer the polluting monsters they were 50 years ago. Today’s vehicles give off practically no lead, black soot or hazardous un-burnt petrol fumes, though they still emit lots of climate-warming carbon dioxide (CO2).
As a physicist, what fascinates me about fuel-injection systems is that they rely on piezoelectric vibrations, in this case at frequencies of 1–5 kHz. That’s well within the range of human hearing, which extends up to about 20 kHz. Above that limit lies “ultrasound”, though the precise upper frequency you can hear depends on your age and the quality of your ears. But in my profession, whether a vibration is audible sound or ultrasound is mostly immaterial: the physics is all the same. With that in mind, let’s begin our sonic car journey with a technology that’s vital before you’ve even started driving.
Sensing danger
Look closely at most modern cars and you’ll see a few small, round discs about 20 mm in diameter set into the front and rear bumpers. Hundreds of millions of vehicles have been fitted with these electro-mechanical transducers since they first entered the market in the mid-2000s. Emitting and receiving ultrasonic pulses, they’re echo-locaters, acting not only like the vocal cords of bats but also like their ears.
Pulses of ultrasound travel away from the parking transducer until they bounce off a nearby object. Some of the reflected energy returns to the transducer where it is converted into electrical signals and then sent for signal processing. The time, Δt, for a pulse to make this round trip is 2L/c where c is the speed of sound in air (roughly 330 m/s) and L is the distance to the object. With electronic digital timers able to make timing measurements of this kind very accurately (Δt ~ 300–30,000 μs), ultrasound sensors are perfect for measuring distances of a metre or two.
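The round-trip relationship above is simple enough to sketch in a few lines of Python (a toy illustration, using the article’s round figure of 330 m/s for the speed of sound):

```python
SPEED_OF_SOUND = 330.0  # m/s in air, as quoted in the article

def distance_from_echo(delta_t_us: float) -> float:
    """Convert a round-trip echo time (in microseconds) to a distance in metres.

    The pulse travels out to the object and back, so the one-way
    distance is L = c * delta_t / 2.
    """
    delta_t_s = delta_t_us * 1e-6
    return SPEED_OF_SOUND * delta_t_s / 2

# The article's quoted timing range of ~300-30,000 microseconds
# corresponds to distances of roughly 5 cm to 5 m:
print(distance_from_echo(300))    # ~0.05 m
print(distance_from_echo(30000))  # ~4.95 m
```

Inverting the formula like this is exactly what the sensor’s signal-processing electronics do, just with a hardware timer instead of a floating-point division.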
Simple and cheap to make, parking transducers are structured a bit like a sandwich (figure 1). Metal, usually aluminium, is first extruded to form a flat-bottomed cup with walls about 1 mm thick. A flat piezoelectric disc made from polarized lead zirconate titanate is then glued to the inside base of the cup. About 1 mm thick and 1 cm in diameter, the disc has two metal electrodes that have been evaporated onto its two flat surfaces.
The open end of the cup is sealed with a polymer layer that has two metal pins electrically wired to the two electrodes. Applying an electrical voltage to the transducer’s pins deforms the piezoelectric disc, which expands and contracts as it follows the electrical drive voltage (usually a short burst of sine waves). But there is also a smaller, radial deformation that depends on the material’s specific piezoelectric properties. With the disc glued to the inside of the device, the cup’s outer flat surface therefore buckles, creating ultrasonic vibrations at a frequency of about 40 kHz.
1 Parking sensors: the usefulness of ultrasound Ultrasonic parking sensors consist of a metal cup with a piezoelectric disc in the base. Known as a “sandwich transducer”, the structure is shown here with one part cut out for clarity and without its electric pins or the polymer sealant that covers the top. The colours, which are the results of a finite-element computer model, show the displacement of different parts of the structure, ranging from dark blue (zero displacement) to brown (maximum displacement of 10⁻¹³ m) at a resonating frequency of 39,006 Hz. The piezoelectric disc, which moves up and down like a piston, has the largest displacement; its zero-displacement position is given by the black wireframe pie. (Courtesy: David Andrews/Cambridge Ultrasonics; iStock/Chesky_W)
Sandwich transducers have many resonating frequencies, corresponding to different modes of vibration, just like the skin of a drum. Manufacturers, however, like to tune the frequency of the driving voltage so that the entire, flat surface of the transducer oscillates in phase, moving up and down like a piston. This vibration maximizes the electro-acoustic coupling, creating waves that are in phase over a wide area. In fact, because their wavelength (8 mm) roughly equals the diameter of the piezoelectric disc (10 mm), the sound waves propagate as a hemisphere with a radius expanding at the speed of sound.
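The wavelength quoted above follows directly from λ = c/f; a one-line check using the article’s round numbers:

```python
def wavelength_m(speed: float, frequency: float) -> float:
    """Acoustic wavelength: lambda = c / f."""
    return speed / frequency

# At the ~40 kHz drive frequency, with c ~ 330 m/s in air,
# the wavelength is about 8 mm, comparable to the 10 mm disc:
print(wavelength_m(330.0, 40e3) * 1e3)  # ~8.25 mm
```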
Any piezoelectric transducer can also work in reverse, generating an electric charge when deformed by an incoming ultrasonic wave. Obviously, the transducer cannot transmit and receive ultrasound waves at the same time, so it receives waves only when the drive voltage is switched off (there is no timing problem because the echo arrives many microseconds later). In fact, the plane surface of the transducer is spatially matched to receive planar echo-waves, which the outgoing spherical waves will have become by the time they return.
Sandwich transducers are incredibly common as ultrasonic parking sensors, with 10 or so fitted to a typical car. They have a maximum range of about 5 m – perfect for parking – while their sealed construction means they can withstand rain and much of the grime that accumulates on road vehicles. However, their range is too short to detect other vehicles moving at speed, which is why microwave sensors or cameras are used instead for collision-avoidance purposes.
One unexpected benefit of ultrasonic sandwich transducers is that they can ward off rodents, which love nothing better than climbing into car engine bays and gnawing at tasty electrical cables and rubber hoses. Rats can hear ultrasound up to 60 kHz, while mice have ears that operate up to 100 kHz, making ultrasound transducers installed under your bonnet the ideal, low-cost rodent-scarer. The animals hear the sound, but we don’t. And you don’t even need to measure the return signals that well: all you need is a loud, intermittent ultrasound and the rodents will leave your cherished vehicle alone.
Injection of interest
So let’s assume those ultrasound parking sensors have helped you negotiate your way out of that tight parking space and you’re ready to hit the road. To get moving, you press your foot down on the accelerator pedal, which sends a signal to the car to start burning more fuel. But for it to combust as efficiently and as effectively as possible, the fuel has to be mixed with just the right amount of air.
In an internal combustion engine, this was originally done with a carburettor – essentially a pipe through which air passes into the engine. Fuel enters from the side, with a “float valve” controlling how much gets in. Problem is, the valve has to be correctly oriented with respect to gravity. Turn it upside-down and it won’t work at all. Corner your car too fast and the float valve can be disturbed by strong inertial forces, potentially starving the engine of fuel and preventing it from generating mechanical power.
This drawback of carburettors applies to aircraft engines too, as Britain’s Royal Air Force found to its cost during the Second World War. Its Spitfire and Hurricane planes were fitted with Merlin engines, which had carburettors that were prone to cutting out when the pilots performed steep dives. The German Messerschmitt 109, in contrast, used one of the first fuel-injection systems and could perform dives and manoeuvres under full power. It was a superior fighting aircraft, though in the end Britain’s ability to build new planes faster than the Germans gave Britain the decisive military edge.
Since they entered the car market in the 1970s and 1980s, fuel-injectors have transformed how we drive. With a traditional carburettor, a classic four-stroke internal combustion engine works by the piston first moving down as air and fuel enters the cylinder. On the second stroke, the piston moves up to squeeze the mixture, which ignites (spontaneously in a diesel engine; with spark plugs in a petrol engine). The burning fuel then pushes the piston back down again. Finally, the piston moves back up and expels the burnt gases, each stroke taking half a revolution of the engine’s main crank-shaft.
But with individual fuel-injectors for each cylinder, the four strokes are slightly different. On the first, the piston moves down as air (rather than an air/fuel mixture) enters the cylinder. On the second stroke, the air is compressed and only now – towards the end of this stroke – is the fuel injected. Third, the mixture burns, pushing the piston down. Finally the piston moves back up as the gases are expelled. By injecting fuel towards the end of the second stroke, the engine becomes both more efficient and able to run at a lower temperature.
Burning advantages
Driven by increasingly stringent emissions regulations over the last 50 years, fuel-injection systems have now largely replaced carburettors in most road vehicles. Key to their operation are tiny piezoelectric transducers, which open and close the valve to let precise amounts of fuel into the engine’s cylinders. Compared to old-fashioned carburettor engines, these fuel-injection systems allow modern cars to run much more efficiently and with far less fuel for the same power output.
However, manufacturers can’t use the sandwich transducers found in the humble ultrasound parking sensor: they can’t generate displacements of more than 0.01 μm in air and are unable to work with liquid fuel, which is far harder to compress than air. Instead, most high-power ultrasonic transducers in fuel-injection systems are based on the Langevin transducer, which was invented during the First World War to help the French navy track down enemy submarines by detecting echoes of the ultrasound pulses the transducer itself emitted.
Rather than having a single piezoelectric disc as with the ultrasonic parking sensors, a modern Langevin transducer uses a stack of them sandwiched between two metal rods. Roughly 5 mm thick and 30 mm in diameter, each lead-zirconate-titanate disc has a cylindrical 10 mm hole in the centre, through which a threaded metal screw passes. The screw holds the assembly into a single mechanical unit, which vibrates by expanding and contracting along its length (figure 2).
2 Fuel-injection systems: powered by piezoelectrics Fuel-injection systems in cars contain an ultrasonic piezoelectric transducer, the vibrations from which open and close a valve, allowing precise, computer-controlled amounts of fuel to be pumped as droplets into the engine’s combustion chamber. The transducers are based on a design invented by the French physicist Paul Langevin during the First World War to spot echoes from submarines. a This cut-away diagram of a classic “Langevin transducer” has four ceramic piezoelectric discs (black rings) inside a metal rod (purple). The screw running through the centre holds the parts together and compresses the discs, preventing them from shattering. b This finite-element computer model shows the transducer in motion, with the displacement ranging from zero (blue) to a maximum (red). The central region is a node, while the two ends move axially in antiphase in a half-wave mode. c Fuel injectors in cars use an adaptation of this transducer, with many more piezoelectric discs (200 in this example) and much less rod. The discs are connected at the top to a stiff outer casing and at the bottom to a spring (here a polymer ring). As this finite-element computer model shows, the stack moves up and down like a piston with the biggest displacement (~30 nm in this model) at the end near the spring. (Courtesy: David Andrews/Cambridge Ultrasonics)
The Langevin transducer does two main things in a fuel-injection system. First, it actuates a “pintle valve” that allows fuel into a small chamber near to the nozzle at the tip of the injector. Second, it pumps the fuel out of the small chamber into the car engine’s cylinder at high pressure. In fact, the transducer is clamped at the end farthest from the injection nozzle by the outside body of the injector, meaning one end of the stack is stationary but the other end moves.
Manufacturers want to maximize the displacement at the tip so that fuel can be pumped into the engine as effectively as possible. We know that the displacement of the piezoelectric material is proportional to the strain, which depends largely on the applied electric field, E, across each disc. So to maximize the displacement we need to maximize E = V/d, where V is the voltage across each disc and d is the thickness of the disc. The best solution is to use lots of thin discs rather than a single fat disc, even if the stack has the same overall thickness. The multiple thin discs will give the same total deformation but with a fraction of the voltage, which is handy as smaller voltages are easier to generate, safer to use and require thinner insulation.
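The thin-versus-fat-disc argument comes down to E = V/d; a short numerical sketch (the specific voltages and thicknesses below are illustrative, chosen to match the article’s figures of 0.5 mm discs driven at about 150 V, not a particular injector’s datasheet):

```python
def field_strength(voltage: float, thickness_m: float) -> float:
    """Electric field E = V / d across one piezoelectric disc, in V/m."""
    return voltage / thickness_m

# One hypothetical 10 mm disc would need 3000 V to reach the same
# field that twenty 0.5 mm discs achieve at just 150 V each:
single_fat = field_strength(3000.0, 10e-3)  # ~3.0e5 V/m
many_thin = field_strength(150.0, 0.5e-3)   # ~3.0e5 V/m

# Same field means the same strain per unit length, so the stack of
# thin discs reaches the same total deformation at 1/20th the voltage.
print(single_fat, many_thin)
```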
In practice, most fuel-injectors have between 20 and 100 piezoelectric discs in a stack, with each disc being 10–20 mm in diameter and just 0.5 mm thick. They can achieve displacements of about 10 μm with drive voltages of about 150 V and boast a relatively low mechanical “quality factor”, which means they do not resonate and you can get a single mechanical pulse rather than a packet of pulses as you would with a conventional Langevin stack.
In simple systems, cars use just one injector to fire fuel into the air-intake manifold, ahead of the air-intake valves for each cylinder, thereby directly replacing a carburettor. But many cars now have one injector per cylinder, each delivering a precisely optimized amount of fuel to each cylinder to suit the quantity of air received. Nearly all the fuel gets burnt in each power stroke of the piston – practically eliminating the carbon deposits that would build up were combustion imperfect – and ensuring almost no micro-particles of carbon leave the exhaust.
As for how quickly a fuel injector needs to work, an engine running at top speed turns at about 6000 revolutions per minute, which is 100 revolutions per second. The injector therefore has to open and close in less than half a cycle of the piston, which would be 5 ms in this example, though typical opening and closing times are usually 1 ms or less. Conventional Langevin transducers, in contrast, operate at 25–60 kHz and so have a period of oscillation of 15–40 μs, which is much less than the 1 ms needed for fuel injection.
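The timing arithmetic in that paragraph is easy to verify (a toy calculation, using the article’s assumption that the injector must act within about half a crank revolution):

```python
def max_injection_window_ms(rpm: float) -> float:
    """Half a crank revolution, in milliseconds, at a given engine speed.

    The injector must open and close within roughly half a cycle
    of the piston, as described in the article.
    """
    revs_per_second = rpm / 60.0
    period_ms = 1000.0 / revs_per_second
    return period_ms / 2.0

# At a 6000 rpm redline the window is 5 ms; typical injectors open
# and close in 1 ms or less, comfortably inside that limit.
print(max_injection_window_ms(6000))  # 5.0
```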
One final trick of a Langevin transducer is that it converts incoming low-pressure fuel into liquid at high pressure. This not only overcomes the compression pressure in the cylinder but also turns the fuel into a mist of fine droplets roughly 10 μm in diameter. Much more of the fuel surface is therefore exposed to air, allowing combustion to be more fully completed. The Langevin design does this thanks to its vibrations having a high “acoustic impedance” (density × wave speed).
Fuel for thought
When it comes to delivering fuel to your car engine, fuel injectors can adjust the precise amount in three ways. First, they control how long the injector is open. This method is more effective when an engine is running faster (at low speeds, where strong acceleration is needed, the fuel-pumping action cannot sustain a high pressure of fuel for the entire time the injector valve is open). Second, fuel injectors can control the voltage to the piezoelectric stack of discs (the peak voltage is about 150 V but lower values will narrow the nozzle’s aperture, thereby restricting the quantity of fuel entering the cylinder). Finally, fuel injectors can adjust fuel levels by repeatedly pulsing the voltage so the nozzle opens and closes repeatedly within one stroke of the piston.
As a pulsed system, fuel injectors therefore allow the modern internal combustion engine to be highly controlled. Indeed, some Formula 1 car manufacturers like to show how well they can do this by running their engines without a load on a test bed and programming them to play tunes such as “God Save the Queen”. The digital control of the engine, provided by the fuel-injectors, allows the engine speed – and hence the audible musical note heard from the engine – to be synthesized precisely. It’s a gimmick, but impressive nonetheless.
In reality, your car has its own computer system to calculate how much fuel to inject – and when. Performing simple tasks quickly, the “engine-control unit” collects data from several sensors on the engine and exhaust as well as the accelerator pedal, which it then compares with a table of values it holds to find what fuel-injector settings to use. Manufacturers can therefore precisely control their car engines to give drivers a choice of operation – such as economy, sport, race-track, engine longevity and low-pollution – or to meet specific pollution-monitoring tests.
Maximized performance Fuel injectors have made combustion engines more efficient and less polluting, as well as making the engines last longer. (Courtesy: iStock/fermate)
Clearly, some of these modes conflict with one another or are even mutually exclusive. Economy, for example, does not fit well with sport or race-track modes, while the desire to maximize performance while beating pollution-monitoring tests can also conflict, as one large car manufacturer (Volkswagen) has found to its cost. But we should not lose sight of the fact that fuel-injection systems have transformed motoring.
Apart from letting internal-combustion engines run more efficiently and more reliably, they use less fuel and cut pollution compared with carburettors. They also let engines last longer and – by lowering the amount of pollution from exhaust gases – extend the lifetime of catalytic converters. Fuel-injection systems have not, however, cut CO2 emissions from road vehicles and surely the future of road transport will involve burning hydrogen in internal combustion engines (producing water instead of CO2) or switching wholesale to electric vehicles.
Both changes would eliminate CO2 generation, provided that the hydrogen and electricity in each case can be supplied without emitting greenhouse gases of their own. In the meantime, however, we can thank the vibrations of tiny piezoelectric crystals for at least helping us to burn fuel better. In fact, more small-particle pollution comes from the action of car brakes and tyres than from the exhaust fumes of current internal-combustion engines – a testament to the effectiveness of catalytic converters and to modern fuel-injection systems. Those vibrations may be small, but their benefits are big.
Rapid progress is needed in negotiations regarding the UK’s participation in the EU’s major scientific framework programme if collaborations are to continue smoothly into 2021. That is according to a statement signed by more than 100 organizations and individuals representing the European scientific community. Together, they urge the EU and the UK to compromise on the UK’s participation in Horizon Europe – the successor to the €80bn Horizon 2020 programme.
The signatories say that while it is encouraging that both the EU and the UK have committed in principle to UK participation in the research programme, major sticking points remain. They add that with good faith, agreement on the terms of participation should be possible, but warn that time is running out. Anton Zensus, director at the Max Planck Institute for Radio Astronomy in Bonn, Germany, who signed the statement, told Physics World that there is real concern that the UK will lose its place in Horizon Europe. “There are only a few months left, and little progress toward an agreement has been reached so far,” he explains. “This risk is increasing with every day of unsuccessful negotiations.”
Horizon Europe is the Champions League of research and there is no doubt that UK football clubs will continue to play in the Champions League
Anton Zensus
The signatories argue that UK association with Horizon Europe, which runs from 2021 to 2027, should be “a core part of the future relationship between the EU and the UK for research, underpinning valuable scientific partnerships that have been built up over many years”. In the statement they suggest several solutions to the issues that are still hindering negotiations. One is that the UK needs to demonstrate its commitment by explicitly setting aside additional funding for full association with Horizon Europe in its science budget.
They also argue that the EU needs to introduce a two-way correction mechanism for balancing substantial disparities between contributions and receipts from the programme. Currently, the EU text includes a mechanism to ensure the UK cannot be a net beneficiary of programme funding but does not protect the UK from making contributions that exceed its financial return. The signatories claim that this is likely to create an imbalance that would be too much for the UK to reasonably pay. The UK also needs to accept the need for EU institutions to oversee the correct use of programme funds, while the EU should confirm that the UK can exploit research results as it wishes. Finally, the signatories call for both sides to agree reciprocal arrangements to support the mobility of researchers participating in Horizon Europe-funded work.
“Horizon Europe is the Champions League of research and there is no doubt that UK football clubs will continue to play in the Champions League,” Zensus told Physics World. “UK researchers have always benefited from the competition with their European counterparts. I would say the scientific benefits have always outweighed the financial ones and I predict this will also be the case in a future collaboration.”
Mentoring – providing support and guidance to someone less experienced to help their professional and personal development – can help progress the mentee’s studies, increase their confidence, and inspire and motivate them to reach their career goals. In medical physics, where trainees must learn to work, research and interact with other professionals in a complex healthcare environment, mentoring could prove particularly valuable.
As both a science and a health profession, medical physics has a specific set of challenges. It calls for more than just excellent scientific abilities, with leadership skills another key requirement for burgeoning medical physicists eyeing professional success in the hospital environment. With these challenges in mind, Kwan Hoong Ng from the University of Malaya helped set up a global mentoring programme, Medical Physics: Leadership & Mentoring, to provide young medical physicists with guidance and support and help them to develop leadership roles.
“I have always been interested in helping and encouraging young people in their studies, career and life in general. I would like to see them assume leadership one day,” Ng explains. “The programme brings in leaders and pioneers, both within and outside medical physics, as mentors. These are people who have contributed to advances in healthcare through the application of physics to research, education and clinical practice.”
Writing in Physica Medica, a group of early-career medical physics professionals describe their personal experiences of participating in this programme. The study investigated the impact of this experience on their lives and careers and evaluated the importance of mentoring for young clinical and academic medical physicists.
Expert guidance
The programme comprised one permanent mentor (Ng) and 16 mentees from countries in Latin America and Asia. The mentees included medical physicists, postgraduate students and an early-career researcher, most of whom were working in radio-diagnostics or radiotherapy. Ng notes that he was supported at the start by Robert Jeraj, Eva Bezak and Tomas Kron, while additional mentors from several countries and institutions also joined the group to take part in discussions, share experiences and provide advice about a professional career in medical physics.
Left to right: mentors Kwan Hoong Ng, Robert Jeraj, Eva Bezak and Tomas Kron.
To address the geographic constraints, the programme employed “e-mentoring”, with video meetings and online group conference calls used to establish relationships between mentors and mentees from different areas and countries. This approach is particularly useful for supporting young professionals in developing countries.
“In developing countries, medical physics is just starting to develop. There is a lack of experienced seniors to guide and mentor early-career medical physicists,” Ng explains. “Leadership qualities are not being introduced and emphasized in academic programmes. There is therefore an urgent and crucial need to mentor them to be leaders and contribute to clinical service and research.”
Analysing the outcome
To evaluate their experiences of participating in this global mentoring scheme, all mentees in the group completed an online survey. Most of the group (81.3%) reported that the programme had a positive impact on both their personal and professional lives.
One of the main activities involved invited speakers (the temporary mentors) sharing knowledge of their area of expertise or professional experiences. Here, the mentees preferred to hear talks about the mentors’ personal experiences and leadership advice, rather than medical physics techniques. One positive outcome of such interactions was that more than half of the mentees communicated with mentors outside of these group meetings. “I would like to see mentor and mentee develop a lifelong partnership,” Ng points out.
Most mentees said that taking part in the programme had improved their leadership skills. And all of them believed that, on some level, participation changed their behaviour in challenging times and helped them to make decisions about their career. Most also agreed that the mentors served as role models for their professional careers.
The mentees were also encouraged to focus on skills such as article writing and conference participation. Half of the participants published at least one manuscript, with the mentors’ help or involvement as co-authors, while about 45% gave at least one conference presentation. For example, an initiative of female mentees, guided by one mentor, wrote an article entitled “Women in physics: pioneers who inspire us”.
Overall, participants in the mentoring group had nothing but positive impressions, concluding that the programme was beneficial to their careers and personal growth and should be encouraged. They recommend that mentoring for leadership should be implemented as an extracurricular activity whenever possible.
“Leadership and mentoring of medical physicists is essential to produce future innovative leaders,” says Ng. “This programme also promotes gender balance and equality, collaboration and intercultural understanding among the mentees. These are important features. The human values are also much appreciated by the mentees.”
As such, the team plans to expand and diversify the scheme with the participation of medical physicists from other continents, such as Africa and Europe. “This programme is gaining traction and we are getting more people to join us,” Ng tells Physics World.
A system that reconstructs and classifies acoustic images with far smaller features than the wavelength of sound they emit has been developed by Bakhtiyar Orazbayev and Romain Fleury at the Swiss Federal Institute of Technology in Lausanne. Their technique beats the diffraction limit by combining a metamaterial lens with machine learning and could be adapted to work with light. The research could lead to new advances in image analysis and object classification, particularly in biomedical imaging.
The diffraction limit is a fundamental constraint on using light or sound waves to image tiny objects. If the separation between two features is smaller than about half the wavelength of the light or sound used, then the features cannot be resolved using conventional techniques.
One way of beating the diffraction limit is to use near-field waves that only propagate very short distances from an illuminated object and carry subwavelength spatial information. Metamaterial lenses have been used to amplify near-field waves, allowing them to propagate over longer distances in imaging systems. However, this approach is prone to heavy losses, resulting in noisy final images.
Hidden structures
In their study, Orazbayev and Fleury improved on this metamaterial technique by combining it with machine learning – through which neural networks can be trained to discover intricate hidden structures within large, complex datasets.
The duo first trained a neural network with a database of 70,000 variations of the digits 0–9. They then displayed the digits acoustically using an 8 × 8 array of loudspeakers measuring 25 cm across, with the amplitude of each speaker representing the brightness of one pixel. In addition, they placed a lossy metamaterial lens in front of the speaker array, consisting of a cluster of sub-wavelength plastic spheres (see figure). These structures acted as resonant cavities, which coupled to the decaying near-field waves to amplify them over long distances.
With an array of microphones placed several metres away, Orazbayev and Fleury picked up the amplitudes and phases of the resulting waves, which they fed into two separate variations of their neural network: one to reconstruct the images, and the other to classify specific digits. Even when the most subtle features of the displayed digits were 30 times smaller than the wavelength of sound emitted by the speakers, the algorithms were able to classify images they had never seen before with an accuracy of almost 80%.
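The classification step can be illustrated with a toy stand-in for the team’s trained neural network: a nearest-centroid classifier acting on flattened 8 × 8 amplitude patterns. The two hand-made “digit” templates below are invented for illustration; the real system learned from 70,000 training images.

```python
# Toy stand-in for the paper's digit classifier: instead of a trained
# neural network, classify 8x8 "speaker amplitude" patterns by nearest
# template. The templates are invented 3x3 shapes padded to 8x8.

def make_pattern(rows):
    """Embed a small 0/1 pattern in an 8x8 grid and flatten to 64 values."""
    grid = [[0.0] * 8 for _ in range(8)]
    for r, row in enumerate(rows):
        for c, v in enumerate(row):
            grid[r + 2][c + 2] = float(v)
    return [v for row in grid for v in row]

# Two invented reference "digits": a vertical bar ("1") and a ring ("0")
TEMPLATES = {
    "1": make_pattern([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "0": make_pattern([[1, 1, 1], [1, 0, 1], [1, 1, 1]]),
}

def classify(amplitudes):
    """Return the label of the template with the smallest squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda label: dist(amplitudes, TEMPLATES[label]))

# A noisy measurement of the "1" pattern still classifies correctly
noisy = [v + 0.1 * (i % 3 - 1) for i, v in enumerate(TEMPLATES["1"])]
print(classify(noisy))  # 1
```

A neural network replaces the fixed templates with learned features, which is what lets the real system recover sub-wavelength detail from the noisy amplified near-field signal.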
Orazbayev and Fleury now hope to implement their technique using light waves. This will allow them to apply their neural networks to cellular-scale biological structures, quickly producing high-resolution images with little processing power required. If achieved, such a technique could open new opportunities for biomedical imaging, with potential applications ranging from cancer detection to early pregnancy tests.
Inspired by the behaviour of moths, researchers have developed a novel collective search algorithm for locating the source of an odour. Robots that mimic moth search patterns have been proposed before for the detection of harmful gases and volatile substances. Now, research led by Antonio Celani at the Abdus Salam International Centre for Theoretical Physics in Trieste suggests that this approach could be optimized by using a group of robots that collaborate to find a source.
Many animals rely on their sense of smell to navigate and locate resources. Dispersal by turbulent air currents can make this a difficult task and some animals have adapted behaviours to track odours. One of the most extraordinary of these is observed in male moths. To find a mate, they must follow hormonal chemicals (pheromones) produced by females, and they are able to do this successfully over distances of hundreds of metres.
Able to sense wind direction, and equipped with a keen sense of smell, moths employ a two-step “cast and surge” strategy to correct their motion. When a moth detects pheromones it “surges” by flying into the wind until it loses the scent, at which point it begins zigzagging (casting) at 45° to the wind until it recovers the trail. It is this process that, when incorporated into an algorithm, can drive gas detection robots.
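A minimal sketch of the cast-and-surge rule, assuming a 2D agent that knows the wind direction; the unit speed, the fixed 45° cast angle and the per-step zigzag flip are illustrative choices, not parameters from the paper.

```python
import math

# Minimal sketch of the "cast and surge" rule described above; all
# parameters (speed, cast angle) are illustrative, not from the paper.

def cast_and_surge_step(detects_odour, wind_dir, cast_sign, speed=1.0):
    """Return (velocity, next_cast_sign).

    Surge: fly straight upwind while the scent is present.
    Cast: after losing the scent, zigzag at 45 degrees to the wind,
    flipping the zigzag side each step.
    """
    upwind = (-wind_dir[0], -wind_dir[1])     # move against the wind
    if detects_odour:                         # surge upwind
        angle = 0.0
        next_sign = cast_sign
    else:                                     # cast at +/-45 degrees
        angle = cast_sign * math.pi / 4
        next_sign = -cast_sign                # flip side: zigzag
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    vx = speed * (cos_a * upwind[0] - sin_a * upwind[1])
    vy = speed * (sin_a * upwind[0] + cos_a * upwind[1])
    return (vx, vy), next_sign

# Wind blowing in +x: a surging agent moves in -x (upwind)
v, sign = cast_and_surge_step(True, (1.0, 0.0), cast_sign=1)
print(v)
```

In a robot, `detects_odour` would come from a gas sensor and `wind_dir` from an anemometer, with the step repeated at each control cycle.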
Collective motion meets cast and surge
Like most animals, moths do not work together to find a mate. However, other animals such as fish and birds coordinate their movement in groups to increase the efficiency of their search for resources. Celani and colleagues investigated whether modelling moths as flocking animals could give the same advantage in the search for an odour source – and by extension enhance the cast and surge algorithm.
In a flock, each agent has two forms of motion. They react individually to their environment – in this case by performing cast and surge – and they move as a group by aligning with those around them. This tendency to align is called “collective motion” and in its simplest form it is described by a model called the Vicsek algorithm. Here each agent chooses a velocity by averaging that of their neighbours, so that the group moves coherently.
The researchers combined the Vicsek and cast-and-surge algorithms and introduced a “trust parameter”: a number between 0 and 1 that describes the balance between them. When this parameter is 0, the agents behave as individuals, and when it is 1 they purely follow the Vicsek model.
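The trust-weighted update can be sketched as a linear blend of an agent’s individual (cast-and-surge) velocity with the average velocity of its neighbours. Averaging full velocities rather than unit headings is our simplification of the Vicsek alignment step, for illustration only:

```python
# Sketch of the trust-weighted velocity update: each agent blends its
# individual cast-and-surge velocity with the average velocity of its
# neighbours (a Vicsek-style alignment term). This is an illustration,
# not the paper's code; Vicsek proper averages headings at fixed speed.

def blended_velocity(own_velocity, neighbour_velocities, trust):
    """trust = 0: pure individual; trust = 1: pure Vicsek alignment."""
    n = len(neighbour_velocities)
    avg = (
        sum(v[0] for v in neighbour_velocities) / n,
        sum(v[1] for v in neighbour_velocities) / n,
    )
    vx = (1 - trust) * own_velocity[0] + trust * avg[0]
    vy = (1 - trust) * own_velocity[1] + trust * avg[1]
    return (vx, vy)

# Agent surging in -x, neighbours heading in -y, near-optimal trust of 0.8
v = blended_velocity((-1.0, 0.0), [(0.0, -1.0), (0.0, -1.0)], trust=0.8)
print((round(v[0], 2), round(v[1], 2)))  # (-0.2, -0.8)
```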
In simulations, agents were released at a distance from a source of odour particles, which were dispersed in a turbulent velocity field. The time taken for the first agent to reach the source was then calculated as a function of the trust parameter.
Optimizing trust
The researchers simulated collective search in a range of environments by varying the speed, population and perception of the agents, as well as the odour emission rate and the velocity field fluctuations. Across the simulations, a trend soon emerged. There is an optimal trust parameter of around 0.8 where the search time is minimized, and the fastest agent behaves almost as if it already knows where the source is. This is also the point where variations in the search time are smallest, with collective motion effectively “buffering” against changes to the simulation conditions.
Describing their research in Physical Review E, the team highlights the effect of changing the odour particle emission rate; reducing this parameter from 1 to 0.05 increases the search time of a single agent threefold but has almost no effect in the case of the optimized group.
This research demonstrates that incorporating collective motion into odour-tracking algorithms can not only make them faster, but also more flexible. The algorithm’s robustness makes it a realistic candidate for further study, where it could advance the use of robotics for the detection of harmful, volatile chemicals and pollutants.
To successfully diagnose diseases and disorders, doctors need the best possible tools to see inside the human body. Imaging the body’s tightest spaces without causing damage, however, can be tricky and necessitates camera-like devices that are smaller than was previously possible to engineer. An international team of researchers has now overcome these technical challenges to produce a camera that is small enough to see inside even the narrowest blood vessels.
The team, led by researchers from the University of Adelaide and the University of Stuttgart, has manufactured the smallest 3D imaging probe ever reported. They used 3D micro-printing to put a miniature lens onto the end of an optical fibre that is the same thickness as a human hair. Together with a sheath to protect the surrounding tissue and a special coil to help it rotate and create 3D images, the entire probe is less than half a millimetre across – the same thickness as a few sheets of paper.
Heart disease plaques come into focus
Such a small device can reach into minuscule spaces deep inside the body. The team even managed to see inside the tiny blood vessels of a diseased mouse and a severely narrowed human artery. The ability to look inside these blood vessels gives the researchers hope that their probe could improve the way we understand and treat heart disease.
“A major factor in heart disease is the plaques, made up of fats, cholesterol and other substances that build up in the vessel walls,” explains Jiawen Li, a co-author of the study, published in Light: Science and Applications. “Miniaturized endoscopes, which act like tiny cameras, allow doctors to see how these plaques form and explore new ways to treat them,” she says.
3D micro-printing opens new doors
The new device is not just the smallest, though. It also overcomes many issues that caused other small imaging devices to produce poor-quality images. Previously, the smallest lenses that could be made were simple and often spherical in shape. This led to unwanted effects such as spherical aberration and astigmatism, which prevent the image from being properly focussed and render it blurry. The small designs also struggled to find a good balance between high resolution and good depth-of-field, a measure of how much of the image is in focus.
The new technique of 3D printing more complex micro-optics straight onto a fibre overcomes these issues. Simon Thiele from the University of Stuttgart was responsible for producing these tiny lenses. “Until now, we couldn’t make high-quality endoscopes this small,” he says. “Using 3D micro-printing, we are able to print complicated lenses that are too small to see with the naked eye.”
Given that heart disease kills one person every 19 minutes in Australia, Li predicts that these new devices could be set to make a big impact. “It’s exciting to work on a project where we take these innovations and build them into something so useful,” she explains. “It’s amazing what we can do when we put engineers and medical clinicians together. This project not only brought disciplines together, but researchers from institutions around the world.”
A side view of the ion trap apparatus used in the experiment. (Courtesy: JCJ Koelemeij)
The most precise measurement to date of the proton-electron mass ratio suggests that the proton may be lighter than previously thought. The result, from researchers in the Netherlands and France, provides a crucial independent cross-check with previous measurements of the ratio, which yielded inconsistent values.
The proton-electron mass ratio is an important quantity in physics and a benchmark for molecular theory. It can be determined by measuring the rotations and vibrations of ordinary molecular hydrogen ions (H2+) and comparing them to similar ro-vibrational measurements in their deuterated cousins (HD+). Both entities are the very simplest bound systems that can be termed “molecules”, and as such they are ideal for probing models of fundamental physics. Indeed, when researchers first performed measurements of ro-vibrational transitions in HD+ 40 years ago, they suggested that the results could be used to test the theory of quantum electrodynamics (QED) in molecules.
Most precise determination of the proton-electron mass ratio
Recent measurements of the relative atomic masses of light atomic nuclei – including deuterons and helions (helium-3 nuclei) as well as protons – have, however, uncovered values that differ from earlier results by several standard deviations. For example, a proton-mass measurement made in 2017 using a traditional method, Penning-trap mass spectrometry, determined the mass with a precision of 32 parts per trillion (ppt). While the precision of this measurement was three times higher than that of the previously accepted value (termed CODATA-2014), the measured value itself was nearly 300 ppt smaller.
“This value was taken along with the earlier values to compute the 2018 CODATA value of the proton mass, but all uncertainty margins had to be stretched by a factor of 1.7 to cover the difference,” explains study team leader Jeroen Koelemeij of VU University in Amsterdam. “This currently limits the precision of the ‘official’ proton-electron mass ratio to 60 ppt – and affects theoretical calculations of HD+ at a similar level.”
In their new work, Koelemeij and colleagues used a different technique, Doppler-free two-photon laser spectroscopy of trapped HD+ ions, to measure a proton-electron mass ratio of 1836.152 673 406(38), where the number in brackets is the statistical uncertainty in the last digits. At 21 ppt precision, this measurement is the most precise determination of the proton-electron mass ratio to date – a record shared with a very recent value obtained from rotations of HD+ in a concurrent experiment in Düsseldorf, Germany.
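The quoted 21 ppt precision follows directly from the measured value and its bracketed uncertainty:

```python
# The bracketed (38) is the uncertainty in the last digits of the
# measured ratio, i.e. 0.000 000 038. Relative precision in parts
# per trillion (ppt):
value = 1836.152673406
uncertainty = 38e-9
ppt = uncertainty / value * 1e12
print(round(ppt))  # 21
```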
Getting rid of the Doppler effect
In measurements like these, Koelemeij explains that the Doppler effect – the apparent detuning of the laser frequency from resonance with the HD+ ion, depending on the ion’s velocity – is a nuisance. Even at the very low temperatures (10 mK above absolute zero) that prevail in their experimental trap, the molecules in the trap can still move around. To eliminate the Doppler effect entirely, the ions would ideally need to be brought to a complete standstill – an experimental impossibility.
“Fortunately, quantum mechanics is forgiving,” Koelemeij says. “If we want a laser to ‘see’ a particle at rest, we just need to ensure that it doesn’t move around by more than the wavelength of the laser light” – a condition known as the Lamb-Dicke limit.
Koelemeij and colleagues met this condition in a roundabout way by shining two lasers on the molecule from opposite directions. When the wavelengths of these counter-propagating laser beams are tuned to the right values, the molecule will absorb a photon from each beam and, in effect, add the energy from both photons to its own vibrational energy. The resulting “two-photon” transition between the molecule’s energy levels has an apparent wavelength set by the small difference between the two laser wavenumbers (inverse wavelengths). “Since both lasers have nearly the same wavelengths, the apparent wavelength becomes very large – larger than the volume in which the HD+ ions are confined,” explains Koelemeij. “According to quantum mechanics, the ions appear to be standing still: no Doppler effect.”
With the Doppler effect removed, the true purity of the vibrations becomes visible, Koelemeij tells Physics World. Indeed, the vibrations of the molecules under these circumstances are nearly as pure as the oscillations in the best atomic clocks – meaning that they can be measured with high precision.
Another advantage of the technique is that by using two photons, the researchers can select molecular vibrations in a way that is relatively insensitive to magnetic fields, which helps improve the precision of the measurement.
QED theory works for molecules
The new work, which is detailed in Science, confirms that QED, which is very successful at describing single particles and atoms, also works for more complex matter such as simple molecules – something that Koelemeij says was previously unclear. “While there have been comparisons (and agreement) between QED theory of molecules (notably the HD+ ion) and experiment, the experiments were never more precise than theory,” he explains. “This means that the finest details of the theory calculation could not be tested.”
The new result, he says, has turned the situation on its head. For the first time, experimental measurements of these molecular vibrations are significantly more precise than values generated from theory – meaning that theoretical predictions can now be tested to the fullest extent. What’s more, predictions from theory can be used as a tool to translate the measured vibrations into a new value of the proton-electron mass ratio – as predicted more than four decades ago.
“Intriguingly, the proton-electron mass ratio we have measured is in fact compatible with the recent measurements of the proton mass, which were unexpectedly smaller than previous reference values,” adds Koelemeij. “This means that the proton indeed seems to be lighter than we thought.”
The very precise measurements in HD+ could also help solve several important and hotly debated mysteries in physics, including the puzzle of why the radius of the proton appears to be smaller than expected. Knowing the mass of the proton (and the anti-proton) to high precision could also help researchers understand why there is much more matter than antimatter in the universe, even though equal quantities of each were thought to have been created in the Big Bang.