We often talk about “hot topics” in the podcast, but this week there’s a chill in the air as the Physics World team explores stories about cold stuff.
First up is a discussion of what happens when cold lithium atoms collide with a cold ytterbium ion, as observed by a group of physicists in the Netherlands and described in a recent research paper.
After that, we switch to discussing the “cold spot” in the cosmic microwave background, which was first observed by NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) in 2004 and has been puzzling cosmologists ever since.
We then take a detour into the early years of the Cold War, when the physicist-turned-spy Klaus Fuchs gave the Soviet Union crucial insights into the workings of the first atomic weapons. Fuchs’ story includes many of the trappings of a spy novel, and a new book about his life is a fascinating read – albeit one marred by some annoying editorial lapses.
Finally, the podcast team comes in from the cold with a chat about the strange behaviour of Betelgeuse. The star in Orion’s right shoulder is usually one of the brightest stars in the night sky, but for the past few weeks it’s been much dimmer than usual. Could this dimming be the preamble to a spectacular supernova?
Physicists have taken another step forward in the search for signs that antimatter behaves differently to matter – and so might explain why the universe appears to consist almost exclusively of the latter. Researchers at the CERN particle-physics laboratory in Switzerland used laser spectroscopy to scrutinize the fine structure of antihydrogen, revealing that a tiny difference in the energy of its excited states – known as the Lamb shift – is the same as in normal hydrogen, to within the measurement’s uncertainty.
The fact that the cosmos seems to contain very little antimatter – even though equal quantities of that and ordinary matter should have been produced following the Big Bang – is a major outstanding problem in physics. Generating, trapping and then measuring atoms of antimatter offers a relatively new way of probing this asymmetry. In particular, anomalies in the spectra of antiatoms compared with the known results from ordinary matter could point to a violation of what is known as charge–parity–time (CPT) symmetry.
The ALPHA collaboration at CERN is led by Jeffrey Hangst of Aarhus University in Denmark and is one of the leading groups in the field. It makes atoms of antihydrogen by taking antiprotons from the lab’s Antiproton Decelerator and combining them inside an electromagnetic trap with positrons emitted by a source of radioactive sodium. To be able to study the resulting neutral atoms over extended periods it stores them at the local minimum of a magnetic field created by powerful superconducting magnets, thanks to the interaction of that field with the particles’ tiny magnetic dipole moments.
ALPHA uses laser beams tuned across a specific range of frequencies to study the energy spectrum of antihydrogen. It has already measured the energy difference between the ground and first excited states (1S and 2S), showing in 2018 that the difference is equal to that of normal hydrogen at a level of one part in 10¹². That uncertainty was within three orders of magnitude of the best hydrogen measurements and about five orders of magnitude below the level that a theory known as the Standard-Model Extension predicts could reveal CPT-violating effects.
Precision studies
In the latest research, reported in Nature, ALPHA has instead probed fine structure within antihydrogen’s first excited state. It did so by accumulating several hundred cold antiatoms – produced in groups of about 20 every four minutes – and storing them for over two days in a 1 T magnetic field. It then used short pulses of ultraviolet light to lift the atoms from their ground state to either the 2P1/2 or 2P3/2 state. As the atoms dropped back down to the 1S state, some (in specific magnetic substates) could no longer be held by the magnetic trap and so annihilated with atoms of ordinary matter in the trap walls.
By identifying peaks in a plot of the number of annihilations against laser frequency, Hangst and co-workers were able to establish the energy gaps between the two 2P states and the 1S state in the presence of the magnetic field. These results had a precision of 16 parts in a billion. Then, by subtracting the smaller gap from the larger one and using theory to work out what the difference would be without a magnetic field, they found that the so-called fine-structure splitting in antihydrogen is equal to that of its matter counterpart to within 2%.
Finally, the researchers subtracted the 1S to 2P1/2 energy from their previously obtained figure for the 1S–2S transition to yield a value for the Lamb shift (the gap between the 2S1/2 and 2P1/2 states). Discovered by Willis Lamb in 1947, this effect arises from the interaction of hydrogen’s electron with quantum fluctuations in the vacuum and was key to the subsequent development of quantum electrodynamics. ALPHA has shown that the Lamb shift in antihydrogen agrees with that of normal hydrogen to about one part in ten.
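The two subtractions can be illustrated with round numbers from ordinary hydrogen spectroscopy – a minimal sketch, in which the transition frequencies below are approximate hydrogen literature values, not ALPHA’s antihydrogen data:

```python
# Illustrative hydrogen values in MHz (approximate literature numbers,
# not ALPHA's measured antihydrogen data)
f_1s_2s = 2_466_061_413.2     # 1S -> 2S1/2 transition frequency
f_1s_2p12 = 2_466_060_355.4   # 1S -> 2P1/2 transition frequency
f_1s_2p32 = 2_466_071_324.4   # 1S -> 2P3/2 transition frequency

# Subtracting the smaller 1S-2P gap from the larger one gives the
# fine-structure splitting; subtracting the 1S-2P1/2 gap from the
# 1S-2S gap gives the Lamb shift (the 2S1/2-2P1/2 splitting).
fine_structure = f_1s_2p32 - f_1s_2p12
lamb_shift = f_1s_2s - f_1s_2p12

print(f"fine-structure splitting ~ {fine_structure:,.1f} MHz")  # ~10,969 MHz
print(f"Lamb shift ~ {lamb_shift:,.1f} MHz")                    # ~1,057.8 MHz
```

The same bookkeeping applies to the antihydrogen measurement; only the uncertainties differ.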
Writing a commentary to accompany the research, Randolf Pohl at the University of Mainz in Germany says that the ALPHA Collaboration “has achieved spectacular progress” in precision spectroscopy of antihydrogen. Such research, he argues, could in future enable tests of CPT symmetry, quantum electrodynamics and the Standard Model of particle physics.
In particular, Pohl explains that reducing uncertainty in measurement of the Lamb shift to less than one part in 10,000 would allow scientists to demonstrate that the antiproton has a finite charge radius, like the proton. Pushing the uncertainty down even further, he told Physics World, “may eventually help establish CPT violation, if it exists in nature”.
Left: a small volume of blood is placed into the optofluidic lab-on-a-chip for rapid plasma separation. Right: the copper master used for chip fabrication, a zoomed-in SEM image, and the microfluidic chip with drops of whole blood at its input. (Courtesy: Nat. Biomed. Eng. 10.1038/s41551-019-0510-4)
Researchers in the UK claim to have developed a microfluidic chip that can rapidly tell whether someone has suffered a traumatic brain injury from a finger-prick blood sample. The optofluidic device detects a biomarker linked to brain injury, based on the way that it scatters light (Nat. Biomed. Eng. 10.1038/s41551-019-0510-4).
Identifying a traumatic brain injury – where a head injury disrupts normal brain function – isn’t always easy and time is often critical. But with diagnosis often relying on imaging such as CT and MRI scans, assessing all potential cases can be resource intensive.
To address this, Pola Oppenheimer from the University of Birmingham and colleagues developed a lab-on-a-chip system that assesses levels of a molecule produced by the central nervous system after a brain injury. “The idea is to try to develop a portable point-of-care diagnostic assay that allows us to pick up really early stages, really low concentrations of biomarkers indicating traumatic brain injury,” Oppenheimer tells Physics World.
The device rapidly separates plasma from whole human blood through capillary action. The finger-prick blood sample is added to the chip and then flows along a channel and through a series of combs that filter out red blood cells. The separated plasma then flows into an optical detection area.
The plasma sample is analysed using Raman spectroscopy to detect concentrations of N-acetylaspartate, one of the most abundant molecules in the central nervous system. The researchers assessed four potential biomarkers, but decided to focus on N-acetylaspartate, as evidence suggests it is an effective, specific indicator of traumatic brain injury.
The team tested the setup using blood samples taken as part of a large study looking at early changes in patients who have suffered a traumatic brain injury. The samples came from 35 people with confirmed severe traumatic brain injury, eight people who had suffered head – but not brain – injuries, and 23 healthy volunteers. In the injury groups, the first blood sample was taken by the ambulance crew at the scene of the accident. Additional samples were taken over the next 48 hours. In total, the researchers tested 221 blood samples.
N-acetylaspartate levels were on average more than five times higher in patients with traumatic brain injury immediately after injury than in the other groups. Concentrations were also significantly higher in the brain injury group than in patients with head injuries only, both at eight and at 48 hours after injury.
The researchers say that the test clearly discriminated between those with traumatic brain injuries and those with other injuries. Overall, they claim that by testing for elevated N-acetylaspartate levels, they were able to identify patients with traumatic brain injury with an accuracy of almost 99% immediately after injury, and around 91% at eight and 48 hours after injury.
Because N-acetylaspartate concentrations in blood plasma are so low, the team had to enhance the spectral readout to analyse them with Raman spectroscopy. To do this, they performed surface-enhanced Raman spectroscopy using special electrohydrodynamically fabricated surfaces in the detection portion of the chip.
The detection surfaces were created by placing silicon wafers between two electrodes. When a voltage is applied to the electrodes, the resulting electric field and electrostatic forces destabilize the smooth silicon wafer, creating a pattern of pillars. By adjusting the nature of the electric field, this pattern can be precisely controlled and tailored to enhance the light scattering of specific molecules. Once this process is complete, the wafer is given a fine coating of gold.
“We can control the dimensions really nicely, we can control the height and the width and the spacing between the structures, and this allows us to really carefully tune the different morphologies to match the excitation wavelength and get a really good resonance for high signal enhancement,” Oppenheimer explains.
Ultimately the team hopes to create a device that can be used at accident sites to diagnose and triage patients – to help ambulance crews decide which patients to send to major hospitals with neurosurgical facilities.
Oppenheimer tells Physics World that the next step is a medium-sized clinical trial. But she adds that there could be other healthcare applications: “This is quite a versatile technology, so although we validated it for traumatic brain injury, we are now starting work with other diseases.”
A friend of mine recently took his boat across the Atlantic. It was great fun and a real adventure – I particularly loved the photos of dolphins he posted online. But as a practical mode of transport, going by boat just doesn’t cut it in the modern age. Unless, of course, you’re Greta Thunberg, who sailed to New York to make a serious point about the impact of climate change before delivering a powerful speech to the United Nations on the matter.
But when Thunberg was named as Time magazine’s person of the year for 2019, it got me thinking. For all the amazing advances in aircraft technology – flying from Europe to New York is now nearly twice as efficient as going by ship – aviation still has a big environmental problem. Planes spew out carbon dioxide and nitrogen oxides, which form ozone in the upper troposphere. They also emit particulates and leave water-vapour trails, both of which trap heat.
Indeed, the downsides of air travel have led to a rapidly growing “flight-shame” movement, particularly in Europe. In Sweden, for example, passenger numbers are down year-on-year by 11%. A similar fall has occurred in Germany, where the federal government has responded by cutting tax on train travel.
Airbus, meanwhile, is one of 50 firms to join the Air Transport Action Group (ATAG) – a not-for-profit association that wants to halve the aviation industry’s CO2 emissions by 2050 (compared to 2005 levels). However, these ambitious targets (though I doubt Thunberg would see them as such) cannot be achieved using existing technologies. Members of the ATAG therefore believe that alternative propulsion technologies – including electric and hybrid-electric systems – will be required.
Airbus, along with Rolls-Royce and Siemens, has already developed the E-Fan X – a demonstrator craft in which one of the four jet engines is replaced by a 2 MW electric motor, which has roughly the power of 10 medium-sized cars. When high power is required – at take-off, for example – the system’s generator and battery supply energy together. Unfortunately, today’s batteries are so heavy and bulky that it’s unlikely this plane could be used for long-haul flights, which make up four-fifths of aviation emissions.
So what’s to be done? Well, we can forget nuclear fuel as a solution: uranium-235 has a huge energy density of 8 × 10⁷ MJ/kg, but no-one’s going to want fission-powered planes landing at their local airport – and who’d want to get on board in the first place? As for lithium-ion batteries, they have a storage capacity of 0.95 MJ/kg at best – nowhere near the 43 MJ/kg of kerosene. The clear winner is hydrogen, which has an energy density of over 140 MJ/kg. Burning it emits almost no CO2 and few nitrogen oxides. It leaves just a bit of water vapour, which I suspect we can live with.
But hydrogen has problems. It’s highly volatile, so you can’t store it in a plane’s wings. Most aircraft designs that use liquid or pressurized hydrogen therefore store it in the fuselage. That in turn means you’d need a larger fuselage (for the same number of passengers) than a conventional kerosene-fuelled aircraft, leading to greater friction drag and wave drag – and hence higher energy costs.
On the plus side, 1 kg of hydrogen can provide the same energy as 3 kg of kerosene, cutting the gross take-off mass of a Boeing 747-400 aircraft from 360 to 270 tonnes. Given that hydrogen is likely to be cost-competitive with kerosene by 2037, I think existing aircraft designs and jet engines could be adapted to run on hydrogen without too much difficulty. The problem is that this date is so far off that, even though kerosene supplies are dwindling, no-one’s in a rush to make the switch.
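A quick back-of-envelope check of that claim, using the specific energies quoted earlier; the kerosene load assumed for the 747-400 is an illustrative figure chosen to match the quoted mass reduction, not a number from the text:

```python
# Back-of-envelope check of the fuel-mass argument.
E_H2 = 142.0    # specific energy of hydrogen, MJ/kg ("over 140")
E_KER = 43.0    # specific energy of kerosene, MJ/kg

ratio = E_H2 / E_KER
print(f"1 kg of hydrogen ~ {ratio:.1f} kg of kerosene")   # ~3.3

m_kerosene = 135.0                  # tonnes of fuel on board (assumed)
m_hydrogen = m_kerosene / ratio     # same onboard energy, ~41 t
saving = m_kerosene - m_hydrogen
print(f"take-off mass falls by ~{saving:.0f} t")  # ~94 t, i.e. 360 -> ~270 t
```

The saving comes purely from swapping fuel mass; in practice heavier hydrogen tanks would claw some of it back.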
Money matters
What will drive the change to low-carbon flight is economics. Every passenger leaving the UK currently has to pay £26 in Air Passenger Duty (APD) tax for every short-haul flight and £150 for long-haul flights. Introduced in 1993 to offset the environmental impacts of air travel, APD currently brings in £3–4bn to the UK Treasury’s coffers. Ticket prices would only rise further if planes were fitted with hydrogen fuel tanks as these swallow up about a third of the craft’s available passenger space.
To me, shifting the cost of greener air travel to customers is the wrong way of going about things. The EU and UK currently put no tax on aircraft fuel (zero VAT), but if kerosene were taxed it would encourage aircraft manufacturers to develop even more efficient planes and be a catalyst for faster change in the industry. Aircraft and engine makers would also have new revenue streams in the form of refurbishing planes to run on more efficient kerosene engines or even converting existing planes to hydrogen.
According to the UN’s International Civil Aviation Organization, the global air-transport network is expected to double in size by 2030. With 23,000 commercial aircraft in service in 2017, Boeing says we’ll need almost 40,000 new planes over the next 20 years. By 2037 there should therefore be more than 63,000 aircraft in the world. Greener air travel is a problem we need to solve now – not in 20 years’ time.
The challenge is to make hydrogen fuel cleanly and economically and, for me, the only solution is to do so by electrolysing water rather than extracting it from fossil fuels. And if the power used to create, compress or liquefy hydrogen can itself be from carbon-free sources such as renewables or nuclear, then surely hydrogen is the future of carbon-free aviation.
Despite the advent of radio astronomy in the early 20th century, and the ability to listen to our galactic neighbours, it’s not obvious that anyone wants to be heard, and it’s not obvious that anyone is saying anything. The silence is felt. But this hasn’t stopped scientists entertaining this notion: if there is intelligent life out there, could we understand each other? And if we could, how would we? That’s the crux of science writer Daniel Oberhaus’s new book Extraterrestrial Languages.
Oberhaus gives a comprehensive overview of attempts at designing interstellar messages, contemporarily known as METI (messaging extraterrestrial intelligence) and contrasted with SETI (search for extraterrestrial intelligence). The early endeavours were varied (Morse code, giant solar mirrors), if not outlandish (flaming kerosene-filled craters in the Sahara). But each had value in reflecting the Zeitgeist of the culture that devised it.
More recent ideas include jocular artificial-intelligence chat-bots, self-explanatory computer software and natural language corpora transmitted in binary. Throughout, Oberhaus shows how important both the form and content of METI really are.
By integrating linguistics, philosophy, biology, mathematics and even animal studies, Oberhaus conveys the complexity that a lingua cosmica would rightly entail. Many of the related concepts are easy to understand, such as mathematical Platonism – maths is a language of the universe, independent of mind – and the formalist approach, which says that mathematical phenomena are derived by biased observers.
A mathematical background will therefore help with concepts such as Zipf’s law and the lambda calculus, which, lacking concrete definitions, might disorient readers (though the sizeable appendix will be useful). This doesn’t detract from the accessible tone of the book, however, which touches on landmark attempts at interstellar communication, including the Arecibo transmission of 1974, the Voyager 1 and 2 craft launched in 1977, and the Cosmic Call broadcasts of 1999 and 2003 – the latter spurring renewed interest in the search for extraterrestrials.
Each chapter piqued my interest with its technical and practical implications. For instance, Lancelot Hogben’s Astraglossa – the first symbolic system designed for interstellar communication, presented to the British Interplanetary Society in 1952 – raises concerns over whether a message should be self-interpreting given the unfathomable timescales involved in interstellar travel. And, considering the transient nature of human knowledge, how fundamental (i.e. non-human) should the contents of a message be? What would constitute a language of the universe? Each dilemma adds to the “meta” quality of the book, which I found stimulated my reading and was a clear signal of Oberhaus’s scholarship.
The text is meticulously researched and serves as a useful entry to various academic disciplines. Oberhaus swiftly fleshes out each point of contention in METI’s controversial history. This is especially salient in the final chapter on the ethics of METI. Here Oberhaus considers: Is it safe? Is it worth the investment? Who should speak for Earth, and should they tell the truth?
I came away from Extraterrestrial Languages with more questions than answers. But I am sure I won’t be the only reader who will find themselves left with a sense of the nuance and uniqueness of human experience, represented in our attempts at interstellar communication past, present and future.
As part of the Liver4Life project, researchers in Zurich have developed a new machine that can keep livers alive outside the body for up to a week. The development of this machine could reduce waiting times for donor organs and improve the lives of thousands of people on organ waiting lists (Nature Biotechnol. 10.1038/s41587-019-0374-x).
Currently, when an organ is removed from a donor it is flushed with a preservation solution and stored on ice. This keeps it in good condition for 12–18 hours whilst it is transferred to the recipient. However, there is a shortage of healthy organs available for donation and waiting lists can be prohibitively long.
To keep donated livers alive for a longer time, perfusion machines can be used. These keep nutrients flowing through the organ to maintain function. Currently, however, such machines can only keep livers viable for 24 hours when operating around normal body temperature. The Liver4Life project set out to extend this time to one week. To do this, the machine replicates conditions in the body by reproducing key functions.
Replicating the liver’s natural environment
Before trying out their machine with any human livers, the researchers, from University Hospital Zurich, ETH Zurich, Wyss Zurich and the University of Zurich, investigated pig livers. They identified several key functions of the liver that had to be maintained: prevention of red blood cells bursting, glucose metabolism, liver oxygenation, simulation of diaphragm movement and waste removal. All these factors are controlled automatically by the machine, removing the need for physicians to adjust them manually throughout the week.
The machine pumps blood through an artery of the liver in pulses, mimicking the natural blood supply. This pulsed flow is essential to prevent red blood cells from bursting, which can damage the liver. Through the blood, the machine delivers nutrients and controls glucose levels and oxygenation, monitoring them using a series of sensors.
To remove waste products, the team incorporated a dialysis unit into the machine, which uses an algorithm to automatically adjust the flow and control the concentration of red blood cells.
Finally, in addition to nutrients, the liver also needs to be kept constantly moving to prevent tissue death. To simulate the movement caused by the diaphragm, the team placed a balloon below the liver to create automated movement.
Infographic describing some of the functions designed to replicate the conditions of the body. (Credit: USZ)
The researchers tested the machine’s effectiveness using 10 poor-quality human livers. After one week, six of the 10 livers remained healthy, and some even recovered to some extent from injuries caused by the preservation technique.
Success could change lives
“The success of this unique perfusion system – developed over a four-year period by a group of surgeons, biologists and engineers – paves the way for many new applications in transplantation and cancer medicine helping patients with no liver grafts available,” explains senior author Pierre-Alain Clavien.
Extending the window of viability for donated livers could enable poor-quality livers to be repaired for transplant in the future. The next step for the project is to use organs kept alive by the machine for transplant.
If you’re studying physics, you’ll have endless career opportunities to pick from once you graduate. So if you’re weighing up what to do next, the latest annual Physics World Careers guide is here to help you plan your best route – and perhaps introduce you to options you’d never thought of before. The free-to-read 102-page digital guide includes advice on career development, case studies showcasing different paths in academia and industry, as well as a comprehensive directory of employers looking to hire physicists just like you.
If it’s a further foray into academia you’re after, with a master’s or PhD in your sights, then we can help you pick the perfect postgraduate topic. If research into data science and high-energy physics is what you’re interested in, read our interview with CERN openlab’s Federico Carminati, as he lays out the future of particle physics and computing. Don’t forget to follow up with data-science employers, such as Tessella and TPP, which are looking to hire physicists.
And if you’ve still got a long way to go before you graduate, check out the career-development article on “Going the extracurricular mile”, which explores the benefits of finding a placement or internship to see what kind of career might best suit you and your skills.
I hope you find Physics World Careers 2020 useful. And if you want even more, do sign up for our careers newsletter. Sent once every two months, it brings you a mix of case studies, careers advice and practical information from leading employers you might be interested in working for. To sign up, simply sign in to your free Physics World online account and tick the “Careers bimonthly” box.
The built-in RGB camera in a modern smartphone can be used to create a hyperspectral imaging system for analysis and monitoring of skin features. Ruikang Wang and Qinghua He from the University of Washington suggest that the ability to produce images comparable to those from expensive hyperspectral imaging systems may eventually enable widespread use of smartphone-based hyperspectral imaging in low-resource settings and rural areas (Biomed. Opt. Express 10.1364/BOE.378470).
In a hyperspectral image, each pixel contains information regarding a series of narrow wavelength bands. Hyperspectral imaging can be used to determine levels of chromophores within skin tissue – such as haemoglobin and melanin – generating data that help differentiate melanomas from pigmented skin lesions. Variation in melanin may be seen in some skin cancers, naevi and skin pigmentation, while haemoglobin concentration may indicate vascular abnormalities and inflammation.
Hyperspectral imaging systems have been in clinical use for decades, but have a complex design, are expensive and generally limited to use in clinical laboratories. Today’s smartphones typically incorporate RGB cameras with 8 to 12 million pixels and are capable of high-speed photography. To exploit this capability, Wang and He applied Wiener estimation to transform RGB images captured by smartphone cameras into “pseudo”-hyperspectral images with 16 wavebands covering 470–620 nm. They processed the reconstructed hyperspectral images using weighted subtractions between wavebands to extract absorption information caused by specific chromophores, such as haemoglobin or melanin, within skin tissue.
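In outline, Wiener estimation learns a linear map from three-channel camera responses to 16-band spectra using the correlation statistics of a training set. The sketch below is a generic illustration on simulated data – our own implementation, not the authors’ code, calibration targets or camera model:

```python
import numpy as np

# Minimal sketch of Wiener estimation for RGB -> multispectral
# reconstruction, using simulated training data.
rng = np.random.default_rng(0)
n_train, n_bands = 200, 16

spectra = rng.random((n_train, n_bands))   # reference 16-band spectra
cam = rng.random((3, n_bands))             # assumed 3x16 camera sensitivities
rgb = spectra @ cam.T                      # simulated RGB responses (n x 3)

# Wiener matrix W = C_sr · C_rr^{-1}, from spectrum/RGB correlations
C_sr = spectra.T @ rgb                     # (16 x 3)
C_rr = rgb.T @ rgb                         # (3 x 3)
W = C_sr @ np.linalg.inv(C_rr)             # (16 x 3)

# Reconstruct a "pseudo"-hyperspectral pixel from a new RGB reading
test_spectrum = rng.random(n_bands)
estimate = W @ (cam @ test_spectrum)
print(estimate.shape)                      # one 16-band spectrum per pixel
```

Applied per pixel across a full RGB frame, the same matrix multiply yields the 16-waveband pseudo-hyperspectral image described in the paper.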
The researchers captured images from two volunteers with redness and moles on their facial skin. They also acquired images in the dark, using the smartphone camera’s built-in flashlight or a fluorescent lamp as illumination sources. Both light sources worked equally well, demonstrating flexibility in terms of using different illumination conditions.
Left: a standard RGB image from a smartphone and magnified details of acne and a mole. Right: blood flow mapping and magnified details. (Courtesy: Ruikang Wang)
As blood vessels are localized within relatively deep skin tissue, light with a longer penetration depth is suitable for detection. To extract spatial haemoglobin absorption information, the researchers therefore applied weighted subtractions between green and red wavebands. Melanin, on the other hand, exists in superficial skin layers, so they extracted melanin absorption data using weighted subtractions between blue and green bands.
Comparing the melanin absorption data with results from a snapshot hyperspectral camera showed that the absorption map created from the smartphone exhibited much better image resolution, because the smartphone camera has many more pixels than the snapshot camera.
Wang and He also examined whether it is possible to evaluate heart rate from a time series of blood information maps, by monitoring changes in blood absorption intensity in the skin. Using a fixed support to keep facial skin stable, they recorded a smartphone video under flashlight illumination.
They extracted the blood absorption map from every frame in the video and summed the signals for each frame. By Fourier transforming the temporal data, they created a plot in the frequency domain and identified a main frequency peak around 1.05 Hz. This matched the 1.05 Hz heart-beat frequency recorded by a pulse sensor for reference.
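That processing chain – sum the blood-absorption map per frame, remove the slow drift, Fourier transform and pick the dominant peak – can be sketched on synthetic data; the frame rate, duration and signal amplitudes below are illustrative assumptions, not the study’s parameters:

```python
import numpy as np

fps, duration = 30, 20                 # assumed video frame rate (Hz), seconds
t = np.arange(fps * duration) / fps
heart_rate_hz = 1.05                   # cardiac frequency to recover

# Summed blood-absorption signal: slow drift + cardiac pulsation + noise
signal = 0.05 * t + 0.5 * np.sin(2 * np.pi * heart_rate_hz * t)
signal += 0.1 * np.random.default_rng(1).standard_normal(t.size)

# Remove the linear drift, Fourier transform, and find the dominant peak
detrended = signal - np.polyval(np.polyfit(t, signal, 1), t)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {peak:.2f} Hz")  # ~1.05 Hz, i.e. ~63 beats/min
```

With a 30 fps, 20 s clip the frequency resolution is 0.05 Hz, comfortably enough to resolve a resting heart rate.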
The researchers also tested the smartphone’s ability to monitor vascular occlusion, by recording images of a volunteer’s finger with pressure from a rubber ring applied for 60 s to create a vascular occlusion. The smartphone video recorded this skin vascular occlusion, as well as restoration of the finger to normal state.
“Compared with conventional hyperspectral imaging systems, which mostly rely on lasers or tunable optical filters, the smartphone-based hyperspectral imaging system eliminates the internal time difference within frames, greatly improving the imaging speed and immunity to motion artefacts,” the researchers write. “Most importantly, our strategy does not require any modification or addition to the existing smartphones, which makes hyperspectral imaging and analysis of skin tissue possible in daily scenes out of labs.”
“In addition to future clinical applications, we envision that smartphone imaging could provide excellent impact on cosmetic consumers and also for the cosmetic industry,” comments Wang.
Wang and He tell Physics World that they are currently developing a smartphone-based app to provide information about blood perfusion, pigmentation, and porphyrin-containing bacteria and collagen content of the skin. They are also currently enrolling a mix of volunteers with various skin colours to check whether there is any skin-colour dependence in the results from their hyperspectral imaging system.
Researchers have efficiently harvested the kinetic energy of falling water droplets for the first time. The team, led by Zuankai Wang at the City University of Hong Kong, demonstrated the conversion through a device that both generates a current and charges a polymer surface as it is hit by falling droplets. Their technology could become an important source of renewable energy, capable of generating electricity in a wide variety of situations (Nature 10.1038/s41586-020-1985-6).
The motion of water, particularly in rivers, has long been an important source of renewable energy. Currently, this hydroelectricity is mostly produced through electromagnetic generators, but these are incredibly bulky, and become highly inefficient when water supplies are low. Alternatively, recent studies have attempted to harvest the kinetic energy of water using electrets – materials that remain charged for indefinite periods of time and can become charged through electrostatic interactions with water. So far, however, this technique has proven to be highly inefficient.
Wang’s team proposed that this performance could be improved through the use of the electret polymer material PTFE, which is known to be a highly stable reservoir for storing densely packed charges. In the researchers’ droplet-based electricity generator (DEG), PTFE is deposited onto a layer of indium tin oxide (ITO), itself deposited onto a glass substrate. Furthermore, the ITO coating is wired to a tiny aluminium electrode, separated from the PTFE by a small gap.
(a) Schematic of the droplet-based electricity generator (DEG). (b) Image showing four parallel DEG devices fabricated on the glass substrate. (Courtesy: City University of Hong Kong/Nature)
As falling droplets hit the DEG, they spread out across its PTFE surface, imparting an electrical charge. In addition, the interaction temporarily bridges the gap between the aluminium electrode and the PTFE/ITO electrode, creating a closed-loop circuit. Since the PTFE’s charge generates an equal and opposite charge in the ITO layer, this allows charges to migrate to the aluminium electrode, generating a current. Then, as the droplet slides off the surface, its area shrinks. This reverses the direction of this current, fully restoring charge to the ITO layer, so the cycle can repeat. After around 16,000 droplets, the PTFE’s surface charge saturates, and the effect stabilises.
By fine-tuning factors including the film thickness, Wang’s team achieved a peak power density of over 50 W/m² before saturation, as well as an average energy-conversion efficiency of 2.2%. Both values are thousands of times higher than those reachable in previous electret-based generators – significantly enhancing the DEG’s output voltage and current. In their experiments, the researchers showed that just four 100 μl droplets, falling onto the device from a height of 15 cm, could instantaneously light up 400 commercial LEDs.
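As a back-of-the-envelope check on these figures, the energy available in the quoted droplet test can be estimated from the numbers above (a rough sketch: the droplet mass follows from the quoted 100 μl volume assuming water density, and the 2.2% figure is the reported average efficiency):

```python
# Rough estimate of the energy in the quoted four-droplet test.
# Assumes water density 1000 kg/m^3 and g = 9.81 m/s^2.

g = 9.81              # m/s^2
height = 0.15         # m (15 cm drop height)
volume = 100e-9       # m^3 (100 microlitres)
density = 1000.0      # kg/m^3 (water)
efficiency = 0.022    # reported average energy-conversion efficiency

mass = density * volume                  # kg per droplet (0.1 g)
e_kinetic = mass * g * height            # J of kinetic energy per droplet at impact
e_electric = 4 * e_kinetic * efficiency  # J harvested from four droplets

print(f"kinetic energy per droplet: {e_kinetic*1e3:.3f} mJ")
print(f"electrical energy from four droplets: {e_electric*1e6:.1f} uJ")
```

The roughly 13 μJ harvested is only enough to flash the LEDs briefly, which is consistent with the LEDs lighting up “instantaneously” rather than continuously.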
Since the DEG harvests electrical energy purely from the kinetic energy of water, the team’s approach could greatly expand the range of possible applications of hydroelectricity. For the first time, the device opens up routes to the generation of renewable energy from sources where water impinges on surfaces periodically – including raindrops and ocean waves. Through longer-term improvements, Wang and colleagues hope that their technology could be applied to surfaces as far-ranging as the hulls of boats, the surfaces of umbrellas and the insides of water bottles.
Over the last 20 years I must have spoken to more than 400 school pupils who want to study physics here at the University of Manchester. One subject that regularly comes up at interview is Young’s double-slit experiments, which clearly interest my prospective students. But when I ask them what the experiments are all about, I’m invariably told they involve using electrons to demonstrate wave–particle duality – one of the cornerstones of quantum physics. That’s curious because Thomas Young performed his experiments in 1804 – long before we knew anything about electrons or the subatomic world.
Young’s original double-slit experiments were in fact the first to demonstrate the phenomenon of interference. When he shone light through two narrow slits and observed the pattern created on a distant screen, Young didn’t find two bright regions corresponding to the slits, but instead saw bright and dark fringes. He explained this unexpected observation by proposing that light is a wave, in opposition to Newton’s idea that light is made of particles. These experiments, and their subsequent explanation, culminated in the classical theory of radiation embodied in James Clerk Maxwell’s famous equations.
From Young’s double slits to wave–particle duality
Wave pioneer: Thomas Young. (Courtesy: Sheila Terry/Science Photo Library)
The remarkable success of the wave theory of light, inspired by Thomas Young’s original double-slit experiments of 1804, was marred by two later observations that did not fit the theory. One was the measurement of the radiation density emitted by a blackbody, which could not be explained using the accepted laws of radiation formulated by Lord Rayleigh and James Jeans – the so-called “ultraviolet catastrophe”. This problem led Max Planck in 1900 to develop an alternative theory, which assumed blackbody radiators have discrete (quantized) energies, from which he successfully predicted the experimental data.
The second problem was the photoelectric effect – that light can kick out electrons from a material but only if it’s above a certain frequency. Extending Planck’s ideas, Albert Einstein was able to explain this phenomenon by predicting that the radiation is quantized. This insight also let him predict that the intensity of light depends on the rate at which these particles of fixed energy (later called photons) are detected. Wave theory, in contrast, stated that the intensity should be proportional to the square of the amplitude of the wave. Further work by Ernest Rutherford and Niels Bohr in Manchester led to the development of the “old” quantum theory, which explained the structure of atoms and why their spectra were discrete.
Several years later, Louis de Broglie suggested that if light can be considered as having both wave- and particle-like properties, then perhaps matter also has a dual nature. Experimental evidence supporting this soon followed, from which Erwin Schrödinger, Werner Heisenberg and Paul Dirac developed the modern form of quantum mechanics we use today. Only in the 1960s did the link between Young’s double-slit experiment and wave–particle duality become clear when it was carried out for the first time with an electron beam.
The link between Young’s experiments and wave–particle duality only became obvious last century once the basics of quantum mechanics had been firmly established (see box above). The story began in 1961 – more than 130 years after Young’s death – when Claus Jönsson from the University of Tübingen in Germany machined a set of slits 300 nm wide into copper and then irradiated them with a 40 keV beam of electrons from an electron microscope (Z. für Physik 161 454). The resulting images showed an interference pattern, just as Young had first seen with light 160 years earlier. This first double-slit experiment with electrons indicated that an electron beam behaves as a wave. But since Jönsson couldn’t create or measure individual electrons, he couldn’t prove that each electron itself has a wave-like character.
In 1965 Richard Feynman then gave a now-famous series of lectures at the California Institute of Technology, in which he discussed how single electrons fired at a double slit would, in principle, produce an interference pattern – thereby demonstrating the dual wave–particle nature of matter. Feynman did not think his thought experiments would ever be possible, but over the next few decades, advances in manufacturing techniques gradually brought this prospect closer. Eventually, in the mid-2000s Stefano Frabboni and co-workers in Italy demonstrated interference with electrons passing through slits just 83 nm wide (2007 Am. J. Phys. 75 1053 and 2008 Appl. Phys. Lett. 93 073108).
Using an electron microscope operating at 200 keV, Frabboni and his team were able to reduce the beam current to such low levels that they could predict with a very high probability that no more than one electron was between the source and detector at any given time. But because their detector had various limitations, they couldn’t directly measure interference from single electrons. It was not until 2013 that the first experiments to convincingly demonstrate double-slit interference using single electrons were finally carried out (figure 1).
1 Young’s double-slit experiment with single electrons
If you fire single particles, such as photons or electrons, through two slits labelled 1 and 2, the wavefunctions ϕ1 and ϕ2 along each path describe the probability that they will pass through the slits, with the total wavefunction at the detector being ϕdet = ϕ1 + ϕ2. The probability of detecting a particle is then |ϕdet|² = |ϕ1|² + |ϕ2|² + 2|ϕ1||ϕ2| cos Δξ, where |ϕ1| and |ϕ2| are the amplitudes of the waves and Δξ is their phase difference at the detector. The result is a series of bright and dark bands depending on whether the two wave fronts are in phase (cos Δξ = 1) or out of phase (cos Δξ = –1), meaning either a high or low chance of detecting a particle. But if you close, say, slit 2, then ϕ2 = 0 and you see a distribution of particles due solely to slit 1 (|ϕdet|² = |ϕ1|²). If you close slit 1, then ϕ1 = 0 and the distribution is given by |ϕdet|² = |ϕ2|². You can work out the interference term by measuring the signals from both slits individually, and by then measuring the yield with both slits open.
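The expansion of the detection probability can be checked numerically with complex amplitudes (a minimal sketch; the amplitudes and phases below are arbitrary illustrative values, not measured quantities):

```python
import cmath
import math

# Two illustrative path amplitudes with arbitrary magnitudes and phases
phi1 = 0.8 * cmath.exp(1j * 0.3)
phi2 = 0.5 * cmath.exp(1j * 1.1)

# Probability with both slits open: |phi1 + phi2|^2
p_both = abs(phi1 + phi2) ** 2

# Expanded form: |phi1|^2 + |phi2|^2 + 2|phi1||phi2| cos(delta_xi)
delta_xi = cmath.phase(phi1) - cmath.phase(phi2)
p_expanded = (abs(phi1) ** 2 + abs(phi2) ** 2
              + 2 * abs(phi1) * abs(phi2) * math.cos(delta_xi))

print(p_both, p_expanded)  # the two expressions agree
```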
Working at the University of Nebraska-Lincoln in the US, Roger Bach and co-workers used 62 nm wide slits, through which they fired electrons with a beam energy of just 0.6 keV. This much lower energy, which increased the de Broglie wavelength of the electrons compared with previous experiments, not only produced a wider separation of the interference pattern, but also allowed them to use a channel plate detector that could count single electrons. The experiment also let Bach’s team physically move a mask across the slits so that each could be individually closed, or both could be open.
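To see why the lower beam energy widens the fringe separation, one can compare the de Broglie wavelengths at the two beam energies (a sketch using the standard relativistic relation λ = hc/√(E(E + 2mc²)), with E the kinetic energy; the two energies are those quoted for the Bach and Frabboni experiments):

```python
import math

HC = 1239.84     # eV·nm (Planck constant times speed of light)
MC2 = 0.511e6    # eV (electron rest energy)

def de_broglie_nm(kinetic_ev):
    """Relativistic de Broglie wavelength in nm for an electron
    with the given kinetic energy in eV."""
    return HC / math.sqrt(kinetic_ev * (kinetic_ev + 2 * MC2))

lam_low = de_broglie_nm(600.0)    # 0.6 keV beam (Bach et al.)
lam_high = de_broglie_nm(200e3)   # 200 keV beam (Frabboni et al.)

print(f"0.6 keV: {lam_low*1e3:.1f} pm, 200 keV: {lam_high*1e3:.1f} pm")
# The 0.6 keV wavelength is roughly 20 times longer, and the fringe
# spacing (proportional to wavelength) is correspondingly wider.
```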
In these experiments, Bach’s team reduced the intensity of the incident beam so that only one electron was detected each second, thereby guaranteeing (to greater than 99.9999% probability) that only a single electron was present between the source and the detector at any time. The experiment ran continuously for two hours and, initially, the individual electrons appeared to arrive at random points on the screen. But as more and more electrons were detected, an interference pattern with bright and dark regions gradually emerged (2013 New J. Phys. 15 033018).
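The gradual emergence of the fringes can be mimicked with a toy simulation that detects one “electron” at a time, each sampled from an idealized two-slit intensity pattern (an illustrative sketch, not a model of the actual apparatus; the cosine-squared intensity is the textbook far-field form):

```python
import math
import random

random.seed(1)

def detect_one():
    """Sample a single detection position x in [-1, 1] from an idealized
    two-slit intensity I(x) proportional to cos^2(2*pi*x), by rejection
    sampling."""
    while True:
        x = random.uniform(-1.0, 1.0)
        if random.random() < math.cos(2 * math.pi * x) ** 2:
            return x

# Detect electrons one at a time: each arrival looks random, but the
# accumulated positions trace out bright and dark fringes.
hits = [detect_one() for _ in range(20000)]

near_bright = sum(1 for x in hits if abs(x) < 0.05)        # around a maximum
near_dark = sum(1 for x in hits if abs(x - 0.25) < 0.05)   # around a minimum

print(f"counts near a bright fringe: {near_bright}, near a dark fringe: {near_dark}")
```

Counts pile up around the intensity maxima and stay sparse near the minima, just as in the two-hour accumulation Bach’s team recorded.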
Since each electron was detected before the next was emitted, it clearly could not have influenced future electrons that then passed through the slits. As elegantly stated by Feynman, we therefore have to accept that each electron (and indeed all matter) has both a wave-like nature (to create the interference pattern) and must also be considered as an individual particle (since this is what was detected). It is therefore this double-slit experiment, not Young’s from 1804, that future students at Manchester should be citing when they talk about wave–particle duality.
The new experiments in a single atom
Now if you think that’s about as far as a Young’s double-slit experiment can go, you’d be wrong. In one of those rewarding occasions in science when new discoveries and ideas evolve from seemingly unrelated work, our research group in Manchester recently found an entirely new way to carry out the experiment. The discovery emerged from our studies of the “shape” that atoms adopt when we excite them with laser light and then fire electrons at them. The electrons gain energy as the atoms are de-excited and we catch these scattered electrons at different angles.
We’d known a lot about this “super-elastic” collision process – in fact, we’d studied it for years. But when we used 420.30 nm blue light to excite a particular state in rubidium atoms, known as the 6P state, we were in for a surprise (2019 Phys. Rev. Lett. 122 053204). This time we couldn’t find any electrons from the super-elastic collision process. So why, we wondered, was there no signal?
It turns out that the experiment was producing lots of photoelectrons from the laser beam (we could see these even with the incident electron beam off), but they were all at low energies. In fact, these photoelectrons emerged with four different energies in such large quantities that they drowned out the super-elastic signal we were expecting. The photoelectrons came not only from the 6P state, but also from lower states that the atoms could relax back to, including 0.36 eV electrons kicked out of the 5P state (figure 2a).
But what’s this got to do with the double-slit experiment? Well, this is where our new idea came in. We realized that if we fired a second, infrared laser beam with a wavelength of 780.24 nm at the atoms, this light could not only excite the atom to the 5P state, but also ionize the 6P state, producing photoelectrons with an energy of 0.36 eV. This is exactly the same energy as the photoelectrons created when blue light ionizes rubidium atoms in the 5P state.
There are, in other words, two possible paths that produce photoelectrons at this energy (figure 2b). The laser beams effectively “guide” the photoionization process so it goes either through the 5P state with a wavefunction Ψ1 (equivalent to slit 1 in a conventional Young’s double-slit experiment), or through the 6P state with a wavefunction Ψ2 (equivalent to slit 2), or simultaneously via both states. Rather than measuring the intensity of photons or electrons on a screen, we instead count the number of photoelectrons at different angles, θ, relative to the polarization of the laser beams – what’s known as the differential cross-section, DCS(θ).
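The energy bookkeeping behind these two paths can be checked from the quoted wavelengths (a sketch; the rubidium 5S ionization energy of about 4.18 eV is a standard value not given in the text). Since both paths absorb one blue and one infrared photon in total, they necessarily release photoelectrons of identical energy:

```python
HC = 1239.84     # eV·nm (Planck constant times speed of light)
IP_5S = 4.177    # eV, ionization energy of rubidium from the 5S ground
                 # state (standard literature value, assumed here)

e_blue = HC / 420.30   # blue photon energy, ~2.95 eV
e_ir = HC / 780.24     # infrared photon energy, ~1.59 eV

# Path 1: infrared photon excites 5S -> 5P, blue photon ionizes the 5P state
e_path1 = e_blue - (IP_5S - e_ir)
# Path 2: blue photon excites 5S -> 6P, infrared photon ionizes the 6P state
e_path2 = e_ir - (IP_5S - e_blue)

print(f"path 1: {e_path1:.2f} eV, path 2: {e_path2:.2f} eV")  # both ~0.36 eV
```

Algebraically, both expressions reduce to e_blue + e_ir – IP_5S, which is why the two pathways are indistinguishable at the detector.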
2 Young’s double-slits with a single atom
(a) Our new version of Young’s double-slit experiment doesn’t involve firing particles through slits but uses lasers to excite rubidium atoms in different ways. Shining blue 420.30 nm laser light excites the atom from the 5S to the 6P state (transition indicated by thick blue arrow). The 6P state then relaxes to two other states (4D and 6S) that in turn relax back to a fourth state (5P) – the relaxations shown by dotted arrows. Additional blue photons (also 420.30 nm wavelength) can then ionize these states, releasing photoelectrons at four different energies (represented by narrow blue arrows), including at 0.36 eV. By using a second, infrared laser at 780.24 nm, we can either excite the rubidium atom to the 5P state or produce a photoelectron from the 6P state (red arrows), also at 0.36 eV. (b) If we set our detector to measure only the 0.36 eV electrons, they come from two possible paths – either via the 6P state ionized by the infrared laser, or via the 5P state ionized by the blue laser. The two paths can be turned on or off, just as we can open or close the slits in a conventional double-slit experiment.
By slightly detuning the frequency of one or other of the lasers, we can turn the pathways on or off, just as we can physically open or close the slits in a conventional Young’s double-slit experiment. Detune the blue laser and you excite only the 5P state, which closes path 2 and gives a photoelectron yield of DCS1(θ) ∝ |Ψ1|², where θ is the scattering angle. Detune the infrared laser, and you only excite the 6P state, closing path 1 and giving DCS2(θ) ∝ |Ψ2|². When both lasers are on resonance, both states are excited, and we have to add the wavefunctions to give DCS1+2(θ) ∝ |Ψ1 + Ψ2|².
In the same way as for Young’s experiments, we end up with an interference pattern. The interference term DCSinterf(θ) is, in fact, proportional to 2|Ψ1||Ψ2| cos Δχ, where |Ψ1| and |Ψ2| are the amplitudes along each pathway and Δχ is the relative phase shift between the waves at the detector. We can determine DCSinterf(θ) by taking three sets of measurements: one with both lasers on resonance yielding DCS1+2(θ), another with the blue laser off-resonance producing DCS1(θ), and a third with the infrared laser off resonance producing DCS2(θ).
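This three-measurement procedure can be illustrated with simulated yields (a toy sketch; the angle-dependent amplitudes and phases below are invented purely for illustration and carry no physical meaning):

```python
import cmath
import math

def psi1(theta):
    """Hypothetical path-1 amplitude at detection angle theta (radians)."""
    return 0.7 * math.cos(theta) * cmath.exp(1j * 0.4)

def psi2(theta):
    """Hypothetical path-2 amplitude at detection angle theta (radians)."""
    return 0.5 * math.sin(theta) * cmath.exp(1j * 1.6)

theta = 1.0  # one detection angle as an example

# The three measurements: both paths open, then each path alone
dcs_both = abs(psi1(theta) + psi2(theta)) ** 2   # both lasers on resonance
dcs_1 = abs(psi1(theta)) ** 2                    # blue laser detuned
dcs_2 = abs(psi2(theta)) ** 2                    # infrared laser detuned

# Interference term recovered by subtraction ...
interf = dcs_both - dcs_1 - dcs_2
# ... matches 2|psi1||psi2| cos(delta_chi) computed directly
delta_chi = cmath.phase(psi1(theta)) - cmath.phase(psi2(theta))
direct = 2 * abs(psi1(theta)) * abs(psi2(theta)) * math.cos(delta_chi)

print(interf, direct)  # the two agree
```

Repeating the subtraction at each angle θ yields the full DCSinterf(θ) curve from the three measured data sets.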
Theory versus experiment
One practical challenge with our new double-slit experiments was finding a way to detect photoelectrons with just 0.36 eV of energy, which is some 600,000 times lower than the beam energies used in the earlier electron-microscope studies. We solved this by carefully eliminating magnetic and electric fields in the experiment that would otherwise have influenced the electrons as they emerged from the atoms, and by building detectors that could select and count single electrons at this energy.
So what did our experiments reveal?
3 From idea to experiment
In our equivalent of Young’s double-slit experiment, we use a beam of rubidium atoms emitted from an oven inside a vacuum chamber. We then fire blue and infrared lasers at the rubidium, feeding the beams vertically into the chamber and rotating their polarization through 360° to determine the number of photoelectrons at different angles. This plot shows the measured “differential cross-section”, DCSinterf(θ), and the “relative phase shift”, Δχ = χ1(θ) – χ2(θ), between the two possible ionization pathways that lead to 0.36 eV photoelectrons. If no interference occurred between the pathways, DCSinterf(θ) would be zero, and Δχ would also be zero. The fact that the values are clearly not zero – and agree with theoretical calculations – shows that the photoelectrons have both wave-like and particle-like properties, thereby confirming wave–particle duality.
If there were no interference between the two ionization pathways – as we’d expect from a classical interpretation of the ionization process – then both the interference term and the relative phase shift should be zero at all angles. But the values were not zero (figure 3). The interference term, for example, varied from –0.14 to –0.56, proving that there was significant interference between the two pathways. The average phase shift, meanwhile, was Δχ = 115°, which is also far from zero. This clearly demonstrated that the individual electrons emerging from each atom must have a wave-like nature, until they arrive at the detector as real particles. In fact, our results were in excellent agreement with calculations carried out by Jonas Wätzel and Jamal Berakdar – two experts in quantum calculations of photo-ionization processes at the Martin-Luther University in Halle, Germany.
Looking to the future, we are now extending and refining our models to study interference in other atoms, for other states and under different regimes. Recently, for example, these models have been applied to excitation using femtosecond lasers (2019 Phys. Rev. A 100 013407). Further theoretical studies show that the interference terms can be dramatically enhanced by choosing atomic states that are close in energy, and indeed there is no reason why the initial state needs to be the ground state – we can equally explore what happens when the process starts with an excited atom. This could help us understand the atmospheres of stars, where the constituent atoms are often in excited states. Further possibilities lie in two-path excitation to highly excited Rydberg atoms, in which the electrons are so far from the nucleus that the atoms are as big as a living cell – and could therefore be used in quantum computers.
The possibilities are limited only by our imagination, which is what makes physics such an exciting and rewarding endeavour.