A new model that predicts the charging timescales of supercapacitors much more accurately than had been previously possible has been unveiled by researchers in the Netherlands and China. Cheng Lian and colleagues at Utrecht University and the East China University of Science and Technology built their model by describing the complex porous structures within a supercapacitor as stacks of thin electrode plates. Their work could improve our ability to predict the charging characteristics of supercapacitor energy storage systems used in a wide range of applications including electric vehicles and solar-powered street lighting.
Supercapacitors are used in a variety of applications that require relatively short, intense bursts of electrical energy. They fall between conventional capacitors and batteries in terms of charge/discharge speeds and energy capacity. Supercapacitors store far more charge than conventional dielectric capacitors by using porous electrodes, which can have surface areas as large as several square kilometres. A significant downside of these nanopores is that supercapacitors take far longer to charge than their conventional cousins.
There is currently a poor understanding of how nanopore structures could be optimized to reduce charging times. One approach has been to develop macroscopic models that fit parameters to experimental measurements of the charging process. The problem with this approach is that there seems to be very little correspondence between the parameters and the underlying physics of a supercapacitor.
Huge disagreement
Researchers have also carried out molecular dynamics simulations, which provide insights into the charging mechanisms of up to a few supercapacitor nanopores. When used to predict the charging times of real devices, however, the results underestimate charging time by a whopping factor of 10¹².
In their study, Lian’s team has taken a completely new approach that approximates electrodes as stacks of flat, fully permeable, and infinitesimally thin charged plates. The gaps between the plates are on par with the diameter of a typical nanopore and the researchers found that their new model could reliably reproduce characteristics of supercapacitors on both micron and nanometre scales.
Lian and colleagues used their model to explore the characteristic timescales of charging with both high and low voltages, which provided new insights into the physical mechanisms involved in charging. Given the simplicity of the model, the charging timescales it predicted agreed remarkably well with experimental values; differing by factors of just two or three, instead of many orders of magnitude. The team now hopes that their model could soon enable researchers and engineers in wide-ranging fields to design safer and more effective devices for energy storage.
Look carefully at one of those classroom posters that shows the sweep of the electromagnetic spectrum, from gamma rays to radio waves, and you’ll find a small patch squeezed in between the infrared and microwave. Called the far-infrared, submillimetre or terahertz region, it is nominally defined as spanning wavelengths from 30 μm (10 THz) in the mid-infrared, to 1–3 mm (0.1–0.3 THz) in the microwave domain. Although hardly prominent in these educational posters, this region is a rich research area for physics on and beyond the Earth.
While there are microwave and infrared sources that can produce thousands of watts of power at those frequencies, there is a lack of sources that work well across the terahertz range, which is why it is often referred to as the “terahertz gap”. Hot blackbodies only emit microwatts at terahertz frequencies, whereas microwave technology is not easily pushed below millimetre wavelengths, so standard spectroscopic methods do not apply. Yet the gap is well worth exploring. It is the right range to probe electronic, lattice and quantum properties in condensed matter, and to examine massive molecules. It supports applications in biomedicine, in security systems and in the study of artworks. Outside the lab, terahertz radiation is relevant to the cosmic microwave background (CMB) and other astrophysical phenomena, and is helping in the search for the origins of life in space.
Making do with microwatts and milliwatts
Fortunately, when scientists see a gap in understanding, they dive right in. Since the 1960s researchers have found ways to obtain high-quality terahertz spectra by using a Fourier-transform spectrometer. In this device, a hot source sends microwatts of terahertz power to a solid, liquid or gaseous sample, which reflects or transmits the beam. Next, the beam is split in two and each part is reflected from a mirror. The beams are then recombined, and a detector measures the resulting interference signal as one mirror moves relative to the other. Fourier analysis of this interferogram yields the frequency spectrum specific to the sample with a higher signal-to-noise ratio than in a conventional spectrometer.
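To make that last step concrete, here is a minimal numerical sketch of how Fourier analysis turns an interferogram back into a spectrum. It is purely illustrative, not the processing pipeline of any particular instrument: a toy terahertz spectrum with two absorption lines is converted into an interferogram as a function of optical path difference, and a cosine transform recovers it.

```python
import numpy as np

# Toy source spectrum: a smooth envelope with two absorption dips, expressed
# against wavenumber (cm^-1); 1 THz is roughly 33.4 cm^-1, so 3-300 cm^-1
# spans approximately the terahertz range.
wavenumber = np.linspace(0, 300, 2048)              # cm^-1
spectrum = np.exp(-((wavenumber - 100) / 60)**2)    # broad source envelope
for line in (70, 140):                              # two absorption lines
    spectrum *= 1 - 0.6 * np.exp(-((wavenumber - line) / 2)**2)

# Interferogram: detector intensity as a function of the optical path
# difference delta (in cm) between the two mirror arms.
delta = np.linspace(0, 1.0, 2048)                   # cm of path difference
interferogram = spectrum @ np.cos(2 * np.pi * np.outer(wavenumber, delta))

# A cosine (Fourier) transform of the interferogram recovers the spectrum,
# up to normalization and the finite-path-length resolution limit.
recovered = interferogram @ np.cos(2 * np.pi * np.outer(delta, wavenumber))
recovered /= recovered.max()

print("recovered spectrum peaks near %.0f cm^-1" % wavenumber[np.argmax(recovered)])
```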
Another common method of creating a terahertz source – which provides greater power – is by optically pumping the vibrational states of a molecular organic medium, such as methanol (CH3OH), with a CO2 laser. This excites molecular rotational sub-levels that emit discrete terahertz laser lines. Different media supply hundreds of lines at milliwatt powers, which yield precise data over broad ranges.
Since the 1960s and 1970s, in my lab and others, such terahertz Fourier spectrometers and pumped lasers have probed semiconductors and their nanostructures, superconductors, inhomogeneous materials, water in liquid and vapour form, and biomolecules. But more recently, lab research has been enhanced by powerful new terahertz sources based on semiconductor technology (see June 2016), and on synchrotrons and free electron lasers (see box, below). Indeed, according to a 2014 bibliometric study by Roger Lewis of the University of Wollongong in Australia, the number of papers published containing “terahertz” in the abstract, title or keywords grew exponentially between 1975 and 2013 (J. Phys. D: Appl. Phys. 47 374001).
Power for the terahertz gap
New source: The NSLS-II at Brookhaven National Laboratory in New York has terahertz capability. (Courtesy: Brookhaven National Laboratory)
Spacecraft-based astrophysical and cosmological research at terahertz frequencies – such as measuring the fluctuations in the cosmic microwave background – is made possible thanks to sensitive detectors that have been cryogenically cooled to reduce noise, as in the COBE and Planck projects. Sensitive detectors are also important for terahertz Fourier-transform spectroscopy in the lab that uses microwatts from a hot source.
Another approach for terahertz spectroscopy is to develop more powerful sources. Optically pumped lasers (see main text) generate milliwatts, which is ample for many uses, but they require a CO2 laser and, unlike a blackbody, do not offer continuous frequency coverage. Other powerful sources have different limitations. Quantum cascade lasers (QCLs, see June 2016 pp28–31) are semiconductor nanostructures that produce higher powers, but at fixed frequencies and under cryogenic cooling. One recent paper (L H Li et al. 2017 Electronics Letters 53 799) describes a QCL that emits 1.8 W and 2.4 W at 4.4 THz, cooled to 77 K and 10 K, respectively.
High powers for demanding applications such as terahertz microscopy are also available at central facilities. One type of source generates powerful terahertz waves in a free-electron laser, where a beam of relativistic electrons moves past an arrangement of magnets with alternating poles. This gives the electrons a transverse wiggling motion, which produces monochromatic photons whose frequency can be tuned by changing the electron velocity or the magnetic field, and that are made coherent by confinement in a cavity. The free-electron laser at the University of California, Santa Barbara, for example, generates kilowatts from 0.1 THz to 4.8 THz. Another unit at the Budker Institute of Nuclear Physics in Novosibirsk, Russia, operates from 1.2 to 8.2 THz.
High-power terahertz radiation is also generated by electrons circulating in a synchrotron storage ring. The National Synchrotron Light Source at the Brookhaven National Laboratory in New York maintains a terahertz beamline that provides 100 mW of broadband power at frequencies above 0.15 THz, covering the terahertz range and beyond and acting as a source for a Fourier-transform spectrometer. A beamline at another synchrotron, the Canadian Light Source at the University of Saskatchewan in Saskatoon, also covers the terahertz range at high brightness as a source for Fourier-transform spectrometry and terahertz microscopy.
With judicious choice of detectors, sources or both, scientists have found footholds in the terahertz gap and are performing research of the highest quality. But the ideal terahertz source – producing milliwatts or more, tunable over the entire range, compact and operating at room temperature – remains elusive. This is a major stumbling block for applications in security systems and biomedicine.
Terahertz physics takes to the skies
Back in 1964, terahertz physics also received a powerful push out of the lab and into the universe. That was when Arno Penzias and Robert Wilson, working with an antenna at Bell Labs designed for satellite communications, unexpectedly found a constant signal at the microwave wavelength of 7.35 cm that seemed evenly distributed across the heavens.
The definitive measurement of this unknown radiation was made aboard NASA’s Cosmic Background Explorer (COBE) satellite, which launched in 1989. Using a Fourier-transform spectrometer, COBE measured a spectrum in near-perfect agreement with the emission curve of a blackbody at 2.725 ± 0.002 K. With peak intensity at 1.07 mm, this spectrum spans the terahertz range (figure 1), and Penzias and Wilson had picked up the tail end of the curve. Separately, COBE also compared millimetre-wave radiation from different sky directions and found that the CMB is slightly anisotropic, representing temperature fluctuations of 1 part in 10⁵ (see February 2020).
1 Close curve
The cosmic microwave background data from COBE, which shows a remarkably close fit to the emission curve of a blackbody at 2.7 K, actually enters the terahertz range.
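As a quick sanity check on the numbers above, Wien’s displacement law puts the peak of a 2.725 K blackbody at roughly the quoted millimetre wavelength. The sketch below uses standard textbook constants and is illustrative only, not a fit to the COBE data.

```python
# Back-of-the-envelope check of the CMB peak using Wien's displacement law,
# lambda_max = b / T, with b the standard Wien constant.
b = 2.8978e-3      # Wien displacement constant, m*K
T = 2.725          # CMB temperature, K
c = 2.998e8        # speed of light, m/s

lambda_max = b / T                 # peak wavelength of the blackbody curve
nu = c / lambda_max                # corresponding frequency

print(f"peak wavelength ~ {lambda_max*1e3:.2f} mm")    # ~1.06 mm, close to the 1.07 mm quoted
print(f"corresponding frequency ~ {nu/1e12:.2f} THz")  # ~0.28 THz, inside the terahertz range
```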
The blackbody data matched a 1965 prediction by the cosmologists Robert Dicke, Philip Peebles and colleagues that as the universe cooled after the Big Bang, it would be filled with residual blackbody radiation at ~3 K. This agreement provided strong support for the Big Bang theory and the results gave deep insights into the history of the universe. Indeed, when George Smoot and John Mather received the 2006 Nobel Prize for Physics for their work on COBE, the Nobel Committee noted that COBE can be “regarded as the starting point for cosmology as a precision science”.
The CMB temperature fluctuations are also significant; they represent density variations in the hydrogen making up the universe 380,000 years after the Big Bang. These evolved into today’s cosmic structure, with filaments of galaxies surrounded by enormous voids. After COBE, the fluctuations were studied from space by NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) launched in 2001, and the Planck spacecraft, launched by the European Space Agency in 2009. Advances in detection methods and space technology improved each subsequent mission. Planck sensed weak signals with heat detectors cooled to 0.1 K, giving attowatt noise levels, and covered the widest frequency range at nine values from 0.03 THz (10 mm) to 0.857 THz (0.35 mm). The spacecraft measured cosmic temperature differences of 5 μK or less at angular resolutions down to 4 arcmin, compared to 7° for COBE and 0.5° for WMAP (figure 2).
2 All in the detail
Successively higher resolution of temperature variations in the early universe as seen in the terahertz range by the COBE, WMAP and Planck spacecraft.
The terahertz data from Planck was analysed with the so-called lambda cold dark matter model (ΛCDM), the cosmological “standard model”. ΛCDM assumes that physics, including general relativity, is the same throughout the universe; that the universe was initially hot and dense and has always been expanding; and that it includes dark energy, dark matter, ordinary matter, photons and neutrinos. In 2018 the final results from Planck showed that the universe is 13.8 billion years old; it contains 31.5% matter (4.9% normal matter and 26.6% dark matter) and 68.5% dark energy; it most probably contains only three species of neutrinos, whose masses sum to less than 0.12 eV; and it is expanding with a Hubble constant H0 of 67.4 km/s/Mpc.
These results provide our most accurate and comprehensive picture of the universe to date and terahertz detection technology played an important role. However, with an uncertainty of only 1%, the Planck value of H0 is at variance with the value 73 km/s/Mpc derived from other supposedly reliable astrophysical data – a difference that perhaps points to new physics.
Seeing a black hole with millimetre waves
Another terahertz astrophysical project required far higher resolution than Planck attained. In April 2019 the international Event Horizon Telescope (EHT) collaboration presented the first ever image of a black hole – the supermassive black hole at the centre of the elliptical galaxy M87, 55 million light-years away.
The aim had been to study the region near the event horizon by observing the black hole’s “shadow”, a dark area within the glow emitted by hot accretion material flowing into the black hole. The shadow, caused by the gravitational bending and capture of light near the event horizon, has a diameter about five times the Schwarzschild radius (the radius of the black hole) as predicted by general relativity. It would subtend only a tiny angle of ~40 μarcsec.
Terahertz photons delineate a black hole because they come from deep within its gravitational well. Earlier studies of M87 at wavelengths from 1.3 mm to 7 mm had shown signs of a central 40 μarcsec structure but could not image it. These results did, however, show that the shorter the millimetre wavelength, the more closely the photons represented the actual site of the black hole within the bright region. But no individual radio telescope installation, single-dish or multi-dish, could provide the required angular resolution at millimetre wavelengths.
The answer was for the EHT to link eight separate installations around the Earth, including the Atacama Large Millimeter/submillimeter Array (ALMA) in the Chilean desert, the South Pole Telescope (SPT) in Antarctica, and the IRAM 30-metre telescope in Spain (figure 3). The resulting virtual telescope gave an angular resolution of the order of the 1.3 mm wavelength divided by the Earth’s diameter. After an intricate process of co-ordinating the telescopes and analysing petabytes of data, the composite network produced a striking image at a resolution of 20 μarcsec. It clearly shows the dark shadow within the bright emission region 42 μarcsec across, which itself displays details. Analysis of the data gave a central mass of 6.5 × 10⁹ solar masses, definitively establishing the existence of a supermassive black hole in M87 and supporting the supposition that black holes of this size lie at the centres of galaxies.
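The angular scales quoted above can be checked from first principles. The sketch below estimates both the λ/D resolution of an Earth-sized array observing at 1.3 mm and the angular size of the M87 shadow; the factor of 5.2 Schwarzschild radii used for the shadow diameter is the general-relativistic value behind the “about five times” figure mentioned earlier, and all constants are standard textbook values.

```python
import math

# Order-of-magnitude check of the two angular scales quoted in the text.
G      = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c      = 2.998e8          # speed of light, m/s
M_sun  = 1.989e30         # solar mass, kg
ly     = 9.461e15         # light-year, m
to_uas = 180 / math.pi * 3600 * 1e6   # radians to microarcseconds

# (1) Diffraction-limited resolution ~ wavelength / baseline for an
#     Earth-sized virtual telescope at 1.3 mm.
wavelength = 1.3e-3                  # m
baseline   = 1.274e7                 # Earth's diameter, m
print(f"resolution ~ {wavelength / baseline * to_uas:.0f} uas")   # prints ~21 uas, i.e. the ~20 uas quoted

# (2) Angular size of a shadow ~5 Schwarzschild radii across for a
#     6.5e9 solar-mass black hole at 55 million light-years.
M        = 6.5e9 * M_sun
R_s      = 2 * G * M / c**2          # Schwarzschild radius
shadow   = 5.2 * R_s                 # shadow diameter, general-relativistic factor ~5.2
distance = 55e6 * ly
print(f"shadow ~ {shadow / distance * to_uas:.0f} uas")           # prints ~40 uas, matching the prediction
```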
3 Worldwide telescope
The system of telescopes in the Event Horizon Telescope array, forming an Earth-sized virtual telescope to achieve ultrahigh angular resolution of the order of 1.3 mm wavelength divided by the Earth’s diameter. Green dots indicate future sites.
Seeking the molecules of life
Besides research in space to explore the origin and development of the universe, terahertz methods can also examine a different set of fundamental questions. How did life begin on Earth? Was it a unique process, meaning we are alone in the universe? Or did it seed life elsewhere?
One possible answer to these big questions is that the complex molecules of life, or their precursors, originated in the interstellar medium, and came to Earth and other planets via meteorites. According to what we know about earthly life, this means finding organic molecules in space that contain carbon along with hydrogen, oxygen and nitrogen. Some of these molecules – including the amino acids necessary to build proteins – have already been found in meteorites that landed on Earth, and now terahertz astronomical spectroscopy is being used to seek such biotic or pre-biotic molecules in space.
The universe seems to support active carbon-based chemical processes – indeed, the first molecule found in space was CH in 1937, and organic molecules still dominate the more than 200 species found since by ultraviolet to centimetre-wavelength spectroscopy. These results mostly come from radio astronomy at frequencies below 2 THz, where transitions between the energy levels associated with molecular rotations provide many identifying spectral features in emission or absorption. This is the same mechanism that in the lab generates terahertz laser lines from compounds like methanol, CH3OH (which has also been found in space).
The known astronomical organic molecules contain up to 13 atoms (excluding the non-biotic fullerenes C60 and C70), a level of complexity associated with biological function. In 2003 the simple amino acid glycine (NH2CH2COOH) was reportedly detected in space, but later measurements have not confirmed this. Other relevant findings are the sugar-related molecule glycolaldehyde (CH2OHCHO), and formamide (NH2CHO), a possible biotic precursor with the appropriate properties to form sugars and amino acids.
The high angular resolution offered by arrays like ALMA aids the search for complexity beyond the straight-chain carbon backbone found in most big organic molecules in space. In 2014 a team under Arnaud Belloche at the Max Planck Institute for Radio Astronomy in Bonn, Germany, used ALMA at 3 mm wavelength to find the first space molecule with a branched carbon chain, iso-propyl cyanide (i-C3H7CN). This feature is characteristic of the amino acids that have been seen in meteorites on Earth. The molecule was observed in the giant star-forming gas cloud Sagittarius B2 in our galaxy, suggesting that active areas in space tend to make complex compounds. Deeper physical and chemical understanding of how molecules form in diverse places, from interstellar and circumstellar regions to protoplanetary discs, will further focus the hunt for biotic molecules.
Lab measurements of complex molecular spectra are essential as well to guide astronomical research and interpret its results. Susanna Widicus Weaver, for example, is a chemist at Emory University who works on improving terahertz Fourier-transform spectroscopy and other methods, for these purposes and to study new areas in interstellar chemistry, such as molecular reactions with ice. These approaches, she wrote in a recent review article (Ann. Rev. Astron. Astrophys. 57 79), are “poised to fill the terahertz gap…offering analytical techniques that rival those used in the microwave and infrared regions of the electromagnetic spectrum”.
Researchers have extended microwave and infrared methods on and off the Earth to make the terahertz gap navigable and carry out innovative studies in both locales. This breadth illustrates the interdisciplinary nature of terahertz science as it explores the beginnings of the universe, the properties of matter in space and on Earth, and the fascinating, still-mysterious intersection where non-living molecules make the leap into life.
It should be possible to create materials that conduct both electric current and exciton excitation energy with 100% efficiency and at relatively high temperatures – according to theoretical chemists in the US. They have calculated that such materials would exist in a single quantum state but would demonstrate properties of two different condensates – one made from excitons and the other made from pairs of fermions.
Bose–Einstein condensates are made by cooling a gas of particles sufficiently that the de Broglie wavelengths of individual particles are comparable to the spacing between particles – allowing the system to condense into a single quantum ground state. The particles must be bosons, which have integer spin and can therefore all occupy the same quantum state simultaneously. However, condensates can also be made from bound-pairs of half-integer-spin fermions because pairs of fermions have integer spin and are therefore bosons.
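The “de Broglie wavelengths comparable to the spacing” criterion can be made quantitative with the standard textbook condition nλ³ ≈ 2.6, where λ = h/√(2πmk_BT) is the thermal de Broglie wavelength and n the number density. The sketch below applies it to a dilute gas of rubidium atoms; the density is an illustrative value typical of trapped-atom experiments, not a number taken from the work described here.

```python
import math

# Textbook condensation criterion: condensation sets in roughly when
# n * lambda_dB^3 ~ 2.612, with lambda_dB = h / sqrt(2*pi*m*k_B*T).
# The rubidium mass and density below are illustrative only.
h   = 6.626e-34          # Planck constant, J s
k_B = 1.381e-23          # Boltzmann constant, J/K
m   = 87 * 1.661e-27     # mass of a rubidium-87 atom, kg
n   = 1e20               # number density, m^-3 (typical dilute trapped gas)

# Solve n * lambda^3 = 2.612 for the critical wavelength, then invert the
# thermal de Broglie relation to get the critical temperature.
lambda_c = (2.612 / n) ** (1 / 3)
T_c = h**2 / (2 * math.pi * m * k_B * lambda_c**2)

print(f"de Broglie wavelength at condensation ~ {lambda_c*1e9:.0f} nm")
print(f"critical temperature ~ {T_c*1e6:.2f} microkelvin")  # a few hundred nanokelvin for these values
```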
In a superconductor, bound pairs of electrons (fermions) create a superfluid that allows electrical current to flow through the material without resistance. These “Cooper pairs” have a low binding energy, which means they are easily destroyed by thermal energy. Above a relatively low critical temperature, the pairs break apart and the material becomes a normal conductor.
Excited electrons
One possible way to boost the critical temperature of a condensate is to make it from excitons (bosons), which are electrons bound to holes. An exciton is created when an electron is excited from the valence band of a material – leaving behind the hole. A condensate of excitons can therefore carry this excitation energy through a material without resistance. Unlike Cooper pairs, however, excitons do not carry electrical charge. Excitons are more tightly bound than Cooper pairs, meaning that such condensates could persist at higher temperatures than superconductors. However, because electrons and holes naturally annihilate very quickly, exciton condensates are hard to make.
Exciton condensates can be generated by placing the electrons in an optical trap, or by using twin layers of a material such as a semiconductor or graphene to keep the electrons and holes apart. Exciton condensates can also co-exist alongside fermion-pair condensates, where they allow Cooper pairs to exist at higher temperatures. Two years ago, for example, physicists at Royal Holloway, University of London, and the University of Southampton in the UK combined a superconducting ring with a semiconductor microcavity.
This latest research was done by LeeAnn Sager, Shiva Safaei and David Mazziotti at the University of Chicago. Mazziotti points out that the properties of the two types of condensate remain distinct from one another in such systems. The trio investigated whether it is theoretically possible to create a material that displays both sets of properties together. Such a material, they say, might be able to conduct both electricity and excitation energy with complete efficiency.
“Large family of wave functions”
The researchers first used a computer model to simulate the behaviour of a four-particle fermionic system, finding that it would indeed exhibit these dual properties. Lacking the processing power to scale this system up, they then calculated what would happen when entangling the quantum wave functions of a superconductor and an exciton condensate containing large numbers of particles. Doing so, says Mazziotti, they showed that there should be “a pretty large family of wave functions that combine these properties and in principle exist in the macroscopic world”.
Reporting their results in Physical Review B, the researchers say that this single quantum state, which they call a “fermion-exciton condensate”, combines the properties of the individual condensates “in a highly nontrivial manner”. They explain that the properties of each condensate are somewhat weaker than they would be if it were created in isolation. However, this compromise diminishes as the number of electrons in the system goes up.
Mazziotti says that the group is now working with experimentalists to create such a material in the lab. Rather than using two semiconductor layers to create a purely excitonic condensate, he says that the most obvious candidate for a fermion-exciton condensate would be a pair of superconducting layers – although at this stage he does not know what type of superconductor they would use. “This would be the shake-and-bake recipe for materials that have these dual properties,” he quips.
However, Mazziotti is under no illusion that this is an easy project. One challenge, he says, will be handling the different binding energies of the Cooper pairs and excitons. Sager adds that it will be tricky to bring the layers close enough to create the bound pairs but not so close that electrons can tunnel from one layer to the other. But if those hurdles can be overcome then applications beckon, says Mazziotti. One possible use, he suggests, might be in medical imaging – with visible light propagating without loss, thereby preserving resolution.
Peter Abbamonte of the University of Illinois, who was not involved in the research, feels “some luck would certainly be needed” to realise such a condensate in the lab. But he reckons that the theoretical result “makes a compelling case” for trying to construct exciton condensate-like structures from superconducting constituents.
Recent work has suggested that prostate tumours with high nerve densities are more likely to grow and spread than those with low nerve densities. Now, a team in China and the US has shown that such high-risk cases can be identified using a combination of MRI, magnetic particle imaging (MPI) and functionalized iron-oxide nanoparticles. In experiments with mice, the researchers also used the same nanoparticles to deliver a drug that blocks nerve function, slowing the spread of prostate cancer and improving the animals’ survival rate (Science Advances 10.1126/sciadv.aax6040).
Tumour development and proliferation is a complex process involving multiple tissue types and structures. Angiogenesis – the formation of blood-vessel networks – has long been recognized as a vital component, prompting the formulation of antiangiogenic drugs intended to control tumour growth. Only in the last decade, with studies of prostate cancer progression specifically, has the nervous system emerged as a similarly important part of the process.
Wenting Shang of the Chinese Academy of Sciences (CAS) – who led the research with Huijuan You of CAS and Huazhong University of Science and Technology – likens the development of cancer to the construction of a new building. “When a new building is built, you need to set up water pipes and wires for water to flow smoothly and the lights to blaze,” says Shang. “Angiogenesis can be seen as cancer cells building water pipes; a dense network of nerves can be seen as cancer cells laying wires.”
Spotting tumours that have laid down such a network of nerves could help determine how dangerous a given case of prostate cancer is likely to be. This would let clinicians tailor the scale of the intervention to suit the risk, avoiding overtreatment of less aggressive tumours.
Unfortunately, nerve density is a difficult tissue property to measure using typical imaging techniques. For example, while MRI – the method of choice for prostate-cancer imaging – can usually delineate a tumour clearly enough, those that are dense with nerves look very similar to those with undeveloped nervous systems.
To solve this problem, Shang, You and colleagues developed a contrast agent that targets nervous tissue specifically. The team started with nanoparticles of iron oxide, which have already found use in both MRI and MPI, and joined them to the nerve-binding peptide NP41.
Injected into the bloodstream of a mouse, the contrast agent disperses through the animal’s entire circulatory system but is quickly metabolized and removed. Because of the enhanced permeability and retention effect in the tumour, however, the nanoparticles accumulate in the cancerous tissue, where they bind preferentially to proteins around the nerve fibres. The researchers looked for this effect in mice with prostate tumours that had either been allowed to grow without interference (yielding high nerve densities) or from which the nerves had been surgically or chemically compromised (yielding low nerve densities).
Twenty-four hours after injection of the contrast agent, tumours with high nerve density showed up clearly on MRI scans – and even more so using MPI. In low-nerve-density tumours, in contrast, the nanoparticles were virtually undetectable with MRI but produced a faint signal using MPI. The researchers think that MPI therefore offers an ideal method to visualize nerve density in prostate tumours.
Given the importance of dense nerve networks for tumour growth and proliferation, the team proposed that interventions that not only highlight but also target the nerves might reduce a cancer’s aggressiveness. To test this, they added another component to the functionalized nanoparticles – a beta-blocker called propranolol, which affects nerve function.
Mice injected with the propranolol-conjugated nanoparticles tended to survive for longer: 45 days after the treatment, 83% of the propranolol-treated group were still alive, compared with 40–50% of those given either propranolol-free nanoparticles or propranolol alone.
The treatment also seems to have been without significant side effects. Despite the nanoparticles entering each mouse’s general circulatory system, they tend to deliver their propranolol cargo only in the low-pH microenvironment of the tumour. Combined with the preferential accumulation of nanoparticles in the tumour, this means that the mouse’s wider nerve network is relatively unaffected.
The researchers think that the procedure could produce similar results in solid tumours elsewhere in the body, as the relationship between cancer propagation and nerve networks is probably not prostate-specific. There is still much more work to be done before it is ready for the clinic, however.
“Firstly, MPI technology is still in the pre-clinical stage,” says Shang. “Secondly, our probes are not yet mature for clinical use, and still need to be optimized. We plan to conduct more comprehensive toxicological studies of our probes in the next stage.”
Anyone familiar with quantum mechanics knows that the act of measurement forces quantum systems into definite classical states. But new research shows that some measurements don’t destroy all quantum information in the process. It also reveals that measurements are not instantaneous, but instead gradually convert superposition states into classical ones.
The idea that all superposition is destroyed when a measurement is made was an underlying assumption of quantum mechanics as formulated by John von Neumann and others in the 1930s. Two decades later, however, Gerhart Lüders theorized that certain “ideal” measurements should only collapse superpositions of the specific states being probed, leaving others intact. In this way, he argued, a series of such measurements should preserve quantum coherence.
In the latest work, Markus Hennrich and colleagues at Stockholm University, Sweden, together with researchers at the universities of Siegen in Germany and Seville in Spain, performed an ideal measurement involving a single ion of strontium. As they report in Physical Review Letters, they began by using a laser to place the ion in a superposition of two states (out of a possible three), with each state corresponding to a different energy level of the ion’s outermost electron. They then used a short pulse from another laser to excite the ion from only one of the three states, causing it to fluoresce – an ideal measurement according to Lüders’ criteria.
Indirect detection
In this measurement, a single photon is emitted in a random direction, making it difficult to detect directly. Instead, Hennrich and colleagues carried out what is known as process tomography. This involves using laser pulses that reveal, for every possible combination of superposition states, whether the superposition has been destroyed or preserved.
Repeating this process many times over, the researchers found that the excitation and emission destroyed all the superpositions related to the state being probed. The other superpositions, however, remained intact. According to Hennrich, this shows that he and his colleagues had indeed carried out an ideal measurement. What’s more, the fact that they did not need to detect the emitted photons shows that the measurement process does not depend on the presence of an observer. “It is already happening as a result of one fluorescence photon being emitted into the environment,” he says.
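Lüders’ prescription can be illustrated with a toy three-level calculation. This is a generic textbook example, not the group’s own analysis: an ideal two-outcome measurement that asks only “is the ion in the probed state?” removes the coherences involving that state but leaves the superposition between the other two levels untouched.

```python
import numpy as np

# Toy illustration of Lueders' rule for an ideal measurement on a qutrit.
# The measurement has two outcomes: "ion in state |0> (fluoresces)" with
# projector P0, or "not in |0>" with projector P12 = 1 - P0.
ket = lambda i: np.eye(3)[:, [i]]          # basis column vectors |0>, |1>, |2>

# Start in an equal superposition of all three levels (pure-state density matrix).
psi = (ket(0) + ket(1) + ket(2)) / np.sqrt(3)
rho = psi @ psi.conj().T

P0  = ket(0) @ ket(0).conj().T             # projector onto the probed state
P12 = np.eye(3) - P0                       # projector onto the other two states

# Non-selective Lueders update: sum over the two possible outcomes.
rho_after = P0 @ rho @ P0 + P12 @ rho @ P12

print(np.round(rho_after, 3))
# The coherences <0|rho|1> and <0|rho|2> are now zero, but <1|rho|2> is
# unchanged: the superposition between the two unprobed states survives.
```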
Weak measurement
The group then studied the dynamics of the measurement process by varying the power of the laser used to excite the ion. The idea was to reduce the power such that the ion was no longer guaranteed to fluoresce, instead doing so only a fraction of the time. Because fluorescence is less probable at lower powers, Hennrich explains that these weak, or imperfect, measurements would be equivalent to intermediate stages in the measurement process – in other words, “snapshots” of that process.
By carrying out tomography at these varying power levels, Hennrich and co-workers showed that the measurement process causes the superposition to collapse gradually (although the whole process is over in about a millionth of a second). They found that the degree of superposition between the ion’s different states matched that predicted by Lüders’ model 94% of the time.
Proving that Lüders was right about ideal measurements will not come as a surprise to other physicists, says Hennrich. Indeed, he points out that in 2016 Arkady Fedorov and colleagues at the University of Queensland, Australia, showed that ideal measurements could be made in a three-level superconducting qubit placed in a microwave cavity. But that system, he adds, was somewhat artificial. “What we have shown is that you can realize a Lüders process through a natural measurement,” he says.
Fedorov praises the European researchers for carrying out “a nice physical implementation” of a quantum measurement, pointing out that unlike his group they studied both strong and weak versions. But he feels that the distinction between natural and artificial processes is not very significant. If anything, he reckons, using an artificial system is more demanding given the need “to engineer a particular regime”. The choice, he says, “is a matter of taste”.
Error correction
As for possible applications, Hennrich says that the latest work might be used to improve error correction in quantum computers, given that weak measurements could in principle allow errors to be detected in quantum states without destroying those states in the process.
The researchers also want to investigate the possibility of more complex ideal measurements, in which the measurement process affects multiple states, rather than just one. “Whether such processes exist as natural processes and can be implemented with a fidelity comparable to our experiment is an open question,” they write.
“Smart” textiles are a hot topic in materials science right now, with researchers in various organizations striving to combine light-emitting displays with flexible substrates. One approach is to sew diodes, wires, and optical fibres into textiles, but the resulting garments lack the soft, stretchy quality of their non-luminous counterparts. (They’re hard to wash, too.) The main alternative is to build thin-film light-emitting devices directly into the fabric, but the porous nature of textiles makes such structures hard to manufacture.
Now, however, scientists in Canada have found a truly fabulous solution: gold-coated tights, or pantyhose as they’re known in North America. Yunyun Wu, a PhD student in Tricia Carmichael’s materials-chemistry group at the University of Windsor, was out shopping for fabrics for her research when she realized that sheer fabrics would make a great platform for the transparent conductor in light-emitting devices. From there, Carmichael says, a “second lightbulb moment” led the group to choose pantyhose as “an ideal material” upon which to build their electrodes.
The researchers employed a metal-deposition technique called electroless nickel-immersion gold metallization to coat their pantyhose with gold film. Afterwards, they used the still-stretchy material to create light-emitting textiles emblazoned with a smiley-face emoji and a digital-clock-like display. The next step, they say, is to develop correspondingly flexible energy-storage components that can keep their 10-denier light-emitters going strong until the wearer decides to switch them off.
Love/hate relationship
“My observation is that about half the scientific community loves the h-index and half hates it, and the h-index of the scientist itself is a great predictor of whether s/he belongs to the first or the second group.” That is the wry conclusion of the physicist Jorge Hirsch, who invented the h-index in the early 2000s and appears to have fallen into the latter camp.
The index attempts to quantify the academic output of a scientist in terms of number of papers published and the number of times those papers are cited by others. As Hirsch explains in the essay “Superconductivity, what the H? The emperor has no clothes”, “If your h-index is 25, you have written 25 papers that each have 25 or more citations, the rest of your papers have fewer than 25 citations each”.
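For readers who want the definition in concrete form, here is a minimal sketch of how an h-index is computed from a list of per-paper citation counts. It is a standard calculation, independent of any particular citation database.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    # Sort citation counts in descending order, then walk down the list:
    # the h-index is the last rank at which the count is still >= the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example matching Hirsch's description: 25 papers with >= 25 citations each,
# plus a few papers with fewer than 25 citations, gives h = 25.
papers = [30] * 25 + [10, 5, 2]
print(h_index(papers))   # prints 25
```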
While Hirsch believes that his index provides an “objective measure of scientific achievement,” he concedes that there have been some unintended negative consequences of its widespread use. One problem, according to Hirsch, is that it incentivizes a journal referee to approve a paper that cites the work of the referee – because doing so would boost the referee’s own h-index.
This bias, says Hirsch, could help explain why his theory regarding the role of holes in superconductivity never got going after he first proposed it in 1989. Hirsch has since written about 100 papers that “poke holes” in the widely-accepted BCS theory of superconductivity – but getting these accepted by journals has been a real struggle, he says.
These papers do not tend to cite the work of leading superconductor researchers, who are also referees, because these people are usually BCS stalwarts. Hirsch believes this could be part of his problem – although he does admit that an alternative explanation is that his ideas about holes could be wrong.
In the latest episode of the Physics World Stories podcast, Andrew Glester learns about the acoustic design of public spaces, through conversations with acousticians and architects. He visits the Bristol Old Vic – the oldest continuously running theatre in the English-speaking world – which has recently undergone a refurbishment. Glester also visits Manchester’s Bridgewater Hall, a place with which he has a strong personal connection, having worked there in the past.
Find out more about acoustics in architecture in this article by science journalist Anna Demming, which first appeared in the February issue of Physics World.
A new report from the US National Academies of Sciences, Engineering, and Medicine calls for systematic action to address the underrepresentation of women in these fields. The report recommends several ways for colleges and universities to improve recruitment, retention, and advancement of women in the so-called STEMM disciplines – science, technology, engineering, mathematics and medicine – and calls on government agencies and scientific societies to play complementary roles in promoting greater equity and diversity.
The report, entitled Promising Practices for Addressing the Underrepresentation of Women in Science, Engineering, and Medicine: Opening Doors, outlines the persistence of the challenge, particularly in the physical sciences. Women in the US received fewer than 20% of the bachelor’s degrees awarded in physics and computer science, and 21% of those in engineering. In contrast, women are close to parity for degrees in chemistry, biology, and medicine – although the report notes that in these fields, they nevertheless “encounter barriers that block advancement into senior positions.” These issues are more severe for women of colour, it adds.
While acknowledging that there is no one-size-fits-all solution, the report’s authors call on academic institutions to adopt a step-by-step approach: identify specific problems; collect and analyse data on gender disparities; pilot evidence-based practices to respond to the findings; repeat data collection to check and adjust these pilot schemes; and institutionalize effective changes through shifts in policy.
Societies working together
Beyond academia, the report recommends ways that government departments and professional societies can contribute to overcoming gender and colour inequities in STEMM. “Leaders at federal agencies, policymakers in Congress, scientific and professional societies, and the White House can all play a powerful role in promoting transparency and accountability and in supporting and rewarding evidence-based actions to promote greater equity in the STEMM enterprise,” says Rita Colwell, a former director of the National Science Foundation and chair of the committee responsible for the report.
Billy Williams, who served on the committee and is also vice president for ethics, diversity, and inclusion at the American Geophysical Union, notes that the report highlights some existing scientific society initiatives. These include the American Association for the Advancement of Science’s SEA Change; the Inclusive Graduate Education Network, which Williams describes as “a partnership of more than 30 societies, institutions, organizations, corporations and national laboratories poised to lead a paradigm shift in increasing the participation of underrepresented racial and ethnic minority students who enter graduate or doctoral level programmes in the physical sciences”; and the Societies Consortium on Sexual Harassment in STEMM, which counts more than 120 scientific societies as members.
Colwell, a microbiologist at Johns Hopkins University and the University of Maryland, says the study gives her “a strong conviction that the challenge of realizing a more diverse, equitable, and inclusive science, engineering, and medical enterprise can be met with great success, if all stakeholders share the passion, will, and perseverance to achieve positive change.”
Abbreviated breast MRI identifies more invasive cancers in women with dense tissue than digital breast tomosynthesis (DBT) does, according to a study by German and US researchers (JAMA 10.1001/jama.2020.0572).
The results suggest that abbreviated breast MRI could be a powerful breast cancer screening tool in a clinical environment increasingly dominated by DBT, for women at both high and average risk, lead author Christopher Comstock of Memorial Sloan Kettering Cancer Center in New York City said in a statement released by the centre.
“When screening women at average risk with dense breasts, we found that abbreviated breast MRI detected almost two and a half times as many breast cancers as 3D mammography,” Comstock said. “We also found that the abbreviated breast MRI was well tolerated by women, with very few side effects.”
Lead author Christopher Comstock. (Courtesy: Memorial Sloan Kettering Cancer Center)
Performance measures
Breast MRI boasts the highest cancer detection rate of all breast imaging modalities, Comstock and colleagues wrote, and has been shown to be useful not only in women at high risk of the disease but also in those at average risk, corresponding author Christiane Kuhl of RWTH Aachen University in Germany told AuntMinnieEurope.com via email.
“The reason why MRI is superior to radiographic imaging is because it highlights angiogenic activity of breast cancers – unlike mammography or DBT, which are pure structural imaging,” she said. “It’s also impervious to dense tissue.”
But critics say that using conventional breast MRI to screen for breast cancer isn’t practical in a larger population, due to its expense, its longer exam time (45 minutes, compared with 15 minutes for a mammogram), and the fact that it requires a contrast agent. That’s why abbreviated breast MRI – which takes about 10 minutes – shows promise, Comstock and colleagues noted.
“Multiple studies have confirmed equivalent diagnostic accuracy of abbreviated breast MRI with full MRI protocols,” the group wrote. “These observations have led to the consideration of utilizing abbreviated breast MRI to screen women with dense breasts.”
The study, called Comparison of Abbreviated Breast MRI and Digital Breast Tomosynthesis, or EA1141, compared the screening performance of abbreviated breast MRI and DBT in women with dense breasts. It consisted of 1444 women who underwent breast cancer screening with both modalities between December 2016 and November 2017 at 47 sites in the US and one in Germany. The women were between 40 and 75 and had heterogeneously or extremely dense breast tissue.
The primary end point was the detection of invasive cancer. Secondary measures included sensitivity, specificity, the rate of additional imaging recommendations and positive predictive value of biopsy (PPV1). Biopsy results were the reference standard for cancer detection rate and PPV1, while interval cancers reported were used as the reference standard for sensitivity and specificity.
Twenty-three cancers were found in the patient cohort, 17 of which were invasive cancer with or without ductal carcinoma in situ (DCIS) and six of which were DCIS only. There were no interval cancers.
Breast MRI identified all 17 invasive cancers and five of the six DCIS cases, while DBT found seven of the 17 invasive cases and two of the six DCIS cases. DBT did outperform breast MRI on specificity and PPV1, however, although the difference in PPV1 was not statistically significant.
Breast MRI versus DBT for detecting invasive cancers.
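A back-of-the-envelope check, using only the counts quoted above (invasive cancers, the study’s primary end point, across the 1444 women screened), reproduces the “almost two and a half times” figure.

```python
# Quick arithmetic check of the headline figures from the counts quoted above.
women   = 1444
mri_inv = 17      # invasive cancers found by abbreviated breast MRI
dbt_inv = 7       # invasive cancers found by DBT

print(f"MRI detection rate: {1000 * mri_inv / women:.1f} per 1000 women")  # ~11.8
print(f"DBT detection rate: {1000 * dbt_inv / women:.1f} per 1000 women")  # ~4.8
print(f"ratio: {mri_inv / dbt_inv:.1f}x")                                  # ~2.4x, i.e. almost 2.5 times
```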
The study offers further evidence that abbreviated breast MRI could be used on its own for screening rather than as an adjunct to mammography or DBT, according to Kuhl.
“Our study investigates abbreviated breast MRI as a standalone imaging method, not as a supplement to mammography or its ‘best in class’ successor, DBT,” she said. “Once the utility of abbreviated breast MRI is established and the demand for more MRI screening is obvious, I would hope vendors would develop dedicated MR systems optimized for screening purposes.”
Ready for the clinic?
Abbreviated breast MRI shows promise for breast cancer screening in women with dense tissue, but it may not be ready for the clinic, wrote Anna Tosteson, of the Dartmouth Institute for Health Policy and Clinical Practice in Lebanon, New Hampshire, US, in an editorial accompanying the study.
“The promise of abbreviated breast MRI is that it may improve cancer detection without the lengthy examination time and high costs of conventional breast MRI,” she wrote. “Abbreviated breast MRI acquisition requires less than 10 minutes. However, [it] still requires the contrast-enhancing agent used in full-protocol breast MRI and thus carries the same gadolinium-associated risks.”
More research is definitely required, Tosteson cautioned.
“Before widespread adoption, further evidence is needed to demonstrate that abbreviated breast MRI will address the limitations of conventional breast MRI in terms of practicality and cost-effectiveness for the larger screening population of women with dense breasts,” she wrote. “Importantly, the reductions in image acquisition and interpretation time will not overcome the need for better patient access to MRI, the requirement for intravenous gadolinium contrast administration, and the associated patient preparation time.”
An Apollo spacecraft blasts off from Earth and then slowly descends onto the grey, pitted lunar surface. In the next 23 minutes of the thrilling new show at the Hayden Planetarium in New York City, we meet more spacecraft: Cassini, Huygens, Voyager, Rosetta, Galileo and Magellan. “Worlds beyond Earth” is a show that treats these spacecraft as protagonists on missions to different places in the solar system, flying into planetary atmospheres, over canyons, around comets, and through dense swarms of moonlets.
The planetarium show debuted on 21 January this year at the Hayden, which is in the Rose Center for Earth and Space – part of the American Museum of Natural History on Manhattan’s Upper West Side. The show differs radically from its predecessor, “Dark universe”. That one focused on telescopes and what they had discovered about the evolution of the cosmos. Many scenes in it featured throngs of galaxies exploding in the direction of the observer, reminding me of what it’s like to look up during a hailstorm.
“Worlds beyond Earth”, in contrast, is more a travelogue, full of orbiters, flybys and landers as well as scenes of the unique places that the spacecraft encounter. The fastidious attention to details – both their colour and resolution – makes the objects as absorbing as the fossils, crystals and bugs scaled up to the size of dogs that we encounter elsewhere in the museum.
The show’s distinctive character is no surprise given that it was curated by Denton Ebel, a geologist and the first non-physicist among the half-dozen to curate a Hayden Planetarium spectacle. “I wanted it to be tactile,” he told me when I visited him in his office a few weeks before the show opened. “I wanted it to be about the stuff.” This show provides a vivid feel for the colours, textures and roughness of the surfaces that the spacecraft explore. “It’s not just showing what we know,” he said, “it’s also showing how we know it.”
An enormous support system was put into play to mount the show. Two years in the making, it was produced by a team of 15 people plus numerous consultants and advisers. A key tool was OpenSpace, a $6m interactive data-visualization software package. Although the software is still under development, the show’s creators were able to use it to recreate the journeys of the spacecraft, showing accurate orbital trajectories and instrument targeting. These scenes – and those involving representations of the spacecraft and the places they studied – then had to be culled to 23 minutes.
What, I asked Ebel, did he most regret going on the cutting-room floor?
The geologist paused, mentally screening outtakes from different areas of the solar system. Finally, he said, “Pluto. It’s five light-hours away, a world of ice but full of colour, and not nearly as cratered as it should be. That means that it’s active and resurfaces itself.” Ebel had great images thanks to NASA’s 2015 New Horizons flyby, when the spacecraft journeyed between Pluto and its twin Charon. “But that required three minutes that we didn’t have.”
Still, Ebel hesitated. “But I dearly miss showing the plumes that rise from the surface of [one of Saturn’s moons] Enceladus – plumes that feed ice particles to the outer ring of [the planet]. That would have taken 30 seconds, but we didn’t even have that. I also miss not showing the MESSENGER spacecraft mapping the surface of Mercury.”
Five shows have been staged at the Rose Center since it opened in 2000. And, thanks to my children, I’ve seen every one. All the shows have had celebrity narrators: Tom Hanks, Robert Redford, Harrison Ford, Whoopi Goldberg. The astronomer Neil deGrasse Tyson, meanwhile, narrated “Dark universe”. The latest show is delightfully narrated by the Academy Award-winning Kenyan-Mexican actress Lupita Nyong’o.
“Worlds beyond Earth” was written by Natalie Starkey, a cosmic chemist who now works as a physics communicator at the Open University in the UK. The original score is by Robert Miller, who has composed the scores for four previous shows, and the music was recorded at Abbey Road Studios in London. The brilliant solo acoustic guitar passages are by the musician and retired Yankee baseball star Bernie Williams, who took the trouble to check out the acoustics of the Rose Center’s dome beforehand. The planetarium’s projection system was upgraded to give a resolution of 8000 pixels, and a 26-channel sound system was installed, including two channels dedicated to seat shakers.
The critical point
“Don’t think of the solar system as full of objects,” Ebel says, “think of it as full of worlds.” Each place that we come across has vastly different features – not only surfaces, textures and colours but also volcanic activity, magnetic fields of different strength and shape, and internal dynamics. Mars has the largest volcanoes in the solar system but Jupiter’s moon Io is the most volcanically active. Saturn’s moon Titan has a methane atmosphere while Venus, in Ebel’s words, is a “greenhouse gas hellhole”.
What an audience member appreciates from visiting these worlds is the unique status of the Earth, from which the Apollo 15 spacecraft blasted off at the beginning of the show and to which we return at the end. By visiting so many other places in the intervening 23 minutes we appreciate the Earth’s special character even more. This sets the stage for the final element, which makes the show so different from the others – its particular emotional mood. That mood is expressed in the show’s final image of the full Earth rotating, half lit by the Sun, and is audible in Nyong’o’s voice as she concludes her narration by saying that the Earth’s atmosphere has “the perfect blend of molecules” for human life, and that “it’s up to us to sustain it”.