In this episode of the Physics World Weekly podcast Giovanna Fragneto explains why neutrons are an ideal probe for studying the SARS-CoV-2 coronavirus that is responsible for the COVID-19 pandemic. Fragneto is leader of the Large Scale Structures group at the Institut Laue-Langevin, which is a world-leading centre for neutron science in Grenoble, France. In the podcast she mentions a recent webinar from the League of Advanced European Neutron Sources about how neutron science is contributing to the fight against global health threats.
Computation will also play an important role in understanding the SARS-CoV-2 coronavirus, and it is likely that those studying future viruses will benefit from quantum-computing technologies that are being developed today. In this episode we hear from Winfried Hensinger at the University of Sussex, who is cofounder of a quantum-technology start-up called Universal Quantum. The firm has just received a round of seed funding to help it achieve its ultimate goal of integrating a million ion-based quantum bits within a practical quantum computer.
(a) Single-molecule localization microscopy of a network of amyloid aggregates. (b) Nile red binding orientations to amyloid surfaces. (c-g) Individual orientation measurements along fibril backbones within the boxes in (b), colour-coded according to the direction of the estimated angle. Scale bars: 1 µm (a, b), 100 nm (f, g). (Courtesy: Tianben Ding, Tingting Wu and Matthew D Lew, Washington University in St. Louis)
Researchers in the US have developed a novel optical microscopy technique that offers new insights into amyloid plaques, which are characteristic of neurodegenerative disorders such as Alzheimer’s and Parkinson’s disease. Better understanding of these clumped or misfolded proteins could help develop new therapies, the scientists claim.
Amyloids are insoluble, abnormal protein aggregates that have been linked to the development of various diseases, including diabetes and neurological conditions such as Alzheimer’s, Parkinson’s and Huntington’s disease. While most amyloid proteins may be non-toxic, they become problematic when they form fibrous deposits, or plaques, around cells and disrupt their normal function. In the brain, the misfolding and clumping of amyloids can kill many neurons.
Understanding the underlying structure of these plaques could pave the way for the development of effective therapeutics against these diseases. Now, researchers at Washington University in St. Louis have developed a new optical microscopy technique that measures both the location and orientation of single molecules in these amyloid protein aggregates, revealing nanoscale details about their structures.
“We need imaging technologies that can watch these molecular movements in living systems to understand the fundamental biological mechanisms of disease,” explains Matthew Lew, who led the research. “Amyloid and prion-type diseases like Alzheimer’s, Parkinson’s and diabetes are our first targets for this technology, but we see it being applied in many other areas too.”
Amyloid proteins can be stained by certain fluorescent dyes. And because there is no artificial link between the fluorescent probes and the amyloid surfaces, the probes’ binding orientations can potentially provide information about the structure and organization of the amyloid proteins.
The researchers created a performance metric to characterize how well various microscopy techniques measured the orientations of such fluorescent dyes. In work described in the journal Optica, they report that a microscope that splits fluorescence light into two polarization channels provides superior and practical orientation measurements.
The new super-resolution microscopy method allows the researchers to measure not only the location of the fluorescence, but also characteristics such as its polarization, which most other microscopy approaches ignore.
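To get an intuition for how a polarization-splitting microscope encodes orientation, consider a simplified dipole picture: a fluorophore lying at angle θ in the focal plane sends photons into two orthogonal polarization channels in proportion to cos²θ and sin²θ. The short Python sketch below illustrates this toy estimator only – the function name and photon counts are invented for the example, and the team’s actual analysis also recovers out-of-plane tilt and rotational wobble.

```python
import numpy as np

def inplane_angle_deg(counts_x, counts_y):
    """Toy estimate of a fluorophore's in-plane dipole angle from photon
    counts in two orthogonal polarization channels.

    Assumes I_x ~ cos^2(theta) and I_y ~ sin^2(theta), so
    theta = arctan(sqrt(I_y / I_x)), up to sign/quadrant ambiguities
    that a real instrument resolves with additional measurements.
    """
    return np.degrees(np.arctan2(np.sqrt(counts_y), np.sqrt(counts_x)))

# A molecule sending 400 photons into the x channel and 100 into the
# y channel lies roughly 27 degrees from the x axis.
print(inplane_angle_deg(400, 100))  # ~26.6
```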
“The metric we developed calculates the performance of a particular microscope design 1000 times faster than before,” explains Tingting Wu. “By measuring the orientations of single molecules bound to amyloid aggregates, the selected microscope enabled us to map differences in amyloid structure organization that cannot be detected by standard localization microscopes.”
The researchers quantified how the orientations of fluorescent molecules (Nile red) varied each time one attached to an amyloid protein. Differences in these binding behaviours can be attributed to structure differences between amyloid aggregates. Because the method provides single-molecule information, the researchers could observe nanoscale differences between amyloid structures.
“In optical microscopy and imaging, scientists and engineers have been pushing the boundaries of imaging to be faster, probe deeper and have higher resolution,” Lew says. “Our work shows that one can shed light on fundamental processes in biology by, instead, focusing on molecular orientation, which can reveal details about the inner workings of biology that cannot be visualized by traditional microscopy.”
The researchers note that their microscopy setup used commercially available parts that are accessible to anyone performing single-molecule super-resolution microscopy. Next, they plan to monitor amyloid structures over hours and days to observe nanoscale changes as they develop and organize. Long-term studies of amyloid aggregates could reveal new information about how amyloid proteins are organized and how quickly they grow or spontaneously dissolve, the team says.
This webinar will explore the nuances of planning simple and complex Gamma Knife cases using inverse planning tools and metrics. Radiosurgical planning and pre-planning can have a significant impact on the clinical decision-making process.
Often the knowledge that one has the ability to deliver the required dose to the target while protecting critical structures can eliminate the need for an open procedure.
By optimizing both planning time and delivery time, these tools and techniques can greatly improve the patient experience. Modern inverse planning software can optimize multiple objectives simultaneously, making complex dose plans possible.
The webinar, presented by Dheerendra Prasad, will cover:
Identifying key treatment-planning metrics.
Learning how to evaluate dose plans.
Learning hybrid forward-planning and optimization.
Dheerendra Prasad is director of the Gamma Knife Center and professor of oncology, neurosurgery and radiation medicine at Roswell Park Cancer Institute, Buffalo, New York, USA. He is an international expert in the field of gamma knife radiosurgery and has treated more than 10,000 patients in a career spanning more than 30 years. He has written several journal articles, books and book chapters, and contributed to many research studies in this field.
In her introduction, Koek uses the quantum entanglement metaphor to throw light on the link between physics and art, saying that “What is entangled in the process of both of these different ways of knowing and looking at the world, is crucially the imagination – that mysterious process by which we make unexpected links, out-of-the-box connections, and have inexplicable intuitions beyond the known world.” The book makes the case for the importance of interactions between art and physics via a series of essays by physicists and science communicators. It also includes a number of personal discourses between artists and physicists on gravity, time, space, light, matter and entropy.
Theoretical physicist and bestselling author Carlo Rovelli, from Aix-Marseille University, provides a short overview of quantum entanglement. First described by Albert Einstein and collaborators in 1935, it is the mysterious connection that exists between remote quantum systems, such that any change made to one instantly influences the other. It is still one of the hardest parts of quantum mechanics to understand in terms of the everyday world – or as Rovelli puts it, “How the hell do the two particles make their decisions consistent? Simple answer: we have no clue.”
However much data we have, they don’t provide us with new ideas for their interpretation – that takes imagination
Science writer Philip Ball follows this with a plea for the importance of imagination in physics, quoting Einstein in 1929, saying, “Knowledge is limited. Imagination encircles the world.” Ball interprets this to mean “imagination precedes knowledge and establishes the precondition for it” and points out that however much data we may have from the Large Hadron Collider and other particle colliders, they don’t provide us with new ideas for their interpretation – that takes imagination.
He suggests the current inability to explain dark matter, or why we haven’t been able to integrate supersymmetry into the Standard Model of particle physics, should be seen as a failure of imagination. Indeed, Ball writes that “the collective imagination of physicists has not yet made them vivid enough to be revealed or disproved”. He adds that efforts to provide a physical picture of quantum mechanics are placing more demands on physicists’ imaginations than ever before. Perhaps, he muses, inspiration may come from philosophy, art, literature or aesthetics “as imagination doesn’t recognize categories and boundaries”.
The entanglement of art and physics was, unsurprisingly, a feature of the surrealist art movement, beginning in 1917. Gavin Parkinson of the Courtauld Institute of Art, London, provides a fascinating history of this relationship, which ultimately went sour. Key surrealists such as André Breton and Salvador Dalí became intrigued by relativity in the 1920s; while in 1943, painter Max Ernst made an explicit case for a philosophical link between surrealism and quantum theory – comparing the uncertainty of quantum phenomena to that of the role of the unconscious in surrealism. But after the war, anti-nuclear sentiment led to an intense critique of physics, and by 1958 the movement sought to “expose the physicists, empty the laboratories”. Only Dalí, who had himself become estranged from the other surrealists, stayed loyal to physics.
Art inspired by physics Foreground: “Two self-revolving toruses” by Julius von Bismarck (2010/2018). Background: “Space study 1-4” by Jorinde Voigt (2016). (Courtesy: Mikael Lundgren/Bildmuseet)
The final essay by Nicola Triscott, chief executive of the Foundation for Art and Creative Technology in Liverpool, UK, expands on both the culture of physics and the contribution of artistic approaches to physics. Triscott argues that the discipline still represents itself as being without culture and value-free, but she presents numerous pieces of evidence to the contrary. For example, she looks at how ideas on gender and physics change geographically, with the perception of physics as “hard science” connected with maleness being prevalent in north-western Europe but not further south and east. Triscott also discusses the contribution artists can make to science, via numerous visiting-artist programmes, and through the adoption of ways of working that would be considered outside the traditional scientific method – for example, Mark Neyrinck at the University of the Basque Country, Bilbao, who has used origami to study cosmic structures.
The essay contributions are followed by what Koek describes as “diptychs” – short discourses from an artist and physicist pair who give personal reflections on a theme, providing contrasts and commonalities. These perspectives go a long way to show that imagination and intuition are “entangled in the process of scientific discovery, just as they are in the artistic process”.
Entangle ultimately urges us to keep examining the relationship between physics and the arts. Koek herself acknowledges that there is “no proof that these interactions have led to big discoveries – yet”, adding that “more work [is] needed to track impacts”. But she argues “the culture of physics as a whole is benefiting from interactions with artists”.
In January 1957 J Robert Oppenheimer was on holiday in the Caribbean when he received a telegram from the Nobel-prize-winning physicist Chen-Ning Yang informing him that experimentalists had discovered that parity is not conserved in the weak interaction.
“Walked through door,” Oppenheimer cabled back to Yang.
Reactions to unexpected events, whether in science or otherwise, are often loosely lumped together as instances of surprise. Philosophers, however, discern several different ways to experience the unforeseen. Oppenheimer’s reaction to what’s called “parity violation”, for instance, I’d say is an example of shock. Shock is when your trusted set of basic assumptions slams into an equally trustworthy finding.
The vast majority of physicists, it’s safe to say, experienced that emotion in January 1957. Until then, they had assumed that every process in physics remains the same if you reverse all three spatial co-ordinates; now they had learned that this wasn’t true. Yang once told me that, for physicists, the discovery of parity violation was like having the lights switched off and being left in such total confusion that you were unsure if, when the lights came back on, you’d be in the same room.
After this particular discovery, the room didn’t look too different to before. What interests me, though, is the shock – the momentary sense that foundational assumptions could be undermined. It’s a physicist’s emotional acknowledgment of the abyss.
Experiencing the unexpected
A different way to experience the unexpected is to be bewildered. Bewilderment also involves a conflict between a finding and your assumptions, but this time your gut goes with those assumptions. You strongly suspect that something’s wrong with an experiment or finding, but you aren’t entirely sure. Think of your reaction to the “discovery” of cold fusion in 1989 or faster-than-light neutrinos in 2011.
I’m aware that I’m using ordinary words such as “shock” and “bewilderment” to describe phenomena that, to philosophers like me, have particular technical characteristics. But then physicists do the same if you think of how you use terms like “friction”, “impulse” or “power”. These names might be colloquial but they’re not arbitrary, and are used because of their loose relation to their technical meaning. So pay less attention to the words I’m about to use and more to the experiences that they point to.
In an essay in the 2018 book Surprise: an Emotion?, Anthony Steinbock – a fellow philosopher at Stony Brook University – characterizes “surprise” as being attentively and expectantly attuned to something, which then catches you off-guard and throws you back on your own experience. You accept, let’s say, both the unexpected findings and the assumptions embedded in your physics practice. But in contrast to both shock and bewilderment, you presume they can nevertheless be integrated.
Surprise involves “a believing what I cannot believe”, Steinbock writes. Think of the reaction when two teams of experimental particle physicists announced, on 11 November 1974, that they’d measured a spike in the number of particles produced at energies of 3.1 GeV indicating the existence of a long-lived particle now known as the J/ψ. As the Italian physicist Giuliano Preparata wrote: “It was as if one found in some remote region of the Earth a human race whose life expectancy was not 70 but rather 70,000 years!” Yet nobody in the physics community doubted either the findings or quantum electrodynamics.
Tell me you weren’t awed last year by the first photos of a black hole, or by the 2016 data demonstrating the existence of gravitational waves
Surprise is different from “awe”, which is a deep respect for a fundamental phenomenon when it abruptly emerges strongly and directly. Tell me you weren’t awed last year by the first photos of a black hole, or by the 2016 data demonstrating the existence of gravitational waves. Neither event was a surprise or shock in the sense of challenging fundamental assumptions. You knew these things were surely there. What was unexpected was that they appeared so magnificently and so suddenly.
“Amazement”, meanwhile, is the experience of a phenomenon that puts in an unexpected appearance but then never goes away. Think of the idea that energy comes in discrete quantities. At the beginning of the 20th century, the “quantum” was regarded as a troublesome but isolated phenomenon that might eventually disappear, but which kept repeatedly turning up, like a peculiar uninvited guest who stalks you and eventually joins your inner circle of friends.
Then there’s “astonishment”, which is the experience of something that you did not believe was even possible – not in the perceptual cards, so to speak – and forces you to reconfigure your experience. In 1895, for example, when the German physicist Wilhelm Röntgen first saw his cathode-ray tube making his fluorescent screen glow, he thought he was hallucinating. Only after elaborate exploration could he believe it was real. But Röntgen remained so mystified about his observation that he called it an “X-ray”.
The critical point
Shock, bewilderment and the other reactions I’ve mentioned are a familiar part of ordinary life. So why am I bothering to point out that they’re a familiar part of physics as well? The reason is that these reactions shed light on the differences and similarities between ordinary activities and physics. For example, in physics, time is a scalar quantity – a measurable sequence of discrete moments. In ordinary life, however, we live time as a flow in which we must simultaneously anticipate and remember.
So when physicists experience surprise, it dramatizes the presence of ordinary time in their activities. You can only find something surprising in the present because you have assumptions (the past) and expectations (the future). At its most exciting, therefore, physics is an encounter with the potentially strange, and when the strange arrives the encounter cannot help but be emotional.
But are these phenomena I’ve described familiar to you? If so, or if you have better names or other experiences, send them in and I’ll write about them in a future column.
Femtosecond fibre lasers are all about versatility. With ultrashort pulse durations of just tens of femtoseconds (in the 10⁻¹⁵ s regime), and with different models covering visible through mid-infrared wavelengths, these compact, robust and reliable rare-earth-doped fibre sources have carved out diverse applications across science and industry – from additive manufacturing to molecular “fingerprinting”, from nanoscale spectroscopy to timing distribution in particle accelerators.
Now German precision photonics specialist Menlo Systems, a leading developer and supplier of femtosecond fibre lasers, is taking that versatility to another level. The manufacturer traces its origins back two decades to the development and subsequent commercialization of optical frequency-comb technology (an invention for which Menlo’s co-founder, Theodor Hänsch, was a joint recipient of the 2005 Nobel Prize for Physics). Today, those optical frequency combs are routinely employed in fields as diverse as time and frequency metrology, spectroscopy, optical communications and fundamental physics.
In the course of its evolution, Menlo Systems has maintained something of a balancing act. On the one hand, the company has built its commercial success around high-precision and customized optical solutions for scientific customers all over the world – whether that’s smaller research groups or large-scale facilities like CERN in Geneva and the SACLA free-electron laser in Japan. At the same time, the manufacturer has forged long-term relationships with OEM industry customers, understanding their problems and gathering technical requirements to inform product development across the Menlo technology portfolio.
Those industry customers, for their part, are seeking femtosecond fibre laser systems that deliver against their core metrics – reliability, robustness, footprint, cost:performance ratio and the like – while scientific customers put the focus on parameters like superior pulse-to-pulse stability, lowest relative-intensity noise and excellent beam quality. For the latter, the dialogue with Menlo’s product development team is crucial and necessitates an extended commitment from vendor and research customer, often running over many months or even years (see “Overcoming timing delays with photonics”, below).
“The process involves mutual learning – usually in a research setting – and iteration of technology solutions in close collaboration with the scientific customer,” explains Christian Mauser, product manager for femtosecond fibre lasers at Menlo Systems.
A novel concept for better lasers
To support the diverse functional requirements for its femtosecond fibre lasers – whether in fundamental science, OEM industry applications or even space-based instrumentation – Menlo Systems developed its own mode-locking technology, based on an all-polarization-maintaining fibre design with no lifetime-limiting components. The key building block is a fibre oscillator that exploits Menlo’s patented “Figure 9” mode-locking technology – a robust optical set-up that delivers simple, compact and reliable operation and yields several advantages over other mode-locking approaches.
Our all-fibre mode-locking technology delivers outstanding performance, even in harsh environments.
Christian Mauser, Menlo Systems
“Our all-fibre mode-locking technology delivers outstanding performance, even in harsh environments,” explains Mauser. As a result, Menlo’s fibre lasers are suitable for portable operation in industrial manufacturing applications, while the excellent laser parameters give research users the reliability they need in a variety of settings – from imaging studies in neuroscience to next-generation materials microprocessing.
Equally important is the versatility of Menlo’s femtosecond fibre lasers. For starters, the timing jitter of the lasers’ output pulses is extremely low (in the attosecond and low femtosecond range), a feature that allows for creation of a timing master signal for precise coordination within a complex interplay of processes and events – for example, experiments at accelerator facilities or geodetic observatories. Further, Menlo’s proprietary intracavity actuators enable a particularly low phase-noise level over a wide range of laser repetition rates. “This is critical in applications with stabilized lasers, such as the interrogation of optical transitions in atomic clocks,” says Mauser.
Notably, all of these capabilities can be transferred to other wavelength regions (from 500 nm to 15 microns) via nonlinear frequency-conversion processes. Given Menlo’s vertically integrated structure, Mauser and his colleagues are also able to provide synchronization of the lasers to an external reference in a master–slave configuration, with fixed or tunable repetition rate. The master might be another laser, a hydrogen maser, or some type of RF signal.
Mauser adds: “It’s worth noting that other laser companies use our femtosecond fibre seed lasers and synchronization electronics to build their own laser systems. For the ‘drift-free’ distribution of timing signals, we offer all the necessary technologies – including stabilized fibre links, cross-correlators or balanced photodetectors – to ensure a timing precision of a few fs over distances reaching nearly 1 km.”
Overcoming timing delays with photonics
Keeping time: one of Menlo’s flagship TDS customers is the Geodetic Observatory Wettzell (above), a geosciences research centre in Bavaria. (Courtesy: Menlo Systems)
Scientists, it seems, are able to bend time to their will in at least some respects using the Timing Distribution and Synchronization System (TDS) from Menlo Systems. Built around a femtosecond optical pulse source – an erbium-doped-fibre oscillator based on Menlo’s “Figure 9” design – the TDS distributes a master RF or optical reference “clock” throughout a large-scale research facility with minimal added phase noise and drift (<10 fs rms).
“Timing distribution gets really challenging really quickly if you want high accuracy,” explains Pablo Dominguez, TDS product manager at Menlo. “My role is to enable that synchronization and make it work 24/7 without operator intervention. Instabilities over days and weeks shall be of the order of a few femtoseconds, whether that’s in a particle accelerator or free-electron laser facility, a radio-telescope array or a geodetic observatory.”
Menlo’s TDS system was originally developed for pump-probe experiments in linear accelerators, though other applications are now emerging. One of Menlo’s flagship TDS customers is the Geodetic Observatory Wettzell, a geosciences research centre in Bavaria that’s engaged in fundamental studies of continental drift, sea-level changes and Earth’s gravitational field, as well as a range of reference measurements to support satellite navigation, geological surveys and mapping.
“I’ve been working closely with scientists at Wettzell for several years,” says Dominguez. “They need to fix and share a common reference clock, with sub-10 fs stability, across multiple experiments and observation stations over a range of several kilometres.”
Right now, the Wettzell team needs mathematical postprocessing to correct for temporal instabilities in the observatory’s range of geodetic measurements – an unwanted source of complexity and potential distortion that TDS will address. “By fixing time in the femtosecond regime across multiple experiments,” notes Dominguez, “Wettzell will ultimately be able to measure distances between satellites, the Moon and Earth with an accuracy of tens of microns – a three orders of magnitude enhancement on what’s feasible today.”
The cornerstone of the Wettzell TDS is a mode-locked fibre laser – the optical master oscillator – which is synchronized to a low-noise RF oscillator to obtain optimum phase-noise performance, both close to and far away from the carrier frequency. The clock signal from the laser is amplified and split into the required number of ports prior to fibre-optic transmission across the facility to remotely synchronize other lasers or RF systems.
Much of the complexity of the TDS set-up lies in the active compensation and control of each fibre link, with path-length instabilities (typically of the order of several ps, corresponding to a few mm) addressed using Menlo’s high-stability phase detectors and ultrafast actuators (thus achieving sub-10 fs accuracy, corresponding to a few microns).
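The correspondence between path length and timing error follows directly from the speed of light in fibre. As a rough check – assuming a group index of about n ≈ 1.5, a value typical of silica fibre rather than a figure quoted by Menlo – a path-length change ΔL produces a timing shift

\[ \Delta t = \frac{n\,\Delta L}{c} \approx \frac{1.5 \times 1\,\mathrm{mm}}{3\times10^{8}\,\mathrm{m\,s^{-1}}} = 5\,\mathrm{ps}, \]

so millimetre-scale drifts do correspond to several picoseconds, and holding a link below 10 fs means stabilizing its optical path to roughly 2 µm.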
Dominguez concludes: “We build everything ourselves – the laser system, RF generator, optical and electronic subsystems – so we can be certain about quality, reliability and efficient system integration. Equally, we can also offer customization if the end-user wants modifications versus our off-the-shelf systems.”
Those modifications may include one to several RF outputs synchronized with the TDS, low-noise pulse-per-second (PPS) signals and additional output ports with custom wavelengths. “The form factor can also be adapted to the customer needs,” Dominguez adds, “since our devices must fit into unique locations like the cone-head of a radio-telescope.”
Tissue engineering is an emerging field in which cells, biomaterials and biotechnologies are employed to replace or regenerate damaged or diseased tissues. Currently, this is achieved by generating a biomaterial scaffold outside of the body, maturing it in a bioreactor and then surgically implanting the created tissue into the patient. This surgery, however, poses the added risk of infection, increases recovery time and may even negate the therapeutic benefits of the implant.
To prevent such complications, a US research team is developing a way to fabricate 3D tissue scaffolds inside a living patient – so-called intracorporeal tissue engineering. The researchers, from the Terasaki Institute, Ohio State University and Pennsylvania State University, aim to use robotic direct-write 3D printing to dispense cell-laden biomaterials (bioinks) in a highly precise, programmable manner. The printed bioinks are delivered through minimally invasive surgical incisions and the body itself acts as the bioreactor for maturation.
Any technique used to print tissues directly inside the body, however, must meet a specific set of requirements. The biomaterial must be 3D printable at body temperature (37 °C), and none of the procedural steps should harm the patient. Current methods, for example, use UV light to crosslink the constructed tissue, which is not safe for use within the body.
To meet these requirements, the team produced a specially-formulated bioink designed for printing directly in the body. They used the hydrogel gelatin methacryloyl (GelMA) as the biomaterial, and introduced Laponite and methylcellulose as rheological modifiers to enhance printability. “This bio-ink formulation is 3D printable at physiological temperature, and can be crosslinked safely using visible light inside the body,” explains first author Ali Asghari Adib.
The researchers used the GelMA/Laponite/methylcellulose (GLM) formulation, with and without encapsulated fibroblasts, to construct complex 3D tissue scaffolds with clinically relevant dimensions and consistent structures. They successfully 3D printed the scaffolds on agarose and chicken breast pieces, using on-site crosslinking with visible light. Cell-laden GLM scaffolds exhibited consistent mechanical properties, and the encapsulated fibroblasts maintained a viability of 71–77% over 21 days in the printed scaffolds.
Another challenge of intracorporeal tissue engineering is attaching the printed structure onto soft, live tissue surfaces. For this, the researchers employed a unique interlock technique using the robotic 3D printer. They modified the nozzle tip to penetrate 1.6 mm into the soft surfaces and fill the punctured space with bioink as it withdrew, thus creating an anchor for the tissue construct. In experiments on agarose and chicken pieces, this interlocking mechanism created stronger attachment of the scaffolds to the tissue. The team observed 3.5-fold (chicken) and 4-fold (agarose) increases in the biomaterial–tissue adhesion strength compared with printing onto the tissue surface.
The researchers conclude that the GLM biomaterial and robotic interlocking mechanism pave the way towards intracorporeal tissue engineering. This could provide lower-risk, minimally invasive laparoscopic options for procedures such as 3D printing of bio-functional hernia repair meshes, implanting patches to enhance ovarian function, creation of cell-laden scaffolds to repair tissue or organ defects, and delivery of drug-loaded or growth factor-tethered biomaterials to improve tissue regeneration.
“Developing personalized tissues that can address various injuries and ailments is very important for the future of medicine,” says Ali Khademhosseini, director and CEO of the Terasaki Institute. “The work presented here addresses an important challenge in making these tissues, as it enables us to deliver the right cells and materials directly to the defect in the operating room.”
The researchers report their findings in Biofabrication.
Looking to escape the clutches of gravity for as long as possible, physicists in the US have made a Bose–Einstein condensate onboard the International Space Station (ISS). The orbiting lab does not yet exceed the performance of the coldest atom experiments on Earth but could in future be the ideal place to run quantum-mechanical gravimeters and carry out the most precise tests of the equivalence principle.
A Bose–Einstein condensate (BEC), known as the fifth state of matter, is a dilute gas of bosonic atoms whose temperature is so low that their wavelength becomes comparable to the distance between one atom and the next. In these circumstances the atoms all occupy the same quantum state and act in unison as a superfluid – so bringing otherwise microscopic wavelike properties into the macroscopic realm.
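In quantitative terms – the standard textbook criterion rather than a figure from this experiment – condensation sets in when the thermal de Broglie wavelength becomes comparable to the interatomic spacing, i.e. when the phase-space density satisfies

\[ \lambda_{\mathrm{dB}} = \frac{h}{\sqrt{2\pi m k_{\mathrm B} T}}, \qquad n\,\lambda_{\mathrm{dB}}^{3} \gtrsim 2.612, \]

where n is the number density, m the atomic mass and T the temperature. For a dilute gas of rubidium-87, this demands temperatures below roughly a microkelvin.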
Physicists usually make BECs by confining a gas of bosonic atoms in a magnetic trap and firing laser beams at the particles to cool them down. The snag is having to release the condensate to study it. Once free, the atoms repel one another and quickly spread out if they are not cold enough – making the gas too tenuous to be detectable. But gravity also poses a major problem, its downward tug causing the atoms to collide with the bottom of the experimental apparatus within a fraction of a second.
Drop towers and parabolic flights
Researchers have carried out a variety of experiments to extend the lives of BECs by putting them in free fall and so temporarily removing the effect of gravity. One option is to drop condensates from the top of a tall, evacuated tower. Alternatively, they can be flown onboard aircraft following a parabolic trajectory or placed on sounding rockets – one such experiment in Sweden having reached a height of over 240 km and achieved free fall for 6 min.
But the best place to carry out such experiments is in orbit. Objects there are in continuous free fall, meaning that they experience perpetual zero-gravity conditions. Not only does this in principle allow much more time for experiments, it also means that before atoms are released from their trap the magnetic fields confining them can be gradually turned down – allowing the atoms to spread out slowly and cool down to even lower temperatures.
The new research has been carried out using the “Cold Atom Lab” (CAL), launched by NASA in 2018 and housed inside the US Destiny module on the ISS. Operated remotely, the $70m lab occupies just 0.4 m3 but contains lasers, magnets and all the other instruments needed to trap, cool and control an atomic gas. The atoms are initially held at the centre of a vacuum chamber, before being transferred to an “atom chip” at the top of the chamber that uses radio waves to siphon off the fractionally hotter atoms and leave the remainder at less than a billionth of a kelvin.
Distinctive features
Robert Thompson, David Aveline and colleagues of the Jet Propulsion Laboratory at the California Institute of Technology used CAL to create BECs from atoms of rubidium-87. The condensates were detectable for up to 1.18 s and had a number of distinctive features compared with their terrestrial cousins. In particular, the researchers noted that some of the rubidium atoms used in the experiment remained separate from the condensates and instead formed a halo shape around them. Held very weakly in the trap via a phenomenon known as the second-order Zeeman effect, these atoms would on Earth simply fall to the floor.
According to Brynle Barrett of the Institut d’Optique d’Aquitaine in France, the lifetime of CAL’s condensates is comparable with those produced by the best terrestrial facilities. He points out that the 6 min of free fall time achieved with sounding rockets comprised many separate experiments, none of which lasted more than 300 ms. The fact that Thompson and colleagues have not gone much beyond a second, he explains, is not due to gravity or other inertial effects but instead technical constraints – such as the challenge in getting to ever lower temperatures and the need to reduce residual magnetic forces that disturb the condensates.
The real advantage of being in orbit, says Barrett, is that potentially years of free fall should allow researchers to continually refine their experimental parameters. As such, he believes the latest research “represents a significant step toward performing high-precision experiments with quantum gases in space”.
Atom interferometry
Experiments that could be carried out include using the atoms’ halo formation to produce ultra-cold gases with extremely low densities. Another could involve creating a BEC in the (gravity-defying) shape of a bubble. But perhaps the most eagerly awaited experiments will involve atom interferometry. This entails making very accurate measurements of gravity by recording the interference fringes generated when cold atoms placed in a quantum superposition follow two very slightly different paths through a gravitational field.
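For the textbook light-pulse (Mach–Zehnder-type) atom interferometer – a standard configuration, not necessarily the exact scheme planned for CAL – the phase difference accumulated between the two paths is

\[ \Delta\phi = k_{\mathrm{eff}}\, g\, T^{2}, \]

where k_eff is the effective wavevector of the laser pulses, g the local gravitational acceleration and T the time between pulses. Because the sensitivity grows as T², the long interrogation times available in orbit translate directly into more precise gravity measurements.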
Such interferometers could potentially be used to carry out exacting tests of the universality of free fall – the idea that inertial and gravitational masses are one and the same – or to enable sensitive environmental monitoring and mineral prospecting from space. However, several technical challenges, including leaks from the atom chip, led the NASA researchers to delay installing the additional equipment needed for interferometry. Following the launch of fresh supplies in December, they installed that equipment in January this year and a month later were again generating BECs.
Looking further ahead, Barrett says there are several proposals to launch a dedicated satellite that would use cold atoms to make fundamental tests of gravity – free from the vibrations that occur on the ISS. “This decade could see several of these exciting proposals become a reality,” he says.
In modern times, the skill of “rhetoric” is often associated with shady lawyers and slippery politicians. We associate rhetorical mastery with style over substance – the ability to use language to trick people into believing something that is not necessarily true. But according to science communicator Sam Illingworth, rhetoric can play a positive role in sharing scientific findings beyond the research community.
Illingworth, a senior lecturer at the University of Western Australia, was speaking yesterday in an online event to promote the second edition of his e-book Effective Science Communication, which he co-authored with atmospheric physicist Grant Allen. Published by IOP Publishing, which also publishes Physics World, it is a practical guide for research scientists, covering inward-facing communication (writing journal articles, grant applications etc) and outward-facing communication (giving outreach presentations, dealing with the media etc).
Effective Science Communication (Second Edition)
In his talk, Illingworth mentioned rhetoric as part of a broader discussion about how scientists should carefully consider their audiences when communicating. In doing so, they should think about the three basic elements of rhetoric laid out by Ancient Greek philosopher Aristotle: ethos, logos and pathos.
It’s all Greek to me
Ethos is an appeal to ethics, which means convincing an audience of the credibility of the communicator. Relatively speaking, this is often the easiest part of the triad for scientists. Qualifications bring a recognition of expertise, while academic journals and scientific conferences have reputations often established over many years.
Logos is an appeal to logic, which again most scientists are pretty good at. The scientific method brings a structure to how science is done, which is then reflected in how papers and other communications include clear hypotheses, results and discussions.
Where it can get tricky for researchers is pathos – the appeal to passion. This is the powerful element of rhetoric that regularly gets abused by politicians and the media. For scientists, bringing emotion into the equation can seem to go against the fundamentals of being a good scientist.
“We’re taught from a very young age to be objective, to only present the cold hard facts, whereas I would argue that just presenting the cold hard facts helps to alienate scientists from society,” said Illingworth. “Yes, we need to conduct our scientific research very objectively, but when we’re talking about our research I think it’s important to be honest about our passions.”
Illingworth gave the example of climate change researchers expressing concern about the potential impacts of environmental change on society. But it got me thinking about how pathos can be used to effectively communicate physics, especially some of the more foundational research where the human angle is not so obvious.
Appeal to the emotions
Particle physics and astrophysics often appeal to a sense of awe and wonder, as well as an aesthetic appreciation of all the beautiful imagery. Quantum physics can appeal to a sense of surprise and novelty at the idea that nature can behave in such counter-intuitive ways. And advances in medical physics can evoke a sense of joy and empathy – the hope that this technology might help people back to good health.
Yes, we need to conduct our scientific research very objectively, but when we’re talking about our research I think it’s important to be honest about our passions.
Sam Illingworth
Illingworth also spoke about the importance of framing, where small changes in how the same information is presented can lead to significantly different audience responses. To illustrate, he gave an example from a 2009 study by Gächter et al. in which only 67% of PhD students registered early for a conference when doing so was presented as a discount, while 93% did so when the emphasis was instead on a penalty fee for late registration.
Of course, some people reading this may just be looking for some quick practical tips. Illingworth’s new book also has a stack of those: from mastering social media, to working with children, to dealing with nerves before giving a presentation. In his own outreach activities, Illingworth is currently using poetry and games to enable dialogues between scientists and non-scientists.
Indeed, Illingworth spoke about his research and performed some of his poetry in an episode of the Physics World Stories podcast recorded at the Blue Dot festival in the UK. You can also have a go at the trivia quiz that Illingworth recently created for Physics World, part of a series launched during the COVID-19 pandemic.
Effective Science Communication (second edition) was published in May 2020 by IOP Publishing, which also publishes Physics World. Watch Illingworth introduce the book as part of IOP Publishing’s Meet the Author webinar series.
A strain sensor capable of measurements ranging from the touch of a feather to hard-hitting impacts has been developed by Marcus O’Mara and colleagues at the University of Sussex in the UK. Claimed to be the world’s most sensitive strain sensor, the device was made using a processing technique that creates networks of graphene nanosheets in a highly flexible polymer matrix. This material’s sensitivity over such an extreme range of strains could make it well suited for wide-ranging uses in areas such as healthcare and robotics.
Polydimethylsiloxane (PDMS) is a biocompatible, transparent, and durable polymer that is widely used in applications ranging from healthcare to aerospace engineering. The material’s high flexibility could also make it useful in the latest generation of strain sensors, which display varying electrical resistance when under strain due to embedded conductive nanomaterials like graphene and silver nanoparticles. However, the twisting of molecules within the material makes it difficult to disperse nanostructures evenly, and this has so far prevented PDMS from finding practical use in sensing applications.
Oil and water
Now, O’Mara and colleagues have developed a processing technique that involves a mixture of oil and water that is stabilized by solid particles, which become trapped at the interfaces between droplets to form solid structures. In this case, the researchers assembled graphene nanosheets to stabilize oil droplets containing PDMS molecules. By fine-tuning the processing conditions, they could vary the molecular structure of the resulting material to produce continuous, highly elastic films containing large quantities of conductive graphene nanosheets.
The resulting material displayed a robust exponential relationship between strain and electrical resistance. This made it sensitive to strains ranging from under 0.1% to over 80% – a range across which the material’s resistance changed by a factor of more than one million. This represents a significant improvement over most of today’s advanced strain sensors, in which sensitivity and range often have to be sacrificed to maintain accuracy and reliability. Compared with current devices, the PDMS-based material produced by O’Mara’s team can stretch to strains up to 80 times higher and achieve resistance changes as much as 100 times greater – the largest ever reported.
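Those figures give a feel for how steep the exponential response is. Writing the resistance as R(ε) = R₀e^{kε} – the simplest form consistent with the exponential trend reported, not necessarily the paper’s exact fit – a million-fold resistance change across a strain range of roughly 0.8 implies

\[ k \approx \frac{\ln(10^{6})}{0.8} \approx 17, \]

so the resistance grows by a factor of e for every 6% or so of additional strain.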
Such a significant improvement could make the material an ideal strain sensor in a wide variety of applications, particularly in healthcare – where sensitive strain measurements are crucial for monitoring heart rate, chest motion, joint bending and patient ventilation. Elsewhere, the material could be incorporated into wearable technologies for monitoring sports performance, and may lead to new advances in “soft robots” that simulate the properties of biological systems.