“A robust bio-inspired, liquid, sludge and bacteria-repellent coating that can essentially make a toilet self-cleaning,” is how Tak-Sing Wong describes a recent invention of his research group at Penn State University.
The coating is sprayed onto the toilet surface in two steps – the first puts down a layer of hair-like molecules and the second makes those hairs extremely slippery. The team then tested its efficacy using artificial poo – yes, it is available – as well as bacteria commonly found in toilets. Neither was able to stick to the surface. As well as boosting hygiene and reducing odours, the team says that the coating could reduce the water used to flush toilets by 50%.
Moving on from toilets to logjams, which aren’t actually jammed according to geoscientists Nakul Deshpande and Benjamin Crosby at Idaho State University in the US. The duo studied a logjam in Idaho’s Big Creek using a number of techniques including time-lapse photography. They charted the motion of the logs in May and June 2016, as the river crested its annual peak. The jam had been formed two years earlier when a snow avalanche pushed dead trees into the river.
Deshpande and Crosby found that the logjam exhibited creep and clogging behaviours that are also seen in some disordered materials. This, they conclude, could provide insights into how to mitigate hazards associated with logjams. You can read more in “Logjams are not jammed: measurements of log motions in Big Creek, Idaho”.
A highly compact, low-energy device capable of switching the paths taken by light within photonic systems has been unveiled by physicists in the US and Switzerland. The new switch could provide a basis for artificial-intelligence (AI) systems that mimic human decisions, allowing for a diverse range of applications.
“All-optical computers” use light instead of electronic signals to process information. In principle, they could be faster and much more energy efficient than conventional computers. However, it is proving very difficult to create compact, energy-efficient photonic devices that can switch and process optical signals at high speeds.
The new device was created by Chris Haffner and colleagues at the National Institute of Standards and Technology (NIST), ETH Zurich and the University of Maryland.
Light enters the switch via a linear silicon waveguide that is adjacent to a racetrack-shaped cavity that is etched onto a silicon disc. If the wavelength of the incoming light resonates with the racetrack cavity, some of the light will enter the racetrack and circulate around it many times.
Gold membrane
The device also incorporates an extremely thin, circular gold membrane, suspended just a few nanometres above the disc. This membrane is connected to the disc by doped silicon and gold bridges, enabling Haffner and colleagues to apply a varying voltage between the two structures. This allows them to bend the membrane either up or down, creating a variable gap between it and the waveguide.
As some of the light travelling around the cavity escapes, it strikes the membrane, creating collective oscillations of electrons (called plasmons) on the gold’s surface. These plasmons vibrate at the same frequency as the light, but with much shorter wavelengths. This means that the plasmons can be converted back into light with a high degree of control.
When the membrane is bent upwards, the rest of the light in the waveguide passes through unaffected. When it is bent downwards, however, the membrane’s plasmon oscillations allow light to leak from the waveguide into the cavity. This light can then interfere destructively with light in the waveguide, essentially switching off the transmission of light. Furthermore, light can be transferred to a second waveguide positioned close to the membrane – rerouting the path of the light.
The compactness and low energy requirements of the device allowed the team to integrate it onto a single computer chip. They envisage a wealth of AI-related applications for their device, including self-driving cars which can rapidly redirect light beams to scan for other cars and pedestrians. More generally, the technology could bring about advanced circuits that form the backbones of neural networks – capable of recognizing patterns and making human-like decisions about complex tasks.
This week’s episode focuses on the interface between physics and computing, with deep dives into how artificial intelligence (AI) is contributing to medical physics and how silicon could form the basis of a future quantum computer.
First, we hear from Tami Freeman, Physics World’s resident expert on medical physics, about a new positron emission tomography (PET) scanner that can image a patient’s whole body much more quickly (or at higher resolutions) than is possible with current commercial scanners. We then stick with the medical theme to discuss three recent examples of how AI is being used in medicine: firstly to diagnose skin conditions (but, disturbingly, only if the patient’s skin is white); secondly to help radiologists detect lung tumours in X-rays; and thirdly to develop better radiotherapy treatment plans.
The second part of our podcast switches from classical computing to the quantum world. There are several ways of constructing the qubits, or quantum bits, that make up a quantum computer, and this week we hear from a trio of researchers – Fernando Gonzalez-Zalba, Alessandro Rossi and Tsung-Yeh Yang – who have been developing silicon-based qubits. Their work is part of a Europe-wide collaboration between universities, government laboratories and companies called MOS-Quito, and you can read more about it in their article for the Physics World Focus on Computing.
And finally, if you’ve been dying to hear the answers to last week’s parlour game, be sure to listen to the end of the podcast and groan along with our editors at some truly terrible physics-in-film wordplay.
Deep brain stimulation (DBS) – in which electrodes implanted in the brain send electrical signals to areas that control movement – is increasingly employed to treat symptoms of movement disorders such as Parkinson’s disease, essential tremor or dystonia. It is also used in epilepsy and is under investigation as a potential treatment for traumatic brain injury, addiction, dementia, depression and several other conditions.
Patients with implanted electrodes often undergo brain MRI, for example to guide electrode placement, investigate DBS outcomes or evaluate implantation-related abnormalities. DBS electrodes are generally made from thin-film platinum or iridium oxide. However, such metal-based electrodes are affected by the magnetic fields of the MR scanner, and can cause image artefacts, move or vibrate, or even generate heat.
To tackle these problems, San Diego State University (SDSU) engineers have created a glassy carbon microelectrode for use instead of the metal version. Working in collaboration with researchers at Karlsruhe Institute of Technology, they have now shown that the new electrode does not react to MRI scanning, making it a safer option for DBS (Microsyst. Nanoeng. 10.1038/s41378-019-0106-x).
“Our lab testing shows that, unlike the metal electrode, the glassy carbon electrode does not get magnetized by the MRI, so it won’t irritate the patient’s brain,” explains first author Surabhi Nimbalkar.
The glassy carbon electrodes, first developed in 2017 at SDSU, are designed to last longer in the brain without deterioration. The researchers previously demonstrated that while metal electrodes degrade after 100 million electrical impulse cycles, the glassy carbon material survived 3.5 billion cycles. Another benefit is that glassy carbon electrodes can read both chemical and electrical signals from the brain.
“It’s supposed to be embedded for a lifetime, but the issue is that metal electrodes degrade, so we’ve been looking at how to make it last a lifetime,” says senior author Sam Kassegne. “Inherently, the carbon thin-film material is homogenous so it has very few defective surfaces. Platinum has grains of metal, which become the weak spots vulnerable to corrosion.”
In their latest study, Kassegne and colleagues fabricated probes made from glassy carbon and thin-film platinum microelectrodes supported on a polymer substrate. They placed the probes in a brain-tissue-mimicking agarose phantom and imaged them in a 3 T MRI scanner using clinical MRI sequences.
Sam Kassegne and Surabhi Nimbalkar at San Diego State University have tested glassy carbon electrodes and found that they are safer because they don’t react to MRI scans. (Courtesy: SDSU)
The researchers found that, because of their low magnetic susceptibility and lower conductivity, the glassy carbon microelectrodes caused almost no susceptibility shift artefacts and no eddy-current-induced artefacts compared with the platinum microelectrodes. Tests in a high-field (11.7 T) magnet exhibited similar findings.
The team also used a novel instrument developed at KIT to precisely measure gradient-induced vibrations in the electrodes during 1.5 T MRI. Both the platinum and glassy carbon microelectrode samples had vibration amplitudes below the limit of detection (indistinguishable from that of non-conductive PMMA plates).
Theoretical analysis, however, revealed that while the platinum microelectrode was at the limit of detection, the glassy carbon microelectrode had an approximately 40-fold weaker response. The team also note that gradient-induced vibration scales to the power of four with implant radius, so for larger electrodes, the smaller conductance of glassy carbon will be advantageous.
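That fourth-power dependence on implant radius is easy to appreciate numerically – a minimal sketch (the radius ratios are illustrative, not values from the study):

```python
# Gradient-induced vibration scales as the fourth power of implant radius,
# so even modest size increases amplify the effect dramatically.
def vibration_scale(radius_ratio):
    """Relative vibration response when the implant radius is scaled by radius_ratio."""
    return radius_ratio ** 4

print(vibration_scale(2))   # doubling the radius -> 16x the response
print(vibration_scale(3))   # tripling the radius -> 81x the response
```

This is why the lower conductance of glassy carbon matters more as electrodes get larger.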
Finally, to examine induced currents in the two microelectrode types, the researchers fabricated glassy carbon and platinum ring electrodes supported on a silicon wafer. Induced currents measured with a 1 Ω resistor indicated that the induced current in glassy carbon was at least a factor of 10 smaller than in the platinum sample.
The researchers conclude that glassy carbon microelectrodes demonstrated superior MR compatibility to standard thin-film platinum microelectrodes, experiencing negligible vibration amplitudes and minimal induced currents, and generating almost no image artefacts. While they did not examine RF-induced heating in this study, the lack of RF-induced eddy currents (a large source of heating) in glassy carbon microelectrodes suggests that they will also be superior to platinum in this respect.
With lab testing completed, Kassegne’s clinical collaborators will now test the glassy carbon electrode in patients, while Nimbalkar and Kassegne plan to test different forms of carbon for use in future electrodes.
Researchers at the universities of Bonn and Cologne in Germany have developed a new way of splitting photon wavepackets that involves cooling them down to a Bose–Einstein condensate (BEC) in a double-ridge microresonator structure. This thermodynamic method differs from the usual optical beam-splitting techniques because it is irreversible, meaning that the original beam cannot be reconstructed. The BEC-based process might be extended in the future to make new optical sources for entangled and correlated light states for applications in quantum computing.
The research team was led by Martin Weitz from the Institute of Applied Physics at Bonn University, who made headlines in 2010 when he and his colleagues created the first BEC from photons – 15 years after the first BEC was made in 1995 from a cloud of rubidium atoms cooled to a fraction of a degree above absolute zero. BECs form when bosons (particles with integer quantum spin) are cooled until they are all in the same quantum state. At this point, a BEC made up of tens of thousands of particles will behave as if it were in fact just a single quantum particle.
First photon BEC
Weitz and his colleagues made their photon BEC, or “super-photon”, by firing a laser beam into a microresonator cavity made of two concave mirrors separated by about a micron. This separation defines (to within an integer multiple) the maximum wavelength, and thus the minimum energy, of a photon trapped longitudinally within the cavity. The cavity is filled with a dye solution that is held at room temperature, meaning that its thermal energy is only about 1% of the energy of the photons.
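The “about 1%” figure can be checked with a quick back-of-the-envelope calculation. The 580 nm wavelength below is our assumption (typical for yellow light in a dye-filled microresonator), not a value stated here:

```python
# Back-of-the-envelope check: thermal energy of the room-temperature dye
# versus the energy of a trapped cavity photon. The 580 nm wavelength is
# an assumed typical value, not taken from the article.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k_B = 1.381e-23  # Boltzmann constant, J/K

wavelength = 580e-9   # assumed cavity photon wavelength, m
T = 300               # room temperature, K

E_photon = h * c / wavelength   # photon energy, J
E_thermal = k_B * T             # thermal energy scale, J
ratio = E_thermal / E_photon

print(f"photon energy:  {E_photon:.2e} J")
print(f"thermal energy: {E_thermal:.2e} J")
print(f"ratio: {ratio:.1%}")   # ~1%, consistent with the article
```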
Because the dye has so much less energy than the photons, it is highly unlikely that additional photons will emerge from the dye, or that the dye will completely absorb a photon. What happens instead, Weitz explains, is that when the photons collide with the dye molecules, they are briefly “swallowed” and then spat out again. After repeated absorption and re-emission cycles, the photons acquire the temperature of the dye solution and are thus cooled down to room temperature without being lost.
By increasing the laser intensity irradiating the dye solution, the researchers increase the number of photons in the cavity until it reaches about 60,000. This strong concentration of the light particles, combined with simultaneous cooling, causes the individual photons to coalesce into a photon BEC – much like a liquid drop condensing in a gas.
Photons pass into lower-energy states
In their new work, Weitz’s team used the same experimental set-up as before. This time, however, one of the two cavity mirrors was not completely flat. Instead, it contained two small valley-like optical ridges, or potential energy minima. When the laser light beam enters one of these ridges, the distance it travels, and therefore its wavelength, increases slightly. As in previous work, the photons in this beam are cooled by the dye solution before passing into a lower-energy state in the ridges.
The study’s lead author, Christian Kurtscheid, compares the photons in their system to marbles rolling over a sheet of corrugated metal. Whereas marbles with a high velocity would simply skip over the sheet’s surface, slower marbles will settle in the valleys of the corrugations. Similarly, the cooled photons in their experiment “roll into the valleys of the ‘corrugated sheet’ and remain there,” Kurtscheid says.
Tunnel coupling
In the experiment, the two ridges are close enough together to allow for a phenomenon called tunnel coupling, he says. This means it is no longer possible to determine which photons are in which ridge – only that they are in the lowest energy state in the cavity. “This state is the symmetric linear combination of photons localized in the two potential minima,” Weitz explains. “We observe that the photons condense into a BEC into this low-energy, spatially bifurcated state.”
This process irreversibly splits the light wavepackets, as if they were passing through an intersection at the end of a one-way street, while tunnelling between the ridges couples the photons into one of the hybridized wavefunctions.
Irreversible, coherent light splitting
The researchers observed the light wavepacket splitting by using a camera to monitor the optical radiation transmitted through one of the cavity mirrors. They also monitored the coherence of the beamsplitting by recombining the light beam paths. In this way, they were able to observe interference fringes of the beams.
“Our technique is a new, energetically-driven way to prepare optical quantum states,” Weitz tells Physics World. “We have demonstrated how to irreversibly create coherently split light.”
The method, which is detailed in Science, might be extended in the future to realize new optical sources for entangled and correlated states of light, he adds.
The researchers now plan to prepare photons in a periodic lattice potential and enhance effective photon interactions. In such a set-up, highly entangled many-body states can become the lowest energy state in the cavity. “We could directly populate these quantum states by cooling using our method and these states could be used in applications such as quantum information and communication,” Weitz says.
Biological cells adapt to chemical changes in their environment by sensing certain molecules as they bind to specific receptors on the cell surface. Previous work to measure this sensing ability typically assumed that concentrations of these binding molecules, known as ligands, remain constant or change at steady rates over time. Now, researchers in France and the US have developed a mathematical model for a cell’s sensitivity that better reflects real-world conditions. Their work could shed light on dynamic processes within biological systems, such as rapid cell growth in an embryo or the motion of bacteria in response to chemical stimuli.
The new model was developed by Thierry Mora of the ENS in Paris and Ilya Nemenman of Emory University in Georgia, and it describes the rapid shifts in a cell’s chemical environment as a non-linear, randomly fluctuating field. This formulation allows the researchers to apply techniques from stochastic field theory that are routinely employed to solve problems in quantum and statistical physics.
Calculating the smallest fluctuations
Cells sense chemical concentrations by binding external ligands to specific receptors on their surface. Mora and Nemenman’s model derives the probability of a ligand binding to a cell within a given time period to calculate the smallest fractional fluctuations of concentrations that the cell can detect.
They found that the cell can sense smaller fractional fluctuations as the overall concentration of a biochemical increases, or as a receptor’s binding rate increases. In previous calculations that assumed a constant, non-fluctuating environment, the cell’s sensitivity – expressed as a fractional error in the concentration, δc/c – was related to the biochemical concentration and the receptor binding rate by a ½ power law known as the Berg and Purcell bound.
In the new model this sensitivity changes more slowly with the concentration and binding rate and obeys a ¼ power law. Indeed, it scales as δc/c ∼ (Dacτ)^(−1/4). In this equation, D is ligand diffusivity, a is the linear size of the receptor, and τ is the ligand fluctuation time scale.
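To see what the weaker ¼ power law means in practice, here is a minimal numerical sketch. All parameter values are illustrative assumptions, not figures from the paper:

```python
# Sketch of the new sensing limit, delta_c/c ~ (D*a*c*tau)^(-1/4),
# compared with a Berg-Purcell-style 1/2 power law.
# All parameter values below are illustrative assumptions.
def sensing_limit(D, a, c, tau, exponent=-0.25):
    """Smallest fractional concentration fluctuation a receptor can detect."""
    return (D * a * c * tau) ** exponent

# Assumed ligand parameters:
D = 1e-10    # diffusivity, m^2/s
a = 1e-8     # receptor linear size, m
c = 6e20     # concentration, molecules/m^3 (~1 micromolar)
tau = 1.0    # ligand fluctuation timescale, s

quarter = sensing_limit(D, a, c, tau, exponent=-0.25)  # fluctuating environment
half = sensing_limit(D, a, c, tau, exponent=-0.5)      # constant environment

# For these parameters the fluctuating-environment limit is higher
# (i.e. worse sensitivity), as Endres notes below.
print(f"1/4-law limit: {quarter:.3g}")
print(f"1/2-law limit: {half:.3g}")
```

The ¼ power also means that doubling the concentration improves the detectable fractional error only by a factor of 2^(1/4) ≈ 1.19, rather than √2.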
Model works for a real-world situation
The researchers, who have reported their work in Physical Review Letters, say they have already applied their model to a real-world situation in which an external chemical drives a network of signals inside the cell after it binds with the cell’s receptors. Their computer simulations showed that under these circumstances, the cell can detect the molecule within the fundamental limit they derived.
Robert Endres of Imperial College London, who was not involved in this research, says that the problem of sensing a fluctuating ligand concentration by a receptor is certainly relevant for biology. He adds that deriving the sensing limit via a field-theoretic approach, as Mora and Nemenman did, is “very elegant”. However, he also downplays the degree to which their findings differ from earlier work.
“Although a great result, I would not say that it is a radically different limit from the usual Berg and Purcell limit,” Endres says. The Berg and Purcell limit, he explains, is a lower limit, a kind of “noise floor”, while Mora and Nemenman’s new limit is higher because the ligand concentration fluctuates. Moreover, as the researchers themselves explain, the new limit can be reconciled with the Berg and Purcell limit by choosing an optimal averaging time – one long enough to average out measurement noise, yet short enough that the ligand concentration varies little over the interval.
The maximum number of neutrons that can be packed into fluorine and neon isotopes has been determined by nuclear physicists working on an experiment in Japan. These are the first new measurements of the neutron dripline in 20 years and could provide physicists with important information about how to model the atomic nucleus. The same experiment failed to determine the dripline for sodium, which is the next element in the periodic table beyond neon.
The neutron dripline refers to the maximum number of neutrons that can be packed into an atomic nucleus before it becomes unbound. Until this latest work, physicists had measured the driplines of the eight lightest elements (hydrogen up to oxygen). In general, the maximum number of neutrons in a nucleus increases with the atomic number. However, there appears to be an exception to this rule with the dripline isotopes carbon-22, nitrogen-23 and oxygen-24 – which all have 16 neutrons. This is called the “oxygen anomaly” and suggests that 16 may be a magic number for neutrons, signifying the completion of a stable shell of neutrons.
Now, Deuk Soon Ahn and colleagues working on the BigRIPS experiment at the RIKEN Radioactive Isotope Beam Factory have looked at the next three elements in the periodic table: fluorine, neon and sodium.
Fragmenting nuclei
To look for the most neutron-rich isotopes of these elements, the team fired a high-energy beam of calcium-48 ions at a beryllium target. The calcium nuclei undergo fragmentation to create smaller nuclei, which the team then studied using BigRIPS, which sorts nuclei according to their mass and charge.
Before the study was done, the heaviest known isotopes of these elements were fluorine-31, neon-34 and sodium-37. However, it was not known if heavier isotopes existed. The team was unable to detect fluorine-32, fluorine-33, neon-35 and neon-36 – providing strong evidence that fluorine-31 (with 22 neutrons) and neon-34 (with 24 neutrons) are dripline isotopes.
The team also looked for sodium-38 and sodium-39 and although they saw no evidence for sodium-38, they did spot one sodium-39 nucleus – which has 28 neutrons. As a result, they conclude that the neutron dripline must be at or beyond 28 neutrons for sodium.
These observations do not fully agree with state-of-the-art calculations of the dripline for these elements – which suggest that both fluorine and neon should have a maximum of 24 neutrons. The model of the nucleus used in these calculations will therefore have to be revised.
Looking to the future, the Facility for Rare Isotope Beams (FRIB) at Michigan State University in the US will open in 2022 with beams that are significantly more intense than those at RIKEN. This should make it possible for physicists to resolve the dripline for sodium and begin to study magnesium, which is the next element in the periodic table.
Astronomers use adaptive optics to see past the Earth’s atmosphere, which distorts celestial objects and makes them appear to twinkle. A new technique called adaptive optics gonioscopy (AOG) applies similar principles to image the human eye.
Researchers at Indiana University published the results of their proof-of-concept study in Translational Vision Science & Technology (10.1167/tvst.8.5.5).
Glaucoma: origins and impact
The clinical motivation for developing this new imaging technique is glaucoma, an eye disease currently affecting 76 million people worldwide. Interventional procedures, such as laser therapies or surgery, have mixed outcomes, and the physiology of the disease is poorly understood.
In an eye that functions normally, clear fluid circulates throughout, supplying nutrition and maintaining the eye’s shape. The eye’s trabecular meshwork – a series of sequentially smaller pores – acts like a sieve, allowing fluid to drain properly and regulating pressure.
The trabecular meshwork is altered in glaucoma. Fluid no longer drains properly and intraocular pressure, a risk factor for glaucoma and its progression, rises.
“As a result, the trabecular meshwork is an important structure to study, but direct imaging has been difficult,” says Brett King, lead author on the study. “AOG allows us to overcome the natural, near total internal reflection in this region and the differences in index of refraction between the eye’s cornea and the air.”
To see or not to see with AOG
King’s team successfully used AOG to visualize the trabecular meshwork at near cellular-level resolution in a proof-of-concept study with nine individuals (seven healthy volunteers and two with pigment dispersion syndrome, thought to be a precursor to glaucoma). This high-resolution visualization is a vast improvement over current clinical imaging methods, which only allow clinicians to determine the relative level of pigmentation of the trabecular meshwork and evaluate an individual’s risk of developing certain types of glaucoma.
Left: the trabecular meshwork as it can be seen in a doctor’s office. Right: a portion of the same part of the eye seen in much greater detail using methods developed at Indiana University. (Courtesy: the Indiana University School of Optometry)
The AOG technique requires an existing adaptive optics system for imaging the retina, which includes a wavefront sensor and deformable mirror. The wavefront sensor measures optical aberrations while the deformable mirror corrects for them. The researchers added cameras to assist clinicians in placing the modified gonioscopy lens on a patient’s eye and added a head mount to reduce patient motion.
Shooting for the stars
One challenge that the researchers faced was knowing what it was they were looking at. “As AOG imaging of the trabecular meshwork had not been performed before, we had difficulty at first realizing where we were and what structures we were seeing,” says King.
The researchers compared their images to pathology images that were often distorted from surgical or post-mortem artefacts. They also tested the technique with a model eye used to train surgeons.
King and colleagues are now building a new device to image even deeper within the eye and account for anatomical differences between individuals. They hope that this work will improve our understanding of age-related and pathological changes that occur in the human eye, as well as responses to pharmacological and surgical interventions.
Whether they’re about space travel, superheroes, aliens, sentient robots or all the above and beyond, science-fiction movies are more popular than ever. Indeed, six out of the top 10 grossing films of 2018 were part of the sci-fi genre, including Avengers: Infinity War, Ant-Man and the Wasp, and Solo: a Star Wars Story.
But let’s be clear – even though these movies involve some element of science, they are still fiction, and their sole purpose is to entertain. So despite the best efforts of science advisers (see “Turning science into movie magic”), the science in the films is sometimes questionable. You’ll have your own favourite examples of “bad movie science”, but let’s look at three concepts that regularly put the fiction into science fiction.
Airlocks and gravity
Imagine the following scene (or remember WALL-E (2008) or Sunshine (2007) if you prefer). An astronaut is floating in space. Far from a planet or star, our hero pries open the airlock of a spacecraft before going inside. The outer door shuts, and the isolated chamber fills with air. Then, as if by magic, the astronaut suddenly falls to the ship’s floor seemingly under the influence of gravity.
Unless the ship has some futuristic gravity-inducing technology – like in the Star Trek (1966–present) and Star Wars (1977–present) franchises – the implication in these scenes is that because space has neither gravity nor air, the presence of air in the spacecraft means gravity kicks in.
Nope. That’s not how it works.
Gravity is an interaction between any two objects with mass, and it is so weak that you would never notice the force between any two normal sized objects. However, if one of the interacting masses is huge – and sci-fi movies just love massive asteroids, planets and black holes – then the gravitational force is noticeable. That means that there is indeed gravity in space, depending on where you are in relation to big objects. It’s why the Earth moves around the Sun even though there is no air out there.
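Just how weak the force is between everyday objects is easy to quantify with Newton’s law of gravitation. The masses and distances below are illustrative choices:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2.
# The force between two everyday masses is tiny, but becomes very
# noticeable when one of the masses is planetary.
G = 6.674e-11   # gravitational constant, N m^2 / kg^2

def gravity(m1, m2, r):
    """Gravitational attraction between two point masses, in newtons."""
    return G * m1 * m2 / r**2

# Two 70 kg people standing 1 m apart (illustrative numbers):
F_people = gravity(70, 70, 1.0)
# One of those people and the Earth (mass 5.972e24 kg, radius 6.371e6 m):
F_earth = gravity(70, 5.972e24, 6.371e6)

print(f"between two people: {F_people:.1e} N")  # ~3e-7 N, imperceptible
print(f"person and Earth:   {F_earth:.0f} N")   # roughly the person's weight
```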
So, for our astronaut in deep space, even after the airlock chamber is up to pressure, they should still be floating around.
It’s a bit different if the spaceship is in low orbit around a huge object – like the International Space Station (ISS) circling the Earth. Astronauts on the ISS only appear to be without gravity because of the orbital motion of the space station. The ISS is accelerating as it moves in a circular path around the Earth, but since the humans inside are experiencing the same acceleration as the ISS (due to the gravitational interaction), they “feel” weightless. The same thing can happen on the surface of the Earth – there are many popular amusement-park rides that place you in a free-falling seat. For the short moment that both the ride and the humans have the same free-fall acceleration, the humans will be “weightless”.
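In fact, gravity at the altitude of the ISS is only slightly weaker than at the Earth’s surface, which is why the free-fall explanation (and not “no gravity”) is the right one. A quick check, assuming a typical ISS altitude of about 400 km:

```python
# Gravitational acceleration at the ISS versus at the Earth's surface.
# The 400 km altitude is an assumed typical value for the ISS orbit.
GM_earth = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6     # mean Earth radius, m
h_iss = 4.0e5         # assumed ISS altitude, m

g_surface = GM_earth / R_earth**2
g_iss = GM_earth / (R_earth + h_iss)**2

print(f"g at surface: {g_surface:.2f} m/s^2")
print(f"g at ISS:     {g_iss:.2f} m/s^2")
print(f"fraction:     {g_iss / g_surface:.0%}")  # gravity is barely reduced
```

Astronauts still feel weightless only because both they and the station are falling freely under that nearly full-strength gravity.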
How to get it right: In The Expanse, artificial gravity on the spaceship is created in a plausible way; and the thrusters are (correctly) only used to accelerate and decelerate. (Courtesy: AF archive/Alamy Stock Photo/SYFY)
There’s an obvious reason why moviemakers use the misconception that the presence of air in a spaceship means gravity – it’s much easier to film. On Earth, creating weightless scenes is hard. One option is to send your star actors (or doubles) onto a plane and get the pilots to perform a parabolic manoeuvre, which is how scenes in the 1995 movie Apollo 13 were filmed. Other tactics are to move actors around with complex string systems that get edited out, or to film underwater – both of which were put into effect for Gravity (2013). You could also use computer animations to simulate weightlessness. But given the complexities of each of these methods, it’s easier for movie directors to have the actors move around with gravity instead of bothering with those fancy special-effect tricks. So, when the air gets pumped back into an airlock, this makes an easy transition point for movie-makers to have characters go from “floating around” in space to being in a space ship.
One movie that gets the science right with airlocks is 2001: A Space Odyssey (1968) (see “Douglas Trumbull: a mutual appreciation between scientists and moviemakers”, Nov 2019). The lead character, a scientist called David Bowman, is in a small spacepod and he needs to get back into the large spacecraft Discovery One. Unfortunately, he’s left his spacesuit helmet behind during his travels and the computer (HAL) doesn’t want to let him back in. In a daring move, Bowman opens the spacepod door and shoots himself into an empty airlock – in which he (correctly) bounces around even as you can hear the air hissing back in.
Movies like Interstellar (2014) and The Martian (2015) go down the artificial-gravity route for their spaceships, but induce it in a scientifically realistic way, rather than using some mysterious futuristic technology off-screen. In these films, the astronauts experience a normal Earth-like gravity due to the rotation of the spaceship. By standing on the inside of a rotating object, the floor will have to push on the humans in order to make them move in a circle. If this force from the rotating floor has the same magnitude as the force the ground on Earth pushes back with, an astronaut would feel the effects of artificial gravity. This isn’t science fiction, but actual science.
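The numbers work out neatly: the centripetal acceleration ω²r of the rotating ring plays the role of g. A minimal sketch, with an assumed ring radius of 32 m (our rough estimate for a ship on the scale of Interstellar’s Endurance, not a figure from the film):

```python
# Artificial gravity from rotation: spin the ring so that the centripetal
# acceleration omega^2 * r equals Earth gravity g.
# The 32 m radius is an assumed, illustrative value.
import math

g = 9.81    # target "gravity", m/s^2
r = 32.0    # assumed ring radius, m

omega = math.sqrt(g / r)        # required angular speed, rad/s
period = 2 * math.pi / omega    # time for one revolution, s
rpm = 60 / period

print(f"angular speed:   {omega:.2f} rad/s")
print(f"rotation period: {period:.1f} s (~{rpm:.1f} rpm)")
```

A few revolutions per minute is enough, which is why the slowly turning rings in these films look plausible.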
Spacecraft and thrusters
As well as featuring airlocks, just about every science-fiction movie that takes place in space will have an obligatory scene where a big and ominous spaceship is seen travelling steadily through the void with a deep rumble. The noise is the first error as sound obviously can’t travel in a vacuum. But as the spaceship passes the “camera”, it gets worse. Viewers glimpse massive engines glowing from the use of thrusters that are apparently pushing the ship forward. Newton’s second law of motion says no. Forces don’t make things move – forces make things change speed.
On Earth, if you wanted to push an object across a table, say, at constant speed, you’d obviously need to apply a constant force to overcome the friction between the surface and the object, and the air resistance. When travelling through the vacuum of space, however, these opposing forces are not present. You only need to apply a force to get moving in the first place, to change direction, or to speed up or slow down. A starship travelling at a constant speed wouldn’t need its thrusters on at all.
So why do so many movies get this simple fact wrong? The fault lies with our everyday experience of moving objects over surfaces, which wrongly implies that you need a constant force to move them at a constant speed. It’s only when you consider the invisible frictional force that you realize why that’s not true.
How to get it wrong Spaceship thrusters are often incorrectly depicted as being used to travel at a constant speed. (Courtesy: iStock/3000ad)
Of course, not every sci-fi show gets this fact wrong. My favourite example is not a movie but the Syfy and Prime Video TV series The Expanse (2015 – present). This show takes place mostly in our own solar system using spacecraft that are pretty plausible. The vehicles move around using thrusters – but they don’t travel at a constant speed. Instead, the thrusters make the spacecraft accelerate and speed up. Then, when the starship is halfway to its destination, it flips around and uses the thrusters to slow down.
The show also uses these thrusters for artificial gravity. Since the spacecraft is accelerating, the people inside also need to accelerate. And in order for a human to accelerate, that human would need a net force. This net pushing force comes from the floor in the interior of the ship. It can be very similar to the force of the ground pushing on you now (assuming you are on Earth) such that it feels like gravity.
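This “flip-and-burn” profile is easy to quantify: accelerating at a for the first half of a distance d and decelerating for the second half gives a total trip time of t = 2√(d/a). A rough sketch (the Earth–Mars distance used here is a typical close-approach value, not a figure from the show):

```python
import math

def flip_and_burn_time(distance_m, accel=9.81):
    """Trip time for accelerating over the first half of the distance,
    flipping, and decelerating over the second half: t = 2*sqrt(d/a)."""
    return 2 * math.sqrt(distance_m / accel)

# Earth-Mars separation at a typical close approach; it varies with the orbits
d_mars = 5.5e10  # metres
t_trip = flip_and_burn_time(d_mars)
days = t_trip / 86400  # about 1.7 days at a constant 1 g
```

That continuous 1 g burn both keeps the crew comfortable and gets them across the solar system in days rather than months – which is exactly why the show’s writers use it.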
The biggest downside to using more plausible physics when portraying space travel is that the ships might then move in a way that will strike viewers as odd, unrealistic and unexpected. Not having ever travelled in space, most of us simply don’t have an intuition for how things would behave out there.
Massive and moving
Sci-fi films aren’t just set in space – sometimes a plot will unfold right here on the surface of the Earth. But still, it needs to be exciting and different in some way, and what better way to do that than to throw in huge robots, machines or monsters.
Take the Pacific Rim movies (2013 and 2018), which feature gigantic humanoid machines known as Jaegers, controlled by two or three co-pilots. Deployed to battle Godzilla-like monsters from an interdimensional portal, these machines are about 75–100 m tall. The question is, though, how would something this big move?
Well, imagine throwing a tennis ball vertically upwards to a certain height, h. Simple Newtonian mechanics says that the time for the ball to go up and fall back down into your hand is t = 2√(2h/g), where g is the acceleration due to gravity. So if you throw a ball up a metre, it takes about a second for the ball to go up and back down.
Big robots, big error If the Jaegers in Pacific Rim: Uprising (pictured) obeyed the laws of physics, the film would be a lot longer as they’d have to move much more slowly than they did in the movie. (Courtesy: Universal Pictures/Moviestore/Shutterstock)
Now imagine a giant 100 m Jaeger fighting a huge monster. If, in the course of the struggle, the robot takes a huge 20 m jump (just 20% of the height of the robot) then, using the formula above, it’ll be off the ground for about four seconds. That might not seem much, but in the world of films it’s enough to turn a fast-paced action scene into a slow-motion ballet. To keep the movie magic, filmmakers tend to bend the laws of physics and speed up the robots’ movements – they make these giant humanoids move like normal humans, only bigger.
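Plugging the numbers into the hang-time formula shows the problem at a glance (a quick sketch using the jump heights above):

```python
import math

def hang_time(jump_height_m, g=9.81):
    """Time off the ground for a vertical jump to height h: t = 2*sqrt(2h/g)."""
    return 2 * math.sqrt(2 * jump_height_m / g)

human_jump = hang_time(1.0)    # about 0.9 s: reads as a quick hop
jaeger_jump = hang_time(20.0)  # about 4 s: reads as slow motion on screen
```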
The same idea also works for things like punching. A human-sized arm can accelerate and move to a fully extended position in just a fraction of a second. So if you want a robot with an arm that is much larger to do the same motion in the same time, both the acceleration and velocity would have to be much higher.
Let’s say you have a human arm making a punching motion. As a rough estimate, the fist starts from rest, moves a distance of about 50 cm (the length of an outstretched arm) and completes the whole motion in about 0.1 s. During this time, the fist accelerates up to speed and then slows down to a stop. Covering 0.5 m in 0.1 s means an average speed of 5 m/s – and if the hand has a constant acceleration during the first half of the punch, basic kinematics says it briefly peaks at double that, 10 m/s, at the midpoint.
Now repeat the same calculation for a punch made by a giant 100 m-high robot. Assuming it has the same proportions as a normal human, its punch will extend over a distance of about 25 m. But for the punching robot to look like a fighting human, it would still need to complete the punch in about 0.1 s. The same time over the larger distance gives an average punching speed of 250 m/s – roughly the cruising speed of a commercial airliner. Yes, super fast.
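The scaling is linear: keep the punch time fixed and the speed grows in proportion to the reach. A quick sketch of the arithmetic (the 0.5 m reach and 0.1 s punch time are the rough estimates used above; the “peak is twice the average” comes from assuming a triangular velocity profile):

```python
def punch_speeds(reach_m, punch_time_s=0.1):
    """Average and peak fist speed for a punch covering reach_m in
    punch_time_s, accelerating for the first half and decelerating for
    the second (a triangular velocity profile)."""
    average = reach_m / punch_time_s
    peak = 2 * average  # triangular profile: peak speed is twice the average
    return average, peak

human_avg, human_peak = punch_speeds(0.5)     # 5 m/s average, 10 m/s peak
jaeger_avg, jaeger_peak = punch_speeds(25.0)  # 250 m/s average: airliner territory
```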
The physics also gets complicated when things get tiny. In the movie Ant-Man (2015) the hero, played by Paul Rudd, has the power to shrink down to less than 1 cm in height, which would make even running extremely hard. For each stride of his run, he would be off the ground for a very short period of time. Sure, he could run – but it certainly wouldn’t look like a normal human running.
You might think that none of the physics I’ve described in this article is particularly cutting-edge or advanced – and that by picking holes in movie plot lines I’m spoiling the drama of Hollywood blockbusters. To some extent, you’d be right. However, examining the scientific principles in sci-fi movies is a great way of getting school students and non-scientists interested in simple science. There are countless examples that you can use and there’s always a new movie around the corner. So next time you’re rolling your eyes at a film that mangles the science, go home, do a few back-of-the-envelope calculations and you’ve got a perfect talking point to reveal to others the beauty and wonder of science.
I remember a conversation in the Physics World newsroom about the sheer number of physicists whose profile photo features them alone in remote mountainous locations. Perhaps it fits nicely with the ideal of a physicist exploring the wonders of nature unfettered by the distractions of the city. Or it could just be that lots of physics labs are located near mountains – think CERN and all those mountain-top observatories. Whatever the reason, it’s fair to say that physicists have a special relationship with the great outdoors.
If you are one of those people, I’d highly recommend checking out the 2019 winter edition of the Mountains on Stage film festival, which is taking place in 100 cities in 15 countries across Europe. On Tuesday night, I attended the Madrid edition at CINESA Proyecciones, featuring five short documentaries shot in different mountainous regions across the world.
If you’d prefer a technical lecture on the geomorphology of glacial landforms, this is not the festival for that. Instead, the films focus on the human endeavour of exploring extreme environments – documenting true stories about climbing, skiing, trekking and other creative forms of alpine transport. All the while, the scenery is spectacular and there is plenty to discover about how professional adventurers deal with geological and other weather-related hazards.
For sheer production quality, the final film of the evening, This Mountain Life, was the pick of the bunch. Producer Grant Baldwin documents the epic six-month trek across British Columbia of daughter–mum pair Martina and Tania Halik (who turned 60 during the trip). Carrying a stash of home-produced dried fruit, the pair relied on aerial food drops as they covered the 2300 km from Vancouver to Skagway in Alaska, regularly encountering temperatures around –20 degrees Celsius.
In the film Changabang, we heard first-hand accounts from climbers who have tackled the notoriously difficult north face of this Himalayan peak. British mountaineer Andy Cave recalls the 1997 trip when his team member Brendan Murphy was swept to his death by an avalanche during the descent. The focus of the film is a 2018 expedition by three members of the French High Military Mountain Group, who retrace the footsteps of that 1997 adventure.
One moment in Changabang that captured my attention was a brief side-story about the Cold War era that could make for an intriguing documentary in itself. Apparently, the US worked with Indian intelligence agencies in 1965 to install a plutonium-powered sensor array on neighbouring peak Nanda Devi (the second highest mountain in India). According to Broughton Coburn, author of the book The Vast Unknown: America’s First Ascent of Everest, the Americans wanted to keep an eye on The People’s Republic of China, which was testing missiles in a shielded area just north of the Himalayas. The covert mission was interrupted by poor weather, and the plutonium has never been recovered.
The other films in the festival are:
Félicité (director: Antoine Frioux), a story narrated from the point-of-view of nature, following snowboarders Pierre Hourticq, Victor De Le Rue and Jeremie Heitz as they tackle the impossibly steep slopes of the Mont Blanc Massif.
Blutch (director: Nicolas Alliot), following eccentric French adventurer Jean Yves Fredriksen as he travels across the entire Himalayas, paragliding, playing his violin and sleeping in a bivouac. En route, he meets old friends among the local settlements, gets set upon by a pack of dogs and gets arrested twice.
Coconut Connection (director: Sean Villanueva O’Driscoll), another fun film with a musical flavour. Adventure-seeker Villanueva and his Italian friends go to Baffin Island to tackle some of the previously unclimbed big walls, always finding time for a jam with guitar, violin and tin whistle.
Throughout the films, I was less interested in some of the technical climbing details, though I’m sure they were appreciated by the many others who turned up to the cinema in full adventure gear. Of course, advances in science and technology have made mountaineering more accessible in the past few decades. Modern technical clothing is much lighter now than during the original ascents of Mount Everest in the 1950s. Huge improvements in weather forecasting have made it safer to plan trips, and there have been giant leaps in navigation and communications equipment.
To find out more on that topic, take a look at “Physicist on top of the world” by Melanie Windridge, a nuclear fusion researcher who successfully climbed Everest in 2018. I spoke to Melanie shortly after her trip for the Physics World Weekly podcast and she also made a series of films on the science of Everest for the Institute of Physics.