
A better way to test drinking water safety?

A new test for assessing the quality of drinking water is faster, and possibly cheaper, than the current method. While current tests take a couple of days, the new test can tell whether water is contaminated in only eight minutes (PLOS ONE 10 1371).

Felipe Lombo and his BIONUC research team from the University of Oviedo in Spain exploited a naturally occurring bacteria-binding protein, colicin S4, fused to green fluorescent protein. This fluorescent sensor produces a light signal when exposed to UV, making it a great tool for detecting bacterial contamination in drinking water.

Colicin S4 is produced by some bacteria to kill rival bacteria and specifically binds Escherichia coli, a bacterium that is an indicator of faecal water contamination. While colicin S4 provides specificity, green fluorescent protein emits a light signal that allows detection. Fusing the two parts yields a sensor that binds E. coli in contaminated water and produces a light signal. A filter retains the bacteria, along with any bound sensor, while unbound sensor molecules are washed away. If bacteria are present in the water sample, a light signal appears in response to UV light; if none are present, all sensor molecules are washed away and no signal is observed.

The signal can then be detected by a portable device that weighs only half a kilogram. Not only can it determine whether water is contaminated, but it also indicates how many E. coli bacteria are present. The researchers showed that the device can accurately quantify between 20 and 1000 bacteria in a sample.
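
To illustrate how such a device might turn a fluorescence reading into a bacterial count, here is a minimal Python sketch based on interpolating a calibration curve. The curve values, blank threshold and function are hypothetical illustrations, not the calibration procedure described in the paper.

```python
import numpy as np

# Hypothetical calibration data: fluorescence readings (arbitrary units)
# recorded for samples spiked with known numbers of E. coli.
calib_counts = np.array([20, 50, 100, 250, 500, 1000])
calib_signal = np.array([1.8, 4.1, 8.0, 19.5, 38.7, 76.2])

def estimate_count(signal, blank_level=1.0):
    """Map a fluorescence reading to an estimated E. coli count.

    Readings at or below the blank level are treated as clean water;
    readings within the calibrated range are linearly interpolated.
    """
    if signal <= blank_level:
        return 0  # no bound sensor retained on the filter
    return int(round(np.interp(signal, calib_signal, calib_counts)))

print(estimate_count(0.3))   # clean sample -> 0
print(estimate_count(10.0))  # contaminated sample -> ~126 bacteria
```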

Molecular details of the sensor

The current bacteria detection method takes several days to provide a result, leaving a long gap between sample collection and any response to contamination – time during which a large population could be exposed. In most countries, water samples are taken at a few key distribution points only once a day. A faster detection method would therefore allow more immediate responses and make more frequent sampling possible, so that outbreaks could be stopped before they spread.

Lombo’s method is not the first to offer faster testing of drinking water. However, it is the first capable of doing so without substantial equipment and cost. While the authors say their method is cheap, they do not give exact figures; how the price compares with the current culture-based method remains to be seen.

Dangers of contamination
In recent years, disease outbreaks originating from contaminated water distribution networks have occurred in Norway and the USA. One of the microorganisms causing trouble was Giardia, which causes severe diarrhoea. In Canada, another microorganism, Toxoplasma, spread through contaminated water, damaging the brains of babies. Tropical countries face frequent outbreaks of cholera, another diarrhoeal disease. In total, the World Health Organisation estimates that 663 million people lack access to safe drinking water and that contaminated water is one of the leading causes of death worldwide, after lung infections and AIDS.

Testing drinking water for all possible disease-causing organisms is impossible; there are simply too many of them. E. coli, which originates from faeces, is considered a good indicator organism because it does not multiply outside animals and therefore allows estimation of how much contaminating material was not removed from the water. The absence of E. coli generally indicates that the water was cleaned well. When the team examined water containing other bacteria, tests using the fluorescent sensor protein remained negative, indicating its good specificity.

Not just drinking water?
If this method is implemented, it would allow quicker monitoring of water quality and overcome the delay between contamination, detection and response. The low price of this fluorescent analysis and the inexpensive equipment necessary to carry it out could allow placement of several systems along the drinking water distribution pipelines, facilitating water quality monitoring every few kilometres, from source to final consumer.

The output is an electrical signal that, when above a threshold level, indicates bacterial contamination. Thus the results along a whole water distribution pipeline could be sent from a mobile phone to the water control headquarters, allowing a faster response in case of any contamination and preventing outbreaks. Finally, if the sensor turns out to also work in less pure water, its use could be expanded to the other two types of water that need regular testing: recreational water in pools and water used in farming.

Energy-time entanglement detected in photons

The best observation yet of the energy-time entanglement of photon pairs has been made by physicists in Canada. The feat was possible because Jean-Philippe MacLean, John Donohue and Kevin Resch of the University of Waterloo could measure the arrival times of photons to sub-picosecond precision.

Entanglement is a result of quantum mechanics that allows the properties of two or more photons (or other tiny particles) to be correlated more strongly than allowed by classical physics. Once seen as a quirky aspect of the quantum world, entanglement is now being used to create practical systems for quantum cryptography and quantum communications.

Most photon-entanglement experiments so far have looked at correlations between the polarizations of pairs of photons, but there are other ways that entanglement can occur. For example, a pair can be entangled in terms of the photons’ energies and the times at which the photons are detected in an experiment.

Precision detection

Physicists have struggled to measure energy-time entanglement because it requires a detector that can pinpoint the detection time of a single photon to a very high degree of precision. Now, MacLean and colleagues have built such a device.

Their experiment begins with a laser pulse being fired at a nonlinear crystal, creating a pair of entangled photons that are sent along two different paths. In each path, either the photon’s energy or its arrival time can be measured – but not both properties at the same time.

The energy of the photon is measured to high precision using a grating-based monochromator. The arrival time is determined using a relatively new technique that resembles strobe photography. It involves combining the photon with a 120 fs-long laser timing pulse in a nonlinear crystal. If the photon and pulse overlap temporally, a higher-energy photon can be emitted from the nonlinear crystal. The detection of this higher-energy photon signals the entangled photon’s overlap with the timing pulse – and hence its arrival time is known to sub-picosecond precision.

Inequality violated

By adjusting what each arm measures, the team can determine several different correlations between the properties of pairs of photons. These are correlations between the energies of the photon pairs; correlations between the arrival times of photon pairs; and cross-correlations between the arrival time and energy of photon pairs. These data are then used to calculate a mathematical inequality that should be violated when the photon pairs are entangled – which is exactly what MacLean and colleagues found.
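
The witness used in such experiments is an uncertainty-based separability bound. One standard form for continuous variables (a textbook criterion, not necessarily the exact inequality evaluated by the Waterloo team) states that, in units where ħ = 1, every separable two-photon state satisfies

$$\Delta(t_1 - t_2)\,\Delta(\omega_1 + \omega_2) \geq 1,$$

where t₁, t₂ are the photons’ arrival times and ω₁, ω₂ their angular frequencies. A measured product below this bound is impossible for unentangled photons, and therefore certifies energy-time entanglement.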

“In the last 10 to 20 years, researchers have been interested in exploring and exploiting energy-time entanglement for communication,” says MacLean. “By being able to measure ultrafast entangled photons, our measurement technique opens the door to exploiting entanglement in a whole new regime.”

The experiment is described in Physical Review Letters.

The business of physics

I have spent much of my career in industry using my physics education to solve problems. The nature of these problems has varied considerably over the years, from my industrially sponsored PhD in 1990 to running university spin-out businesses to founding my own company, PhotonStar LED Group, in 2007. Throughout this period, however, my favourite problems have been those that were both technical and commercial: I really like taking technology and using it to make things, and specifically products. I have done this with varying degrees of success over the years, and I hope to share some of these experiences as this column develops, with a view to tackling some of the age-old problems physicists face as they progress in their careers in industry.

Platform for success

One strong – and perhaps rather biased – opinion I hold is that a good physics background is a great place to start for nearly any career. Please don’t get me wrong: there are many pieces to the puzzle, and all of them are important. In particular, engineering and physics have similar skill sets, and I am proud to be a chartered engineer as well as a chartered physicist. Both of these accolades – which I gained through the Institute of Physics (IOP), which publishes Physics World – have been extremely helpful to me in my career, and I would urge anyone to consider applying for chartership.

Yet it seems to me that a deeper understanding of the underlying principles, as gained through a physics training, often enables people to come up with more radical solutions to problems – to “think outside the box” in some ways. Of course, I could also cite examples in my career where a deep understanding has actually been a distraction, and the solution came from simply getting on with the problem inside the box. But then, this is Physics World and I am among like-minded individuals, so perhaps we can leave that statement there without further proof or explanation.

Although I joined the IOP as a student and have been an avid reader of Physics World ever since, my career in industry has been quite specialist in some ways – and, as a result, I’ve been rather blinkered to all the amazing things going on in the wider physics community. This is something I have enjoyed remedying since autumn 2016, when I was elected to a four-year term as the IOP’s vice-president for business. This role has allowed me to see many similar themes and issues emerging in physics-based businesses. A particular highlight has been the chance to sit on the judging panel for the IOP Business Innovation Awards, where I get to see some fantastic innovations and meet entrepreneurs who are using physics to solve problems in many different fields. The number and quality of entries to these awards has risen year on year. Last year’s impressive set of winners demonstrated the impact of successfully applying physics in a wide range of sectors, including healthcare, space, quantum computing, communication and defence.

Until I took up this role, I didn’t realize just how important physics-based sectors are to the economy. All told, these sectors – which include manufacturing, energy production, the automotive industry and many others – contribute more than £177bn and €23bn to the UK and Irish economies, respectively, each year, while employing 6.7% of the workforce in the UK and 8.6% in Ireland. The IOP itself reflects this strength, with many IOP members working in business or industry, and (as I have discovered) the organization does a lot to support them.

Nevertheless, some physicists, especially those in industry, have at times perceived the IOP as being too academically focused. While I now disagree, it is certainly true that the IOP could always do more. (I should at this point be clear that the views expressed in this column are my own and do not represent IOP policy.) This is one reason why we have recently set up a new IOP group dedicated to business, innovation and growth. This group – which had its formal launch in October 2017 during the Business Innovation Awards reception in the Houses of Parliament – will sit alongside the IOP’s existing groups (for women in physics, medical physics, energy and more than 40 other areas) and will, I hope, complement them. Its goal is to provide an opportunity for like-minded individuals to meet, network, share experiences and discuss the common issues they face.

Next steps

In future columns, I plan to address a few of these issues, including the challenges and options available for funding physics-based businesses; the links between academia and industry; university spin-outs; the value of intellectual property; and examples of what can go wrong (and right!) during commercialization.

Extreme flood events now more frequent and severe

The first time that the residents of the Cumbrian village of Glenridding were flooded in 2015, they reassured themselves that this was an extreme event – one of those freak occurrences. But when storm Desmond, swiftly followed by storm Eva, resulted in the village being awash for the third time in less than four weeks, it seemed ludicrous to still be labelling these raging floods as rare events. So what is going on? Are extreme climate events becoming more frequent, or were the residents of Glenridding suffering a series of unlucky rolls of the dice?

Rising levels of greenhouse gases result in a warmer atmosphere, and warmer air can hold more water vapour, fuelling heavier rainfall events. We would therefore expect to see an increase in extreme flood events too, but this isn’t always the case, because floods also depend on other factors, such as whether the soil is saturated before rain falls. The floods of most interest are the extreme ones, because they cause the most damage and disruption. Such floods can be identified from the large peaks they produce in river flow rates. But because they are rare events, it is difficult to find historical records long enough to demonstrate any trends.
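
The first step of that argument – warmer air holds more moisture – is quantified by the Clausius–Clapeyron relation (a textbook result, not a calculation from this study). The saturation vapour pressure $e_s$ grows nearly exponentially with temperature:

$$\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^2} \approx 0.07\ \mathrm{K^{-1}}$$

at typical surface temperatures (latent heat $L_v \approx 2.5\times10^{6}$ J/kg, water-vapour gas constant $R_v \approx 462$ J kg⁻¹ K⁻¹, $T \approx 288$ K), so every degree of warming lets the atmosphere hold roughly 7% more water vapour.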

To get around this problem, Wouter Berghuijs from ETH Zurich, Switzerland, and colleagues took a regional approach, aggregating data over many locations, in order to provide robust information on the changing nature of extremes across a large number of catchments.

The team analysed daily streamflow observations for the period 1980 to 2009, taken from 309 catchments in eastern Australia, 671 catchments in the continental US, 244 catchments in Brazil and 520 catchments across Europe. For each catchment, the scientists identified the largest daily flow-rate events of this 30-year period – in other words, floods with a 1/30, or roughly 3.3%, chance of occurring in any given year. They then split the data into two time periods – 1980 to 1994 and 1995 to 2009 – and compared the total number of these extreme events in each period.
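
A minimal Python sketch of this counting approach, for the simplest case of one record-sized flood per catchment (hypothetical data and simplified event selection; the ERL analysis treats event independence and regional pooling more carefully):

```python
import numpy as np

def split_of_record_floods(catchments, split_index):
    """For each catchment, locate its single largest daily flow in a
    30-year record and tally whether it fell before or after the split.

    catchments: list of 1D arrays of daily streamflow (1980-2009).
    split_index: day index separating 1980-1994 from 1995-2009.
    Under a stationary climate we would expect a roughly 50/50 split.
    """
    early, late = 0, 0
    for flow in catchments:
        if np.argmax(flow) < split_index:
            early += 1
        else:
            late += 1
    return early, late

# Hypothetical usage, with random data standing in for real observations:
rng = np.random.default_rng(0)
fake_catchments = [rng.gamma(2.0, 10.0, size=30 * 365) for _ in range(500)]
print(split_of_record_floods(fake_catchments, split_index=15 * 365))
```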

The results show that both the frequency and the magnitude of these extreme flood events have increased, with the total number of extreme floods 26.6% higher on average in the latter period. The increases were greatest in the northern hemisphere, with European catchments experiencing a 44.4% increase in extreme floods and US catchments 21.4%. The changes were less dramatic in the southern hemisphere, with increases of 14% for Brazil and 11.6% for Australia.

Understanding the reasons for these changes is still difficult. It’s likely to be a combination of long-term effects such as climate change combined with short-term weather variability. “For example, Australia has suffered a big drought, which reduced the number of floods observed during the early 2000s,” said Berghuijs. “The result was that floods did not increase so strongly in Australia, but that does not mean it is sheltered from climate change.”

By taking this regional approach, the scientists have been able to quantify changes in these rare flood events and show that they have indeed increased as our climate has warmed. Currently flood protection tends to be based on local historical records, but this can be misleading, as the residents of Glenridding found to their cost. The new approach, reported in Environmental Research Letters (ERL), helps scientists and policymakers to understand whether flood protection based on standards set in the past is sufficient for present-day conditions.

Neural networks improve PET time-of-flight estimates

Improving the timing resolution of the scintillation detectors used for time-of-flight (TOF) PET could enhance image quality, quantification and lesion detection. While the development of advanced detector technologies has certainly helped, most PET detectors still extract TOF information using simple signal processing techniques, such as leading edge discrimination or constant fraction discrimination.

With hardware for fast waveform digitization now readily available, it may be possible to employ advanced signal processing techniques to estimate TOF from digitized waveforms, and improve timing resolution further. With this aim, Eric Berg and Simon Cherry from the University of California-Davis are investigating the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate TOF directly from a pair of PET waveforms (Phys. Med. Biol. 63 02LT01).

Deep CNNs, which are increasingly popular in the imaging community, use a stack of convolutional and fully connected neuron layers to learn features that are characteristic of an input. The convolutional layers convolve the input – in this case, a pair of waveforms from a coincidence event – with a set of filters to extract features from the input data. The output from each layer becomes the input for the next, allowing the network to extract complex features. The fully connected layers then map these features to an output value – in this case, the predicted TOF.
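
As a concrete illustration, here is a minimal PyTorch sketch of a network of this type. The layer sizes, filter shapes and input length are assumptions for illustration, not the authors’ architecture, but the structure – convolutional feature extraction over a 2×N waveform pair, followed by fully connected layers regressing a single TOF value – follows the description above.

```python
import torch
import torch.nn as nn

class TOFNet(nn.Module):
    """Toy CNN mapping a pair of digitized PET waveforms to a TOF estimate."""

    def __init__(self, n_samples=256):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1, 2, n_samples) - the two coincident
            # waveforms stacked as a 2-row "image".
            nn.Conv2d(1, 16, kernel_size=(2, 9), padding=(0, 4)),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, 9), padding=(0, 4)),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=(1, 9), padding=(0, 4)),
            nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * n_samples, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # single output: predicted TOF
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Would be trained with a regression loss, e.g. nn.MSELoss(),
# against ground-truth TOF labels.
model = TOFNet()
waveform_pair = torch.randn(8, 1, 2, 256)  # batch of 8 fake events
print(model(waveform_pair).shape)          # torch.Size([8, 1])
```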

“We recognized that CNNs may be well suited for estimating TOF from PET signals for a few reasons, including the flexibility and autonomy of CNNs, such that they are essentially able to engineer themselves once provided with suitable training data,” said Berg. “Many complex and intertwined processes ultimately determine the estimated TOF, therefore it is beneficial to use the self-learning feature of CNNs to build a timing estimator that uses all available information.”

One potential pitfall of machine learning is the requirement for ground-truth-labelled training data. For PET, however, this is not a problem. “Unlike most applications, it is trivial to obtain such data for TOF estimation, since the ground-truth TOF is exactly determined by the location of the radioisotope between two coincident detectors, based on the speed of light,” Berg explained.

Proof-of-concept
Berg and Cherry obtained TOF training data by stepping a 68Ge point source between a pair of scintillation detectors, placed 40 cm apart and each coupled to a single-channel photomultiplier. They examined 29 source positions, at 5 mm increments, collecting approximately 15,000 coincidence waveforms for each position, and digitized the waveforms using a bench-top oscilloscope.
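
The ground-truth labels follow from simple geometry: moving the point source a distance Δx towards one detector shortens one photon’s path by Δx and lengthens the other’s by Δx, so each 5 mm step changes the arrival-time difference by

$$\Delta t = \frac{2\,\Delta x}{c} = \frac{2 \times 5\ \mathrm{mm}}{3\times 10^{8}\ \mathrm{m/s}} \approx 33\ \mathrm{ps},$$

meaning the 29 source positions span a ground-truth TOF range of roughly ±470 ps (this arithmetic is inferred from the setup described above, not quoted from the paper).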

The authors note that the CNN approach requires minimal signal pre-processing. They baseline-corrected each waveform, applied a 430-590 keV energy window and a 5 ns coincidence timing window, and then cropped each waveform pair to capture only the first 3.5 ns (the rising edge). For use in the CNNs, the waveform pairs were stored as 2D arrays and labelled with the ground-truth TOF difference.

Digitized waveform pair before and after cropping
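
In code, this pre-processing might look like the following NumPy sketch. The energy and timing windows are those quoted above, but the sampling interval, trigger rule and function itself are illustrative assumptions:

```python
import numpy as np

DT_PS = 15.6    # assumed oscilloscope sampling interval in picoseconds
CROP_NS = 3.5   # keep only the rising edge, as in the paper

def preprocess_pair(w1, w2, e1_keV, e2_keV, t1_ps, t2_ps):
    """Return a 2xN array ready for the CNN, or None if the event is rejected."""
    # Energy window: both annihilation photons must deposit 430-590 keV.
    if not (430 <= e1_keV <= 590 and 430 <= e2_keV <= 590):
        return None
    # Coincidence timing window: arrivals within 5 ns of each other.
    if abs(t1_ps - t2_ps) > 5000:
        return None
    n_keep = int(CROP_NS * 1000 / DT_PS)   # samples spanning the first 3.5 ns
    cropped = []
    for w in (w1, w2):
        w = w - np.median(w[:20])          # baseline correction from pre-pulse samples
        start = int(np.argmax(w > 0.1 * w.max()))  # assumed trigger at 10% of peak
        cropped.append(w[start:start + n_keep])
    return np.stack(cropped)  # shape (2, N); labelled with the ground-truth TOF
```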

 

The researchers investigated CNNs with network depths of 3-7 layers (convolutional plus fully connected) and fixed or tapered filter sizes. They trained each CNN over three sessions, using 145,000 randomly chosen waveform pairs (5000 events from each source position) for each session. They then used the trained CNNs to predict TOF values for 87,000 random waveform pairs.

Comparing the timing resolution obtained from each of the CNNs with that of two standard techniques – leading edge discrimination and constant fraction discrimination – revealed that the best resolution (185±2 ps) was obtained with the tapered 6-layer CNN. This represents a 20% improvement over leading edge discrimination (231±3 ps) and a 23% improvement over constant fraction discrimination (242±4 ps).

Comparison of coincidence timing resolutions

The CNN depth had the largest impact on timing resolution, with a larger number of convolutional layers leading to improved resolution. There was no significant difference between the tapered and fixed networks. Specifically, timing resolution ranged from about 220 ps with 3-layer CNNs to 185 ps with 6-layer CNNs. The researchers note, however, that increasing network depth requires a longer training time. For example, training the 3-layer networks required about 30 minutes to reach convergence, while the 6-layer networks required 6-8 hours.

Looking ahead
Berg and Cherry note that, at present, it would be challenging to apply the CNN method clinically. “Clinical scanners currently do not have the capability to digitize and store the detector waveforms with sufficient sampling rates to implement this method,” Berg explained. “However, apart from this infrastructure restraint, we do not foresee any scientific challenges that would prohibit this method from being used in a clinical situation.”

The current study used a simple detector, whereas most detectors in clinical PET scanners comprise a larger scintillator crystal array coupled to multiple photodetectors. The researchers will therefore next test and optimize CNN-based timing for this type of detector. They also plan to investigate the behaviour of CNN-based waveform processing and to examine different photodetectors.

“Looking even further forward, and somewhat ambitiously, one can imagine a scenario where a CNN is trained to estimate not only the TOF, but also the photon detection position and deposited energy from the digitized detector waveforms,” Cherry told medicalphysicsweb. “Traditionally, these parameters are estimated independently, but in reality they are not independent processes. Using a CNN to estimate all parameters from information available from the detector would provide a convenient all-in-one estimator, and hopefully overall improve detector performance.”

American Institute of Physics names new CEO

Photograph of Michael Moloney

Michael Moloney will become the new chief executive officer (CEO) of the American Institute of Physics (AIP) in March 2018. Originally from Ireland, Moloney did a PhD in physics at Trinity College Dublin and went on to work at the Irish Embassy in Washington and the Irish Mission to the United Nations in New York.

Moloney comes to the AIP from the US National Academies of Sciences, Engineering, and Medicine, where he spent the last eight years as director of the Space Studies Board and the Aeronautics and Space Engineering Board. In previous roles at the National Academies, Moloney served on the National Materials Advisory Board, the Board on Physics and Astronomy (BPA), the Board on Manufacturing and Engineering Design, and the Center for Economic, Governance, and International Studies.

Advancing physical sciences

“So much of our modern life is dependent on technology,” says Moloney. “Technology, in turn, is based on the underlying fundamental discoveries of science,” he says, adding “I look forward to working with the AIP community to ensure we can continue to advance and promote the physical sciences for the benefit of all of us”.

The AIP is a non-profit federation of 10 physics-related societies that together have more than 120,000 members. Member societies include the American Physical Society, the Optical Society, and the American Astronomical Society – which awarded Moloney a special citation in 2011 for his leadership on the decadal survey New Worlds, New Horizons in Astronomy and Astrophysics. The AIP also publishes a portfolio of physics journals.

The CEO role is currently shared by two AIP senior executives – Catherine O’Riordan and Catherine Swartz – who took over in an interim capacity when the previous CEO Robert Brown retired in May 2017.

Rogue magnetic fields put the brakes on laser-driven protons

Twisting away: magnetic fields that are generated by electrons when a laser pulse strikes a thin target

Rogue magnetic fields generated by electrons are thwarting efforts to boost the energies of laser-driven proton beams – according to Motoaki Nakatsutsumi at Osaka University in Japan and an international team of colleagues.

High-energy beams of protons have a number of different uses, ranging from fundamental studies of particle physics to the destruction of tumours in cancer patients. Particle accelerators are the conventional way of generating proton beams, but these tend to be very large and very expensive.

Electric sheath field

Another solution is to fire intense laser pulses at a thin solid target, which drives electrons away from their atoms to create an extremely large electric “sheath field”. This field can accelerate nearby protons to form a high-energy beam. Until recently it had been assumed that the energy of the beam could be boosted to ever-higher values by increasing the intensity of the pulses.
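
The expectation of ever-higher energies comes from the standard sheath-acceleration picture (a textbook scaling, not a result of this paper): the hot electrons that set up the sheath have a temperature that grows roughly ponderomotively with the laser intensity I and wavelength λ,

$$k_B T_e \approx m_e c^2\left(\sqrt{1 + \frac{I\lambda^2}{1.37\times10^{18}\ \mathrm{W\,cm^{-2}\,\mu m^2}}} - 1\right),$$

and a hotter sheath means a stronger accelerating field – so cranking up the intensity should, naively, keep pushing the proton energies upward.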

Nakatsutsumi and colleagues noticed that an upper limit on proton energy was being approached – and now they have discovered a previously unforeseen magnetization effect that may be putting on the brakes.

“Unfortunately, the electrons that build the sheath also generate a current, which gives rise to a magnetic field”, explains Nakatsutsumi. “This magnetism jeopardizes the whole process by trapping electrons on the target surface, while protons are deflected away from the sheath.” The researchers found that above a laser intensity of around 10²¹ W/cm², the deflected protons began to lose energy – an effect that was only amplified by further increases in laser intensity.

Undesirable effects

Nakatsutsumi’s team has proposed measures that could reduce the undesirable effects of the magnetic field. These include using extremely short laser pulses and thinner target materials. However, these measures would likely be costly, while not producing much improvement. More research will be needed to minimize the unwanted effects, say the researchers.

The study is described in Nature Communications.

Magnetic Josephson junctions could help make artificial brains

A new form of artificial synapse based on superconducting Josephson junctions and magnetic nanoclusters operates just like its natural counterpart and could be used to connect processors and store memories in future brain-like computers. The device is not only much more efficient than its biological counterpart in terms of the energy it consumes, but it is also much faster.

Neuromorphic, or brain-inspired, computing aims to mimic the neural system at the physical level of neurons and synapses (which are the connections between neurons) and will rely on neuronal-like networks rather than series of binary 1s and 0s. It shows great promise and could drastically improve the efficiency of certain computational tasks, such as perception, decision making and language learning. It will also be able to more easily handle the vast data sets currently being generated around the world (big data) and support emerging technologies like artificial intelligence and the internet of things (IoT), thanks to it being massively parallel.

To make this new generation of machines, researchers are busy developing suitably plastic synapse-like devices – not least because synapses outnumber neurons by several orders of magnitude in the human brain. And although advances are being made in leaps and bounds, most devices are still much less energy-efficient than the human brain.

Voltage spikes

The researchers, led by Michael Schneider from NIST, made their synapses using standard fabrication techniques borrowed from digital Josephson-junction computing and magnetic random access memory (RAM). The Josephson junctions consist of two layers of superconducting material sandwiching an insulating barrier of nanoscale manganese clusters in a silicon matrix. When a current applied through the device’s niobium electrodes exceeds a critical level, voltage spikes are produced.
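
The spiking follows from the two textbook Josephson relations (standard superconductivity, not specific to this device):

$$I = I_c \sin\phi, \qquad V = \frac{\Phi_0}{2\pi}\frac{\mathrm{d}\phi}{\mathrm{d}t},$$

where φ is the phase difference across the junction and Φ₀ = h/2e ≈ 2.07×10⁻¹⁵ Wb is the magnetic flux quantum. When the bias current exceeds the critical current I_c, the phase winds continuously and the junction emits a train of single-flux-quantum voltage pulses, each dissipating of order I_cΦ₀ – around 2×10⁻¹⁹ J for an assumed I_c of 100 µA, consistent with the sub-attojoule spike energies quoted below.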

A neuron also works in this way – generating action potentials (voltage spikes with particular firing times) that propagate along the axon. These action potentials are transmitted through a junction to the next neuron, and the more firing between neurons, the stronger the connection. Both real and artificial synapses can thus maintain old circuits and create new ones, explains Schneider.

Orienting spins

The nanoclusters in the artificial synapse behave as tiny bar magnets, with spins that can be oriented either randomly or in a coordinated way. The researchers say they can control the number of nanoclusters pointing in the same direction, which affects the superconducting properties of the Josephson junction.

“The synapse remains in a superconducting state except when we activate it by applying current pulses in a magnetic field to order the nanocluster spins,” explains Schneider. “It then starts producing voltage spikes.”

Better than their biological counterparts

The researchers are also able to apply electrical pulses without a magnetic field to reduce magnetic ordering and increase the critical current. “This design, in which different inputs alter spin alignment and resulting output signals, is similar to how synapses in the brain operate,” says Schneider.

“However, these artificial synapses are in fact better than their biological counterparts, since they can fire much faster – 1 billion times per second, compared with a brain cell’s 50 times per second – using just one ten-thousandth as much energy.”

Efficient artificial synapses at last

“The spiking energy is in fact less than 1 attojoule and we don’t know of any other artificial synapse that uses less energy,” he adds. “Although researchers have made superconducting devices that mimic brain cells before now, efficient synapses such as ours were missing.”

The synapses can be stacked into 3D structures so they could be used to make large systems for neuromorphic computing circuits. According to simulations by the NIST researchers, these systems would transmit electricity without resistance and data in them would be transmitted and processed in units of magnetic flux.

It is not all plain sailing, though. So far, the researchers say, they have only succeeded in making tens of these devices, and they will need to make millions (or even more) to really achieve the result they are hoping for. “So, although there is a lot of promise, we still have a long way to go in scaling these synapses up to useful circuits,” Schneider tells nanotechweb.org. “Our next steps are to build small circuits and then go from there.”

The new superconducting synapses are detailed in Science Advances DOI: 10.1126/sciadv.1701329.

Boldly going to a galaxy far, far away

As a science-fiction fan, I am often asked, “Star Trek or Star Wars?” It seems an odd question. When I go into the fish-and-chip shop, nobody says, “fish or chips?”. Star Trek and Star Wars are different in many ways, but are also complementary. Both franchises have enjoyed recent and largely successful rebirths, with director J J Abrams rebooting them to critics’ (and most fans’) delight on the big screen, while on the small screen, Star Trek Discovery is entertaining new and established audiences.

It’s no surprise, then, to see new books exploring the science of each coming out. When Physics World asked me to review astrophysicist and science writer Ethan Siegel’s Treknology: the Science of Star Trek from Tricorders to Warp Drives, I did not want to be accused of favouritism, so I suggested I might also review The Physics of Star Wars: the Science Behind a Galaxy Far, Far Away by physicist Patrick Johnson, from Georgetown University in the US. So buckle up as we navigate our way through these binary stars of science fiction.

Star Trek has always been synonymous with science. It has inspired countless of today’s scientists and astronauts, with NASA astronaut Mae Jemison even famously appearing as a character on the show. But at the same time, it’s not exactly “hard science fiction” of the likes of Gattaca or Robot and Frank. There’s Treknobabble – “Chaotic space intersects ours at the 18th dimensional gradient. Voyager entered through a trimetric fracture” – and for every scientifically accurate slingshot round a planet, there’s a “spore drive” able to jump a ship to anywhere in space and time…and even, occasionally, travel into other universes.

Siegel says that, for him, it was science first, and then Star Trek. As a child he fell in love with the ideas of exploration and discovery, dreaming of finding new worlds and alien life. So when Star Trek: the Next Generation hit our television screens 30 years ago, the young Siegel was delighted to find a show that shared his desire to go where no-one had gone before. Treknology is a beautiful book, full of imagery from the Star Trek franchise, and a few descriptive diagrams to accompany the science discussed in the text. While flicking through the book is an enjoyable experience, this is no coffee-table tome. To treat the text as secondary to the images would be a mistake. Siegel’s writing is well known to many who follow his Starts with a Bang blog on Forbes, and that same engaged and enthused writing fills this book.

The doctor hologram from Star Trek Voyager

A book on the science of Star Trek is perhaps an easier sell than one on the science of Star Wars, what with its hokey religions and ancient weapons, and Johnson addresses this in his introduction. In fact, the entire book provides an answer to the argument that Star Wars, being arguably more fantasy than science fiction, is not fertile ground for discussing physics. Johnson succeeds in doing just that in an engaging and entertaining way.

The book itself is relatively unremarkable to look at. Johnson mentioned to me that he was told from the start that the art department was not going to be giving him any help at all, and it shows – the book doesn’t contain a single image. In fact, its design owes more to textbooks than the coffee-table book one might expect for such a topic. The only colour is the yellow title on the front cover, which also includes a disclaimer to avoid being sued by Disney or LucasFilm.

But Johnson’s writing, filled with humour and a clear love for both Star Wars and physics, ably counters anything lost by the lack of images. I found myself swept along with his enthusiasm, and the textbook format is lost to a galaxy of memories of the films, mixed with a wide-ranging and thoroughly entertaining romp through the physics of both our galaxy and the one far, far away.

Each book is divided into sections. Treknology has starship technology, weapons and defence, communications, computing, civilian technology and “medical and biological”; while The Physics of Star Wars is divided into chapters on space, planetary science, planet-based transportation, space travel, handheld weaponry, heavy weaponry, robotics and, yes, even “The Force”. We’ll come to that but wait for it, you must.

Treknology is not the first book on the science of Star Trek and its section on starship technology contains information that has already been explored in previous books. We read about warp drives, tractor beams, transporters and the like, but it is also a delight to find science not covered in those other books that tackle the science of the Star Trek franchise.

Set phasers to stun

Synthehol, for example, is an alcohol substitute in later Star Trek series with all the intoxication and improved confidence alcohol can deliver, but none of the hangover, upset stomach or blurred vision. Even better, the intoxication can be removed by an adrenaline shot in an emergency scenario, leaving the drinker stone-cold sober – handy if you suddenly need to control the Enterprise or defend your crew from a Borg assimilation attempt. Siegel explores the chemistry and physiology of how this might be possible, and discovers that substances with some of those properties might not always be fictional – indeed, some have even been trialled in the real world.

There are other, more classic, examples of how Star Trek was a forerunner to the technology we use in our daily lives, with flip communicators and personal-tablet devices looking almost eerily similar to the phones and computers so many of us use. But what of seemingly more far-out tech such as the “replicator”? In Star Trek, crew on board the spaceships are able to ask the computer to recreate any food for them and, thanks to the replicator, it appears in front of them, ready to eat. That seems a distant dream, if not impossible, but on board the International Space Station today there are astronauts for whom 3D-printing food is a reality already.

Despite the title of Johnson’s book specifically mentioning the physics of Star Wars, he does touch on other sciences – it would be a strange galaxy where one could consider life using physics alone – but the core of the book is focused on physics. For most physicists, there is a particular scene in Star Wars that causes reactions ranging from amusement, through a slight jarring, to outrage. Drinking in a bar full of undesirables from across the galaxy, our lovable rogue, Han Solo, claims that his ship, the Millennium Falcon, is so fast that it can do a particular route – the infamous “Kessel Run” – in “12 parsecs”. Did George Lucas, when writing the script, make a schoolboy error, using a unit of distance where he should have used one of time? I doubt anybody would seriously think otherwise, but Johnson posits some post-rationalization to put your belief safely back in suspense.

Johnson argues that we often interchange distance and time in everyday life: the answer to “How far is it to the shop?” is often “About 15 minutes.” Similarly, obstacle courses are not always about the time taken. What if the Kessel Run is a competition to see who can successfully navigate, for example, a series of black holes in the shortest distance? That would take a seriously fast ship, to get close enough to, but still escape, the gravitational pull of these stellar giants, and would also require an impressive pilot with excellent manoeuvring capabilities to pull it off. Suddenly Solo isn’t wrong, but just his usual boastful self. Could this newly imagined Kessel Run course be an exhilarating scene in the forthcoming film about Solo’s younger years?

That’s not to say that Johnson’s book sets out just to rationalize apparent scientific inaccuracies of Star Wars – the tone is more of a discussion of the issues, and he is equally at home calling out scientific nonsense in the franchise. Starkiller Base, for example, is “impossible”, but rather than getting stuck on that or allowing it to spoil his enjoyment of the film, Johnson uses it as a springboard to discuss hyperspace, dark energy, plasma and electromagnetic fields, before eventually comparing “the most impressive weapon in Star Wars” to the Large Hadron Collider at CERN.

His writing style is almost conversational, leading the reader to feel as though they are chatting to the author in somewhere considerably more salubrious than the Star Wars Cantina. The section on “The Force”, the magical power that surrounds and guides everything in the Star Wars universe, takes the reader on an exploration of the potential scientific explanations of the topic through parasitic worms, James Randi’s never-won million-dollar prize and the bacteria in the human gut. This kind of wide-ranging exploration will leave you amused, and with a burning desire to promptly watch six of the Star Wars films all over again. (There’s very little that could make someone want to watch the other three, ever. Even if you are tempted, please do not do it. They are as bad as you remember, particularly Attack of the Clones.)

Anybody who watches Star Trek or Star Wars hoping to find a wholly scientifically accurate portrayal of a possible future (or distant past) will be sorely disappointed. One of the great pleasures of both franchises is the conversations between friends about the science fact, science fiction and everything in-between of each episode or film. Both these books serve to arm the reader with a wealth of data for such interactions. Johnson told me that he has equal love for both physics and Star Wars, which, he claims, makes him “a winner at parties everywhere”. Exploring the science of each franchise is hardly going where no-one has gone before, but both of these books would be welcome additions to the collections of any science-fiction fan with the slightest interest in actual science. Both Siegel’s and Johnson’s books can be read cover to cover, or dipped into as reference books. They are equally fun to read, offering a fascinating and entertaining ride through the science in an accessible way, without speaking down to the reader or oversimplifying complex issues too much.

There is plenty to learn in Treknology for even the most avid Physics World reader. The physics sections undoubtedly cover enough areas to ensure that the majority of readers maintain their interest, but there are a host of sections on other aspects of science from physiology to chemistry and climate science. Neither book would render the reader an expert in physics, but both would undoubtedly give a greater appreciation and understanding of a wide variety of topics within the field. While Treknology does feel like the more impressive book at first glance, with all its glossy imagery, there is just as much joy and exploration of fascinating topics in The Physics of Star Wars. Like fish and chips, to choose between them, needless it is and, I dare say, somewhat futile.

  • Ethan Siegel Treknology: the Science of Star Trek from Tricorders to Warp Drives 2017 Voyager Press £19.99hb 216pp
  • Patrick Johnson The Physics of Star Wars: the Science Behind a Galaxy Far, Far Away 2017 Adams Media £10.98pb 256pp

Nuclear power – game over?

Foratom, the European nuclear trade body, commenting on the European Commission’s Clean Energy for All Europeans plan, says the EU’s aim to decarbonize the economy by over 80% by 2050 cannot be achieved without nuclear power. “Nuclear energy accounts for half of the low-CO2 base-load electricity currently generated in the EU. It provides reliable low-CO2 base-load electricity and can provide the flexibility of dispatch required to balance the increasing share of intermittent energy sources, hence continuing to contribute to security of supply.” It wants an end to preferential treatment and “priority dispatch” rules for renewables.

Foratom is not alone in pressing the case for nuclear. The World Nuclear Association is looking to an extra 1000 GW of nuclear capacity globally by 2050, while a Global Nexus Initiative report says it will be extremely hard, if not impossible, to meet the Paris COP21 climate goals “without a significant contribution from nuclear power” – globally, 4000 GW will be needed by 2100.

Given the somewhat constrained situation facing the nuclear industry at present – stuck at around an 11% global contribution while renewables roar ahead to 24% and beyond, with prices continually falling – is there any reality in these nuclear ambitions?

The European Commission predicts a decline in EU nuclear generation capacity up to 2025, since some member states are phasing it out or reducing its share. But the EC says that, subsequently, new reactors are predicted to be connected to the grid and the lifetime of others extended, so that by 2030 capacity would increase slightly and remain stable at between 95 and 105 GW by 2050. However, since electricity demand is expected to increase over the same period, the share of nuclear electricity in the EU would fall from its current level of 27% to around 20%.
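
The falling share follows from simple arithmetic: with nuclear output held roughly constant while demand grows, the share scales as the inverse of demand. Assuming, for illustration, that EU electricity demand grows by about 35% over the period (my assumption, not an EC figure),

$$27\% \times \frac{1}{1.35} \approx 20\%,$$

which matches the projected decline.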

20% might actually be rather optimistic. Switzerland has just confirmed its slow phase-out plan, after an appeal and then a referendum – with 58.2% voting for it. No new nuclear plants can be built, but existing ones can remain in use for the duration, while renewables and efficiency are ramped up. All of Germany’s plants will have closed by 2022, with renewables taking the strain, and all of Belgium’s plants will go by 2025, if not earlier – more cracks were found recently. Although its partial phase-out has been delayed by five years, France is to cut back by at least 20% to a 50% nuclear contribution and boost renewables. That leaves the UK as the main nuclear hope, possibly along with Finland (still trying to complete its much-delayed EPR) and some eastern EU countries, which want to build new plants, though some face financing problems.

Outside the EU it is a bit better, but not that much. China is often cited as the main hope, and in the longer term it should see more plants starting up, but its versions of the EPR and AP1000 have met with construction problems and delays, while renewables are accelerating ahead in China, supplying 10 times more power than the country gets from nuclear. Wind alone has already outpaced nuclear.

Russia remains a major player in the nuclear sector, seeking to export its nuclear technology worldwide, but given economic constraints it is not moving very fast. There have also been delays in India’s nuclear programme, while its renewables are moving ahead – their output has overtaken that from nuclear. Vietnam has decided not to go ahead with its plans to build a nuclear plant, while Taiwan is to close its two plants and go for renewables. And all but five of Japan’s 37 remaining reactors are still offline, with no new ones planned. South Korea was once seen as a bastion of nuclear power (and, like Russia and Japan, of nuclear technology export) but, following the recent election, it is now looking to close its existing plants and abandon most of its expansion plans (though work on two new ones will continue) while boosting renewables.

Overall, China apart, the future does not look very promising for nuclear. While in some cases this has been a matter of environmental policy and post-Fukushima fears and uncertainties, the main reason is the poor economics of nuclear. The USA has provided plenty of examples of that of late. Unable to compete with cheaper rivals, including renewables, many plants have been closed well before their planned end-of-life dates, while the new-plant construction programme is limping along, suffering cost overruns and delays. For nuclear enthusiasts it’s a grim tale.

One new TVA plant has started up, but the most recent twist was the halt of work on the 40%-built twin-reactor VC Summer AP1000 project in South Carolina – after $9bn had been spent on this flagship project. That leaves just one new reactor construction project still going ahead in the USA, and it too is facing issues.

However, true to form, Trump has announced a bold new move: “We will begin to revive and expand our nuclear energy sector, which I’m so happy about, which produces clean, renewable and emissions-free energy. A complete review of US nuclear energy policy will help us find new ways to revitalize this crucial energy resource.”

This may include subsidies to keep old plants open; some have already been agreed. But US energy secretary Rick Perry says he wants “to make nuclear energy cool again” by “focusing on the development of technology, for instance, advanced nuclear reactors, small modular reactors”.

Nuclear hasn’t been “cool” in the US, arguably, since Three Mile Island in 1979, when unit 2 had a meltdown, so he will have his work cut out. Many others actually think the game is up, in the US and globally.

Nevertheless, there are still those who look to a nuclear revival and, for good or ill, the current UK government is certainly backing nuclear, despite the ever-worsening case for its flagship EPR at Hinkley. The National Audit Office recently described it as “a risky and expensive project with uncertain strategic and economic benefits”. And the news that the much-delayed EPR at Flamanville in France – now claimed to be due for completion in late 2018 – may need a safety refit in 2024 does not inspire much confidence.

However, some say there may be other nuclear routes forward in the longer term, and a new report, Making Sense of Nuclear, tries to freshen up the debate by looking at what’s new. Although it claims not to be about “promoting nuclear as the route to a low-carbon energy system”, it clearly seeks to make a case for a positive view of nuclear prospects. Given that my own IOP ebook, Nuclear Power, Past, Present and Future, takes a very different view, it is good to see that the IOP has also backed this report – it all helps improve the debate. Sadly, though, the new report does not seem likely to move the debate on much. It is a fairly standard recital of pro-nuclear views with, arguably, not much new to say. There is the familiar claim that nuclear power has not been as bad as is sometimes portrayed (just a few deaths from Chernobyl, none from Fukushima) and that, anyway, clever new nuclear technology will be cheaper and safer. It is assertions like these that need looking at, as I tried to do. But take a look and make up your own mind!

Most agree that there is a problem. Some say effort should be refocused on improving the current type of generation III reactors, rather than trying to develop new generation IV technologies.

But that does not look like a very promising option, given the current state of play, with major companies like Westinghouse, Toshiba and EDF in trouble. Some still look to new nuclear technology such as scaled-down Small Modular Reactors, with collateral benefits seen in links to the development of mini-reactors for nuclear submarines. But in their civil application they are still years away and economically uncertain, unless perhaps they can be accepted in or near cities so that they can supply heat as well as power. That seems a long shot – see my next post. With a range of renewables accelerating ahead globally, and their costs falling fast, perhaps it’s time to move on. Here’s a detailed round-up of the state of nuclear power.
