
Single photons pinpoint objects inside living tissue

Scientists in the UK have developed a new technique that uses light to locate objects deep within biological tissue and which could help physicians better diagnose lung diseases. Implemented without bulky equipment and in the glare of fluorescent lighting, the technique involves precisely measuring how long it takes single photons to leave the body after being sent down a fibre-optic extension of an endoscope.

The research has been carried out as part of the Proteus project, in which more than 40 scientists from three different universities – Edinburgh, Bath and Heriot-Watt – are working together to better observe bacteria in lungs. Doctors look inside lungs using endoscopes – long narrow tubes that they insert into the lung’s airways and which they guide using a lensed camera built into the device. However, as Michael Tanner of Heriot-Watt explains, endoscopes are usually more than a centimetre in diameter, which means they cannot get through the smaller airways and into the inner lung where bacteria grow rapidly.

Getting access to this area involves pushing millimetre-diameter bundles of optical fibres down the centre of the endoscope and then out of the far end. But even though these fibres can take images of the inner lung, there is no way to gauge exactly where they end up and therefore where the imaged bacteria are. As Tanner puts it, medics rely either on “expert practice or pot luck”.

Multiple scattering

The solution devised by Tanner and colleagues is in principle very straightforward. It involves simply sending additional pulses of light down the fibre and then observing where they leave the body. Light is heavily absorbed when passing through biological tissue, while the photons that do make it out are usually scattered many times, meaning they lose much of the information about their point of origin. But there is a small chance that even over long distances any given photon will pass straight through with very little scattering.

Because these essentially “ballistic” photons travel in a straight line they not only reveal where they come from – the fibre tip – but they also emerge from the body ahead of all the other photons. So the trick in establishing the tip’s location is to time the arrival of the photons so precisely that the ballistic ones can be isolated from the rest.

To implement their scheme, Tanner and co-workers sent a series of very short near-infrared laser pulses (repetition rate: 80 MHz) through a length of fibre-optic cable inserted into a range of biological samples. They captured the emerging light using an array of single-photon detectors with a temporal resolution of around a tenth of a nanosecond. And they chose the light’s wavelength – 785 nm – both to limit absorption and to distinguish the very weak signal from hospital fluorescent lighting, which has a number of well-defined spectral peaks in the visible range.

Early arrivals

Because each laser pulse results in very few ballistic photons leaving the tissue, the researchers had to build up histograms from multiple pulses to establish exactly which detectors had snared the earliest arriving particles (and therefore where the fibre tip was). Using exposure times of up to 17 s, they had no problem doing this when burying the fibre tip inside a ventilated sheep’s lung or behind a human hand. But they could only gather limited statistics when placing it underneath a 25 cm-thick human torso, and to pin down the fibre-tip location in this case they had to turn down the background lighting.
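The time-gating principle at the heart of the method – bin photon arrival times per detector pixel, then find the pixel whose counts clear the background earliest – can be sketched in Python. This is an illustrative toy with invented numbers (array size, bin width, threshold, pixel index), not the group’s analysis code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy detector array: 256 pixels, arrival times binned at ~0.1 ns.
n_pix, n_bins = 256, 200
counts = rng.poisson(2.0, size=(n_pix, n_bins))  # scattered light plus background

# Hypothetical fibre tip beneath pixel 137: a few "ballistic" photons
# arrive early (bin 40), well before the diffuse bulk of the light.
tip_pixel = 137
counts[tip_pixel, 40] += 40

def earliest_arrival_pixel(counts, threshold=15):
    """Return (pixel, time bin) of the earliest bin, on any pixel,
    whose count clears the background threshold."""
    significant = counts > threshold
    # First significant bin per pixel; n_bins acts as a "never" sentinel.
    first_bin = np.where(significant.any(axis=1),
                         significant.argmax(axis=1),
                         counts.shape[1])
    pix = int(first_bin.argmin())
    return pix, int(first_bin[pix])

pix, t_bin = earliest_arrival_pixel(counts)
print(pix, t_bin)  # the pixel above the tip fires first
```

In the real experiment the “threshold” is set by the statistics accumulated over many pulses, which is why long exposures were needed for thick samples.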

One snag with the latest work was the inability to independently confirm the location of the fibre tip. Although the researchers tied down the ballistic photons to just one or two possible pixels – equating to a spatial resolution of about a centimetre – Tanner says it is conceivable, although unlikely, that the photons had, for example, bounced off an air pocket and were therefore not travelling directly from the tip. To remove any doubt, in future the team plans to use tissue phantoms that can be dissected after use to reveal the true location of the tip.

The group also aims to reduce the exposure time to a second or less, even for thick samples. This would allow the fibre tips to be located in real time – so enabling a clinician to overlay that position on, say, an X-ray image of a lung. Doing so, explains Tanner, could involve adding optics to fibre tips or upping the density of detector elements. As he points out, laser power, and therefore signal strength, is limited by safety considerations.

Key-hole surgery

Ultimately, adds Tanner, the group hopes to apply the new technology more broadly. Being relatively simple and compact – the prototype camera sitting in a box about the size of a biscuit tin and mounted on a tripod – he reckons that the technology could in principle be applied to all medical procedures in which instruments are inserted into the body, such as key-hole surgery and interventions requiring catheters. “Sometimes medics aren’t sure whether a catheter has gone the right way up a vessel and so need to use X-rays, which may cause delay,” he says. “Real-time imaging would be very useful.”

Hervé Rigneault, a physicist at the Institut Fresnel in France, points out that the latest technique is not the only one that could be used to probe deep into the lung. Among the alternatives, he says, is photo-acoustics, which creates biomedical images from sound waves generated by laser heating. “But,” he adds, “this is a nice piece of work that brings another possible imaging modality.”

The research is described in Biomedical Optics Express.

Big data, big responsibilities

The vices and virtues of big data

On any given day, most of us are likely to send a few e-mails, spend anywhere from a few minutes to a few hours on social media, look up a fact or figure on Google and watch the latest hit TV show on Netflix. As you go through each and every one of these seemingly mundane activities, a complex behind-the-scenes process of data-gathering occurs, as different providers aim to learn more about you, your habits, your preferences and your choices. In Big Data: How the Information Revolution is Transforming Our Lives, seasoned science writer Brian Clegg gets to grips with the good, the bad and the ugly world of big data and the huge impact it has on our lives today.

The first few chapters deal with what constitutes data, how to construct information using said data and how ultimately to convert that information into knowledge. As Clegg explains, the words in his book (or indeed, this review) are data; their arrangement into sentences creates information, and what we take away from the book constitutes knowledge. But how do you carry out a similar process with a much larger, unorganized data-set? For example, Clegg describes the problems faced during the early days of the US census, before computers were available. Despite the census being taken only once a decade, it took almost a decade to analyse the data from each one, thanks to its size and complexity – today this is much easier thanks to computing power.

A key point that Clegg repeatedly makes is that you have to ask the right question – however good and clean your data-set may be, it is only as good as the algorithms designed to manage the data, and these in turn depend on the assumptions of the people writing them. Clegg also does a very good job of lining up a host of companies and products that use big data, but one in particular that crops up again and again is Netflix. Clegg’s fascination with the company’s data-driven success is obvious, as is his interest in modern technologies such as Amazon Echo and Apple’s Siri. One chapter is dedicated to CERN and does a good job of pointing out that the discovery of the Higgs boson essentially involved searching through a monumentally large volume of data.

After extolling its many virtues, Clegg spends much of the latter half of the book detailing the immense power that big data affords and how it can be misused, explaining how both corporations and governments can easily exploit the data they gather. Clegg makes the case that the technology available today allows for detailed surveillance (often a very slippery slope), leaving us at the mercy of big corporations’ ethics to protect our private lives. The book is a quick but fascinating introduction to the big world of big data.

  • 2017 Icon Books £7.99pb 176pp

At the boundary of knowledge

Years ago, when I was a college student back home on vacation, my grandfather asked me a disarmingly simple question. “Why does –1 multiplied by itself equal 1?” He enjoyed these kinds of puzzles, and was asking from a place of genuine curiosity. I remember being taken aback by the simplicity of the question. This was just a fact about numbers that I took for granted – I’d never realized it could be questioned. Feeling pressured to come up with an answer, I struggled for a quick explanation, and spouted some jargonesque nonsense like “That’s just the definition… –1 is the multiplicative inverse of itself.” My grandfather gently let me know that this jumble of complicated sounding words did not constitute an explanation and I retreated to my room somewhat dejected. After thinking about it for a while, I came up with an answer that satisfied (indeed, delighted) him. You start with 1 + (–1) = 0, then multiply both sides by –1, and distribute. This leads you to realize that indeed –1 times itself is 1.
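Written out explicitly, the argument runs: start from the defining fact that –1 is the additive inverse of 1, multiply both sides by –1, distribute, then add 1 to both sides:

```latex
\begin{align*}
1 + (-1) &= 0 \\
(-1)\bigl(1 + (-1)\bigr) &= (-1)\cdot 0 \\
(-1) + (-1)(-1) &= 0 \\
(-1)(-1) &= 1 .
\end{align*}
```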

I hold on to three lessons from that encounter. First, that there’s a difference between jargon and understanding – a fancy-sounding word can be an easy place to hide your ignorance. Second, that you’re allowed to question things that others might take for granted. There’s even a certain joy and pleasure in doing so, and in seeing where it leads you. And last, that there’s no shame in not knowing, but there is shame in pretending that you know something you don’t. Which brings me to why I loved We Have No Idea by Jorge Cham – the artist behind the popular PHD Comics – and particle physicist Daniel Whiteson of the University of California, Irvine, US. This isn’t a book about things that we already know. Instead, it bills itself as “a guide to the unknown universe”. It’s a kind of encyclopedia of ignorance, shining a light on the nebulous boundaries between our species’ knowledge and our ignorance.

The first chapter asks why the universe follows “the Lego philosophy”, where everything in it (well, everything that we can touch and see… dark matter is a story for chapter two) is built from fundamental building blocks. At first, this might not seem like a question worth asking, but when you stop to think about it, as Cham and Whiteson do, you realize it’s a surprising fact about our world that warrants an explanation. In a later chapter, they puzzle about how strange it is that the quarks inside an atom have perfectly fractional charges (+2/3 and –1/3, not a tiny bit more or less) that can cancel each other out exactly. We have no idea why things are this way, but without this perfect cancellation we wouldn’t have neutral atoms, and stars and galaxies (and therefore you or I) would never have existed.
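The cancellation in question can be written out for the proton (two up quarks, one down) and the neutron (one up, two down) – a standard piece of bookkeeping, not taken from the book itself:

```latex
q_p = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad
q_n = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0 ,
```

so a proton’s charge exactly cancels an electron’s charge of –1, and atoms come out neutral.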

Some of my favourite questions in this book are the kind that I never stopped to think about. Why can we see so far through space? Why does the universe have a speed limit? Like jolts of caffeine, these questions shake the reader out of the mundaneness of everyday life, urging us to remark on the strange set of accidents and circumstances that led to us even being here in the first place. Through these questions, Whiteson and Cham zoom out from our everyday existence, bringing us up to the boundaries of human knowledge, and hand us a spacesuit to go exploring. The duo are frequently hilarious and deeply charming guides, offering up delightful illustrations and metaphors, oodles of puns and a fart joke or two thrown in for good measure.

In a book titled Why Don’t Students Like School?, the psychologist and educator Daniel Willingham writes, “Sometimes I think that we, as teachers, are so eager to get to the answers that we do not devote sufficient time to developing the question.” The same advice holds true for writing. One of the marks of great science writing is taking the time to develop a question. To let the size of a mystery sink into a reader’s mind before attempting an answer. Be it in questioning what space, time or mass really is; why we’re being bombarded with ludicrously high-energy particles from outer space; or even why the universe is so absurdly big, We Have No Idea excels in this measure. Rather than simply shrug their shoulders and accept that this is how the world works, the authors embrace these questions, and take them to heart.

While this is a book about the unknown, shining a light on ignorance is a clever way of explaining what we do in fact know. The book offers no easy answers, but it is filled with lucid explanations. From the bag of beans in “Jack and the Beanstalk”, used to explain mass and binding energy, to an intricate scenario involving space hamsters and nerf guns that illustrates the theory of relativity, it deploys charming (though occasionally elaborate) analogies and witty cartoons to explain complex ideas.

The explanations are the kind that my grandfather would have loved, and I think you might too. The cartoons and jokes on nearly every page might make the book look deceptively simple, but it’s quite a feat to explain subtle ideas such as dark matter, the Big Bang and the evolution of the universe without resorting to physics technobabble. This is perhaps the book’s most impressive success. It embraces ignorance when it’s appropriate, and doesn’t hide ignorance in a buzzword. In so doing, this book is that rarest of things: genuinely honest. The philosophy is perhaps best summed up when the authors write, “We are both clueless and surrounded by clues.”

  • 2017 John Murray 368pp £16.99hb

Community spirit led to people staying put during Hurricane Matthew

Photo of Jennifer Collins of the University of South Florida conducting a survey

During Hurricane Matthew in 2016, how a person perceived the dependability of their social connections played a significant role in their decision to evacuate or not. That’s according to Jennifer Collins and colleagues at the University of South Florida.

People with dependable relationships were more likely to remain during the hurricane, whereas those with less community support tended to evacuate. Individuals in groups with higher incomes or more education were likely to have more dependable relationships. The decision to evacuate did not, however, correlate with the density and diversity of someone’s social connections.

To come up with these results, Collins and colleagues surveyed evacuees at a westbound rest stop on Interstate 4 in Polk County, Florida on 5 and 6 October 2016, and non-evacuees at a Home Depot in Titusville on 8 October and a Walmart Market in Port Orange on 9 October. The sample included 62 evacuees and 70 non-evacuees. Evacuees were often from outside the areas that were under evacuation orders, and non-evacuees were sometimes found to be staying where they’d been told to evacuate.

The survey took 15 minutes and collected socio-demographic data as well as details of any previous experience of hurricane evacuation. Collins and co-authors used the Berkman-Syme Social Network Index (B-SSNI) to measure the diversity and density of an individual’s social connections and the Interpersonal Support Evaluation List (ISEL) to measure the perceived dependability of each person’s social network.

There are many factors that influence the decision to evacuate during a hurricane. How people perceived their individual risk during Hurricane Matthew depended on the characteristics of the storm, including intensity, size, location, as well as social variables, the researchers found. Understanding hazard evacuation behaviour is key to improving planning and warning protocols. Disaster services may also use this information to promote community resiliency and target education campaigns better.

On 7 September the University of South Florida Hurricane Research Team was deployed to conduct a similar study of evacuees from Hurricane Irma.

Collins and colleagues published their article in Weather, Climate and Society.

A punt on Planck, physicist puts Frankenstein to music, would Brian Cox cope on Mars?

By Hamish Johnston

A paper napkin with a load of numbers scrawled on top has been an unusual source of excitement for physicists at the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. One evening in December 2013, a group of them had gathered at a local watering hole to celebrate the success of NIST’s latest watt balance – NIST-3 – that had just determined Planck’s constant to a new accuracy. While enjoying “happy hour”, NIST researcher Dave Newell pulled out a napkin and the 10 researchers began to write down their predictions for the final value of Planck’s constant that NIST would submit to the International Bureau of Weights and Measures in Paris to help redefine the kilogram. The researchers then sealed the napkin in a plastic bottle and buried it inside a cavity within the foundation of NIST-3’s successor NIST-4, which was then being constructed.

Fast forward four years and NIST-4 has now made an even more accurate measurement of Planck’s constant, which is being compared with values from other national labs. So whose prediction came closest? The winner was Shisong Li from China’s National Institute of Metrology, who guessed 6.62606990000 × 10⁻³⁴ J s against the measured value of 6.626069934 × 10⁻³⁴ J s. And what better way to celebrate than with a trip back to the local, together with an “Italian rum cake” that had the submitted number written in icing. Tasty.
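For the curious, the margin of Li’s win is easy to check: the two values quoted above differ by only a few parts per billion (a quick illustration, not part of the original story):

```python
# Compare the winning guess with the measured value (both in J s,
# taken from the figures quoted in the text above).
guess    = 6.62606990000e-34
measured = 6.626069934e-34

diff_ppb = abs(guess - measured) / measured * 1e9
print(f"{diff_ppb:.1f} parts per billion")  # about 5 parts per billion
```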

“Composer–writer–physicist” is how Eric Sirota describes himself, and his latest musical, Frankenstein, will be opening on 9 October in New York City. Sirota did a PhD in physics at Harvard University, has published more than 90 research papers and has 20 patents to his name. He also studied music theory and composition at Brown University.

Monster show: the latest from physicist Eric Sirota (Courtesy: Eric Sirota)

In an interview with the Dramatist, Sirota contrasts writing with physics: “I can’t compare it to rocket science, but I am a soft-condensed matter physicist, and writing a musical is much more difficult.”

Particle physicist and television personality Brian Cox has visited a “Mars analogue station” in the Utah desert to find out if humans could cope with living on the Red Planet. In a trailer for the visit, he suits up and goes into “full simulation mode” to meet scientists who spend weeks at a time practising to be settlers on Mars. There’s more about Cox’s visit in the BBC programme The 21st Century Race for Space.

Starting bell sounds for CHIME

Canada has finished the construction of the country’s largest radio telescope. The C$16m Canadian Hydrogen Intensity-Mapping Experiment (CHIME) near Penticton, British Columbia, is the first research telescope to be built in Canada in more than 30 years. A ceremony to mark the completion of the telescope was attended yesterday by Canadian science minister Kirsty Duncan.

CHIME is located at the National Research Council of Canada’s Dominion Radio Astrophysical Observatory, about 260 km east of Vancouver. The telescope comprises four 100 m-long U-shaped cylinders of metal mesh and collects radio waves with wavelengths between 37 and 75 cm – similar to the wavelengths used by mobile phones. Signals collected by the CHIME telescope will be digitally sampled nearly one billion times per second, then processed to produce an image of the sky.
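The quoted wavelength band maps onto frequency via f = c/λ; a quick conversion (for illustration only) shows why the band overlaps with mobile-phone frequencies:

```python
c = 299_792_458.0  # speed of light in m/s

# CHIME's stated wavelength band, 37-75 cm, converted to frequency.
f_high = c / 0.37  # shortest wavelength -> highest frequency
f_low  = c / 0.75  # longest wavelength -> lowest frequency
print(f"{f_low/1e6:.0f}-{f_high/1e6:.0f} MHz")  # prints "400-810 MHz"
```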

Bizarre bursts

Astronomers will use the telescope to map a quarter of the observable universe to help better understand the nature of dark energy as well as study fast radio bursts. “CHIME’s unique design will enable us to tackle one of the most puzzling new areas of astrophysics today – Fast Radio Bursts,” says Victoria Kaspi from McGill University, who is a member of the CHIME collaboration. “The origin of these bizarre extragalactic events is presently a mystery, with only two dozen reported since their discovery a decade ago. CHIME is likely to detect many of these objects every day, providing a massive treasure trove of data that will put Canada at the forefront of this research.”

CHIME is a collaboration of 50 Canadian scientists from the NRC, the University of British Columbia, the University of Toronto and McGill University. “CHIME is an extraordinary example showcasing Canada’s leadership in space science and engineering,” notes Duncan. “The new telescope will be a destination for astronomers from around the world who will work with their Canadian counterparts to answer some of the most profound questions about space.”

Ions and atoms react in magneto-optical trap

A magneto-optical trap has been used by physicists in the US to study how ions and atoms interact to create hypermetallic alkaline earth oxides – materials that have potential technological applications.

Hypermetallic alkaline earth oxides are linear molecules in which an oxygen atom is sandwiched between two alkaline earth atoms. The properties of these oxides can be finely tuned through the choice of the alkaline earth atoms, creating structures that could prove useful for a wide range of applications, including nonlinear optics, materials science and chemical synthesis.

Currently these oxides are made and studied in plasmas, which makes it difficult both to control the process and to gain insight into how they form.

Quantum control

Now, Prateek Puri and colleagues at the University of California Los Angeles, the University of Connecticut and the University of Missouri have come up with a way of making hypermetallic alkaline earth oxides by reacting ions and atoms in a magneto-optical trap. The process involves cooling the reactants to temperatures as low as 5 mK and controlling the reactants’ initial quantum states. As a result, they were able to make an extremely precise study of the formation of a hypermetallic oxide ion comprising barium, oxygen and calcium (BaOCa+).

The team began by loading barium ions into the magneto-optical trap to create a string of equally spaced ions – dubbed an ion crystal. Then molecular ions comprising barium, oxygen and a methyl group (BaOCH3+) were introduced into the trap and cooled through interactions with the ion crystal. Next, a cloud of about three million calcium atoms was reacted with the BaOCH3+, and the desired BaOCa+ appeared as new ions in the crystal. Finally, the trap was switched off and all the ions were directed at a mass analyser that determined the make-up of the reaction products.

Collision energy

By varying the temperature of the reactants – and therefore the kinetic energy with which they collide – over a temperature range of 5 mK–30 K, Puri and colleagues were able to study the effect that collision energy has on how the reaction occurs. They also used a laser to put the calcium atoms into specific quantum states before the reaction occurred. This allowed them to work out that atoms in certain quantum states are more likely to react than atoms in other states.
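To get a feel for the energies involved, the quoted temperatures can be converted into characteristic collision energies via E ≈ k_BT – a back-of-envelope estimate, not a figure from the paper:

```python
k_B = 1.380649e-23       # Boltzmann constant, J/K
eV  = 1.602176634e-19    # one electronvolt, J

# Characteristic collision energy k_B * T at the two ends of the
# 5 mK - 30 K temperature range quoted above, in electronvolts.
e_cold = k_B * 5e-3 / eV   # at 5 mK
e_hot  = k_B * 30.0 / eV   # at 30 K
print(f"{e_cold:.2e} eV to {e_hot:.2e} eV")
```

The spread of roughly four orders of magnitude in energy is what let the team map how collision energy affects the reaction.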

The research is described in Science.

Silicon device offers fast route to effective tissue therapy

Using conventional semiconductor manufacturing techniques in a clean room at Ohio State University, scientists have devised a convenient method for inserting genetic material into a selected patch of tissue. The new technique – termed “tissue therapy” – uses strong local electric fields focused by an array of nanoscale channels on a silicon wafer. Pressed against the skin, the device rapidly transports reprogramming factors to where they are needed. The treatment takes less than a second, and was shown to induce the growth of new blood vessels and neurons in mice, saving badly damaged limbs and doubling the chances of stroke recovery.

Many of the hardest conditions to treat, such as heart disease and stroke, promise to be treatable by cell or tissue therapy. Beneficial cells can be injected into the patient or delivered directly to damaged tissue. Here they facilitate healing by secreting therapeutic factors or through integration into the dysfunctional tissue.

Despite a lot of investment, limited cell resources and the need for pre-processing have kept this treatment from widespread use in the clinic. These issues can be circumvented by inserting sections of genetic code into the patient’s own cells (a process called transfection), reprogramming the tissue to change its function. Although cell reprogramming has been accomplished before in vitro, injecting cells into organisms can cause unwanted interactions, so the ability to generate the cells in vivo makes the process both more efficient and more effective.

The transfection tools available to scientists range from viruses to chemicals, nanoparticles and microscale needles, but these are either too untargeted, or too selective and time-consuming, to be used practically in specific tissues in large organisms. Writing in Nature Nanotechnology, lead author Chandan Sen and colleagues at Ohio State University report a highly convenient method for topically transfecting a region of tissue in an animal, affecting only a small area and requiring no viral vector.

The team’s device consists of a 200 μm-thick silicon wafer with T-shaped arrays of nanochannels. These channels serve to focus the electric field, which causes pores to open in the target cells. The nanochannels also deliver the reprogramming agents, which are then electrophoretically transported through the transient pores.

The researchers conducted experiments using mice to confirm that their approach had clinical potential. Mice with a transected femoral artery were administered factors that encourage blood vessel growth in the area of the cut. Within seven days of the treatment the limbs began to recover their blood flow, rather than withering away. Factors that induce neuron development were also applied to the skin of mice that had suffered a stroke. An extraction from the treated tissue was transferred to the brain, making the mice twice as likely to recover.

Although the field of cell therapy is fraught with complexity, the group’s thorough study presents a promising outlook for tissue therapy as it takes the next step on the path to clinical application.

Levitating droplets go with the flow

Look closely at a cup of coffee or tea and you might see a white mist hovering over the surface. This is thought to be micron-sized droplets of liquid, which physicists know can levitate over the surface of a hot liquid. Sometimes, the levitating droplets can even arrange themselves in a regular 2D array as they hang in the air.

This phenomenon is poorly understood, yet it has important implications for the thermodynamics of evaporation – and could also have a range of applications from chemical manufacturing to delivering drugs to patients.

Newer model

Now, Oleg Kabov and colleagues at Novosibirsk State University and the National Tomsk Polytechnic Research University in Russia as well as Southern Methodist University in the US have seen a similar array of tiny droplets over a hot solid surface. They developed a new model to explain the effect, which they say could also explain the behaviour of droplets over hot liquids.

Droplet levitation over hot dry surfaces is called the Leidenfrost effect, and most previous studies have used surfaces well above the liquid’s boiling point. However, the team did its experiments on a copper block heated to just 85 °C. At this temperature the surface can be partially covered by a thin layer of water, allowing the team to study levitation over both wet and dry surfaces. The block is monitored by a microscope connected to a high-speed camera that views an area measuring about 1 mm across.

Dry patch

The experiment begins with the copper covered in a uniform layer of water 400 μm deep. A jet of air is fired at the surface, creating a dry patch about 750 μm across. The heating is then switched on and droplets begin to form above the liquid layer. Some of these droplets then migrate to the dry patch, where they levitate.

Writing in Physical Review Letters, the team points out that the temperature of the surface is much lower than the conventional “Leidenfrost temperature”, above which a droplet will create an insulating vapour layer that both stops it from evaporating and stops it from falling onto the hot surface.

Vapour reflection

The team therefore had to develop a new model to explain this levitation over a lower-temperature dry surface. Their theory involves vapour flowing out from a droplet and reflecting from the copper surface, thereby causing the levitation. They also reckon that this outflow of vapour creates a repulsive interaction between droplets, which causes multiple droplets to form regular arrays.

Once they developed their model for dry surfaces, Kabov and colleagues revisited droplets over wet surfaces and concluded that the same evaporative effect is responsible for levitating droplets, and the formation of regular arrays.

Fewer hurricanes in the near-future, study suggests

New research predicts that North Atlantic hurricane activity will decline over the next decade and a half, due to the El Niño Southern Oscillation (ENSO) and changes in North Atlantic sea surface temperature. The open ocean is expected to see the largest decrease, with approximately four fewer tropical cyclones per decade.

Woosuk Choi from Seoul National University in Korea and colleagues used a track-pattern-based tropical cyclone model to examine the role of natural variability and anthropogenic forcing on climate in the near-future – the next one or two decades.

A predicted increase in the frequency of El Niño episodes provides unfavourable conditions for tropical cyclone formation – for example, enhanced vertical wind shear erodes the vertical structure that the storms need to maintain in order to develop. In the North Atlantic, the study shows, the cooling effects of natural variability dominate those of anthropogenic warming. This results in a cooling of the North Atlantic sea surface, which will also suppress tropical cyclone formation.

Many studies focus on cyclone genesis frequency or maximum intensity. But when considering impact, the location of the tracks is most important, as it relates to landfall. Choi and colleagues from the University of California, US, and City University of Hong Kong used a model that divides tropical cyclone tracks into four patterns. They based predictions for each pattern on climate projections from the Climate Forecast System version 2 (CFSv2) in the Coupled Model Intercomparison Project (CMIP), and compared tropical cyclone activity between 2002–2015 and 2016–2030.

Predicting cyclone activity in the near-future is complicated by uncertainties from both internal variability (natural oscillations) and external forcings such as greenhouse gases. The timescale lies between short-term predictions and long-term climate change, where in each case only one of the uncertainties dominates. Predictions in the near-future, however, are vitally important for planning mitigation strategies for extreme weather such as hurricanes.

Choi and colleagues published their article in Journal of Climate.

Copyright © 2026 by IOP Publishing Ltd and individual contributors