When I was about seven, I went on a tour of Jodrell Bank Observatory with my primary school headteacher and her kids. I remember loving every bit of it, wanting to know how everything worked and then coming home with a pack of glow-in-the-dark stars with which I covered my bedroom ceiling. I even copied some of the constellations on the packet, so that my room had its own Plough and Cassiopeia.
Over the next few years as my dad’s photography business became increasingly digital, I cobbled together and upgraded my own computer from the outdated parts. It frequently broke and I developed a knack for problem-solving to get it up and running again.
As a sixth-form student I was fortunate enough to have really inspiring maths and physics teachers, Donald Steward and Lisa Greatorex, who made these subjects not only interesting, but fun. At the same time, Brian Cox started making appearances on BBC’s Horizon and, while I wouldn’t attribute too much of my decision-making process to a TV presenter, I guess you could class me as one of the early physics students in the “Brian Cox Effect”.
What did your physics degree focus on? Did you ever consider a permanent academic career?
While I discovered a fascination for particle physics and quantum mechanics in particular, I never lost that childhood wonder about space. For my final-year project, I found myself peering into the sky through the University of Bristol’s optical telescope on the roof of the physics department. We were asked to calibrate the sensor and then test it with some observations, which granted us special access to the roof at night. I remember getting particularly twitchy during consistently cloudy nights in the month before our project was due, which nearly jeopardized our final mark. But we got a window of clear nights at the last minute and managed to secure a first for the project.
At the end of my BSc I found myself keen to apply some of my knowledge in different fields. My best marks were in the practical elements of my degree, such as my final-year experiments, and so further research was not for me. Retrospectively, perhaps the most useful bits of my degree were the programming and Physics World science-communication modules that the university was running.
How did your interest in the arts, especially television and film technologies, emerge?
I come from a very creative family. My parents are both art teachers turned photographer and graphic designer, and my sister has worked with a host of performing-arts organizations. Some of that creativity must have rubbed off on me along the way as I spent my teenage years playing music and creating short films with my friends.
After graduating from university, I was looking for opportunities that could use the analytical approach gained from my physics degree, while reconnecting with the arts that I enjoyed as a teenager. As a result, I joined Bristol’s television industry as a runner and worked my way up through a number of technical roles, looking after some exciting natural history shows for the BBC and multi-screen cinemas in Japan.
When 360 video and VR began to boom, I started app development, which introduced me to some of the innovative creative technology work that happens in Bristol.
What does your current role as “creative technologist” entail? What projects are you working on at the moment?
The South West Creative Technology Network is a partnership between four universities (UWE, Bath Spa, Falmouth and Plymouth) as well as the Watershed media centre in Bristol and the Kaleider production studio in Exeter. It’s a knowledge-exchange programme that builds connections across academia and industry in the South West to drive innovation in three areas of interest: immersion, automation and data. As a creative technologist, I get involved in all sorts of fascinating conversations with research fellows and prototypers working on these themes. I try to identify the technical hurdles they may encounter and then help work out the best route to tackle them as they arise.
The projects we’re working on include the use of motion-capture data to improve mobility in the elderly, the creation of new musical instruments in virtual reality and extending the story of a theatrical performance beyond the confines of the stage.
How has your physics background been helpful in your work, if at all?
I’d say that, in particular, I improved two skills through studying physics, and they have been invaluable in the path I have chosen since my degree. First, a solid understanding of some of the core concepts that many specific areas of physics build on, whether that’s mathematical methods or how to derive equations. Second, and the most transferable skill, is the ability to break a problem down into a variety of approaches and then systematically solve it.
Any advice for today’s students?
If you have an idea of where you want your interests to take you, then stick to that goal and go for it. That’s what got me to the university I wanted to go to, studying the degree I picked. However, if you don’t, that’s where it gets really exciting; most of my decisions since graduating have been what I consider the “best choice available to me at the time”, which has led me to where I am now. And I’m very happy with that!
A new “lipid nanotablet” that resembles a biological cellular membrane in the way that it works can perform Boolean logic operations. The device is made of nanoparticles functionalized with surface chemical ligands of DNA that act as computational units tethered to a lipid bilayer circuit board. It could find use in a host of applications, including biocomputation, nanorobotics, DNA nanotechnology, biointerfaces and smart biosensors.
“In nature, cell membranes are analogous to a circuit board as they organize a wide range of biological nanostructures (such as proteins) as units that can be thought of as tiny computers,” explains Jwa-Min Nam of Seoul National University (SNU) who led this research study. The membranes compartmentalize the proteins so that they are separated from extracellular fluids that contain information important for vital functions. Each protein receptor takes chemical and physical cues (which can be processes like ligand-binding or changes in membrane voltage) from its environment as inputs and then generates outputs. These can be structural changes or dimerization/dissociation reactions, for example.
“These nanostructures allow the membranes to dynamically interact with each other and carry out complex functions as a network,” says Nam. “The ‘biocomputing’ processes they perform are massively parallel and are key to how living systems adapt to changes in their environment.”
Synthetic cell membrane circuit board
Nam and colleagues’ lipid bilayer is to all intents and purposes a synthetic cell membrane circuit board on which information-processing nanostructures are tethered using biomolecules. In their work, they use light-scattering plasmonic nanoparticles as the circuit components (instead of proteins) and DNA as surface ligands. To perform computation, they programme the ways the tethered nanoparticles interact with one another using the surface ligands.
When placed in a solution containing DNA strands, the computing units change their structure as they sense these molecules. This is the input signal for the single-nanoparticle logic gate that then triggers particle assembly and disassembly as the output. The researchers employed high-resolution dark-field microscopy, which detects the strong and stable light scattering signals from the nanoparticles to track them and their interactions.
The SNU team says it can also couple multiple nanoparticle computing units into a reaction network and thereby wire a number of logic gates into a combinatorial circuit, such as a multiplexer, for more complex information processing. “Using this approach, which we call ‘interface programming’, we show that a pair of nanoparticles on the lipid bilayer can carry out AND, OR and INHIBIT logical operations, taking multiple inputs (‘fan-in’) and generating multiple outputs (‘fan-out’),” says Nam.
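To make the gate behaviour concrete, here is a minimal Python sketch that expresses the reported AND, OR and INHIBIT operations as a toy truth-table model of a tethered particle pair. The class, gate rules and input names are illustrative assumptions for this sketch only – the real system encodes the logic in DNA surface-ligand chemistry and reaction kinetics, not in software.

```python
# Toy model of a two-nanoparticle logic gate tethered to a lipid bilayer.
# The gate "assembles" (output True) or stays apart depending on which DNA
# input strands are present in solution. The rules below are illustrative
# only; the real device implements them through surface-ligand reactions.

from dataclasses import dataclass

@dataclass
class NanoparticleGate:
    kind: str  # "AND", "OR" or "INHIBIT"

    def output(self, input_a: bool, input_b: bool) -> bool:
        """Return True if the particle pair assembles for these inputs."""
        if self.kind == "AND":
            return input_a and input_b          # both strands needed to bridge the pair
        if self.kind == "OR":
            return input_a or input_b           # either strand alone can bridge
        if self.kind == "INHIBIT":
            return input_a and not input_b      # strand B blocks assembly of the pair
        raise ValueError(f"unknown gate type: {self.kind}")

# Print truth tables for the three gate types reported for the lipid nanotablet.
for kind in ("AND", "OR", "INHIBIT"):
    gate = NanoparticleGate(kind)
    print(kind)
    for a in (False, True):
        for b in (False, True):
            print(f"  inputs ({int(a)}, {int(b)}) -> assembled: {gate.output(a, b)}")
```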
Scalable architectures
One of the goals in this work was to use individual nanoparticles as linking nano-parts. “Our concept proves that we can reliably implement such modular and molecular (in this case DNA) computing with nanoparticles for the first time,” he tells Physics World.
“Such scalable architectures have been lacking until now because of the difficulties in wiring multiple logic gates into large electronics circuits in solution – inputs, logic gates, and outputs all diffuse uncontrollably around, and in all directions.” Indeed, previous nano-biocomputation of this kind was limited to one simple logic operation per test tube due to a lack of compartmentalization, or relied on complicated enzyme-based molecular circuits in solution. Tethering the nanoparticles means that they are separated from the surrounding solution and can only interact with one another within the plane of the lipid bilayer as they diffuse across its surface. They can thus be controlled, says Nam.
The advantage of the new approach is that it should now be possible to incorporate a variety of nanoparticles, with their intrinsic features – such as their photonic, catalytic, photothermal, optoelectronic, electrical, magnetic and material properties – into the lipid nanotablet.
“In this way, we can design a network of particles (each with their own unique properties) to autonomously respond to external molecular information on such a platform,” explains Nam. “Being able to control these nanoparticle networks in a programmed way will be very useful for developing applications like smart sensors, precise molecular diagnostics and nanorobots for biological environments. We will also be able to make hitherto inaccessible nano-bio-interfaces and biological hybrid systems.”
42. That is the “answer to the ultimate question of life, the universe, and everything” as purported in the cult novel The Hitchhiker’s Guide to the Galaxy by Douglas Adams. While one could be forgiven for wishing that the answer were as simple in reality, the characters in Adams’ tumultuous universe soon find that their question itself is flawed, and requires further deep thought.
While many, if not most, physicists are chasing after a “theory of everything”, what physicist Paul Davies and his colleagues are pursuing is something beyond uniting quantum mechanics and relativity. Instead, Davies is attempting to tie together such seemingly disparate fields as nanotechnology, molecular biology and genomics, as well as fundamental physics, chemistry, quantum mechanics and biology. The underlying bedrock of this unification is a concept that Davies describes as organized information. The author attempts to unravel how information behaves in systems as different as an atom and an embryo, and how it seemingly changes and adapts to the network it flows within.
Beginning with a detailed look at Maxwell’s eponymous demon, and the laws of thermodynamics built around it, Davies deftly segues into everything from Darwinism and collective behaviour to the evolution of cancer and quantum biology, wrapping up this bold book with a chapter on consciousness (quantum and otherwise). While the ideas in the book are not completely new, Davies’s lucid writing on this emerging scientific area is just what the pop-sci reader ordered. He is the perfect host to this admittedly dizzying journey, as he spins yarns of quantum demons, double-headed worms and everything in-between.
To find out more about his thoughts on life, I put some questions to Davies.
In some ways, Demon in the Machine is a follow-up to your 1998 book The Fifth Miracle, where you also tackle the origins of life. What has changed since you wrote that book?
The science has changed. I was motivated to write this book, in part because I am now surrounded by some very clever young people who are coming up with all sorts of wonderful ideas, but also because of advances, not only in biology, but in fundamental physics. The demon in the title of the book is something that’s beloved of all physicists: Maxwell’s demon. It’s the sort of thing you learn, you think, “Hmm, well okay. I understand,” then you move on because it’s been an inconvenient truth at the heart of physics for over 150 years. It’s only just in the last few years that people have actually built devices based on the concept. This is now part of nanotechnology – you can build devices that, in a thermal background, can actually discern individual degrees of freedom and operate mechanisms to convert heat into work or use information as a fuel or a source of energy.
I have to say, it’s on a very small scale. My favourite is the information-powered refrigerator, which is being built in Finland. Don’t expect anything from your kitchen appliances soon, but it establishes the principle that to fully understand the nature of thermodynamics, we have to take into account information as a physical quantity, and not just as some sort of airy-fairy concept that we use in daily life. That really is, I like to say, the chink in the armour of mystery that surrounds the question “what is life?”. I think we begin to see that if information can have causal leverage over matter, then that opens the way to understanding how we might adapt the laws of physics to incorporate this information thing, which is at the heart of what makes life tick.
“What is life?” is such a fundamental and huge question – does it ever overwhelm you?
Somehow it doesn’t seem to. Maybe it’s the hubris of old age or something that has motivated me to tackle this theme. Erwin Schrödinger was one of the greatest physicists of the 20th century. He was the architect of quantum mechanics. But he also gave a series of lectures in Dublin in 1943 called What is Life? that culminated in a book of the same title, which exercised a great influence. Scientists founded the field of molecular biology based on his insights. What Schrödinger did in that book was raise the possibility that there might be new physics lurking in life. We might even find a new type of physical law prevailing in it, he wrote quite explicitly.
I read that book when I was a student, so I remember thinking to myself, “Hmm, yeah life is odd.” When you think of all the things it does, it seems to somehow have its own laws, it does its own thing. Atoms just follow basic physics, but put all these stupid atoms together into a cell and they do incredibly clever things. It looks like it’s physics at the atomic level and magic at the cellular level. What’s the source of that magic? I remember thinking to myself, it’s deeply mysterious and probably will forever remain so. But throughout my career, I’ve had this feeling that I would like to understand life, not through the eyes of a biologist, although that’s interesting enough, but through the eyes of a physicist. What is life as a physical phenomenon?
Just in the last 10 years or so, I suppose, I’ve begun to see a confluence of different subjects. Partly, this is advances in nanotechnology. Partly, it is a convergence of physics and computing and biology and information theory – all these subjects are coming together in the realm of large molecules or tiny machines, where life and chemistry and physics all intersect. That’s the new frontier – the physics of the very complex, where the traditional subject boundaries melt away.
Demon in the details: In his new book, Paul Davies writes that it is information that interweaves many diverse scientific fields, and may ultimately be the answer to life’s biggest questions. (Courtesy: Allen Lane)
You write that “life = matter + information”. If so, then how do you define “non-life”?
The concept of information that is often used at the level of thermodynamics, or even quantum information, is a rather austere type of information because it’s really just bits or qubits, and it’s a head count of those. However, when you think about it, in biology there is the concept of context, or system – whichever word is better – and it’s clear that there’s a big distinction between a particular letter in DNA that is part of a gene coding for some biological functionality, and one that is just junk. So we have to generalize the concept of information, not just as raw bits, but as functional bits in some sense.
Now functionality refers to the whole system and we’ve got here right to the heart of what Schrödinger called “a new type of physical law”, because if we are to have a law that is relevant to individual molecules or particles that relies on the global context, then that is going to be a very different type of law from the traditional laws of physics, which are local in space and time.
One of my mentors was John Wheeler, the gravitational physicist. I once asked him, if he looked back at the legacy of his life, what is the takeaway idea that he would like people to have? He said “mutability”. He was convinced that nothing is fixed and even the laws of physics, ultimately, should not be, as he put it, “cast in tablets of stone, from everlasting to everlasting”. It was a wonderful, poetic turn of phrase that stuck in the back of my mind. What we’re proposing is not that there are laws of physics operating in living matter that just change with time. They’ll be fixed, but they will be a function of the state of the system.
There’s a nice analogy that I can give, which is the game of chess. Chess proceeds according to fixed rules and this leads to certain patterns of play. You can look at a board and you can analyse the state of play and, if you wanted, you could work backwards to the starting state. You might think any particular configuration of pieces on the board would be consistent with somebody’s game. Well that’s not true; it’s easy to show there are many patterns that are impossible to achieve by the fixed rules of chess. So now we can imagine a different game, which in the book I call “Chess Plus”. Suppose you’re playing chess and you wanted to even the score a little bit: maybe if black is losing by some criterion, then black is allowed to move pawns backwards instead of forward. So the rule has changed, but it’s not changed at a particular location, it’s changed because of the overall state of play. If you run computer simulations, you find that you can then reach impossible states that simply could not be reached any other way.
These states are unbounded in their complexity and in their evolvability. In other words, there are pathways to new patterns of complexity that could not be achieved through fixed rules. This is beginning to sound a bit like life. Life does things that non-living things simply can’t do. There are pathways to new forms of complexity and we can hope that if we formulate these laws correctly, that they will show us how matter can be fast-tracked to life through exploring these new types of complex pathways that could not occur in any other way.
Towards the end of my book, I conjecture how such pathways might actually be manifested in experiments – what we might look for, at the interface of physics, biology, chemistry, nanotechnology, and computing and information theory.
If we are able to identify certain informational motifs or patterns that characterize living things, we could then have a definition: when you see this particular scaling law or this particular network of information flow – especially if we see that the information processing is being done in the global degrees of freedom, and not in the local – and there’s a coherent effect, that might be a hallmark of life.
So what you’re suggesting is that there could be a spectrum, going from non-life to life?
Yes. There may be transitions from non-life to life, and it may not be a single one – there may be a sequence of transitions and for each one, there would be new informational motifs. We could define life in software terms instead of hardware.
There is an interesting corollary to this, which I’ve thought a lot about recently: if we sent a spacecraft to Enceladus and flew through the plume of material that’s spewing out from its interior, we could collect a molecular sample. Could we tell from an inventory of those molecules whether we were dealing with life, or almost-life, or what was living and is now smashed up? What would be a signature that would convince us?
It becomes very subjective for the simple reason that if we carry out this experiment, and the spacecraft finds amino acids, people wouldn’t be very excited. They’d say a simple chemistry experiment can make amino acids. If instead we found a ribosome, which is the little machine in known life that makes proteins, everybody would say, “Well it must be life. You wouldn’t get a ribosome otherwise.”
Where on that spectrum between the two is life, and can you quantify it? Can we build a life meter that would say, “Aha! Yes, it’s gone over the 50% mark, so that’s a signature of life”? It’s a very tough problem. But we’re working with Lee Cronin in Glasgow. He’s a chemist, and he thinks he can build a life meter. He’s irrepressibly confident and enthusiastic about this. The way he sees it is that you can take a particular molecule, such as a ribosome, and ask, “What are the chemical pathways by which you could assemble that, and how complex are those pathways?” You need an object that gives you a window into the process – the complexity is all in the assembly, which gives you an end product. Cronin thinks there should be a mathematical criterion that will pick out a collection of molecules and will say, “Yes, that is life.” I have hope that we will be able to do that and, in effect, find informational signatures of life beyond Earth, but using the molecular detritus as a surrogate for that signature.
When you are writing about such a burgeoning field, how do you ensure that lay readers can distinguish between the accepted science and the speculation?
For me it’s been a matter of professional pride to be clear on such distinctions. I’m happy to write about slightly wacky ideas, but I hope I always make it very clear. In the past, I have received complaints from readers who say that they found it hard to know what my view on a subject was at all thanks to my attempts to be balanced and impartial. What I try to do is be fair in reporting what the mainstream view is, and then saying that there exists a dissenting minority.
There is a chapter on consciousness in this book, which is a wacky field, in a way. I write about that in a slightly light-hearted manner. But there are copious footnotes. These are caveats, really, for the reader, as I would hate to think that I’m distorting the subject.
To hear more of this interview, check out the Physics World Weekly podcast on 28 February
The Demon in the Machine: How Hidden Webs of Information are Solving the Mystery of Life, Paul Davies, 2019, Allen Lane, 252pp, £20.00hb
The new indicator – the strength of the Arctic halocline – describes how easy it is for the colder, fresher surface waters of the Arctic to mix with the warmer, saltier waters below. It is linked to the loss of Arctic sea ice, according to the researchers who proposed it.
“We live in times of big changes in the Arctic, and the change of halocline strength plays one of the key roles,” says Igor Polyakov of the University of Alaska Fairbanks, US. “We cannot paint a complete picture of Arctic changes without knowledge of changes in its parts.”
The retreat of sea ice is one of the most recognizable effects of climate change, for both scientists and the public alike. Most of the loss appears to be driven by changes in the atmosphere, specifically greater surface air temperatures.
But that is only one half of the story. Atlantic waters funnel into the Arctic via the passageways between Greenland and the Svalbard archipelago, and between the Svalbard archipelago and mainland Norway. These waters carry enough heat to melt all the sea ice several times over. But they don’t cause this melting because the Arctic Ocean contains a natural buffer layer between the colder, fresher waters of the Arctic, and the warmer, saltier waters of the Atlantic. The more pronounced this halocline layer, the harder it is for the two waters to mix.
In 2017, Polyakov and others found evidence that the halocline in the Arctic Ocean’s eastern Eurasian Basin has weakened over recent decades, becoming more similar to that in the western Eurasian Basin, where there has long been less difference in salinity between shallow and deep waters. The researchers suggested that this change stymied the formation of sea ice in winter in the eastern Eurasian Basin. Now, however, Polyakov has gone one step further by proposing that the Arctic halocline strength is an indicator of climate change.
Together with Andrey Pnyushkov at Alaska Fairbanks and Eddy Carmack at Fisheries and Oceans Canada, Polyakov collected water-column observations dating back to the early 1980s, to understand how the role of the Arctic halocline has changed. The team characterized the halocline via the available potential energy — essentially the energy required to mix the waters above and below.
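One simple way to picture available potential energy is as the difference between the potential energy of the stratified column and that of the same column after complete mixing: the stronger the halocline, the more work mixing requires. The Python sketch below illustrates this idea with made-up density values and a simple two-layer column; it is not the exact formulation used in the study.

```python
import numpy as np

# Illustrative two-layer water column: fresher, lighter water sitting on top
# of saltier, denser Atlantic-derived water. Densities are invented values.
g = 9.81                        # m/s^2
depth = 200.0                   # m, column depth considered
n = 2000
z = np.linspace(0.0, depth, n)  # depth below the surface, m
dz = z[1] - z[0]

rho = np.where(z < 50.0, 1022.0, 1028.0)   # kg/m^3: halocline at ~50 m depth

def potential_energy(density):
    # Potential energy per unit area, with height measured above the bottom
    # of the column (h = depth - z).
    h = depth - z
    return np.sum(density * g * h * dz)

pe_stratified = potential_energy(rho)
pe_mixed = potential_energy(np.full_like(rho, rho.mean()))

# Available potential energy ~ work needed to homogenize the column (J/m^2).
# It is positive for stable stratification, and larger for a stronger halocline.
ape = pe_mixed - pe_stratified
print(f"Energy required to mix this column: {ape:.0f} J per m^2")
```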
The researchers found that the halocline in the Eurasian Basin has weakened overall, while in the Amerasian Basin, it’s strengthened. This increasing contrast between the available potential energies of the Eurasian and Amerasian basins could be a “new, straightforward climate indicator”, the team writes in Environmental Research Letters (ERL).
Polyakov and colleagues are now working on a new set of data that shows a continuation of changes in the eastern Eurasian Basin. “That is a very exciting part of our life,” says Polyakov.
Preclinical studies play a valuable role in developing novel treatment strategies for transfer into the clinic. The introduction of image-guided preclinical radiotherapy platforms has enabled radiobiological studies using complex dose distributions, with photon energies and beam sizes suitable for irradiating small animals. But there remains a drive to create platforms with ever more complex irradiation capabilities.
Beam shape modulation could improve the conformality of preclinical irradiation, but systems based on fixed aperture collimators cannot easily deliver beam sizes down to 1 mm. Jaw techniques such as the rotatable variable aperture collimator (RVAC) offer another approach, but these are challenging to implement robustly for millimetre-sized fields and are, as yet, unvalidated.
A team at MAASTRO Clinic has now proposed another option: synchronized 3D stage translation with gantry rotation for irradiations from multiple beam directions. They demonstrated that this dose-painting technique can improve the precision of radiation delivery to complex-shaped target volumes (Br. J. Radiol. 10.1259/bjr.20180744).
The increased dose conformality that dose painting offers could benefit a range of preclinical studies. “The vast majority of radiobiology studies administered non-conformal dose distributions, which is not realistic compared to clinical practice,” explains senior author Frank Verhaegen. “Also, for studies on normal tissue response, being able to exactly target certain structures can lead to more relevant research results.” He adds that the technique can also be employed to deliver non-uniform doses, to study boost methods, for example, where hypoxic tumour regions receive higher dose.
Paint the target
Verhaegen and colleagues used the SmART-ATP planning system to design plans for delivery on the X-RAD 225Cx image-guided preclinical radiotherapy research platform, and simulated the plans using a Monte Carlo (MC) model of the platform.
Dose painting uses heterogeneous irradiations from multiple directions. For each beam direction, a 2D area is defined — based on the projection of the target volume — and divided into many beam positions, each modelled with a single-beam MC simulation. The team validated their MC model using radiochromic film measurements of the field shape and dose output of several complex fields, including a 225 kVp, 2.4 mm diameter beam.
The researchers considered two scenarios based on a CT image of a mouse with an orthotopic lung tumour. In case 1, the target was the tumour, while case 2 targeted a length of spinal cord. They created dose-painting plans to deliver 10 Gy to the target using the 2.4 mm beam, resulting in 256 beams for case 1 and 280 beams for case 2. Beam-on times were optimized to achieve a D95% and V95% of 100%.
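As a rough illustration of what optimizing beam-on times means, the hypothetical sketch below solves a tiny non-negative least-squares problem: given a made-up matrix of dose delivered per second of beam-on time for each target voxel and beam, it finds beam-on times that approach a uniform 10 Gy prescription and then reports D95% and V95%. The actual SmART-ATP optimization is considerably more sophisticated than this toy example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_voxels, n_beams = 60, 12           # tiny toy problem; the paper used 256-280 beams
prescription = 10.0                  # Gy to the target

# Made-up dose-deposition matrix: dose (Gy) per second of beam-on time that
# each beam delivers to each target voxel. In the real workflow this comes
# from the Monte Carlo simulation of every small-beam position.
dose_per_second = rng.uniform(0.0, 0.05, size=(n_voxels, n_beams))

# Non-negative least squares: find beam-on times t >= 0 such that
# dose_per_second @ t is as close as possible to the prescription everywhere.
target_dose = np.full(n_voxels, prescription)
beam_on_times, residual = nnls(dose_per_second, target_dose)

achieved = dose_per_second @ beam_on_times
d95 = np.percentile(achieved, 5)                       # dose received by 95% of target voxels
v95 = np.mean(achieved >= 0.95 * prescription) * 100   # % of voxels getting >= 95% of 10 Gy

print(f"Total beam-on time: {beam_on_times.sum():.0f} s")
print(f"D95%: {d95:.2f} Gy,  V95%: {v95:.0f}%")
```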
For comparison, the researchers also created plans using a fixed aperture collimator and an RVAC. They selected four gantry angles for case 1 (anterior, posterior, lateral and medial directions) and two opposed lateral angles for case 2. Fixed aperture plans used the smallest available collimator that achieved complete target coverage: a 5 mm diameter beam for case 1 and a 20 × 20 mm beam for case 2. RVAC plans were created using the optimal angle and beam aperture size.
Balancing benefits
All irradiation methods achieved good target coverage and sharp cumulative dose–volume histograms for both cases. For the lung tumour irradiation, the increased conformality of the RVAC and dose-painting methods resulted in considerably lower doses to the left lung, trachea and heart compared with fixed aperture irradiation. Dose painting led to slightly lower dose homogeneity in the target.
For the spinal cord case, dose painting resulted in slightly better dose homogeneity in the target than achieved by the other two methods. It also gave the best conformality, delivering a slightly larger volume of low dose but a considerably lower volume of high dose to both lungs. Fixed aperture irradiation resulted in far higher doses to all avoidance volumes.
These results suggest that for targets that match available fixed aperture field shapes and sizes, dose painting adds limited value. But where no fixed aperture collimators match the required field size, dose painting may surpass the plan quality achievable with RVAC. A major benefit of dose painting is that it can achieve conformal irradiation of concave target volumes. It also provides increased versatility to avoid organs-at-risk and deliver heterogeneous dose distributions to targets.
One disadvantage of dose painting is the greater radiation delivery duration, which can increase the risk of motion errors. Case 1, for example, required a total beam-on time of 1587 s with dose painting, compared with roughly 225 s for the other approaches. This results from the unavoidable trade-off between larger beam sizes with higher dose rates and smaller beams with higher spatial resolution but increased beam-on times. The team note that beam size should ideally be determined by the planning system based on time and plan quality constraints.
Dose painting also requires considerably more time for radiation planning and calculation. The authors suggest that with tighter process integration and further software optimization, data processing issues should not limit practical implementation of this approach.
The team note that since this irradiation strategy only requires more advanced software and no hardware modifications, it can be used to increase the versatility of current-generation image-guided preclinical irradiation platforms. “The plan now is to implement this new method in our treatment planning system and combine it with beam-optimization methods,” says Verhaegen.
Optical fibres can be used to monitor neural activity by detecting the presence of a fluorescent protein. Researchers in the US have shown that bundles of hundreds or thousands of optical microfibres separate and spread after penetrating the brain, occupying a 3D volume of tissue. They also demonstrated that, when sampling density is high enough, source-separation techniques can be used to isolate the behaviour of individual neurons. The work will further the field of systems neuroscience by revealing how neural circuits relate to animal behaviour (Neurophotonics 10.1117/1.NPh.5.4.045009).
A relatively new way to monitor and influence brain activity, optical interfacing is used in transgenic animals whose neurons have been modified — either before birth or by an engineered virus introduced into the brain — to express a specific protein.
The protein used by Nathan Perkins and colleagues, at Boston University and the University of California, San Diego, changes shape when bound to calcium and, in this state, fluoresces green in response to blue light. Calcium-ion kinetics underlie one type of action potential in brain cells, so observing how these ions accumulate and dissipate gives an indication of neural activity.
Because of the scattering properties of brain tissue, direct illumination of target neurons deeper than about a millimetre is impossible to achieve with precision. Instead, excitation light and the resulting fluorescent signal are typically delivered and collected using implanted optical fibres.
The group previously demonstrated that bundles of several thousand optical microfibres — each just 8 μm across, or about the size of a neuron cell body — can be introduced to a depth of 4 mm without badly damaging the brain or provoking an immune response. As the tissue resists penetration by the microfibres, tiny mechanical forces cause their trajectories to diverge, spreading the individual tips over an area of about 1.5 mm².
“The method can be readily extended down to 5 mm without any modification. Beyond that, a new challenge emerges, which is the flexibility of the fibres,” says Perkins. “This is not deep at all by human brain standards, but for mice and songbirds this allows us access to much of the brain.”
Simulated signals
Using stochastic simulations of the set-up, Perkins and colleagues modelled the fluorescence signal obtained for different neuron densities and microfibre counts, finding that the latter had the greatest influence on how many neurons could be detected simultaneously. At the lower end of the range, simulating a bundle composed of fewer than 200 microfibres, there was little overlap between excited regions, and each tip could detect the activity of a separate population of just two or three neurons.
As the microfibre count increased, the excitation light was delivered more uniformly throughout the modelled volume, causing more and more neurons to fluoresce at a level above the detection threshold. With a sufficient density of microfibres — and therefore illumination strength — the field-of-view of each tip expanded to encompass so many fluorescing neurons that any single cell would likely be recorded by more than one microfibre.
Under such a regime, the researchers showed, fluorescence signals picked up by separate microfibres can be correlated, enabling source separation techniques to be used to isolate the activity of individual neurons. The method is easiest to apply when signals are sparse (so the signature of a given neuron is not drowned out by background activity), and is especially effective with fluorescent indicators that peak sharply and fade quickly thereafter.
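As an illustration of the idea – not the authors’ actual analysis pipeline – the sketch below simulates a few neurons with sparse, rapidly decaying calcium-like transients, mixes them onto a larger set of “microfibre” channels, and applies non-negative matrix factorization to recover per-neuron traces from the correlated channel signals. All signal parameters here are invented for the demonstration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

n_neurons, n_fibres, n_samples = 5, 40, 2000

# Sparse calcium-like transients: occasional events with a fast exponential decay.
kernel = np.exp(-np.arange(100) / 20.0)
traces = np.zeros((n_neurons, n_samples))
for i in range(n_neurons):
    events = rng.random(n_samples) < 0.005          # sparse firing
    traces[i] = np.convolve(events.astype(float), kernel)[:n_samples]

# Each fibre picks up a non-negative weighted mix of nearby neurons, plus noise.
mixing = rng.random((n_fibres, n_neurons)) ** 3      # most weights small, a few large
observed = mixing @ traces + 0.02 * rng.random((n_fibres, n_samples))

# Factorize the fibre signals into components; with sparse activity each
# component should track one neuron (up to scaling and permutation).
model = NMF(n_components=n_neurons, init="nndsvda", max_iter=500)
fibre_weights = model.fit_transform(observed)        # shape (n_fibres, n_neurons)
recovered = model.components_                        # shape (n_neurons, n_samples)

# Crude check: best correlation of each true trace with a recovered component.
for i in range(n_neurons):
    corr = max(abs(np.corrcoef(traces[i], recovered[j])[0, 1]) for j in range(n_neurons))
    print(f"neuron {i}: best match correlation = {corr:.2f}")
```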
Until now, this degree of detail had been available only for superficial areas of the brain that can be imaged directly. At greater depths, researchers had observed individual neurons in isolation, or broad patterns of activity at the regional scale, but the long-term behaviour of a population of neurons was inaccessible. Understanding dynamics at this level is crucial to learning how the brain encodes information.
Music on the brain
An example of the sort of phenomenon that can be investigated with this approach has fascinated Perkins since the early days of his research: “Zebra finches are remarkable birds, where each male bird learns a unique song, inspired by that of their tutor (usually their father),” he explains. “They can then produce this song with amazing precision hundreds or even thousands of times per day for the rest of their life.”
The mystery, says Perkins, is that “neurons are able to produce this precise, stable behaviour even though they, like much of biology, are noisy and inconsistent”.
Previous experiments using optical recording at the brain’s surface, and surgically implanted electrodes at depth, showed that the patterns of activity behind the bird’s vocal performance remain stable even as the population of participating neurons varies. Although similar in form to the splayed optical microfibres used in the most recent research, the deep electrodes could not track the activity of individual neurons for longer than a day.
“Electrophysiological and optical interfacing offer very distinct trade-offs,” says Perkins. “Electrophysiological has much better time resolution and requires no fluorescent protein, but electrodes rarely can record from a specific cell for a long period of time. Optical recording has more longevity, allowing specific neurons to be tracked over many days, and the ability to target particular neural sub-populations.”
The optical method has another advantage in that it can control neural activity as well as monitor it. This is achieved by engineering neurons so that they express the proteins necessary to form light-gated ion channels — light-sensitive versions of the structures that regulate ion transport across cell membranes.
Because these ion channels and the fluorescent indicator proteins respond to different frequencies of light, the two processes could be made to occur simultaneously. “By being able to record what the neurons are doing in one context and re-activating the same neurons in a different context, it is possible to further understand their role in controlling behaviour,” says Perkins.
An international team of researchers has proposed an ambitious new experiment that would involve firing neutrinos from a particle accelerator in Russia to a detector 2500 km away in the Mediterranean Sea. The researchers claim that the facility would provide unparalleled insights into the properties of neutrinos and elucidate the mystery of why matter dominates over antimatter in the universe.
Neutrinos are fundamental particles that are created in huge numbers by cosmic sources but can also be produced by nuclear reactors and particle accelerators. As they interact only weakly with matter, they are difficult to detect. There are currently three known types of neutrino, which can oscillate between their different “flavours” as they travel. It was long believed that neutrinos have no mass, but we now know that they have tiny masses, corresponding to three discrete mass states. Scientists have not yet been able to determine the relative ordering of those three masses, nor the extent to which neutrinos violate charge-parity symmetry — a finding that could help explain why the universe is dominated by matter rather than antimatter.
The experiment proposal has substantial support within the particle physics community in Europe as well as in Russia
Dmitry Zaborov
There are already several “long-baseline” accelerator neutrino experiments in operation or under development that are attempting to shed further light on the nature of neutrinos. The T2K experiment in Japan sends neutrinos with energies of around 600 MeV from the Japan Proton Accelerator Research Complex in Tokai to the Super-Kamiokande detector some 295 km away. The US-based NOvA experiment, meanwhile, operates at 2 GeV over the 810 km distance between Fermilab near Chicago and a detector in Minnesota. The planned Deep Underground Neutrino Experiment (DUNE), which is currently under construction, will produce a 3 GeV beam of neutrinos at Fermilab that is then sent 1300 km to an underground detector in South Dakota.
Maximum oscillation
Researchers in Europe have now proposed their own long-baseline facility. A collaboration of 90 researchers from nearly 30 research institutes has published a letter of interest to build the Protvino-ORCA (P2O) experiment. In the letter, they explain how they would upgrade a 70 GeV synchrotron particle accelerator at Protvino — 100 km south of Moscow — to generate a neutrino beam. According to the plans, this would then be sent to the Oscillation Research with Cosmics in the Abyss (ORCA) detector, which is currently being built off the coast of Toulon, France, by the KM3NeT collaboration.
Neutrinos achieve maximum oscillation at different distances depending on their energy. With its 2595 km baseline, P2O would reach maximum oscillation at neutrino energies of around 4–5 GeV. Astroparticle physicist Paschal Coyle, who belongs to the KM3NeT collaboration, says that these parameters make P2O ideal for disentangling the effects of mass ordering and charge-parity violation. “In other long baseline experiments there are ambiguities that make it harder to decouple the two contributions,” he adds.
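As a back-of-the-envelope check on those numbers: in the two-flavour vacuum approximation the oscillation probability varies as sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV, so the first oscillation maximum falls at E = 2 × 1.27 Δm² L/π. The short sketch below applies this with the measured atmospheric mass splitting of roughly 2.5 × 10⁻³ eV²; matter effects, which are ignored here, shift the exact energies.

```python
import math

# First vacuum oscillation maximum: 1.27 * dm2[eV^2] * L[km] / E[GeV] = pi/2
dm2_atm = 2.5e-3      # eV^2, approximate atmospheric mass splitting
baselines_km = {"T2K": 295, "NOvA": 810, "DUNE": 1300, "P2O": 2595}

for name, L in baselines_km.items():
    E_max = 2 * 1.27 * dm2_atm * L / math.pi   # GeV
    print(f"{name}: baseline {L} km -> first oscillation maximum near {E_max:.1f} GeV")
```

For T2K’s 295 km baseline this gives roughly 0.6 GeV, matching the 600 MeV quoted above, while for P2O’s 2595 km baseline it gives about 5 GeV in vacuum, in the same ballpark as the 4–5 GeV range cited for the experiment.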
The realisation of the P2O experiment will, however, not come soon. It would require the funding and construction of a new neutrino beamline at the accelerator in Protvino, aligned towards the ORCA site, as well as increases in the accelerator’s beam power from 15 kW to at least 90 kW. It may also require an upgrade to the ORCA detector, which has been designed to detect atmospheric neutrinos and is still under construction.
Dmitry Zaborov from the Kurchatov Institute in Moscow, who is one of the authors of the letter, says that P2O has been proposed for inclusion in the upcoming update to the European strategy for particle physics and also has the support of the KM3NeT/ORCA community. “The experiment proposal has substantial support within the particle physics community in Europe as well as in Russia,” he adds. The strategy update is due to be concluded in May 2020.
Explaining your research, especially as a PhD student, can be a struggle. But communicating it via dance – that’s a challenge. Last week, the 11th annual “Dance your PhD” contest, sponsored by Science Magazine and the American Association for the Advancement of Science (AAAS), selected its winner: a physicist working on superconductivity.
Out of 50 submissions, the judges chose the work of Pramodh Senarath Yapa, who’s pursuing his doctorate at the University of Alberta, Canada. The topic of his research, superconductivity, relies on electrons pairing up when cooled below a certain temperature, and imagining these electrons as dancers was a natural choice for Yapa.
If you’ve got any special occasions to celebrate with friends, a good idea would be to treat yourselves to a cheese fondue. Wildly popular in the 1970s, this Swiss dish is now making a comeback.
It’s timely, then, that scientists have uncovered the secret of the perfect cheese fondue – which they say will optimize both texture and flavour. They found that you can prevent your fondue mixture from separating by including a minimum concentration of starch, which is 3 g for every 100 g of fondue. Also, according to this research, adding a bit of wine can help the mixture flow and taste better.
Next month marks the 30th anniversary of WorldWideWeb, the world’s first ever web browser. Back in 1989 Sir Tim Berners-Lee proposed a global hypertext system to solve the growing problem of information loss at CERN. This system, which he later named the “World Wide Web”, has since evolved into something we all use in our everyday lives.
In honour of this anniversary, a CERN-based team has rebuilt WorldWideWeb, so it can now be simulated and viewed in any modern browser. For those of you who are feeling nostalgic or curious, CERN has released links not only to the rebuilt browser, but also to some helpful instructions and explanations on how to use it.
Four years have passed since Physics World proclaimed “Science cleans up at the Oscars” — and things have only got better since then. Last year, The Shape of Water became the first sci-fi film to win Best Picture; this year, Black Panther might make it back-to-back wins for the genre (although Roma — a drama by Alfonso Cuarón, who won Best Director for Gravity in 2014 — is the bookmakers’ favourite).
The Shape of Water and Black Panther are both wonderful science-themed films. In particular, the Black Panther character Princess Shuri has been hailed as a role model for encouraging girls to study STEM subjects. But what are the best films about science or scientists? It’s a question you might like to discuss as you watch this year’s Oscars ceremony, which begins on Sunday 24 February at 5 p.m. Los Angeles time.
To help get your discussion under way, below is my suggestion for the top five films about science or scientists – in no particular order. They span 51 years and include two Stanley Kubricks, one Ridley Scott, one Steven Spielberg, and one film by the not-so-famous Shane Carruth.
None of these films were big Oscar winners: they only have four wins between them, and none of those were in the “big five” categories of Best Picture, Best Director, Best Actor, Best Actress or Best Screenplay. But all of them capture a special something about science or scientists.
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb
1964 | Director: Stanley Kubrick
Dr. Strangelove is best known as a black comedy, with the actor and comedian Peter Sellers playing three of the lead roles. (Sellers was paid one million dollars for the film, leading Stanley Kubrick to remark: “I got three for the price of six.”) It is also a brilliant pre-echo of the kinds of issues discussed by Physics World‘s Anna Demming in her recent review of Hello World: How to be Human in the Age of the Machine: “when it comes to the stars of the numerous hapless human–machine encounters recounted throughout the book, their ill-advised approach was far from obvious”.
Fun Oscars fact Dr. Strangelove was the first sci-fi film to be nominated for Best Picture. It lost to My Fair Lady.
2001: A Space Odyssey
1968 | Director: Stanley Kubrick
Let’s be honest: the day-to-day work that leads to scientific discovery can sometimes be mind-numbingly boring. In the original 161-minute version of 2001, Kubrick tried to capture this monotony with a lengthy scene where astronaut Dave Bowman simply jogs around and around (and around) the interior of the Discovery One spacecraft. It worked a bit too well: at the film’s premiere, the audience began booing, hissing, and saying “Let’s move it along” and “Next scene”. Following that disastrous premiere — 241 people walked out of the theatre, including executives from the production company MGM — Kubrick cut 19 minutes from the film.
Fun Oscars fact Kubrick was beaten to Best Director for 2001 by Carol Reed, who directed the musical Oliver! (“I think I better think it out again,” as Fagin might have said about one of the biggest snubs in Oscars history.)
Jurassic Park
1993 | Director: Steven Spielberg
Steven Spielberg’s third sci-fi blockbuster – after Close Encounters of the Third Kind (1977) and E.T. the Extra-Terrestrial (1982) – is the film most cited by scientists and science journalists when explaining new findings to the public. Between 1998 and 2017, Jurassic Park was referenced 21 times in Nature News, from “Jurassic Park got it right” in an article about how theropod dinosaurs such as velociraptors used their tails, to “Say goodbye to Jurassic Park” in a discussion about the difficulty of restoring species that have been extinct for more than a few thousand years. You’ll even find the film’s best quote being used to start an article in Physics World.
Fun Oscars fact Of the films in this top five list, Jurassic Park is the only one to have had a good night at the Oscars — it won all three categories in which it was nominated (Best Sound Editing, Best Sound Mixing and Best Visual Effects). The only other Oscar winner in this list is 2001 (Best Visual Effects).
Primer
2004 | Director: Shane Carruth
As Niels Bohr once said: “How wonderful that we have met with a paradox. Now we have some hope of making progress.” He would have enjoyed Primer, a cult independent film in which two scientists — along with us, the audience — are brutally confronted with the paradoxes of time travel. It is the ultimate “hard” sci-fi film. Where 2001 was fairly light on exposition, Primer is an exposition-free zone. For 2001, Arthur C Clarke’s novel helpfully filled in the gaps; for Primer, there are (mercifully) a number of fan websites that do a fabulous job of explaining how it all works.
Fun Oscars fact Primer is definitely not the kind of film that gets nominated for Oscars. However, it did win the Grand Jury Prize at the Sundance Film Festival, which is a prestigious event for independent film-makers.
The Martian
2015 | Director: Ridley Scott
If this were a list of the top five sci-fi films, Ridley Scott would probably have two entries: Alien (1979) and Blade Runner (1982). But this is a list of the top five films about science or scientists — and there is no film better than The Martian at showing us a scientist using science (and, on occasion, duct tape) to solve problems. In the words of astronaut Mark Watney: “In the face of overwhelming odds, I’m left with only one option. I’m gonna have to science the shit out of this.”
Fun Oscars fact The Martian, which was nominated for seven Oscars in 2016, and Interstellar, which received five nominations the year before (it won Best Visual Effects), both feature Matt Damon marooned on an alien planet.
Almost seven years after the discovery of the Higgs boson at CERN, how would you sum up the current state of particle physics?
We are at a very exciting time in particle physics. On the one hand, the Standard Model – the theory that describes the elementary particles we know and their interactions – works very well. All the particles predicted by the Standard Model have been found, with the Higgs boson – discovered at the Large Hadron Collider (LHC) in 2012 – being the last missing piece. In addition, over the past decades the predictions of the Standard Model have been verified experimentally with exquisite precision at CERN and other laboratories around the world. On the other hand, we know that the Standard Model is not the ultimate theory of particle physics because it cannot explain observations such as dark matter and the dominance of matter over antimatter in the universe, as well as many other open questions, so there must be physics beyond the Standard Model.
Precise measurements of known particles and interactions are just as important as finding new particles
Is it concerning that the LHC has failed to spot evidence for particles beyond the Standard Model?
The discovery of the Higgs boson is a monumental discovery. It is one that has shaped our understanding of fundamental physics and has had an enormous impact not only on particle physics but also on other fields such as cosmology. We are now confronted with addressing other outstanding questions and this calls for new physics, for example, new particles and perhaps new interactions. We have found none so far, but we will continue to look. To improve our current knowledge and to detect signs of new physics, precise measurements of known particles and interactions are just as important as finding new particles.
How are you tackling this at the LHC?
The Higgs boson is related to the most obscure sector of the Standard Model, as the part that deals with the Higgs boson and how it interacts with the other particles raises many questions. So we will need to study the Higgs boson in greater detail and with increasing precision at current and future facilities, which could be the door into new physics. At the LHC, we are addressing the search for new physics on the one hand by improving our understanding of the Higgs boson and on the other hand by looking for new particles and new phenomena.
CERN recently released plans for the Future Circular Collider (FCC) – a huge 100 km particle collider that would cost up to $25bn. Given the huge cost, is it a realistic prospect?
We are currently studying possible colliders for the future of particle physics beyond the LHC. We have two ideas on the table. One is the Compact Linear Collider (CLIC), which would produce electron–positron collisions from 380 GeV – to study the Higgs boson and the top quark – to 3 TeV. The other is the FCC, a 100 km ring that could host an electron–positron collider as well as a proton–proton machine operating at a collision energy of at least 100 TeV. We are currently at the stage of design studies, so neither of these projects has been approved. At the beginning of next year, the European particle-physics community, which is updating the roadmap for the future of particle physics in Europe, will hopefully give a preference to one of them. Both CLIC and the FCC would be realized in several stages so that the cost will be spread over decades. As we push the technologies, which will also benefit society at large as the history of particle physics shows, we should also be able to reduce the cost of these projects.
Accelerators have been our main tool of exploration in particle physics for many decades and they will continue to play a crucial role also in the future
How might the FCC expand our knowledge of the Higgs?
The FCC as an electron–positron collider would allow us to measure many of the Higgs couplings – the strength of its interactions with other particles – with unprecedented precision. The proton–proton machine would complement these studies by providing information on how the Higgs boson interacts with itself and how the mechanism of mass generation developed at a given time in the history of the universe. Both machines combined would give us the ultimate precision on the properties of this very special and still quite mysterious particle.
Japan is expected to give some indication about plans to build the International Linear Collider (ILC) in March. If Japan goes ahead, would CERN get behind the ILC as the next big machine in particle physics?
The fact that Japan is considering building a linear electron–positron collider demonstrates that there is great interest in the study of the Higgs boson as an essential tool for advancing our knowledge of fundamental physics. If Japan decides to go ahead with the ILC, it will undertake negotiations with the international community – Canada, Europe and CERN, the US and other possible partners – to build a strong collaboration. In this case, the most likely option for CERN would be to build a proton–proton circular collider that is complementary to the ILC.
If Japan goes for the ILC and China opts to build its own 100 km Circular Electron Positron Collider (CEPC), is there a danger CERN would get left behind?
The ILC and the CEPC are both electron–positron colliders. We know that we also need a proton–proton collider, which would allow us to make a big jump in energy and search for new physics by producing possible new, heavy particles. The FCC proton–proton collider would have an ultimate collision energy almost a factor of 10 larger than the LHC and would increase our discovery potential for new physics significantly.
You are halfway through your term as director-general of CERN; what are your plans for the second half?
One major goal in the months to come for our community, including myself, is to update the European strategy for particle physics – a process that will be concluded in May 2020. We will have to identify the right priorities for the field and start preparing the post-LHC future. Accelerators have been our main tool of exploration in particle physics for many decades and they will continue to play a crucial role also in the future. The outstanding questions are compelling and difficult and there is no single instrument that can answer them all. For instance, we don’t know what the best tool is to discover dark matter. It could be an accelerator, an underground detector looking for dark matter particles from the intergalactic halo, a cosmic survey experiment or something else. Thus, we have to deploy all of the experimental approaches that the scientific community has developed over the decades and accelerators have to play their part.
Collaborations in particle physics are getting larger and larger, sometimes consisting of thousands of scientists. Do you think there is a danger of “group think”?
Collaboration is very much in the DNA of particle physics. CERN brings together 17,000 physicists, engineers and technicians from more than 110 different countries around the world. Collaboration is fundamental because the type of physics we do requires instruments such as accelerators, detectors and computing infrastructure that no single country could ever realise alone. So we need to pull together the strengths, the brains and the resources of many countries to be successful in this endeavour. Small projects can obviously also do great science, but the Higgs boson, gravitational waves and neutrino oscillations could not have been discovered by small experiments conducted by small groups. Both small and big projects are needed; it depends on the question you want to address.
What role can science diplomacy play in today’s turbulent times?
Science can play a leading role in today’s fractured world because it is universal and unifying. It is universal because it is based on objective facts and not on opinions – the laws of nature are the same in all countries. Science is unifying because the quest for knowledge is an aspiration that is common to all human beings. Thus, science has no passport, no race, no political party and no gender. Clearly, places like CERN and other research institutions cannot solve geopolitical conflicts. However, they can break down barriers and help young generations grow in a respectful and tolerant environment where diversity is a great value. Such institutions are brilliant examples of what humanity can achieve when we put aside our differences and focus on the common good.