
Celebrate 30 years of the World Wide Web with the March 2019 issue of Physics World

The dark side of social media
Social media can bring people together for good, but it can also connect supporters of terror

      It was 30 years ago this month that Tim Berners-Lee, then a physicist-turned-computer-scientist at CERN, published a document entitled “Information management: a proposal”. The document described a way to let the thousands of scientists at the lab keep track of all the information needed to build and operate the upcoming Large Hadron Collider. Envisaging the use of hypertext to link documents, Berners-Lee’s proposal was the birth of what became the World Wide Web.

      This month you can celebrate the 30th anniversary of the Web with a special issue of Physics World magazine. In the cover feature of the 80-page issue, which you can also read here, Neil Johnson from George Washington University in the US describes how physics can help in the battle against terrorism, by tracking the birth, growth and death of extremist groups who have gathered together online through social media. The video above provides an overview of the feature.

      Cover of the March 2019 issue of Physics World magazine

      Elsewhere in the issue, you can find out how machine learning and artificial intelligence are affecting physics – from statistical and medical physics to quantum and materials science. There’s also a great feature on how computing and simulation have evolved into the “third pillar” of science alongside theory and experiment. Problem is, too many physicists insist on developing their own software from scratch, despite not always being the best coders in the world. Plus, they love to redeploy successful software for new applications, leading to code that quickly gets bloated and inefficient.

      For a bit of fun, try our special Internet and Web-themed cryptic crossword and take a peek at a special, two-page graphic by Jess Wade outlining the history of the Web. If you don’t get all her references, you’ll just have to look them up – on the Web. And if you fancy a career of your own in computing, software or IT, check out what Federico Carminati, from CERN’s openlab, has to say.

      You can enjoy the March 2019 issue of Physics World magazine via our digital apps for iOS, Android and Web browsers (membership of the Institute of Physics required). Let us know what you think about the issue on Twitter, Facebook or by e-mailing us at pwld@iop.org.

      For the record, here’s a run-down of what’s in the issue.

      • Hi-tech firms seek clarity amid Brexit confusion – As the UK prepares to leave the European Union this month, leaders of Britain’s industrial-physics community are eyeing up their options. Margaret Harris reports

      • Vague but exciting – James McKenzie reflects on how the World Wide Web has transformed every aspect of our lives since its creation at CERN 30 years ago

      • A frame of mind – An ongoing debate in artificial-intelligence research reveals productive interactions between cognitive scientists and philosophers, says Robert P Crease

      • Out of the margins – Janice Hudgings and Chaelee Dalton describe how integrating pro-equity material into a standard physics curriculum can improve the learning experience of students from under-represented groups

      • The dark side of social media – Social media can bring people together for good, but it can also connect supporters of terror, extremism and hate. Neil Johnson shows how physics can shed light on this darker side of our online world

      • The third pillar – Computing has quickly evolved to become the third “pillar” of science. But to reap its true rewards, researchers need software code that is flexible and can be easily adapted to meet new needs, as Benjamin Skuse finds out

      • A learning revolution – The groundwork for machine learning was laid down in the middle of last century. But increasingly powerful computers – harnessed to algorithms refined over the past decade – are driving an explosion of applications in everything from medical physics to materials, as Marric Stephens discovers

      • Life, the universe and everything – Writer, broadcaster and physicist Paul Davies‘ latest book grapples with the laws that govern the emergence of life. Tushna Commissariat reviews The Demon in the Machine and questions Davies about the science and the motivations behind his new work

      • What has the Earth ever done for us? – Ian Randall reviews Origins: How the Earth Made Us by Lewis Dartnell

      • Particles of the future – Federico Carminati, computer physicist at CERN openlab, talks to Tushna Commissariat about career opportunities in high-energy physics and computing

      • Cyberspace cryptic crossword – As part of the World Wide Web’s tricennial celebrations, put your wetware processors through their paces with this Internet-and-computer-themed cryptic crossword compiled by Ian Randall

       

      Virtual lens enhances X-ray microscopy

      A new computational imaging technique promises to improve the resolution of transmission X-ray microscopy (TXM). Fourier ptychography, in which the objective is moved during acquisition, was developed for use at visible wavelengths in the last few years. Now, researchers at the Paul Scherrer Institute (PSI) and ETH Zürich in Switzerland have demonstrated a variation that works with X-rays, providing quantitative phase- and absorption-contrast images at high resolution. The method will allow biological samples to be studied in more detail than is currently possible (Science Advances 10.1126/sciadv.aav0282).

      Given a perfect set of optical components, the resolution of an imaging system is limited by the wavelength of the light being gathered. X-rays, then, with wavelengths orders of magnitude shorter than visible light, should allow for images hundreds of times sharper than those produced using optical microscopes. The problem is, X-ray optical components are far from perfect.
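The wavelength argument can be put in rough numbers with the Abbe limit, d = λ/(2NA). A minimal sketch, assuming an idealized numerical aperture that is the same in both cases:

```python
# Abbe diffraction limit: d = wavelength / (2 * NA).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2 * numerical_aperture)

visible = abbe_limit_nm(500.0, 0.9)  # green light, a good optical objective
xray = abbe_limit_nm(0.5, 0.9)       # X-rays, same idealized NA (assumed)
ratio = visible / xray               # X-rays resolve ~1000x finer detail
print(visible, xray, ratio)
```

In practice real X-ray optics fall far short of such an NA, which is exactly the problem the paragraph above describes.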

      One of the sources of compromise in X-ray optics is the objective lens, which focuses light from the target onto the detector. An ideal lens for high-resolution imaging is one with a large numerical aperture, meaning that it captures light over a large angle of incidence. Because X-rays are not refracted significantly by any known material, X-ray optics has no direct equivalent to the glass lens used to focus light in visible-light microscopy.

      Simulated objective

      Instead of refracting lenses, X-ray microscopes are commonly built around a Fresnel zone plate (FZP), which focuses radiation by diffraction. PSI’s Klaus Wakonig, and colleagues at PSI and ETH Zürich, used just such a device, but they moved the FZP parallel to the imaging plane between acquisitions. This let the researchers sample a much greater portion of the diffracted beam, so when they reconstructed an image from the separate measurements, it was as if they had used a physical objective with a much larger numerical aperture.
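The synthetic-aperture idea can be sketched numerically. This toy model is not the PSI/ETH reconstruction code: it omits the phase-retrieval step by assuming the complex field is known, and simply shows that shifted circular pupils sample different regions of the object's Fourier transform, so combining them covers a larger effective aperture than any single acquisition:

```python
import numpy as np

def pupil(shape, radius, center):
    """Boolean mask of a circular pupil in the Fourier plane."""
    y, x = np.indices(shape)
    cy, cx = center
    return (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2

n = 128
obj = np.zeros((n, n))
obj[40:90, 40:90] = 1.0                      # toy test object
F = np.fft.fftshift(np.fft.fft2(obj))        # its (complex) spectrum

r = 10                                       # radius of one low-NA pupil
shifts = [(0, 0), (0, 15), (0, -15), (15, 0), (-15, 0)]

combined = np.zeros((n, n), dtype=complex)
coverage = np.zeros((n, n), dtype=bool)
for dy, dx in shifts:                        # one shifted acquisition each
    mask = pupil((n, n), r, (n // 2 + dy, n // 2 + dx))
    combined[mask] = F[mask]
    coverage |= mask

single_area = pupil((n, n), r, (n // 2, n // 2)).sum()
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(combined)))
print(coverage.sum(), single_area)           # combined coverage is larger
```

The combined Fourier-plane coverage plays the role of the "virtual lens": the reconstruction behaves as if a single objective with a larger numerical aperture had been used.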

      X-ray phase contrast image

      Although the researchers’ demonstration used coherent X-rays from a synchrotron, they think the same benefits could be realised by making quite simple modifications to existing TXM sources. “This is at the core of our current research interests: how to relax conditions of stability, beam manipulation and coherence such that Fourier ptychography may become applicable at a wider range of instruments?” says Andreas Menzel, the project’s principal investigator.

      One aspect that the researchers are confident of improving is the speed of the process. At the moment, reconstructing a given image requires more than a hundred separate acquisitions of a few seconds each, with the objective and detector moved every time.

      “Indeed, Fourier ptychography is commonly slower than other full-field imaging techniques,” says Menzel. “However, larger detectors are currently being developed which will enable us to keep the detector at the same position. Due to the small scan range and the low weight of FZPs, the remaining scan can be orders of magnitude faster.”

      Biological applications

      If current TXM apparatus can be adapted to use X-ray Fourier ptychography, even modestly equipped laboratories could be given a significant new capability. Biological materials typically vary little in how strongly they absorb X-rays, so images can fail to reveal important structural details. The phase of transmitted radiation, on the other hand, is much more sensitive to differences in composition, meaning phase-contrast images can show features too subtle for standard TXM to capture.
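The phase advantage can be put in rough numbers. For light elements at hard X-ray energies, the phase term δ of the refractive index n = 1 − δ + iβ is typically around a thousand times larger than the absorption term β, so a small feature shifts the phase measurably while barely attenuating the beam. The values below are assumed, order-of-magnitude figures for illustration:

```python
import math

# Assumed order-of-magnitude values for soft tissue at hard X-ray energies:
# refractive index n = 1 - delta + i*beta, with delta ~1000x beta.
delta, beta = 1e-6, 1e-9
wavelength_m = 0.1e-9      # ~12 keV X-rays
thickness_m = 1e-6         # a 1-micron feature

# Phase shift through the feature, and intensity transmission past it
phase_shift_rad = 2 * math.pi * delta * thickness_m / wavelength_m
transmission = math.exp(-4 * math.pi * beta * thickness_m / wavelength_m)
print(phase_shift_rad, transmission)
```

With these numbers the feature imprints a phase shift of several hundredths of a radian while absorbing barely a hundredth of a percent of the beam, which is why phase contrast reveals structure that absorption contrast misses.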

      Another advantage that makes the technique particularly suited to biological contexts is the relatively small amount of radiation that it delivers to the sample. “X-rays have the tendency to destroy the very structures that you’re interested in imaging,” says Menzel. “But in our experiments, the improvement beyond standard TXM came with a virtually indiscernible increase in required radiation.”

      This ability to image delicate structures at high resolution would be invaluable in studying radiation-sensitive samples like tissues and cell cultures, and could yield insights into any number of disorders, from cancer to Alzheimer’s disease.

      The dark side of social media

      Are you a member of a Facebook group? I belong to a couple, including one for jazz musicians interested in playing at local gigs. In fact, there are three billion active users of Facebook – that’s roughly half the planet – and each of them is typically a member of more than one Facebook group. So the chances are you’re in a Facebook group too.

      Facebook and its international competitors – such as VKontakte in Russia – purposely design their group features to bring people together into relatively tight-knit clusters so that they can focus on some shared interest or purpose (figure 1). Popular Facebook groups include one for fans of the actor Vin Diesel, another for those who love exotically flavoured crisps, and one for self-proclaimed SAHDs (stay-at-home-dads).

      However, not all online groups (or their simpler cousins, “pages”) are as benign. That’s because social-media tools – just like any technology – can be used for bad as well as good. So while groups or pages can bring together people from across the planet who like crisps, they can also link those with a potential interest in far more dangerous activities such as terrorism, extremism and hate against a particular sector of society.

      Social media mapping

      There are plenty of examples where online narratives have incited individuals to commit violent acts. On 8 March 2015, for example, there was a post on VKontakte, in a group supporting jihadism and the so-called “Islamic State” (IS), that said (in translation): “IS are preparing to attack the city of Karbala [in Iraq], 500 tonnes of explosives are ready”. This was followed a few months later by the discovery of booby-trapped vehicles and IS members in a small town 80 km west of Karbala. Another example of an online group potentially influencing an individual to violence was the fatal stabbing of a black university student in Maryland, US, in May 2017, where the suspect – a white student at a neighbouring university – belonged to a Facebook group called “Alt-Reich: Nation”, which featured white-supremacist content.

      The challenge

      But could we turn these examples on their head and use such social-media activity to foresee horrible real-world events? That might seem unlikely, given that such attacks appear to come from out of the blue, carried out by “lone-wolf” individuals with no criminal record. And with billions of online users, detecting who will act sounds like looking for a needle in a haystack – especially as, prior to any attack, each “needle” may be effectively indistinguishable from any other straw of “hay”.

      This was the problem that I and my colleagues, Pedro Manrique and Minzhang Zheng from the University of Miami, began grappling with back in 2011. That year we had joined a multidisciplinary team that included computer scientists from HRL Laboratories in Malibu and social scientists from Harvard, Boston and Northeastern universities, to take part in the Open Source Indicators (OSI) challenge run by US Intelligence Advanced Research Projects Activity (IARPA).

      IARPA’s research question sounded simple on the surface: if you have access (as we all do) to all the public information available on the Internet, can you provide reliable warnings about future societal activity such as civil unrest and violence? With various countries in Latin America acting as a test-bed, our task was to predict the date, location, cause and level of violence of such events. Run as a competition against other combined university–industry teams, all predictions were submitted electronically in real time and later scored by IARPA according to whether the event actually happened and how the details played out compared to the prediction.

      We, like all the teams, initially assumed that the answer would lie in the Twitter activity of users. After all, the challenge took place just after the 2010 “Arab Spring” – a series of anti-government protests and armed rebellions across the Middle East – when it had been claimed that Twitter was being used to co-ordinate individuals for street protests. We did indeed find Tweets of this nature – but there were far too many compared to the actual events, meaning that the number of false alarms was huge. As a result, the scores of all teams remained modest.

      Then things got worse. Along came the “Brazilian Spring” in 2013 – a huge spate of large-scale street protests that broke out unexpectedly around a range of social and political concerns. All the Twitter-based models, however, had missed this completely. Indeed, the Twitter feeds had looked fairly typical prior to the onset. Where, if anywhere, was the online precursor signal ahead of the offline riots?

      Physics patterns

      The other teams, who were primarily engineers and computer scientists, immediately turned their attention to finding the individuals whose Twitter feeds had acted as the trigger for these unpredicted protests. In other words, they went in search of a guilty needle in the huge haystack of Tweets – assuming implicitly that there was one. We instead decided to take a step back and think of the underlying physics.

      Physics tells us that large-scale changes in a physical system – like water “suddenly” boiling – cannot properly be understood in terms of what a single member molecule is doing. Instead, the answer lies in the collective, “many-body” behaviour – the correlations that develop during the build-up, between molecules from across the entire system. When a system approaches the phase change, these correlations begin to cluster, and the number and size of the correlation clusters escalate. The precursor signal therefore lies not in the needles themselves, but in how they cluster in time. On social media, by analogy, the precursor can be found in online groups – not in the individuals themselves. Each group, after all, is nothing but a cluster of correlated individuals (figure 1).

      Figure 1: Social media schematic

      With this thinking in mind, we went back and studied Facebook groups in the build-up to the Brazil Spring. And there was the precursor signal we had been looking for – an escalation in the number of Facebook groups (i.e. correlation clusters) debating and discussing disagreements with particular policies and issues. Moreover, instead of a single group growing in size and hence being responsible, we found that the signal lay in the pattern the groups were creating across the system. So just as water starts bubbling feverishly as it approaches its boiling point, the creation of Facebook groups begins to escalate.

      Our 2016 Science paper (352 1459) showed that the escalation rate of Facebook-group creation follows an inverse algebraic divergence (figure 2) as the onset approaches, in a way that is mathematically identical to a physical phase transition – but with the crucial new feature that this is an escalation in time as opposed to an escalation in an external control variable such as temperature. It therefore represents a new piece of physics: a dynamical phase transition in an out-of-equilibrium system.
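The escalation signal described above can be illustrated with synthetic data: if the group-creation rate diverges as (T_c − t)^(−β) approaching the onset time T_c, a straight-line fit on log–log axes recovers the exponent. The onset time and exponent below are assumed values, not those of the paper:

```python
import numpy as np

Tc, beta = 100.0, 0.7                  # assumed onset time and exponent
t = np.arange(0.0, 95.0)               # observations approaching the onset
rate = (Tc - t) ** (-beta)             # inverse algebraic divergence

# On log-log axes the divergence is a straight line with slope -beta,
# so a linear fit recovers the exponent from the escalation data.
slope, intercept = np.polyfit(np.log(Tc - t), np.log(rate), 1)
print(-slope)
```

With real, noisy group-creation counts the same fit would give the estimated exponent and, by extrapolation, the estimated onset time.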

      Figure 2: Phase transition

      As well as revealing new physics, this experience taught us an invaluable lesson about social media that paved the way for our subsequent understanding of online support for terrorism, extremism and hate. Unlike online Facebook groups where in-depth discussions can develop organically over time, Twitter acts more like a platform for shout-outs. Just as you probably would not be convinced to change your opinion about a complex issue such as Brexit simply by what an individual shouts out on a busy high street, nor do groups of people gravitate toward collective opinions or actions on complex issues because of individual Tweets. Fans of flavoured crisps, as well as stay-at-home dads, seek an exchange of opinions and advice through social-media groups, not Twitter shouts – and so too do people with a common enemy such as the West, immigrants, or people of a different race, religion or gender.

      We therefore expanded our study to look at other forms of shared hatred – not against a political system, but against the West as a whole. It was now early 2014 and IS was starting to develop. Immediately it became clear to us that Facebook was doing a good job of shutting down groups developing extreme pro-IS narratives – a good thing, but bad for our research. However, we did find them on VKontakte – a social-media platform based in Russia that hosts almost one billion people worldwide. Like Facebook, VKontakte has a group tool that enables people with common interests to aggregate together online. However, unlike Facebook, VKontakte is less able to find and quickly shut down extremist and violent groups, making it the “go-to” place for many who wish to share such opinions. Indeed, the platform appears to have been a crucial tool for IS recruitment, particularly among university students, and we found a near-identical algebraic escalation to the Brazil Spring in the pattern of pro-IS groups created prior to IS’s sudden and unexpected attack on Kobane, Syria, in September 2014.

      People, not molecules

      So, job done? No. This was still a systems-level theory – like thermodynamics – and did not explicitly include the fact that humans, unlike water molecules, are all different. The implicit physics assumption of identical particles rules out applying standard many-body theories directly to collective human behaviour.

      To get a “many-people” theory, we would have to do something that no many-body physics theory had ever done. We could not assume humans are like, say, unconscious, interchangeable atoms but had to include the heterogeneity of living, thinking people. Our hypothesis was that we could take a “mesoscopic” perspective where we sacrifice specific details of each individual in order to capture the overall diversity of the population. Inspired by how wildlife diversity is used to shed light on an ecosystem’s development, we hoped that incorporating a “cartoon” representation – just the basic skeleton of the system – of human diversity might be similarly sufficient. We therefore allowed each human to have a certain “character” typified by a single number between 0 and 1. Though this sounds like a very restricted description for a human being, it turns out it matters little if this character is more complicated – like a multidimensional vector – since the key lies in allowing the individuals to be distributed fairly evenly across the character space between 0 and 1 (i.e. the population is diverse).

      Figure 3: growth of IS groups

      Making this simplification then allowed us to describe mathematically how the different characters manage to “gel” into groups. For this, we took inspiration from gelation theory, which is well established for identical particles and has been used to describe aggregation in a wide variety of physical systems, such as milk curdling when proteins form inter-molecular bonds. As expected, however, it fails to describe the online pro-IS group dynamics because it assumes that all particles are identical. But as shown in our 2018 paper (Phys. Rev. Lett. 121 048301), our generalized gelation equations “with character” explain not only the timing of the onset of different groups forming, but also their wide range of growth patterns (figure 3). And when we added in the fact that groups that develop strong narratives in support of terrorism and extremism get shut down by social-media moderators, we obtained an almost perfect fit for the evolution of pro-IS groups online.
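A toy Monte Carlo version of this idea (a sketch only, not the generalized gelation equations of the Phys. Rev. Lett. paper) gives each agent a character in [0, 1], lets clusters merge with a probability that falls off with the difference of their mean characters, and optionally disperses any group that grows past a moderation threshold:

```python
import random

def simulate(n=200, steps=2000, shutdown_size=None, seed=1):
    """Toy coalescence with heterogeneity: each group is the list of its
    members' characters; groups with similar mean characters merge more
    readily, and groups above shutdown_size are dispersed by moderators."""
    rng = random.Random(seed)
    groups = [[rng.random()] for _ in range(n)]      # everyone starts alone
    mean = lambda g: sum(g) / len(g)
    for _ in range(steps):
        if len(groups) < 2:
            break
        a, b = rng.sample(range(len(groups)), 2)
        if rng.random() < 1.0 - abs(mean(groups[a]) - mean(groups[b])):
            merged = groups[a] + groups[b]
            groups = [g for i, g in enumerate(groups) if i not in (a, b)]
            if shutdown_size and len(merged) >= shutdown_size:
                groups.extend([c] for c in merged)   # break it into singletons
            else:
                groups.append(merged)
    return sorted(len(g) for g in groups)

print(max(simulate()))                   # largest group, unmoderated
print(max(simulate(shutdown_size=30)))   # never reaches the threshold
```

Even this cartoon reproduces the qualitative behaviour in the text: similar characters gel into large groups, and moderation caps group size while scattering members back into the pool.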

      Human brain or social media?

      Having understood these moving parts, we could then provide the first ever picture of how a worldwide terror/extremism “organism” evolves in time online. Figure 4 shows snapshots throughout the organism’s lifespan: from its birth in 2014, through a period of rapid growth and evolution, to maturity in mid-2015, and then a gradual decay in activity toward old age and death as the pro-IS groups got shut down more aggressively and their members migrated to encrypted platforms such as Telegram. Not only does each picture look like a brain, but the network behaviour over time is remarkably similar to what is currently known about a brain network during a human’s lifetime.

      It turns out that we had stumbled upon a very unlikely but precise connection between online terrorist support and the human brain. In this analogy, each online group acts as a “functional unit” like synapses in the brain, into which “structural units” – users or neurons – connect (figure 1). And just as neurons can engage in multiple synapses, users can be members of more than one online group.

      Figure 4: Global IS support

      The early stages of the pro-IS “brain” in figure 4 show a large amount of redundancy, with many different groups serving similar functions – just as in a real infant brain. Between infancy and maturity, some of these functional units begin to dominate, as in an early adult brain, and there is an optimal blend between specialization and synchrony in the system. By old age, several giant groups (functional units) dominate, but they share very few common users and hence lack overall synchrony – as in a human brain in old age. Moreover, the functional network (i.e. the network of groups) suffers a loss in small-world behaviour as it heads into old age, while the structural network (i.e. the network of users) shows the opposite trend – exactly as in an ageing human brain.

      Inadvertently, our study of online support for terrorism/extremism has thrown up a new proxy for a human brain, with the advantage that its individual pieces and connections can be measured precisely over time from public Internet data – unlike its biological counterpart. This in turn makes it a potentially unique model for assessing “what-if” scenarios in a real brain, such as cutting out particular pieces, or delaying or stimulating growth of certain parts. Moreover, our latest work on “hate-speech” groups (arXiv:1811.03590) shows similar phenomena, suggesting that the way in which humans conduct clandestine, anti-societal and/or illicit activities online follows a common pattern. This in turn may feed into the universality observed in other human contexts (see “Maths meets myths” by Ralph Kenna and Pádraig MacCarron Physics World June 2016).

      There is of course much still to do. As you read this, there are undoubtedly individuals online who are developing the intent and capability to carry out further violent attacks. So how might such “many-people” physics theories help detect them before they act? Imagine you meet someone in your university and are interested in knowing the next step in their career. But instead of asking them their current thoughts and getting a potentially vague answer since they themselves may not yet know, you simply ask them what courses they have taken so far. This will then tell you the spectrum of things that they have been exposed to, and hence lets you narrow down what job they are likely to end up in – perhaps better than they themselves could at that stage. In an analogous way, such generalized many-body physics models, in the hands of security specialists, could play a similar role for terrorism, extremism and hate by seeing which individuals have passed through which groups and hence are likely to have the necessary intent and capability.
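The course-history analogy translates into a very simple scoring sketch. All group names, themes and users below are invented for illustration: individuals are ranked by how fully the groups they have passed through cover a set of flagged themes.

```python
# Hypothetical data: which flagged themes each group exposes its members to
group_themes = {
    "group_a": {"ideology"},
    "group_b": {"ideology", "logistics"},
    "group_c": {"capability"},
}

# Which groups each (anonymized) user has passed through
memberships = {
    "user_1": ["group_a"],
    "user_2": ["group_a", "group_b", "group_c"],
    "user_3": ["group_c"],
}

def exposure(user):
    """Number of distinct flagged themes covered by a user's group history."""
    themes = set()
    for group in memberships[user]:
        themes |= group_themes[group]
    return len(themes)

ranked = sorted(memberships, key=exposure, reverse=True)
print(ranked)
```

The point, as in the careers analogy, is that the score uses only which groups a person has passed through – their "courses" – rather than trying to read their current intent directly.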

      It is unlikely to be a perfect solution – it is definitely unconventional – but surely it is better than waiting for something horrific to happen before we take any action.

      Carbon rise could cause cloud tipping point

      Climate scientists have confirmed a high-level hazard, a cloud tipping point, that could send global warming into a dramatic upwards spiral.

      If carbon dioxide concentrations in the atmosphere become high enough, the clouds that shade and cool some of the tropical and subtropical oceans could become unstable and disperse. More radiation would slam into the ocean and the coasts, and surface temperatures could soar as high as 8 °C above the levels for most of human history.

      And this dramatic spike would be independent of any warming directly linked to the steady rise in carbon dioxide concentrations themselves, the scientists warn.

      In Paris in 2015, a total of 195 nations vowed to take steps to contain global warming to “well below” a maximum of 2 °C above the average before the start of the Industrial Revolution, powered by the exploitation of fossil fuels.

      In the last 200 years, levels of the greenhouse gas carbon dioxide in the atmosphere have increased from 288 parts per million to around 410 ppm and the average global temperature has already increased by about 1 °C.

      Researchers have repeatedly warned that the Paris promises have yet to be turned into coherent and consistent action, and that if the world goes on burning coal, oil and natural gas on a “business as usual” scenario, catastrophic consequences could follow.

      Now US researchers warn in the journal Nature Geoscience that they know a bit more about the climate mechanisms by which global warming could accelerate.

      If carbon dioxide ratios climb to 1,200 ppm – and without drastic action this could happen in the next century – then the Earth could reach a tipping point, and the marine stratus clouds that shade one-fifth of the low-latitude oceans and reflect between 30% and 60% of shortwave radiation back into space could break up and scatter.

      The sunlight they normally block would slam into the deep blue sea, to warm the planet even faster.

      Avoidance possible

      “I think and hope that technological changes will slow carbon emissions so that we do not actually reach such high CO2 concentrations,” said Tapio Schneider, an environmental scientist at the Jet Propulsion Laboratory, the research centre managed for the US space agency NASA by the California Institute of Technology.

      “But our results show that there are dangerous climate change thresholds that we have been unaware of.”

      The role of clouds in the intricate interplay of sunlight, forests, oceans, rocks and atmosphere that controls the planet’s climate has been the subject of argument. Do clouds really slow warming? And if so, by how much, and under what conditions?

      There may not be a simple answer, although researchers are fairly confident that the thinning of clouds over the California coasts may have made calamitous wildfires in the state more probable.

      So to resolve what Professor Schneider calls “a blind spot” in climate modelling, he and his colleagues worked on a small-scale computer simulation of one representative section of the atmosphere above the subtropical ocean, and then used supercomputers to model the clouds and their turbulent movement over a mathematical representation of the sea. And then they started to tune up the atmospheric concentrations of carbon dioxide.

      Carbon threshold

      They found that, once CO2 levels reached 1,200 ppm, the decks of stratocumulus cloud vanished, and did not reappear until CO2 levels dropped to well below this dangerous threshold.
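That "does not reappear until well below the threshold" behaviour is a classic hysteresis loop, which can be caricatured in a few lines. The 1,200 ppm break-up threshold comes from the study; the re-formation value here is an assumption, since the paper only says "well below":

```python
BREAKUP_PPM = 1200   # threshold reported in the study
REFORM_PPM = 800     # assumed; the paper only says "well below" 1200

def step(co2_ppm, clouds_present):
    """Update cloud state: decks break up above one threshold but only
    re-form once CO2 falls below a lower one (hysteresis)."""
    if clouds_present and co2_ppm >= BREAKUP_PPM:
        return False
    if not clouds_present and co2_ppm <= REFORM_PPM:
        return True
    return clouds_present

state = True
trajectory = []
for co2 in (400, 900, 1200, 1000, 900, 800, 400):
    state = step(co2, state)
    trajectory.append((co2, state))
print(trajectory)   # clouds stay gone at 1000 and 900 ppm on the way down
```

Run forwards and then backwards through the CO2 values, the clouds vanish at the upper threshold but stay absent until concentrations fall past the lower one, which is what makes the tipping point so dangerous.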

      If – and this has yet to happen – other researchers use different approaches to confirm the result, then the US scientists will have established a better understanding of one component of natural climate control.

      The research may also illuminate a puzzle of climate history: 50 million or more years ago, during a geological epoch called the Eocene, the Arctic ice cap melted. Climate models have shown that, for this to happen, atmospheric carbon ratios would need to rise to 4,000 ppm.

      These, the Caltech team suggests, would be “implausibly high” CO2 levels. The latest study suggests this might be an overestimate: a mere 1,200 ppm would be enough to set the planetary thermometer soaring.

      Plant genetic engineering goes nano

      Carbon nanotubes can be used as tools to more easily deliver genes into plant cell nuclei and chloroplasts, say two groups of researchers – one at the University of California at Berkeley and the other at the Massachusetts Institute of Technology (MIT). The UC Berkeley approach involves grafting DNA onto a carbon nanotube to deliver the biomolecule into plant cells while the MIT technique makes use of a mechanism called lipid exchange envelope penetration (LEEP) for delivery into chloroplasts specifically. Both techniques are very different to conventional genetic engineering methods – such as biolistics (which is done by firing genes into plant tissue) or delivering genes using infectious bacteria.

      Genetically enhanced plants can produce higher yields and be made more resistant to disease and drought. This will be important for feeding a growing population in the coming decades, especially in the context of climate change. They may also be used to provide cleaner and more efficient biofuels, as well as to biosynthesise pharmaceuticals.

      Genetically editing plant cells is done with tools like DNA, RNA and proteins, but these biomolecules are difficult to deliver through plant cell walls, which are rigid, multi-layered structures that envelop the plant cell membrane.

      Bacteria and gene guns

      At the moment, the most common way to deliver genes involves making use of bacteria that naturally infect plants to deliver genes for desired traits, but this technique is efficient for only a narrow range of plant species. The DNA delivered in this way also integrates into the genome of the plant, which then means that it has to be labelled as being genetically modified (GMO). Biolistics (also known as the gene gun) is another commonly-employed technique and can “shoot” genetic material like a bullet into a wider range of plants but it is destructive and, again, inefficient. “It is like blowing a hole in a plant cell and hoping that your gene and the cell both survive,” explains Markita del Carpio Landry, who led the UC Berkeley research effort.

      Landry and colleagues developed two distinct grafting methods to load green fluorescent protein (GFP)-encoding plasmids on single- and multi-walled carbon nanotubes (SWCNTs and MWCNTs) for delivery through the plant cell walls of four plants: Nicotiana benthamiana (Nb), Eruca sativa (arugula), Triticum aestivum (wheat) and Gossypium hirsutum (cotton).

      High strength and needle-like aspect ratio

      “Carbon nanotubes are interesting here because they are among the few nanoparticles that can be made narrow – just 1-nm-wide – which means they are small enough to slip through the plant cell wall,” she says. “Their exceptionally high strength and needle-like aspect ratio allows them to be passively internalized into the plant cells of many different species.”

      Delivering genes to chloroplasts

The researchers tracked the nanoparticles and found that the plants glowed green when irradiated with UV light. This shows that the GFP gene had been transcribed and translated into protein, just as it would be if it were one of the plants’ own genes.

They also found that the CNTs end up in both the cell nuclei and in chloroplasts (which each contain about 80 of the genes that code for proteins involved in photosynthesis). Indeed, over 90% of the CNTs end up in chloroplasts.

Another good thing about the nanotubes is that they act as a shield and prevent the DNA from being inserted into the plant’s genome, explains Landry. This means that the plants do not have to be designated as GMO in the US and many other countries (excluding the European Union). And that is not all: to their surprise, the researchers also found that when adsorbed onto the surface of CNTs, the DNA cargoes showed much less endonuclease-based degradation.

      “Nucleases are proteins in cells responsible for degrading foreign DNA and RNA,” explains Landry. “One of the challenges for DNA and RNA delivery to many cell types, not just plant cells, is to inhibit this degradation. Our work shows that CNTs not only facilitate DNA delivery to plant cells, they may also be ‘hiding’ the DNA from being recognized and degraded by nucleases.”

      Getting into chloroplasts

Meanwhile, the MIT team, led by Michael Strano, has made use of the so-called lipid exchange envelope penetration (LEEP) mechanism to deliver nanoparticles with DNA on their surface into plant cells – and more specifically into chloroplasts. Since plant cells contain dozens of chloroplasts, chloroplasts expressing foreign genes could generate much larger amounts of a desired protein than is possible when only the cell nucleus expresses them. Editing chloroplast genes in this way could help increase the amount of energy produced by photosynthesis, which in turn would allow plants to grow bigger, more quickly and in greater yields.

      Strano and colleagues discovered LEEP a few years ago when they found that they could make nanoparticles penetrate plant cell membranes by tuning the electrical charge of the particles (so that they had a high enough zeta potential) and their size (so they were of the right dimensions).

      In their new technique, the researchers wrapped single-walled carbon nanotubes in chitosan, a biopolymer that is commonly found in the shells of shrimps and other crustaceans. This chitosan-complexed-SWCNT is positively charged, which allows it to bind to negatively-charged plasmid DNA through electrostatic attraction.

      Weakened binding

      “We simply infiltrated this plasmid-DNA-SWCNT complex through plant leaves,” explains team member Tedrick Thomas Salim Lew. “The complex enters the leaf through stomata, passes through the cell wall and cell membrane, and finally localizes in the chloroplasts.”

Since chloroplasts are slightly more basic (pH 8) than the cytosol or the environment outside the plant cell, the chitosan-SWCNT complex becomes less positively charged when it reaches the chloroplasts. Its binding to the plasmid DNA is thus weakened and the DNA unloads within the chloroplasts. Once unloaded, it can be expressed as protein.
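This pH-dependent charge can be sketched with the Henderson–Hasselbalch relation. The snippet below is a minimal illustration, assuming a typical literature pKa of about 6.5 for chitosan’s amine groups (not a figure from the paper): the fraction of protonated, positively charged amines drops sharply between pH 7 and pH 8, which is the qualitative reason the DNA cargo is released inside the chloroplast.

```python
def fraction_protonated(pH, pKa=6.5):
    """Henderson-Hasselbalch: fraction of chitosan amine groups carrying
    a positive charge (-NH3+) at a given pH. The pKa of ~6.5 is a
    typical literature value for chitosan, assumed for illustration."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Compare the cytosol with the chloroplast stroma (pH values illustrative)
for label, pH in [("cytosol, pH 7.0", 7.0), ("chloroplast, pH 8.0", 8.0)]:
    print(f"{label}: {fraction_protonated(pH):.0%} of amines charged")
```

At pH 8 only a few per cent of the amines remain charged, so the electrostatic grip on the negatively charged plasmid weakens, consistent with the unloading mechanism described above.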

      In their experiments, the researchers delivered a gene for yellow fluorescence protein (YFP), which allowed them to easily visualize which plant cells expressed the protein. They found that roughly 47% of the plant cells did so. This figure could be increased by delivering more particles, they say.

      Technique works for wide range of plant species

The new nanoparticle-mediated delivery approach is simple, easy to perform, inexpensive and works for a wide range of plant species, Lew tells Physics World. Indeed, the researchers tested it out in spinach, watercress, tobacco, arugula and Arabidopsis thaliana. “Our technique is fundamentally different to Landry and colleagues’, since we design our nanoparticles to selectively traffic into the chloroplast and target the chloroplast genome.”

“Since the chloroplast genome can only be inherited from maternal cells, the beneficial traits can be passed on to offspring but, importantly, cannot be transferred to other, nearby, plant species,” he explains. This means that there is less risk of undesirable gene spread.

      Like Landry and colleagues’ nanotubes, these nanocarriers can also protect the plasmid from enzymatic degradation.

      “We believe that our nanoparticle-mediated approach is a useful complement to the plant biotechnology toolkit,” says Lew. “It could also help optimize pharmaceutical product synthesis in plant chloroplasts – some vaccines are actually produced within the chloroplasts through genetic engineering.”

      Applications in crop engineering and plant biology studies could also benefit. “For example, in the future, we may be able to engineer crops that are resistant to diseases or droughts by delivering the appropriate gene to the chloroplasts.”

      More permanent effects required

      However, the researchers do stress that further improvements have to be made to their technique before this can happen. “The gene expression we have observed is transient (lasting only a few days), meaning that the foreign DNA is not stably integrated in the plant genome. For the offspring to inherit the genetic traits, we have to realize more permanent editing,” adds Lew.

      Landry and colleagues’ technique also results in transient gene expression.

      “Our studies do show, however, that we can control how nanoparticles enter plant cells with a very fine degree of precision,” Landry tells Physics World. “This will allow us to dream up future applications in transgene-free crop editing, rapid testing of how plant genes may confer desirable traits to creating more robust crops, and even engineering photosynthetic proteins that are coded in the chloroplast genome.”

      “Exciting new frontier”

“We posted a bioRxiv preprint of our CNT-based gene delivery work back in mid-2017 in the hope that our finding – that CNTs could be used to deliver genes to plants passively – would be leveraged by the scientific community. I’m now thrilled to see that these nanomaterials are being adopted for broader applications in plant transformation, such as in Strano and colleagues’ study, as well as by several other research groups in the plant nanobiotechnology field demonstrating carbon-nanotube-based delivery of genes to the chloroplast.

      “Plant science nanotechnology is an exciting new frontier and we are only beginning to understand how nanoparticles can squeeze through the cell wall. My lab has recently completed a study using DNA origami to look at how parameters like nanoparticle size, stiffness and even shape can affect whether or not particles make it into the plant cell. Understanding how these particles manage to passively penetrate the cell wall will help us develop better tools in the future so that we can implement other exciting tools like CRISPR for engineering the next generation of crops for an improved agriculture.”

Full details of the research from both groups are published in Nature Nanotechnology. The MIT group’s paper is here and the UC Berkeley group’s here.

      Optical frequency comb fits in your back pocket

      Physicists in Russia and Switzerland have built the smallest optical frequency comb to date, fitting the entire device into a volume of just 1 cm³. Their research is a significant step towards cheap, easily-produced microcombs – which would be suitable for applications including information processing and telecommunications.

      An optical frequency comb is a laser that produces a spectrum of discrete, equally-spaced frequency lines – resembling teeth in a comb. In recent years, they have played important roles in metrology, spectroscopy and communications.
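The comb’s teeth can be written as f_n = f_ceo + n·f_rep, where f_rep is the repetition rate (the tooth spacing) and f_ceo is the carrier-envelope offset frequency. A minimal sketch, with purely illustrative numbers (a 100 GHz spacing near 193 THz, i.e. the 1550 nm telecom band – not figures from this device):

```python
def comb_frequencies(f_ceo_hz, f_rep_hz, n_min, n_max):
    """Optical frequency comb: tooth n sits at f_n = f_ceo + n * f_rep,
    so all teeth are equally spaced by the repetition rate f_rep."""
    return [f_ceo_hz + n * f_rep_hz for n in range(n_min, n_max + 1)]

# Illustrative numbers only: ~100 GHz tooth spacing around 193 THz
teeth = comb_frequencies(f_ceo_hz=35e9, f_rep_hz=100e9, n_min=1925, n_max=1935)
spacings = {round(b - a) for a, b in zip(teeth, teeth[1:])}
print(spacings)  # every gap equals f_rep
```

The regular spacing is what makes a comb so useful as a frequency “ruler”: locking just f_ceo and f_rep fixes the absolute frequency of every tooth at once.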

One of the most successful methods for creating the combs’ signature spectra is to couple a continuous-wave laser to a microresonator waveguide. These microresonators are ideal for producing so-called “dissipative Kerr soliton states” – waves that retain their shapes as they travel. This method, however, still suffers high loss rates, driving up the power requirements of the devices. This means the driving lasers are large and expensive, making low-cost, portable frequency combs unfeasible.

      Minimizing losses

      Now, a team led by Tobias Kippenberg at the Swiss Federal Institute of Technology Lausanne has found a way to get around this problem. The team used a highly-sophisticated deposition process to fabricate an ultralow-loss silicon nitride microresonator. They then coupled the microresonator to a chip-based indium phosphide laser diode – a compact device that is widely available commercially.

      Intrinsic scattering from the microresonator waveguide reflects a small portion of the laser light back to the laser. This feedback helps to stabilize the laser, eliminating any need for bulky on-chip tuning mechanisms such as electronics and heaters.

Occupying just 1 cm³, the team’s frequency comb is the smallest ever produced, allowing it to be integrated onto a single chip and controlled electrically. This was possible mostly because the optical losses in the device’s integrated silicon nitride waveguide were unprecedentedly low, meaning the power threshold for exciting dissipative Kerr soliton states was correspondingly low. The device consumes less than 1 W and the spacing between the comb’s teeth is less than 100 GHz.

      Kippenberg and his colleagues are confident that such a device could allow for next-generation applications including precise distance measurements using LIDAR, as well as facilitating extremely fast information processing in data centres.

      Full results are published in Nature Communications.

      Robert Myers named as new director of the Perimeter Institute for Theoretical Physics

      The Canadian theoretical physicist Robert Myers has today been unveiled as the next director of one of the world’s leading theoretical physics centres. Myers, 60, takes over from Neil Turok as head of the Perimeter Institute for Theoretical Physics (PI) in Waterloo, Ontario. Turok steps down after a decade leading the Canadian centre to spend more time doing research at the PI.

      The PI focuses on fundamental questions in nine areas of physics including cosmology, condensed-matter physics, particle physics and quantum information. Home to more than 150 resident researchers as well as 1000 visiting scholars, the PI was founded in 1999 by Mike Lazaridis, the founder of Research in Motion — the company that made the Blackberry wireless handheld devices.

      Perimeter is fortunate to have a respected, dedicated scientist and likeable person like Myers as its new director

      Donna Strickland

The PI was founded in 1999 but its research programme began in 2001 with nine scientific staff, soon expanding to 24 by the end of the year. In 2004, the institute moved into its now iconic premises but the PI’s first executive director Howard Burton left suddenly in June 2007, apparently after negotiations over a new contract broke down. A year later, the institute appointed the cosmologist Neil Turok, who was then at Cambridge University, as director – a position he held for over a decade before announcing last year that he would step down.

      Under Turok’s stewardship, the PI continued to expand. In September 2011, it opened the Stephen Hawking Centre, which doubled the size of the institute and provided a space for its Masters students. In 2017, the institute opened its Center for the Universe, which brings together researchers to deal with the huge influx of data emerging from major telescopes such as the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Event Horizon Telescope (EHT).

      The next phase

      Myers’ areas of expertise include black holes, string theory and quantum entanglement. He received his PhD at Princeton University in 1986 and, after a stint at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, moved in 1989 to McGill University before joining the PI in 2001. Along with the directorship, Myers will hold the BMO Financial Group Isaac Newton Chair in Theoretical Physics – a position that is supported by a C$4m endowment fund by BMO Financial Group that was made in 2011.

      The Perimeter Institute is not just an institute, but a family

      Robert Myers

      “[Myers] is the perfect choice to lead Perimeter into the future,” says Turok, who will now lead the PI’s Center for the Universe. “He has been my closest advisor throughout my time as director and I am delighted to remain at Perimeter as a researcher with him charting the course for the institute.”

      According to Lazaridis, Myers is highly respected throughout the global physics community. “We are thrilled to move into the next exciting phase of Perimeter’s evolution under Myers’ leadership,” adds Lazaridis. “He possesses the drive and vision to advance Perimeter at a particularly exciting time in the history of the Institute and of physics more generally.”

Myers already has some experience leading the PI, having served as interim director for a year following Burton’s exit and again since 1 January, after Turok stepped down. “I am delighted to have the job,” Myers told Physics World. “It’s a really exciting time and I am looking forward to leading the institute.”

      Myers pays tribute to Turok, who he says took the PI to the “next level”, had great ideas and a “keen eye for talent”. But given the recent expansion of the PI into new areas of research, particularly condensed-matter physics, Myers says that initially he will focus on consolidating those areas given that some are small groups. “The PI is not just an institute, but a family,” says Myers. “There are a lot of smart people here and I want to listen to what people think and not be too prescriptive about the future.”

      Myers says there are many opportunities in theoretical physics, mostly thanks to the vast amounts of data that are being collected by various experiments such as CHIME, EHT and the LIGO gravitational-wave detectors in the US. Yet Myers doesn’t believe that theoretical physics is in “a deep crisis” as Turok once admitted. “Particle physics is somewhat at a crossroads,” he says. “Describing it as a crisis is slightly dramatic, but I would agree that people have been relying on the status quo for too long and relying on certain models from decades ago.”

      Indeed, Myers now challenges researchers to think in new ways. “Young people are the future and we want to instil in them to question the status quo,” he adds. “After all, it is the people here that make the PI such a special place.”

      A “jewel” of theoretical physics

      Myers’ appointment has been welcomed by the physics community. Donna Strickland from the University of Waterloo who shared the 2018 Nobel Prize for Physics says that Myers is an “excellent choice”. “Perimeter is fortunate to have a respected, dedicated scientist and likeable person like Myers as its new director,” she adds. That is backed up by theorist Ed Witten from the Institute for Advanced Study at Princeton University, who says that Myers is an “extremely influential voice in theoretical physics” who will be a “great leader” for the PI.

      John Preskill from California Institute of Technology, meanwhile, calls the PI a “jewel” of theoretical physics. “I’m very glad to see that PI will remain in capable hands after Turok steps down as director,” adds Preskill. “Myers is a visionary physicist, a natural leader, and a great guy. I’m confident that with [his] guidance, PI will soar to even greater heights.”

      Sabine Hossenfelder from the Frankfurt Institute for Advanced Studies, who spent three years at the PI, told Physics World that Myers is “highly qualified” for the role and believes he will do well. Yet she warns that regardless of how well-financed and skilled Perimeter’s scientists are, they need to be aware of biases that are currently “entirely unaddressed” in science and “stand in the way of progress”.

      “I hope that the new director will take measures to limit the influence of social pressures on research decisions,” says Hossenfelder. “Scientists are not immune to social biases and this can stand in the way of scientific progress.” Hossenfelder adds that the first step “to alleviate the problem” is to raise awareness. “All researchers should have a basic education about cognitive biases and decision making in groups,” she adds. “Myers is in the position to lead the way in this. I hope he will.”

      Discussing the mystery of life with Paul Davies, plus new catalogues of topological materials

      In this episode of Physics World Weekly we’re exploring the spaces where physics overlaps with other disciplines.

      To kick things off, Tushna Commissariat is in conversation with the physicist and science communicator Paul Davies about his new book The Demon in the Machine: How Hidden Webs of Information are Solving the Mystery of Life. No stranger to tackling the big questions, Davies is seeking answers regarding the nature of life and how it can emerge from the inanimate. Drawing on his work at the Beyond Center at Arizona State University in the US, Davies attempts to unite such seemingly disparate fields as nanotechnology, quantum mechanics and molecular biology.

Then in the second part of the podcast, we begin by discussing the release this week of two comprehensive online catalogues of so-called “topological materials”. The work demonstrates that these exotic states of matter – the theme of the 2016 Nobel Prize for Physics – are actually far more common than previously thought. These new online systems for organizing and predicting topological materials could help researchers to develop applications such as low-power devices and quantum computing.

      As always, we bring you a round-up of some of the other research news highlights from the website this week. If you enjoy what you hear, you can subscribe to Physics World Weekly via the Apple podcast app or your chosen podcast host.

      Ubiquity of topological materials revealed in catalogues containing thousands of substances

      Two comprehensive catalogues of potential topological materials have been published by independent teams of physicists. The catalogues contain thousands of crystalline materials and suggest that topological materials are much more common than previously thought. Knowing the topological properties of such a wide range of materials could be a boon to researchers trying to develop technologically useful devices.

      The study of topological materials is a hot topic in condensed-matter physics, but the field is still relatively young – with the first papers appearing in the mid-2000s. Since then, physicists have identified several hundred materials with topological properties.

      Many of these are topological insulators, which are electrical insulators in the bulk but very good conductors on the surface. This occurs because electrons on the surfaces of these materials are unable to backscatter from an impurity or defect without reversing the direction of their spins – which is a result of the topology of their electronic surface states.

      Topological states are robust to perturbations arising from impurities, defects or noise and therefore topological materials could prove very useful in creating low-energy electronic devices and even quantum computers.

      Symmetry indicators

A few weeks ago, Physics World reported on an arXiv preprint that described a comprehensive search for topological materials using “symmetry indicators”. This work was done by Ashvin Vishwanath at Harvard University in the US, Xiangang Wan at Nanjing University in China and colleagues. Using a technique developed in 2017 by Vishwanath and others, the team calculated specific properties of candidate crystalline materials at high symmetry points in the materials’ electronic band structures. This allowed the team to decide whether a material has topological properties without having to do overly complicated and computationally-intensive calculations. They were able to identify nearly 400 materials that could be topological insulators and nearly 700 potential topological semimetals.

A similar method of “symmetry indicators” has also been developed by an international team that includes Princeton University’s Andrei Bernevig. In 2017, Bernevig and colleagues unveiled an approach called “topological quantum chemistry”, which they have now used to calculate the relevant symmetries of nearly 27,000 materials. As a result, the team has identified more than 3000 topological insulators and more than 4000 topological semimetals. To make it easy for other researchers to access their results, the team has created a searchable online catalogue.

      The result was astonishing: more than a quarter of all materials exhibit some sort of topology

      Andrei Bernevig

      “Once the analysis was done and all the errors corrected, the result was astonishing: more than a quarter of all materials exhibit some sort of topology,” says Bernevig. “Topology is ubiquitous in materials, not esoteric,” he adds.

Meanwhile, at the Institute of Physics (IOP) of the Chinese Academy of Sciences in Beijing, a team including Zhong Fang, Chen Fang and Hongming Weng used a similar approach to calculate the potential topological properties of about 39,000 known crystalline materials. The IOP team identified more than 8000 topological materials and have also made their results available in a searchable online catalogue.

      Not all are ideal

Chen Fang also says he was “very surprised” by the large number of topological materials that have been identified. However, he cautions that many of these materials are not “ideal” – the topological properties of some may be smeared out by other, non-topological properties.

      Nevertheless, Fang expects the catalogue to be very useful for both fundamental research and the development of practical topological devices. For example, someone trying to create quantum-computing devices based on topological superconductors could use the catalogue to find materials that can have both topological and superconducting properties.

      Bernevig – who says that his team’s results are in strong agreement with the IOP group – adds, “When fully completed, [our] catalogue will usher in a new era of topological material design.” He adds, “This is the beginning of a new type of periodic table where compounds and elements are indexed by their topological properties rather than by more traditional means.”

      Marcel Franz, a theorist specializing in topological states of matter at the University of British Columbia, told Physics World, “I was not really surprised by the number of newly identified topological materials”. He added, “That topology would be ubiquitous in nature was already suggested by the large number of (3D) topological insulators discovered within a couple of years of the original theoretical prediction”.

      “What surprised me was that the exhaustive database search did not really discover any obvious ‘hidden gems’. For instance it seems that bismuth selenide (one of the first topological insulators originally discovered) will remain the best topological insulator in terms of its bulk gap size and other indicators.”

      All three research groups have published their results in separate papers in Nature, which can be accessed here: Vishwanath; Bernevig; Fang.

      Non-destructive electron microscopy maps amino acids

      Researchers from Oak Ridge National Laboratory have developed an electron microscopy technique that can detect different isotopes in amino acids. The non-destructive technique means that scientists can spatially track amino acids, enabling access to unprecedented details of biological processes. This is particularly useful in the study of protein interactions, which can shed light on disease progression and other complex biological events (Science 10.1126/science.aav5845).

      Traditionally, to study protein interactions, scientists label the protein-of-interest with a specific isotope and watch how its mass changes. Current methods, such as mass spectrometry and optical techniques, are mainly useful at the macroscopic level and can destroy the sample in the process. This new type of microscopy will allow scientists to follow the interaction through space and map the location of labelled amino acids while leaving them intact.

      New vibrations

      The research team used monochromated electron energy-loss spectroscopy (EELS) in conjunction with a scanning transmission electron microscope (STEM). With this technique, a negatively charged electron beam is positioned very close to the sample so that the beam only grazes over it. This means that the machine can both excite and detect molecular vibrations without destroying the sample.

      Usually, the negatively charged electron beam used for electron microscopy is only sensitive to protons. However, the frequency of the molecular vibration is also dependent on atomic mass, which means that heavier isotopes shift the vibrational modes. These shifts were then used by the team to track the labelled amino acids.
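The size of such an isotope shift can be estimated with the harmonic-oscillator approximation, in which a bond’s vibrational frequency scales as the inverse square root of its reduced mass. The sketch below is illustrative only: the ~1700 cm⁻¹ C=O stretch is an assumed textbook value, not a measurement from this study.

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-atom oscillator (atomic mass units)."""
    return m1 * m2 / (m1 + m2)

def isotope_shifted_frequency(freq, m_old, m_new, m_partner):
    """Harmonic-oscillator estimate: vibrational frequency scales as
    1/sqrt(reduced mass), so swapping one isotope shifts the mode."""
    mu_old = reduced_mass(m_old, m_partner)
    mu_new = reduced_mass(m_new, m_partner)
    return freq * math.sqrt(mu_old / mu_new)

# Illustrative C=O stretch near 1700 cm^-1 (assumed, not from the paper)
f12 = 1700.0
f13 = isotope_shifted_frequency(f12, m_old=12.0, m_new=13.0, m_partner=16.0)
print(f"12C: {f12:.0f} cm^-1 -> 13C: {f13:.0f} cm^-1")
```

Swapping carbon-12 for carbon-13 lowers this mode by roughly 2%, i.e. a few tens of cm⁻¹ – a small shift, but one that monochromated EELS can resolve.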

      The researchers were able to distinguish between amino acids labelled with carbon-12 and carbon-13 with nanoscale spatial resolution. They then used this method to look at crystals of alanine and map the distribution of the labelled acids throughout the structures.

      Working with mass spectrometry

      Protein labelling is usually performed using mass spectrometry, which has excellent sensitivity, but destroys the sample in the process, thus losing key information about atom connections. Therefore, the information extracted is only a snapshot of one moment in time. The authors of this study don’t believe that their new technique will replace mass spectrometry, but instead suggest that it will offer a complementary method.

      “Our technique is the perfect complement to a macroscale mass spectrometry experiment,” says lead author Jordan Hachtel. “With the pre-knowledge of the mass spectrometry, we can go in and spatially resolve where the isotopic labels are ending up in a real-space sample.”

      The technique could also have applications in areas such as research into polymers and other soft matter. It could find particular use in the field of quantum materials, where isotopic substitution is critical to control superconductivity.

      Copyright © 2025 by IOP Publishing Ltd and individual contributors