IceCube identifies four galaxies as likely sources of cosmic rays

A huge observatory at the South Pole has identified four galaxies as likely sources of cosmic rays. Rather than detecting cosmic rays directly, the team analysed a decade's worth of data gathered by the IceCube Neutrino Observatory to pinpoint the sources, which are expected to emit huge numbers of neutrinos as well as cosmic rays. The team says that this is the best identification of cosmic-ray sources to date.

Cosmic rays are high-energy charged particles that originate outside the solar system. They are thought to be created by violent astrophysical processes capable of accelerating particles to near the speed of light. However, working out exactly where cosmic rays come from has proven very difficult because their trajectories are deflected by the magnetic fields permeating interstellar space. Cosmic neutrinos offer a solution because they should be produced in the same places as cosmic rays but are not deflected by magnetic fields.

IceCube comprises strings of photomultiplier tubes suspended within a cubic kilometre of ice at the South Pole. Occasionally a muon neutrino will collide with an atom in the ice, creating a muon that then emits Cherenkov light as it travels through the ice. This light is detected by the photomultipliers and the signal can be used to work out where the neutrino came from.

Atmospheric background

Locating neutrino sources in the cosmos is not easy because the IceCube detector is swamped by signals from muons and muon neutrinos created by cosmic-ray collisions with the atmosphere. These create a large and diffuse background signal, and the challenge is to pick out point sources of cosmic neutrinos within this background.

The IceCube team used a new data-analysis technique that could process all full-sky observations made between April 2008 and July 2018 – something that was not possible before for software-related reasons. The quasar-like galaxy NGC 1068 emerged as a particularly likely source of cosmic-ray neutrinos, standing out from the background with a 2.9σ statistical significance. When combined with three other galaxies that were identified, the four sources collectively stand above the background at a statistical significance of 3.3σ.

Although this remains well short of the 5σ that is normally considered a discovery, the IceCube analysis provides the strongest evidence yet that these four galaxies are cosmic-ray emitters. The researchers now hope that their results will motivate further studies of these sources by looking for more neutrinos as well as gamma rays and X-rays – which are also associated with cosmic-ray sources.
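
For readers unfamiliar with σ notation, a significance of Nσ corresponds to the tail probability of a Gaussian background fluctuation mimicking the signal. A minimal sketch (the function name is mine, and IceCube's actual likelihood analysis is far more involved than this one-line conversion):

```python
import math

def one_sided_p_value(sigma):
    """One-sided Gaussian tail probability for a significance of `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# 2.9 sigma corresponds to a p-value of a few parts in a thousand,
# while the 5 sigma "discovery" threshold is roughly 3 parts in 10 million.
for s in (2.9, 3.3, 5.0):
    print(f"{s} sigma -> p = {one_sided_p_value(s):.2e}")
```

This is why 2.9σ is suggestive rather than conclusive: background fluctuations of that size are rare but far from impossible across a full-sky search.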

The study is described in Physical Review Letters.

A nanosensor to detect epileptic seizures

A new, highly sensitive nanosensor can detect changes in potassium ion levels in the brains of mice as they undergo induced epileptic seizures. The device, developed by researchers at the Institute for Basic Science (IBS) in South Korea and Zhejiang University in China, can record changes in multiple brain regions at the same time, and could thus further our understanding of the mechanisms behind epilepsy and other neurological disorders.

The presence of potassium ions (K+) outside the confines of nerve cells, or neurons, affects the electric potential difference between neurons’ interior and exterior membranes. When the concentrations of these extracellular ions change, the neurons’ ability to transmit signals changes with it. Some such changes are known to be related to chronic neurological disorders such as epilepsy – a condition that affects 1 in 100 people worldwide, and is characterized by recurrent and unpredictable seizures that often have no apparent external trigger.

Sensing small changes in the levels of K+ is important because it could make it possible to predict an imminent epileptic seizure, but most existing sensors cannot do this – particularly in freely moving animals. They are also susceptible to interference from sodium ions (Na+), because the efflux of K+ is preceded by an influx of Na+ when electric impulses travel along the membrane of a neuron.

Potassium selective

The new nanosensor, developed by a team led by Zhong Chen and Daishun Ling at Zhejiang University and Taeghwan Hyeon at the IBS, overcomes these problems. The device consists of an optical potassium indicator (a dye molecule that fluoresces in the presence of K+) embedded in mesoporous silica nanoparticles shielded by an ultrathin layer of a potassium-permeable membrane. This membrane is very similar to the potassium channel in brain cells, and the pore size of the nanoparticles prevents other cations (including Na+) from reaching the indicator. This means the device captures K+ ions exclusively, and can detect their presence at concentrations as low as 1.3 micromoles per litre. Thanks to this high sensitivity, the researchers were able to spatially map sub-millimolar variations of extracellular K+ in three different regions of the mouse brain: the hippocampus, amygdala and cortex.

After injecting the nanosensors into various locations within the brain of a test mouse, the team electrically stimulated the mouse's hippocampus to induce an epileptic seizure and recorded the nanosensors' optical responses. They then compared these readings with those obtained from simultaneous measurements made using conventional electroencephalography (EEG). They found that in localized epileptic seizures, the extracellular K+ concentration increases from the hippocampus to the amygdala and cortex over time, while in generalized seizures it increases almost simultaneously in all three brain regions.

The researchers say that these results back up the widely accepted view that electrical stimulation in the hippocampus first involves the adjacent brain area and then propagates through the entire brain. “We expect that our multipoint K+ measurements in freely-moving animals will be a useful technique in neuroscience for examining the functional connections between sub-regions of the brain, as well as neuronal activities occurring in disorders like epilepsy,” they tell Physics World.

Towards whole-brain imaging

The team plan to use their device to detect how seizure activity spreads through the entire brain during an epileptic seizure. Looking further ahead, they would also like to develop tissue-penetrating, near-infrared-emitting K+ sensors that could be used to precisely detect epileptic foci. “Such a device might help in the diagnosis and treatment of epilepsy and even reduce the need for surgery,” they explain. “If loaded with antiepileptic drugs and coated with nanocomposites that can be disrupted by elevated K+ levels, these nanosensors might even allow for highly localized and on-demand drug release at the point of a seizure,” they say.

The device, which is described in Nature Nanotechnology, might also be adapted to detect cations other than K+ with high sensitivity and specificity, they add.

Microneedle patch combines cold plasma and immunotherapy to treat melanoma

An interdisciplinary research team headed up at the University of California, Los Angeles (UCLA) has pioneered a new, minimally invasive approach to skin cancer treatment. The technique uses a novel microneedle patch to facilitate the delivery of cold plasma to tumours – and could make immunotherapy more effective for treating melanoma (PNAS 10.1073/pnas.1917891117).

Prolonged survival

As part of the work, the researchers engineered a thumb-sized patch containing more than 200 hollow-structured microneedles. They used the patch to deliver cold atmospheric plasma, a unique type of ionized gas that can kill cancer cells, to tumour tissues in mice with melanoma. The microneedles also deliver immunotherapeutics – immune checkpoint inhibitors – directly to the tumour.

Treatment with the patch significantly delayed tumour growth in the mice and prolonged survival. In addition to inhibiting growth of the targeted tumour, the researchers found that the technique was also capable of reducing the growth of tumours that had already spread to other parts of the body.

As senior author Zhen Gu explains, through this thumb-sized device, cold plasma can efficiently trigger cancer cell death, which initiates a tumour-specific immune response. This immune response is further augmented by the immunotherapeutics released from the device.

“We found that this local device can inhibit the growth of the tumour and prolong the survival of the mice,” says Gu. “More importantly, it could also trigger the systemic immune response to inhibit the growth of distant tumours. The study is also the first to demonstrate that cold plasma can be used in synergizing cancer immunotherapy.”

Clinical potential

Although Gu believes that immunotherapy shows great promise in treating cancers, he stresses that several challenges still remain – for example, the fact that immune checkpoint blockade therapy “overall has low objective response rates and is also associated with systemic toxicities”. In response to these ongoing limitations, he reveals that he and the rest of the research team were motivated by a desire to engineer approaches capable of boosting the overall efficacy of cancer immunotherapy.

“The local device we reported could enhance the anti-tumour efficacy and potentially minimize the side effects related to immune checkpoint blockade therapy,” Gu says.

Moving forward, Gu is confident that the new strategy holds potential for clinical treatment of cancer. However, on a more cautious note, he stresses that although the research team obtained several promising results in preclinical models, the new technique will have to go through further testing and approvals before it could be used in humans.

“Factors to be considered in the future study are the control of the cold plasma – such as time, intensity and frequency – and optimization of the microneedle device, as well as the dosage of cancer immunotherapeutics,” he says.

And this strategy could potentially be extended beyond melanoma treatments. “Integrated with other treatments, this minimally invasive method could be extended to treat different cancer types and a variety of diseases,” Gu adds.

Once a physicist: Julie Bellingham

What sparked your initial interest in physics?

I’d always been interested in physics and how the world worked, and this was encouraged by my parents and some good teachers at school. My dad was always keen on space and astronomy, and on one memorable holiday we visited Kennedy Space Center and saw a shuttle launch, which was amazing. This strengthened a lifelong fascination with astronomy. I’d always loved art and design too, but while choosing my A-levels I felt I had two possible life paths: pursuing design-led subjects like architecture, or physics. I chose the latter.

You studied at the University of Warwick – what did your MPhys and PhD focus on?

Initially I focused on astronomy, but broadened in my undergraduate degree as I got interested in other areas of physics. By the time I chose my PhD, I was more interested in practical applications, and also thinking of future job options, so I studied surface science, using analytical techniques to study semiconductors.

What made you leave academia, especially after eight years at the STFC?

After my PhD, I decided that pure research wasn’t for me. I had found doing my own research a little isolating, as I was often checking on experiments at antisocial hours and working alone. I had spent a lot of time finding ways to make the PhD students in the department interact more and learn about each other’s work, so I wanted to bring people together through science – and I found that at the Science and Technology Facilities Council.

My first role there was managing EU projects, working with people across Europe who were developing new technologies for particle accelerators. Later I became the industry liaison for CERN and other European facilities.

How did you get interested in art, design and gardening?

I really enjoyed working at the STFC, but I always felt that there was a deep interest in design which I hadn’t fully explored. Making the change happened quite suddenly. One night I woke up at about 2 a.m., turned to my husband and said I wanted to retrain as a garden designer. A year before this, he had swapped from physics to making films, so he was an enthusiastic supporter of my new direction.

I spend a lot of time sketching, making things and exploring art museums. Garden design is a great way to bring these three things together, but at school I didn’t even know it was a career option. I use specific artists as inspiration for designs – I love the work of Sol LeWitt, Olafur Eliasson and Dan Flavin, to name a few.

What were some of the challenges in setting up your own business?

Getting to grips with being self-employed and the time it takes to retrain was the first challenge. I graduated from the Cotswold Gardening School with a distinction and a prize for being the top student of the year. But a big problem with something like garden design when you start out is the lack of a portfolio. It can take a year to redevelop a garden, and longer still for the plants to reach their full potential. I decided a way around this was to design and build a show garden, so I applied to the Royal Horticultural Society Malvern Spring Show. I felt like the design was a merging of my old career and new career, as the garden showcased astronomical redshift and the role of telescopes in our understanding of the universe. It featured sculptures representing telescopes, and I used swathes of colour to represent the redshift as the flower colours merged from yellows, through orange, to reds. The garden was part-funded by the Institute of Physics and the Royal Astronomical Society, and had a huge reach. I won a silver medal for the garden and was able to use that experience to attract new clients.

How has your physics background been helpful in your current work, if at all?

With garden design, some people think I’m a gardener, but actually most of my time is spent at a drawing board and it can be quite technical. For example, working out how to manage gradient changes while keeping the design intact and working precisely to scale in a technical way is definitely easier thanks to my physics training. Though I’ve been inspired by art, I’m also inspired by science in my designs. My own garden design layout is based on CERN data, and another project was inspired by constellation patterns. I definitely have a different perspective thanks to my background.

Any advice for today’s students?

Physics provides a great grounding and is such an interesting subject. Despite my dramatic career change, I’m pleased I studied physics. Don’t worry if you’ve not decided on a future career path, as there are many options open to you. Keep your eyes open for opportunities. When thinking about your career, don’t just think about the type of work you like, but also what lifestyle you’d like and where you might like to live. Certain careers can open (or close) doors to lifestyle options, so I would recommend considering everything as a whole when looking for work.

Sound waves boost the modulation speed of quantum cascade lasers

Lasers that switch on and off billions of times per second are the backbone of optical communications networks, but this feat is only possible at certain laser frequencies. A team of researchers from the University of Leeds and the University of Nottingham in the UK has now taken a step towards extending this frequency range by using sound waves to modulate the emission of a terahertz (THz) quantum cascade laser. In this new technique, the modulation rate is limited only by the duration of the acoustic pulse applied, meaning that rates of up to hundreds of gigahertz (GHz) are possible. This could allow data to be transmitted at 100 gigabits per second – around a thousand times faster than current wireless systems.

Although THz signals have a shorter effective range than the microwave signals used in today’s wireless data systems, their higher frequency means they can carry more information over the same time frame. That makes THz radiation promising for ultra-fast, short-range data exchange – for example across hospital campuses, between research facilities at universities or in some satellite communications.

For such a system to be practical, however, the THz laser would need to be modulated extremely quickly – around 100 billion times a second. That has proved challenging, but researchers led by John Cunningham in Leeds and Tony Kent in Nottingham say they have found a way to do it using acoustic waves in THz quantum cascade lasers (QCLs).

Applying acoustic waves

Unlike standard semiconductor lasers, which generate photons when electrons and holes recombine inside a material with a given band gap, QCLs consist of dozens of thin layers of semiconductors. Each electron that travels through the device “cascades” through a series of quantum wells in these layered semiconductors, emitting multiple photons as it does so, at frequencies set by the structure of the semiconductor layers. It is this emission process that must be controlled in order to modulate the laser output.

One option is to modulate the output electronically, by applying a bias electrical current to the QCL. However, the maximum modulation rate for such a system is roughly 30 GHz at best. The Leeds and Nottingham team instead used an approach based on acoustic waves. “We generated the acoustic waves by impacting a pulse from another laser onto an aluminium film in the QCL,” explains Aniela Dunn, a research fellow at Leeds and the study’s lead author. “This caused the film to expand and contract, sending mechanical waves through the device.”

Perturbation theory analysis

In effect, Dunn says that these mechanical waves cause the energy levels of the electrons within the QCL’s quantum wells to vibrate. The team explain this process using perturbation theory: by assuming that the effect of the acoustic pulses is small, they can treat the changes to the energy levels as a perturbation to the QCL’s behaviour under normal operating conditions. This enables them to predict how the number and frequency of the QCL’s emitted photons will change in response to the applied acoustic wave.

“By approximating the amplitude of the acoustic (strain) pulse generated in the QCL, we can calculate the effect the sound pulse will have on the energy levels of the QCL,” Dunn tells Physics World. “It is thus possible for us to determine the subsequent effect of the pulse on the voltage across the device and the THz power emitted from it.”
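
The flavour of such a first-order calculation can be sketched with a textbook toy model (my own illustration, not the team's actual QCL computation): a small perturbing potential V shifts energy level n by the expectation value ⟨ψn|V|ψn⟩. For an infinite square well of width L with a small linear "strain" tilt V(x) = εx, symmetry makes this shift εL/2 for every level:

```python
import math

def psi(n, x, well_width=1.0):
    """Infinite-square-well eigenfunction (units with hbar = m = 1)."""
    return math.sqrt(2.0 / well_width) * math.sin(n * math.pi * x / well_width)

def first_order_shift(n, eps, well_width=1.0, steps=10000):
    """First-order energy shift <psi_n| eps*x |psi_n>, via the trapezoid rule."""
    h = well_width / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * psi(n, x, well_width) ** 2 * (eps * x)
    return total * h

# Every level shifts by eps * L / 2 = 0.005 for eps = 0.01, L = 1
print(first_order_shift(1, 0.01), first_order_shift(2, 0.01))
```

The real QCL calculation involves time-dependent strain and many coupled wells, but the principle is the same: treat the acoustic disturbance as a small correction to the unperturbed level structure.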

Application areas

Although the team could not stop and start the flow of photons from the QCL completely, they could control its light output by a few percent – a “great start”, according to Cunningham. In the future, they hope to refine the technique to gain more control over the photon emissions. Dunn adds that they also hope to integrate the sound-generating structures into the THz laser itself, so that no external sound source is needed.

In addition to high-speed communications, Dunn thinks acoustic modulation could come in useful in areas such as high-resolution spectroscopy, active mode-locking and frequency comb synthesis. “Ultimately, it allows us to explore the interaction between THz sound and light waves, something that could have real technological applications,” she says.

The researchers report their work in Nature Communications.

Gravity-defying ski moguls and snowmaking in warmish weather

Last week I was at the Grandvalira ski area in Andorra, having a great time whizzing down the slopes in glorious sunshine. Sitting on the chairlifts, my mind naturally turned to the physics involved in maintaining the pistes – and I became fascinated by two things: the moguls that quickly form on an initially smooth slope; and the snowmaking systems that help keep the slopes full of snow.

Nights are very busy at a ski area. Snow cannons hiss as they take advantage of the colder night-time air, while huge bulldozer-like “piste bashers” smooth out the slopes for the next day’s skiing.

After a few hours of being skied on, however, a smooth slope can be covered in moguls – bumps that are about a metre or so across and tens of centimetres high. These tend to form on steeper slopes where skiers control their speed by weaving back and forth across the piste, rather than plunging straight down. Each turn pushes some snow up into a pile and it is easier for successive skiers to weave around these piles, further building them up into a mogul field.

Uphill motion

Like sand dunes, moguls struck me as being ripe for investigation by physicists who study granular materials. A quick Internet search during a coffee break revealed a remarkable thing about moguls – they tend to migrate uphill. That is the conclusion of physicist and skier Dave Bahr and colleagues, who a decade ago trained a camera on an ungroomed mogul field in Colorado for an entire ski season.

The team’s explanation for this apparent defiance of gravity is that most skiers turn as they go over the top of a mogul. This is because turning involves straightening your knees to lift your upper body, which is much easier to do with the upward boost of a mogul. After the turn, the skier will usually skid along the downhill side of the mogul, pushing some snow down the slope and onto the next mogul. This has the effect of moving both moguls uphill.

In the study, Bahr (along with Tad Pfeffer and Ray Browning) found that the moguls move uphill at a rate of about eight centimetres per day.

Unusually warm

Today, most large ski areas worldwide rely on snowmaking technology to supplement natural snowfall. While snowmaking is seen as a potential solution to global warming, it is not a panacea. It was unusually warm when I was in Andorra, and for several nights the temperature remained well above freezing – even at an altitude of 2100 m. This meant that the snowmaking cannons lining the slopes were not used overnight.

That got me wondering how snowmaking works – and whether it could be used at air temperatures at or even above 0 °C. Apparently, the answer is yes, under the right conditions.

There are several types of snow cannons but the basic principle involves combining cold water with compressed air and blasting out a mist at high velocity. The sudden drop in pressure at the nozzle cools the water, causing tiny ice crystals to form and grow.

Pellets, not flakes

This nucleation and growth resemble what happens in a snow cloud, but at an accelerated rate, so the process produces pellets of snow rather than flakes. While this is no good for powder skiers, such snow is apparently better than natural snow for making groomed pistes.

According to the Italian snow-cannon maker TechnoAlpin, snow can be made when the ambient “wet bulb temperature” drops below −2.5 °C. The wet bulb temperature accounts for relative humidity, which is an important factor in snowmaking: the higher the relative humidity, the harder it is to make snow. A wet bulb temperature of about −2.5 °C occurs at an air temperature of about 0.5 °C and 50% relative humidity, so snowmaking is possible without freezing air temperatures. Indeed, if the relative humidity is a desert-like 25%, a snow cannon could operate at a balmy 2 °C.
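
These figures can be sanity-checked with Stull's empirical wet-bulb approximation – an assumption on my part, since the article does not say how TechnoAlpin computes the threshold, and the fit is only accurate to roughly ±1 °C:

```python
import math

def wet_bulb_stull(temp_c, rh_percent):
    """Approximate wet-bulb temperature from air temperature (deg C) and
    relative humidity (%), using Stull's 2011 empirical fit."""
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

def snowmaking_possible(temp_c, rh_percent, threshold_c=-2.5):
    """TechnoAlpin's quoted rule of thumb: wet bulb below -2.5 deg C."""
    return wet_bulb_stull(temp_c, rh_percent) < threshold_c

# Both of the article's examples sit below the -2.5 deg C threshold,
# while a humid 5 deg C day does not
print(snowmaking_possible(0.5, 50), snowmaking_possible(2.0, 25),
      snowmaking_possible(5.0, 80))
```

Within the fit's error bars this reproduces the article's examples: both a 0.5 °C, 50%-humidity night and a 2 °C, 25%-humidity night fall below the snowmaking threshold.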

While snowmaking won’t help when temperatures remain above freezing for sustained periods of time, the technology does allow ski areas to make large amounts of snow when it is cold. This snow can then be used to extend the skiing season into the spring. Indeed, some ski areas are looking at storing large quantities of snow over the summer so they can open earlier in the autumn.

There are downsides to snowmaking, however. It takes large amounts of energy to operate cannons and some systems use chemical additives to boost nucleation. And then there is the huge amount of water required – which is often supplied by building alpine reservoirs that can disrupt local hydrology.

Quantum computing from the ground up

Chris Monroe is a physicist at the University of Maryland, US, and the co-founder and chief scientist of IonQ, a start-up that is developing quantum computers using trapped ions as qubits. He recently spoke to Margaret Harris about the rise of quantum computing, and how his previous experiences – including stints in the labs of two physics Nobel laureates – fed into his decision to start the company in 2015.

How did you get interested in quantum computing? Because you really got into the field at the very beginning…

Yes, I’ve been in this field for more than 25 years, and I have to say it sort of landed in my lap. I did my PhD work on cold atomic gases at the University of Colorado, Boulder, in the group of Carl Wieman and Eric Cornell, who went on to share the physics Nobel prize in 2001 for making the first Bose–Einstein condensate. But I always knew that at some point I might want to get a “real job”, and atomic physics is good in that respect because it involves working with practical things like optics and lasers and photonics. There’s a lot of equipment involved, and I was attracted to the technical nature of the work.

After I got my PhD, I went on to do a postdoc. The postdoc system is a little stressful, because it’s a temporary job, and you’re in your late 20s, and everyone else you know is building their career. But postdocs are also wonderful opportunities to try something random, because there’s very little at stake if it doesn’t work out. And in my case, I didn’t have to move very far. I stayed in Boulder and went down the road to work with David Wineland at the National Institute of Standards and Technology (NIST).

In the early- to mid-1990s, Wineland’s lab was basically the atomic-clock division of the US government. He is an amazing researcher, and NIST allowed him to do academic-type research within this government lab. So instead of building the clocks that people use as a real time standard, we were doing research on how you might make better clocks. One of the crazy ideas we had was that by entangling multiple atoms or ions, we could make our clock run faster (and therefore more accurately), so we came up with a scheme to entangle two ions.

As it turns out, that meant we were building a quantum gate for a tiny quantum computer. But we didn’t know those terms at the time. I didn’t hear about quantum computing until the summer of 1994, when I learned of Peter Shor’s algorithm for factoring large numbers using a quantum computer.

When Wineland and I saw Shor’s article, it entirely changed our direction of research. We were still at NIST doing atomic clocks, but now we were also doing quantum computing, and government agencies got very interested in seeing what we needed to do to scale it up. Everything we did in that laboratory was ground-breaking, and Wineland went on to win the Nobel prize in 2012 largely based on his work in the 1990s. It was a pretty cool beginning to my career.

Several years passed from when you first heard about quantum computing to when you set up IonQ. What made you decide “This isn’t just a research topic anymore, I’m going to start a company”?

For the first 10 years or so, there was a lot of research to do. Picking out the best type of quantum gate. Deciding which atomic species to use. Working out how well the lasers perform and how big our quantum systems could be before they got killed by noise. It took a long time, not just for me, but for the whole community to do those experiments. And in terms of scaling things up from a handful of qubits or gates, really nothing happened for a long time apart from high-level proposals for scaling. Although we were starting to understand the limitations of the physics, we weren’t ready to do the engineering.

Beginning in 2010, though, we started to narrow things down and make decisions, and by 2014 or 2015 we had our first tiny quantum computer. And that was interesting, because after we initialized and calibrated the system in the morning, we stopped doing atomic physics in the afternoon. Once the system was seeded, we could stop tinkering with the lasers, go over to the PC that was controlling the experiment and run algorithms.

After that, a couple of things happened. I had a long-standing collaboration with a colleague from Duke University, Jungsang Kim. He’s an engineer, and we recognized that we kind of filled each other’s gaps. I’m a physicist, and I’d been in this field for a long time, but he has great experience in what’s called systems engineering, and he thinks differently about physical systems than I do. We realized that, together, we could do some amazing things.

Around that same time, in mid-2016, IBM built a five-qubit superconducting quantum computer and put it in the cloud so that people could use it. At first, that seemed a little goofy to us, because five qubits is really small – we’re not going to learn anything from that. But it was more than just a publicity stunt. I mean, it was a publicity stunt, but it also allowed anybody to use the system, which was huge – a genius move.

As it happened, the system we were building was also exactly five qubits, but in terms of performance it was much better than IBM’s. This is because atomic qubits are nearly perfect and exactly replicable; because we could connect a pair of the atom qubits with reconfigurable laser beams; and because we could run “deeper” circuits. So people started approaching us, saying that they’d tried to use the IBM cloud, but it didn’t work for what they wanted to do – could we help? Initially, it was more like a scientific collaboration: we started running applications and algorithms that other people would send us. But we realized that to go to the next step required such a serious dose of engineering that it probably couldn’t be done at a university. And that was the genesis of IonQ.

What do you know now that you wish you’d known when you started IonQ?

One thing I’ve learned has to do with the computer science aspect of our systems. Moving the operations around in our algorithms so they’re mapped to our system in an optimal way turns out to be incredibly powerful – much more powerful than I imagined. If I’d known that two or three years ago, I would have hired more computer-science theorists.

As an analogy, the first PC I ever used had four kilobytes of memory. Now we have hundreds of gigabytes, and that means we waste it – we take pictures that are way too high resolution and store them on our hard drive because memory is a commodity. It’s cheap, it’s easy. There’s no reason not to waste it. But at the early stages of any technology, including quantum computing, you have to squeeze out every ounce of efficiency you can, because it might mean the difference between running an application and not being able to. In 10 or 20 years, I hope that qubits and gates will be more of a commodity, and then we can be more wasteful with them. But to get there we have to extract as much efficiency as possible. That’s not really physics – it’s quantum computer science, and it’s a very rare skillset right now.

That leads nicely to my last question. Do you have any advice for today’s physics students?

With a physics degree, you can pretty much do anything. The challenge is that the doors are not open for you to the same extent as they are in some other fields. When you study engineering, for example, it’s almost like going to business school. You make connections, there are job fairs and the doors open for you to go work for these big engineering firms. You can still do that as a physicist. It just won’t come to you. You have to go find it.

So the advice I would give is to keep your options open. If you do a PhD, it may seem as though you’re narrowing your options, because you’re working for several years on just one thing. But if you can solve a problem at the forefront of your field, even a narrow one, you learn how to solve problems in any field. Going in-depth in physics will help you no matter what you want to do, even if it’s something unrelated, such as finance.

We all learn quantum physics as physics students, but in recent years this field has taken on a whole new life. It’s not an esoteric theory anymore, something that only describes tiny effects in extreme forms of matter. It’s going to form the basis for a whole new type of technology. So I think that, because physicists have a bit of a leg up in this area, they should go all-in.

Protein aggregation goes catalytic

A new mathematical model that describes how proteins self-assemble into stacks known as amyloid fibrils could aid the search for drugs to treat diseases like Alzheimer’s and Parkinson’s. The model, developed by researchers at the University of Cambridge in the UK and Lund University in Sweden, is based on rate equations and confirms that the microscopic reaction steps behind protein aggregation occur in a catalytic manner, similar to reactions involving enzymes.

The self-assembly of proteins into amyloid fibrils is a natural biological process known to be involved in several diseases, including Type II diabetes as well as Alzheimer’s and Parkinson’s. By unravelling the reactions that lead these biofilaments to proliferate, researchers hope to gain insights into how to treat such diseases. Ultimately, such knowledge could even make it possible to prevent these illnesses by developing drugs that inhibit the protein self-assembly processes at their heart.

Two-step catalytic process

Researchers know that biofilaments assemble from protein monomers in a slow “primary” nucleation-reaction process, followed by a faster process that allows the filaments to elongate. Secondary reaction processes – including nucleation of new filaments on the surface of existing ones, filament breakage and branching – frequently accompany these first two steps.

Previous work has shown that the elongation step is best described as a two-step catalytic process controlled by the so-called Michaelis-Menten rate law, which was first employed in 1913 to describe the rates of enzyme-driven reactions. The secondary nucleation step – which is known to be crucial for the aggregation of an Alzheimer’s-associated protein called Aβ40 – is described by a similar rate law and is also catalytic.
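For readers unfamiliar with it, the Michaelis-Menten law relates the rate $v$ of a catalysed reaction to the substrate concentration $[S]$: the rate rises linearly at low concentration and saturates at $v_{\max}$ once the catalyst is fully occupied, with the crossover set by the Michaelis constant $K_M$:

```latex
% Michaelis-Menten rate law for a catalysed reaction
v = \frac{v_{\max}\,[S]}{K_M + [S]}
```

In the aggregation context, the fibril end (for elongation) or the fibril surface (for secondary nucleation) plays the role of the enzyme, and the free monomer that of the substrate.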

Catalyzed aggregation

A team led by Tuomas Knowles and Alexander Dear has now built on these insights to develop simple, highly general mathematical equations that model the kinetics of amyloid fibril formation and changes in protein aggregate concentrations over time. The team also applied this model to experimental data on the aggregation of Aβ40.

Experiments had hinted that Aβ40 begins aggregating at interfaces – for example, the surface of a liquid solution or the glass wall of the test tube in which the experiments were performed. Dear says that the close match between their rate-law model and earlier in vitro results supports this hypothesis, indicating that interfaces may play an enzyme-like role in promoting protein aggregation. “The new model sheds light on the nature of aggregation by showing that the individual steps in the aggregation process (filament growth, and both primary and secondary nucleation of new filaments) are typically all catalytic in nature,” he tells Physics World. Previous models, he adds, did not take this into account.

Enzyme-like effects

A simpler form of the model also enables the researchers to see how enzyme-like “saturation” in each step affects the overall proliferation of amyloid fibrils, as the various catalytic surfaces become fully occupied at high protein concentrations. “We can thus determine the protein concentrations at which each reaction step saturates from the measured fibril growth curves,” Dear says.
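To illustrate the kind of kinetics involved, the sketch below integrates a minimal set of rate equations for the fibril number concentration P and fibril mass concentration M, with a Michaelis-Menten-like saturating monomer dependence in the secondary-nucleation term. This is not the team’s published model: the functional forms are simplified and every parameter value is invented for illustration, not fitted to Aβ40 data.

```python
def fibril_growth(m_tot=5.0, k1=1e-4, k2=1e-2, k_plus=1.0,
                  K_M=2.0, n_steps=20000, dt=0.01):
    """Euler integration of a minimal amyloid-aggregation model.

    P: fibril number concentration; M: fibril mass concentration.
    The secondary-nucleation term saturates, Michaelis-Menten-style,
    as the catalytic fibril surface becomes fully occupied.
    All parameter values are illustrative, not fitted to data.
    """
    P, M = 0.0, 0.0
    for _ in range(n_steps):
        m = m_tot - M                  # free monomer, by mass conservation
        sat = m / (1.0 + m / K_M)      # saturating monomer dependence
        dP = k1 * m + k2 * sat * M     # primary + secondary nucleation
        dM = 2.0 * k_plus * m * P      # elongation at both fibril ends
        P += dP * dt
        M += dM * dt
    return M
```

Running this traces the characteristic sigmoidal aggregation curve: a lag phase while the first nuclei form, rapid autocatalytic growth as secondary nucleation takes over, then a plateau as the free monomer is exhausted and M approaches m_tot.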

The discovery that primary nucleation of Aβ40 is catalysed by interfaces also has implications for the way proteins aggregate in the human body. Since such heterogeneous nucleation is highly environment-dependent, Dear says that relatively small changes in the biochemical environment in the brain or body could dramatically change the propensity for amyloids to form. Ultimately, the researchers, who report their work in the Journal of Chemical Physics, believe that their result could be used to develop potent inhibitors of amyloid formation, and thus new treatments for diseases like Alzheimer’s.

Smart band-aid senses and treats bacterial infections

Smart band-aid

Antimicrobial resistance is a serious threat to global health, driven largely by incorrect treatment regimens that lead to the misuse and overuse of antibiotics. It is therefore important to develop rapid and cheap ways of detecting bacteria, along with their sensitivity or resistance to antibiotics. This would allow rapid diagnosis of infections, tailored prescription of drugs and, in turn, a more informed and sustainable use of antibiotics.

Sense-and-treat

In response to these demands, a team of researchers from the Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, has developed a paper-based adhesive plaster, or band-aid, based on a “sense and treat” approach. The plaster senses the presence of bacteria by changing its colour and releases antibiotics when necessary. In addition, the paper is able to distinguish between certain drug-susceptible and drug-resistant bacteria, therefore informing the best disinfection strategy (ACS Cent. Sci. 10.1021/acscentsci.9b01104).

The plaster works like a traffic light. It appears green under normal conditions, while it turns yellow in the presence of drug-sensitive bacteria and automatically releases antibiotics to kill them. If the bacteria are drug-resistant, the paper becomes red and photodynamic therapy (PDT) can be used instead. PDT is performed by shining 628 nm light onto the plaster, which induces the production of reactive oxygen species that kill or weaken the resistant bacteria.
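The traffic-light logic can be summarised in a few lines of code. This is purely illustrative: the real device signals by a visible colour change and is read by eye, so the colour labels and action strings below are assumptions made for the sketch, not part of the published work.

```python
def choose_treatment(plaster_colour: str) -> str:
    """Map the plaster's colour read-out to the suggested action.

    Illustrative only: the actual plaster is read visually,
    not through software.
    """
    actions = {
        "green": "no bacterial infection detected; no treatment needed",
        "yellow": "drug-sensitive bacteria; antibiotic is released by the plaster",
        "red": "drug-resistant bacteria; apply photodynamic therapy (628 nm light)",
    }
    if plaster_colour not in actions:
        raise ValueError(f"unrecognised colour: {plaster_colour!r}")
    return actions[plaster_colour]
```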

Ad-hoc chemistry

The plaster exploits chemical compounds that change colour in the presence of bacteria. In particular, the material is soaked in bromothymol blue, a pH indicator that turns from green to yellow when exposed to the acidic environment created by bacterial metabolism. The paper also contains a pH-sensitive metal–organic framework, a compound that acts as a cage containing antibiotic molecules. Upon increased acidity, the framework breaks open and the encapsulated antibiotic is released.

A wide class of resistant bacteria produce β-lactamase, an enzyme that destroys certain types of antibiotic molecules. In response to this, the plaster is also equipped with nitrocefin, an antibiotic that shows a distinct colour change from yellow to red when interacting with β-lactamase, hence signalling the presence of drug-resistant bacteria and the need for a therapy other than antibiotics.

Based on these principles, the researchers have shown that the plaster accelerates wound healing in mice infected with both sensitive and resistant bacteria. They monitored the status of the wound over three days, and clearly observed improved tissue regeneration following the disinfecting action of either antibiotics or PDT.

The team also demonstrated the potential of the paper device in a fruit preservation model, in which it was attached onto an infected tomato that successfully recovered after three days of sensing and treatment.

The future of diagnosis and treatment

The team’s novel adhesive plaster offers great potential for the future of diagnosing and treating wound infections. It is cheap, easy to use, and effective against certain types of bacterial infections. Extending the method to practical and point-of-care applications will be the next challenge towards widespread use. Technologies like this could contribute significantly to the fight against antimicrobial resistance.

The 10 most important future big-science facilities in physics

On Saturday I headed to Oxford for a one-day meeting about big science in physics that was organised by the St Cross Centre for the History and Philosophy of Physics at Oxford University. Held in the Martin Wood Lecture theatre at the Department of Physics, the meeting covered the past, present and future of big science. The audience was made up of academics as well as the general public, with 200 people having registered to attend.

First up was Helge Kragh from the Niels Bohr Institute in Copenhagen, who gave a fascinating talk about what we define as big science and how that term has changed over the past century. Kragh’s focus was on the Manhattan atomic-bomb project and what followed regarding the development of large particle accelerators.

Continuing the particle-physics theme was Isabelle Wingerter-Seez from the Laboratoire d’Annecy-le-Vieux de Physique des Particules, which belongs to the French National Centre for Scientific Research, who spoke about the beginnings of CERN and the discovery of the Higgs boson in 2012 at the lab’s Large Hadron Collider.

Big science doesn’t get much bigger than the ITER experimental fusion reactor, which is currently being built in Cadarache, France, and ITER director general Bernard Bigot was on hand to give a status update on how the facility is on track for first plasma in December 2025 (though deuterium-tritium operation won’t begin until 2035). Bigot noted that this year and the next will be critical for ITER as it begins assembling all the components that have been manufactured by its seven members, all of which has to be carried out “to millimetre precision”.

In the afternoon session, Carole Jackson, director of the Netherlands Institute for Radio Astronomy (ASTRON), discussed big projects in astronomy, particularly the Square Kilometre Array and the European LOFAR network.

I gave the final presentation of the day, in which I spoke about the coming decade of new facilities and the possible science that may result. I presented my “top 10 facilities to watch”, which were, in no particular order: the James Webb Space Telescope; ITER; the European Extremely Large Telescope; the European Spallation Source (ESS); the Extreme Light Infrastructure; Hyper-Kamiokande; the Square Kilometre Array; the Long Baseline Neutrino Facility; the Electron-Ion Collider; and a future electron-positron collider.

I also offered a few predictions for the decade ahead (“brave”, one person said to me afterwards), including that the International Linear Collider will definitely get the go-ahead, that CP violation in neutrinos will be measured at the Hyper-Kamiokande facility in Japan and at the Long Baseline Neutrino Facility in the US, and that astronauts will return to the Moon. I was the odd one out in being a journalist, not an active academic or researcher, but I hope that gave the audience a somewhat independent perspective.

One common theme in questions from the audience (of which there were many) was how big-science facilities could become, well, more green. There are some facilities that are working towards this, notably the SESAME synchrotron in Jordan and the ESS, both of which use or will use renewable energy to power the accelerator complex.

But given that big science is getting bigger and ever-more important, energy sustainability needs to become a much greater consideration for those planning, designing and building these future facilities.

Copyright © 2025 by IOP Publishing Ltd and individual contributors