Gravitational waves from the merger of two neutron stars were observed by the LIGO Livingston detector on 25 April 2019 – according to an international team of astrophysicists in the LIGO and Virgo collaborations. This is the second time that a signal from such an event has been seen and the merger is puzzling astrophysicists because it appears to have created an object with “unusually high mass”.
In a paper that has been submitted to The Astrophysical Journal Letters, the researchers say that the merger occurred about 500 million light-years away.
The signal (dubbed GW 190425) was not recorded by the LIGO Hanford detector, which was not operating at the time, nor was it detected by the Virgo detector.
Gamma-ray pulses?
A recent preprint from an independent team of astronomers in Russia suggests that two gamma-ray pulses were also emitted during the April 2019 merger. No other electromagnetic radiation associated with the event has been reported.
LIGO comprises two 4 km long interferometers in the US – one in Livingston, Louisiana and the other in Hanford, Washington. The Virgo interferometer stretches over 3 km in the Italian countryside near Pisa. In August 2017 the two LIGO detectors spotted gravitational waves from the merger of two neutron stars – the first time ever that such an observation was made. A signal was not seen in Virgo, but this non-detection allowed LIGO–Virgo scientists to better locate the merger in the sky.
The merger created a huge “kilonova” explosion and astronomers observed signals across a wide range of the electromagnetic spectrum from radio waves to gamma-rays. This was the first-ever “multimessenger astronomy” observation involving gravitational waves and it has already shed light on important issues in astrophysics including how heavy elements are created in the universe.
No light show
While GW 190425 was not accompanied by a spectacular light show, it does have an intriguing feature – the object created in the merger has a mass of about 3.4 Suns. This is much greater than 2.9 solar masses, the maximum combined mass of neutron stars in known binary systems in the Milky Way.
“From conventional observations with light, we already knew of 17 binary neutron star systems in our own galaxy and we have estimated the masses of these stars,” explains LIGO team member Ben Farr. “What’s surprising is that the combined mass of this binary is much higher than what was expected,” adds Farr, who is at the University of Oregon.
This high mass had led to early speculation that GW 190425 could have been the result of a merger of a neutron star and a black hole – making it the first such event to be observed. That, however, has since been discounted by the LIGO–Virgo team. Surabhi Sachdev of Penn State University explains: “What we know from the data are the masses, and the individual masses most likely correspond to neutron stars”. She also says that the unusually high mass of the system “could have interesting implications for how the [neutron-star] pair originally formed”. Indeed, the LIGO–Virgo team suggest that new models of pair formation need to be developed in order to explain the observation.
Independent of the LIGO–Virgo team, Alexei Pozanenko of the Russian Academy of Sciences and colleagues point out that a gamma-ray spectrometer on board the INTEGRAL satellite observed two weak gamma-ray bursts 0.5 s and 5.9 s after gravitational waves from GW 190425 were detected by LIGO. INTEGRAL also observed a gamma-ray burst shortly after the 2017 neutron-star merger, as did the Fermi Gamma-ray Space Telescope.
Pozanenko and colleagues suggest that Fermi did not detect bursts in 2019 because the Earth was between it and the merging neutron stars when the signal arrived. This, they point out, could help further localize GW 190425 in the sky. Because it was only detected by LIGO Livingston (and not detected by Virgo), the location of the merger can only be constrained to within 20% of the sky – a much greater area than the 0.04% of the sky that the 2017 event was limited to. This uncertainty in location could explain why astronomers have not been able to observe electromagnetic radiation from GW 190425.
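To put those percentages in perspective, here is a quick back-of-the-envelope conversion into square degrees, using only the figures quoted above (a rough sketch; the collaborations’ published sky maps give the definitive areas):

```python
import math

# Total sky area: 4*pi steradians, converted to square degrees
full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # about 41,253 deg^2

# Localization areas implied by the percentages quoted above
gw190425_area = 0.20 * full_sky_deg2     # LIGO Livingston only
gw170817_area = 0.0004 * full_sky_deg2   # the 2017 event, seen by both LIGO sites

print(f"Full sky:            {full_sky_deg2:,.0f} square degrees")
print(f"GW 190425 (20%):     ~{gw190425_area:,.0f} square degrees")
print(f"2017 event (0.04%):  ~{gw170817_area:,.1f} square degrees")
```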
While the LIGO–Virgo team say that there are no “confirmed” electromagnetic observations associated with GW 190425, they do acknowledge the INTEGRAL observation in their submitted paper.
Tiny plastic particles are showing up in all sorts of places and have become a worldwide problem. Scientists are in the early stages of determining how microplastics, defined as particles smaller than 5 mm, enter the environment and are transported – often up to hundreds of kilometres – into previously pristine ecosystems. Researchers presented several observations on microplastic migration at the annual meeting of the American Geophysical Union (AGU) in San Francisco, California, in December.
Scientists group microplastics into three categories. The first are spherical microbeads, which typically come from facial cleansers and similar personal-care products. The second are irregularly-shaped microfragments that have degraded from larger plastic objects. The third type are hair-like microfibres shed from synthetic fabrics.
Fields to mountains
Meredith Sutton, an undergraduate student at the University of Virginia, US, told researchers at the AGU meeting that microplastics are expected to be found downstream of urban centres. However, they are increasingly found downstream of agricultural areas as well. Sutton and her colleagues suspected that fertilizers made from sludge at wastewater treatment facilities might be a source, so they conducted a controlled experiment in the midwestern state of Nebraska. After rainfall, they found that much higher concentrations of microplastics (mainly fragments) ran into streams from fields treated with sludge-based fertilizers than from unfertilized control plots. The sludge treatment process does not screen for or filter out microplastics, Sutton said.
Julia Davidson of the University of Nevada–Reno, US, described finding microplastics on surface snow in remote areas of the Sierra Nevada mountains that straddle the California-Nevada border. Microfibres, especially, showed up 250 to 320 km from the nearest human habitation, and were probably borne there by wind. Davidson, an undergraduate, noted that microplastics do not degrade quickly in the alpine environment and would be difficult to remove due to their tiny size and the vast extent of the mountains. She also pointed out that the Sierra Nevada snowpack is a major source of drinking water for humans and animals. The health effects of microplastics, she said, are not yet known.
According to Davidson, hers is the second study of microplastics in snow. The first, published earlier in 2019, found particles in both the Alps and the Arctic. Both studies suggest that microplastics are widespread and that particles are travelling long distances, Davidson said.
Coming out in the wash
Plastic-based microfibres enter the environment whenever synthetic fabrics are laundered, but especially when detergents are used. According to Emmerline Ragoonath-De Mattos, an undergraduate at Columbia University, US, who conducted a study at Columbia’s Lamont-Doherty Earth Observatory, the drying part of the cycle contributes the most microfibres. The Columbia study found both fibres and fragments in zooplankton – a food source for fish – in Long Island Sound, which lies east of New York City and downstream of wastewater treatment facilities.
Other research projects described at AGU uncovered large accumulations of microplastics in Virginia floodplains, US; in Chinese mangrove estuaries; in Vietnam’s Mekong River and its delta; in almost all Japanese rivers; and in the ocean. The convener of the sessions, Monica Arienzo of the Desert Research Institute in Nevada, says that research into microplastics is in its infancy. “There are still a lot of questions about plastic sources, transport mechanisms, and deposition in the environment,” she told Physics World.
Dogs trained to sniff out early signs of cancer in human breath are probably detecting large molecules captured on aerosols rather than more volatile substances present as gases, say researchers in the US and Canada.
Joachim Pleil and colleagues at the US Environmental Protection Agency collaborated with cancer-screening company CancerDogs in Canada to identify the chemicals that distinguish samples classified by dogs as either positive or negative for cancer. While liquid chromatography mass spectrometry (LCMS) detected a suite of chemical features that shared a statistically significant correlation with the dogs’ diagnoses, gas chromatography mass spectrometry (GCMS) turned up no significant relationship. The result bodes well for potential preclinical screening techniques, as it is simpler to capture aerosols and other particles from breath than it is to sample the component gases (J. Breath Res. 10.1088/1752-7163/ab433a).
Spotting cancers early is key to successful treatment, but sometimes the disease can spread before clinically detectable symptoms begin to appear. Routine screening might one day identify such preclinical cases, and could be used to test members of high-risk groups – such as firefighters, who are exposed regularly to carcinogens – even if it proves unrealistic to screen the population at large.
Specially trained dogs have already been used for this purpose with some success, identifying signs of cancer in breath samples from apparently healthy human subjects. As dogs cannot explain their methods, however, we still do not know exactly how they arrive at their conclusions.
“The primary disadvantage of using dogs is that they can’t talk, so we don’t know what compounds they are cueing in on,” says Pleil. “Ultimately, if we want to figure out the biochemistry of cancer and convince ourselves that dogs are really finding preclinical biomarkers, then we need some serious laboratory investigations.”
For Pleil and colleagues, “serious laboratory investigations” meant subjecting breath samples collected on hospital-style face masks to analysis by LCMS and GCMS. The masks had been worn by firefighters enrolled in a cancer-screening trial run by CancerDogs, and subsequently had been unanimously but independently categorized by the four dogs involved in the trial as either “case” (meaning cancer) or “control” samples.
“What we want to find out is whether there are any specific compounds that light up as different between cases and controls, and try to figure out if they make any sense biochemically,” explains Pleil.
From the start, the researchers were not expecting to gain much insight from any gas-phase compounds identified using GCMS, since volatile molecules usually escape from the mask material before the dogs get near them. Their hunch was borne out when a statistical analysis of the 44 distinct chemical features detected in the samples failed to observe any significant correlation between the dogs’ classifications and the laboratory results.
The team’s analysis of larger, semi-volatile or non-volatile molecules detected by LCMS was more fruitful, however. This technique identified 345 distinct compounds, whose relative abundance varied according to whether they came from samples that had been classified as case or control during the canine trials. Even so, the explanatory power of the LCMS results was weak, suggesting that, when categorizing the samples, the dogs were sensitive to chemicals or chemical combinations not yet recognized by the instrumental analysis.
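The paper’s statistical pipeline is not spelled out here, but the kind of per-feature comparison described above can be sketched as follows. Everything in this snippet – the synthetic abundance table, the choice of a Mann–Whitney test and the Bonferroni correction – is an illustrative assumption, not the authors’ actual method:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical abundance table: rows = breath samples, columns = chemical features
n_case, n_control, n_features = 20, 20, 345
case = rng.lognormal(mean=0.0, sigma=1.0, size=(n_case, n_features))
control = rng.lognormal(mean=0.0, sigma=1.0, size=(n_control, n_features))
case[:, :5] *= 5.0   # pretend the first five features really are elevated in 'case' samples

# Test every feature for a case-vs-control abundance difference
p_values = np.array([
    mannwhitneyu(case[:, j], control[:, j], alternative="two-sided").pvalue
    for j in range(n_features)
])

# Crude Bonferroni correction for testing hundreds of features at once
hits = np.flatnonzero(p_values < 0.05 / n_features)
print(f"{hits.size} of {n_features} features pass the corrected threshold: {hits}")
```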
“We can’t speculate about what the dogs are really detecting, but our interpretation is that there is likely an overlap between the laboratory methods and canine olfaction,” says Pleil. “Keep in mind that instruments do not use the same analytical ‘senses’ as dogs. The dogs could be cueing on large reactive compounds that might not survive chromatography, or they could be detecting patterns that may not be the same in the instruments due to differences in sensitivity.”
To some extent, this uncertainty can be attributed simply to insufficient data. The researchers hope that this knowledge gap can be filled by increasing the number of samples tested and applying multiple analytical techniques. Extending the investigation to include chemical species not yet detected should strengthen the team’s predictive model.
For context, metamaterials have seen a surge in research interest over the past two decades, with scientific publications per year rising from 66 in 2000 to just short of 17,000 in 2019. It’s easy to see why. Metamaterials are artificial, structured materials with unique electromagnetic, acoustic, thermal and/or mechanical properties engineered through design and fabrication (and typically consisting of periodic arrays of subwavelength resonators). As such, scientists are able to access material and device functionalities beyond those found in nature, opening up diverse applications in markets such as next-generation energy technologies, defence and security, aerospace, healthcare and advanced communications, among others.
Meta together
Postgraduate researchers (PGRs) are drawn to XM2 for a variety of reasons, though chief among them is the prospect of a very different postgraduate experience to the traditional “lone scholar” PhD pathway. For starters, XM2 is all about harnessing the knowledge and experience of the collective – a cohort-based learning model that embeds PhD students within a local support network made up of their peers, academic supervisors and postdoctoral scientists. Equally important is the structured training programme at XM2, built around continuing science education and professional skills development, as well as extensive industry and public engagement.
“Our young researchers demonstrate fantastic spirit and engagement, resilience and the drive to make a difference,” claims Alastair Hibbins, director of Exeter’s Centre for Metamaterial Research and Innovation (CMRI), which fosters XM2. “We have built a training programme that provides our graduates with the skills and understanding to become tomorrow’s leaders in industry, as well as in our academic institutions.”
That’s a view echoed by XM2 programme manager Anja Roeding, whose goal is “a more holistic PhD programme for early-career scientists”, one that encourages innovation across subjects and where all students can make the most of their individual strengths. “It is fair to say that the first year, in particular, is quite a challenge for our PGRs,” she explains. “This is when they have to learn how to juggle the diverse training elements, assessments and their research activities – though they have plenty of support from their peers and the staff who accompany this journey of professional development and personal growth.”
Connected community
A case in point is Elizabeth Martin, a fourth-year XM2 student whose research focuses on microfluidic metamaterials for lab-on-chip technology. Specifically, Martin is investigating the collective behaviour of ferromagnetic elements in so-called magnetoelastic membranes – continuous periodic structures with applications in next-generation medical diagnostic devices.
After completing an undergraduate degree in maths and physics at Exeter, Martin was initially attracted to the university’s PhD programme by the opportunity to specialize in metamaterials R&D. What really sold her on XM2, however, is the extensive training programme that sits alongside and supports students’ core PhD research activities.
“The soft-skills training has been really useful, focusing on areas where PhD students typically lack confidence or experience,” says XM2 researcher Elizabeth Martin. (Courtesy: University of Exeter)
The front-loaded training programme kicks in from day one, with new students challenged to produce a high-quality piece of research during their first six months in XM2. “It’s a bit daunting, but the benefits are clear,” explains Martin. “The tight deadline means you start doing really useful research in those first few weeks while you’re still finding your feet.”
In many ways, this six-month project looks like a PhD in miniature, with students required to produce a report on their findings in the form of an academic paper. There’s also a viva with senior XM2 researchers, plus a formal presentation that the students give to their year-group peers. “Ultimately, the process is all about collective support, guidance and learning by doing – plus getting into good research habits early on,” says Martin.
Another unique aspect of XM2 is the emphasis on wider professional skills training and development. Throughout their four-year PhD project, students will undertake a range of soft-skills courses spanning topics such as project management; public speaking; teaching in higher education; creativity and innovation; and intellectual property and business awareness.
There’s additional support from the University of Exeter’s specialist cognitive behavioural coaching (CBC) unit. Staff from the CBC team run one-to-one or small group sessions with XM2 students to foster continuous improvement on operational issues such as work/life balance, upwards management, perfectionism, and work planning and scheduling.
“The soft-skills training has been really useful, focusing on areas where PhD students typically lack confidence or experience,” says Martin. “It’s a very different perspective. We’ve even had presentation and interview training courses delivered by professionals with backgrounds in theatre and TV. There’s also been support on lecturing and voice therapy, covering stuff like body language and how to present yourself effectively in a minute.”
Network effects
Fellow researcher Joe Shields is currently in the second year of a PhD in which he is developing tunable metasurfaces for a range of photonics applications – for example, flat lenses and next-generation displays. Like Martin, he chose XM2 because of the extensive training programme and the community-centric research environment. “I wanted to be part of a bigger, diverse group of people working across many different specialisms, with the opportunity to make connections and learn more widely,” he explains.
Alongside their core research, XM2 students undertake a busy schedule of science training during the first two years of their PhD programme. Topics include metamaterials physics, materials engineering, device production, characterization and programming skills. Shields, for his part, has also completed courses on machine learning and computational techniques for engineers.
Collective endeavour: XM2 gives PGRs the opportunity to learn more widely as part of a bigger, diverse group of scientists working across many different specialisms. (Courtesy: University of Exeter)
“I was a teacher for a couple of years after completing my MSc in physics at Oxford, so I’d already done a lot of the soft-skills training by the time I joined XM2,” says Shields. “However, the emphasis on outreach at XM2 has given me the opportunity to present my research to the general public and people of different ages and backgrounds. That experience has been invaluable.”
Shields is also a big fan of the “hit the ground running” induction period for all new XM2 arrivals. “The initial six-month research project definitely worked well for me. It was a packed schedule and covered almost all the fabrication techniques I need for my PhD. It’s great to jump straight into it – making devices and testing them with the support of the wider group.”
That peer-to-peer learning and support is hard-wired into the day-to-day activity of XM2, notes Shields. “We have a large office – around 30 PhD students comprising new entrants and second-year students like myself. A lot of our dialogue centres on collaboration, whether that’s help with a fabrication technique, a specialist piece of software or a more direct research interaction.”
It’s good to talk
Another central feature of XM2 is the monthly cross-cohort group meetings, a platform for PhD students from different year groups to share insights about their research and professional development activities – e.g. industry and research visits, conferences and external training. During her third year, Martin acted as chair for one of these groups, planning and organizing meetings for students with a range of research interests, domain knowledge and experience.
“The cross-cohort groups are by the students for the students – we have total freedom to self-organize and prioritize our respective agendas,” explains Martin. That could mean getting university staff along to do some CV training or running a workshop on interview best practice. There’s also plenty of informal peer-to-peer instruction in areas of common interest like data analysis and programming skills. “The group can be a great starting point for deeper learning and an excellent way of working out who knows what within the XM2 programme,” Martin adds.
In fact, taking responsibility early is a recurring theme among XM2 students. Martin herself spent last summer supervising an undergraduate student research project – an experience that will come in useful should Martin ultimately decide to pursue an academic research career.
“I submitted the original research proposal, secured funding from the XM2 project budget, and even supervised the application and interview process,” explains Martin. “I would never have had this opportunity if I wasn’t part of XM2.”
All told, it seems there’s plenty of scope for PGRs to push the boundaries within the XM2 programme. “You’re not spoon-fed here,” concludes Martin. “It’s down to you to decide which opportunities you want to pursue.”
Tiny particles of gold used in medical diagnostics, cosmetics and food can degrade inside biological cells despite the metal’s low reactivity, say researchers in France. The result, which contradicts earlier assumptions, proves that cells can metabolize gold even though the element is not essential for their function.
Gold in various forms has been used in medicine since antiquity. Thanks to the metal’s unique optical properties, gold nanoparticles are today used in several therapeutic applications, including photothermal therapy and radiotherapy. They are also useful in diagnostic techniques such as photoacoustic imaging and two-photon luminescence.
Once ingested or absorbed through the skin, gold nanoparticles mostly end up in the liver and spleen. There, they are internalized by macrophages and sequestered inside lysosomes – the “waste recycling centre” of cells. Although little was known about their long-term fate, gold’s status as a “noble” metal – that is, chemically inert – suggested that the nanoparticles would remain intact within these cellular structures for indefinite periods.
Now, however, a team led by Florence Gazeau and Florent Carn from the University of Paris, Sorbonne University and the University of Strasbourg has proved otherwise. The researchers tracked how gold nanoparticles measuring 4 to 22 nm in diameter evolve inside cells over a period of six months. They studied the particles inside primary fibroblasts – a cell type that is ubiquitous in the body and has a low proliferation rate, which means that the nanoparticles remain in the same cell and can be followed for months at a time.
“Significant transformations”
Thanks to a combination of electron microscopy imaging and measurements of the expression of 18,000 genes, Gazeau, Carn and colleagues found that the nanoparticles undergo significant transformations after just a few weeks.
For example, the smallest nanoparticles are degraded first by an enzyme called NADPH oxidase, which produces highly oxidizing reactive oxygen species in the lysosome. The researchers also observed a recrystallization process in which biomineralized gold particles 2.5 nm in size self-assemble into leaf-shaped nanostructures. Such structures had previously been observed in patients with rheumatoid polyarthritis who had been treated with ionic gold or “gold salts”.
The researchers suggest that, because of this similarity, gold salts and gold nanoparticles may be broken down by the same metabolic pathway. This unexpected result could help clinicians better evaluate the toxicity of these particles in the future and determine how the body eliminates them. The study also shows that gold, whatever its initial form, can be metabolized by mammals despite not being necessary for their survival, they add.
When I returned to work last week after the holiday break, I had a belated present waiting on my desk. In December, I agreed to review a pair of graphene headphones made by a Canadian start-up, ORA, which Physics World contributing editor Belle Dumé wrote about back in 2016 when ORA was developing graphene components for loudspeakers. A hitch in the trans-Atlantic post delayed the headphones’ arrival, but now they were here and ready for testing.
As regular readers of Physics World know, graphene is extraordinarily stiff and lightweight. Speaker components made from graphene will therefore vibrate more rapidly (for a given energy input) than components made from materials like polyethylene terephthalate (PET) or cellulose. They will also warp less, which is good news for audiophiles, since warping distorts the speakers’ sound.
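A rough way to see the argument is to compare the speed of sound in the diaphragm material, which scales as the square root of stiffness over density: the stiffer and lighter the membrane, the higher its resonances and the less it flexes at audio frequencies. The comparison below uses ballpark textbook values (illustrative only – ORA’s membranes are graphene-oxide composites whose measured properties will differ):

```python
import math

# Ballpark material properties (illustrative values, not ORA's measured data)
materials = {
    "PET film":          {"E": 3e9,  "rho": 1380},  # Young's modulus (Pa), density (kg/m^3)
    "pristine graphene": {"E": 1e12, "rho": 2200},
}

for name, props in materials.items():
    c = math.sqrt(props["E"] / props["rho"])  # speed of sound in the material (m/s)
    print(f"{name:18s}: sqrt(E/rho) ~ {c:,.0f} m/s")

# The stiffness-to-density ratio favours graphene by more than an order of magnitude,
# which is the basis of the claimed faster, lower-distortion diaphragm response.
```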
At least, that’s what the company claims. Regular readers of Physics World will also know about graphene hype, whereby this two-dimensional form of carbon gets touted as the ideal ingredient in pretty much anything. According to graphene pioneer Konstantin Novoselov, who gave a lecture on “The First 15 Years of Graphene” at the Royal Society last October, the first commercial graphene product was a tennis racket. Since then, graphene has appeared (with, I suspect, varying degrees of usefulness) in several other consumer devices, including motorcycle helmets, bicycle wheels, fishing rods and a very expensive watch made by the Formula 1 supercar manufacturer McLaren.
ORA’s headphones stand out in this gaggle of graphene gadgets for two reasons. One is an endorsement from Novoselov himself, who notes that (unlike some “graphene-enabled” products), the ORA device contains a relatively high amount of actual graphene. The other reason, of course, is that (unlike the graphene motorcycle helmet, watch, etc.), I got to try the headphones myself.
For my first test, I chose the song “Sum” by the Swedish singer-songwriter Loney Dear. I picked it partly because I wanted to see if ORA’s headphones could transport me back to the sun-drenched festival where I first heard it, but mostly because Loney is cool and indie, and if I was going to pretend to be a music journalist for an afternoon, then by God I was going to do it properly.
I started out by listening to “Sum” on my usual ‘phones: a Soundcore Space noise-cancelling model that has seen me through a flight to Boston for last year’s APS March Meeting and numerous full-volume conversations from Physics World’s business development manager Ed Jost, who sits behind me. They’re a decent pair of cans, and I figured their over-the-ears design would make a good form-factor comparison to ORA’s device.
After a couple of repeats of “Sum” (and surprisingly few comments from my colleagues about “working” with my eyes closed), I figured I had a suitable baseline. Out came the ORA headphones, and once they paired with the Bluetooth on my mobile phone, I pressed “play” and waited to see if I could hear the difference.
Reader, I could. The opening arpeggios of “Sum” were noticeably clearer, the bass notably more solid, and though the ORA device lacks an active noise-cancelling feature, the shimmering wall of sound in Loney’s mesmerizing electronica meant I had no trouble tuning out the usual office noise (albeit at a time when Ed was temporarily out of the office).
In addition to Novoselov, the ORA headphones have also been endorsed by the Los Angeles Philharmonic conductor Gustavo Dudamel, who says they provide “a level of clarity I’ve only ever experienced from the podium in front of an orchestra”. I’ve never been on the podium in front of an orchestra, so I can’t judge Dudamel’s claim directly. I have, however, spent some time in the choir stalls, so I tried the ORA device on the final movement of Carl Orff’s Carmina Burana. I found the dynamic range between the chimes and timpani particularly fine, and – in contrast to my experience of standing behind the timpanist during a concert – my ears weren’t ringing afterward.
Special ingredient: ORA claims that its proprietary material contains 95% graphene oxide, and that the material’s properties translate into a better sound quality. (Courtesy: ORA)
At that point, I was ready to call the graphene headphones a success. But then my colleague Hamish Johnston – who describes his usual headphones as “a £10 pair of earbuds bought in an airport on my way to Canada”, and whose office pair is currently shedding little bits of black fuzz all over his desk – suggested I get a second opinion. Namely, his.
I handed the headphones over. A few minutes later he returned them. “Those are really good,” he said. He tested them with Miles Davis’ jazz classic “Kind of Blue”, and the main thing he picked out was the (intentionally) distorted sound of Davis’ trumpet in the opening. “That’s not something I’d notice on my usual tinny headphones,” he said. He also tested the Pink Floyd song “Fearless”, and noted that at the end, he could pick out the lyrics to “You’ll Never Walk Alone” as sung by Liverpool football fans. “That was a bit of a revelation,” he said. “I’m tempted to get a good pair of headphones now.”
Hamish suggested trying the headphones on someone with younger ears, so I sidled up to Physics World’s features editor Sarah Tesh. A few minutes later she appeared at my desk, headphones in hand. “Were they supposed to be noise-cancelling?” she asked. “Because I definitely missed that.” She reported that while the bass range was good, the higher tones sounded “a little bit tinny” on Billie Eilish’s “Bad Guy” and Oh Wonder’s “Hallelujah”, which she describes as “mainstreamy, Radio One-y alternative music”. “They’re quite nice, but I wasn’t overwhelmed by the experience,” she said.
The fourth tester was our production editor Emily Heming – chosen not only for her young ears, but also because (unlike the rest of us) she’s used to listening to music on high-end headphones. “I didn’t expect to notice any difference, but they’re really good,” she said, handing them back. Although she, too, wished for a noise-cancelling feature, and judged the ORA phones to be less aesthetically pleasing than her usual Sennheiser ones, she also thought they highlighted the eerie quality of her test track, which was NAO’s “Orbit”.
By now, people were queuing up to try the graphene headphones. Ed liked the way they handled the bass on Chase & Status’ remix of “Original Nuttah” but thought the headphones’ weight would make them uncomfortable on a long flight. Advertising sales manager Chris Thomas reported that the Wurzels’ “Combine Harvester” had never sounded so good. He also praised the “very deep bass, expansive mid-range and brilliant treble” in “Red Eyes” by The War on Drugs. “I heard previously undiscovered high notes that brought a smile to my face,” he said.
The last tester was Physics World’s editor-in-chief Matin Durrani. After trying the headphones on Mika’s “Billy Brown” (“my daughter put it in my playlist”) and a harpsichord piece by Carl Philipp Emanuel Bach, he declared himself pleased. “I’m no expert, but all the sounds felt distinct, particularly on the vocals,” he said. “The harpsichord sounded clear and sharp, too.” Though he cautioned that he would need to try other high-spec headphones before he could tell whether the graphene made a difference, his conclusion was positive. “There’s a real depth of sound to them,” he said. “I’d have a pair.”
Alas, these particular graphene headphones are now on their way back to ORA and the eager hands of another tester. However, if anyone wants to lend me a graphene watch instead…well…!
Concurrent chemoradiotherapy, in which chemotherapy drugs and radiation therapy are used together, is a standard treatment for many locally advanced cancers. However, this approach is associated with severe side effects, including nausea, vomiting, significant weight loss, and radiation-induced lung injury that can lead to hospitalization.
For decades, concurrent chemoradiotherapy has been administered using photon-based radiotherapy. Now, a US research team has investigated whether irradiation using protons instead of photons can reduce this toxicity, by reducing the radiation dose to normal tissues (JAMA Oncol. 10.1001/jamaoncol.2019.4889).
The retrospective study – led by researchers at Washington University School of Medicine in St. Louis and the University of Pennsylvania – included 1483 patients with non-metastatic, locally advanced cancer. Common tumour sites included head-and-neck, lung, brain, oesophagus/gastric tract, rectum and pancreas. All patients received concurrent chemoradiotherapy with curative intent: 391 undergoing proton therapy and 1092 photon therapy (1016 of whom received intensity-modulated radiotherapy). Both patient groups received a similar integral radiation dose to the planning target volume. However, proton therapy delivered a lower integral dose to tissues outside the target.
The researchers note that patients treated with protons were, on average, significantly older and had more medical problems than those receiving standard photon therapy. Despite this, they found that proton therapy reduced the number of 90-day severe adverse events that caused unplanned hospitalizations by almost two-thirds: from 301 (27.6% of patients) in the photon cohort to 45 (11.5%) in the proton group.
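As a quick check on those headline figures, the raw rates can be recomputed directly from the counts quoted above (unadjusted numbers only; the paper’s own analysis accounts for the differences between the two cohorts):

```python
# Counts and cohort sizes quoted above
photon_events, photon_n = 301, 1092
proton_events, proton_n = 45, 391

photon_rate = photon_events / photon_n   # severe 90-day adverse events, photon cohort
proton_rate = proton_events / proton_n   # proton cohort

print(f"Photon cohort: {photon_rate:.1%} of patients had an unplanned hospitalization")
print(f"Proton cohort: {proton_rate:.1%}")
print(f"Raw rate ratio (proton/photon): {proton_rate / photon_rate:.2f}")
```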
Proton chemoradiotherapy also led to significantly less decline in performance status during treatment, and a significantly lower risk of adverse events that impaired the patient’s daily activities.
“We observed significantly fewer unplanned hospitalizations in the proton therapy group, which suggests the treatment may be better for patients and, perhaps, less taxing on the healthcare system,” says first author Brian Baumann in a press release from Washington University. “If proton therapy can reduce hospitalizations, that has a real impact on improving quality-of-life for both our patients and their caregivers.”
Importantly, the team saw no differences in disease-free or overall survival between the two groups, implying that proton therapy is as effective as photon therapy for treating the cancer.
The researchers suggest that the study has three important implications for future research. First, the lower observed toxicity of proton therapy raises the possibility that its higher up-front cost may be offset by savings from reduced hospitalizations and enhanced productivity from patients and caregivers. Second, this lower toxicity offers an opportunity to combine proton therapy with intensified systemic therapy or dose-escalated radiotherapy, which could improve survival. Third, proton therapy may allow older, sicker patients to receive the most effective combined-modality treatments.
As such, the team concludes that prospective clinical trials of proton versus photon chemoradiotherapy are warranted to validate these results.
In May 2019 the UK went an entire fortnight without using any coal to generate electricity. The last time this happened, Queen Victoria was on the throne. From having had its first coal-free day in summer 2017 to recording its first coal-free week in May 2019, the UK has done an impressive job of weaning itself off the dirtiest fossil fuel. But as environmentalists cheer the good news and policy-makers give themselves a pat on the back, a terrible truth has come to light: biomass power plants – a key renewable-energy source and one of the main replacements for coal-fired power – are emitting more carbon dioxide from their smokestacks than the coal plants they have replaced. In its haste to get rid of coal, the UK may have inadvertently made global warming worse.
The logic behind biomass energy is simple. Trees and plants absorb carbon dioxide from the air, use photosynthesis to isolate the carbon, and then use it to build tree trunks, bark and leaves. But when the plant dies, it rots down and much of the carbon is released back into the atmosphere as carbon dioxide. “When we use biomass as an energy source, we are intercepting this carbon cycle, using that stored energy productively rather than it just being released into nature,” explains Samuel Stevenson, a policy analyst at the Renewable Energy Association in London.
Now as we all know, burning fossil fuels releases carbon from geological reservoirs, which would have remained locked up for many millions of years if left untouched. So switching from fossil fuels to biomass energy seemed like an obvious way for European Union (EU) nations to meet their obligations under the Paris climate agreement (signed in 2016). Back in 2009 the EU committed itself to 20% renewable energy by 2020 and included biomass on the list of renewable-energy sources, categorizing it as “carbon neutral”. Several countries embraced bioenergy and started to subsidize the biomass industry.
Currently around half the EU’s renewable energy is based on biomass – a figure that is likely to rise. “The benefit of biomass is that it can be implemented rapidly and uses the current energy infrastructure,” says Niclas Scott Bentsen, an expert on energy systems based at the University of Copenhagen in Denmark.
In the UK, the Drax Group has led the way with this green and leafy energy revolution. Over the last decade the Drax coal-fired power station in North Yorkshire, which produces around 5% of Britain’s electricity, has seen four of its six generating units being converted to run on biomass. Today Drax generates around 12% of the UK’s renewable electricity – enough for four million households.
A massive operation: Albert Hall-sized storage domes for wood pellets at Drax power station, North Yorkshire, UK. (Courtesy: Kate Ravilious)
A massive operation
Standing next to the train track at Drax in September 2019, I watched as 25 wagons of wood pellets were slowly disgorged into one of the four Albert Hall-sized storage domes. My guide told me that most of the pellets are made from the sawmill residues and waste left over from managed forestry in the US and Canada. This can include tree tops and limbs, misshapen and diseased trees not suitable for other use, and small trees removed to maximize the growth of the forest. Virtually every day shipments arrive in ports at Immingham, Hull, Newcastle or Liverpool, each carrying around 62,000 tonnes of wood pellets – enough to keep the boilers going for two and a half days. Unloading the ship takes three days and requires 37 freight train journeys.
You might think that the greenhouse-gas emissions associated with transporting the pellets over such a vast distance must be huge, but I’m told they make up a surprisingly small proportion of the supply chain emissions. “As long as wood fuels are transported by ship, the distance doesn’t matter too much,” says Scott Bentsen.
The size of the operation at Drax is absolutely staggering. In just under two hours an entire freight-train’s worth of wood pellets goes up in smoke. Although I know that the pellets are made from sawdust and forest thinnings, I’m still struck by how colossal the demand for timber must be in order to produce leftovers on this scale. However, Drax says that by creating a market for timber waste it is helping to prevent deforestation. “Using the low-grade material for wood pellets provides the landowners with additional income, making their land more profitable and helping to incentivize them to maintain and improve the forests, rather than using the land for something else,” says a spokesperson for Drax.
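The throughput figures quoted above are self-consistent, as a back-of-the-envelope calculation shows (using only the numbers given in this article):

```python
# Figures quoted above
shipload_tonnes = 62_000        # pellets carried per ship
burn_days_per_shipload = 2.5    # how long one shipload keeps the boilers going
trains_per_shipload = 37        # freight-train journeys needed to move one shipload

burn_rate = shipload_tonnes / (burn_days_per_shipload * 24)   # tonnes of pellets per hour
tonnes_per_train = shipload_tonnes / trains_per_shipload
hours_per_trainload = tonnes_per_train / burn_rate

print(f"Burn rate: ~{burn_rate:,.0f} tonnes of pellets per hour")
print(f"One trainload: ~{tonnes_per_train:,.0f} tonnes, burned in ~{hours_per_trainload:.1f} hours")
```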
According to Drax, the decision to move from coal to biomass has slashed the plant’s CO2 emissions by over 80% since 2012. “In that time, we moved from being western Europe’s largest polluter to being the home of the largest decarbonization project in Europe,” writes Will Gardiner, chief executive of Drax Group, on the company’s website.
However, those calculated savings rest on two key assumptions: first, that the carbon released when wood pellets are burned is recaptured instantly by new growth; second, that the biomass being burned is waste that would have released carbon dioxide naturally when it rotted down. But are those assumptions right?
Advocates of biomass energy claim that when forests are harvested sustainably, and the timber industry thinnings are used as fuel, the smokestack emissions are cancelled out by the carbon absorbed by forest regrowth. However, some scientists say that this carbon accounting simply doesn’t add up. “Wood bioenergy can only reduce atmospheric CO2 gradually over time, and only if harvesting the wood to supply the biofuel induces additional growth of the forests that would not have occurred otherwise,” says John Sterman, an expert on complex systems at Massachusetts Institute of Technology (MIT) in the US. The time needed for the regrowth to mop up the additional CO2 is known as the “carbon debt payback” time, and it is this that is hotly disputed.
High hopes
Sterman – who is keen to point out that his bioenergy research is independent, funded neither by the forestry and bioenergy industries nor by environmental groups – says he was initially optimistic about biomass energy. “The climate crisis is so dire that when we began our work, I dearly hoped that wood would prove to be part of the solution,” he says. But the more he looked into it, the more concerned he became.
Using a lifecycle analysis model, Sterman and his colleagues calculated the payback time for forests in the eastern US – which supply a large share of the pellets used in the UK – and compared this figure to the emissions from burning coal. Under the best-case scenario, when all harvested land is allowed to regrow as forest, the researchers found that burning wood pellets creates a carbon debt, with a payback time of between 44 and 104 years (Environ. Res. Lett. 13 015007). “Because the combustion and processing efficiencies for wood are less than coal, the immediate impact of substituting wood for coal is an increase in atmospheric CO2 relative to coal,” Sterman explains. “This means that every megawatt-hour of electricity generated from wood produces more CO2 than if the power station had remained coal-fired.”
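The logic of a carbon-debt payback time can be captured in a toy model: wood emits more CO2 per megawatt-hour up front, and that excess is only clawed back as the harvested stand regrows. The emission factors and regrowth curve below are invented round numbers, not the parameters of Sterman’s lifecycle analysis; the sketch simply shows how a payback time measured in decades can arise:

```python
import math

# Invented illustrative parameters -- not the values used in Sterman's model
CO2_COAL = 0.9         # tonnes CO2 per MWh of electricity from coal
CO2_WOOD = 1.3         # tonnes CO2 per MWh from wood pellets (lower combustion/processing efficiency)
REGROWTH_TIME = 100.0  # e-folding time (years) for the harvested stand to re-absorb its carbon

def excess_co2_from_wood(t_years):
    """Net extra atmospheric CO2 (t/MWh) from the wood pathway after t years of regrowth."""
    reabsorbed = CO2_WOOD * (1 - math.exp(-t_years / REGROWTH_TIME))
    return CO2_WOOD - reabsorbed

# Payback: the first year in which burning wood beats having burned coal instead
payback = next(t for t in range(1, 500) if excess_co2_from_wood(t) < CO2_COAL)
print(f"Toy-model carbon-debt payback time: ~{payback} years")
```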
Sterman stresses that he is not advocating a return to burning coal. “Coal and other fossil-fuel use must fall as soon and as fast as possible to avoid the worst consequences of climate change. [But] there are many ways to do that, with improving energy efficiency being one of the cheapest and fastest.”
However, biomass energy advocates say that Sterman’s carbon debt is a fallacy, created by assessing the forest stand by stand (referring to a group of trees planted at the same time and then harvested a few decades later) rather than viewing it at the landscape level. “What actually happens is that one part of the forest is harvested (typically 3–4%) while the rest of it grows (typically net growth after harvesting is about 0.7 to 1% per year), supported by active forest management,” says Stevenson in London.
But Sterman argues that the opposite is actually true. “Harvesting one part of a growing forest does not cause trees miles away to grow even faster,” he says. “The trees harvested for bioenergy would have continued to grow, thus removing more CO2 from the atmosphere. The faster a forest is growing, the greater the future carbon storage that is lost.”
It had been assumed that young trees mop up more carbon than old ones because they are fast-growing, but recent studies have revealed that ancient woodland growing in temperate regions takes up more CO2 than young plantations. This is because in some cases growth accelerates with age and CO2 absorption is approximately proportional to biomass (Nature 507 90). “Far from plateauing in terms of carbon sequestration at a relatively young age as was long believed, older forests (for example over 200 years of age without intervention) contain a variety of habitats, typically continue to sequester additional carbon for many decades or even centuries, and sequester significantly more carbon than younger and managed stands,” researchers write in the journal Frontiers in Forests and Global Change (2 27).
Cycles of felling: Managed forests are usually felled and replanted in sections, meaning that different areas are at different stages of regrowth. (Courtesy: iStock/Herzstaub)
From growth to rot
But even if old trees are continuing to draw down CO2, what happens when a tree dies? Current carbon accounting assumes that all the carbon from dead wood is released back into the atmosphere again. Removing forest thinnings and burning them to produce energy is therefore viewed as better than leaving them on the forest floor to rot. Indeed, Biomass in a Low-carbon Economy – a report produced in November 2018 by the UK Committee on Climate Change – states that “Unharvested, the maintenance of these carbon stocks in perpetuity is essential to ensure that the sequestered carbon does not re-enter the atmosphere.”
However, Sterman argues that this fails to take account of the entire system. “We need to consider the carbon stored in the soil too. Removing and burning ‘waste’ wood lowers the source of carbon for forest soils. This allows soils to become net sources of carbon to the atmosphere as bacterial and fungal respiration continue to release soil carbon into the atmosphere,” he says.
Mary Booth, an ecosystem ecologist and director of the Partnership for Policy Integrity in Pelham, Massachusetts, shares Sterman’s concerns. In 2017 she used a model to calculate the net emissions impact – the difference between combustion emissions and decomposition emissions, divided by the combustion emissions – when forestry residues are burned for energy. “It is the percentage of combustion emissions you should count as being ‘additional’ to the CO2 the atmosphere would ‘see’ if the residues were just left to decompose,” she explains. Her calculations revealed that even if the pellets are made from forestry residues rather than whole trees, combustion produces a net emissions impact of 55–79% after 10 years (Environ. Res. Lett. 13 035001). Even after 40 years her model shows that net emissions are still 25–50% greater than direct emissions. Like Sterman, Booth concludes that it takes many decades to repay the carbon debt, and that biomass energy can’t be considered carbon neutral in a timeframe that is meaningful for climate-change mitigation.
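Booth’s metric is straightforward to compute: if burning the residues emits C and leaving them to rot would have released D over the same period, the net emissions impact is (C − D)/C. A minimal sketch, with invented placeholder numbers rather than Booth’s modelled values:

```python
def net_emissions_impact(combustion_co2, decomposition_co2):
    """Fraction of smokestack CO2 that is 'additional' compared with letting residues rot."""
    return (combustion_co2 - decomposition_co2) / combustion_co2

# Hypothetical example: one tonne of residues emits 1.8 t CO2 when burned;
# left on the forest floor it would have released 0.6 t CO2 over the same decade.
print(f"Net emissions impact: {net_emissions_impact(1.8, 0.6):.0%}")
```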
Booth was so concerned by what she found that she co-ordinated a lawsuit against the EU in March 2019 (eubiomasscase.org), challenging its treatment of forest biomass as a climate-friendly renewable fuel. “Our position is that policies should count biogenic carbon emissions, and burning forest wood for fuel should not be eligible for renewable-energy subsidies,” says Booth. Currently she is waiting to hear if the court will accept the case.
Danish methods
But even if biomass energy isn’t 100% carbon neutral, there may still be a place for it in the energy mix. Currently around two-thirds of renewable energy in Denmark is provided by biomass, and it plays a vital role in keeping district heating systems running, particularly when the wind fails to blow.
In 2018 Scott Bentsen in Copenhagen calculated the carbon debt and payback time for a combined heat and power generation plant in Denmark. His results suggested that the carbon debt was paid back after just one year, and that after 12 years greenhouse-gas emissions were halved relative to continued coal combustion (Energies 11 807). These numbers are vastly different to the 40-plus years of payback time estimated by Sterman, so what makes Danish biomass energy different to the kind of process seen at Drax?
Scott Bentsen explains that there are a number of key differences. In this Danish study, the plant burns wood chips rather than pellets, which reduces processing energy. Furthermore, the wood is sourced locally from mixed forests in a cold temperate region, which have different growing characteristics from trees in a warm temperate region. And the energy it produces is maximized, producing both heat for local houses and electricity. “Obviously we shouldn’t cut down all forests just to burn them for energy purposes, but as long as we can harvest biomass in a way that doesn’t permanently jeopardize the forest’s carbon storage and its ability to grow, then it makes scientific and climatic sense to use biomass to displace fossil-fuel resources,” says Scott Bentsen. He believes that calculating the carbon payback time for a specific supply chain can play a significant role in helping to fine-tune the management practices and minimize emissions from individual biomass energy plants (Renewable and Sustainable Energy Reviews 73 1211).
Sterman accepts that there are arguments for using timber industry waste as a biofuel. “It’s not wrong to use sawmill residues for energy, but these sources are already fully utilized. There is not enough timber industry waste to allow the biomass-energy industry to grow without using more roundwood,” he explains.
Getting more out of biomass
Energy form: UK biomass plants burn wood pellets made from leftover material from managed forests. (Courtesy: iStock/srdjan111)
Simon McQueen Mason, a biologist from the University of York, UK, thinks that simply burning biomass is missing a trick. “Just using it to generate heat and electricity seems like a waste of a really good resource,” he says. Instead, McQueen Mason is investigating ways of making gas and liquid fuel from biomass, by getting micro-organisms and bacteria to munch their way through woody material and collecting the gas and liquid produced as the bugs digest it. Pilot plants using sugar cane residue are already proving promising and could provide a solution to the vexing problem of de-carbonizing the petrochemical industry. “We’ve done a good job in reducing emissions from heat and electricity, but we’ve barely touched our emissions from transport,” he says. “Biofuel is probably the only way we can decarbonize the aviation industry in the next hundred years or so.”
Others suggest that this “waste” biomass might have other more valuable uses (see box, above), but right now burning it is providing governments with a quick-fix way to reduce emissions. Despite the obvious smoke belching out of the boiler chimney at Drax, the EU’s classification of biomass as a renewable form of energy means that the UK government can ignore the carbon dioxide being produced here and assume it will be mopped up by trees on the other side of the ocean. Making use of this form of carbon “loan” has played a significant role in reducing reported emissions across the EU, with figures suggesting that the EU will exceed its target of reducing greenhouse emissions by 20% by 2020. This form of carbon accounting is undermined by guidance published in November 2018 by the UK Committee on Climate Change, which concluded that there is a limited supply of sustainable biomass and that “no further policy support (beyond current commitments) should be given to large scale biomass plants that are not deployed with carbon capture and storage technology”.
But even if living trees can claw back these carbon-dioxide emissions relatively quickly, there is a danger in front-loading our emissions in this way. “Regrowth is not certain,” says Sterman. “Forest land may be converted to other uses such as pasture, agricultural land or development. And even if it remains as forest, wild fire, insect damage, disease and other ecological stresses including climate change itself may limit or prevent regrowth, so that the carbon debt incurred by biomass energy is never repaid.”
A statistical solution to the infamous three-body problem of classical physics could explain why the LIGO-Virgo gravitational-wave detectors have observed numerous black-hole mergers.
The three-body problem involves three classical objects (such as stars, planets or even black holes) orbiting and interacting with one another. In principle, the behaviour of a three-body system at a future time in its evolution is uniquely determined by the initial conditions of the system. However, infinitesimal changes in these initial conditions can accumulate over time to become huge differences in outcomes. As it is never possible to measure initial conditions with infinite precision, it is therefore never possible to use them to predict long-term outcomes – a signature of deterministic chaos.
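That sensitivity is easy to demonstrate numerically. The sketch below integrates the same Newtonian three-body system twice, nudging one starting coordinate by one part in a billion, and tracks how far the two solutions drift apart (the units, masses and initial conditions are arbitrary illustrative choices, not the set-up used in the study described below):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
masses = np.array([1.0, 1.0, 1.0])   # three equal point masses, dimensionless units

def derivs(t, y):
    """y holds the 2D positions then velocities of the three bodies (12 numbers in all)."""
    pos = y[:6].reshape(3, 2)
    vel = y[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# An arbitrary bound starting configuration, plus a copy nudged by one part in a billion
y0 = np.array([-1.0, 0.0, 1.0, 0.0, 0.0, 1.2,  0.0, -0.3, 0.0, 0.3, 0.1, 0.0])
y0_nudged = y0.copy()
y0_nudged[0] += 1e-9

runs = [solve_ivp(derivs, (0.0, 40.0), y, rtol=1e-10, atol=1e-12, dense_output=True)
        for y in (y0, y0_nudged)]

for t in (5.0, 20.0, 40.0):
    gap = np.linalg.norm(runs[0].sol(t)[:6] - runs[1].sol(t)[:6])
    print(f"t = {t:4.0f}: position difference between the two runs = {gap:.2e}")
```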
A general closed-form solution of the three-body problem does not exist, but if the objects are very different in mass, the system can be approximated by two-body problems with small perturbations from the third object. Things become more daunting, however, when the three masses are similar. Now, astrophysicists in the US have built computer models of such “non-hierarchical” triple systems. Created by Nicholas Stone of the Hebrew University of Jerusalem (who did the work while at Columbia University in New York) and Nathan Leigh of the University of Concepción in Chile, the simulations could help to explain why LIGO–Virgo have seen an abundance of gravitational-wave signals from merging black holes.
Breaking up is easy to do
Researchers know that non-hierarchical three-body systems are not stable in the long term and tend to break apart. “Non-hierarchical triples tend to break up not very long after they’re born,” explains Stone. “That means the non-hierarchical triples out there now in the universe doing interesting things were formed in some special way.”
Stone and Leigh modelled one of two known mechanisms by which such a non-hierarchical triple system can form. “If you have a very dense star cluster, individual binaries will frequently scatter off single stars that pass by,” explains Stone. “If a single star comes close enough it can temporarily capture into the binary and make an unstable, non-hierarchical triple, which later disintegrates to leave a binary with different properties.”
The researchers ran several hundred thousand computer simulations of such events. After the third body is captured, the three-body system evolves through a series of long periods of time in which two stars effectively behave as a binary, while the third star interacts weakly from a distance. These periods of calm are interrupted by relatively brief, intensely chaotic “scrambles” when the three stars are all close together. Each scramble ends when one star – not necessarily the same star as before – gains enough energy to escape the centre of the system. This allows the other two to return to a relatively stable near-binary orbit. The energy is insufficient for the ejected star to truly escape the three-body system, however, and when it returns another scramble occurs. When one of the three stars does acquire escape velocity during a scramble, the three-body system breaks up, leaving behind the new binary.
Giving birth to binaries
Although it remains impossible to predict how a specific system will evolve, the new research presents a statistical distribution showing how chaotic non-hierarchical triple systems are most likely to break up. Moreover, it allows astronomers to draw inferences about the probable conditions in triple systems that gave birth to binaries with specific properties. “We can tell you that the orbits are generally elliptical rather than circular, with some probability distribution of ellipticities,” explains Stone.
“I think the results are very significant from a theoretical astrophysics point of view, but they have a more general application beyond this small group,” explains Adrian Hamers of the Max Planck Institute for Astrophysics in Munich. In particular, the creation of certain types of binary systems could explain why the LIGO–Virgo gravitational-wave detectors have observed several mergers of solar-mass black holes in binary systems.
Such a black-hole binary could be produced by an isolated pair of stars each collapsing to form a black hole. The problem with this, however, is that stars of the appropriate masses and separation to create such a black-hole binary are expected to merge as stars long before they become black holes. If the stars are further apart – and therefore have enough time to collapse to form black holes – then the black holes are expected to take longer than the age of the universe to merge.
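The timescale argument can be made concrete with the standard result for gravitational-wave-driven inspiral of a circular binary (Peters 1964), t = 5 c^5 a^4 / [256 G^3 m1 m2 (m1 + m2)]. Evaluating it for a pair of 30-solar-mass black holes at two separations (round illustrative numbers, not values from the study) shows why wide binaries never merge unaided:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
AU = 1.496e11        # m
YEAR = 3.156e7       # s

def merger_time_years(m1, m2, a):
    """Peters (1964) inspiral time for a circular binary of masses m1, m2 (kg), separation a (m)."""
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2)) / YEAR

m = 30 * M_sun  # two 30-solar-mass black holes (illustrative)
for a_au in (0.1, 1.0):
    t = merger_time_years(m, m, a_au * AU)
    print(f"a = {a_au:4.1f} AU: merger time ~ {t:.1e} years")
```

Only the tighter of the two orbits merges within the roughly 14-billion-year age of the universe, which is why some mechanism for hardening the orbit is needed.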
The presence of a third body could be the answer, as Hamers explains: “In these types of dynamical interactions, you can drive black holes into tighter orbits where they can merge within much shorter times.”
Johan Samsing of the Niels Bohr Institute in Copenhagen sees several important implications for the work – and one key shortcoming: the formulas work only for systems evolving chaotically through a series of scramble states, forgetting their initial conditions. At present, he says, a computer simulation is needed to calculate whether the system will enter such a state: “In about half of all interactions, the third object will promptly go through the binary and perturb it in a single passage,” he explains. “I would see this as a first step: if they can expand on their formalism and say which parts of phase space are chaotic and which are not, that would be a natural next step.”
Modern imaging techniques have yielded fresh insights into how insect larvae use powerful suction organs to move around fast-moving alpine waterways. The work, by researchers in the UK and Germany, reveals the internal structure of the organs in three dimensions and highlights features that could aid the design of bio-inspired tools for attaching to smooth and rough surfaces in wet and dry conditions.
The aquatic larvae of net-winged midges hold the record for attachment strength among insects. The six suction cups on the bottom of their streamlined bodies attach so tightly that forces greater than 600 times their body weight are needed to dislodge them. This allows the larvae to graze on algae in alpine streams and rivers that can flow as fast as three metres per second.
“The force of the river water where the larvae live is absolutely enormous, and they use their suction organs to attach themselves with incredible strength. If they let go they’re instantly swept away,” says Victor Kang, a zoologist at the University of Cambridge and lead author of the study. “They aren’t bothered at all by the extreme water speeds – we see them feeding and moving around in all directions.”
Multiple imaging modes
While the powerful adhesion and lifestyle of net-winged midge larvae have fascinated entomologists for decades, most work on their suction organs has used light microscopy. In the latest research, published in BMC Zoology, Kang and others at Cambridge teamed up with imaging experts at the Karlsruhe Institute of Technology to take a closer look. Together, they used confocal laser scanning microscopes, X-ray microtomography, scanning electron microscopes and interference reflection microscopy to study the morphology of the suction cups and record them in action.
The images revealed tens of thousands of microscopic hairs covering each suction disc. These hairs appear to increase contact and friction on rough surfaces, helping the larvae cling on in their high-drag environment by increasing resistance to shear forces. The team also identified a second type of specialized hairs on the rim of the disc, which may help form a tight seal on rough surfaces.
Behind the suction disc is a circular chamber with a cone-shaped central piston. When the piston is pulled away from the surface, the volume in this suction chamber increases. This reduces the pressure relative to the water outside, generating a powerful suction force. Imaging showed that the fibres of the muscles controlling the piston are characteristic of slow-moving powerful muscles, suggesting they are optimized for attachment strength, not speed.
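The underlying mechanics is just a pressure difference acting over the disc area. Plugging in rough, assumed numbers for the disc size, achievable pressure deficit and larval mass (none of these are measured values from the study) shows how a sub-millimetre sucker can resist hundreds of times the animal’s body weight:

```python
import math

# Rough illustrative values -- not measurements from the BMC Zoology paper
disc_radius = 100e-6      # m: suction disc radius of order 0.1 mm
pressure_deficit = 80e3   # Pa: sub-ambient pressure the piston can sustain
larva_mass = 0.5e-6       # kg: around half a milligram
g = 9.81                  # m/s^2

suction_force = pressure_deficit * math.pi * disc_radius**2   # F = delta-p * area
body_weight = larva_mass * g

print(f"Suction force per disc: ~{suction_force * 1e3:.1f} mN")
print(f"Body weight:            ~{body_weight * 1e6:.1f} uN")
print(f"Force / body weight:    ~{suction_force / body_weight:.0f}x (one disc; six attach in parallel)")
```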
Quick-release valve
The suction discs also have a feature that hasn’t been seen elsewhere: a V-shaped notch on the rim. When this opens, the suction chamber depressurizes rapidly, enabling the larvae to lift and reposition the sucker near another patch of algae.
Video footage of moving larvae on a glass surface, taken using interference reflection microscopy, demonstrated the animals’ fine control over the V-notch. Each notch has its own pair of dedicated muscles that can be used to open it independently at various speeds.
Imaging also suggests that flaps on the V-notch are arranged in a way that creates a valve when they are closed. This prevents water from flowing into the suction chamber during attachment, helping to maintain the pressure difference.
V for valve: Detailed views of the V-notch on a sample larva reconstructed from micro-computed tomography data. (a) A side view of a suction disc reveals its internal structures, including the V-notch (marked with an asterisk). (b) A top-down view of the V-notch valves, showing their flap-like structure (scale bars: 30 μm). (Image from 2019 BMC Zool. 4 10, Creative Commons Attribution 4.0 International License, http://creativecommons.org/licenses/by/4.0/)
Inspiration for engineers
Human-engineered suction cups only work well on smooth, clean surfaces, and the team hope their findings will enable them to develop more adaptable alternatives. “By understanding how the larvae’s suction organs work, we now envisage a whole host of exciting uses for engineered suction cups,” says Cambridge’s Walter Federle. “There could be medical applications, for example allowing surgeons to move around delicate tissues, or industrial applications like berry-picking machines, where suction cups could pick the fruit without crushing them.”
Jessica Sandoval, a materials scientist at the University of California, San Diego, who studies the suction cups of clingfish but was not involved in the present work, called the research an “exciting model” for bio-inspired suction cups. “Whether it is mimicking the microscopic microtrichia to mimicking the ‘V-notch,’ there is much to learn from the suction organ of this larvae,” she tells Physics World.
While wet environments generally make adhesion challenging, Sandoval believes that bio-inspired versions of these suction discs could make it easier to affix objects to wet, rough surfaces. Other applications are also possible. “Bioinspired suction cups that can withstand highly directional forces could be used across a wide variety of fields, from robotics to sensing,” she says. “In the field of robotics, such suction cups could be applied to manipulation or locomotion, especially in unstructured or rough terrain.”