Non-toxic supercapacitors go fully recyclable

Researchers at Empa in Dübendorf, Switzerland, have made a high-performance supercapacitor entirely from recyclable, non-toxic materials. The device can withstand thousands of charge and discharge cycles, resist pressure and shock, and work even at freezing temperatures, making it an environmentally friendly option for powering Internet of Things (IoT) devices.

The development of the IoT has heightened the already-difficult problem of electronic waste. Many electronic components for IoT applications are produced by the million, have a short service life and are powered by lithium-ion or alkaline batteries. While these batteries perform well, they contain toxic materials that need to be collected at the end of their life and then recycled using special processes.

Onto the compost heap

Researchers led by Gustav Nyström have now developed an alternative: an electric double-layer capacitor (EDLC) made from a disposable paper-like material. In the context of electronics, the researchers explain that “disposable” devices are those that can be thrown away in the trash, do not release toxic substances and ultimately fragment into small particles. “At the end of its service our new EDLC can be processed as a noncytotoxic compostable material or simply be left in nature,” says Xavier Aeby of Empa’s Cellulose & Wood Materials lab. “Indeed, the device disintegrates and loses 50% of its mass within nine weeks, leaving only a few visible carbon particles.”

The device, which can also store much more charge than a conventional capacitor, was made using a method called direct-ink writing. In this technique, viscoelastic gel ink is extruded line-by-line and layer-by-layer from a printer nozzle to form three-dimensional objects.

Nyström and colleagues’ ink contains only nontoxic and renewable materials: nanocellulose as a gelling and network-forming agent and as a substrate; carbon powder to make high-surface-area electrodes; glycerol as a plasticizer in the nanocellulose and as the electrolyte; and water as the solvent. The nanocellulose itself contains high-performance bio-nanofibres, which are a finer-scale version of the micron-sized fibres used to make paper.

While the recipe may sound relatively simple, it certainly wasn’t easy to develop. “It took an extended series of tests until all the parameters were right, and each of the components flowed reliably from the printer and the capacitor worked,” Aeby says. “As researchers, we don’t want to just fiddle about, we also want to understand what’s happening inside our materials.”

Ideal for the IoT

To test their device’s suitability for real-world applications, Nyström and colleagues printed an energy-storage circuit composed of six supercapacitors connected in series on a complex 3D surface. They found that this system stores electricity for several hours and is capable of powering a standard digital alarm clock.
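Wiring the cells in series trades capacitance for voltage: the cell voltages add while the total capacitance divides by the number of cells. A back-of-the-envelope sketch, using assumed illustrative cell values rather than figures from the paper:

```python
# Six identical supercapacitor cells in series: voltages add, capacitance divides.
# The cell values below are illustrative assumptions, not figures from the paper.
cell_voltage = 1.0        # V, assumed rating of one printed cell
cell_capacitance = 25e-3  # F, assumed capacitance of one printed cell
n = 6

stack_voltage = n * cell_voltage          # 6.0 V across the series stack
stack_capacitance = cell_capacitance / n  # ~4.2 mF for the stack

# Stored energy E = C V^2 / 2 grows with the square of the voltage,
# which is why stacking cells in series pays off.
energy = 0.5 * stack_capacitance * stack_voltage**2
print(f"{stack_voltage:.1f} V, {stack_capacitance*1e3:.1f} mF, {energy*1e3:.0f} mJ stored")
```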

In the future, Nyström and Aeby say that such capacitors could power a sensor or microtransmitter for hours after being charged using an electromagnetic field, making them ideal for IoT applications. The number of microdevices is also expected to increase as “point-of-care” diagnostics, such as self-testing devices for diabetics, come into more widespread use. “A disposable cellulose capacitor could be well-suited for these applications too,” Nyström says.

The researchers, who report their work in Advanced Materials, say they now plan to study ways to improve the amount of charge that can be stored in their supercapacitor – “ideally without sacrificing the non-toxicity requirement,” Nyström tells Physics World.

Thermodynamic origins of reaction heterogeneity in lithium battery electrodes

During battery (dis)charging, lithium (de)intercalation in electrodes is usually spatially non-uniform across multiple length scales. This phenomenon is a major impediment to battery performance and life, as it causes energy under-utilization and induces over-(dis)charging. While reaction heterogeneity is often attributed to mass-transport limitations, this webinar will highlight the important roles of thermodynamic factors, including elastic energy and phase transformations, the understanding of which is important for the development of mitigation strategies.

Through combined modelling and characterization, we elucidate how stress can destabilize the (de)lithiation front in single-crystalline and polycrystalline intercalation compounds and also provides a fundamental driving force for dendrite growth on the lithium-metal anode during electrodeposition. Stress relief thus offers a promising approach to improving reaction uniformity at the particle level. At the cell level, we discover that the reaction distribution within the porous electrode is strongly influenced by how the electrode’s equilibrium potential varies with the state of charge.

Two types of prototypical reaction behaviour emerge from common electrode materials, with significant impact on thick-electrode performance. This finding leads to an efficient analytical model for optimizing battery configurations in place of conventional battery-cell simulations.

Ming Tang is an associate professor in the Department of Materials Science and NanoEngineering at Rice University, US. After receiving a PhD in materials science and engineering from the Massachusetts Institute of Technology, US, he worked at Lawrence Livermore National Laboratory as a Lawrence postdoctoral fellow and then staff scientist. In 2013, Tang joined Shell Oil as a materials and corrosion engineer, and became an assistant professor at Rice University in 2015. His group is currently interested in applying combined modelling and experimental methods to understand mesoscale phenomena in energy-storage systems and use the acquired knowledge to guide microstructure design. Tang is a recipient of the 2018 Department of Energy Early Career Award.

New gate optimization strategy could boost efficiency in trapped-ion quantum computers

Physicists at the University of Maryland, US, and the quantum computing firm IonQ have found a new way to make a central operation in quantum computing more efficient. By slashing the laser power required to perform a so-called two-qubit gate, the collaborators showed that they could speed up the gate’s operation, thereby boosting the performance of their trapped-ion quantum computer.

The building blocks of a quantum computer are qubits – quantum bits that can be in any superposition of two states. In this work, the researchers used ions as their qubits. Rapidly oscillating electric fields trap the ions in a chain, making it possible to perform computational operations by shining laser light on one or more ions.

Two-qubit entangling gates

These computational operations generally divide into two types: single-qubit gates and two-qubit gates. While single-qubit gates are relatively simple to perform, two-qubit gates cost significant time and laser power. That has consequences for the overall efficiency of the quantum computer, says Norbert Linke, a fellow of Maryland’s Joint Quantum Institute (JQI) and a co-author of the current study. “The performance of two-qubit entangling gates typically limits the overall system since they require the most calibration time and introduce the most error,” Linke explains. “Improving these gates is therefore crucial to boost the performance and eventually scale up these systems.”

Ideally, gate operations would be fast, use minimal laser power, and leave the qubit in the desired state with no errors (maximum fidelity). In the real world, errors in two-qubit entangling gates come from having imperfect control over experimental parameters such as the frequency of the laser and the trapping field. The general technique to achieve the highest fidelity is therefore to take great care in designing the control signal (that is, the laser beam) that interacts with the ions, eliminating all undesirable effects by fine-tuning the parameters of the protocol. This constrains the design space for the control signal.

The IonQ–JQI team’s idea was to sacrifice a small amount of fidelity to save a significant amount of laser power – in some cases an order of magnitude. “We consider the constraints that don’t contribute significantly to the error processes when removed,” explains fellow co-author Yunseong Nam, quantum theory lead at IonQ and adjunct assistant professor at the University of Maryland. “This way, while we sacrifice a minimal amount of fidelity, we can significantly increase the size of the design space, which can then be used to better optimize the power requirement.”
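The idea can be illustrated as a toy constrained optimization: keep the constraint that sets the entangling phase exact, but relax the remaining constraints from strict equalities to small tolerances, enlarging the feasible design space so the optimizer can find lower-power pulses. The sketch below is purely schematic – random stand-in couplings and constraints, not the team’s actual gate-design code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_seg = 16                        # segments of a segmented gate pulse (toy model)
w = rng.uniform(0.5, 1.5, n_seg)  # toy couplings of each segment to the entangling phase
C = rng.normal(size=(4, n_seg))   # toy extra constraints (e.g. motional-mode closure)

power = lambda a: np.sum(a**2)    # pulse energy as a proxy for laser power
phase = {"type": "eq", "fun": lambda a: w @ a - 1.0}  # entangling phase is always exact

def min_power(tol):
    # tol = 0: enforce the extra constraints exactly.
    # tol > 0: allow a small residual (a small infidelity), which enlarges
    # the design space and lets the optimizer find lower-power pulses.
    if tol == 0:
        extra = {"type": "eq", "fun": lambda a: C @ a}
    else:
        extra = {"type": "ineq", "fun": lambda a: tol - np.abs(C @ a)}
    res = minimize(power, np.full(n_seg, 1.0 / n_seg), constraints=[phase, extra])
    return res.fun

print("power with exact constraints  :", round(min_power(0.0), 4))
print("power with relaxed constraints:", round(min_power(0.05), 4))
```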

Nam and his colleagues implemented their protocol on the JQI’s programmable trapped-ion quantum hardware with five qubits. When they measured both the power and the fidelity of the gate operations, they found that they could create a maximally entangled state with their method without losing significant fidelity.

Generalizing the technique

Now that the team has carried out a successful proof-of-concept demonstration, its members plan to implement their two-qubit entangling gate in various quantum algorithms. This should allow them to verify whether the newly developed protocol leads to an increase in overall efficiency. Linke adds that they are also exploring ways to generalize their method. “We are working on other schemes for generating entangling gates with different control parameters,” he says. “This will provide the optimal quantum gate mechanism for the particular noise or error characteristics of different devices.”

The paper is published in Physical Review Letters.

Meringue-like material offers lightweight soundproofing for aircraft engines

A new meringue-like material that is a strong absorber of sound over a broad range of frequencies has been developed by Michele Meo and colleagues at the University of Bath. They say that their extremely lightweight aerogel is produced using a low-cost, environmentally friendly process that could soon be replicated on an industrial scale. The material promises to be highly effective in reducing the noise of aircraft engines and could also be used in other advanced engineering applications.

As a sound wave passes through porous materials like cellular foams and fibrous polymers, its energy can be strongly dissipated within microscopic pores. As a result, these materials are highly effective sound absorbers at mid-range frequencies of roughly 800–2000 Hz. To absorb lower frequencies, however, materials must usually be heavier and bulkier, making them impractical for some soundproofing applications.

To create a lightweight material that dissipates lower-frequency sound, Meo’s team used a graphene oxide-polyvinyl alcohol aerogel (GPA). To manufacture this substance, they first produced a foam by whipping up a blend of graphene oxide sheets and PVA polymer using ultra-high-shear mixing. After embedding the foam within a honeycomb scaffold, the researchers freeze-cast it by exposing one side of the material to liquid nitrogen. This caused ice crystals to grow vertically from the side touching the nitrogen, pushing larger, lighter air bubbles upwards. Finally, the foam was freeze-dried through sublimation, producing a meringue-like aerogel.

Hierarchical and highly tuneable

The resulting material had a hierarchical and highly tuneable porosity and a greatly enhanced ability to dissipate sound energy compared with other porous materials. In addition, the GPA had a density of just 2.1 kg/m³ – making it one of the lightest acoustic materials ever produced. Having adjusted the composition and thickness of their aerogel to optimize its acoustic properties, and evaluated the influence of different processing times, the team created a material with high sound absorption across the 400–2500 Hz range.

Within this range, average losses in sound transmission reached as high as 15.8 dB – which would reduce the roar of a jet engine to the loudness of a hairdryer. As a result, the researchers say that their GPA would be ideally suited for use as an acoustic insulator within jet engine housings – improving comfort for passengers, while adding very little to the weight of the aircraft.
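Decibels translate to power ratios via L = 10 log₁₀(P_in/P_out), so a 15.8 dB average loss means only a few per cent of the incident acoustic power gets through. A quick check (the jet-noise level here is an assumed round number, not a figure from the study):

```python
loss_db = 15.8
transmitted = 10 ** (-loss_db / 10)   # fraction of acoustic power transmitted
print(f"transmitted power: {transmitted:.1%}")   # ~2.6%

jet_noise = 105.0                     # dB, assumed typical jet-engine noise level
print(f"{jet_noise} dB - {loss_db} dB = {jet_noise - loss_db:.1f} dB")  # hairdryer territory
```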

Alongside aerospace applications, the team is also exploring the potential use of their aerogel in vehicles including cars, helicopters, and submarines, as well as in building construction. Indeed, they say that the aerogel could be available for commercial use within just 18 months.

Beyond acoustics, the researchers predict that similar lightweight materials could be created for other applications including fire resistance and electromagnetic shielding.

The new material is described in Scientific Reports.

Backward-travelling sound wave appears in a metamaterial

An unusual type of sound wave that can travel backwards in space and has previously only been observed in ultracold quantum systems may also exist at ambient temperatures in artificially-engineered materials. Researchers led by Martin Wegener at Germany’s Karlsruhe Institute of Technology (KIT) found evidence for these unusual sound waves, known as rotons, in a so-called “metamaterial” that was designed to shape the flow of acoustic waves. The result might make it easier to manipulate sound in air as well as in solid materials.

In normal sound waves, or phonons, the energy of the sound wave travelling through a medium increases linearly with its momentum. With rotons, however, low energy can be associated with high momentum. Certain frequencies of rotons also generate three different co-existing acoustical modes with the same polarization but different wavelengths. The slowest of these three modes is a backward wave, or “return flow” as the 20th-century physicist Richard Feynman put it.
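Landau’s celebrated dispersion relation makes this concrete. Near the roton minimum at momentum p₀ the excitation energy dips to a gap value Δ rather than rising, so a single frequency just above the gap can correspond to three different momenta on the full phonon–maxon–roton curve. A sketch of the standard form near the minimum (μ is an effective mass):

```latex
E(p) \simeq \Delta + \frac{(p - p_0)^2}{2\mu}
\qquad\Rightarrow\qquad
v_{\mathrm{g}} = \frac{\partial E}{\partial p} = \frac{p - p_0}{\mu} < 0
\quad \text{for } p < p_0 .
```

On the branch approaching the minimum from below (p < p₀), the group velocity – and with it the energy flow – is negative: this is the backward-travelling “return flow” mode.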

Until now, rotons – which, like phonons, are particle-like collective excitations or quasiparticles – have only been studied in ultracold quantum systems such as superfluid helium-4 and, more recently, Bose-Einstein condensates (BECs). In these systems the constituent particles interact strongly with one another in a way that allows them to behave as superfluids – that is, fluids that flow without any friction. This superfluid behaviour was famously explained by the Soviet physicist Lev Landau, who attributed it to the presence of phonons and rotons. However, superfluid helium and BECs only exist at temperatures just above absolute zero, which somewhat limits their technical applications.

Roton-like behaviour

Wegener and colleagues designed their model metamaterials such that each unit, or cell, within the material interacts not only with its nearest neighbours but also with its third-nearest neighbours. The researchers then used these structures to “mould” the flow of acoustic waves through the material. In their simulations, they observed roton-like behaviour without any quantum effects, under normal ambient conditions and at almost any wavelength.

The KIT researchers have now begun making real metamaterials based on their design. To replicate the structures in their simulations, they are using an ultraprecise laser printing technique that can “write” a host of different microstructures with a tightly-focused “pen” of light in three dimensions. “Currently we are working on finding direct experimental proof for the existence of rotons,” Wegener tells Physics World. “We hope to submit our results for publication soon.”

The researchers report their present work in Nature Communications.

Optical imaging could reduce recall surgery for breast cancer patients

Patients with early-stage breast cancer often undergo breast-conserving surgery, which involves local excision of cancer with a surrounding margin of healthy tissue. The goal is to remove the entire tumour and minimal healthy tissue, but excision is primarily based on visual inspection and relies on the surgeon’s expertise. Roughly 20% of patients require a second surgery due to incomplete initial removal of cancerous tissue.

Improving the tumour margin assessment during the operation could reduce the number of repeat surgeries and associated health and financial costs. Other options available for intraoperative margin assessment include projection radiography and, more recently, volumetric micro-CT. But these X-ray imaging techniques cannot effectively differentiate between normal, abnormal benign and malignant fibrous tissues.

A team from the Thayer School of Engineering at Dartmouth and Dartmouth-Hitchcock Medical Center is now investigating whether optical imaging could help, reporting their findings in Physics in Medicine & Biology.

“We choose to investigate optical scatter imaging, because it is non-contact, provides rapid scanning and relies only on endogenous contrast in the tissue,” explains first author Samuel Streeter. “The fine patterns of light used by the technique probe only superficial layers of tissue, making the approach particularly well suited for analysing tissue margins.”

Optical scatter imaging

Streeter and colleagues used wide field-of-view optical scatter imaging to assess 57 resected breast tumour slices, containing 13 distinct tissue subtypes, from 57 patients. The optical scatter method employed, known as spatial frequency domain imaging, illuminates the tissue with one-dimensional sinusoidal light patterns and images the reflected light intensity. It rapidly images the top layer of tissue with increased sensitivity to tumour-associated, collagen-rich matrix structures.
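In practice, spatial frequency domain imaging typically projects the sinusoidal pattern at three phase offsets and recovers the modulated (AC) reflectance pixel by pixel. Below is a minimal sketch of that standard three-phase demodulation step – the widely used formula from the SFDI literature, not necessarily the Dartmouth team’s exact pipeline:

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Recover AC and DC reflectance from three frames taken under a
    sinusoidal pattern shifted by 0, 2*pi/3 and 4*pi/3 (standard SFDI)."""
    m_ac = (np.sqrt(2) / 3) * np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)
    m_dc = (i1 + i2 + i3) / 3
    return m_ac, m_dc

# Synthetic demo: flat sample, DC reflectance 0.4, AC modulation 0.1
x = np.linspace(0.0, 10.0, 512)                  # mm across the field of view
fx = 0.2                                         # pattern spatial frequency, 1/mm
frames = [0.4 + 0.1 * np.cos(2*np.pi*fx*x + p)
          for p in (0.0, 2*np.pi/3, 4*np.pi/3)]

m_ac, m_dc = sfdi_demodulate(*frames)
print(m_ac.mean(), m_dc.mean())                  # ~0.1 and ~0.4
```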

For comparison, the researchers also imaged the samples with micro-CT and diffuse white light (DWL) imaging, which is similar to the surgeon’s view in the operating room. To enable quantitative comparison between the three modalities, they converted the colour DWL images to greyscale intensity (luminance) and derived monochromatic optical scatter images from the shortest wavelength (490 nm).
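A common way to perform that greyscale conversion is a weighted sum of the colour channels; the study’s exact weights aren’t given here, so the standard Rec. 601 luma coefficients below are an assumption:

```python
import numpy as np

# Convert a colour DWL image to greyscale luminance with Rec. 601 weights
# (an assumed, standard choice; the paper's exact conversion may differ).
def luminance(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

dwl_image = np.random.rand(64, 64, 3)  # stand-in for a colour DWL image
grey = luminance(dwl_image)            # 2D luminance map for comparison
```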

Sample images

In general, the optical scatter images exhibited similar contrast to the micro-CT view of the tissue. The team quantified the similarity between the micro-CT and optical images using two image similarity metrics: mutual information (MI) and the Dice coefficient. For healthy and benign tissue specimens, both optical scatter and DWL images exhibited similar Dice coefficients and similar MI to micro-CT.

However, for the cancerous specimens, optical scatter imaging exhibited greater MI with micro-CT than DWL with micro-CT, as well as greater (for invasive ductal carcinoma) or similar (for invasive lobular carcinoma) Dice coefficients with micro-CT. Analysing all specimens together revealed that optical scatter images exhibited greater similarity with co-registered micro-CT in 89% of specimens using MI and in 81% using the Dice coefficient.
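Both metrics are standard image-comparison tools: the Dice coefficient measures the overlap of two segmented (binary) regions, while mutual information quantifies how much two co-registered greyscale images tell you about each other via their joint histogram. A minimal sketch of each, for illustration only (not the authors’ implementation):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A n B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mutual_information(x, y, bins=32):
    """Mutual information between two co-registered images,
    estimated from their joint intensity histogram."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

img = np.random.rand(128, 128)              # stand-in for an optical scatter image
ct = img + 0.1 * np.random.rand(128, 128)   # stand-in for a co-registered micro-CT
print(dice(img > 0.5, ct > 0.55), mutual_information(img, ct))
```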

The researchers next analysed the coefficient-of-variation, a measure of the amount of feature content in each image, in all of the wide-field images. In all specimens, optical scatter imaging gave significantly higher values (representing the highest image quality) than either DWL or micro-CT, revealing additional features associated with fibrous tissue structures that may be of diagnostic relevance.

Finally, they analysed micro-CT, optical scatter and DWL images of 2.0 × 2.0 cm regions containing boundaries between malignant tissue and healthy or benign fibrous tissue. In four representative specimens, optical scatter exhibited the highest coefficient-of-variation and the highest contrast ratios across the boundaries, suggesting a greater sensitivity to malignant–fibrous tissue boundaries.

The researchers note that micro-CT is sensitive to microcalcifications, which are relevant to breast cancer histology but are not visible in the optical images. Since optical imaging, in turn, differentiates the fibrous tissues that micro-CT cannot, the optimal approach for intraoperative margin assessment may be to couple the two modalities.

“This study focused on demonstrating the benefits of optical scatter imaging using tumour slices,” says Streeter. “But to truly combine these modalities in a clinical tool, the optical scatter imaging should be mapped to the three-dimensional surface of the specimen and not just constrained to flat tissue surfaces.”

As such, the team is now using established 3D optical imaging techniques to advance optical scatter imaging by mapping the reflectance to 3D structures. “In so doing, we will be able to overlay optical scatter reflectance from intact specimen margins with the volumetric micro-CT scan,” Streeter tells Physics World. “This multimodal solution could help clinicians identify suspicious margins rapidly – in the operating room – in order to avoid costly re-excision procedures.”

Machine learning could save firefighters from deadly flashovers

New machine learning algorithms could soon help firefighters forecast dangerous flashover ignition events using sensor data from burning buildings. Called P-Flash, the system was developed by Thomas Cleary and colleagues at the National Institute of Standards and Technology (NIST) in the US and Hong Kong Polytechnic University. Trained using data from thousands of simulated fires, the model can predict some flashovers in housefires up to 30 s before they occur.

Flashovers are among the most hazardous threats faced by firefighters. At high temperatures, all exposed combustible material in a room can ignite simultaneously, releasing a huge amount of energy. To avoid danger, while maximizing the amount of time spent searching a fire for victims, it is critical for firefighters to predict these events as far in advance as possible. However, firefighters engaged in life-saving operations in smoky environments can sometimes overlook the characteristic precursors to flashovers, such as increasing heat and flames rolling across the ceiling.

In their study, Cleary’s team aimed to develop more robust forecasting techniques based on the data gathered by heat sensors – which are often installed alongside smoke alarms in modern US homes. While these devices tend to fail above 150 °C, the team believes that prior to failure, data from the sensors can be used to measure temperature trends in rooms and track the distribution of heat throughout a building. From this information, machine learning algorithms could predict when and where flashovers will occur.

Virtual three-bedroom house

To demonstrate this, Cleary and colleagues developed the Prediction Model for Flashover (P-Flash), which they trained using data from more than 4000 simulated fires in a virtual three-bedroom house. In each simulation, they varied details including the arrangement of furniture in each room and which windows and doors were open or closed. The team then fine-tuned the model’s predictions using 500 additional simulations before testing its performance on a final 500 runs. By capturing the complex relationship between temperature signals and flashover conditions, P-Flash predicted simulated flashover events 1 min in advance some 86% of the time.
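The article doesn’t spell out P-Flash’s internals, but the basic framing – classify heat-detector temperature traces, saturated at the 150 °C failure point, by whether flashover is imminent – can be sketched with synthetic data. Everything below (the ramp model, feature choice and classifier) is an illustrative assumption, not NIST’s model or data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def synthetic_trace(flashover):
    """Toy heat-detector trace: temperature ramps faster when a flashover is
    coming, and the reading saturates at 150 C where real detectors fail."""
    t = np.linspace(0, 300, 60)  # 5 min of readings, one every 5 s
    rate = rng.uniform(0.4, 0.6) if flashover else rng.uniform(0.1, 0.3)
    temp = 20 + rate * t + rng.normal(0, 3, t.size)
    return np.minimum(temp, 150.0)

labels = rng.integers(0, 2, 4000)  # ~4000 simulated fires, as in the study
X = np.array([synthetic_trace(y) for y in labels])

# Simple split standing in for the study's train / fine-tune / test stages
clf = GradientBoostingClassifier().fit(X[:3000], labels[:3000])
print(f"held-out accuracy: {accuracy_score(labels[3500:], clf.predict(X[3500:])):.2f}")
```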

Next, the researchers tested their model’s performance in 13 real housefires, carried out in controlled conditions at UL (formerly Underwriters Laboratories). When fires were started in open spaces like kitchens and living rooms, Cleary’s team found that P-Flash could successfully forecast a flashover 30 s before it occurred. However, the system was far less accurate at predicting flashovers in smaller rooms such as bedrooms.

The team now aims to tackle this shortcoming by further training of P-Flash with a focus on smaller rooms. If P-Flash can be improved, it could be installed on handheld devices that communicate with heat sensors via the cloud. This could help firefighters to predict the likely times and locations of flashovers before even arriving at the scene.

The research is described in Proceedings of the AAAI Conference on Artificial Intelligence.

Construction go-ahead for €2bn Square Kilometre Array

The go-ahead has been given to build what will be the world’s largest radio telescope network. Last week the council of the Square Kilometre Array Observatory (SKAO) gave the green light to construct the €2bn Square Kilometre Array (SKA) in Australia and southern Africa. Due to be completed by 2028, the SKA is expected to operate for some 50 years.

As its name suggests, the SKA is intended to have a total collecting area of 1 km², achieved by spreading out thousands of individual dishes in southern Africa as well as a million wire antennas in Australia. The SKA is designed to provide astronomers with unprecedented views of the first stars in the universe and observations of gravitational waves via the radio emissions from pulsars, among other things.

That initial design, however, proved too ambitious, and in 2013 officials concentrated on building a much smaller preliminary facility known as SKA1, which was to be complete by 2018. It would feature 250 mid-frequency radio dishes and 250,000 low-frequency dipole antennas to keep costs below a cap of €674m. Amid further woes, with members such as Germany dropping out and the baseline cost of the project rising to €900m, that timeline was delayed. Yet a big boost for the project came in March 2019 when Australia, China, Italy, the Netherlands, Portugal, South Africa and the UK signed the SKA convention treaty in Rome. That came into effect earlier this year after five countries – including Australia, South Africa and the UK – ratified the convention, creating the SKAO in the process.

An “ecstatic” moment

More than 500 engineers from 100 institutions worldwide have been involved with the design of the SKA telescopes, with over 1000 scientists from 40 countries working on the science case for the project. The final SKA design to be built – similar to that proposed for SKA1 – includes 197 radio dishes in South Africa, including 64 dishes belonging to the existing MeerKAT array, as well as 131,072 individual antennas in Australia. The cost of constructing the two telescope arrays and operations for the coming decade will be about €2bn – €1.3bn to build the instrument and €700m for operations. The UK, which hosts the headquarters of the observatory at the Jodrell Bank site in Cheshire, will contribute £270m.

SKA will be built in stages, with eight dishes and an 18-station array of antennas – each station featuring 512 antennas – ready by 2025. By the start of the following year SKA will include a 64-dish array and 64 antenna stations, while in 2027 it will have a 133-dish array and 256 antenna stations. In 2028 there will be an “operation readiness review”, with the following year marking the end of construction.

Philip Diamond, director-general of the SKAO, says he is “ecstatic” about the latest development. “This moment has been 30 years in the making,” he says. “Today, humankind is taking another giant leap by committing to build what will be the largest science facility of its kind on the planet; not just one but the two largest and most complex radio telescope networks, designed to unlock some of the most fascinating secrets of our universe.”

That view is backed up by Catherine Cesarsky, who is chair of the SKAO Council. “Giving the green light to start the construction of the SKA telescopes shows…the professional work that’s been done by the SKAO to get here, with a sound plan that is ready for implementation, and in the bright future of this ground-breaking research facility.”

“Lack of candour”

Richard Easther, a cosmologist at the University of Auckland says that it is “great news” to see a construction schedule and budget for SKA. “[It] is going to be a key piece of global science infrastructure for astrophysics and will do fantastic science,” he adds.

Yet one open question is whether the original vision for the SKA, featuring 2500 radio dishes and a million radio antennas – later dubbed SKA2 – will ever be realized. Indeed, in 2013 SKA2 was budgeted at over €1.5bn, which is close to the current cost of SKA1.

Easther says that there has been a “lack of candour” about this timeline from the SKA leadership. “There was no formal downsizing, so that SKA2 – representing 90% of the actual project – has effectively been airbrushed away, even though its capabilities were key to the hype that got the project rolling in the first place,” adds Easther, who supported a decision by New Zealand in 2019 to pull out of the project. “This certainly undercut the value of the project for New Zealand – it became harder to claim that the investment stacked up scientifically or economically.”

Timeline: The Square Kilometre Array

2006 Southern Africa and Australia are shortlisted to host the Square Kilometre Array (SKA), beating off competition from Brazil and China. Due to be completed in 2020 at a cost of €1.5bn, the facility would comprise about 4000 dishes, each 10 m wide, spread over an area 3000 km across

2012 The SKA Organisation fails to pick a single site for the telescope and decides to split the project between Southern Africa and Australia. Philip Diamond is appointed SKA’s first permanent director-general replacing the Dutch astronomer Michiel van Haarlem, who had been interim SKA boss

2013 Germany becomes the 10th member of SKA, joining Australia, Canada, China, Italy, the Netherlands, New Zealand, South Africa, Sweden and the UK. SKA’s temporary headquarters at Jodrell Bank in the UK opens. SKA members propose a slimmed-down version of SKA known as SKA1. With a cost cap of €674m, it would consist of 250 dishes in Africa and about 250,000 antennas in Australia

2014 Germany announces it will pull out of SKA the following year

2015 Jodrell Bank beats off a bid by Padua in Italy to host SKA’s headquarters. India joins SKA

2017 Members scale back SKA again following a price hike of €150m, which involves reducing the number of African dishes to 130 and spreading them out over 120 km

2018 The first prototype dish for SKA is unveiled in China. Spain joins SKA

2019 Convention signed in Rome to create an intergovernmental body known as the SKA Observatory. The Max Planck Society in Germany joins SKA. New Zealand announces it will pull out of SKA in 2020

2021 SKA Observatory convention comes into force. Start of construction announced

Black holes merging with neutron stars have been spotted by LIGO–Virgo for the first time

Gravitational waves from two separate mergers of a black hole with a neutron star have been seen by the LIGO observatories in the US and the Virgo observatory in Italy. Although hints of similar mergers have been spotted by the detectors before, these are the first confirmed events of this kind. One signal was detected on 5 January 2020 and the other was observed less than two weeks later, on 15 January.

LIGO–Virgo has already detected the mergers of pairs of black holes and pairs of neutron stars, so these observations complete the set of possible mergers of these objects. “We finally have the final piece of the puzzle: black holes swallowing neutron stars whole,” says LIGO–Virgo team member Vivien Raymond, from Cardiff University’s Gravity Exploration Institute. “This observation really completes our picture of the densest objects in the universe and their diet.”

Gravitational waves are ripples in space–time that are generated when pairs of massive objects such as black holes and neutron stars orbit each other in a rapid inspiral before merging. The LIGO and Virgo observatories are kilometre-scale interferometers that can measure the minuscule expansion and contraction of space–time that occurs when a gravitational wave passes through Earth.

The first of the two events has been named GW200105, and scientists believe that it involved the merger of a 9 solar-mass black hole with a 1.9 solar-mass neutron star. Despite being seen in only two of the three LIGO–Virgo detectors (LIGO Livingston and Virgo; LIGO Hanford was offline at the time), the signal was strong enough to meet the threshold for detection. Scientists calculate that the merger occurred about 900 million light-years away.

Difficult to pinpoint

The GW200105 signal was much weaker in Virgo than in LIGO Livingston, which meant that scientists were not able to pinpoint its origin – it could have been anywhere in a region of sky about 34,000 times the size of a full Moon.

The second event has been dubbed GW200115. It occurred about 1 billion light-years away and involved a 6 solar-mass black hole and a 1.5 solar-mass neutron star. Because it was spotted by all three detectors, researchers were able to narrow down its location in the sky to an area about 3000 times the size of a full Moon.
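For both events the waveform most tightly constrains the chirp mass, a particular combination of the two component masses. Using the masses quoted above (a quick illustrative calculation, not the collaboration’s analysis):

```python
# Chirp mass M = (m1 m2)^(3/5) / (m1 + m2)^(1/5), the combination of the two
# component masses that a gravitational waveform constrains most tightly.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(f"GW200105: {chirp_mass(9.0, 1.9):.2f} solar masses")  # ~3.4
print(f"GW200115: {chirp_mass(6.0, 1.5):.2f} solar masses")  # ~2.5
```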

When gravitational waves are detected by LIGO–Virgo, a notice goes out to other astronomers who then train their telescopes on that region of the sky and look for electromagnetic radiation from the merger. However, unlike the merger of two neutron stars that was observed by LIGO–Virgo in 2017, no electromagnetic radiation from GW200115 and GW200105 was spotted by other telescopes.

No light show

According to the LIGO–Virgo team, this lack of other observations is expected for several reasons. For one thing, the black hole is expected to swallow the neutron star whole, with little matter being flung out to generate an electromagnetic signal. This is unlike neutron-star mergers, in which the resulting object explodes spectacularly. Furthermore, the great distances to the mergers mean that any light produced by the events would be very dim indeed.

“These were not events where the black holes munched on the neutron stars like the cookie monster and flung bits and pieces about. That ‘flinging about’ is what would produce light, and we don’t think that happened in these cases,” says LIGO spokesperson Patrick Brady at the University of Wisconsin-Milwaukee.

The LIGO detectors spotted their first gravitational waves in 2015, from the merger of two black holes. In 2017 the Virgo detector spotted its first signal and since then all three observatories have been upgraded. A fourth observatory – KAGRA in Japan – joined the search for gravitational waves in February 2020.

The observations are described in Astrophysical Journal Letters.

How to persuade a venture capitalist to fund your business

I read somewhere that about a third of all workers in the UK don’t enjoy their jobs – and that more than half would rather work for themselves. So if you feel the urge to start your own business, you’re not alone. In fact, one new company is set up every minute in Britain. They’re generally founded by people who have spotted a gap in the market or have the technology to solve an existing or future problem.

These days, setting up a company is easy. You only need to spend a few hundred quid registering the firm, opening a bank account, picking a website domain name and sorting out an e-mail address. The real challenge is securing enough money to fund the business until it can sustain itself from sales. Records show that 89% of start-ups in the UK survive their first year, but fewer than half make it beyond five years.

According to the Institute of Directors, about half of all new businesses (across all sectors) start out with less than £5000 in their pocket. That’s probably enough if you’re a gardener, hairdresser or decorator. But most physics-based businesses need much more. In fact, it can take several years – and many rounds of investment – for them to succeed. That’s why many hi-tech firms look to venture capitalists (VCs) – people who will invest cash in potentially risky but promising businesses in return for a stake in the company.

Keys to success

But with fewer than 10% of all start-ups ever securing VC funding, what’s the secret to winning their support? For answers I turned to Hermann Hauser, the physicist who in 1997 co-founded Amadeus Capital Partners – a VC firm that specializes in the computer, semiconductor and telecoms sectors. Hauser rose to fame in the 1980s as the co-founder of Acorn Computers, which later spun out Arm – today’s hugely successful chip-licensing business.

Hauser, who is an honorary fellow of the Institute of Physics, told me that after 30 years in the VC business, he realized that deciding whether to invest in a company boils down to three key factors. “The first,” Hauser told me, “is the size and growth rate of the market. The second is the team. And the third is the defensibility of the technology”.

There is, indeed, little sense in investing heavily in a product or service if the market is too small. Hauser cites Cambridge Silicon Radio (CSR) – a maker of single-chip Bluetooth radios in which Amadeus invested in 1998. Admittedly, the market size at the time was zero, but the growth rate was potentially huge if single-chip radios like Bluetooth became part of mobile phones, which they did. CSR went public in 2004 and was bought by Qualcomm in 2015 for a staggering $2.5bn.

As for the team, Hauser says he wants people who stand out. “I am looking for a star – usually a technical star – as it’s easier to build a team around them as people want to work with them.” To Hauser, people are crucial and, when I quizzed him further, he was even blunter. “I have seen more situations where an A-grade team with C-grade technology has been successful than I have a C-grade team with A-grade technology win out”.

Hauser’s third point – about what he calls “defensible technology” – refers to the fact that if a product does well, everyone else will try to copy it. You therefore must protect your product, processes or materials through patents and other forms of intellectual property. That will give you a chance to grow and maintain high gross margins to build the business value. “Of the 100 investments I have made in the physical sciences, the technology doesn’t always work as well or as fast as predicted – but only once has one of my companies failed because the technology didn’t work at all.”

Picking winners

When talking to VCs, it’s worth remembering that they must themselves raise the funds they invest, on the strength of their reputation for picking winners and delivering returns. In the Wall Street Journal, Shikhar Ghosh – a lecturer at Harvard Business School – is quoted as saying that 75% of the 2000 venture-backed companies he’d surveyed never returned any cash to investors. In fact, in 30–40% of cases, they lost their entire initial investment.

Fortunately for physics-based start-ups, there are some patient early-stage capital funds out there. In Britain, there’s the UK Innovation and Science Seed Fund (UKI2S), which supports hi-tech start-ups based in (or linked to) government labs. It recognizes that such firms can take a long time to mature – indeed, about half the companies it backs are physics-based. Mark White, a UKI2S investment director, told me the fund is especially interested in great science that could answer unmet commercial needs.

“Of the investments the fund has made, fewer than 10% have not changed applications or markets for the technology as development has progressed,” he told me. “So we like to make sure the team has identified at least two or three applications for the technology.” This approach has paid off, with successes including Cobalt Light Systems – a Raman spectroscopy firm that first targeted medical markets before switching to airport security and pharmaceuticals. It was snapped up by US tech giant Agilent in 2017.

White’s advice for early-stage firms is not to fall into the trap of describing solutions in search of a problem. Instead, before even engaging with investors, you should first identify the problem you want to solve. If you want to secure VC funding, you’ll have to use your business plan to pitch the market opportunity and explain how you’re going to unlock it. If the VC agrees with you – and really believes in your team and your approach – then there’s a great chance you can get the investment to grow your business to the next stage.
