
A decade of Physics World breakthroughs: 2013 – the first observations of high-energy cosmic neutrinos

IceCube is a particle detector comprising 5160 digital optical modules suspended along 86 strings, each up to 2.5 km long, embedded in the ice at the Amundsen-Scott South Pole Station. The strings are spaced to create a total detector volume of a cubic kilometre. Neutrinos interact with matter only extremely weakly, so the array instead detects the tiny flashes of Cherenkov light emitted by the secondary particles created when neutrinos collide with hydrogen or oxygen nuclei inside the ice.

In November 2013, the IceCube Collaboration published details of its observation of 28 extremely high-energy particle events, created by neutrinos with energies of at least 30 TeV. The findings represented the first evidence for high-energy cosmic neutrinos, which arise from outside of our solar system.

“This is the dawn of a new age of astronomy,” principal investigator Francis Halzen commented at the time. Looking back, this development may indeed have heralded the onset of multimessenger astronomy – studying the Universe using not just electromagnetic radiation, but also information from high-energy neutrinos, gravitational waves and cosmic rays.

Tracking the source

Two years later, IceCube confirmed the cosmic origin of high-energy neutrinos with an independent search in the Northern Hemisphere. This study examined muon neutrinos reaching IceCube through the Earth, using the planet to filter out the large background of atmospheric muons. Data analysis suggested that more than half of the 21 neutrinos detected above 100 TeV were of cosmic origin.

Interestingly, the neutrino flux measured from the Northern Hemisphere had the same intensity as that from the Southern Hemisphere. This suggests that the bulk of the neutrinos are extragalactic; otherwise, sources in the Milky Way would dominate the flux around the galactic plane.


Then last year, IceCube published evidence that a known blazar – called TXS 0506+056 and located about 4 billion light-years from Earth – was a source of high-energy neutrinos detected by the observatory. A blazar is an active galactic nucleus, with a massive spinning black hole at its core, that emits twin jets of light and elementary particles. By chance, one of these jets is aimed at Earth.

“The neutrinos detected in association with this blazar provided the first compelling evidence for a source of the high-energy extragalactic neutrinos that IceCube has been detecting,” explains IceCube spokesperson Darren Grant. “We anticipate, however, that blazars may not be the entire story. One of our recent papers looks at 10 years of IceCube data and we see there are some other sources that are becoming potentially interesting.”

Future upgrades

Next in the pipeline is the recently announced IceCube Upgrade, which will see another seven strings of optical modules installed within the existing strings. The upgrade will add more than 700 optical modules to the 5160 sensors already embedded in the ice, and should nearly double the detector’s sensitivity to cosmic neutrinos.

The new sensor modules will be two to three times more sensitive than those in the current detectors, thus collecting more of the Cherenkov light produced in each rare neutrino event. The enhanced modules will include multiple photomultiplier tubes (PMTs) – either two 8-inch or 24 3-inch PMTs – as well as incorporating advanced power and communication electronics.
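
The quoted sensitivity gain can be sanity-checked with some rough arithmetic. The sketch below (Python) compares the total photocathode area of the two new module options against a reference module carrying a single 10-inch PMT – the standard design of the existing IceCube modules, which is an assumption on my part rather than something stated above – and ignores quantum efficiency and angular coverage.

```python
# Rough geometric comparison of photocathode area for the upgrade modules.
# Assumption: each existing IceCube module houses a single 10-inch PMT, and
# sensitivity scales crudely with total photocathode area (~ diameter squared).
def relative_area(n_tubes, diameter_inches, ref_diameter_inches=10):
    return n_tubes * diameter_inches**2 / ref_diameter_inches**2

print(f"Two 8-inch PMTs:         {relative_area(2, 8):.2f}x the reference area")
print(f"Twenty-four 3-inch PMTs: {relative_area(24, 3):.2f}x the reference area")
# ~1.3x and ~2.2x respectively, broadly consistent with the quoted
# "two to three times more sensitive" once better quantum efficiency and
# near-4-pi angular coverage are taken into account.
```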


IceCube has two key goals for the upgraded observatory. First, the close spacing of the new detectors will extend detection capabilities to lower energy neutrino events. This will increase the precision of measurements of atmospheric neutrino oscillations, in which neutrinos transform from one type to another as they travel through space.

“In particular, this will let us conduct precision tests of oscillations to tau neutrinos that will explore if the neutrino mixing matrix is ‘unitary’ – a key check on the Standard Model of particle physics,” Grant explains.
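
For reference, the unitarity being tested here is the standard condition on the 3×3 neutrino mixing (PMNS) matrix U – textbook material rather than anything spelled out by Grant – namely that the mixings for each flavour sum to one:

```latex
U^{\dagger}U = \mathbb{1}
\quad\Longleftrightarrow\quad
\sum_{i=1}^{3} \lvert U_{\alpha i}\rvert^{2} = 1
\quad \text{for each flavour } \alpha \in \{e,\,\mu,\,\tau\}
```

A measured deficit in tau-neutrino appearance, for example, would break this sum rule and could hint at additional (sterile) neutrino states.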

Another goal is to recalibrate the ice surrounding the sensors across the entire IceCube detector. As well as enabling improved reconstructions of future neutrino events, this will also allow the team to reanalyse previous data with increased precision.

And IceCube’s development plans don’t stop there. Knowledge acquired while designing and deploying the new detectors will provide a launch point for a ten-times more sensitive future extension, known as IceCube-Gen2. The team also intend to develop a shallow radio detector array that will significantly increase sensitivity to ultrahigh-energy neutrinos. This would furnish IceCube-Gen2 with the ability to detect neutrinos with energies from a few GeV up to EeV levels.

“The current design would include about 120 additional strings of optical sensors deployed in the deep ice over a volume of about 6 cubic kilometres, while the radio array would encompass some 500 km² of instrumented volume,” Grant tells Physics World. “This represents a significant technological and logistical challenge; the aim is to have the full Gen2 in operation early in the 2030s.”

A decade of Physics World breakthroughs: 2012 – discovery of the Higgs boson at CERN

It was a long time coming. The existence of the Higgs boson was predicted in 1964, but the world had to wait until 2012 before it was discovered by physicists working on the ATLAS and CMS collaborations on the Large Hadron Collider at CERN.

So, what happened next?

The very next year, François Englert and Peter Higgs shared the Nobel Prize for Physics for their theoretical prediction of the Higgs boson in 1964. While it was pretty certain that the 2013 Nobel would be related to the Higgs discovery, there was much talk in the run-up to the announcement that the Nobel committee would break with its tradition of a maximum of three winners and give at least a portion of the prize to the CERN collaborations.

Earlier this year I spoke to Lars Brink, who chaired the committee that awarded the 2013 prize. He explained that the rule of a maximum of three winners per prize is unlikely ever to be broken because the committee does not want to create thousands of laureates – which is what would have happened if ATLAS and CMS had won the Nobel.

More prizes

While I understand Brink’s sentiment, it is unfortunate that ATLAS and CMS missed out on a Nobel – although I suppose it is possible that three leading physicists from the collaborations could bag a future Nobel prize.

There are other physics prizes, however, and in December 2012 a special prize from Yuri Milner’s Fundamental Physics Prize Foundation was shared by seven physicists who headed CERN’s Large Hadron Collider (LHC), ATLAS and CMS. The award came with a massive $3m in prize money, which was split amongst the winners.

Meanwhile at CERN, work on the Higgs has progressed by leaps and bounds since 2012. Indeed, you may recall that the original discovery was described as a “Higgs-like boson” – but I think it’s now pretty certain that it is the particle that was predicted 55 years ago.

This week I had a chat with particle physicist Uta Klein at the University of Liverpool, who works on the ATLAS experiment and is also involved in planning for the next generation of colliders after the LHC.

Deeper understanding

She told me that since 2012, physicists have gained a much deeper understanding of the mechanisms that lead to the production of the Higgs boson when protons are smashed together at the LHC. They also have a much better understanding of how the Higgs quickly decays to other particles, which are then detected by the huge experiments along the collider.

Klein points out that an important milestone came just last year, when ATLAS confirmed that the Higgs decays to a bottom quark/antiquark pair. This is the most common decay channel for the Higgs at the LHC and should account for nearly 60% of all decays – but it had proven extremely difficult to spot amongst the vast number of particles that are produced by proton-proton collisions at the collider.

Also last year, physicists working on both ATLAS and CMS made discovery-level measurements of a relatively rare process whereby the Higgs is produced in conjunction with a top quark/antiquark pair.

Studying how the Higgs couples to quarks provides important insights into how quarks acquire their masses. This, in turn, could provide clues about the poorly understood microscopic mechanism that caused the Higgs field to emerge and fill space and time. The strong nature of the Higgs-top quark coupling also means it should be relatively straightforward to detect any possible deviations from the Standard Model, which could point to new physics.

Rarer decay channels

Physicists have also been sifting through the vast amounts of data from the LHC to get a better understanding of some of the rarer decay channels taken by the Higgs. These include how the Higgs decays to a muon/antimuon pair, which has not yet been measured at discovery level. This would provide important insights into how the Higgs interacts with leptons and shed light on how these second-generation leptons acquire mass.

The holy grail of Higgs physics is a determination of whether the Higgs boson couples to itself. This coupling was predicted many years ago by Higgs, Englert and Robert Brout (who died in 2011, so missed out on the Nobel) and is at the core of the Standard Model. Self-coupling could be seen at the LHC in the form of pairs of Higgs bosons, but these are rarely produced in the proton-proton collisions.

In 2026 the LHC is due to switch on again after undergoing a high-luminosity upgrade, which will increase the proton-proton collision rate by a factor of 10. That means a lot more Higgs bosons to study, so hopefully physicists will get a better handle on those rare processes, including double Higgs production.

Next-generation colliders

Looking further into the future, there are several types of next-generation collider that would cast more light on the Higgs. One proposal is the 100 km circumference Future Circular Collider (FCC), which is a larger version of the LHC that would collide protons at collision energies of about 100 TeV – compared to 13 TeV at the LHC. This would produce vast numbers of Higgs particles that could be studied.

An important challenge would be to isolate Higgs-related signals from the vast numbers of unrelated particles that would also be produced. That’s where an electron-positron collider such as the proposed International Linear Collider or the Compact Linear Collider comes in. Although such facilities would operate at much lower collision energies (about 1 TeV), they would be “clean” sources of Higgs particles with a much lower background of other particles getting in the way.

A third option is a hybrid of the two approaches, smashing protons into electrons. This is the aim of the Large Hadron Electron Collider, which would involve building an electron accelerator at CERN and smashing the electrons into protons from the LHC.

Klein believes that building all three types of collider would be the best way forward. Whatever happens, physicists are bound to learn much more about the Higgs – and with a little luck, intense study of the Higgs could reveal new physics beyond the Standard Model.

Nonlinear metamaterials improve MR imaging

MRI is a mainstay of clinics around the world. It is an imaging technique that provides unparalleled anatomical and pathological information while also avoiding exposure to ionizing radiation.

MR does have some limitations, however. The MR signal arises from only a small fraction of the protons within the anatomy being imaged, so it is difficult and expensive to separate signal from noise. The protons are also excited by radiofrequency (RF) radiation, which, while non-ionizing, dissipates energy as heat within tissues. As a consequence, for patient safety reasons, there are regulatory limits on the maximum RF power at which clinical MR scanners can operate.
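
To see just how small that fraction is, here is a back-of-envelope sketch (in Python) of the thermal polarization of protons in a clinical magnet. The 3 T field and 310 K body temperature are illustrative assumptions, not figures from the study discussed below.

```python
import math

# Thermal (Boltzmann) polarization of protons in a clinical MRI magnet --
# an estimate of the fraction of protons that contribute net MR signal.
hbar = 1.054571817e-34          # reduced Planck constant, J s
k_B = 1.380649e-23              # Boltzmann constant, J/K
gamma = 2 * math.pi * 42.577e6  # proton gyromagnetic ratio, rad s^-1 T^-1

B0 = 3.0    # assumed field strength in tesla
T = 310.0   # body temperature in kelvin

polarization = math.tanh(hbar * gamma * B0 / (2 * k_B * T))
print(f"Net proton polarization at {B0} T, {T} K: {polarization:.1e}")
# ~1e-5: only about ten in every million protons contribute net signal.
```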

A research team led by Xin Zhang of Boston University has developed a new type of MR signal receiver coil that amplifies the MR signal acquired during image acquisition without increasing the RF power transmitted during scanning. The approach could open the door to quicker, higher quality MR imaging, while maintaining the high levels of patient safety associated with MR scans (Adv. Mater. 10.1002/adma.201905461).

Going nonlinear

An MR scan can be thought of as an orchestra of RF pulses interspersed with the expertly timed application of magnetic field gradients. It is the rapid switching of gradient directions that creates an obnoxious racket during an MR examination. The RF pulse is used to excite the patient’s protons into coherent motion. Once the protons are in motion, the externally applied RF can be switched off, and the protons produce RF radiation of their own accord. It is this that is recorded as the MR signal.

Metamaterials – composites whose engineered structure gives them properties not readily found in nature – are not new. Linear metamaterials have been used to improve the signal-to-noise ratio (SNR) in MR images by amplifying the RF pulses. The problem with linear metamaterials is that RF amplification occurs during both excitation and signal acquisition, meaning that more of the energy from the RF field is absorbed. Over long periods of time, this RF absorption can pose a safety risk, thus limiting the utility of linear metamaterials.

To address this, Zhang and colleagues produced a nonlinear metamaterial by coupling a coil of wire with a small break in the circuit, known as a split ring resonator, to a linear metamaterial. They used a 3D-printed scaffold to structure the new composite material. Using a split ring resonator allowed the researchers to control the response of their nonlinear metamaterial to applied electromagnetic fields.


The team used models and simulations to show that the resonance frequency of the new composite material is a function of the applied electromagnetic field strength. In periods of high field strength, as is the case during proton excitation, the resonance frequency of the nonlinear metamaterial drops and switches off the amplification caused by the linear metamaterial. In contrast, when the field strength is low, as during read-out, the nonlinear metamaterial boosts the signal in the receiving coil.
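
The switching behaviour can be captured in a toy model – not the team’s actual device physics, and with all parameter values invented for illustration – in which a Lorentzian enhancement factor has a resonance frequency that is pulled below the scanner’s operating frequency whenever the applied RF field is strong.

```python
import math

# Toy model of a nonlinear (field-dependent) resonator: strong drive pulls the
# resonance away from the scanner frequency (amplification off), weak drive
# leaves it on resonance (amplification on). All numbers are assumptions.
F_OP = 64e6    # assumed scanner operating frequency in Hz (roughly 1.5 T)
Q = 500        # assumed quality factor of the resonator
A = 10         # assumed on-resonance signal enhancement
SHIFT = 5e6    # assumed resonance shift per unit drive field, Hz

def enhancement(drive_field):
    """Enhancement at the operating frequency for a given drive amplitude."""
    f_res = F_OP - SHIFT * drive_field           # resonance drops at high field
    detuning = 2 * Q * (F_OP - f_res) / f_res    # detuning in half-linewidths
    return 1 + A / math.sqrt(1 + detuning**2)    # Lorentzian-like response

print(f"Transmit (strong field): {enhancement(1.0):.1f}x")  # ~1.1x, effectively off
print(f"Receive (weak field):    {enhancement(0.0):.1f}x")  # 11.0x, fully on
```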

“Our approach allows us to boost the received [SNR] without running into issues of excessive [RF absorption] on the transmission side,” says Zhang.

Bottles and onions

To experimentally validate the new metamaterial, the researchers imaged an oil-filled water bottle and an onion using clinical and non-clinical MR imaging sequences. They found that nonlinear metamaterials could be integrated seamlessly into a variety of clinical MR imaging protocols.


Their data revealed a near 16-fold enhancement in the peak SNR. Comparisons also showed that the SNR enhancement achieved by the nonlinear metamaterial was larger and decayed at a slower rate with increasing distance between the metamaterial and the imaging volume.

“[Nonlinear metamaterials are] several years from the clinic, [but] the approach addresses a major hurdle to clinical adoption,” Zhang maintains. Increased signal in MR imaging without increasing RF power could help reduce scan times, increase department throughput and give more patients access to the diagnostic capabilities of MR.

Not bad for a few coils of wire.

Physics World announces its Breakthrough of the Year finalists for 2019


One of the highlights in the Physics World calendar is the annual announcement of our Breakthrough of the Year, which will be made this year on Thursday 12 December.

Today, we are revealing the 10 finalists for 2019, which serves as a shortlist from which we will pick the Breakthrough of the Year.

This year’s Top 10 Breakthroughs were selected by a crack team of five Physics World editors, who have sifted through hundreds of research updates published on the website this year. In addition to having been reported in Physics World in 2019, our selections must meet the following criteria:

  • Significant advance in knowledge or understanding
  • Importance of work for scientific progress and/or development of real-world applications
  • Of general interest to Physics World readers

Here are the Physics World Top 10 Breakthroughs for 2019, in the order in which we covered them this year. Come back next week to find out which one has bagged the Breakthrough of the Year award.

Neuroprosthetic devices translate brain activity into speech

Shared equally by Hassan Akbari, Nima Mesgarani and colleagues at Columbia University’s Zuckerman Institute, and by Edward Chang, Gopala Anumanchipalli and Josh Chartier of the University of California San Francisco, for independently developing neuroprosthetic devices that can reconstruct speech from neural activity. The new devices could help people who cannot speak regain their ability to communicate with the outside world. Beneficiaries could include paralysed patients or those recovering from stroke. Beyond medical applications, the ability to translate a person’s thoughts directly into speech could enable new ways for computers to communicate directly with the brain.

First image of a black hole


To astronomers working on the Event Horizon Telescope for capturing the first direct visual evidence of a black hole and its “shadow”. The now-iconic image shows the doughnut-shaped ring of radio emissions surrounding a supermassive black hole that lies at the centre of a galaxy 55 million light-years from Earth. Although black holes are inherently invisible, the researchers have managed to obtain images near the point where matter and energy can no longer escape – the event horizon. This was done by combining the outputs of eight radio dishes in six different locations across the globe, which itself is an engineering triumph.

First detection of a “Marsquake”

To scientists working on NASA’s InSight mission for detecting a seismic signal on Mars. The first “Marsquake” was detected on 6 April 2019 and the researchers believe that the tiny tremor originated from within the planet rather than being the result of wind or other surface phenomena. The Red Planet now joins the Moon as a place where extraterrestrial seismic activity has been detected – and like the Moon, Mars does not have tectonic plates and therefore is expected to be much quieter than Earth when it comes to seismic activity. Studying the seismology of Mars should provide important information about the interior of the planet and how it was formed.

CERN physicists spot symmetry violation in charm mesons

To physicists working on the LHCb experiment on the Large Hadron Collider at CERN for being the first to measure charge–parity (CP) violation in a charm meson. The team spotted CP violation by measuring the difference in the rates at which the D0 meson (which contains a charm quark) and the anti-D0 meson decay to either a kaon/anti-kaon pair or a pion/anti-pion pair. Since the D0 and anti-D0 decays produce the same products, the big challenge for the LHCb team was working out whether an event was associated with a D0 or an anti-D0. While this latest measurement is consistent with our current understanding of CP violation, it opens up the possibility of looking for physics beyond the Standard Model.


“Little Big Coil” creates record-breaking continuous magnetic field

To Seungyong Hahn and colleagues at the National High Magnetic Field Laboratory (MagLab) in Tallahassee, Florida for creating the highest continuous magnetic field ever in the lab. The 45.5 T record was set using a compact, high-temperature superconductor magnet dubbed “Little Big Coil”. Whereas the previous record of 45 T was set by a magnet that weighs 35 tonnes, the MagLab device is a mere 390 g. The magnet was designed to achieve even higher fields but was damaged during its record-breaking run. The breakthrough could lead to improvements in high-field magnets used in a range of applications including magnetic resonance imaging for medicine, particle accelerators and fusion devices.

Casimir effect creates “quantum trap” for tiny objects

To Xiang Zhang of the University of California, Berkeley and colleagues for being the first to trap tiny objects using the Casimir effect – a bizarre phenomenon in which quantum fluctuations can create both attractive and repulsive forces between objects. Zhang and colleagues used tuneable combinations of attractive and repulsive Casimir forces to hold a tiny gold flake between gold and Teflon surfaces with no energy input. Measuring the tiny forces involved in the trapping process was a triumph of optical metrology and provides a better understanding of how Casimir forces affect the operation of micromechanical devices. If the forces can be further controlled, there could even be practical applications involving trapped particles.

Antimatter quantum interferometry makes its debut

To the Quantum Interferometry and Gravitation with Positrons and Lasers (QUPLAS) collaboration for doing the first double-slit-like experiment using antimatter. Their experiment involved sending a beam of positrons (antielectrons) through a period-magnifying two-grating Talbot–Lau interferometer and showing that the antiparticles behave like waves and undergo quantum interference. They observed a diffraction pattern that changed as they changed the energy of the positron beam – something that is predicted by quantum theory and cannot be explained by classical physics. The breakthrough could lead to other experiments that look for differences between the quantum natures of matter and antimatter.

Quantum computer outperforms conventional supercomputer

To Hartmut Neven, John Martinis and colleagues at Google AI Quantum and several other US research institutes and universities for being the first to do a calculation on a quantum computer in a much shorter time than if done on a conventional supercomputer. This “quantum supremacy” over conventional computers was achieved by a quantum computer comprising 53 programmable superconducting quantum bits. It performed a benchmark calculation in about 200 s, whereas the team estimates that a supercomputer would take about 10,000 years to do the same calculation. While critics have since claimed the actual supercomputer execution time is more like 2.5 days, the team has still shown a clear advantage for quantum computing.

Trapped interferometer makes a compact gravity probe

To Victoria Xu and colleagues at the University of California, Berkeley for creating a new and more compact means of using trapped atoms to measure the local acceleration due to gravity. Their “quantum gravimeter” relies on the interference pattern generated when clouds of atoms are first vertically separated in space, and then allowed to recombine. Whereas most gravimeters measure the effects of gravity on atoms as they fall through space, the Berkeley device suspends the atoms in an optical trap where they interact with the gravitational field for up to 20 s. This improves the sensitivity of the measurement, paving the way for applications ranging from geophysical exploration to sensitive tests of fundamental forces.


Wearable MEG scanner used with children for the first time

To Ryan Hill, Matthew Brookes and colleagues at the University of Nottingham, the University of Oxford and University College London for developing a lightweight “bike helmet” style magnetoencephalography (MEG) scanner that measures brain activity in children performing everyday activities. Traditional MEG systems measure the tiny magnetic fields generated by the brain using cryogenically cooled sensors in a one-size-fits-all helmet that is bulky and highly sensitive to any head movement. Instead, the team used lightweight optically pumped magnetometers on a 500 g helmet that can adapt to any head shape or size. The scanner was used on a two-year-old (the hardest age to scan without sedation), a five-year-old watching TV, a teenager playing computer games and an adult playing a ukulele.

Stay tuned next week, when we will announce the Physics World Breakthrough of the Year for 2019.

RSNA annual meeting showcases radiology research

RSNA 2019, the annual meeting of the Radiological Society of North America, is taking place this week, with some 50,000 attendees heading to Chicago to learn about the latest advances in radiology research. Here’s a small selection from the studies presented at the conference.

Focused ultrasound may enable Alzheimer’s therapies

There currently is no effective treatment for Alzheimer’s disease, in part because the blood–brain barrier (BBB) prevents potential medication from reaching targets inside the brain. Animal studies have shown that pulses of low-intensity focused ultrasound (LIFU) can reversibly open this barrier to enable targeted drug and stem-cell delivery. In a clinical trial led by Ali Rezai, director of the WVU Rockefeller Neuroscience Institute, researchers at three sites delivered MRI-guided LIFU to brain sites critical to memory in three women with early-stage Alzheimer’s disease.

The treatment uses a helmet containing over 1000 ultrasound transducers that’s placed over the patient’s head after they are positioned in the MRI scanner. Each transducer delivers ultrasound targeted to a specific area of the brain. Patients also receive an injection of microbubble contrast agent. As ultrasound is applied, the bubbles oscillate and transiently loosen the BBB. Post-treatment MRI confirmed that the BBB opened in target areas immediately after treatment and then closed within 24 hours. Patients received three treatments at two-week intervals and experienced no adverse effects.

Research so far has focused on the technique’s safety; in future, the team intends to study LIFU’s therapeutic effects. “We’d like to treat more patients and study the long-term effects to see if there are improvements in memory and symptoms associated with Alzheimer’s disease,” says co-author Rashi Mehta. “As safety is further clarified, the next step would be to use this approach to help deliver clinical drugs.”

AI helps detect heart disease on lung cancer screens

Low-dose chest CT is employed for lung cancer screening in high-risk people, such as long-time smokers. Such CT scans can also visualize coronary artery calcification (CAC), a measure of arterial plaque used to help decide whether to prescribe cholesterol-lowering medication. But despite its value, CAC is not routinely determined in CT lung screens, as the measurements require dedicated software and add time to the interpretation.


Researchers from the Cardiovascular Imaging Research Center at MGH and the Artificial Intelligence in Medicine Program at Brigham and Women’s Hospital have developed a technique that uses deep learning to automatically measure CAC on chest CT images. “If our tool detects a lot of coronary artery calcium in a patient, then maybe we can send that patient to a specialist for follow up,” says lead author Roman Zeleznik.

The researchers trained their deep learning system on cardiac and chest CTs with manually measured CAC values, and then tested the algorithm on CT scans from thousands of heavy smokers. They found that deep learning-derived CAC scores corresponded closely with those of human readers. There was also a significant association between deep-learning scores and cardiovascular death over 6.5 years of follow-up.
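
For readers curious what such a pipeline looks like in outline, the sketch below shows a minimal 3D convolutional network that regresses a calcium score from a CT volume. It is a generic illustration written for this article, not the MGH/Brigham group’s published model; the architecture, tensor sizes and training details are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal 3D CNN that regresses a coronary artery calcium (CAC) score from a
# chest-CT volume. Generic sketch only; not the published architecture.
class CACRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),           # one feature vector per scan
        )
        self.head = nn.Linear(64, 1)           # predicted calcium score

    def forward(self, x):                      # x: (batch, 1, depth, height, width)
        return self.head(self.features(x).flatten(1))

model = CACRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                         # regress against manual CAC values

# One illustrative training step on a dummy batch of two 64-voxel-cube crops
ct_batch = torch.randn(2, 1, 64, 64, 64)
manual_scores = torch.tensor([[0.0], [412.0]])  # hypothetical manually measured scores
loss = loss_fn(model(ct_batch), manual_scores)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.1f}")
```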

The deep learning system adds no time to the exam and can be used on almost every chest scan to generate clinically relevant information for a large number of patients. “There’s information about cardiovascular health on these CT scans,” says co-senior author Michael Lu. “This is an automated way to extract that information, which can help patients and physicians make decisions about preventative therapy.”

Brachytherapy effectively treats skin cancers

High-dose-rate (HDR) brachytherapy offers excellent cure rates and cosmetic outcomes when treating skin cancers in elderly patients, according to a study presented by Ashwatha Narayana from Northern Westchester Hospital. Squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), the most common skin cancers, are usually treated by surgery or external-beam radiotherapy. But for elderly patients, who don’t heal well and may have additional medical problems, surgery may not be the best option.


“If the affected area is the tip of the nose, ear or on the eyelid, multiple surgeries and skin grafting may be required,” explains Narayana. External-beam radiotherapy, meanwhile, can be too lengthy and painful for elderly patients. It also exposes healthy tissue around the lesion to radiation, which can increase side effects. “Brachytherapy delivers a higher dose of radiation directly to the tumour while sparing healthy tissue nearby,” he adds.

HDR brachytherapy delivers a precise radiation dose to tumour cells via catheters implanted into a custom-fitted applicator. A treatment course comprises six three-minute sessions over two weeks. Patients have minimal recovery time and typically experience few or no side effects. In the study, radiologists used HDR brachytherapy to treat 70 patients with early-stage BCC and SCC. The patients had a total of 81 lesions on the nose, face, forehead, scalp, ear, neck and legs. The procedure provided a cure rate of 96% for SCC and 98% for BCC, with excellent cosmetic outcomes in 90% of cases.

MRI-guided ultrasound ablates prostate cancer

Prostate cancer is the second-leading cause of cancer death in men, and treatment is challenging: surgery and radiation are not always effective and can cause serious side-effects. MRI-guided transurethral ultrasound ablation (TULSA) has emerged as a promising minimally invasive treatment option. TULSA delivers precise ultrasound doses to diseased prostate tissue while sparing surrounding healthy tissue, using a rod-shaped device inserted into the urethra. The device incorporates 10 ultrasound-generating elements that are automatically controlled to adjust the shape, direction and strength of the therapeutic ultrasound beam. The entire treatment takes place in an MRI scanner, providing real-time monitoring of the thermal dose and ablation efficacy.

In a new multicentre study, researchers used TULSA to treat men with localized low- or intermediate-risk gland-confined prostate cancer. They delivered ultrasound to the entire gland, in a treatment that took an average of 51 minutes. Overall, clinically significant cancer was eliminated in 80% of participants, with 72 out of 111 men having no evidence of any cancer after one year. Blood levels of prostate-specific antigen, a marker of prostate cancer, fell by a median of 95%. Patients had low rates of severe toxicity and no bowel complications.

“We saw very good results in the patients, with a dramatic reduction of over 90% in prostate volume and low rates of impotence with almost no incontinence,” says co-author Steven Raman from UCLA. “There are two unique things about this system. First, you can control with much more finesse where you’re going to treat, preserving continence and sexual function. Second, you can do this for both diffuse and localized prostate cancer and benign diseases, including benign hyperplasia.”

Floating the solar photovoltaic boat

Solar energy is on the rise. According to the International Renewable Energy Agency (IRENA) there is over 480 GW of grid-linked solar photovoltaic (PV) capacity worldwide — around 175 GW of it in China alone. With costs for PV solar falling dramatically — 76% since 2009 — more is planned. Indeed, the consultancy DNV-GL says that solar power could have a 40% share of total global electricity generation by 2050 while Shell projects solar supplying 32% of total global energy by 2070.

Yet there are limits to PV solar deployment on land. Roof space in urban environments may not be sufficient to deliver enough power for a city. There could also be land-use constraints on solar farms in rural areas. In that case, floating PV arrays on reservoirs and lakes, or even at sea, could be an attractive option.

Lakes and reservoirs

A major impetus for mounting PV cells on floats or pontoons is to avoid using precious farming land, especially in high-population-density countries like Japan, which has developed many projects. Floating arrays bring other benefits too: they help reduce evaporation from reservoirs, which can be a major issue in hot countries, and they keep the cells cool, which improves their performance.

Floating solar has developed quite rapidly, particularly in Asia. According to one estimate, over 1 GW is now installed globally, with many of the largest arrays in Japan, a leader in the field. The technology is also expanding in India, which is planning a 105 MW scheme, while South Korea is reportedly looking at a 102 MW system. China’s largest project so far is a 70 MW chain of floating arrays, built with the help of the French firm Ciel & Terre, which has been involved with many other projects around the world.


Outside Asia, there have been some raft/pontoon-mounted solar PV projects in California and in Europe, including one on a Swiss lake. The UK has some large projects on reservoirs, with more planned. And US projects are now underway, the largest being a 4.4 MW floating solar array in Sayreville, New Jersey. Although the main aim of some early US projects was to deal with reservoir evaporation problems in drought-prone areas, a review for Yale University suggested that the energy potential was large. “If 6% of Lake Mead’s surface were devoted to solar power, the yield would be at least 3400 MW of electric-generating capacity – substantially more than the Hoover Dam’s generating capacity of 2,074 MW”, the review notes. Indeed, a 2018 study by the US National Renewable Energy Laboratory suggested that floating solar might supply around 10% of US power.
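
The Lake Mead figure quoted in that review is easy to sanity-check. Both inputs below are my own rough assumptions (not from the article): a surface area of about 640 km² at full pool and roughly 100 MW of PV capacity per square kilometre.

```python
# Back-of-envelope check of the "6% of Lake Mead -> at least 3400 MW" claim.
lake_mead_area_km2 = 640      # assumed surface area at full pool
pv_density_mw_per_km2 = 100   # assumed areal capacity density for utility PV

capacity_mw = 0.06 * lake_mead_area_km2 * pv_density_mw_per_km2
print(f"6% coverage: roughly {capacity_mw:.0f} MW of capacity")  # ~3840 MW
print("Hoover Dam, for comparison: 2074 MW")
```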

Development on that scale may be some way off, given that most projects outside Asia have been just a few megawatts in size. Even the largest in Europe is a 17 MW project in France, where the total potential has been put at 20 GW. However, while floating PV is clearly a way to avoid incursion into farmland, its other features are also a significant attraction in some locations, especially in hot countries. The developer of a reservoir project in Australia claims that evaporation losses could be cut by up to 90% and that cooling the PV cells could improve their efficiency by up to 50%. That might be a little high, but it depends on the local climate and the cells used.

All at sea

There is also the option of going offshore, with near-shore floating PV arrays perhaps in protected man-made lagoons. Japan already has a 70 MW project – the Kagoshima Nanatsujima Mega Solar Power Plant – that is built on platforms in an area reclaimed from the sea. India, meanwhile, has a 100 MW version off the coast of Kerala and Singapore has also built an offshore floating solar power plant in the Strait of Johor.

How far could floating PV solar go? Presumably not very far out to sea given the roughness of the environment. Even on inland lakes and reservoirs there will be corrosion and storm damage issues to consider. In all locations, there will also be local environmental concerns in terms of the impact on the ecosystem. Visual intrusion may be a key issue too in some scenic areas.

While installation costs are higher at sea than for conventional solar arrays on land, once installed the cell performance is better due to the cooling effect, so average generating costs will be lower. Certainly, the global resource is quite large. According to a Reuters report for the World Economic Forum, the technology’s global potential is about 400 GW. That is about as much generating capacity as all the solar photovoltaic panels installed in the world by 2017.

That may, however, actually be a significant underestimate. LUT University in Finland has suggested that there could be 4.4 TW of floating PV on hydro-plant reservoirs, even if only 25% of the reservoir area of the world’s current 1.2 TW of hydro plants were used. That could generate 6270 TWh — well over twice the output available from hydro (2500 TWh), which accounts for about 16% of global electricity at present. When available, the PV output could be fed directly to the grid; when not, hydro power could be used. LUT adds that if non-hydro reservoirs were also used, the PV output would then be over three times that of hydro globally. It’s a bold ambition, and even if it proves a wild overestimate, a 10% contribution to global power from PV floated on various surfaces certainly seems realistic.
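
The LUT numbers imply a plausible solar capacity factor, which is a quick way to check that they hang together; the arithmetic below simply restates the figures quoted above.

```python
# Sanity check on the LUT University estimate: 4.4 TW generating 6270 TWh/yr.
capacity_tw = 4.4
annual_output_twh = 6270
hours_per_year = 8760

capacity_factor = annual_output_twh / (capacity_tw * hours_per_year)
print(f"Implied capacity factor: {capacity_factor:.1%}")                  # ~16%, reasonable for PV
print(f"Ratio to global hydro output: {annual_output_twh / 2500:.1f}x")   # ~2.5x
```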

Quantum dot LEDs go cadmium-free

Light-emitting diodes (LEDs) containing semiconducting nanocrystals called quantum dots are ideal for applications such as large-panel displays and solar cells thanks to their high efficiency and colour purity. To date, the chief drawback of these quantum-dot LEDs has been their toxicity, since most contain cadmium or other heavy metals. Now, however, a team of researchers at Samsung in South Korea has engineered cadmium-free light emitters with an efficiency, brightness and lifetime comparable to those of their environmentally unfriendly predecessors.

Quantum dots (QDs) emit light via a process known as radiative recombination. When an electron in the valence energy band within the QD absorbs a photon and moves to the conduction band, it leaves behind an electron vacancy, or hole. The excited electron and hole then recombine, releasing a photon.

The first photoluminescent QDs contained cadmium (Cd) and were made by coating the semiconducting nanocrystals with organic molecules. In later versions, researchers surrounded the semiconductor core with a shell of a different semiconducting material with a large bandgap (that is, a large energy difference between the valence and conduction bands). In such a structure, the bandgap prevents electrons and holes at the core from escaping to the QD’s external surface, making the device intrinsically photoluminescent.

A promising alternative

Indium phosphide (InP)-based QDs are promising alternatives because their photoluminescence quantum yield – the number of photons emitted by the QD, divided by the number absorbed – is high, at 93%. Even so, the light-emitting performance of InP-based LEDs has lagged behind that of their Cd-containing cousins, for reasons thought to derive from structural defects in the material. These defects reduce the external quantum efficiency (the number of photons exiting the LED, divided by the number of charges injected into it) of InP devices to just 12.2%.
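
Written out explicitly, the two figures of merit defined above are:

```latex
\mathrm{PLQY} = \frac{N_{\gamma,\ \text{emitted}}}{N_{\gamma,\ \text{absorbed}}},
\qquad
\mathrm{EQE} = \frac{N_{\gamma,\ \text{exiting the LED}}}{N_{\text{charges injected}}}
```

so a material can have a high photoluminescence yield yet still give a low EQE once charges are injected into a real device.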

A team led by Eunjoo Jang of the Samsung Advanced Institute of Technology in Suwon overcame this problem by synthesizing QDs from a uniform InP core, a thick inner shell of zinc selenide (ZnSe) and a thin outer shell of zinc sulphide (ZnS). The researchers made their QDs in a two-step process, using hydrofluoric acid to etch out the oxidized core surface during the initial growth of the ZnSe shell, then annealing the core/shell interface at 340 °C. This approach eliminates the efficiency-sapping defects at the interface, they say.

Suppressing undesirable effects

The resulting structures have a highly symmetric spherical shape, with a potential energy profile that gradually increases with distance from the centre. This so-called soft confinement potential reduces the rate of Auger recombination, which occurs when the energy of the photoexcited electron-hole pair is transferred to another electron or hole without a photon being emitted. The uniform shape of the QDs also reduces the number of cavities or sharp corners on the surface of the QD or at the core-shell interface, which helps to suppress Auger recombination still further.

To evaluate the light-emitting properties of their QDs, the Samsung team used them to make LEDs into which they injected electrons and holes. The measured external quantum efficiency for their device was 21.4%, the theoretical maximum for QD-based LEDs. The devices also boasted a maximum brightness of 100,000 candelas/m² and a lifetime of around a million hours at 100 candelas/m².

These values are comparable to those of state-of-the-art devices containing cadmium, the researchers say. As such, Jang says that the Samsung team’s InP-based QD-LEDs could soon be useable in next-generation commercial displays, with TV displays being one of the most promising immediate applications.

“Each of the values we measured is a record for cadmium-free QD-LEDs,” she tells Physics World. “Our results clearly show that InP can overcome previous concerns that QDs based on this material are difficult to fabricate perfectly – especially for applications that require a very high external quantum efficiency, like displays.”

The researchers detail their work in Nature.

Beyond biology

Race is a social construct and not a scientific concept, so why do many people still believe it has some basis in biology? That’s the big topic that science journalist Angela Saini explores in her latest book Superior: the Return of Race Science.

No other popular book that I know of has provided such a comprehensive overview of the science of race as Superior. Each one of its 11 chapters covers how views of race have developed in various fields, including anthropology, cognitive science and eugenics (the belief that the genetic quality of a human population can be improved). Although this path through history is well-trodden, Saini is an enlightening guide.

The highlight of her tour is a follow-the-money investigation into a pseudoscientific journal, its support by a secretive fund, and the “race realists” who wrongly think race exists (beyond a social construct), and therefore twist scientific facts to legitimize their racist beliefs.

At one point, Saini recounts an exchange with German biochemist Gerhard Meisenberg, the journal’s editor-in-chief, who can only communicate via e-mail from a cruise ship in the Caribbean. Describing supposed differences in school performance for pupils from various ethnic groups in the US, Meisenberg claims “The differences are small, but the most parsimonious explanation is that much and perhaps most of this is caused by genes.” Not only is there little evidence to support his assertion, it betrays an outdated belief in genetic determinism. In fact, the way complex features like intelligence ultimately manifest (the phenotype) is often the result of interactions between an ensemble of genetic variants (the genotype) with the environment and other factors. There’s a satisfying feeling of schadenfreude when (often white) men like Meisenberg reveal their ignorance through their own words.

The only misstep in Saini’s book – and it’s a significant one – is chapter six, which focuses on my area of interest, biology. After describing how the phrase “human biodiversity” was hijacked by an online forum of race realists, Saini (who has a background in physics and engineering) makes an unconvincing argument that “population” is a 21st-century rebranding of “race”. She writes that “Population genetics itself was born out of post-Second World War efforts to move away from traditional race science and eugenics,” but this is incorrect as the field began in the early-1930s. And while it’s true that “population” could be used to describe a group with distinctive features, it’s always been a general term for a collection of individuals with anything in common, such as inhabiting a particular geographic location.

Chapter six revolves around the Human Genome Diversity Project, a large study of variation first proposed in the 1990s that involved collecting “isolates” from indigenous populations. Besides ethical issues, the project was controversial in that it chose which populations to study (thereby risking unconscious bias in the analysis), rather than selecting random samples from across the world. As Saini explains, “Anti-racist as the scientists behind the project were, they had somehow fallen into the trap of treating groups of people as special and distinct, in the same way that racists do. They were still forcing humans into groups, even if they weren’t calling those groups races.”

But Saini’s argument against grouping is stretched too far. In 1972 population geneticist Richard Lewontin published a paper showing that, based on differences in a few proteins, there’s more variation within traditional “races” (such as Aborigines, African and Caucasian) than between them. Since then, some studies have reached similar conclusions. Cherry-picking the supporting evidence, Saini criticizes the Human Genome Diversity Project: “If the genetic variation between us was already known to be trivial, then why embark on a multi-million-dollar international project to study it at all?”

That’s a rhetorical question and yes, we’re almost identical at a global level, but that doesn’t mean genetic variants don’t cluster within populations at smaller scales. Saini believes we shouldn’t use such genetic clustering to put humans into any kind of group. “Dismantling the edifice of race is about fundamentally rewriting the way we think about human difference, to resist the urge to group at all.” So any scientist who thinks humans can be classified into groups is effectively a race realist.

The Human Genome Diversity Project was led by population geneticist Luigi Luca Cavalli-Sforza. In 2000 he wrote the book Genes, Languages and Peoples, which includes a section titled “Why classify things?” that explains the reasons behind taxonomy – the practice of putting things into groups and giving them names such as “species”. Saini criticizes Cavalli-Sforza for not mentioning politics or social history in this section, missing the fact he was explaining why humans classify anything. Saini seems to want biologists to treat humans as a special case, the opposite of what a decent biologist should do, which is to study ourselves objectively, as if we were just any other organism.

So why shouldn’t we classify humans? While Saini never states her position explicitly, many of her points relate to how research on human variation is perceived by the public. When relaying what philosopher Lisa Gannett thinks of using statistics to cluster genetically similar groups (with her catchy-yet-empty phrase “statistical racism”), Saini says “It doesn’t necessarily matter how differences are distributed, so long as they are there in some form or another [which] is what has since been seized upon by people with racist agendas.”

Should climate scientists not publish data on temporary cooling periods because it might be used by climate-change deniers as evidence that global warming isn’t happening? If the central chapter of Superior is suggesting that biologists shouldn’t create groups – because they could be seen as a “race” – for fear that their results could be misrepresented by race realists, that’s a cowardly solution. It’s ironic because this book required incredible bravery to write.


In my (possibly unpopular) opinion, I don’t think biologists should stop trying to classify humans because they’re afraid their work will be misused. Those who are prejudiced against others will always find ways to justify hatred no matter what – based on political or religious ideology, for example, not just race. It stems from tribalism or the “us-versus-them” mentality (othering). Personally, I prefer the approach of environmental activist Greta Thunberg, to focus anger at those in power. Real racism is when people like Donald Trump associate features such as skin colour with behaviour, which perpetuates cultural stereotypes.

At 12 years old I was almost suspended from school for fighting a fellow pupil who called me a “chocolate bar”. You can never fully understand racism unless you experience it, just as you can never truly understand sexism without being a woman, as convincingly covered in Saini’s previous book, Inferior (July 2017).

In the prologue to Superior, Saini says that she “wanted to understand the biological facts about race”. Her misunderstanding of why biologists classify things shows that she hasn’t quite managed to shift her view of biology from a human-centric perspective, but I highly recommend this book nonetheless. Superior’s subtitle suggests that race science is returning. But when Saini’s book is considered as a whole, from studies of newly discovered fossils to IQ scores, that doesn’t seem true – the sad truth is it never went away.

  • 2019 4th Estate £16.99hb 352pp

A decade of Physics World breakthroughs: 2011 – the power of weak measurement in the quantum world

I can’t believe it’s eight years since Physics World awarded the 2011 Breakthrough of the Year to Aephraim Steinberg and colleagues from the University of Toronto in Canada for tracking the average paths of single photons passing through a Young’s double-slit experiment – a feat, Steinberg said, that physicists had been “brainwashed” into thinking was impossible.

Steinberg’s work essentially challenged the notion, drilled into generations of quantum-mechanics students, that if you gain any knowledge of the paths taken by individual photons as they travel through two closely spaced slits, then you immediately destroy the interference pattern.

Turns out you can gain some information and keep the pattern intact using the technique of “weak measurement”. In his experiment, Steinberg didn’t actually use a double slit but instead passed photons through a beamsplitter linked to two optical fibres so that the photons went down one or other of the fibres. After emerging from the fibres, the photons made an interference pattern on a detector screen.

Now understanding the experiment is the easy bit. What’s harder is remembering why an ordinary double-slit pattern is destroyed if you elicit information about the photons’ paths. Trickier still is understanding how weak measurement works. And to really understand why it doesn’t destroy the interference pattern, well now you’re entering truly mind-bending territory, fascinating though it is.

Inspired by Steinberg’s award, Physics World invited him to write a feature article on weak measurement, which I edited in late 2012. I remember working on the article, entitled “In praise of weakness”, and Steinberg being incredibly patient as I asked one dumb question after another to try to get to the bottom of this slipperiest of slippery problems. It’s why quantum mechanics is so beloved in the popular-science market.

Every time I thought I’d finally understood the essence of weak measurement, my grasp slid away like sand through fingers. That’s no fault of Steinberg, just that weak measurement is so deeply profound and thought-provoking.

Wondering whether weak measurement has lived up to the promise of its early days, I e-mailed Steinberg for his thoughts. He told me it’s continued to grow as a field even if it remains controversial. “This is healthy,” he admits, “as it means people are still grappling with what they do or don’t tell us about reality, and whether or not they are of practical use in various situations.”


Steinberg cites plenty of convincing precision-measurement experiments that have used weak-value amplification, and, although he admits there are “very specific limited situations in which this is truly an important aspect of the measurement technique”, he does think it’s clear that some such cases exist.

Weak values have been investigated in more and more physical systems, Steinberg says, having been used for example by Jeff Lundeen from the National Research Council of Canada in Ottawa to carry out “direct” measurements of the wave function, which was a runner-up in the 2011 Physics World Breakthrough of the Year awards.

Steinberg himself has shown that one optical beam can weakly measure the photon number in another beam, and confirmed that even a single photon can act like eight photons in terms of how strongly it affected this probe beam.

This year his group has also submitted a paper that tries to determine how long it takes rubidium atoms to tunnel through a barrier, which is a bit like working out what a tunnelling atom was doing before it was transmitted. Now I don’t know about you, but that’s one helluva weird question to be able to not only ask – but answer too.

Weak measurement has even been used to investigate fundamental puzzles, such as the “quantum Cheshire cat” (i.e. can a particle’s charge or mass be in one place, while its spin is somewhere else?) and the violation of the “pigeonhole principle” (can you put three pigeons in two holes, without there being any chance that any two pigeons are in the same hole?). Steinberg also mentions something about the “quantum shutter” effect (don’t ask).

So it seems like even though people are still debating whether “weak values” are elements of reality, there are lots of intriguing experiments in the pipeline – including some fundamental questions from the past that can finally be addressed experimentally and theoretically, at least in part.

I’m convinced that Steinberg’s 2011 work was a worthy winner of the third-ever Physics World Breakthrough of the Year award.

Physics techniques line up to advance medical care

Last week, the Institute of Physics in London hosted the meeting Physics-based Contributions to New Medical Techniques. A fascinating day of presentations saw 10 speakers describe physics technologies that enable a diverse range of medical applications, from biological and diagnostic imaging to radiation and proton therapies, as well as drug design and water purification.

The first speaker, Xavier Golay from University College London, explained how MRI has evolved beyond simple anatomical imaging into a non-invasive method of choice for imaging physiology. Golay described how MR techniques can be used to measure brain metabolism. Perfusion imaging, for example, can assess cerebral blood flow, while blood oxygenation level dependent (BOLD) MRI can determine how much oxygen gets from the blood into the brain. Elsewhere, sodium imaging provides information on sodium levels in the brain, while in vivo MR spectroscopy can directly assess metabolites.

Golay described one project aiming to develop therapies for babies suffering from hypoxic ischemic encephalopathy, in which oxygen restriction damages the brain. The researchers found that the lactic acid signal in MR spectra provided the best predictive biomarker of outcome. And by performing lactate imaging on piglets, they found that melatonin could provide a potential neuroprotective agent.

The “holy grail of metabolic MR imaging”, says Golay, is direct assessment of glucose metabolism, as currently performed using FDG-PET. He explained that this could be achieved using a technique called chemical exchange saturation transfer (CEST) MRI of glucose, or glucoCEST, under development by the GLINT project, and presented some initial results using CEST to image gliomas.


Elekta’s Kevin Brown continued the theme of biomarkers, in this case discussing the potential for biomarker imaging using the Unity MR-linac. Brown began by describing the challenges of creating an integrated MRI-guided radiotherapy system and how its developers used physics to overcome problems that many thought insurmountable.

The Unity is now in clinical use at 14 sites worldwide, enabling users to adapt radiotherapy plans based on high-contrast soft-tissue imaging. In future, however, the system could also be used to image biomarkers that reveal the underlying biology of a patient’s tumour. Such information could be used, for example, to predict whether the patient will be a good or bad responder, or to tailor dose to their individual biology.

Brown explained that a radiotherapy session on the MR-linac, which currently involves scanning, adaptation and treatment, could also incorporate biomarker acquisition without adding to the treatment time or cost. Biomarker imaging could be performed during plan adaptation, for example, or even during radiation delivery. “I think imaging biomarkers have potential to make a massive change to the way we do radiotherapy,” he told the audience.

Radioimmunotherapy

Moving on to nuclear medicine, Steve Sugden from Harwell Consulting described a novel therapeutic radioisotope, along with a new way to create it. He noted that while 90% of nuclear medicine procedures are diagnostic, therapeutic applications – such as radioimmunotherapy (RIT), in which a radioactive payload is linked to a tumour-targeting biological agent – are on the increase.

Copper-67 is a beta emitter with ideal characteristics for therapy. It was used in some successful clinical trials in the late 1990s, but progress was stymied by the lack of availability. Copper-67 can be made on a research reactor or a very high-energy proton accelerator, but not with high purity, Sugden explained. As an alternative, his team is developing ways to create medical isotopes using a high-energy electron accelerator.

The technique involves bombarding an enriched zinc-68 target with high-energy photons to create copper-67. One big advantage of this approach, Sugden explained, is that the target and product are different elements and thus easy to separate. Using an enriched zinc target ups the yield five-fold (compared with a natural zinc target) and minimizes production of unwanted isotopes. He also noted that the extraction process enables target material recovery for reuse.
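
In nuclear shorthand, the route described here – a high-energy photon knocking a proton out of zinc-68 – is the photonuclear reaction

```latex
{}^{68}\mathrm{Zn}(\gamma,\,p)\,{}^{67}\mathrm{Cu},
\qquad \text{i.e.}\qquad
\gamma + {}^{68}_{30}\mathrm{Zn} \;\rightarrow\; {}^{67}_{29}\mathrm{Cu} + p
```

which is why, as Sugden noted, the product is chemically distinct from the target and straightforward to separate.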

The isotope is now being produced by linacs at Idaho State University and Argonne National Laboratory, with production limited to about 2 Ci/day (enough to treat 10 patients). The team is also working with Clarity Pharmaceuticals to develop RIT for brain and prostate cancer, using copper-64 and copper-67 for diagnosis and treatment, respectively. This approach is currently in clinical trials in Australia.


Next up, Catia Costa from the University of Surrey told delegates about the UK National Ion Beam Centre (UKNIBC), which provides ion beam technologies for use by researchers. She also introduced the RADIATE project that brings together several ion beam facilities around Europe. Costa described some of the biomedical projects being undertaken using ion beam analysis, including the use of PIXE (particle-induced X-ray emission) to identify unknown metal atoms within proteins and to determine the optimal approach for aerosol vaccine delivery.

Tackling bacteria

Kathryn Whitehead from Manchester Metropolitan University presented an entertaining, if slightly frightening, talk on the use of a microfluidic atmospheric-pressure plasma reactor to treat bacteria-contaminated water. “You think your water is clean? It’s not,” she declared, explaining that bacteria may lurk in pipes and taps, and emphasizing the particular danger if this occurs in a hospital setting.

Whitehead described tests demonstrating that the microfluidic plasma reactor could inactivate monocultures of E. coli or Pseudomonas aeruginosa after a residence time of around 5 s. She noted, however, that it’s important to also study mixed bacterial samples, which may well be more resistant to plasma treatment. She also explained how bacteria remaining after such treatment may be in a small colony variant (SCV) or viable but non-culturable (VBNC) state – persistent states that cause reinfections – and emphasized that this is a serious concern to consider in future studies.


Rounding off the morning, Nick Lockyer from the University of Manchester described the use of nanoscale secondary ion mass spectrometry (nanoSIMS) to examine the intracellular distribution of the boron neutron capture therapy (BNCT) drug BPA. BNCT is a cancer treatment in which a drug containing boron-10 localizes within cancer cells, then the tumour is irradiated with low-energy neutrons that cause a fission reaction and create cell-killing alpha particles.
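
The capture reaction underpinning BNCT can be written out as follows; the roughly 2.3 MeV released in the dominant branch (which also emits a 0.48 MeV gamma ray) is standard nuclear data rather than a figure from the talk.

```latex
{}^{10}_{5}\mathrm{B} + n_{\mathrm{th}} \;\rightarrow\; {}^{7}_{3}\mathrm{Li} + {}^{4}_{2}\mathrm{He}\,(\alpha) + \approx 2.3\ \mathrm{MeV}
```

Because the lithium ion and alpha particle deposit their energy over only a few micrometres, the damage is largely confined to the boron-loaded cell.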

Lockyer and colleagues used nanoSIMS to examine boron-10 levels in biopsied human glioblastoma tumour cells cultured with BPA. They found that more BPA accumulated in cancer cells than in surrounding tissue, although both received a therapeutic BPA dose. They also saw that mean levels were slightly higher in cell nuclei than in cytoplasm, and that pre-treatment with tyrosine reduced BPA uptake in tumour, but not in surrounding cells. Lockyer also shared early results using SIMS to examine samples from two patients given BPA prior to tumour excision.

In the afternoon session, Ian Gilmore from the National Physical Laboratory explained how mass spectrometry can be used to help design better pharmaceuticals. In particular, he described how it can identify drug compounds that are unlikely to work by determining, for example, whether a drug reaches its target, binds to the target and, if so, whether it changes the behaviour of the disease. Gilmore presented the OrbiSIMS mass spectrometer, which enables 3D sub-cellular resolution imaging of metabolites and their response to potential drugs.

The remainder of the presentations focused on various aspects of particle therapy. Hywel Owen from the University of Manchester gave an introduction to UK proton therapy centres and described some novel projects aiming to improve the capabilities of such facilities. Simon Jolly from University College London described a new approach for proton therapy QA. And Dominik Perusko from Cosylab closed the day with a look at a next-generation dose delivery system for particle therapy. Keep an eye out for more detailed coverage of these talks coming up shortly on Physics World.
