The third pillar of science

Theory and experiment. They are the two pillars of science that for centuries have underpinned our understanding of the world around us. We make measurements and observations, which we then link to theories that describe, explain and predict natural phenomena. The constant interplay of theory and experiment, which allows theories to be confirmed, refined and sometimes even overturned, lies at the heart of the traditional scientific method.

Take gravity. At the start of the 20th century, Newton’s law of universal gravitation had stood the test of time for over 200 years, but it could not explain the perihelion precession of Mercury’s orbit, which was slightly off what Newtonian gravity predicted. Different ideas were proposed to explain this blot on Newton’s otherwise spotless copybook, but they were all rejected based on other observations. It took Einstein’s general theory of relativity – which revolutionized how we perceive time, space, matter and gravity – to finally explain the tiny disagreement between the predictions from Newton’s theory and observations.

Even now, a century later, Einstein’s theory continues to be poked and prodded for signs of weakness. The difference today is that these tests are not necessarily based solely on experiments or observations. Instead, they often rely on computer simulations.

New views of the world

For more than half a century, computational science has expanded scientists’ toolkit to go far beyond what we could hope to observe with our limited vision and short lifespans. In the case of relativity, astrophysicists can now build entire simulated universes based on different modifications to relativity. Powerful computers and clever codes let researchers alter the initial physical conditions in the model, which can be run automatically from soon after the Big Bang to today and beyond.

“Computation fills in a gap between theory and experiment,” says David Ham, a computational scientist at Imperial College London in the UK. “A computation tells you what the consequences of your theory are, which facilitates experimentation and observation work because you can tell what you are supposed to look for to judge whether your theory is valid.”

But computation is not just an extra tool. It is a new way of doing science, irrevocably changing how scientists learn, experiment and theorize. In her 2009 book Simulation and its Discontents, Sherry Turkle – a sociologist of science from the Massachusetts Institute of Technology in the US – outlined some of the anxieties that scientists had from the 1980s to the 2000s over the growing pervasiveness of computation and how it was changing the nature of scientific inquiry. She summarized them as “the enduring tension between doing and doubting”, pointing to a worry that younger researchers had become “drunk with code”. So immersed were they in what they created on screen, they could no longer doubt whether their simulations truly reflected reality.

However, Matt Spencer, an anthropologist from the University of Warwick in the UK, believes that science and scientists have evolved in the decade since Turkle’s book was published. “I don’t think you’ll hear this kind of opinion that much in physics these days,” says Spencer, who studies computational practices in science. Indeed, he points to many natural phenomena – such as the behaviour of whole oceans – that simply cannot be manipulated on an experimental scale. “Through exploring a vastly greater range of possible states and processes with simulations, there may be better ways to truly understand underlying physics with simulations,” he says.

Ham agrees, noting that physicists have had to accept they will never be able to understand everything about a simulation. “It’s an inevitable consequence of the systems we study and the techniques we use getting bigger and more complicated,” he says. “The proportion of physicists who actually understand what a compiler does, for example, is vanishingly small – but that’s fine because we have a mechanism for making that OK, and it’s called maths.”

Disruptive technology

Although most physicists now view computation and simulation as an essential part of their work, many are still coming to terms with the disruption caused by this third pillar of science, with one of the biggest challenges being how to verify the increasingly complex codes they produce. How can researchers, in other words, be confident their results are correct, when they don’t know if their code does what they think it does? The problem is that while programming is widely taught in undergraduate and graduate physics courses, code verification is not.

Ham voices another concern. “Something that absolutely is the enemy of progress is the ‘not-invented-here’ syndrome,” he says, referring to the belief – particularly in smaller research groups – that it is cheaper, better and faster to develop code in house than bring it in from elsewhere. “It’s a recipe for postdocs spending most of their science time badly reinventing the wheel,” Ham warns.

With limited budgets to explore better methods, Ham says that researchers often find themselves using code that “doesn’t even work very well for them”. So to help these teams out, Ham and colleagues at Imperial, through the Firedrake project, are building compilers that take the maths of a given problem from the scientists and then automatically generate the code to solve it. “We’re effectively separating out what they would like numerically to happen from how it happens,” he says.
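
To give a flavour of that separation of concerns, here is a minimal sketch along the lines of Firedrake’s Python interface: the user states a simple Poisson problem in near-mathematical notation and the framework generates and runs the low-level solver code. The example is illustrative and simplified; exact names and idioms may differ between Firedrake versions.

```python
# A minimal sketch of "write the maths, let the compiler generate the code",
# loosely following Firedrake's Python interface (simplified; details may
# differ between versions).
from firedrake import *

mesh = UnitSquareMesh(32, 32)              # discretize the unit square
V = FunctionSpace(mesh, "CG", 1)           # piecewise-linear finite elements

u = TrialFunction(V)
v = TestFunction(V)
x, y = SpatialCoordinate(mesh)
f = Function(V)
f.interpolate(sin(pi * x) * sin(pi * y))   # source term

# Weak form of the Poisson equation -div(grad u) = f, written as maths
a = inner(grad(u), grad(v)) * dx
L = f * v * dx

uh = Function(V)                           # the numerical solution
bc = DirichletBC(V, 0, "on_boundary")      # u = 0 on the boundary
solve(a == L, uh, bcs=[bc])                # the framework generates the solver
```

Everything below the weak form – element assembly, sparse-matrix construction, the linear solve – is generated by the framework rather than hand-written by the scientist.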

For physicists who do develop code that fits their purpose well, it’s tempting to use it to solve other scientific questions. But this can lead to further problems. In 2015 Spencer published an account of 18 months he spent observing a group of researchers at Imperial, in which he explored their relationship with a software toolkit called Fluidity. Developed in the UK in around 1990, Fluidity started out as a small-scale fluid-dynamics program to solve problems related to safely transporting radioactive materials for nuclear reactors. But its elegance and utility were soon recognized as having potential in other fields too.

The Imperial group therefore started building new functions on top of Fluidity to serve more researchers and solve further problems in geophysics, ranging from oceans, coasts and rivers to the Earth’s atmosphere and mantle. “Scientific discovery often takes opportunistic pathways of investigation,” says Spencer. “Because Fluidity grew in complexity in this organic way, without foresight of how extensive it would eventually become, problems did accumulate that over time made it harder to use.”

Keeping the code flexible, readable and easy to debug eventually became a serious challenge for the researchers, to the point where one member of the Fluidity team complained that “there were so many hidden assumptions in the code that as soon as you change one detail the whole thing breaks down”.

The team responded by investing in a rewrite of Fluidity, reimplementing the algorithms using best practice software architecture. Yet growing software is not just a technical challenge; it is also a social one. “We could think about the transition to the new Fluidity as crossing into something more akin to ‘big science’,” says Spencer. “Large-scale collaboration often has to be more bureaucratically managed, and this of course is a fresh source of tension and negotiation for researchers.”

Code that can grow

One large software project that has considered the inherent problems with extending code and collaboration from the outset is AMUSE. Short for the Astrophysical Multipurpose Software Environment, it’s a free, open-source software framework developed over more than a decade, in which existing astrophysics codes – modelling stellar dynamics, stellar evolution, hydrodynamics and radiative transfer – can be used simultaneously to explore deeper questions.

Like LEGO bricks, AMUSE is made up of individual codes, each of which can be added or taken away as required to create a new simulation. Managed by a core team led by Simon Portegies Zwart at Leiden Observatory in the Netherlands, the design allows researchers to change the physics and algorithms without affecting the global framework. AMUSE can therefore evolve while keeping the underlying structure solid.
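
The LEGO-brick idea can be sketched without reference to AMUSE’s actual interface. In the hypothetical Python example below – the class and method names are invented for illustration and are not AMUSE’s API – every physics module exposes the same small interface, so a gravity or stellar-evolution “brick” can be swapped out without touching the driver code that holds the simulation together.

```python
# Hypothetical sketch of a LEGO-brick simulation framework (not AMUSE's real
# API): each physics module implements the same small interface, so modules
# can be exchanged without changing the driver code.
from abc import ABC, abstractmethod

class PhysicsModule(ABC):
    @abstractmethod
    def evolve_to(self, time: float) -> None: ...
    @abstractmethod
    def state(self) -> dict: ...

class SimpleGravity(PhysicsModule):
    def __init__(self, particles):
        self.particles, self.time = particles, 0.0
    def evolve_to(self, time):
        # placeholder for an N-body integration step
        self.time = time
    def state(self):
        return {"module": "gravity", "time": self.time, "n": len(self.particles)}

class SimpleStellarEvolution(PhysicsModule):
    def __init__(self, stars):
        self.stars, self.time = stars, 0.0
    def evolve_to(self, time):
        # placeholder for updating stellar masses and radii
        self.time = time
    def state(self):
        return {"module": "stellar evolution", "time": self.time, "n": len(self.stars)}

def run(modules, t_end, dt):
    """Driver: advances every brick in lock-step, unaware of their internals."""
    t = 0.0
    while t < t_end:
        t += dt
        for m in modules:
            m.evolve_to(t)
    return [m.state() for m in modules]

print(run([SimpleGravity(range(100)), SimpleStellarEvolution(range(100))],
          t_end=1.0, dt=0.1))
```

The point is the shape of the design: the framework owns the time-stepping and bookkeeping, while each brick owns its own physics.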

Yet even AMUSE faces challenges. Portegies Zwart sees funding for basic software maintenance and development as the greatest threat to AMUSE and the next generation of large-scale simulations. “Compilers change, operating systems change, and the underlying codes eventually need to be updated,” he says. “It’s relatively easy to acquire funding to start a new project or to write a new code, but almost impossible if you want to update or improve an existing tool, regardless of how many people use it.”

Expert support

So where does all this leave us? We stand at a point where theoretical calculations coupled with laboratory experiments are often not enough to deal with the complex physics questions researchers are trying to address. At the same time, simulating this physics is usually beyond the coding skills of a solo scientist or even a group of scientists. As a result, computations and simulations are increasingly being developed in collaboration with expert coders who have very different and complementary skills.

To keep up the pace of scientific progress, many computational scientists feel that funders need to fully recognize that our third pillar of science must support all of these experts. If they don’t, those who maintain and develop collaborative large-scale physics code bases could start looking to take their careers elsewhere. After all, they have many valuable and marketable skills. “There are,” Ham warns, “plenty of non-science jobs out there for them.”

Does iron make icebergs green?

Most icebergs look white or blue but since the early 1900s sailors have reported seeing dark green icebergs off some parts of Antarctica. And residents of Davis and Mawson stations, on opposite sides of Prydz Bay in East Antarctica, commonly see “jade bergs”. Now scientists may have identified the cause of the strange colouration – iron oxides from rock dust.

Icebergs result when large chunks of ice break off into the ocean from the ends of glaciers or ice shelves. As well as snow, and ice formed from compacted snow, icebergs calved from floating ice shelves may contain marine ice – ocean water frozen onto the bottom of the shelf in a layer that can be 100 m thick. It’s this marine ice that sometimes appears green; if it reaches the edge of the shelf without melting and breaks off as part of an iceberg that later tips over, it will be visible to passing sailors and researchers.

Graphic depicting marine ice formation under ice shelf

An early candidate for producing the green colour was dissolved organic carbon but later measurements showed similar levels of this carbon in both green and blue marine ice.

Rocky start

Stephen Warren of the University of Washington, US, and colleagues turned their thoughts to iron oxides following the discovery of large amounts of iron in East Antarctica’s Amery Ice Shelf.

Erosion turns iron-based rocks at the base of the ice sheet into “glacial flour”, the theory goes. Particles of this flour enter the water and nucleate ice crystals that float upwards, collecting more particles en route, to form a layer of green marine ice underneath the floating ice shelf.

But why is that ice green? If it doesn’t contain air bubbles, ice appears blue because it preferentially absorbs red wavelengths. Air bubbles, as found in the glacier ice formed from compacted snow, reduce this absorption by refracting light and scattering it out of the material. This clouds and whitens the ice’s appearance.

Photo of researcher on top of composite iceberg

Marine ice, though, tends not to contain air as it forms under pressure several hundred metres beneath the ocean, where air is more soluble in water. In the absence of iron oxide, this marine ice appears clear and blue.

Add iron oxide particles to the mix, however, and the marine ice’s natural blue combines with the red or yellow tint introduced by the mineral, shifting the wavelength of least absorption into the green. The result is an emerald green colour.
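
The colour argument can be made semi-quantitative with a toy model, sketched below. The absorption coefficients are invented purely for illustration – they are not measurements from the study – but they capture the key point: ice absorbs more strongly towards the red, iron oxide more strongly towards the blue, so the combined absorption is smallest somewhere in the green.

```python
# Toy model of why iron oxide shifts marine ice from blue to green.
# The absorption coefficients below are invented for illustration only.
import numpy as np

wavelength = np.linspace(400, 700, 301)                # nm, violet to red

k_ice = 0.04 * np.exp((wavelength - 400) / 110)        # ice: absorbs more towards the red
k_iron = 0.5 * np.exp(-(wavelength - 400) / 70)        # iron oxide: absorbs more towards the blue

for label, k_total in [("bubble-free marine ice", k_ice),
                       ("marine ice with iron oxide", k_ice + k_iron)]:
    least_absorbed = wavelength[np.argmin(k_total)]
    print(f"{label}: least absorption near {least_absorbed:.0f} nm")

# With these illustrative numbers the wavelength of least absorption moves
# from the blue end of the spectrum to roughly 530 nm - i.e. into the green.
```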

Warren began studying green icebergs in 1988, when he took a core from one example of the phenomenon near the Amery Ice Shelf.

“When we climbed up on that iceberg, the most amazing thing was actually not the colour but rather the clarity,” says Warren. “This ice had no bubbles. It was obvious that it was not ordinary glacier ice.”

Eat your greens

What’s more, the iron in the icebergs may be playing a role in the wider ecosystem.

“We always thought green icebergs were just an exotic curiosity, but now we think they may actually be important,” says Warren. “The iceberg can deliver this iron out into the ocean far away, and then melt and deliver it to the phytoplankton that can use it as a nutrient. It’s like taking a package to the post office.”

Warren and colleagues from Bowdoin College, University at Albany, and State University of New York, all in the US, and the Australian Antarctic Division reported their work in the Journal of Geophysical Research: Oceans.

 

Disappointment as Japan fails to commit to hosting the International Linear Collider

Particle physicists have expressed their disappointment after the Japanese government today failed to announce its intention to host the ¥800bn ($7.5bn) International Linear Collider (ILC). Officials in Japan said that their government has formally “expressed an interest” in the 20 km-long particle smasher but has not decided whether to host the machine. The final go-ahead will only be given if enough international support and funding can be found to construct the machine and there is a consensus within the Japanese scientific community that the project is worth pursuing.

First mooted over a decade ago, the ILC would accelerate and smash together electrons with positrons to study the Higgs boson and other particles in precise detail. The ILC’s five-volume technical design report, published in June 2013, called for a 31 km-long linear collider operating at around 500 GeV (see timeline below). The Japanese physics community quickly got behind the project, expressing its desire to host the machine, with a site in the Tōhoku region, about 400 km north of Tokyo, chosen as a potential location.

However, the government has dragged its feet over whether to support the ILC and in 2017 physicists came up with a revised plan to make it more palatable. This involved reducing the ILC’s energy to 250 GeV — an energy suited to studying the 125 GeV Higgs boson — and shortening the length of the tunnel to around 20 km, with the option of later upgrading the collider to energies of around 1 TeV. Yet their plans were hit late last year when an independent committee of the Science Council of Japan (SCJ) issued a report that failed to support the ILC’s construction in Japan. The SCJ pointed out that the ILC did not yet have enough international backing, stating that the collider’s importance beyond research was “unclear” and “considered to be limited”.

Setting conditions

Given that European particle physicists are currently updating their future strategy, which is due out by May 2020, officials then gave the government a deadline of March to decide on whether to support the ILC in Japan. Today’s announcement means that the government agrees with the Japanese particle-physics community that the ILC is worth pursuing and that it will enter negotiations to host the machine. However, it will only give the ILC the final go-ahead if those negotiations are successful and if international support, particularly funding, can be found. It is expected that the government could supply half the $7.5bn, with the other half needing to come from international partners such as the US, Europe and CERN, as well as Canada.

Timeline: twists and turns of a linear accelerator

2004 An international panel of experts decides that a future linear collider should be based on superconducting technology that has been developed at the DESY laboratory in Germany

2005 Barry Barish of the California Institute of Technology in the US is chosen to lead the effort to build the International Linear Collider (ILC). The ILC’s first tentative design is released calling for a 20 km-long machine that would operate at 500 GeV with a possible future upgrade to 1 TeV that would require extending the tunnel by an additional 18.6 km

2007 Updated “reference design” is released for the ILC calling for two 12 km-long arms to collide electrons with positrons. The estimated cost of the ILC is $6.7bn

2011 The Japanese particle-physics community announces it will bid to host the ILC with possible candidate sites in Kyushu and Iwate

2013 Lyn Evans, who masterminded the Large Hadron Collider’s construction, takes up the reins as linear-collider director, overseeing the design of the ILC. The “technical design report” for the ILC is released, calling for a 31 km-long track of superconducting cavities that would collide electrons and positrons at an energy of 500 GeV. The ILC community identifies a location in the Iwate prefecture north of Tokyo as a possible site for the ILC

2016 Japan’s High Energy Accelerator Research Organization (KEK) releases a 12-page plan showing that they have measures in place if the Japanese government decides to begin negotiations with other countries to start construction

2017 The International Committee for Future Accelerators, which oversees work on the ILC, endorses plans to reduce the scope of the collider. Estimated to cost $7.5bn, the ILC would be built in a 20 km-long tunnel, with an initial design energy of 250 GeV and the option of further energy upgrades

2018 A report by the influential Science Council of Japan raises several issues about Japan hosting the ILC and does not support its construction.

Another condition laid out by the government is that the ILC needs to be more widely supported by the Japanese scientific community. That means that it will need to complete the necessary procedures to be included in the next roadmap of large science projects put together by Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT). Physics projects included in MEXT’s 2017 roadmap included the Hyper-Kamiokande neutrino detector as well as a high-luminosity upgrade for the Large Hadron Collider. The government also said that the ILC must be included in the SCJ’s master plan of large-scale projects, which will not reach a conclusion until October this year.

“It is actually extremely positive that the Japanese government says that it has an interest in the ILC project,” Hitoshi Murayama from the Kavli Institute for the Mathematics and Physics of the Universe in Tokyo told Physics World, adding that to make such a statement meant that discussions must have taken place with other government ministries including the finance ministry.

Speaking at a press conference this morning in Tokyo organized by the International Committee for Future Accelerators (ICFA), which oversees work on both the ILC and a rival design, the Compact Linear Collider (CLIC), ICFA chair Geoffrey Taylor noted that they have been encouraged by the government’s comments. “It shows us the political and executive environment in Japan is rapidly moving towards the ILC,” he says. “The important step we await now is for the government to commit and declare its interest in becoming the host of the ILC.”

Moving on

Yet particle physicist Brian Foster from the University of Oxford, who was European regional director for the ILC’s Global Design Effort, says that he is “disappointed” by the statement. “It is difficult to be convinced that the Japanese government is serious about this,” says Foster. “Delaying the decision in this way seems like a typical Japanese way of saying ‘no’.” Foster adds that it is especially concerning that the Japanese government has now referred the decision process back to the SCJ, which has not been enthusiastic about the ILC. “How all this will now play out in the European strategy discussions is hard to know.” At the press conference, it was noted that negotiations on cost sharing will now be carried out by officials from the KEK particle-physics lab. Yet Foster warns that this is a “waste of time”. “Negotiations need to take place at a higher level,” he adds.

The announcement from the Japanese government is also unlikely to result in linear-collider physicists getting behind the ILC and ditching plans for CLIC, which would offer higher collision energies up to around 3 TeV. “Today’s MEXT statement appears to fall short of a clear positive decision by the Japanese government, and is, frankly, disappointing,” says particle physicist Philip Burrows from the University of Oxford, who is CLIC’s spokesperson. “CLIC represents a serious alternative design for an energy-frontier linear collider and we will make every effort to keep it on the table for consideration pending greater clarity on the ILC from the Japanese government. This is a real window of opportunity for Japan, but this window cannot remain open for much longer – the world must move on, other projects are advancing, and CLIC provides a great opportunity for a linear collider Higgs factory in Europe.”

That view is shared by particle theorist John Ellis from King’s College London who told Physics World that the statement is “disappointing” for the community. “For the time being, the European particle physics strategy update will have to continue without assuming that the ILC will go ahead,” he says, adding that there are other projects on the table such as CLIC, China’s Circular Electron Positron Collider and the Future Circular Collider that could “do similar physics and provide ways forward for the community”.

International Linear Collider Q&A

What is the International Linear Collider (ILC)?

The ILC is a 20 km-long particle collider that will accelerate electrons and positrons to a combined collision energy of around 250 GeV. To do so, it will consist of thousands of superconducting radiofrequency accelerator cavities made of niobium. The ILC will then smash these beams together roughly 7000 times a second at an “interaction point”. The ILC will have two all-purpose detectors based around the interaction point — SiD and ILD — that would take turns being in the beam. Interchanging the detectors is estimated to take around a day to complete.

How is this different from CERN’s Large Hadron Collider?

The 27 km-circumference LHC is a circular collider made up of more conventional technology: radiofrequency cavities that accelerate the beam, dipole magnets that bend the particles along a circular path, and quadrupole magnets that focus the beam. The ILC instead uses superconducting cavities to accelerate the beam along a straight path before it is focused by quadrupole magnets. This linear acceleration has the advantage that the electrons do not lose energy through synchrotron radiation, as they would when bent around a circular path. The benefit of a circular machine is that it allows for more interaction points — four in the LHC’s case — with no need to swap detectors.

Has this accelerator technology been tested before?

Yes. The European X-ray Free Electron Laser (E-XFEL) facility near Hamburg, Germany, uses 768 superconducting niobium cavities to accelerate electrons to 17.5 GeV over 1.7 km. Rather than collide the particles, however, the E-XFEL makes them produce X-rays that are then used for a range of experiments from biophysics to condensed-matter physics. The E-XFEL is considered to some extent as an ILC prototype.

What would the ILC study?

Its main aim would be precision studies of the Higgs boson, which was discovered in 2012 at the LHC. The LHC has managed to measure the properties of the Higgs – notably how it couples to other particles – with a precision of around 20%. Yet the LHC’s proton-proton collisions suffer from a large amount of “debris” that affects the precision of the measurements. As electrons and positrons are fundamental particles, their collisions are much “cleaner” meaning that the ILC would improve this precision to 1% or lower. The ILC could also be later upgraded to higher energies to study the top quark.

Why is this exciting?

Physicists hope that the door to “new physics” could be opened through precision studies of particles such as the Higgs. This would come from spotting deviations from the values predicted by the Standard Model of particle physics.

Why has Japan dragged its feet for so long?

Japan has balked at the potential cost of building the ILC, which is one reason why physicists proposed a scaled-down version in 2017 that is both cheaper and would not take as long to build.

So why now?

Particle physicists are feverishly planning the next collider after the LHC, and the ILC is the most mature proposal. The time is also right given that Europe will update its particle-physics strategy next year. A further motivation is to use the project as part of the reconstruction effort following the magnitude-9 earthquake and tsunami that hit the Tōhoku region in 2011.

Are we far from a decision?

Who knows. Negotiations will continue with international partners, but if the Japanese government does not receive sufficient support then the ILC could still not go ahead. The coming years will be crucial for the project.

When could the ILC see the light of day?

If everything goes well, 2035 at the earliest. Negotiations and preparations could take around four years to complete with construction then taking a decade.

Protocells help make DNA computer

Arrays of synthetic cell-like capsules, or protocells, made of proteins and polymers can communicate with each other via chemical signals and perform molecular computation thanks to DNA logic gates entrapped inside them. This is the new finding from researchers in the Netherlands and the UK, who say that the circuits might serve as molecular biosensors for diagnosing disease or in therapeutic applications, such as controlling drug delivery.

DNA computers work using programmable interactions between DNA strands to transform DNA inputs into coded output, explain Stephen Mann of the University of Bristol and Tom de Greef of Eindhoven University of Technology, who led this research effort. These devices operate very slowly, however, since they work in biological environments where they rely on random diffusion to interact with each other and execute a logic operation.

If these DNA strands were assembled inside capsules that could transmit DNA input and output signals to each other, this would increase operating speed. Such encapsulation would also protect the entrapped DNA strands from being degraded by enzymes in blood or serum, for example.

BIO-PC

Mann and de Greef and colleagues have now made such a system using a platform known as “biomolecular implementation of protocellular communication” (BIO-PC). This is a programmable messaging system based on protein microcapsules called proteinosomes containing molecular circuits that can encode and decode chemical messages from short single-stranded nucleic acids.

The proteinosomes are permeable to short single-stranded DNA (containing less than 100 bases), which makes them ideal for protocellular communication, say the researchers. The molecular complexes inside the proteinosomes can work as signal processors, such as logic gates, and contain two different DNA strands tagged with fluorescent labels (so that the researchers can track the activity of the DNA). The proteinosomes themselves are sandwiched between pairs of small pillars in a microfluidic device.

“An incoming DNA strand with the correct sequence can bind with one of the gate’s DNA strands,” explain Mann and de Greef. “This displaces the gate’s other DNA strand. The ejected strand then leaves the protocell and acts as the input signal for a second protocell containing a different gate.”

Towards disease detection

By carefully tailoring the protocells, the DNA gates and the signals transmitted between them, the researchers say they are able to construct a range of different circuits. These include logic gates like AND and OR, and a feedback circuit in which the output strand from one group of protocells deactivates the fluorescent tag in another group. Some protocell circuits can even amplify signals as they transmit them, they explain.
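
To see how such circuits compose, the toy model below treats each protocell as a node that releases an output strand once its entrapped gate’s condition is met. It is a logical sketch only – not a chemical simulation of the BIO-PC system – and the strand and gate names are invented for illustration.

```python
# Toy, purely illustrative model of protocell logic circuits (not a chemical
# simulation of BIO-PC): each protocell holds one gate and releases its output
# strand once the gate condition is satisfied. Strand names are invented.
class Protocell:
    def __init__(self, name, gate, inputs, output):
        self.name, self.gate = name, gate          # gate type: "AND" or "OR"
        self.inputs, self.output = set(inputs), output
        self.fired = False

    def step(self, strands_in_solution):
        """Release the output strand if the gate condition is met."""
        seen = self.inputs & strands_in_solution
        ready = (seen == self.inputs) if self.gate == "AND" else bool(seen)
        if ready and not self.fired:
            self.fired = True
            return {self.output}
        return set()

def run(circuit, initial_strands, steps=5):
    solution = set(initial_strands)
    for _ in range(steps):
        released = set()
        for cell in circuit:
            released |= cell.step(solution)
        solution |= released                       # released strands diffuse onwards
    return solution

# Protocell A computes AND of strands s1 and s2; protocell B relays A's output.
circuit = [Protocell("A", "AND", {"s1", "s2"}, "a_out"),
           Protocell("B", "OR", {"a_out"}, "b_out")]
print(run(circuit, {"s1", "s2"}))   # b_out appears only when both inputs are present
```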

The team, reporting its work in Nature Nanotechnology, says that it is now further developing its DNA circuits into a system that could diagnose disease by detecting tell-tale patterns of microRNAs (which help regulate gene expression). “We’re currently working with Microsoft to build a DNA computer that can do microRNA processing from human blood using this technology,” says de Greef. “I think a DNA computer could eventually do this fully autonomously.

“Microsoft has also been working on storing large quantities of data within DNA, using the molecule’s sequence of bases to encode this data,” he says. “We hope to integrate this approach with our DNA computing technology.”

Tumour growth models shed light on radionuclide therapy

Prostate cancer is the most common cancer in men and at later stages of the disease many patients develop painful bone metastases. One promising modality for management of skeletal metastases is targeted radionuclide therapy, in which a radioactive drug travels through the patient’s bloodstream to the tumour where it delivers radiation to cancer cells.

Radium-223, which targets areas of increased bone turnover, has emerged as a key radionuclide for such treatments. The atoms decay via emission of alpha particles that deposit a high amount of energy over a short distance (70–100 μm), damaging the DNA of targeted cells while limiting exposure to healthy tissue.

A large trial (ALSYMPCA) of radium-223 dichloride (223RaCl2) in patients with metastatic prostate cancer bone metastases provided proof that the treatment prolonged the time to a patient’s first symptomatic skeletal event (SSE) and increased overall survival. But questions remain regarding the dosimetry and pharmacodynamics of 223RaCl2.

To investigate the mechanisms of action of 223RaCl2, a team at Queen’s University Belfast has performed mathematical modelling of tumour growth using three different uptake models. To determine the most realistic scenario, they compared the models’ predictions of time to first SSE with published clinical data (Int. J. Radiat. Oncol. Biol. Phys. 10.1016/j.ijrobp.2018.12.015).

Three scenarios

The researchers used the established Gompertz model to simulate tumour growth, and incorporated the effects of 223Ra treatment into the model, based on three biophysical scenarios. First, they considered uniform exposure, which assumes that all tumour cells are equally affected by radiation dose.

“We were surprised how much the uniform model over-estimated the effects of 223Ra treatment, so we knew that less of the tumour must be exposed,” explains co-author Stephen McMahon.

As such, McMahon and colleagues tested two other scenarios: an outer layer effect, where only the surface of the metastatic volume is exposed to radiation; and constant volume exposure, where the number of affected cells remains constant throughout tumour growth.

Three exposure scenarios

“With suitable dose-rate tuning, we were able to get good agreement for patients who failed at late time-points. This suggested that the uniform model was good for small tumours, but rapidly saturated — and the constant volume model is the simplest approximation of that behaviour,” says McMahon. “The true biology is likely more complex, but this worked well for this initial analysis.”

To test their models, the researchers employed clinical data from the aforementioned 223RaCl2 trial, which included both treated and control groups. They used these data to generate a “virtual patient population” with a range of initial tumour volumes that reproduced the observed time to SSE in the control group.

The researchers then simulated the effects of 223Ra treatment on the virtual population, using each of the three radiation exposure models. To relate tumour growth to SSE, they assumed that skeletal events occurred when the number of tumour cells reached 80% of the total tumour burden that can be supported by the patient.
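
A minimal numerical sketch of this setup is shown below. It integrates Gompertz growth with a radiation kill term for each of the three exposure scenarios and flags an SSE when the cell count reaches 80% of the supportable burden. All parameter values – growth rate, carrying capacity, kill rate, the saturated cell number and the radium-223 half-life used for dose decay – are placeholders chosen for illustration, not the values fitted in the study.

```python
# Illustrative sketch of Gompertz tumour growth with three 223Ra exposure
# scenarios. All parameter values are placeholders, not the study's fitted values.
import numpy as np

ALPHA = 0.01          # Gompertz growth rate per day (illustrative)
K = 1e12              # carrying capacity: max supportable tumour cells (illustrative)
SSE_FRACTION = 0.8    # skeletal event when N reaches 80% of K (as described above)
HALF_LIFE = 11.4      # days, radium-223 physical half-life
KILL_RATE = 0.05      # cell-kill rate per day at the initial dose rate (illustrative)
N_SAT = 1e9           # cells affected in the constant-volume scenario (illustrative)

def affected_cells(N, scenario):
    """Number of cells exposed to radiation under each scenario."""
    if scenario == "uniform":
        return N                       # whole tumour irradiated
    if scenario == "outer_layer":
        return min(N, N ** (2.0 / 3))  # only a surface shell (~N^(2/3)) exposed
    if scenario == "constant_volume":
        return min(N, N_SAT)           # fixed number of cells exposed
    raise ValueError(scenario)

def time_to_sse(N0, scenario, dt=0.1, t_max=3650.0):
    """Integrate dN/dt = alpha*N*ln(K/N) - kill(t, N) and return days to SSE."""
    N, t = float(N0), 0.0
    while t < t_max:
        decay = 0.5 ** (t / HALF_LIFE)               # dose rate falls as 223Ra decays
        growth = ALPHA * N * np.log(K / N)
        kill = KILL_RATE * decay * affected_cells(N, scenario)
        N = max(N + (growth - kill) * dt, 1.0)
        t += dt
        if N >= SSE_FRACTION * K:
            return t
    return None

for scenario in ("uniform", "outer_layer", "constant_volume"):
    print(scenario, "-> days to first SSE:", time_to_sse(1e10, scenario))
```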

Clinical comparisons

Using an initial dose rate based on published rates for an average-sized patient, the uniform effect scenario produced over-optimistic results. It gave 12- and 6-month delays in reaching the number of cells corresponding to an SSE, for late and early tumour stages, respectively. Even with a lower initial dose rate, this model overestimated the effects on patients with high disease burden and under-estimated the effects on patients with lower disease burden.

Data comparison

The outer layer effect scenario predicted cell killing rates that were too high for early tumour growth stages and too low for later tumour growth stages, with poor agreement with clinical data.

The constant volume exposure scenario provided the closest match to the clinical data for patients treated with 223RaCl2. These results suggest that the effects of 223Ra saturate rapidly with tumour volume, only affecting a constant number of cells regardless of tumour growth (once the tumour volume exceeds a certain limit).

The authors conclude that metastatic tumour cells do not experience a uniform dose exposure, with only a sub-population of the tumour affected by 223Ra. This finding is particularly significant since uniform distribution of radionuclide activity in bone metastatic volumes is frequently assumed in conventional dosimetric calculations.

“We were able to show that some common models of 223Ra uptake did not work well to describe the clinical data, and that there was strong evidence of saturation of uptake in even relatively small metastases,” says McMahon. “This represents some of the first biophysical analyses of these data, and the saturating uptake, if verified, may have significant implications for future attempts to optimize drug design and scheduling in radionuclide therapies.”

McMahon notes that the team’s clinical partners are currently completing a trial combining 223Ra treatment with external-beam radiotherapy. “This is an exciting opportunity, as this trial includes detailed molecular and MR imaging, which will enable us to quantify the disease burden and delivered dose to these patients in detail,” he tells Physics World. “This will give us a valuable testing data set to validate the model’s assumptions.”

Rethinking power: pipes versus wires

Underlying the policy debate on energy is a fault line – a chasm between two basically different approaches. Not the usual one between big centralized and small decentralized energy, although that is part of it. This goes deeper. It concerns the basic, often unspoken, assumption that electricity is the key energy vector. We have the idea that electrification is modernization. It’s not just Lenin who said that, it’s everyone ever since, everywhere. It made sense. Electricity was clean, fast, controllable, and it has become increasingly valuable.

However, that means it’s become increasingly expensive, in part because the main ways of producing it have involved increasingly scarce fossil fuels. Interestingly, that cost, together with the ever-growing environmental impacts of burning those fuels, has led to drives to use electricity more efficiently.

That has worked in some places. Demand in some industrial countries has fallen, and not just because energy-intensive manufacturing activities have been exported to developing countries. For example, US residential electricity use fell in recent years and is now flat. Electricity use overall, i.e. in all sectors, also fell in the UK, back to 1994 levels, partly due to energy-saving measures, despite continued economic growth. It’s also fallen elsewhere – in 18 of the 30 IEA (International Energy Agency) member countries.

That is good news, surely, although it may worry the companies who generate and sell electricity. Help may be on hand for them though, since demand for electric vehicles is growing. And some governments are looking to electricity as the way to provide heating.

Gas on?

However, there is another viewpoint from which these possible new electric-power-demand-boosting developments do not look such good news. It’s based on a rival assessment of what makes sense in terms of meeting energy needs — the use of gas as an energy vector. This option is claimed to be more efficient and less costly than electricity for heating, and possibly for other purposes.

It is certainly easier to transmit gas with lower energy losses, and it can be stored, unlike electricity. In the UK gas is the main source of heat, and because heating demand peaks strongly at certain times, the UK gas grid carries about four times more energy than the electric power grid. That is why some say it is foolish to try to switch over to electric heating — the power grid could not cope without massive expansion.

We have a polarity of views – essentially between backers of “pipes” and “wires”. Moving the context to the climate debate, the electric wire lobby says the energy system can best be decarbonized by sending power from wind, solar and other renewables to energy users down wires, including for heating and for charging electric vehicles (EVs). The pipe lobby says that, for heating, it makes more sense to stay with the gas grid and standard appliances but switch over to green gas. That way, you don’t have to make many changes whereas to use electricity efficiently you would have to install expensive heat pumps in every house. Green gas can also be used for vehicles, as compressed natural gas already is. So we have something of a stand-off of views.

Heat and power

The situation is complicated by the addition of another pipe option — the supply of heat direct to users. In high-density urban environments, district heating can make more sense than individual domestic boilers, and heat networks could supply perhaps half of UK heat. What’s more, local gas-fired Combined Heat and Power (CHP) plants can supply heat much more efficiently than small domestic heat pumps. Heat pumps can have a coefficient of performance (COP) of 3 or 4, i.e. they can get three or four times more useful heat out of the input electricity than using it directly. However, CHP plants have a COP equivalent of maybe 9 or more; they use heat from burning fuel that would otherwise be wasted.
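
The “COP equivalent” figure can be unpacked with a back-of-the-envelope calculation, sketched below with illustrative efficiencies (assumptions for the sketch, not data from any particular plant): the heat from a CHP unit “costs” only the small amount of electricity the plant gives up compared with a dedicated power station, so the heat delivered per unit of forgone electricity can be several times a domestic heat pump’s COP.

```python
# Back-of-the-envelope comparison of a domestic heat pump with gas-fired CHP.
# All efficiency figures are illustrative assumptions, not measured plant data.

heat_pump_cop = 3.5        # useful heat out per unit of electricity in

# Per unit of fuel burned:
power_station_eff = 0.45   # dedicated power station: electricity out / fuel in
chp_electric_eff = 0.40    # CHP plant: electricity out / fuel in
chp_heat_eff = 0.45        # CHP plant: useful (otherwise wasted) heat out / fuel in

# Running CHP instead of a dedicated power station sacrifices a little electricity...
forgone_electricity = power_station_eff - chp_electric_eff    # 0.05 per unit of fuel
# ...in exchange for the recovered heat.
chp_cop_equivalent = chp_heat_eff / forgone_electricity       # 9.0 with these numbers

print(f"Heat pump COP:        {heat_pump_cop}")
print(f"CHP 'COP equivalent': {chp_cop_equivalent:.1f} units of heat per unit of forgone electricity")
# The result is sensitive to the efficiencies assumed; a more efficient
# reference power station would give a lower equivalent COP.
```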

The gas/heat pipe versus electric wire debate continues. The electricity lobby is still dominant, although other views are gaining traction, and concessions have been made. The UK government’s advisory Committee on Climate Change suggested a compromise, with electric heat pumps used for bulk heating but gas-fired boilers retained to meet peak demand. In time, the committee says, the natural gas can be replaced by green gas.

A similar approach has been backed EU-wide, with the European Commission still pushing for electrification as the main route ahead but recognizing the potential of green gas and heat. That formulation might be challenged but the gas/pipe lobby is hampered by the fact that the biogas resource is limited — there are land-use constraints on expanding biomass production — and most of the other green gas options are in their infancy, although Ecofys has suggested that this could change soon. The growing interest in so-called “power to gas” (P2G) hydrogen options certainly suggests that a new and large source of green gas could emerge.

All power to gas

P2G involves the use of electricity from renewables to produce hydrogen from the electrolysis of water. In some cases, the hydrogen is then converted to methane gas using captured carbon dioxide. That methane can be injected into the gas mains, as can hydrogen, or used as a vehicle fuel. Since there are likely to be increasing amounts of wind and solar capacity, there will at times be excess output over demand. Converting that surplus to hydrogen and/or methane makes more sense than curtailing the output of wind and solar plants.

What’s more, these gases can be stored and then used to generate power when there is a shortage of renewable power. So P2G provides not just valuable fuels but also a partial solution to the problem of the variability of renewable sources like wind and sunshine — assuming there is enough surplus to go around. Conversion efficiencies for P2G do need attention but they are rising and costs are falling as new electrolysis technology develops.
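
As a rough guide to what those conversion losses mean in practice, the sketch below chains together assumed round-number efficiencies for electrolysis, optional methanation and reconversion to power or heat. The percentages are illustrative assumptions, not figures from any specific P2G project.

```python
# Rough power-to-gas energy chains with illustrative round-number efficiencies
# (assumptions for the sketch, not figures from any specific P2G project).

electrolysis = 0.75     # surplus electricity -> hydrogen
methanation = 0.80      # hydrogen + captured CO2 -> methane (optional step)
gas_to_power = 0.55     # gas -> electricity in an efficient gas-fired plant
gas_to_heat = 0.90      # gas -> heat in a condensing boiler

paths = {
    "power -> H2 -> power":            electrolysis * gas_to_power,
    "power -> H2 -> methane -> power": electrolysis * methanation * gas_to_power,
    "power -> H2 -> heat":             electrolysis * gas_to_heat,
    "power -> H2 -> methane -> heat":  electrolysis * methanation * gas_to_heat,
}

for name, eff in paths.items():
    print(f"{name}: ~{eff:.0%} of the original surplus electricity recovered")
```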

At present most P2G projects, for example in Germany and the UK, are dedicated to producing vehicle fuels or, to a lesser extent, gas for grid injection. But the grid-balancing role could grow and, neatly, will be made both possible and necessary by the growth of wind and solar generation. It could be a way ahead.

The P2G grid-balancing approach essentially offers a way to store power until it is needed, with hydrogen or methane storage being much easier than direct electricity storage, for example in batteries. Large volumes can be stored over long times. However, there is another approach; heat can also be stored in bulk over long periods with low losses. Devotees of CHP argue that, if linked to heat stores, it too can offer a grid-balancing option, given that the ratio of power to heat output can easily be changed. When there is plenty of green power, the power output from a CHP plant can be lowered and the heat output stored, if it too is not needed. When power demand rises, the CHP plant power output can be raised, and if heat is needed, it can be supplied from the store.
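
The balancing logic described here can be captured in a few lines of toy dispatch code. The rule and the numbers below are invented for illustration only: when renewable output is plentiful the CHP plant runs low and banks spare heat; when power is short it runs high and any heat shortfall is drawn from the store.

```python
# Toy dispatch rule for a CHP plant coupled to a heat store, illustrating the
# balancing logic described above. Units and numbers are invented for illustration.
def dispatch(renewable_surplus, heat_demand, store, store_capacity=100.0):
    """Return (chp_power_output, heat_delivered, new_store_level)."""
    if renewable_surplus > 0:
        power, heat_produced = 0.2, 0.3      # green power plentiful: turn the CHP down
    else:
        power, heat_produced = 1.0, 0.6      # power short: turn the CHP up
    if heat_produced >= heat_demand:
        store = min(store_capacity, store + heat_produced - heat_demand)  # bank spare heat
        delivered = heat_demand
    else:
        drawn = min(heat_demand - heat_produced, store)                   # draw from the store
        store -= drawn
        delivered = heat_produced + drawn
    return power, delivered, store

store = 50.0
for surplus, demand in [(5, 0.4), (3, 0.5), (-2, 0.9), (-4, 1.2)]:
    power, heat, store = dispatch(surplus, demand, store)
    print(f"surplus={surplus:+}: CHP power={power}, heat delivered={heat:.1f}, store={store:.1f}")
```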

Most CHP plants use fossil gas but are highly efficient (70–80% or more) so relatively low-carbon. Increasingly they use biomass or biogas, in which case they are near-zero net carbon.

Pipe dreams

As can be seen, the gas/heat pipe lobby has some powerful arguments on its side. Some argue for total conversion to green gas as, for example, in the proposed Leeds H21 project. That would produce hydrogen by steam reforming of natural gas, with the resulting carbon dioxide stored to make the process lower carbon. The P2G approach, however, looks to a zero-carbon system that uses electricity-producing renewables to produce storable 100% green gas for heating or other purposes, including grid-balancing. CHP also offers balancing and, if using a green fuel, near-zero carbon power. There are also other approaches to green heating. Electricity from wind or photovoltaics (PV) can be converted to heat and stored. Large heat pumps, powered by wind or PV electricity, can upgrade the heat from stores. In some locations all these systems could be combined, with solar heat also feeding into the heat stores.

Integrated systems that cross the boundaries between heat and power may prove to be the way forward. But if we are to seek optimal mixes of heat and power, wires and pipes, we must move away from assuming that electricity is always the best option. It is invaluable in some contexts, particularly for local “peer to peer” trading by prosumers, and also for long-distance high-voltage direct current (HVDC) supergrid balancing, but many end-uses now do not need a 240 V 50 Hz grid supply, and some are better served by other energy vectors. And, aiding balancing, “hydrogen could eventually become a way to transport renewable energy over long distances”, according to the International Renewable Energy Agency (IRENA). So the future may not be “all electric” after all.

Digital opportunities: combining computing and high-energy physics

As we celebrate the 30th anniversary of the World Wide Web, the spotlight is on pioneering computer scientist Tim Berners-Lee, who developed the concept at the CERN particle-physics lab near Geneva. He originally studied physics at the University of Oxford, but Berners-Lee is far from the only physicist to have had a fruitful career in computing. Just ask Federico Carminati, who is currently the chief innovation officer at CERN openlab. This is a public–private network that links CERN with other research institutions – as well as leading information and communications technology companies such as Google, IBM and Siemens – to investigate the potential applications of machine learning and quantum computing to high-energy physics.

Carminati had a keen interest in the natural sciences from childhood – he begged his parents for a microscope and wanted to study animals – but his interest in physics was piqued towards the end of high school, thanks to his mathematical prowess. He graduated from the University of Pavia, Italy, in 1981 with a master’s degree in physics. Strangely, there were no Italian universities offering PhDs in physics at the time. “So, I decided to start working immediately. My first job was at the Los Alamos National Laboratory, where I worked as a high-energy physicist, on a muon-decay experiment,” Carminati says.

He spent a year working at Los Alamos, before his contract ended and was not renewed. “I began writing a number of letters looking for a job and, at the encouragement of my wife, I wrote to Nobel-prize-winning physicist Samuel Ting. Honestly, it was a very long shot and I didn’t think I was going to receive any answers,” he recalls. Luckily, the eminent physicist found Carminati’s CV interesting and wrote to ask him whether he was “better” at computing or hardware. “I said I was better in computing. So he put me in contact with the California Institute of Technology,” where Carminati spent the next year, before being hired by CERN in a computing role.

Federico Carminati at CERN

Carminati has been at CERN since 1985, where he has held a variety of jobs, the first of which was at the CERN Program Library, which handles the organization’s data. The library essentially started as a collection of programs written for physicists at CERN experiments. “But it became a worldwide standard for computing in high-energy physics,” Carminati says. “My task was to co-ordinate the development of this very large piece of code, and to distribute it. This was before the Web existed, so distributing it meant shipping large reel tapes of data.”

Later, Carminati became responsible for one specific part of this library – the GEANT detector simulation program. The idea here was to carry out detailed and precise simulations of the very high-energy experiments they hoped to run on actual detectors in the future. Carminati worked on this until 1994. “I then decided to join the small team that was set up by the CERN director and Nobel-prize-winner Carlo Rubbia, who decided to start working on the design of a new kind of reactor that would combine the technology of nuclear-power reactors and of high-energy accelerators.” Carminati worked as part of this small team for the next four years, which proved interesting even though the team’s prototype never saw the light of day.

From 1998 to 2012, Carminati worked on the ALICE experiment, one of the four main detectors at the Large Hadron Collider (LHC). Among his roles was that of computing co-ordinator, which meant that he was in charge of designing, developing and co-ordinating the computing infrastructure for this experiment. “I was also very involved in the development of CERN’s computing grid,” he says.

Formidable requirements

Launched in 2002, CERN’s Worldwide LHC Computing Grid was a pioneering concept, as it allowed physicists across the globe to exploit the many petabytes of data generated each day when the LHC is running. It was key in allowing researchers to pin down the Higgs boson in 2012. While this global collaboration of computer centres is well established today – connecting more than 8000 physicists to thousands of computers in some 190 centres across 45 countries – it was a gargantuan task.

“We were asked to put down the computing requirements for the LHC on paper, in the late-1990s, and it emerged that they were so formidable that we were nearly accused of sabotaging the project,” exclaims Carminati, who says that it seemed as though the computing power needed was far beyond the funding provided to CERN. It was equally impractical to build and host such a large computing centre at the European lab.

The idea emerged to harness all of the computing facilities of the different laboratories and universities across the world that were already involved in the LHC, and integrate them into a single computing service. “Nowadays, everybody is talking about cloud computing, but at the time it was little more than science fiction,” says Carminati. “The interesting thing was that the funding agencies agreed to give us this computing power. But they wanted to make local investments by helping create centres of computing excellence in all the different countries,” he says, explaining that funders hoped that these centres would hire home-grown people and develop know-how in information technology locally. “It was a fantastic adventure because I travelled to places such as South Africa, Thailand and Armenia to help them set up computer centres.”

It was not all smooth sailing, though, as Carminati encountered “a tremendous amount of negotiation, and it took a lot of hard work to sell the concept to local politicians”. Carminati says a particular highlight was negotiating the South Korean computer centre for ALICE. “I had to work with the country’s ministry of science and education, and the local scientists too. I also did the same for India.”

Carminati’s role also involved helping users to exploit the computing power once it was established. “I was co-ordinator for ALICE’s experimental computing infrastructure between 1998 and 2012, which was much too long if you ask me. It was fun and creative until the LHC was switched on. When the machines started, it was exhausting. When the LHC is running we are in ‘production mode’,” says Carminati. “You have to be ready to process the vast amounts of data. It becomes a very complex organizational task where you have to co-ordinate hundreds of developers, distributed around the world, providing software to a central depository, all on a very tight time schedule. There are some very hard choices you have to make, when it comes to time versus quality, and the whole thing is tough and exhausting.”

Carminati spent 2013–2017 back at GEANT, working on improving the performance of the simulation program on new computing architectures, and developed the new generation of code used to simulate particle transport at CERN. During that time he also managed to obtain his PhD, from the University of Nantes in France.

Open for business

Now based at CERN openlab, Carminati explains that computing technologies are currently evolving so fast that evaluating them only once they are on the market is “not good enough”. CERN openlab is one of the few units at the European lab explicitly carrying out research into computing.

The aim is for CERN to reach out to high-energy physics users, as well as commercial users, to highlight techniques developed in-house, while also collaborating on projects with other institutions. For example, CERN openlab is currently working with Unosat, the UN’s technology platform that deals with satellite imagery and analysis, which has been hosted at CERN since 2002. One of their joint projects is to evaluate the movement of large refugee populations across the globe, to know how many people are at any given refugee camp, which can often be difficult or even dangerous to reach. One way to assess population density is to count the tents at a camp. “We are experts in machine learning and artificial intelligence,” says Carminati, “so we are working with Unosat to develop programs to automatically count the tents in satellite pictures.”

Another planned collaboration is with Seoul National University’s Bundang Hospital in South Korea, which Carminati says has a “fantastic health information system and patient records, from many years”. CERN openlab is trying to find the resources to begin a project with the hospital to “use machine learning to see whether we can correlate the classifications that artificial intelligence can make of patient records in actual diagnosis”. The idea is to find out if a machine-learning system could learn to make a diagnosis of its own, or pick up people who may have a “double diagnosis” – those with two diseases that are always coupled – thereby having a case for creating a new diagnostic category.

Federico Carminati lecture at CERN

When it comes to the impact of quantum computing on the future of high-energy physics, Carminati is convinced that CERN must start thinking about tomorrow’s computing technology today. “It is so important to explore this, because 10 years from now we will have a shortage of a factor of 100, when it comes to computing time.” The other vital issue is the amount of data being taken at the LHC, and any future colliders, as they search for physics beyond the Standard Model. “We are now looking for something very subtle. With the Higgs we said that we were looking for a needle in a haystack. Now the new game is that we will make a stack of needles, and then look within it for the odd one out.”

This means that particle physicists will be taking and classifying an incredible amount of data, and then processing it with extreme precision, all of which will require an increase in computing power. “We may be increasing the amount of data that we take and the quality of the detectors. But we cannot expect our computing budget to be increased by a factor of 100,” says Carminati. “We are just going to have to find new sources of very fast computing, and quantum computing is a strong candidate.”

While he is clear that none of today’s quantum computers are anywhere near that mark, he believes that quantum computing will mature, partly thanks to investment by industry. “Whenever this happens, I think we have to be ready for it, to exploit it as best we can. We would be able to use a quantum computer across the board – from simulation and detector construction, through to data analysis and computing speed-ups. It is very important to start developing our programs and software now.”

Carminati points out that, were the quantum revolution to arrive, scientists would have to completely rewrite their codes, as he believes there is nothing like a universal quantum computer. “Can we have software that is agnostic of the specific kind of computing that we are using? We will have to develop a new angle, and so this is a large part of our research”. Last November Carminati organized the first ever workshop on quantum computing in high-energy physics at CERN, to get a head start on these very issues.

Outside the box

All of this means that today’s physics graduates will have a wide variety of opportunities when it comes to jobs across the fields. “A physicist is trained to creatively solve complex problems using mathematics, with a lot of thinking outside the box,” says Carminati. “We train so many physicists at CERN, and sometimes it is frustrating to see them leave, but we cannot keep everybody, this we know. Our consolation is knowing that we’re giving them a skillset that is really applicable to many other research fields, and across industry.”

Today, there is a global hunger for machine-learning and quantum-computing experts, with countries from the US to India and China looking to train and develop such expertise. Carminati has a very optimistic outlook for today’s graduates who may be considering one of these fields. “Try to have as much constructive fun as you can in doing your research, because it’s a fascinating job.”

Monolayer resets record for thinnest non-volatile memory device

Thinnest Non-volatile Memory

It may seem that anything traditional materials can do, 2D materials can do better, but when it comes to a resistive device you can switch on or off, leakage currents can pose problems when materials are atomically thin. Drawing on their expertise in high-quality monolayer device fabrication, Jack C Lee and Deji Akinwande at the University of Texas at Austin Microelectronics Research Center in the US, together with colleagues in the US and China, have now demonstrated resistive switching across monolayer hexagonal boron nitride (h-BN), bringing the record thickness for a memory material down to around 0.33 nm.

Non-volatile resistance devices switch between high and low resistance states when a voltage is applied, a process described as setting and resetting. They have attracted interest for memory that persists without a power supply, as well as high-density memory and next-generation neuromorphic computing that mimics the operation of synapses in living organisms.
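For readers unfamiliar with how such a memory cell behaves, the sketch below is a minimal toy model of a bipolar resistive switch. The threshold voltages and resistance values are invented for illustration and are not taken from the h-BN devices described here.

```python
# Toy model of a bipolar non-volatile resistive switch (illustrative only;
# the thresholds and resistance values are assumptions, not device data).

class ResistiveCell:
    def __init__(self, r_on=1e3, r_off=1e10, v_set=1.0, v_reset=-1.0):
        self.r_on = r_on          # low-resistance ("on") state, ohms (assumed)
        self.r_off = r_off        # high-resistance ("off") state, ohms (assumed)
        self.v_set = v_set        # voltage that sets the cell (assumed)
        self.v_reset = v_reset    # voltage that resets the cell (assumed)
        self.resistance = r_off   # start in the high-resistance state

    def apply(self, voltage):
        """Apply a voltage pulse; the new state persists afterwards (non-volatility)."""
        if voltage >= self.v_set:
            self.resistance = self.r_on      # "set" -> low resistance
        elif voltage <= self.v_reset:
            self.resistance = self.r_off     # "reset" -> high resistance
        return voltage / self.resistance     # current drawn during the pulse

    def read(self, v_read=0.1):
        """A small read voltage senses the state without disturbing it."""
        return v_read / self.resistance


cell = ResistiveCell()
cell.apply(1.5)        # set pulse
print(cell.read())     # large read current -> logical "1"
cell.apply(-1.5)       # reset pulse
print(cell.read())     # tiny read current -> logical "0"
```

The key point the model captures is that the state survives with no voltage applied, which is what makes such devices candidates for memory that persists without power.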

“We pioneered monolayer 2D memory last year initially based on MoS2,” says Akinwande. So-called transition metal dichalcogenides (TMDs) like MoS2 are semiconductors – they conduct only when electrons gain enough energy to overcome the material’s bandgap. As Akinwande tells Physics World, the team then began to consider h-BN as a potentially more suitable candidate for memory than the TMDs, because of its more insulating character and larger bandgap. “The main challenge was that monolayer h-BN is two times thinner than MoS2, so it was not clear if the device would work because any pinholes in the h-BN would result in an electrical short.”

Fabricating perfection

The researchers took several measures to ensure the quality of their sample devices. They grew h-BN monolayers directly on the bottom electrode and then grew the top electrode directly on the h-BN to avoid any impurities and defects arising from transfer processes. They also used gold, an inert metal, for their electrodes. This helped to rule out the role of oxides forming at the interface, so the researchers could attribute any switching behaviour to the h-BN layer with certainty.

“After many rounds of sample experiments we were able to work on high-quality h-BN monolayer films grown by collaborators at University of Texas and also samples from Peking University that resulted in observation of memory effects in the thinnest atomic monolayer, a materials science record,” says Akinwande.

Filling in the gaps

Measurements of the effect of temperature on the electronic properties revealed that, while the devices shared the characteristics of TMD devices in the low-resistance state, the behaviour in the high-resistance state differed. The increase in current with temperature in the high-resistance state suggests that charge traps play a role in the electronic behaviour.
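One common way to quantify this kind of thermally activated, trap-assisted conduction – not necessarily the analysis used in this study – is to extract an activation energy from an Arrhenius plot of current against inverse temperature. The sketch below uses invented current–temperature values purely to illustrate the procedure.

```python
# Illustrative Arrhenius-style analysis of thermally activated conduction.
# The current-temperature values are made up for the sketch; they are not
# data from the h-BN devices.
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K
T = np.array([200.0, 250.0, 300.0, 350.0])       # temperatures, K (assumed)
I = np.array([2e-11, 1.1e-10, 4e-10, 1.1e-9])    # currents, A (assumed)

# For trap-assisted conduction, I ~ exp(-E_a / k_B T), so ln(I) is linear
# in 1/T with slope -E_a / k_B.
slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
E_a = -slope * k_B
print(f"Activation energy ~ {E_a:.2f} eV")
```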

Based on ab initio simulations, the researchers explain the switching behaviour as the result of boron vacancies. In the low-resistance state, gold atoms can occupy these vacancies, forming a conductive bridge.

The ratio of the high and low resistance states was 10⁷. Switching occurred in less than 15 ns, and tests demonstrated that the resistance states were stable for at least a week after more than 50 switching cycles. “We are now conducting further experiments to improve the reliability and explore applications including and beyond information storage,” says Akinwande.

Full details are reported in Advanced Materials.

Facing up to the decarbonization challenge

How can we satisfy the world’s energy needs while also reducing its carbon emissions?

Of all the scientific and social questions facing humanity, this is arguably the most important. Indeed, the future of our civilization may even depend on finding an answer, especially as the climate consequences of dumping carbon dioxide (CO2) into the Earth’s atmosphere become ever more apparent and alarming.

It was therefore interesting to hear this question being posed by a senior scientist at ExxonMobil, one of the world’s major oil and gas firms. In 2018, ExxonMobil announced that it plans to produce 25% more oil and gas by 2025 than it did in 2017. The company expects global demand for oil to rise by 19% between 2016 and 2040, and demand for gas to rise by 38% in the same period. In contrast, the Intergovernmental Panel on Climate Change estimates that keeping the Earth’s temperature within 1.5 °C of pre-industrial levels would require oil and gas production to fall by 20% over the next few decades.

Amy Herhold, director of physics and mathematical sciences at ExxonMobil Research and Engineering in New Jersey, US, did not mention the company’s plans for increasing production during her talk at an APS March Meeting session devoted to the company’s physics research over the last five decades. Instead, Herhold began by pointing out that the world’s population is expected to grow by 1.7 billion between 2016 and 2040. According to the company’s forecasts, demand for energy (in all forms) will go up by 25% over the same period, leading to a 10% rise in CO2 emissions. The small silver lining in this dark cloud is that CO2 production per unit of gross domestic product (GDP) is predicted to drop by 45%, as improvements in efficiency, coupled with increasing use of renewable energy, reduce the “carbon intensity” of the global economy.
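Taken at face value, those percentages also pin down how fast the global economy would have to grow over the same period. The back-of-envelope check below uses only the figures quoted above; the implied GDP growth is a derived number, not one stated by ExxonMobil.

```python
# Back-of-envelope check of the projections quoted above (2016-2040).
# Only the percentage changes cited in the article are used.
energy_growth   = 1.25   # energy demand up 25%
co2_growth      = 1.10   # CO2 emissions up 10%
intensity_ratio = 0.55   # CO2 per unit of GDP down 45%

gdp_growth = co2_growth / intensity_ratio        # since CO2 = intensity * GDP
energy_intensity_ratio = energy_growth / gdp_growth

print(f"Implied GDP growth: x{gdp_growth:.2f}")  # ~2.0, i.e. a doubling
print(f"Implied change in energy per unit GDP: "
      f"{100 * (energy_intensity_ratio - 1):.0f}%")  # roughly -38%
```

In other words, the forecast is only consistent if the world economy roughly doubles while using substantially less energy, and emitting substantially less CO2, per unit of output.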

Much of the rest of Herhold’s talk was devoted to an overview of the physics research that scientists at Exxon (later ExxonMobil) have done since the 1950s, on topics as varied as subsurface sensing; the behaviour of emulsions and foams; fluid flow; and even, in the early days, nuclear fusion. Towards the end, though, she returned to the subject of energy efficiency and decarbonization.

One of ExxonMobil’s current research goals is to develop membranes that can separate different types of hydrocarbons. The aim is to replace or reduce the use of distillation, in which crude oil is heated (an energy-intensive process) and successively heavier fractions are separated out. Another research project focuses on developing advanced biofuels for use in hard-to-decarbonize sectors such as commercial shipping. And a third relates to carbon capture, in which concentrated CO2 obtained from (say) the flue gas of a power plant is injected into underground rock formations to sequester it from the atmosphere.

Replacing distillation with mechanical filtration seems like a clear win for ExxonMobil; no company wants to spend money on fuel if it doesn’t have to. But it was less obvious what the company stands to gain from working on carbon capture or non-fossil-fuel forms of energy, so at the end of the talk I asked Herhold if she could elaborate.

Her response, roughly, was that ExxonMobil is an energy company, so it’s interested in all options. The company’s scientists also know a lot about underground rock formations and how fluids flow through them, so she thinks they have something to offer on carbon capture. But beyond that, Herhold was pretty tight-lipped. “What our business model looks like in the future is not something I can comment on,” she said. “It’s going to be an interesting journey.”

  • This article was updated on 28th March to clarify the second paragraph.

Curved camera chips may be the next leap in astronomical imaging

It’s not just televisions and smartphones that are looking forward to a curved future. Astronomers are now looking to put curved camera sensors into spacecraft exploring the depths of the Universe – since they promise to provide better imaging performance in a more compact package – and recent tests by researchers in the US and France show that prototype curved devices are more than up to the task.

In the study, Simona Lombardo of the Laboratoire d’Astrophysique de Marseille and her colleagues tested five CMOS chips that they curved into both concave and convex shapes, and with different radii of curvature. Using commercially available flat CMOS devices, the team first thinned the sensors to increase their mechanical flexibility and then glued them onto a curved substrate to create a spherical shape.

The researchers found that in almost all cases the characteristics of the curved sensors matched those of a flat CMOS chip – confirming that there was no degradation in performance as a result of the curving process. But they also found that the curved detectors generated a much lower dark current – one of the main sources of noise in image sensors – than the flat version.

“The dark current is due to intrinsic movement of electrons in the pixels of the sensor, even when not exposed to light,” explains Lombardo. “These electrons are then collected in the pixels and become indistinguishable from the electrons generated by the incoming light (from a star or a nice landscape).”

Aiming for longer exposure times

Lombardo cautions that the apparent reduction in dark current could simply be caused by the flat sensor and its curved counterparts coming from different batches. But a genuine enhancement resulting from the curved nature of the detectors could be good news for astronomers.

“In astronomy one is generally interested in measuring the most correct number of charges coming from an object (or its flux) and quite often the objects under study are faint and require long exposure times,” says Lombardo. “Having a lower dark current decreases the error in the measurement and, most importantly, allows longer exposure times.”
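To see why dark current matters most for long exposures, consider the standard shot-noise budget for an imaging sensor. The sketch below uses assumed numbers throughout – the signal rate, read noise and dark-current values are illustrative, not figures from the Marseille study.

```python
# Rough signal-to-noise sketch for a faint source, using the standard
# shot-noise budget for an imaging sensor. All values are assumptions
# chosen for illustration.
import math

def snr(signal_rate, dark_rate, read_noise, exposure):
    """signal_rate, dark_rate in e-/pixel/s; read_noise in e-; exposure in s."""
    signal = signal_rate * exposure
    noise = math.sqrt(signal + dark_rate * exposure + read_noise ** 2)
    return signal / noise

# Compare a higher and a lower dark current (e-/pixel/s, assumed values)
for dark_rate in (1.0, 0.1):
    print(f"dark current {dark_rate} e-/s: "
          f"SNR(60 s) = {snr(0.5, dark_rate, 3.0, 60):.1f}, "
          f"SNR(600 s) = {snr(0.5, dark_rate, 3.0, 600):.1f}")
```

Because the dark-current contribution grows with exposure time while the read noise does not, the benefit of a lower dark current becomes more pronounced the longer the exposure – exactly the regime of interest for faint astronomical targets.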

One of the reasons that curved camera chips hold promise for spacecraft engineers is that some past missions have had to use special optics to correct distortions produced by their inbuilt telescopes. A detector with a curvature that is able to cancel out this distortion would negate the need for those extra components, allowing the designers to make lighter, more compact orbiting observatories. And, given the enormous cost of launching every extra kilogram into space, that could lower the price tag of a mission. “Since there are fewer and smaller optical elements, the manufacturing becomes easier and the costs are reduced,” adds Lombardo.

According to the researchers, one project that could make use of curved sensor technology in its camera is the proposed MESSIER satellite – which would image, among other things, the extraordinarily faint tendrils of material that stretch vast distances around and between galaxies. “As the goal of the mission is the observation of large astrophysical structures, a wide field of view is required for the telescope,” says Lombardo. “Curved detectors allow [one] to greatly simplify the design while keeping the performances at [a] high level.”

Dave Walton, Head of Photon Detection Systems at UCL’s Mullard Space Science Laboratory in the UK, agrees: “I’m sure there’ll be other space missions that will want to use curved sensors. As with all space applications, the issues will be demonstrating that the technology is sufficiently mature, and that it will survive the rigours of spaceflight.”

Walton, who was not involved in the new study, continues: “Which proposed future missions will actually survive/evolve to reach the launchpad is always an interesting question, but some of the studies that might benefit from curved sensors include ESA’s GaiaNIR and NASA’s LUVOIR.”
