
MRI safety: an urgent issue for an increasing crowd


The cornerstone of a safe MRI workplace is repeated and updated MRI safety training and awareness. The number of MRI scanners is increasing, and scanners are also moving toward higher field strengths, both in private practice and at hospitals and institutions all over the world. Consequently, there is a large and increasing crowd of radiology staff and others who need MRI safety education to keep our working environment a safe place – and one that no one should be afraid to enter.

When we discuss MRI safety education, we must remember that the knowledge and skills need to be refreshed and reactivated regularly, just as we are expected to attend regular cardiopulmonary resuscitation (CPR) refresher courses and fire drills.

What are the risks?

The MRI-associated risk most people are familiar with is the projectile hazard. There are several well-known and well-proven recommendations to prevent metal and ferromagnetic items from being brought into the examination room.


Screening for metal is important. When screening patients, a number of routines should be used, including filling in a dedicated screening form and changing from street clothes into safe clothing. The MR radiographer also needs to interview the patient immediately before entering the examination room to check that the patient has fully understood the information; any remaining uncertainties must be investigated further before the examination proceeds. These procedures are very important and must never be skipped.

It is also possible to use a ferromagnetic detector as a support to the screening procedure. Such a detector is a good asset if you want to reduce the risk of something being accidentally taken into the room. At the same time, it is important to know that while a ferromagnetic detector may increase MRI safety, it should never replace any of the ordinary screening procedures used.

Awareness of implants

Another risk that has grown in recent years concerns the deposition of radiofrequency energy, which can heat the patient and/or the patient's implants. Heating injuries have increased with the use of more efficient and powerful methods and scanners. Occasionally, they are also caused by a lack of MRI safety competence, for example regarding how to position the patient.

It is of the utmost importance to identify every implant – a task that can be both time-consuming and difficult. We must remember that it is essential to find out if the MRI examination can be performed on a patient with a certain implant and, if so, how it can be done safely.

Working with MRI requires clear and well-founded working routines that are never abandoned. It is of great importance to be alert and never take things for granted in our special environment where a lot of different professions are involved. It is a truly challenging environment, and teamwork is necessary. Working alone with MRI examinations and equipment should never be an option, and all members of the scanning team must have a high level of MRI safety skills.

How to minimize risk

There are several ways to improve MRI safety besides sufficient safety training and education and installing helpful devices. We must not forget that equipment vendors play an important role when it comes to improving the safety situation, and their support and collaboration are greatly appreciated.

We need solid recommendations regarding education and routines to be followed by everyone working in the scanner environment. A better understanding of MRI safety risks and more resources for safety education are needed to maintain a high level of competence among the growing group that needs MRI safety training.

Another area for improvement is the reporting of incidents. Today, there are several different incident reporting systems, locally and sometimes nationally, but most of us working with MRI safety know that the reported incidents to date only show the tip of the iceberg.

What we really need is a general, efficient, easy-to-access reporting system. A dream scenario would be if every single accident could be reported and analysed, and any improvements made would also be registered. This information would then be available in a database so the whole MRI community could learn from it. The reporting system would be a most welcome tool for everybody working to improve MRI safety locally as well as globally.

Titti Owman is an MR radiographer/technologist and a member of the Safety Committee of the Society for MR Radiographers & Technologists (SMRT), and a past member of the Safety Committee of the International Society for Magnetic Resonance in Medicine. She is a founding member of the national 7-tesla facility within the Lund University Bioimaging Center, Sweden, and is also a research coordinator/lecturer at the Center for Imaging and Physiology, Lund University Hospital. She is also a past president of the SMRT.

  • This article was originally published on AuntMinnieEurope.com ©2019 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

Machine learning stabilizes synchrotron beams

Machine learning has been used by scientists in the US to reduce unwanted fluctuations in photon beams from a synchrotron light source. The technique does this by stabilizing the synchrotron’s electron beam and offers a way around an important barrier to the development of next-generation facilities.

The work was done by Simon Leemann and colleagues at the Lawrence Berkeley National Laboratory (LBNL) in California and could allow emerging analysis techniques that require high beam stability – such as X-ray photon correlation spectroscopy (XPCS) – to be implemented on synchrotrons.

Synchrotron light sources are extremely useful scientific instruments because they deliver bright, high-quality beams of coherent electromagnetic radiation from infrared wavelengths up to soft X-rays. The light is produced by accelerating electrons in a storage ring using powerful magnets – taking advantage of the fact that an accelerated electron emits electromagnetic radiation. In modern synchrotrons, the acceleration is enhanced using magnetic wigglers and undulators.

The technology is now entering its fourth generation and future synchrotrons could create light beams up to three orders of magnitude brighter than available today. These facilities should make it possible to study physical and chemical processes on length scales ranging from nanometres to microns; and timescales from nanoseconds to minutes. One possible application would be acquiring complex 3D maps of chemical reactions that could potentially lead to a deeper understanding of electrochemical systems including batteries and fuel cells.

Vertical instability

One potential limitation on the performance of fourth-generation facilities is the relatively large instability in the vertical profile of a synchrotron’s electron beam. This instability leads to fluctuations in the position and size of the light beams, which will make it difficult to implement techniques such as XPCS.

Current stabilization techniques use a combination of predetermined models and tedious manual calibrations to achieve a stability of around 10% of the electron beam width. This is adequate for third-generation facilities, but much tighter control will be needed in fourth-generation synchrotrons.

Now, Leemann’s team has shown that much greater stability can be achieved by using machine-learning algorithms to adjust existing beam-control instrumentation. They developed a neural network designed to prevent width fluctuations from occurring in the electron beam at the third-generation Advanced Light Source (ALS) at LBNL. The algorithm was trained using two datasets: one comprises the distribution of magnetic “excitations” that are used to control the shape of the electron beam; the other is the width of a photon beam obtained from the ALS.

Using this information, the neural network constructed a table of how the magnetic excitations affected the final photon beam width. By continually updating this table, and referring back to it three times per second, the synchrotron could then automatically adjust its electron beam. This resulted in vertical electron-beam fluctuations of just 200 nm – only 0.4% of the beam’s vertical extent – which is suitable for fourth-generation synchrotrons.
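For readers who want a feel for how such a feed-forward correction works, here is a minimal sketch in Python. It uses a simple least-squares fit in place of the ALS neural network, and every machine interface (read_excitations, measured_beam_size) is a simulated stand-in rather than a real accelerator API.

import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the accelerator interfaces (not real ALS software).
N_SETTINGS = 8                                     # insertion-device "excitation" channels
true_response = rng.normal(0.0, 0.5, N_SETTINGS)   # how each setting perturbs the beam size

def read_excitations():
    """Pretend readback of the current insertion-device configuration."""
    return rng.uniform(-1.0, 1.0, N_SETTINGS)

def measured_beam_size(excitations, nominal_um=50.0):
    """Pretend beam-size monitor: nominal size plus perturbation and noise."""
    return nominal_um + excitations @ true_response + rng.normal(0.0, 0.05)

# Build the "table" relating excitations to beam size from historical data.
# A linear least-squares fit stands in for the neural network used at the ALS.
X = np.stack([read_excitations() for _ in range(2000)])
y = np.array([measured_beam_size(x) for x in X])
A = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_size(excitations):
    return excitations @ coeffs[:-1] + coeffs[-1]

# Feed-forward loop: predict the disturbance and apply a compensating correction.
# The article quotes corrections applied roughly three times per second.
target_um = 50.0
for step in range(3):
    x = read_excitations()
    correction = target_um - predict_size(x)        # size error to cancel
    corrected = measured_beam_size(x) + correction  # stand-in for a corrector magnet
    print(f"step {step}: corrected beam size = {corrected:.3f} um")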

The research is described in Physical Review Letters.

New ways to probe material properties

Thousands of scientists will be gathering in Boston, US, next week for the Fall Meeting of the Materials Research Society. With freezing temperatures on the horizon, there will be plenty of time to size up the latest equipment for materials characterization and analysis. Here are a few of the highlights.

Integrated Hall solution speeds up measurement

Lake Shore Cryotronics will be demonstrating an all-in-one instrument for complete Hall analysis at Booth 401. Ideal for semiconductor material research, the company says that the MeasureReady M91 FastHall measurement controller is faster and more accurate than traditional Hall solutions, while also being easier to use.

The M91 FastHall Controller from Lake Shore Cryotronics

Combining all the necessary Hall measurement system functions into a single instrument, the M91 automatically executes measurement steps, collecting the data and calculating the final Hall and mobility parameters. By automatically optimizing the excitation and measurement range, the instrument eliminates manual trial-and-error steps and ensures that measurements are always made under optimal conditions for the sample.

Most commonly measured materials can be analysed within a few seconds, including those with low mobility. Such speed of measurement is enabled in large part by Lake Shore’s patented FastHall technology, which eliminates the need to reverse the magnetic field during the measurement. This is particularly beneficial when using superconducting magnets, which are relatively slow at completing field reversals.
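As a rough illustration of what a Hall analysis extracts, the textbook single-field relations below convert a measured Hall voltage and sheet resistance into carrier density and mobility. This is generic physics, not the instrument's patented FastHall algorithm, and the numerical values are purely illustrative.

from scipy.constants import e   # elementary charge in coulombs

def hall_parameters(V_H, I, B, R_sheet):
    """Textbook single-field Hall relations:
    sheet carrier density n_s = I*B / (e*|V_H|)      [m^-2]
    Hall mobility         mu  = 1 / (e*n_s*R_sheet)  [m^2 V^-1 s^-1]"""
    n_s = I * B / (e * abs(V_H))
    mu = 1.0 / (e * n_s * R_sheet)
    return n_s, mu

# Illustrative numbers only: 1 mA drive current, 1 T field,
# 2 mV Hall voltage and a 500 ohm-per-square sheet resistance.
n_s, mu = hall_parameters(V_H=2e-3, I=1e-3, B=1.0, R_sheet=500.0)
print(f"sheet density = {n_s:.2e} m^-2, mobility = {mu * 1e4:.0f} cm^2/(V s)")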

You can find out more about the M91 at Booth 401, along with Lake Shore’s full range of measurement and control solutions for low-temperature and magnetic-field conditions.

Integrated time controller delivers picosecond precision

The ID900 Time Controller from ID Quantique combines the functionalities of a time-tagger, delay generator, pattern generator, counter and discriminator in a single device. It forms the central hub in a system for fluorescence lifetime NIR spectroscopy with one of ID Quantique’s single-photon detectors.

ID900 Time Controller

The time controller has four inputs and four outputs that are interconnected internally via a fast FPGA. Its high-speed mode allows fast count rates up to 100 Mcps on each channel with a binning of 100 ps, while the high-resolution mode has a binning of 13 ps and allows a maximum count rate of 25 Mcps on each of the four channels.

The ID900 comes with a clear and simple interface for controlling, monitoring and analysing the parameters for histogram generation, timestamping and delay generation. A configuration editor also allows the user to create customized features such as coincidence filtering or even conditional outputs. It is available in three versions that each offer different add-on functionalities.

Find out more about the ID900 time controller by visiting ID Quantique at Booth 725.

Next-generation microscope incorporates quantum metrology

The quantum scanning microscope (QSM) from QZabre is the first turnkey instrument on the market to exploit a nitrogen-vacancy centre (NV) – a single atomic defect in diamond – as a highly sensitive magnetic-field sensor. Scanning NV technology provides quantitative data on magnetic fields at the nanoscale, and can be used to sense static and dynamic magnetic fields, as well as microwave fields and current densities, and even temperature, subsurface metal or proteins and larger magnetic molecules.

Quantum scanning microscope

The heart of the instrument is a diamond scanning probe with a single NV at its apex. The tip, which is made from high-quality single-crystal diamond, is shaped to deliver maximum sensitivity, and is attached to a tuning fork for height feedback. Spatial resolution and magnetic sensitivity depend on the diamond tip, with typical values currently 30 nm and 1–10 µT/√Hz respectively, and QZabre believes that these numbers will fall with further innovation.

The microscope delivers closed-loop scanning with a range of 90 µm x 90 µm, and coarse movement with position feedback over an area of 6 mm x 6 mm. A robust design enables stable scanning with minimal drift for long time periods, while an intuitive software interface simplifies and automates many of the complex underlying processes.
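To put the quoted sensitivity in context, a figure in µT/√Hz translates into a minimum detectable field for a given averaging time per pixel roughly as follows. This is a standard rule of thumb, not a QZabre specification, and the timings are illustrative.

import math

def min_detectable_field(sensitivity_uT_per_sqrtHz, averaging_time_s):
    """Rule of thumb: noise floor ~ sensitivity / sqrt(averaging time)."""
    return sensitivity_uT_per_sqrtHz / math.sqrt(averaging_time_s)

# With the quoted 1 uT/sqrt(Hz) sensitivity:
print(f"{min_detectable_field(1.0, 1.0):.2f} uT noise floor with 1 s per pixel")
print(f"{min_detectable_field(1.0, 100.0):.2f} uT noise floor with 100 s per pixel")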

To discuss how the QSM can help your specific application, visit QZabre at Booth 810.

Solar-powered planes, trains and automobiles

Can cars be run on solar energy alone? Vehicle-mounted photovoltaic (PV) cells have been able to provide sufficient daytime power for very lightweight experimental “solar cars”, as demonstrated in the cross-Australia World Solar Challenge. However, even with lighter, more efficient cells, it is a big step from that to conventional cars for general use. Solar PV battery chargers are widely used on yachts and other vessels for ancillary power. They are also used to power the units of some refrigerated transport containers to reduce fossil-fuel use by delivery trucks. Yet PV-powered cars might seem a step too far, and not everyone is convinced that enthusiasm for this idea is entirely justified.

Cars do, however, spend a lot of time parked or driving in the open air, so roof-top PV cells could be used to charge electric vehicle batteries. And several solar cars have now been developed, although in most cases the solar input is a “top up” to standard grid charging, adding a few extra miles to their daily range on sunny days. For example, Toyota claims that its solar car can manage up to 55 extra kilometres with a full day’s solar charge. Going one better, Lightyear’s new solar car claims to be able to add almost 20 km of extra range for each hour of solar charging, possibly taking the total range up to 1250 km.

The solar-roof system on Hyundai’s new solar car is claimed to be able to charge 10–60% of the battery per day. So if six hours of daily high-level solar charging is possible, the annual travel distance might be increased by an extra 1300 km. That’s valuable, saving on mains charging costs, but it also means that, if you do not need to drive long distances each day, you may not have to use charging points at all. For example, with the version being developed by Toyota, it is claimed that if the car is only driven four days a week for a maximum of 50 km a day it does not need to be plugged in anywhere. It could be the same with some of the others.
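Some quick arithmetic, using only the figures quoted above, shows how these claims relate to everyday driving. The calculation is back-of-envelope and the inputs are the manufacturers' own numbers.

# Back-of-envelope conversions of the manufacturer figures quoted above.
days_per_year = 365

# Hyundai: roughly 1300 km of extra range per year from the solar roof.
print(f"Hyundai claim ~ {1300 / days_per_year:.1f} km of extra range per day")

# Toyota: up to 55 km of range from a full day's solar charge. If the car is
# driven four days a week for at most 50 km a day, a week of sunshine can in
# principle cover a week of driving, which is the basis of Toyota's claim that
# the car need not be plugged in.
weekly_driving = 4 * 50   # km driven per week
weekly_solar = 7 * 55     # km of range added in a best-case sunny week
print(f"weekly driving {weekly_driving} km vs best-case solar top-up {weekly_solar} km")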

That may be fine for some short-distance commuters, but otherwise it is still a bit limiting. In general, it may make more sense to rely mainly on domestic – or car park – grid-linked charging. If you want solar charging then it might be best to use chargers at car parks with PV-covered car ports. It is certainly becoming increasingly common in the US – and also in some parts of the EU – to mount PV arrays on weather-protection canopies/sun shades erected over car-parking spaces at retail outlets, train stations and similar venues.

In some cases the power produced is made available to charge electric vehicles. Solar canopies have become a booming market, with, for example, parking-lot canopy-scheme capacity in the US already heading for over 1 GW. For good or ill, there are a lot of car parks around the world and the solar canopy idea seems likely to spread, especially in sunnier countries where car-lot sunshades are already common.

There are also some other novel static options for PV transport applications. For example, a study by Imperial College London has backed the idea of PV generation alongside rail tracks that would feed power direct to the live rails rather than via the grid. The UK Department for Transport is supporting work on using a novel PV “clip-on” system for railway-track sleepers.

Up in the air

An even more exotic transport idea is to power airships using solar cells. Dirigible balloons offer a large top surface area for PV cells, and their output could be stored in batteries and used to run electric motor-driven propellers. Given that airships can fly above the clouds, thereby increasing the amount of sunlight available, some interest has been shown in the idea. They offer leisurely travel options, but also heavy lifting capacity for delivering bulky items to remote locations. Emergency access for disaster relief is another possibility, with some interesting hybrid balloon/aircraft versions emerging.


On the larger scale, China has developed a high-altitude solar-powered airship project that could be used as an alternative to satellites for telecommunications — an idea that has also driven the development of solar-powered drone-like aircraft. An unmanned solar-powered aircraft developed by Airbus has operated in the stratosphere at an average altitude of 20,000 m, with its maiden flight in 2018 lasting almost 26 days.

Some small, lightweight solar-powered manned aircraft have also been tested. In 2016 the single-seat Solar Impulse 2 craft circumnavigated the globe without using fuel; the trip was completed in 17 separate legs. It was an impressive feat and there have been some larger projects proposed since, including one six-seater. Some see such craft as essentially power-assisted gliders, and so perhaps not heralding the advent of solar-powered aviation on a large scale, especially given the ratios of solar power to load, lift and speed. However, it is early days yet and, for example, some hybrid balloon/aircraft variants might do better.

Harvesting the Sun

More likely to be viable in the short term are simple battery-powered light aircraft that are charged when they are on the ground. If they are charged with green/solar power then that would make them zero carbon. Some interesting projects are under way and they are moving up-scale quite rapidly: a nine-passenger Alice prototype recently emerged and there are plans for some quite large electric planes, though the current emphasis is on short-haul and hybrid systems.

Longer term, hydrogen-powered planes may be a way ahead. Examples include the four-seater HY4 tested in 2016 in Germany and the Element One concept developed by Singapore-based HES, which uses fuel cells to make electricity. The hydrogen fuel would be produced by electrolysis, using power from green/solar energy sources on the ground.

However, the energy density by volume of hydrogen gas is low, and pressurised-gas or liquid-hydrogen storage systems are heavy. Unsurprisingly, given their higher energy densities, liquid biofuels are usually seen as a more likely choice for aircraft, with the hunt on for the best option. In that case, the planes would be solar powered, but the Sun’s energy would have been harvested by biomass grown on the ground – or possibly at sea. That would reduce the need for large areas of land to grow biofuels for aircraft, although there are all sorts of other eco-issues with growing biofuels, wherever it is done.
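The scale of that volumetric penalty is easy to see from widely quoted, approximate energy-density figures. The numbers below are rounded textbook values, not figures from the article.

# Approximate, widely quoted lower-heating-value energy densities (rounded).
fuels = {
    # name                     (MJ/kg, MJ/L)
    "hydrogen, 700 bar gas":   (120.0,  5.0),
    "hydrogen, liquid":        (120.0,  8.5),
    "jet fuel / biokerosene":  ( 43.0, 35.0),
}

for name, (per_kg, per_litre) in fuels.items():
    print(f"{name:24s} {per_kg:6.1f} MJ/kg   {per_litre:5.1f} MJ/L")

# Even liquefied, hydrogen needs roughly four times the tank volume of kerosene
# for the same energy, before counting the mass of cryogenic or pressure vessels.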

Looking more broadly, while some of these new fuels and/or the use of green electricity might reduce aviation emissions, maybe we need to rethink our love of flying. Otherwise, we may end up using our green fuel/green power for that purpose only — a real possibility according to one recent study. We may need that green energy for other more energy-efficient forms of transport. This includes trains powered by electricity supplied by wind farms or trains that use hydrogen produced from renewable sources.

Despite the space limitations, putting PV solar on vehicles, cars or even planes may avoid direct conflict with the use of green energy for other types of transport. Although some applications may take time to mature, solar energy looks set to play an expanding role in transport.

Building an ethical consensus for space exploration

SpaceX Mars City One

When Israel’s Beresheet spacecraft crash-landed on the Moon in April, it was yet another reminder that “space is hard”. Despite SpaceIL – the firm behind the craft – receiving plaudits for getting so close, only later did it emerge that Beresheet contained hundreds of tardigrades – creatures under a millimetre long that are known to survive incredibly harsh conditions (October 2019). While SpaceIL had jumped through the necessary hoops from NASA’s planetary-protection office for the mission, the firm was apparently unaware that the tardigrades had been included in a payload by the US-based Arch Mission Foundation.

The incident was a reminder that, for some, space exploration comes with a “first come, first served” mentality. Take another example – private artificial satellite constellations promising global 5G communication. Did humankind – particularly weather forecasters and ground-based astronomers – ever agree to all these missions? Or was it just the will of the companies thinking that their perspective of “good” fits a universal standard? Sadly, such ethical considerations are either missing entirely or can easily be breached, particularly by unregulated entities.

As space organizations think about sending humans beyond Earth orbit for the first time in over half a century, we still largely know nothing about how we affect our celestial neighbours and, even more terrifyingly, how environments on other bodies could affect our own health. Assumptions about our ability to live in space have no regard for how little we know about medical practices in non-Earth gravity, microbial resistance in microgravity, the chemical composition of regoliths – even on the Moon we claim to know so well – as well as the effects of bioterrorism and quarantine in future space expeditions. This is all, of course, to say nothing about the possibility of life that may exist elsewhere.

Space exploration is risky, but it is well within our research capabilities to understand potential contamination effects, improve medical practices, thwart any potential for space to become a new sector for bioterrorism, and explore the global socioeconomic impact of science missions. Failing to consider these issues not only endangers future explorers but, I believe, is also highly unethical.

On top of this, some dismiss the role that humans have already played in colonization and exploitation on Earth. To them, colonization was an event that took place in the past, not one that continues to thrive today on exploitation. They believe that the Western way of life and our affinity for capitalism are guaranteed to carry on into space. They talk about how our governments and institutions will live on, on the Moon or on Mars, and invoke the rhetoric of democracy in space, citing “space is for all” without accounting for the right of all ideologies to study space.

This is not to say that exploration and scientific progress should stop while we reassess. Indeed, the emergence of the private space sector allows us to progress in a way not seen before. For example, in late September, SpaceX unveiled its Mars rocket prototype, dubbed Starship. But this is the scary part: progress without even the slightest consideration of socioeconomic implications or how exploration can facilitate exploitation on a global scale will do more harm than good.

Progress without ethics silences all the voices that have a right to space science and exploration. It gags those who are simply not rich enough or who have no interest in establishing a permanent or economic presence on another world. If those who are privileged enough to lead humans further into space cannot see the repeated and destructive path that humanity will once again head down, then there are already voices that are being silenced – even before we have colonized other worlds.

Space is the last peaceful place we have. The collaborative nature of space science and exploration was built on the understanding that no single company, agency or nation can ever hope to comprehend the universe we live in. It is human nature to explore, learn and to take risks. For the first time in human history, there is a chance to push forward in a way that benefits and includes us all and, with space science, humanity can begin to untangle its harmful and exploitative tendencies.

We need to carry out an uncomfortable self-assessment – what are the goals for public and private sectors in space? Are we technologically ready for such exploration? Who are the people putting progress ahead of ethical concerns and why do they view colonization as a right? Ethics is often viewed as a roadblock to progress instead of a way to make advances responsible and mindful of a long history of exploitation and silencing. In my experience, any conversations already taking place about these questions have no diversity component.

I belong to a generation for which the Hollywood depiction of travelling to Mars has become less science fiction and more of a feasible career goal. I believe we can have an ethical public–private collaboration as we enter a new chapter in human history. We owe it to ourselves and to generations to come to create a safe and truly inclusive future in space – one where the dismantling of exclusionary practices can begin now.

Quantum computers could mark their own homework

A new and very efficient protocol for assessing the correctness of quantum computations has been created by Samuele Ferracin, Animesh Datta and colleagues at the UK’s University of Warwick. The team is now collaborating with experimental physicists to evaluate the protocol on nascent quantum processors.

Quantum computing has been advancing rapidly and physicists are now in the early stages of building quantum processors that can outperform supercomputers on certain computational tasks. However, quantum computations can easily be disrupted by environmental noise, which destroys quantum information in a process called decoherence. As a result, it is crucial to ensure that a quantum computer has done the required calculation and has not fallen prey to decoherence.

Datta explains, “A quantum computer is only useful if it does two things: the first is that it solves a difficult problem; the second, which I think is less appreciated, is that it solves the hard problem correctly. If it solves it incorrectly, we have no way of finding out.”

Defeats the purpose

The conventional approach to checking is to do the calculation on a classical computer and then compare the results. While this is possible for simple computations, even the most powerful supercomputers will not be able to check the work of future quantum computers. Indeed, using huge amounts of conventional computing power to do this defeats the purpose of developing quantum computers.

Ferracin and colleagues take an alternative approach that involves having a quantum computer run a series of simple calculations, whose solutions are already known. The protocol involves calculating two values. The first is how close the quantum computer has come to the correct result and the second is the level of confidence in this measurement of closeness.

These parameters allow the researchers to place a statistical bound on how far the quantum computer can be from the correct answer of a much more difficult problem.
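To convey the flavour of this kind of check, the toy sketch below runs a batch of test circuits with known answers and turns the observed success rate into a statistical lower bound using a standard Hoeffding inequality. It illustrates the general idea only and is not the Warwick group's published protocol.

import math

def fidelity_bound(n_tests, n_correct, confidence=0.95):
    """Toy verification check: run `n_tests` circuits whose outputs are known,
    count how many the device gets right, and use a Hoeffding bound to state,
    with the requested confidence, how far the device's true success
    probability could lie below the observed rate."""
    observed = n_correct / n_tests
    # Hoeffding: P(true rate < observed - eps) <= exp(-2 * n * eps^2)
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n_tests))
    return observed, max(0.0, observed - eps)

# Hypothetical run: 1000 test circuits, 962 returned the known correct output.
observed, lower = fidelity_bound(n_tests=1000, n_correct=962)
print(f"observed success rate {observed:.3f}; "
      f"with 95% confidence the true rate is at least {lower:.3f}")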

Having honed their protocol over the past several years, Ferracin and colleagues are now collaborating with experimentalists, enabling them to ascertain how well the scheme performs in real quantum computers.

Looking to the future, the team hopes that its protocol will allow quantum computers to do calculations that are inaccessible even to the most powerful conventional supercomputers. “We are interested in designing and identifying ways of using these quantum machines to solve hard problems in physics and chemistry, to design new chemicals and materials, and to identify materials with interesting or exotic properties,” says Datta.

The protocol is described in the New Journal of Physics.

Cyclotron-based gallium-68 generator breaks production records

Gallium-68 (68Ga) is a positron emitter that’s becoming established as a valuable diagnostic isotope, primarily for detection of neuroendocrine tumours (NETs). Such tumours do not metabolize glucose well – precluding their visualization via standard FDG-PET scans – but overexpress somatostatin receptors that bind, for example, to the PET agent 68Ga-Dotatate.

Currently, 68Ga is made using a germanium/gallium generator, in which 68Ga is created as its parent isotope 68Ge decays. Unfortunately, this approach only produces enough 68Ga for two or three patient scans per day. Canadian company ARTMS Products is developing an alternative technique, in which a low-energy cyclotron is used to create 68Ga from solid zinc-68 targets. The company has now demonstrated record-breaking, multi-curie levels of 68Ga production.

Paul Schaffer

“The primary challenge with germanium/gallium generators is that they can’t seem to make the generators fast enough, and when they do, the output is relatively limited,” explains Paul Schaffer, founder and CTO of ARTMS. “The generators make about one patient dose for each elution. So even if they run two or three per day, that’s only two or three patient doses per day, which is an expensive proposition for many centres.”

To address this problem, ARTMS developed the QUANTM Irradiation System (QIS), a 68Ga production scheme that includes enriched 68Zn targets, a transportation device that attaches onto the port of an existing medical cyclotron and a send-and-receive station that terminates inside a shielded workspace. Schaffer notes that the system is manufacturer-agnostic and can be installed on any major cyclotron brand.

The hardware enables a technician to load a non-radioactive 68Zn target into the transportation system. Automated pneumatics and robotics then move the target to the cyclotron’s target port, where it is irradiated for two hours by a proton beam. This proton irradiation generates 68Ga within the target via the 68Zn(p,n)68Ga nuclear reaction. The irradiated target is then brought back to the shielded space, where the 68Ga can be extracted and purified for use in radiopharmaceuticals.

“What makes the ARTMS system unique is we’ve demonstrated that it can produce 68Ga at levels of 10 Ci – or 370 GBq,” says Schaffer. “This is 100 to 200 times more activity in a two-hour irradiation than a germanium/gallium generator can put out. This puts the problem not on the amount of gallium that you have but, with its 68 min half-life, the rush to use it all.”

He suggests that a hospital should be able to produce about a day’s worth of 68Ga and scan several patients following a single cyclotron run. “And at the end of the day, you don’t have any 68Ge or long-lived by-products to deal with,” he notes.
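A simple decay calculation, using the half-life and activity quoted above, illustrates why the "rush to use it all" matters. The per-dose activity mentioned in the comment is a typical order of magnitude, not a figure from ARTMS.

import math

HALF_LIFE_MIN = 68.0          # 68Ga half-life quoted in the article
A0_GBq = 370.0                # activity straight after a two-hour irradiation

def activity(t_min, a0=A0_GBq, t_half=HALF_LIFE_MIN):
    """Simple exponential decay: A(t) = A0 * 2^(-t / t_half)."""
    return a0 * 2.0 ** (-t_min / t_half)

# Illustrative only: assuming roughly 0.2 GBq per patient dose (a typical order
# of magnitude, not a figure from the article), the batch stays clinically
# useful for several half-lives but shrinks fast.
for hours in (0, 2, 4, 6, 8):
    print(f"{hours} h after end of bombardment: {activity(hours * 60):7.1f} GBq")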

The record-breaking 68Ga production was achieved at Odense University Hospital. The team there demonstrated multi-curie production of two radiopharmaceuticals: 68Ga-Dotatate (known as NETSPOT) and a prostate-specific membrane antigen (PSMA) radiopharmaceutical for imaging prostate cancer. The Odense team is now embarking on a validation study in preparation for regulatory submission.

“We have demonstrated the production level, we’ve demonstrated the chemistry and we’ve demonstrated that 68Ga quality is consistent with regulatory standards, but we have not yet received regulatory approval,” says Schaffer.

He explains that other centres worldwide are working with ARTMS to achieve regulatory approval for the QIS in their respective countries. “We have a system in Zurich and one in Wisconsin, and also have systems being installed in Japan, the UK, Costa Rica and Toronto,” he says. “I think it’s in everyone’s interest to get this approved as quick as possible.”

Schaffer notes that the new production technology is intended to supplement, rather than replace, existing 68Ga generation methods. “There are going to be areas in the world that will continue to rely on reactor-produced isotopes. There will be areas where Ge-68 and Mo-99 generators just make sense,” he tells Physics World. “But there are many areas of the world that rely on cyclotron technology, and that’s where ARTMS wants to fit in, that’s our goal.”

Square Kilometre Array hit by €250m shortfall

Scientists finalizing plans for the world’s largest radio telescope are in a race against time to try and plug at least some of a €250m hole in the project’s finances. They are hoping to persuade new and existing member states of the Square Kilometre Array (SKA) to stump up more than €100m of additional funds within the next year to avoid reducing the instrument’s core scientific research, which one leading expert says would raise doubts about whether the project is worth building at all.

When built, the SKA should consist of several hundred mid-frequency radio dishes spread out across southern Africa alongside a few hundred thousand low-frequency dipole antennas located in Australia. This would allow astronomers to peer back to the first stars in the universe and study gravitational waves, among other things. However, a series of price rises means that the observatory’s design, agreed in 2015 and itself a much slimmed-down version of the original blueprint, now has a cost well above a cap of €691m that was imposed by member states in 2013.

At a meeting in Nice last week, Andrea Ferrara, an astrophysicist at the Scuola Normale Superiore in Pisa, and chair of the SKA’s science and engineering advisory committee, told representatives of the project’s 10 current member states that at least €800m will be needed to ensure the observatory’s core research remains intact. Speaking to Physics World, he pointed out that the current estimated total cost of €940m means that even increasing the funding to €800m would still lead to cuts. Things that might potentially be pared back, he says, include computing power as well as the low-frequency antennas, which would reduce the observatory’s resolution.
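For reference, a quick tally of the figures quoted in this article (all in millions of euros) shows how the headline numbers fit together; the arithmetic is only a reader's aid.

# All figures in million euros, as quoted in the article.
cost_cap  = 691   # cap imposed by member states in 2013
estimate  = 940   # current estimated total cost
core_need = 800   # minimum Ferrara says keeps the core science intact

print(f"shortfall against the cap:  {estimate - cost_cap}")   # the ~250m hole
print(f"extra funds being sought:   {core_need - cost_cap}")  # the >100m target
print(f"cuts still needed at 800m:  {estimate - core_need}")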

Tough questions

SKA director-general Philip Diamond says he is “working very, very hard” to reduce the funding gap. One avenue is trying to increase contributions from the countries that agreed earlier this year to set up an intergovernmental organisation to build the SKA – Australia, China, Italy, the Netherlands, Portugal, South Africa and the UK – and possibly also Canada, India and Sweden. He also hopes that new countries, such as Germany, France, Switzerland, Japan, South Korea and Spain, could be persuaded to join the project.

Diamond foresees the intergovernmental body, known as the SKA Observatory, existing as a legal entity by mid-2020, with governments then committing funding by November or December that year. He says that discussions with potential new member states “gives me confidence that we are considerably above” the current cost cap, although he acknowledges that none of these countries “are guaranteed yet”, adding that the existing countries “are all showing flexibility in considering looking for additional money”. Diamond admits, however, that they are investigating cuts as well as possibly delaying construction if the necessary funding is not available in a year’s time. Construction is currently envisaged to be complete by 2028, he adds.

According to Ferrara, any less than €800m for the SKA would make it “difficult to cover what would be considered transformative science right now”. As to whether it would be worth building the observatory at all if no new funding materializes, he replies that “that is a very tough question”. It would, he says, “mean building an instrument that will not deliver what it was supposed to do”.

Timeline: The Square Kilometre Array

2006 Southern Africa and Australia are shortlisted to host the Square Kilometre Array (SKA), beating off competition from Brazil and China. Due to be completed in 2020 at a cost of €1.5bn, the facility would comprise about 4000 dishes, each 10 m wide, spread over an area 3000 km across

2012 The SKA Organisation fails to pick a single site for the telescope and decides to split the project between Southern Africa and Australia. Philip Diamond is appointed SKA’s first permanent director-general replacing the Dutch astronomer Michiel van Haarlem, who had been interim SKA boss

2013 Germany becomes the 10th member of SKA, joining Australia, Canada, China, Italy, the Netherlands, New Zealand, South Africa, Sweden and the UK. SKA’s temporary headquarters at Jodrell Bank in the UK opens. SKA members propose a slimmed-down version of SKA known as SKA1. With a cost cap of €674m, it would consist of 250 dishes in Africa and about 250 000 antennas in Australia

2014 Germany announces it will pull out of SKA the following year

2015 Jodrell Bank beats off a bid by Padua in Italy to host SKA’s headquarters. India joins SKA

2017 Members scale back SKA again following a price hike of €150m, which involves reducing the number of African dishes to 130 and spreading them out over 120 km

2018 The first prototype dish for SKA is unveiled in China. Spain joins SKA

2019 Convention signed in Rome to create an intergovernmental body known as the SKA Observatory. The Max Planck Society in Germany joins SKA. New Zealand announces it will pull out of SKA in 2020

Sea ice snowscape is surprisingly complex

The distribution of snow on flat Arctic sea ice follows two distinct statistical patterns, suggesting that wind shapes the snowscape via two separate processes. A collaboration from Sweden, Canada, the UK and the US came to this conclusion after analysing correlations between snow thickness measurements over different length scales. The researchers propose that statistical patterns over distances of around 10 m represent dune formation, while interactions between snow dunes account for patterns at the 30–100 m scale. The results could help improve climate models, which do not fully account for the thickness and variability of snow cover on sea ice.

In the Arctic, fresh sea ice starts to form from around October. Before external stresses generate topographical features like cracks and pressure ridges, the ice is remarkably flat, and any unevenness in the snow that accumulates can only be due to the action of the wind. For this reason, first-year sea ice offers an ideal natural laboratory in which to test models of wind-blown snow distribution.

Woosok Moon, of Stockholm University and the Nordic Institute of Theoretical Physics, and colleagues found two sites where such conditions arose repeatedly over several years. At Dease Strait, near Cambridge Bay in northern Canada, they performed snow thickness measurements on first-year ice in the spring of 2014, 2016 and 2017. At Elson Lagoon on Alaska’s north coast, similar measurements were made in the spring of 2003 and 2006. The wind speed and direction were known at each site for the months preceding the measurements, so the researchers had a good record of the factors that determined how the snow accumulated there.

The team used nonlinear time-series analysis to characterize the statistical properties of the snow’s thickness over different length scales. As the distance between the measurements increased, they noticed a change in the statistical pattern that described the thickness fluctuations.

“We start by measuring the snow thickness in an area every 10 m, and then we do the same thing in another area. We find that the statistical characteristics of the two sets of measurements are very similar,” explains Moon. “But when we measure the thickness every 60 m instead, we see that this sample is different statistically from the 10-m sets.”

Usually, seeing such a change in the statistics of a system indicates that different phenomena govern the behaviour at each scale. Snow particles are transported by the wind in much the same way as sand grains are, and since the fluctuations in snow thickness at the 10-m length scale are similar to those seen in sand dunes, the researchers think that the same formation mechanism operates in both cases.

The cause of the statistics exhibited at longer measurement scales is more mysterious, but Moon and colleagues say that its noise characteristics mirror the self-organized criticality seen elsewhere in nature, such as in the distribution of different sizes of earthquakes along a geological fault line. This is the phenomenon whereby complex statistical patterns emerge from large numbers of simple interactions, which in this case are represented by the calving and merging of drifting snow dunes.
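The flavour of this scale-dependent analysis can be captured with a short numerical sketch: generate a synthetic transect containing dune-like undulations plus a slowly wandering background, then compare the statistics of thickness differences at 10 m and 60 m separations. The data and the method here are illustrative stand-ins, not the team's field measurements or their published analysis.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D snow-thickness transect (metres of snow vs metres of position).
x = np.arange(0.0, 5000.0, 1.0)                     # 5 km transect, 1 m spacing
dunes = 0.10 * np.sin(2.0 * np.pi * x / 12.0)       # ~10 m dune-like undulation
drift = np.cumsum(rng.normal(0.0, 0.002, x.size))   # slow large-scale wandering
thickness = 0.30 + dunes + drift + rng.normal(0.0, 0.01, x.size)

def increment_std(h, lag):
    """Standard deviation of thickness differences between points `lag` metres
    apart (with 1 m sample spacing, the lag in metres equals the index offset)."""
    dh = h[lag:] - h[:-lag]
    return dh.std()

for lag in (10, 60):
    print(f"lag {lag:3d} m: std of thickness differences = "
          f"{increment_std(thickness, lag) * 100:.1f} cm")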

The complexities of snow accumulation in the Arctic are more than a statistical curiosity. Depending on the season, the snow layer can either promote or prevent the melting of sea ice. In the winter, thick snow acts as a blanket, trapping the ocean’s heat below and preventing the growth of new sea ice. In the spring and summer, the snow’s bright white surface reflects sunlight, keeping the ice frozen for longer – at least until melt ponds start to form, at which point the snow’s albedo drops and thawing accelerates.

“For proper quantification of the above physics, the most important properties of the snow are its thickness and its temporal and spatial evolution,” says Moon. “Now we understand the role of the wind on these properties, we can move to more complicated topographies and use our knowledge to better understand the influence of rugged surfaces.”

Moon and colleagues reported their findings in Environmental Research Letters.

The past, present and future of computing in high-energy physics

One of the main computing tasks at CERN is to sort through huge numbers of particle collisions and identify the ones that are interesting. What impact has this “big data” had on high-energy physics?

I’d like to invert the question, because rather than talking about the impact of big data on high-energy physics, I think it’s more interesting to talk about the impact of high-energy physics on big data.

High-energy physicists started working with very large datasets in the 1990s. Out of all the scientific disciplines, our datasets were among the largest, and we had to develop our own solutions for handling them – firstly because there was nothing else, and secondly because CERN operates within a specific social, economic and political framework that encourages us to spread work around different member countries. This is only natural: we’re getting a lot of money for computing from national funding agencies, and they naturally privilege local investments.

So, high-energy physicists were doing big data before big data was a thing. But somehow, we failed to capitalize on it, because we didn’t communicate this well at the time. A similar thing happened with open-source software. We have a philosophy of openness and sharing at CERN. We could have invented open-source and popularized it. But we didn’t. Instead, when open-source started to become widespread, we said, “Oh, yeah, that’s interesting. This is what we have been doing for 20 years. Nice.”

The problem is that we have very little room to capitalize on our ideas beyond fundamental physics. Our people are working day and night on experimental physics. Everything else is just something we do because we want to do physics. This means that once we’ve done something, we don’t have time to develop it further. Sure, Tim Berners-Lee came up with the concept of the World Wide Web when he was at CERN, but the Web was taken up and developed by the rest of the world, and the same thing happened with big data. Compared to the amount of data Google and Facebook now have, we have very little indeed.

How does CERN openlab help to tackle the computing challenges faced at CERN?

We identify computing challenges that are of common interest and then set up joint research-and-development projects with leading ICT companies to tackle these.

In the past, CERN has taken a sound engineering-style approach to computing. We bought the computing we needed on the most convenient terms, and off we went. Nowadays, though, computing is evolving so quickly that we need to know what is coming down the pipeline. Evaluating new technologies after they are on the market is not good enough, so it is important to work closely with leading companies to understand how technologies are evolving and to help shape this process.

Another aspect of CERN openlab’s work involves working with other research communities to share technologies and techniques that may be of mutual benefit. For example, we are working with Unosat, the UN technology platform that deals with satellite imagery and which is hosted at CERN, to help estimate the population of refugee camps. This is a big challenge, and part of the problem is that it’s difficult – sometimes even dangerous – to count the actual number of people living in these camps. So, we are developing machine-learning algorithms that will count the number of tents in satellite photos of the camps, which refugee agencies can then use to estimate the number of people.

Carminati expects machine learning to play an increasingly important role in data analysis at CERN.

What are some of the technologies you see becoming more important in the future?

We’re already using machine learning and artificial intelligence (AI) across the board, for data classification, data analysis and simulations. With AI, you can make very subtle classifications of your data, which is of course a large part of what we do to find new particles and elaborate on new physics.

The high-energy physics community actually started looking at machine learning in the 1990s, but every time we started to do something, we saw the tremendous possibilities, and then we had to stop for lack of computing power. Computers were just not fast enough to do what we wanted to do. Now that computers are faster, we can explore deep learning using deep networks. But these networks are very slow to train, and this is something we are working on now.

What about quantum computers? What impact will they have on high-energy physics?

This is very hard to say, because it’s crystal-ball thinking. But I can tell you that it’s important to explore quantum computing, because in 10 years we will have a massive shortage of computing power.

High-energy physics is in a very funny situation. We have two theories: general relativity and quantum mechanics. General relativity explains the behaviour of stars and planets and the evolution of the universe, and it is the epitome of elegance. It’s a beautiful theory, and it works: you use it every day when you use GPS on your smartphone. Quantum mechanics, in contrast, is a very complex theory, and although it’s pretty successful – the fact that we found the Higgs boson is a testament to its success – there are a lot of unsolved questions that it doesn’t answer. The other problem is that quantum mechanics does not work with general relativity. When we try to unify them, the outcomes are really weird-looking, and the few predictions that we make are not borne out by reality. So we have to find something else.

But how do you find that “something else”? Usually, you find something that is not explained by present theories of physics, and that unexplained thing gives theorists hints about how to proceed. So we have to find something that contradicts our current view of the Standard Model. However, we know that this cannot be a big thing, because every observation we make is more or less confirmed by the model. Instead, we are looking for something subtle. The old game was to find a needle in the haystack of our data. The new game is to look in a stack of needles for a needle that is slightly different. This new game will involve an incredible amount of data, and incredible precision in processing it, so we are increasing the amount of data we take and increasing the quality of our detectors. But we will also need much more computing power to analyse these data, and we cannot expect our computing budget to increase by a factor of 100.

That means we have to find new sources of very fast computing. We don’t know what they will be, but quantum computing may be one of them. We are looking into it as a candidate to provide us with computing power in the future, and also because there is one really exciting thing that you can do with quantum computing that you cannot do nearly as well with normal computing, and that is to directly simulate a quantum system.

What is the particle-physics community doing to meet its more immediate computing challenges?

We are moving forward along several axes. One of them is to exploit current technology as well as possible. It used to be that when computers got faster, it was because they had a faster clock rate. That kind of increase in power was readily useable, because you could just park your old program on a new machine and it would run faster. More recently, though, computing power has gone up because of increases in the number of transistors on a chip, meaning that you can make more operations in parallel. But to take advantage of this power you need to rewrite your programs to exploit this parallelism, and that is not easy.

We are also exploring different computing architectures, such as graphics processing units (GPUs), to see how well they can fit into our computing environment. And of course, there is the work we are doing on novel technologies such as machine learning, which could really improve the speed of certain operations.

The biggest prize, though, would be quantum computing. Quantum computers would be useful across our entire workload. But we will have to develop new ways of thinking to exploit them, and this is why it is so important that CERN openlab is working with different companies in this area. Can we imagine software that is independent of the type of computing we are using? Will we have to write a different program for each type of quantum computer? Or will we develop algorithms that can be ported onto different quantum computers? For the moment, nobody knows. An extension of C++ or Python that could be ported to different quantum computers is, for the moment, science fiction. But we have to think creatively in this direction if we want to assess the capabilities and opportunities.

Copyright © 2025 by IOP Publishing Ltd and individual contributors