
IBA launches SMARTSCAN beam commissioning system

Commissioning a new linear accelerator is an intensive process, requiring a thousand or so beam measurements on a water phantom. This can take days or even weeks to complete, and the repetitive nature of this task makes it prone to user error.

To ease this procedure, IBA Dosimetry launched its SMARTSCAN beam commissioning system. Announced at this week’s AAPM Annual Meeting, SMARTSCAN guides the user through the entire linac commissioning workflow – from system setup to scanning of the entire data set – and automates repetitive tasks. As a result, beam data commissioning and accelerator quality assurance (QA) are consistently executed with optimal quality. SMARTSCAN is connected to myQA, IBA Dosimetry’s Integrated QA platform.

“IBA is launching a water phantom that automates the commissioning steps,” says IBA’s Ralf Schira. He notes that there are some crucial steps where the physicist still needs control, and that SMARTSCAN provides guidance through those procedures. “It’s not a black box, it’s more like a navigation system.”

Schira explains that the development was motivated by IBA’s discussions with physicists about what they find to be the most problematic parts of beam commissioning. As well as the process taking too long and being too user-intensive, many interviewees pointed out that they were not 100% confident that every scan was correct.

“SMARTSCAN addresses both problems, by making the process more efficient and providing the required beam quality data,” says Schira. “This gives the peace of mind that beam commissioning has been done perfectly.” This, in turn, provides the foundation for safe and accurate treatment of every patient.

To ensure optimal beam data quality, SMARTSCAN checks every single scan during the process, with suspicious measurements flagged immediately. SMARTSCAN also enables commissioning work to be completed in less time, allowing faster clinical implementation of new equipment. It does this by creating an optimal scan sequence, as well as by adapting the speed of detector motion.

“SMARTSCAN provides a great deal of intelligence and guidance,” Schira tells Physics World.

Monte Carlo accuracy

The AAPM meeting also saw IBA launch SciMoCa, a Monte Carlo-powered secondary dose check and plan verification software. Monte Carlo is generally accepted as the gold standard for dose calculation accuracy in treatment planning. Now, SciMoCa makes Monte Carlo accuracy available for secondary independent dose calculation and verification, allowing users to verify their treatment plans with an equally robust QA system.

“Introducing Monte Carlo plan QA seamlessly for all major linacs, TPS systems and treatment modalities introduces an additional level of QA accuracy in our industry,” says Jean-Marc Bothy, president of IBA Dosimetry GmbH. “At this year’s AAPM in Nashville we received exceptionally positive feedback from the medical physics community about SciMoCa’s unprecedented Monte Carlo workflow automation, calculation speed and proven accuracy”.

The FDA-cleared SciMoCa software is developed by Radialogica and Scientific RT. IBA has entered into a global distribution agreement with Radialogica.

Discovery could show how best to remediate cobalt pollution

A team from the UK and Spain has found that pH and the prevalence of organic matter in soil affect the binding of cobalt with minerals and organo-mineral complexes. This has implications for the clean-up of contaminated land.

Using synchrotron x-ray absorption spectroscopy, Gemma Woodward and colleagues identified a universal uptake mechanism for cobalt in soil and sediment.

The team showed that where the soil contains lots of organic matter, as is the case for peat, cobalt binds more easily to the surface of the soil complex. Although binding to the soil makes cobalt more likely to affect humans near a spill, it improves prospects for remediation that collects and treats soil.

If there is little organic matter in the soil, however, the cobalt remains mobile and is more likely to enter groundwater and rivers. Cobalt is highly soluble so a cobalt pollution event of this type could be challenging to mitigate.

The researchers found that cobalt behaviour also varies with pH. At low pH, cobalt forms a relatively weak electrostatic bond with the soil, known as outer-sphere complexation, whereas at high pH an inner-sphere bidentate complex forms via two stronger covalent bonds. For minerals containing iron, there was no bonding at low pH, indicating that in iron-rich soils cobalt contamination would remain mobile and potentially spread over a larger area.

“These findings mean that at pH 6.5 or lower cobalt will be mobile in deserts and alluvial sediments that contain little organic matter, whereas on peat, cobalt will be mobile only at very low pH,” says Woodward.

In natural environments a combination of organic matter and minerals is most likely and cobalt binding will probably be an amalgam of both conditions. “Artificial remediation can be very expensive, so where cobalt is mobile, supporting natural attenuation processes may be more appropriate and effective,” Woodward adds.

Cobalt is a heavy metal that enters the environment from metal production facilities, coal power plants or vehicle emissions. A radioactive form, cobalt-60, is a by-product of nuclear reactions and is found in the cooling waters of nuclear power plants. In small amounts, cobalt is beneficial to plants but high concentrations stop plants producing enough chlorophyll. The gamma radiation emitted by cobalt-60 is useful in radiotherapy and radiography, but cobalt may cause heart disease in humans.

Woodward and colleagues published their study in Geochimica et Cosmochimica Acta.

The riddle of ultrahigh-energy cosmic rays

Far, far away, something – somewhere – is creating particles with crazy amounts of energy. Whatever they are or wherever they’re from, these particles carry energies of anything between 10¹⁸ eV and 10²⁰ eV. Given that the top particle energy at CERN’s Large Hadron Collider is about 10¹³ eV, some of these particles are a million times more energetic than anything we can fashion at the most powerful particle accelerator on the planet. Quite simply, they’re the most energetic particles ever seen in nature.

Known as ultrahigh-energy cosmic rays (UHECRs), these particles were discovered in 1962. They’re the super-energetic brethren of common-or-garden cosmic rays, which were first spotted by Austrian scientist Victor Hess during a famous series of daring hot-air balloon flights 50 years earlier. But while we know a great deal about regular cosmic rays, what UHECRs are made from, where in the heavens they come from and what accelerates them remain a mystery.

Fortunately, some UHECRs occasionally rain down on planet Earth. When one such ray enters the atmosphere, it collides with air molecules, which in turn knock into other particles, resulting in a cascade effect all the way to the ground. The result is a shower of particles spread over an area 5 km wide at the Earth’s surface. And thanks to the Pierre Auger Observatory in Argentina and the Telescope Array in Utah, we can detect these showers and extract information about the cosmic rays themselves.

Pierre Auger Observatory

Both facilities consist of an array of surface detectors – in the case of Auger, 1660 large barrels, each with over 12,000 litres of water, spread across 3000 km². When a particle from a shower flies into a detector, it creates an electromagnetic shockwave that’s picked up by light-detecting tubes mounted on the detector’s tanks. Researchers can then combine this information with data from 27 telescopes dotted throughout the array that collect the fluorescence light created when the cascade excites nitrogen in the air.

This combined technique yields an accurate measure of the flux, arrival direction and energy of the UHECRs. And last year, as a result of this work, Pierre Auger researchers unequivocally showed that the most powerful cosmic rays come from outside the Milky Way, not from within our galaxy (Science 357 1266). Considering we’ve known about cosmic rays for over a century, this breakthrough may seem underwhelming and a bit overdue. In reality, though, it reflects the gargantuan challenge researchers face. Cosmic rays with an energy above 10²⁰ eV land – on average – just once per square kilometre on Earth per century.
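
To get a feel for what that flux means for a detector the size of Auger’s 3000 km² array, here is a rough back-of-the-envelope sketch in Python. The one-event-per-square-kilometre-per-century figure comes from the text; the calculation ignores duty cycle and detection efficiency.

    # Back-of-the-envelope: how often should a 3000 km^2 array catch a >10^20 eV cosmic ray?
    flux_per_km2_per_century = 1.0     # quoted above: ~1 event per km^2 per century
    array_area_km2 = 3000              # Pierre Auger surface array
    events_per_century = flux_per_km2_per_century * array_area_km2
    events_per_year = events_per_century / 100
    print(f"~{events_per_year:.0f} events per year above 10^20 eV")   # ~30 per year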


What are UHECRs made of?

Data gathered over decades show that low-energy cosmic rays – which are mostly protons, nuclei and electrons – arrive from all directions in the sky. Scientists attribute this spread to the rays being deflected in all directions by the magnetic fields that permeate our galaxy, which rules out all hope of ever zeroing in on their sources directly. UHECRs are another matter. They power through galactic magnetic fields so well that they are deflected by only a few degrees. “We can use them as astronomical messengers to find the sources directly,” explains Ralph Engel, spokesperson for the Pierre Auger Observatory.

During a UHECR air shower, the cascade effect involves more and more particles as the shower scythes through the atmosphere. However, energy is dissipated at each interaction, which means that the number of shower particles eventually starts to decline, with only a small fraction reaching the ground. But by knowing how the air shower spreads in the atmosphere, Auger and Telescope Array researchers can simulate the particle interactions to deduce where in the atmosphere the shower was at its peak. And by combining the shower peak value with the measured shower energy, they can infer the mass – and thus the identity – of the UHECRs.

When Auger scientists applied this method, they expected the highest-energy UHECRs to be simply made of protons. Instead, they found something strange. As the energy of the UHECRs increased from 10¹⁸ eV to 10²⁰ eV, so did the mass. “We start with a lot of protons around 10¹⁹ eV,” explains Engel. “Then all of a sudden, there’s a drastic change to helium [nuclei] and then elements in the range of carbon and nitrogen.”

The increase in the UHECRs’ mass as the rays get more energetic is a problem for both experimentalists and theorists. What’s tricky for Auger scientists is that heavier UHECRs get deflected more by the Milky Way’s magnetic fields, which makes it even more challenging to work out their source. For theorists like Vasiliki Pavlidou of the University of Crete, on the other hand, the problem is more fundamental: it could challenge our entire understanding of high-energy physics. “If the primary particles at the highest energies are indeed getting heavier, there are a couple of uncomfortable coincidences that we have to accept,” she says.

Cascade of UHECRs

According to conventional wisdom, cosmic rays above a certain energy rapidly lose energy as they interact with photons in the cosmic microwave background, which means that the energy of UHECRs seen on Earth ought to be limited to about 10²⁰ eV. However, if the observed particles are getting heavier with energy, then the astrophysical process that’s accelerating the cosmic rays in the first place – whatever it is – must be running at close to its top energy. (The lighter particles will then be simply too puny to reach those high energies.) The 10²⁰ eV UHECR energy limit is therefore governed by two completely unrelated processes: how the particles are accelerated at their extragalactic source and how they lose energy as they travel through intergalactic space. That’s the first odd coincidence.

The second coincidence is to do with cosmic rays from within our galaxy and those that come from elsewhere. It seems that galactic cosmic rays stop being observed at 3 × 10¹⁸ eV – exactly the same energy at which extragalactic cosmic rays start getting heavier with energy. That’s strange given that galactic and extragalactic cosmic rays come from very different sources (even if we still don’t know where the latter originate).

Given that these two coincidences depend on processes and properties that are not even vaguely related, why are they happening at the same energy scales? One reason could be that these coincidences simply do not exist. That would certainly be the case if extragalactic cosmic rays don’t get heavier with energy but are just always protons; the coincidences would then just fade away. Indeed, Pavlidou and her Crete colleague Theodore Tomaras reckon that UHECRs could be mainly protons, the only snag being that there would have to be some new, undiscovered physical phenomenon that affects the air showers above a certain energy.

That may sound outlandish, but there’s good reason not to reject the idea outright. Physicists model how the particles in the air shower interact based on their understanding of the Standard Model of particle physics, but the model has never been tested (even at the LHC) at such high energies. Moreover, these simulations fall far short of explaining all the observed air-shower properties. So you have two unpalatable choices. Either cosmic rays are protons and new physics is making them appear heavy. Or UHECRs are heavy particles and the Standard Model needs some serious tweaking.

But if UHECRs are protons, figuring out how protons could be masquerading as heavier particles will require some alternative thinking. One exciting possibility is that the proton’s initial collision produces a mini black hole, the existence of which is predicted by theories with large extra dimensions. “For the right number of such dimensions they can actually have the desired mass,” explains Tomaras. “Mini black holes would decay instantaneously to a large number of hadrons sharing the black hole energy, making the proton primary ‘look’ heavy.”

Another alternative would be to invoke the existence of as-yet-undiscovered phases of quantum chromodynamics (QCD) – the theory that describes how quarks are bound inside protons, neutrons and other hadrons. Tomaras admits, however, that these are “exotic” scenarios. “We have not yet discovered large extra dimensions,” he says, “and we have reasons to suspect that the production cross section of mini black holes will most likely be too small to serve our purpose and, furthermore, we do not have a robust quantitative understanding of the phases of QCD yet.” However, if proof surfaces of UHECRs being protons, Tomaras believes it is “almost inevitable” that such exotic phenomena occur in nature.

What accelerates them?

Leaving aside the lack of certainty surrounding what UHECRs are, the question that really matters is: what makes them? Here, the picture is even more muddled. Until recently, some physicists were exploring exotic ideas known as “top-down models” that go beyond the Standard Model. The idea is that high-energy, unknown objects such as super-heavy dark matter – with masses 10¹² times larger than the proton mass – would decay down to UHECR particles. The catch with these models is that they suggest cosmic rays should be dominated by photons and neutrinos, whereas data from the Pierre Auger Observatory, Telescope Array and elsewhere suggest mostly charged particles. “Nobody tries to build exotic models of the classic top-down set-up anymore,” explains Engel.

Although the exotic dark-matter scenario hasn’t been totally ruled out as the source of UHECRs, researchers are more seriously contemplating whether extremely violent astrophysical events might instead be responsible for such high energies. Pulsars, gamma-ray bursts, jets from active galactic nuclei, starburst galaxies and others have been proposed, with popular opinion swaying between them.

Roberto Aloisio from the Gran Sasso Science Institute in Italy believes that, at face value, Auger’s results – suggesting heavier UHECR particles at the highest energies – are an important development. “It is easier to accelerate heavy nuclei than protons because the acceleration mechanisms always feel the particles’ electric charge – and nuclei heavier than protons always have larger electric charge,” he explains. As a result, Aloisio suggests the Auger results point towards pulsars as the source of UHECRs, since pulsars produce heavier elements and could drive these particles to the required energy (Prog. Theor. Exp. Phys. 2017 12A102).

Currently, however, there is one candidate that is ahead of all others as the source of UHECRs. “If I had to bet I would definitely put all my money into starburst galaxies,” says Luis Anchordoqui of the City University of New York, who is a member of the 500-strong Auger team. Starburst galaxies are the most luminous in the universe, forming stars at a furious rate. As Anchordoqui and colleagues first hypothesized in 1999, nearby starburst galaxies accelerate nuclei to ultrahigh energies through a collective effort, combining numerous supernova explosions in the central dense region of the galaxy to create a galactic-scale “superwind” of outflowing gas.

As this superwind expands, it becomes less dense, slowing flow down to subsonic speed – in effect, halting the progress of the superwind itself. “This produces a gigantic shock wave, similar to the one produced after the explosion of a nuclear bomb, but much more powerful,” says Anchordoqui.

Crucially, this process of diffusive shock acceleration, or DSA, can whip up gas particles to near the speed of light. Particles gain energy incrementally by being confined by magnetic fields, and crossing and recrossing the shock front. Going round and round the astrophysical accelerator, these little energy boosts build up until the particle gains enough energy to escape and fly out into space. Anchordoqui recently revisited the work in the context of Auger’s latest findings (Phys. Rev. D 97 063010).

DSA, which does not only occur in starburst galaxies, is often invoked to explain proposed particle acceleration in gamma-ray bursts, active galactic nuclei and other UHECR source candidates. Yet in early 2018, Kohta Murase and his collaborators from Penn State University showed that a different acceleration mechanism could be at play (Phys. Rev. D 97 023026).

In their model, ordinary cosmic rays existing in a particular galaxy are given a huge energy boost by powerful jets of active galactic nuclei, through a mechanism known as discrete shear acceleration. It is a complex process involving the interaction between the particle, local disturbances in the magnetic field and the velocity difference – or “shear” – of different parts of the jet’s flow and ambient cocoon. But in the end the effect is similar to DSA. “The cosmic rays gain energy via scattering back and forth around the shear boundary,” explains Murase, after which they escape through the radio lobes that are often found at the end of the jets.

Even more recently, Murase and Ke Fang from the University of Maryland (Nature Phys. 14 396) revisited an idea that powerful black hole jets in aggregates of galaxies could be powering UHECRs. To begin, they compared their model to Auger’s observed UHECR flux and composition data, revealing a good match with experimental observations. But most intriguingly, they showed that by detailing how UHECRs, neutrinos and gamma rays might all be produced by active galactic nuclei, they could explain the data collected by the IceCube Neutrino Observatory in Antarctica, Fermi Gamma-ray Space Telescope and Auger simultaneously. “The most beautiful possibility is that all three messenger particles originate from the same class of sources,” Murase adds.

Where do they come from?

If we knew where in the sky UHECRs come from, the task of choosing which source produced them would be a lot easier. But there is no such thing as “easy” in cosmic-ray science. Undaunted, Auger and Telescope Array scientists use catalogues of candidate objects that could accelerate UHECRs and then try to match them with the arrival directions of the cosmic rays they observe. As more and more data arrive, each facility has identified an area from which a large proportion of these rays seems to originate.

The starburst galaxy M82

In the case of Auger, this area contains a number of starburst galaxies, but also Centaurus A – the nearest giant galaxy to the Milky Way that hosts an active galactic nucleus. As for the Telescope Array, its “hot spot”, which lies just beneath the handle of the Ursa Major constellation, is an even clearer indication of an arrival direction, with a quarter of detected UHECR signals coming from a 40° circle that makes up only 6% of the sky. But although the starburst galaxy M82 resides in the hot spot, about 12 million light-years away in Ursa Major, various other types of objects in that patch of sky could also be a UHECR birthplace.

“The correlation is in the direction of M82 if you want to say it’s starburst galaxies, or it’s the direction of Centaurus A, if you want it to be active galactic nuclei,” says Engel. “Although the data correlate better with starburst galaxies, it doesn’t mean that they will be the sources.”

Just as we don’t know what UHECRs are or what accelerates them, so where in the sky they originate is shrouded from view too. However, it may not be long before we find the answer. Upgrades to the Pierre Auger Observatory and the Telescope Array are in progress, while researchers are exploring new facilities, such as the Probe of Extreme Multi-Messenger Astrophysics (POEMMA) satellites.

The mystery of the mass and origin of these enigmatic particles could, within a decade, finally be laid bare.

 

Optimizing HIFU for breast cancer treatment

Advances in imaging technology and screening programmes have enabled earlier detection of breast cancer, allowing the use of non-invasive treatments such as high-intensity focused ultrasound (HIFU). HIFU can ablate small regions without damaging adjacent tissues and is also repeatable.

During HIFU, ultrasound emitted from a transducer outside the body propagates through tissue to focus on the target. This focused acoustic energy raises the temperature in the target to necrosis-inducing levels within a few seconds. HIFU has been used previously for breast cancer treatment, but its success rate varies widely, from 20 to 100%, depending upon factors including the system type, imaging technique, ablation protocol and patient selection.

To optimize breast cancer HIFU, a Japanese collaboration has used numerical simulations to determine the relationship between breast tissue structure and focal error, which is caused by reflection and refraction of the ultrasound wave due to the acoustic inhomogeneity of the body. Improving ultrasound focusing should result in more efficient and safer treatments (J. Therapeutic Ultrasound 10.1186/s40349-018-0111-9).

HIFU modelling

The research team – headed up at Nihon University and the University of Tokyo – used MRI data from 12 patients to create digital breast phantoms comprising skin, fat and fibroglandular tissue. These phantoms formed the input data for the simulator ZZ-HIFU, which models ultrasound propagation through tissues with varying acoustic properties.

The researchers examined the impact of different target positions and transducer arrangements on the focal error. To evaluate focal error, they determined the focusing ratio for various breast phantoms and set-ups. They simulated ultrasound wave propagation from a 256-channel phased-array transducer through the skin to a target, with the acoustic axis passing either through breast tissue only (A) or through the nipple (B).

The focal shape – a high-pressure-amplitude region around the focus – was distorted as the ultrasound waves propagated through the breast tissue. Arrangement A led to less distortion, with a focusing ratio of 0.093, compared with 0.094 for B. As these values are considerably lower than the value obtained without breast tissue (0.243), the researchers normalized the results to this reference. The normalized focusing ratios were 0.384 and 0.386 for A and B, respectively.
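
As a quick illustration of that normalization step, the short Python sketch below divides each simulated focusing ratio by the tissue-free reference value of 0.243 quoted above; the small differences from the published 0.384 and 0.386 presumably reflect rounding of the raw simulation outputs.

    # Normalize the in-tissue focusing ratios to the tissue-free reference value.
    reference_ratio = 0.243                # focusing ratio with no breast tissue present
    focusing_ratios = {"A": 0.093,         # acoustic axis through breast tissue only
                       "B": 0.094}         # acoustic axis through the nipple
    for arrangement, ratio in focusing_ratios.items():
        print(arrangement, round(ratio / reference_ratio, 3))
    # Prints roughly 0.383 for A and 0.387 for B, close to the study's quoted 0.384 and 0.386.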

They then applied a focus control technique, based on a time-reversal method, to the simulation of arrangement B. This process provided an ultrasound wave correctly focused on the target with a clearer focal shape, and considerably improved the normalized focusing ratio, from 0.386 to 0.543.

Using a different breast phantom, the researchers compared the relationship between fibroglandular structures and focal shapes in the two target arrangements. Again, the focal shape for A was much clearer than that for B. In the former, no fibroglandular tissue was observed between the skin and focus, whereas in the latter, the focus was deep in the fibroglandular tissue and highly distorted. The normalized focusing ratios in these two cases were 0.901 and 0.465, respectively.

The team also examined the impact of the brightness threshold used to segment fat and fibroglandular tissue. They found that higher thresholds (indicating more fatty tissue) led to lower focusing ratios and increased focal distortion.

Optimizing treatments

The simulations showed that focal error was caused by the complex distribution of fibroglandular tissue and was dependent on the target position and transducer arrangement. As such, the authors suggest arranging the HIFU transducer such that the ultrasound wave avoids fibroglandular tissue.

If a tumour is located deep in fibroglandular tissue, focus control using the phased-array transducer is required for safe and efficient treatment. The researchers showed that focusing ratios can be increased using focus control based on the time-reversal method. They point out that, as the focusing ratio increases with a decrease in local acoustic inhomogeneity, it may be used as an indicator to reduce the HIFU focal error depending on the breast structure.

“The obtained results demonstrated that the focal error observed during the breast cancer HIFU treatment is highly dependent on the structure of fibroglandular tissue,” the authors conclude. “The optimal arrangement of the transducer to the target can be obtained by minimizing the local acoustic inhomogeneity before the breast cancer HIFU treatment.”

Steamy study shows how graphene layer affects droplets

Body-temperature control, self-assembly, vapour-mediated sensing, energy harvesting and emulsification are just some of the processes that hinge on evaporation. Little wonder then that the mechanisms underlying water evaporation and related surface effects, such as wetting, have attracted intense research attention.

Scientists have been keen to investigate the effect of graphene on the evaporation process, since the interaction between the droplet and the substrate is known to play a critical role. So far, however, the results have been inconclusive, with some studies suggesting that graphene increases the wetting angle, while others indicate that graphene has no effect at all – in other words, it is wetting transparent.

A new study by researchers at the Chinese Academy of Sciences and the Collaborative Innovation Center of Quantum Matter in Beijing, China, has now investigated the effect of a graphene layer on wetting and evaporation. Yongfeng Huang, Jun Lu and Sheng Meng used a single layer of high-quality graphene produced through chemical vapour deposition, and tested its effect on the behaviour of water on four hydrophilic (water-attracting) surfaces plus one that is hydrophobic. When they added the layer of graphene to the surfaces, they found that the contact angle increased for hydrophilic surfaces and decreased for hydrophobic surfaces.

The contact angle is an important parameter, since droplets evaporate according to two main regimes: one where the diameter of the droplet in contact with the surface remains constant while the contact angle decreases, and one where the contact angle of the droplet with the surface remains constant but the contact diameter decreases. Any change in wetting angle has a knock-on effect on the contact diameter of a droplet, so the increase in contact angle observed for the hydrophilic surfaces should cause a droplet of the same volume to have a smaller contact diameter.
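
To see why a larger contact angle implies a smaller contact diameter at fixed volume, here is a short Python sketch based on simple spherical-cap geometry; the droplet volume and contact angles are hypothetical values chosen for illustration, not figures from the study.

    import math

    def contact_diameter(volume_m3, theta_deg):
        # Spherical cap: V = (pi * a^3 / 3) * (2 - 3*cos(t) + cos(t)^3) / sin(t)^3,
        # where a is the contact (base) radius and t the contact angle.
        t = math.radians(theta_deg)
        shape = (2 - 3 * math.cos(t) + math.cos(t) ** 3) / math.sin(t) ** 3
        return 2 * (3 * volume_m3 / (math.pi * shape)) ** (1 / 3)

    volume = 2e-9  # m^3, i.e. a 2 microlitre droplet (hypothetical size)
    for theta_deg in (30, 50, 70, 90):
        print(f"{theta_deg} deg -> contact diameter {contact_diameter(volume, theta_deg) * 1e3:.2f} mm")
    # The contact diameter shrinks steadily as the contact angle grows.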

The researchers monitored the contact angle, contact diameter, and mass of water droplets as they evaporated from surfaces with and without a layer of graphene. The measurements show that the rate of evaporation increases for the hydrophilic surfaces by around 20% and decreases for the hydrophobic surface, as would be expected from the changes in contact angle.

However, the mean evaporation rate per contact length/diameter remains unchanged. For all substrates investigated in the study, the difference in evaporation rate per contact line is less than 5%.

Molecular dynamics simulations enabled the researchers to identify how evaporation initiates where a water molecule is in contact with the surface. “Since the graphene does not alter the binding energy of a single water molecule, it has negligible effects on evaporation per contact line,” says Huang. The results could be useful for applications that aim to control evaporation, such as heat transfer, printing, and self-assembly.

Full details are reported in 2D Materials.

Nanoparticles set spinning record

If a jet engine spins faster than about 1000 Hz, the forces on its outer edge may rip it apart. But two research teams – one in Switzerland, the other in the US and China – have independently made nanoparticles rotate at over a billion hertz, achieving the fastest rotations ever produced.

Such ultrafast nanorotors could be useful for testing material properties, as well as verifying theories of frictional damping on the nanoscale. The dumbbell-shaped nanoparticles of the US-Chinese group can also form ultrasensitive torsion balances – the force sensors used to measure gravity in the 18th century. They could therefore potentially detect quantum effects in gravitation and other tiny force effects.

Both teams trapped silica nanoparticles around 100 nm in size using optical tweezers, a well-established technique that exploits the force of focused laser beams to manipulate everything from atoms to cells. By rotating the polarization of the beams, the researchers turned the optical tweezers into “optical spanners”.

“This had been done about twenty years ago in a liquid, but what sets the ultimate spinning speed the particle can reach is how much it’s damped,” explains René Reimann of ETH Zurich. “If you’re sitting in a liquid all these molecules are braking it by friction, so it doesn’t spin super-fast.” Adding extra laser power to overcome this friction risks damaging the particle through overheating. Researchers have more recently tried spinning particles in vacuum, but their rotation rates have not exceeded 10 MHz.


In the new research, both groups used extremely high vacuum, very pure silica nanoparticles, and trapping lasers with a wavelength of precisely 1550 nm – the transparency window for silica. “We were trying to minimize the absorption as much as possible,” explains Tongcang Li of Purdue University in Indiana, a senior member of the US–Chinese group. By minimizing heating in all these ways, both groups achieved rotation rates over 1 GHz.

Nanoparticles minimize spinning stress

One reason the particles can spin so fast without flying apart is their nanoscale size: at any given rotation rate, their edges move more slowly than those of a larger object and so experience less stress. “If you do the tests with a bigger probe, you’re typically limited by defects like scratches and tiny cracks on the surface that fly apart long before you reach the ultimate limit set by the atomic bonds,” Reimann explains. “If you move to a much smaller particle, it’s much simpler to prepare a probe that is defect free.” Moreover, he says, theoretically unforeseen effects may cause matter to behave differently at the nanoscale. “It’s an interesting fundamental question,” he says.
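
As a rough order-of-magnitude check on that argument, the Python sketch below takes the roughly 100 nm particle size and gigahertz rotation rate quoted in the article and estimates the rim speed and a characteristic rotational stress; the silica density is an assumed textbook value and the stress formula is only a scaling estimate, not the teams’ own analysis.

    import math

    radius = 50e-9      # m, half of the ~100 nm particle size quoted in the article
    freq = 1e9          # Hz, ~1 GHz rotation rate
    density = 2200.0    # kg/m^3, typical fused-silica density (assumed)

    omega = 2 * math.pi * freq
    rim_speed = omega * radius                 # ~310 m/s
    stress_scale = density * rim_speed ** 2    # ~2e8 Pa, a rough hoop-stress scale
    print(f"rim speed ~{rim_speed:.0f} m/s, stress scale ~{stress_scale:.1e} Pa")
    # A few hundred megapascals is well below the several-gigapascal strength of
    # defect-free silica, which is why such tiny particles survive GHz rotation.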

Reimann and colleagues have now acquired a detector capable of measuring rotation rates of up to 20 GHz. In theory, this should allow them to spin particles fast enough to rip apart the atomic bonds. “It would be very interesting to see at which speed these things explode, and whether it’s always the same, or whether it varies from particle to particle,” he says.

In principle – at sufficiently high rotation rates and sufficiently low pressures – it may also be possible to measure so-called vacuum friction. This is a theoretically predicted braking force arising from virtual particle–antiparticle pairs appearing momentarily from fluctuations in the quantum vacuum. However, Reimann believes this is some way from being experimentally tested.

Torsion in the balance

While the ETH Zurich researchers used spherical nanoparticles, the US–Chinese group produced nano-sized dumbbells by joining together two spherical nanoparticles. In linearly polarized light, the long axis of these nano-dumbbells aligned with the light’s polarization axis and simply wobbled because of collisions with the remaining air molecules. The researchers calculate that their system forms a torsion balance similar to that used by Henry Cavendish to measure the strength of gravity in 1798, but nearly 20 orders of magnitude more sensitive. It is even, they calculate, around 100 times more sensitive than state-of-the-art torsion balances today.

This could have multiple uses. For example, theoreticians predicted over 40 years ago that anisotropy in the quantum vacuum between birefringent materials should lead to a tiny rotational force called the Casimir torque, but this has never been observed experimentally: “Our calculations show the sensitivity of our system should be enough to detect the Casimir torque,” explains Li. Others have even suggested similar devices could potentially detect quantum effects in gravitation.

Andrew Geraci of Northwestern University in Illinois, who was not involved in either paper, is cautiously impressed. “Some of the technical details are not reported at this point in a very clear way, but I wouldn’t consider that a major shortcoming – just a sign that this is a relatively new area and people haven’t had time to tease out all these details yet,” he says. “There are a lot of things one can imagine doing where systems like this, where you have really good control over either torsion or rotation, would be really useful – both in terms of sensing and on the fundamental physics side.”

The papers by the Swiss and the US–Chinese teams are published in Physical Review Letters.

Precision multi-lesion phantom sharpens stereotactic radiosurgery

Stereotactic radiosurgery (SRS) treats small tumours in the brain by delivering high doses of radiation to the target area with sub-millimetre precision, preserving the surrounding healthy tissue. Initially, the treatment was focused on patients with a small number of lesions. But interest in the approach has broadened as clinicians have discovered that survival times and benefits persist even for more complex cases.

Treating multiple brain metastases simultaneously means fewer patient trips to the clinic, and minimizes the amount of time that patients need to be immobilized while treatment takes place. More productive use of hospital resources also allows more people to benefit.

But the challenge comes in finding a more efficient way of assessing the treatment plan, which already poses a number of technical hurdles. “Many of those lesions are very small targets,” points out Jacqueline Maurer, a medical physicist based in the US.

The delivered dose can be difficult not just to model, but also to measure. In the most common detector arrays, the spacing between individual sensing elements is too coarse. There are time constraints to consider too – for example, using a single device to evaluate 10 targets one by one requires the treatment plan to be delivered 10 times, which is impractical.

The solution is a precision brain phantom that features as many as 29 parallel imaging planes, each one loaded with radiation-sensitive film spaced at 5 mm increments. “You can deliver the plan a single time and get high-resolution absolute dose measurements by arranging the film so that it bisects each of the lesions,” Maurer explains.

Customer-led design

Unable to realize a set-up using existing materials, Maurer sketched out her idea to CIRS, a US manufacturer of tissue-equivalent phantoms and simulators for medical imaging, radiation therapy and procedural training. “I thought this geometry might work, but there wasn’t a product on the market and so I approached Vladimir, one of the engineers at CIRS,” Maurer remembers.

Back in the factory, the CIRS team worked up several prototypes and added a range of useful features. The phantom – dubbed model 037 – measures 150 mm (W) × 190 mm (H) × 170 mm (L) to cover variations in brain anatomy, and weighs 5 kg. Constructed from brain-equivalent epoxy resin, the unit is designed to exhibit a linear attenuation that’s within 1% of real tissue from 50 keV to 15 MeV.

“We wanted to make multi-lesion treatment available to our patients, but at the same time we didn’t want to risk inaccuracies in the procedure,” says Maurer. “The phantom let us do it confidently.” This includes running a sequence of patient quality assurance (QA) and characterization tests to ensure that the radiation is being delivered as expected.

End-to-end testing

In the clinic, Maurer and her colleagues were able to take the QA device through each part of the process that the patient goes through. “You can put the phantom on the CT scanner, take an image, and send the information to your treatment planning system,” she explains. “And from there you can carry out the QA steps, which include imaging the phantom on the linear accelerator (linac) and checking the film to see how well you did.”

SRS treatments can involve very steep dose gradients, which puts extra pressure on the team to make sure that everything is positioned correctly. “You could have a 40% dose error for just being 1 mm off target,” Maurer cautions.

The precision with which CIRS is able to construct the product is a key part of the SRS multi-lesion brain QA phantom’s success. When loaded with sheets of radiation-sensitive film – also available from the firm – the layers in the device are spaced exactly 5 mm apart, according to the specification.

M037 PHANTOM

Holding the phantom together are four threaded bolts, positioned one in each corner, but that’s not their only feature. Because the bolts are made of materials with different Hounsfield units – a measure of linear attenuation – the combination provides X-ray visible reference structures that can be used for positioning by the linac’s onboard imaging software.

“Automatic image registration algorithms tell you exactly what your shifts and rotations need to be to get your phantom in precisely the right spot,” says Maurer. “And if you agree with the match then the alignment software will automatically send those shifts and rotations to your machine, which makes those hairline adjustments for you.”

Just like during treatment, the phantom can be positioned complete with couch kicks (table rotations) so that the geometry is the same as the patient geometry. “What you see on the film is an actual representation of the dose to the patient,” Maurer concludes.

For full product specifications, visit www.cirsinc.com

Grasslands may be more reliable carbon sinks than forests in California

Discover why grasslands may be more reliable carbon sinks than forests in California in this video abstract from Pawlok Dass, Benjamin Houlton, Yingping Wang and David Warlind published in Environmental Research Letters (ERL). “California forests appear unable to cope with unmitigated global changes in the climate, switching from substantial carbon sinks to carbon sources by at least the mid-21st century,” writes the team. ERL comes to you from Physics World parent IOP Publishing.

Video courtesy CC-BY 3.0, Pawlok Dass, Benjamin Houlton, Yingping Wang and David Warlind 2018 Grasslands may be more reliable carbon sinks than forests in California Environ. Res. Lett. 13 074027 10.1088/1748-9326/aacb39

Meeting Paris Agreement requires asset stranding

Discover why meeting the Paris Agreement on climate change requires asset stranding in this video abstract in Environmental Research Letters (ERL) from Alexander Pfeiffer, Cameron Hepburn, Adrien Vogt-Schilb and Ben Caldecott. “Our results can help companies and investors re-assess their investments in fossil-fuel power plants, and policymakers strengthen their policies to avoid further carbon lock-in,” write the researchers in ERL. ERL comes to you from Physics World parent IOP Publishing.

Video courtesy CC-BY 3.0, Alexander Pfeiffer, Cameron Hepburn, Adrien Vogt-Schilb and Ben Caldecott 2018 Committed emissions from existing and planned power plants and asset stranding required to meet the Paris Agreement Environ. Res. Lett. 13 054019 10.1088/1748-9326/aabc5f

 

Quantifying changes in carbon stocks and forest structure from Amazon degradation

Discover more about quantifying long-term changes in carbon stocks and forest structure from Amazon forest degradation in this video abstract published in Environmental Research Letters (ERL) by Danielle I Rappaport, Douglas C Morton, Marcos Longo, Michael Keller, Ralph Dubayah and Maiza Nara dos-Santos. The team combined annual time series of Landsat imagery and high-density airborne lidar data to characterize the variability, magnitude, and persistence of Amazon forest degradation impacts on aboveground carbon density and canopy structure. ERL comes to you from Physics World parent IOP Publishing.

Video courtesy CC-BY 3.0, Danielle I Rappaport, Douglas C Morton, Marcos Longo, Michael Keller, Ralph Dubayah and Maiza Nara dos-Santos 2018 Quantifying long-term changes in carbon stocks and forest structure from Amazon forest degradation Environ. Res. Lett. 13 065013 10.1088/1748-9326/aac331
