
The muon’s theory-defying magnetism is confirmed by new experiment

A long-standing discrepancy between the predicted and measured values of the muon’s magnetic moment has been confirmed by new measurements from an experiment at Fermilab in the US. The 200-strong Muon g–2 collaboration has published a result consistent with data collected two decades ago by an experiment bearing the same name at the Brookhaven National Laboratory, also in the US. This pushes the disparity between the experimental value and that predicted by the Standard Model of particle physics up to 4.2σ, suggesting that physicists could be close to discovering new fundamental forces or particles.

The muon, like its lighter and longer-lived cousin the electron, has a magnetic moment due to its intrinsic angular momentum or spin. According to basic quantum theory, a quantity known as the “g-factor” that links the magnetic moment with the spin should be equal to 2. But corrections added to more advanced theory owing to the effects of short-lived virtual particles increase g by about 0.1%. It is this small difference – expressed as the “anomalous g-factor”, a = (g – 2)/2 – that is of interest because it is sensitive to virtual particles both known and unknown.

In 1997–2001, the Brookhaven collaboration measured this quantity using a 15 m-diameter storage ring fitted with superconducting magnets that provide a vertical 1.45 T magnetic field. The researchers injected muons into the ring with their spins polarized so that initially the spin axes aligned with the particles’ forward direction. Detectors positioned around the ring then measured the energy and direction of the positrons generated by the muons’ decay.

Spin precession

Were there no anomalous moment, the magnetic field would cause the muon spins to precess such that their axes remain continuously aligned along the muons’ direction of travel. But the anomaly causes the rate of precession to slightly outstrip the muons’ orbital motion so that for every 29 trips around the ring the spin axes undergo about 30 complete rotations. Because the positrons have more energy on average when the spin aligns in a forward direction, the intensity of the most energetic positrons registered by the detectors varies cyclically – dropping to a minimum after about 14.5 revolutions and then rising back up to a maximum. It is this frequency – the number of such cycles per second – that reveals the precise value of a.
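
As a rough cross-check of those numbers (a back-of-the-envelope sketch using standard textbook relations; the "magic" Lorentz factor of about 29.3 used in these storage rings is an assumption, not a figure quoted in the article), the ratio of the anomalous spin-precession frequency to the orbital frequency is simply a multiplied by the Lorentz factor:

```python
# Rough consistency check, not a calculation from the article: the standard
# expressions give omega_anomalous / omega_cyclotron = a * gamma, i.e. the
# fraction of an extra spin rotation gained on each orbit.
a = 0.00116592       # anomalous moment (approximate value, see below)
gamma = 29.3         # assumed Lorentz factor at the "magic" muon momentum

extra_turns_per_orbit = a * gamma
print(f"~{1 / extra_turns_per_orbit:.0f} orbits per extra spin rotation")
# -> ~29, consistent with "about 30 complete rotations for every 29 trips"
```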

When the Brookhaven collaboration announced its final set of results in 2006, it reported a value of a = 0.00116592080 with an error of 0.54 parts per million (ppm) – putting it at odds with theory by 2.2–2.7σ. That discrepancy then rose as theorists refined their Standard Model predictions, so that it currently stands at about 3.7σ. The latest measurements extend the disparity still further.

The recent measurements were made using the same storage ring as in the earlier work – the 700 tonne apparatus was transported in 2013 over 5000 km (via land, sea and river) from Brookhaven near New York City to Fermilab on the outskirts of Chicago. But while the core of the device remains unchanged, the uniformity of the magnetic field that it produces has been increased by a factor of 2.5 and the muon beams that feed it are purer and more intense.

Avoiding human bias

The international collaboration at Fermilab has so far analyzed the results from one experimental run, carried out in 2018. It has gone to great lengths to avoid any sources of human bias, even running its experimental clock deliberately out of sync to mask the muons’ true precession rate until the group’s analysis was complete.

Describing its results in Physical Review Letters, alongside more technical details in three other journals, the collaboration reports a new value for a of 0.00116592040 and an uncertainty of 0.46 ppm. On its own, this is 3.3σ above the current value from the Standard Model and slightly lower than the Brookhaven result, but consistent with it. Together, the results from the two labs yield a weighted average of 0.00116592061, an uncertainty of 0.35 ppm and a deviation from theory – thanks to the smaller error bars – of 4.2σ. That is still a little short of the 5σ that physicists normally consider the threshold for discovery.
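
As an illustration of how two measurements with independent uncertainties are combined (a naive inverse-variance average, not the collaboration's official procedure, which uses slightly updated inputs and accounts for correlated systematics), the arithmetic looks roughly like this:

```python
# Naive inverse-variance combination of the two results quoted above.
a_bnl,  rel_bnl  = 0.00116592080, 0.54e-6   # Brookhaven value, 0.54 ppm uncertainty
a_fnal, rel_fnal = 0.00116592040, 0.46e-6   # Fermilab value, 0.46 ppm uncertainty

sig_bnl, sig_fnal = a_bnl * rel_bnl, a_fnal * rel_fnal   # absolute uncertainties
w_bnl,   w_fnal   = 1 / sig_bnl**2, 1 / sig_fnal**2      # inverse-variance weights

a_avg   = (w_bnl * a_bnl + w_fnal * a_fnal) / (w_bnl + w_fnal)
sig_avg = (w_bnl + w_fnal) ** -0.5

print(f"a = {a_avg:.11f} +/- {1e6 * sig_avg / a_avg:.2f} ppm")
# -> about 0.00116592057 +/- 0.35 ppm: the uncertainty matches the quoted
#    0.35 ppm, and the central value lies within one combined sigma of the
#    published average of 0.00116592061
```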

Tamaki Yoshioka of Kyushu University in Japan praises Fermilab Muon g–2 for its “really exciting result”, which, he says, indicates the possibility of physics beyond the Standard Model. But he argues that it is still too early to completely rule out systematic errors as the cause of the disparity, given that the experiments at both labs have used the same muon storage ring. This, he maintains, raises the importance of a rival g–2 experiment under construction at the Japan Proton Accelerator Research Complex in Tokai. Expected to come online in 2025, this experiment will have quite different sources of systematic error.

Alternative theory

Indeed, if a group of theorists going by the name of the Budapest-Marseille-Wuppertal Collaboration is correct, there may be no disparity between experiment and theory at all. In a new study in Nature, it shows how lattice-QCD simulations can boost the contribution of known virtual hadrons so that the predicted value of the muon’s anomalous moment gets much closer to the experimental ones. Collaboration member Zoltan Fodor of Pennsylvania State University in the US says that the disparity between the group’s calculation and the newly combined experimental result stands at just 1.6σ.

The Fermilab collaboration continues to collect data and plans to release results from at least four more runs. Those, it says, will benefit from a more stable temperature in the experimental hall and a better-centred beam. “These changes, amongst others,” it writes, “will lead to higher precision in future publications.”

Inverse planning with Leksell Gamma Knife Lightning: Clinical plan quality and efficiency


Leksell Gamma Knife® (LGK) Icon is the most recent LGK system enabling high-precision frameless stereotactic radiosurgery. The treatment planning system tailored for the LGK, Leksell GammaPlan®, supports both manual forward planning and inverse planning. Leksell Gamma Knife® Lightning is the new inverse planning software, whose optimizer calculates inverse plans from a set of user-defined constraints.

Dr Florian Stieler and Manon Spaniol from the Department of Radiation Oncology, Medical Faculty Mannheim, will present their work comparing manual forward treatment planning for LGK SRS with inverse planning using a pre-release version of Leksell Gamma Knife® Lightning.

This webinar will give great insight into the clinical value of the new inverse planning tool, with improved plan-quality measures and reduced beam-on time compared with manual forward planning.


Florian Stieler, PhD, is the senior medical physicist at the University Medical Center Mannheim, Department of Radiation Oncology and Medical Faculty Mannheim, University of Heidelberg, Germany.

Manon Spaniol is a medical physicist trainee and PhD student at the University Medical Center Mannheim, Department of Radiation Oncology, University of Heidelberg, Germany.

E2E test for spine stereotactic radiosurgery on MR-Linac using RTsafe Spine phantom


This webinar will share the results of end-to-end dose measurements for a spine stereotactic radiosurgery quality-assurance test on both an MR-Linac and a conventional linac. Treatment plans were generated for the lumbar and thoracic spine. Target and spinal-cord doses were measured directly with two ion chambers inserted into the RTsafe Spine phantom. The RTsafe anthropomorphic Spine phantom was used to examine the feasibility of spine SBRT with the MR-Linac.


Eun Young Han is an associate professor of radiation oncology at the University of Texas MD Anderson Cancer Center in Houston, USA, where she works as a clinical medical physicist. She earned her PhD at the University of Florida, focusing on Monte Carlo-based internal dosimetry using pediatric and adult anthropometric phantoms. Today, her clinical and research interests are in the areas of end-to-end testing and dosimetric comparison of brain stereotactic radiosurgery using Gamma Knife ICON and spine SBRT using MR-guided Linac.

Jinzhong Yang is an assistant professor of radiation physics at the University of Texas MD Anderson Cancer Center, USA. He is the physics lead of the MR-Linac programme at MD Anderson. He earned his PhD in electrical engineering from Lehigh University in 2006. Dr Yang has more than 12 years of research experience in medical image registration and image segmentation, with a focus on translating novel imaging computing technologies into clinical radiation oncology practice. His current research interest focuses on MR-guided online adaptive radiotherapy.

2D boron sheets show novel ‘half-auxetic’ effect

Researchers have discovered a two-dimensional material that expands regardless of whether it is stretched or compressed. This hitherto unobserved “half-auxetic” effect, as it has been dubbed, could find use in future nanoelectronics applications.

Most materials become thinner when stretched. If you stretch a rubber band along its length, for example, it will shrink in the other two directions (perpendicular and in-plane), becoming narrower and thinner as you pull. Auxetic materials, however, do just the opposite, expanding in both directions transverse to the applied strain. They also shrink when they are compressed, unlike ordinary materials, which expand. In mathematical terms, conventional materials are characterized by a positive Poisson’s ratio and auxetic ones by a negative Poisson’s ratio.

One of the oldest and best-known applications of a natural material that is borderline auxetic is cork, which has a Poisson’s ratio of near zero and can be pushed into the thinner neck of a wine bottle. Other naturally-occurring examples include human tendons and cat skin.

Researchers seeking to mimic such behaviour in artificially-engineered auxetic materials have previously succeeded in making structures that are robust to indentation and tearing (shear stress). Such materials are now increasingly employed in products such as bicycle helmets or safety jackets.

Palladium-decorated borophene

An international team led by Thomas Heine from TU Dresden in Germany has now discovered “half-auxetic” behaviour in an atomically-thin version of the element boron that they made more stable to strain and stress by adding palladium (Pd) to it. The Pd-decorated borophene, as it is called, has three stable phases, one of which exhibits half-auxetic behaviour along one of its crystal axes.

Using computational modelling, Heine and colleagues showed that the material behaves like an auxetic material when strained (a negative Poisson’s ratio) but expands like an ordinary material when compressed (a positive Poisson’s ratio). Simply put, regardless of whether it is strained or compressed, the material always expands.

Novel negative Poisson’s ratio material

The researchers chose palladium as a stabilizer for their borophene studies because it is a transition metal widely employed in electronics and in catalysis and an efficient donor of electrons to boron. It also has the lowest melting point of all platinum group metals, which makes it easier to handle in experiments.

Heine and colleagues studied their palladium borides (PdBn, where n=2,3,4) theoretically using first-principles calculations combined with a “particle swarm optimization” (PSO) algorithm that enabled them to check the materials’ properties. “Poisson’s numbers are typically calculated by the ratio of strain in two directions, but for compressive and tensile strain, we found that the numbers were different in one PdBn,” Heine explains. “We therefore used the more complex (but more accurate) definition that the Poisson’s number is the derivative of one strain direction with respect to the other.”
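
To make that distinction concrete, here is a toy numerical sketch (the strain response below is invented purely for illustration and is not the team's data or method): a half-auxetic material expands in the transverse direction for both tension and compression, so the strain-dependent, derivative-based Poisson's number changes sign at zero strain.

```python
import numpy as np

# Toy illustration of the half-auxetic sign structure using the derivative
# definition of Poisson's ratio, nu = -d(eps_trans)/d(eps_axial).
eps_axial = np.linspace(-0.02, 0.02, 400)   # applied strain (- compression, + tension)
eps_trans = 0.01 * np.abs(eps_axial)        # hypothetical response: always expands

nu = -np.gradient(eps_trans, eps_axial)     # strain-dependent Poisson's number

print(f"compression: nu ~ {nu[100]:+.3f}, tension: nu ~ {nu[300]:+.3f}")
# -> positive (conventional) under compression, negative (auxetic) under tension,
#    even though the transverse strain is expansive in both cases
```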

These calculations revealed a material with a novel negative Poisson’s ratio and intriguing mechanical and electronic properties. Of the three stable phases of the PdBn they discovered, the PdB4 monolayer – a semiconductor with an indirect band gap of 1.22 eV – was the one that showed the half-auxetic behaviour.

Avoiding energetically-costly bond stretching

Describing their work in Nano Letters, the researchers say that the half-auxeticity they unearthed in PdB4 stems from the material trying to avoid an energetically-costly stretching of the Pd-B bonds when strained along its length. To overcome the significant stress it experiences during this applied strain, the sheet in effect becomes corrugated. This process pushes the neighbouring in-plane atoms away from each other, causing it to expand in both the lateral and vertical directions, like an auxetic material.

When the material is compressed, the PdB4 accommodates this stress by, again, pushing the in-plane atoms away from each other so that the material slightly expands in-plane. This is what a conventional material does.

Designing new structures

Heine says that the mechanism he and his colleagues identified might be used to design new half-auxetic structures. “These novel materials could lead to innovative applications in nanotechnology, for example in sensing or magneto-optics,” he explains. “A transfer to macroscopic materials is equally conceivable.”

Spurred on by their findings, members of the team, which includes researchers from Hebei Normal University in China and Singapore University of Technology and Design, say they now plan to find out whether the effect occurs in other classes of nanomaterials, such as metal-organic frameworks or 2D polymers and macroscopic frames produced by 3D printing. “It will also be interesting to explore if the half-auxetic effect can be found in the out-of-plane direction, that is, if the thickness of a material always expands when subject to in-plane tension or compression,” Heine tells Physics World.

Liquid-jet evolution is driven by surface tension, not gravity

Spectacular upward jets of liquid are produced when a droplet falls on a liquid surface – a phenomenon that has fascinated physicists for at least 100 years. Now, researchers led by Cees van Rijn at the University of Amsterdam have shown that surface tension plays a far larger role than gravity in slowing the upward flow and shaping the jet. The team used advanced imaging techniques to provide a clear quantitative explanation for the self-similar evolution of the jets. Their results shed new light on a widely studied area of fluid dynamics, and could lead to a better understanding of how liquids behave in microgravity.

When a raindrop hits a pool of water, the liquid it contains will rapidly move to fill in the impact crater it forms. This generates an upward-moving jet typically several centimetres in height, which rises and falls in under 100 ms. A key feature of these jets is that their shapes remain the same as they rise and fall – a phenomenon called self-similarity.

The physics of these flows has long been an active area of research, which has largely focused on how various aspects of jet evolution relate to the type of liquids involved. However, the role of one key aspect of small-scale fluid dynamics remains unclear: surface tension is crucial to understanding how droplets in jets evolve but has so far only been invoked to model the droplets that form at the tips of upward-flowing jets.

Fluorescent tracer

Van Rijn’s international team explored jets in unprecedented detail using particle image velocimetry (PIV). This involves putting fluorescent tracer particles in fluids and illuminating them with a laser – revealing the paths and velocities of the flowing liquid.

The team’s measurements revealed that fluid elements inside the jets decelerated 5–20 times faster than would be expected from gravity alone. Such high values could only be explained by accounting for surface tension pulling the jets downwards. This insight allowed van Rijn and colleagues to update existing theoretical models to incorporate surface tension alongside the influences of gravity and fluid inertia. Their new models held up for a variety of other liquids, including ethanol and mixtures of water and glycerol.

These improvements allowed the team to better explain the self-similar manner in which the jets evolve. The effect of surface tension means that while the heights and widths of the jets change constantly over time, their velocity profiles and conical shapes show almost no variation. With their updated models, the researchers could match these dynamics with a particular mathematical description of self-similar systems, which they corrected for the contribution from gravity.

The insights gathered by the team provide the first fully accurate quantitative explanations for the shapes and dynamics of the jets produced by impacting droplets. In future research, they hope to repeat their experiments aboard the International Space Station, where the influence of surface tension can be studied in a low-gravity environment.

The study is described in Physical Review Fluids.

Best practice: How to choose the right detector for your water phantom


Choosing the right detector is key for accurate dose measurements in water. Which detector characteristics have a major influence on my measurement results? Which trade-offs are acceptable for a specific measurement task? Is a diode automatically the best choice for small field dosimetry? It all depends.

In this educational webinar, Jan Würfel, research scientist and detector expert at PTW Freiburg, will provide answers to frequently asked questions on radiation detectors and give advice on how to choose the best detector for a specific application. He will go into detail about basic detector properties, address factors that need to be taken into consideration when selecting a detector for dose measurements in water, and give examples of suitable detectors.

Key topics covered in this webinar include:

  • Discussion of important detector properties, such as range of use, measurement speed, long-term stability or energy response, and their relevance.
  • Typical examples of dose detectors with the discussed properties.
  • Overview of major detector applications and their specific requirements.
  • Typical examples of detectors suitable for use in reference, relative and small field dosimetry.

We look forward to sharing our knowledge and best practices on detector selection and use with you.


Jan Würfel studied physics at Karlsruhe Institute of Technology (KIT) and holds a PhD in molecular electronics. He currently works as a research scientist at PTW Freiburg with a key focus on improving detector performance and investigating new detector materials. In addition, Jan frequently serves as a speaker on a variety of dosimetry topics for the PTW Dosimetry School. His research and lecturing interests include small field dosimetry, detector physics and reference dosimetry.

Ask me anything: Tim Gershon


What skills do you use every day in your job?

As is the case for most academics, the job has a mixture of elements. A lot of my research and interaction with students involves analytical skills. Some of it is subject-specific knowledge, but a great deal of it is understanding how to interact with people – finding good ways to give constructive feedback and suggestions for how people can move things forward. I find that it’s important to do that while still being honest so that people realize when things are not working out, but also understand how to improve.

One big challenge is understanding how to work most efficiently. No matter how well you manage your time there are still only 24 hours in a day, so you have to be realistic about what you can achieve in the available time. Prioritization is a key skill — knowing when something is really important, and that you may have to drop everything else in order to make sure it gets done. But often it’s a juggling act, keeping several tasks on the go at the same time.

What do you like best and least about your job?

The thing that I enjoy best in my job is the thrill of discovery — if that’s not too pretentious a way of putting it – the feeling that at any moment we could have found something of interest in the data. I also enjoy the challenge of figuring out problems. Sometimes you can spend a lot of time scratching your head trying to understand what is going on, and there is great satisfaction when eventually you achieve it. That could be understanding some strange effect in the data, or overcoming some other problem.

I’m sure I won’t be the first person to say that the thing I like least is admin. We seem to be very good at inventing bureaucratic processes that may have been initiated for a good reason, but often outlive their purpose and nobody seems to be willing to get rid of them.   

What do you know today that you wish you knew when you were starting out in your career?

It’s tempting to say that I wish I had known what all the major discoveries were going to be. Then I could have got ahead of the curve and chosen to work on those. But if I already had that knowledge then I would not have enjoyed working on understanding it all, so I guess I don’t wish for that after all.

Really my answer to this is more focused on the human aspect – I would have liked to have learnt a little earlier the best ways of interacting with people. This also means learning about yourself, and by now I appreciate that sometimes it’s best to get a breath of fresh air or have a cup of tea before responding to that difficult e-mail.

Over the course of my career, I also think that as a field we have learnt a lot about diversity and how it benefits the workplace, though there is still much to improve. I would like to think that I have always championed diversity, but it is certainly an area where it would have been beneficial to the field as a whole if we had done more about this, earlier, and I include myself in that.

Fast AFM scanning: realizing the gains of closed-loop velocity control

An R&D collaboration between Queensgate, a UK manufacturer of high-precision nanopositioning products, and scientists at the National Physical Laboratory (NPL), the UK’s National Metrology Institute, has yielded experimental findings that are likely to attract commercial interest from manufacturers of next-generation atomic force microscopes (AFMs) and other scanning probe microscopy (SPM) systems.

In a proof-of-concept study completed earlier this year, an amalgam of enabling technologies from Queensgate – including high-speed, piezo-driven nanopositioning stages and proprietary closed-loop velocity-control algorithms – was put through its paces by NPL researchers in a series of experiments to evaluate its potential suitability for high-speed AFM scanning applications. The results are eye-catching: reliable capture of large-area, high-quality AFM images with nanometre spatial resolution – and all achieved in a matter of minutes, rather than hours or days, at raster scan speeds ranging from 0.5 mm/s up to 4 mm/s.

Although NPL and Queensgate have worked together on several previous occasions, the latest undertaking was conducted via the Measurement for Recovery (M4R) programme. This NPL-led initiative is funded by the UK government and aims to support industry with its recovery from the economic impacts of COVID-19. “M4R provides access to cutting-edge R&D, expertise and facilities to help address analysis or measurement problems that can’t be resolved using standard technologies and techniques,” explains Edward Heaps, research scientist for dimensional metrology at NPL, who carried out the experimental work on behalf of Queensgate. “Ultimately, the aim of M4R is to help boost productivity and competitiveness in UK industry post-pandemic.”

Accept no AFM compromises

To put the NPL study into context, it’s first necessary to recap the fundamentals of AFM. This powerful SPM modality uses an ultrasharp microfabricated tip (usually Si or Si3N4) attached to a cantilever to generate topographic images of a sample surface at very high resolution (1–20 nm depending on the sharpness of the tip). Deflection of the cantilever – a result of atomic-scale forces acting between the probe tip and sample – provides the basis for sample imaging and nanometrology as the tip is scanned across, and in close proximity to, the surface. In the same way, AFM is also able to map a range of mechanical surface parameters (e.g. stiffness, friction and adhesion) as well as chemical, electrical and magnetic properties on the nanoscale.


Notwithstanding the upsides, there are non-trivial shortcomings associated with AFM and other SPM techniques. Slow measurement speed, for example, means low sample throughput and troublesome temperature-induced measurement drift because of prolonged scan times (in some cases running to many hours or days if a large area is to be imaged at high resolution). “Traditional experience with high-speed AFM says that there’s a trade-off between the scan speed and the range [scan area],” explains Heaps. “For the research user, the benefits of extended range and increased scan speed will be seen in measurement throughput. That ultimately translates into enhanced productivity and more published research papers.” The same calculus applies to industrial R&D users who may be using AFM for the quality control and imaging of semiconductor ICs, quantum nanodevices or advanced optical components.

For Queensgate’s engineering team, the NPL collaboration provided an opportunity to road-test a portfolio of technologies for closed-loop velocity control that, they believe, will address the systems-level compromises traditionally associated with high-speed AFM scanning. “We had already deployed closed-loop velocity control to do fast, accurate linear ramps for motion control,” explains Graham Bartlett, lead software engineer at Queensgate. “The M4R project allowed us to evaluate this capability for AFM scanning applications, knowing that image quality links directly to how accurate our velocity control is at maintaining dead straight, linear motion with constant speed.”

By extension, image quality also provides a visual indicator of how fast the AFM stage can be driven before accurate control of velocity cannot be maintained. “If you’re taking AFM measurements every few microseconds and you’re running at constant speed, you’ll get those measurements at a constant spacing and a clear image,” adds Bartlett.  “If your AFM head speed is inconsistent, your measurements won’t be evenly spaced, and you’ll end up with a skewed or distorted image.”
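
A simple numerical sketch illustrates the point (this is a generic illustration, not Queensgate's control code; the sampling interval, scan speed and ripple figures are all assumed values): because height samples are taken at a fixed time interval, the sample positions are the time-integral of the stage velocity, so any uncorrected speed ripple maps directly into position error and image distortion.

```python
import numpy as np

# Generic sketch: a 10% speed ripple at 200 Hz (assumed figures) during a
# 1 mm/s raster shifts sample positions by >100 nm -- easily visible at
# nanometre-scale image resolution.
dt, n = 5e-6, 2000                           # assumed sampling interval and sample count
t = np.arange(n) * dt

v_target = 1e-3                              # 1 mm/s nominal raster speed
v_actual = v_target * (1 + 0.1 * np.sin(2 * np.pi * 200 * t))

x_ideal = np.cumsum(np.full(n, v_target)) * dt   # evenly spaced positions (perfect control)
x_real  = np.cumsum(v_actual) * dt               # positions with uncorrected speed ripple

print(f"worst-case position error: {np.max(np.abs(x_real - x_ideal)) * 1e9:.0f} nm")
# -> roughly 160 nm for these assumed numbers
```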

Speed is nothing without control

Prior to the experimental study, Heaps used Queensgate hardware to retrofit NPL’s customized metrological high-speed AFM (an instrument co-developed with the University of Bristol, UK). In terms of specifics, that meant replacing the AFM’s existing 5×5 μm XY stage with a Queensgate NPS-XY-100 stage (100×100 μm range) driven by a Queensgate NanoScan NPC-D-6330 controller (which offers closed-loop control of position and velocity). Evaluation work subsequently proceeded along several fronts, with initial tests establishing that velocity control remained sufficiently accurate at raster rates of up to 4 mm/s – though a progressive reduction in resolution on the raster axis is seen as scan speeds are stepped incrementally from 0.5 mm/s to 4 mm/s.


A related issue for fast AFM scanning is the extent to which rapid acceleration and deceleration at each end of a raster line excite mechanical resonances. This phenomenon, commonly known as “ringing”, is found to be amplified at the faster scan speeds – which in turn gives a somewhat smaller linear region for image acquisition. With closed-loop control and features such as notch filtering, however, the NanoScan controller can substantially reduce these effects over an open-loop control system. The controller also has built-in waveform generation capabilities, which include S-curve acceleration and deceleration profiles to further reduce such resonances.
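
The sketch below shows the basic idea behind such a profile (a generic smoothstep ramp chosen for illustration; it is not the NanoScan controller's actual waveform, and the ramp time and target speed are assumed values): keeping the acceleration continuous through the turnaround avoids the abrupt acceleration steps of a plain trapezoidal ramp, which are what tend to ring the stage's mechanical resonances.

```python
import numpy as np

# Generic S-curve (smoothstep) velocity ramp: acceleration starts and ends at
# zero, so there are no step changes in acceleration to excite resonances.
def s_curve_velocity(t, t_ramp, v_max):
    s = np.clip(t / t_ramp, 0.0, 1.0)
    return v_max * (3 * s**2 - 2 * s**3)    # smoothstep: continuous acceleration

t = np.linspace(0.0, 2e-3, 1001)                    # 2 ms window around the turnaround
v = s_curve_velocity(t, t_ramp=1e-3, v_max=4e-3)    # ramp to 4 mm/s over 1 ms (assumed)
a = np.gradient(v, t)

print(f"peak acceleration: {a.max():.2f} m/s^2")
# -> finite and smoothly varying; a trapezoidal ramp would instead apply its
#    full acceleration as an instantaneous step at each end of the ramp
```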

The fine-detail of image acquisition also came under scrutiny as part of the NPL project. The combination of high speed and larger scanning area significantly simplifies the imaging process. With the original 5×5 μm piezo stage, large-area imaging with the NPL high-speed AFM requires a coarse positioner to move the nanopositioning stage to capture a series of smaller image “tiles” that are subsequently stitched together – a slow and cumbersome exercise that impacts aggregate acquisition time. By contrast, the NPL results show that the 100×100 μm piezo stage may be used on its own to capture a larger image, while the higher speed available using velocity control gives a greatly reduced timeframe. Even if a larger area needs to be captured, it will mean significantly fewer moves from the coarse positioner – another win for the NPS-XY-100 stage.

Following on from the M4R collaboration, Queensgate and its parent company Prior Scientific are looking to share the experimental findings directly with OEM partners in the SPM community and beyond. “We see real potential for our nanopositioning stages, control electronics and algorithms to deliver significant AFM improvements versus speed, throughput and image quality,” concludes Bartlett. “It’s also worth noting that those same core building blocks are well suited for other cutting-edge applications like 3D live-cell imaging using confocal microscopy.”

High-field versus low-field MRI: is it time for a rethink?

© AuntMinnieEurope.com

New ultralow-field MRI scanners are bringing unprecedented flexibility at low cost to the clinical setting, registrants heard at ECR 2021. Presenting the benefits of ultralow-field MRI, Mathieu Sarracanie, co-head of the Adaptable MRI technology (AMT) Center at the University of Basel, Switzerland, said both the upper and lower ends of the ultralow-field spectrum that feature in two recently-launched devices will shine bright in the modality’s future.

The lower and higher ends of low-field MRI both bring benefits and value – including performance, lower costs and easier siting than 1.5-tesla (T) and 3T MRI – he noted, even in the unfriendly environment of the intensive care unit (ICU). However, within the low-field range, the advantages may vary.

Illustrating the differences between ultralow-field scanners, he cited two commercial systems as case examples. The Siemens Magnetom Free.Max, a 0.55T scanner with an 80-cm bore, weighs in at 3.2 metric tons (3200 kg). This scanner has been used routinely for imaging the abdomen, as well as for different organs such as the lungs and for cardiovascular procedures.


“The major pro is that it is basically a standard MRI, it has a large bore, no quench line, which is a huge advantage, no need for helium refill and shows very good imaging performance. The cons are (also) that it is basically a standard MRI, and the siting is similar to a 1.5T machine,” noted Sarracanie, pointing out that this solution might not be suitable for ICU, and that it might be problematic for neurological applications where resolution and the capacity to leverage susceptibility contrast are key.

Sarracanie pointed to the Hyperfine scanner as his second example, noting that he co-founded the ultralow-field MRI startup in 2014 with the aim of bringing MRI to places where previously it wasn’t feasible.

The Hyperfine SWOOP, a 64 mT machine intended mostly for head-injury imaging but also for MSK imaging of the wrist, knee and foot, can be located in smaller and more crowded spaces such as the ICU, stroke unit, outpatient centres and paediatric departments. The average scan takes 35 minutes per patient and includes T1, T2, FLAIR and diffusion imaging with 3D slices and millimetric resolution, he said. The Hyperfine requires a monthly subscription and has to be integrated into the hospital’s PACS.


“This solution is truly disruptive, in the sense that the technology allows for a paradigm change. It has a small footprint and is also mobile. These machines don’t need any specific siting and the cost is low. The con is that these devices are purpose-built and some customers may see that as a disadvantage,” Sarracanie said.

Pointing to Hyperfine’s imaging capacity, he noted that grey- and white-matter contrast can show very high dispersion at lower field strengths. This has been explored by researchers at the University of Aberdeen, UK, who have generated some fascinating T1 contrast results in stroke patients thanks to a novel fast-field cycling MR scanner.

“Now is the time for ultralow-field MRI. The 0.55T machine could be a serious contender to the standard 1.5T as it is more compact and cost-efficient. Machines at even lower field strength will not replace MRI as we know it but will expand its use and bring it to other places that we don’t know yet,” Sarracanie added.

7T detail

High-field MRI of 7T and above might seem like a cool toy with little clinical benefit, but in October 2017 the FDA approved the first 7T MRI platform for clinical use, pushing it from the realm of research into the clinical world. So said Anja van der Kolk, a neuroradiologist at UMC Utrecht and the Netherlands Cancer Institute/Antoni van Leeuwenhoek Hospital in Amsterdam, who spoke about the benefits of ultrahigh-field MRI in brain diseases during the same ECR session, organized jointly by the European Society of Radiology and the European Federation of Organisations for Medical Physics.


Ultrahigh-field MRI provides more anatomical detail due to higher spatial resolution within a reasonable scan time. New detail can be gleaned from T2-weighted imaging because of the increased susceptibility effects. Increased signal-to-noise ratio also enables metabolic imaging and imaging of x-nuclei, while increased spectral resolution makes it possible to distinguish metabolic peaks that overlap on 1.5T.

Now essential for certain areas of healthcare, ultrahigh-field MRI offers advantages that translate not only into direct clinical benefits such as the diagnosis, treatment and prognosis of disease, but also indirect ones such as knowledge about the development of diseases, their associated factors, and the effects of these diseases on other organ systems.

“These indirect benefits have a significant impact on our understanding of diseases and how to potentially diagnose and treat them in a better way,” she said.

Novel techniques

One of high-field MRI’s most promising applications is metabolic imaging, such as MR spectroscopy, chemical exchange saturation transfer, and sodium and x-nuclei MRI, according to van der Kolk.

“Specifically, in sodium MRI of brain tumours you can see high sodium concentration inside the enhancing part of the tumour, but also within the T2 hyperintense area around the tumour and which we know is caused by oedema and tumour cells. This may be a way to visualize infiltrative tumour cells,” she noted.

Ultrahigh-field MRI advantages

But should radiologists view ultrahigh-field MRI as a specialized platform for only some rare or difficult cases or as a routine platform and part of the MRI workforce?

Van der Kolk’s view falls between the two extremes. At the moment a specialized platform for specific cases, this niche modality may be used more routinely with time.

“Future studies will show if it is going to become the next 3T scanner,” she noted.

Clinical use

Her talk covered several specific neurological diseases already benefitting from ultrahigh-field MRI, such as MRI-negative/cryptogenic epilepsy, Parkinson’s disease, deep-brain stimulation, multiple sclerosis, pituitary (micro)adenomas and brain tumours.

Ultrahigh-field MRI can have a clinical benefit in epilepsy because 30% of patients with the condition have the cryptogenic type, meaning no anatomical focus can be found. However, 7T MRI can detect lesions that cannot be detected at lower field strengths in 30% to 40% of these patients. Lesions such as focal cortical dysplasia, polymicrogyria and mesial temporal sclerosis can be missed on 3T. And while these lesions can sometimes be seen on 3T in retrospect, she noted that the 3T images illustrating her cases came from centres where dedicated epilepsy radiologists highly experienced in detecting these abnormalities had missed them, only spotting them retrospectively on the original images once they had been detected on 7T.

In addition to more accurate detection of the causes of epilepsy, and better assessment of Parkinson’s disease, the targeting of deep-brain stimulation electrodes also can be optimized through improved visualization of the basal ganglia and other grey-matter structures. Ideally, these electrodes should be localized in the dorsal border of the subthalamic nucleus – more readily visualized on 7T than on other field strengths.

Proven advantages

Multiple sclerosis is one of the best-studied diseases at 7T, where the technique provides better lesion detection in younger patients for earlier diagnosis, as well as differentiation from other white-matter lesions: ultrahigh-field MRI was the first technique to depict the small central vein sign, which is characteristic of multiple sclerosis lesions and not seen in other white-matter hyperintensities, van der Kolk noted. It also depicts paramagnetic phase changes and a persistent hypointense rim that can predict outcome in these patients.

She also pointed to 7T’s improved detection of very small microadenomas which would otherwise be missed on lower field strengths, making presurgical planning difficult.

Meanwhile, for brain tumours, 7T’s increased susceptibility effects and spectral resolution allow doctors to detect the 2-hydroxyglutarate peak that is specific to gliomas, for example, while tumour progression can be tracked via the susceptibility effects of microvascularity, which increases during treatment.

However, there are challenges, she said. So far it is unclear what the clinical consequences of some findings will be, and importantly nonradiology clinicians will have to know what the advantages of ultrahigh-field scans are in order to request them.

  • This article was originally published on AuntMinnieEurope.com ©2021 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

Improved hydrogel could make artificial tendons


A strong, flexible and tough hydrogel that contains more than 70% water could be used to make durable artificial tendons and other load-bearing biological tissues. The new hydrogel was made by researchers at the University of California, Los Angeles, US and is based on polyvinyl alcohol (PVA) – a material that is already approved for some biomedical applications by the US Food and Drug Administration.

Biological tendons are also more than 70% water, yet they remain strong and tough thanks to a series of connecting hierarchical structures that span length scales from nanometres to millimetres. Researchers have been trying to mimic these materials using hydrogels, which are three-dimensional polymer networks that can hold a large amount of water and are structurally similar to biological tissue. The problem is that so far, hydrogels that contain as much water as natural tendons tend not to be as strong, tough or resistant to fatigue as their biological counterparts.

Salting out a freeze-casted structure

In their work, a team led by Ximin He of UCLA’s Samueli School of Engineering began by freeze-casting, or solidifying, PVA to create a honeycomb-like porous polymer structure. The micron-sized walls of the pores in this material are aligned with respect to each other and serve to increase the concentration of PVA in localized areas.

The researchers then immersed the polymer in a salt solution (“salting out”) to precipitate out and crystallize chains of the polymer into strong threads, or fibrils, that formed on the surface of the pore walls. This phenomenon is known as the Hofmeister effect.

Hierarchical assembly of anisotropic structures

The resulting hydrogels have a water content between 70 and 95%. Like natural tendons, they contain a hierarchical assembly of anisotropic structures spanning lengths from the molecular scale up to a few millimetres.

He and colleagues tested various salt ions in their experiments and found that sodium citrate was the best at salting out PVA. When they used a mechanical tester to measure the stress-strain characteristics of the resulting hydrogel, they found that it had an ultimate stress of 23.5 ± 2.7 megapascals, strain levels of 2900 ± 450 %, a toughness of 210 ± 13 megajoules per cubic metre, a fracture energy of 170 ± 8 kilojoules per square metre and a fatigue threshold of 10.5 ± 1.3 kilojoules per square metre. The researchers say that these mechanical properties resemble those of natural tendons. They also note that their hydrogel showed no signs of deterioration after 30 000 stretch cycles.

Replicating other soft tissue

Since the Hofmeister effect exists for various polymers and solvent systems, He says the technique used in this work, which is detailed in Nature, could apply to other materials too. It might therefore be possible to use hydrogel-based structures to replicate other soft tissues in the human body, not just tendons.

As well as making these other tissues, the hydrogels could be used in bioelectronic devices that have to operate over many cycles, adds He. The structures could also be used as a coating for implantable or wearable medical devices to improve their fit, comfort and long-term performance.

The researchers’ longer-term ambition is to use the new hydrogel to mimic not only load-bearing tissues but also functional organs. “This could be achieved by combining 3D printing and tissue engineering with the hydrogel we have developed,” study lead author Mutian Hua tells Physics World.
