It’s easy to forget that Hubble, famed as it is for images of nebulae and galaxies in deep space, is equally adept at imaging the outer planets of our solar system. This ability came into its own in the summer of 1994, when 21 fragments of the shattered comet Shoemaker–Levy 9 slammed into the giant planet Jupiter. Nobody really knew what to expect – some astronomers thought the event might be a damp squib – but over seven days in July, Jupiter was pounded, with each impact producing a huge fireball and dark bruises on Jupiter’s banded visage.
This Hubble image shows the evolution, over several days, of one of the larger impacts, referred to as site G. A daring space shuttle mission had repaired the telescope’s short-sighted optics a mere eight months earlier, and the success of the surgery quickly became clear: Hubble had a front-row seat for the impact, and produced stunning imagery that reminded us all of the power and importance of the solar system’s minor bodies.
With so many galaxies in the universe, some of them are bound to collide. When they do, the result can be a titanic maelstrom of stars, gas and spiral arms wrapping around one another, as shown in this Hubble image of the galaxies NGC 4038 and 4039. They’re collectively known as the Antennae Galaxies, because wide-field views show huge streams of gas and stars pulled out of the galaxies by gravitational tides and stretching away like giant insect antennae. A narrower view, meanwhile, shows the two galaxies locked in an embrace with huge dark clouds of dust spilling across their distorted faces.
Few, if any, stars will collide during this merger, but the vast clouds of molecular gas within the galaxies can’t avoid one another. And so they clash, sparking intense bursts of star formation, which are visible as pink blotches of ionized hydrogen embedded in the twisted spiral arms. Eventually, the two galaxies will coalesce to form a single giant elliptical galaxy, and their supermassive black holes will find one another and merge, unleashing a burst of gravitational waves as they do so. We’ll have to wait another 400 million years for that to happen, though.
Comparison of routine spiral 4D CT and i4DCT imaging for regular and irregular breathing patterns. Left: normalized programmed breathing curves; green: beam-on periods; grey: beam-off periods; red: breathing pause. Right: images reconstructed at maximum inhalation; the arrow shows an artefact caused by the breathing pause. (Courtesy: Med. Phys. 10.1002/mp.14106)
Respiration-correlated CT imaging, or 4D CT, is an essential part of radiotherapy planning for thoracic and abdominal tumours. Clinical 4D CT images are often affected by artefacts, however, mainly due to irregular breathing patterns during data acquisition. A research team has now successfully validated a prototype implementation of an intelligent 4D CT (i4DCT) scanning protocol that produces fewer motion artefacts, demonstrating that breathing signal-guided 4D CT is feasible for clinical applications.
The researchers, at the University Medical Center Hamburg-Eppendorf (UKE) in Germany and Siemens Healthineers, utilized online breathing curve analysis and respiratory signal-guided 4D CT protocols to develop the i4DCT concept. In a feasibility study, they compared images of a motion phantom recorded using routine spiral 4D CT and i4DCT (Med. Phys. 10.1002/mp.14106).
4D CT images are used for dose calculation and optimization, to define motion-adapted safety margins, and to perform 4D dose reconstruction and quality assurance after radiation treatment. When images are degraded by artefacts, it may be necessary for a patient to have an additional CT scan, exposing them to a second high radiation dose and potentially delaying their radiotherapy.
The i4DCT concept enables automated selection of CT beam-on/beam-off periods, by adapting data acquisition to a patient’s individual breathing pattern, instead of the patient having to adapt to the scanner. The i4DCT workflow consists of an initial learning period to establish a reference patient-specific breathing cycle representation, followed by online breathing signal-guided sequence mode scanning.
The process involves switching on the CT beam at a breathing state just before the patient’s typical end-inspiration state, continuously acquiring breathing signal and projection data, simultaneously analysing the breathing signal in terms of projection data coverage, and switching off the beam if predefined coverage conditions are fulfilled. The i4DCT monitors the patient’s breathing in real time, starting and stopping the sequence scanning based on online analysis of the acquired breathing curve information.
After a beam-off event, which corresponds to a breathing state close to, but after, end-inspiration, the scanner couch is moved to the next position and the process repeated until the desired scanning range is covered. The projection data are then used for retrospective image reconstruction.
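In code terms, the gating logic amounts to a simple control loop. The snippet below is purely illustrative – a simplified simulation rather than the Siemens implementation, with a hypothetical breathing model, trigger level and coverage condition standing in for the patient-specific quantities described above.

```python
import numpy as np

# Illustrative sketch of breathing signal-guided beam gating (i4DCT-style).
# A simplified simulation, not the Siemens implementation: the breathing
# model, trigger level and coverage condition are hypothetical stand-ins.

def simulate_gated_scan(n_couch_positions=3, sample_rate=25.0):
    rng = np.random.default_rng(0)
    reference_period = 4.0   # seconds, learned during the initial reference period
    trigger_level = 0.9      # breathing state just before typical end-inspiration

    def breathing(t):
        # Mildly irregular breathing: slowly drifting period, noisy amplitude
        period = reference_period * (1.0 + 0.1 * np.sin(0.05 * t))
        clean = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / period))
        return clean * (1.0 + 0.05 * rng.normal())

    t, log = 0.0, []
    for position in range(n_couch_positions):
        # Beam off: monitor the signal until it rises through the trigger level
        while breathing(t) < trigger_level:
            t += 1.0 / sample_rate
        beam_on_time = t

        # Beam on: acquire projections until one full breathing cycle is covered
        while t - beam_on_time < reference_period:
            t += 1.0 / sample_rate
        log.append((position, beam_on_time, t))  # beam off; couch moves on

    return log

for position, on, off in simulate_gated_scan():
    print(f"couch position {position}: beam on at {on:5.2f} s, off at {off:5.2f} s")
```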
Protocol validation
For the study, the researchers implemented the i4DCT core workflow on a Siemens SOMATOM go platform, and imaged a motion phantom containing a customized wooden insert with oblique aluminium plates. They examined four programmed motion curves: regular breathing; breathing pause/irregular breathing frequency; irregular breathing amplitude; and a mixture of breathing pause, frequency irregularity and amplitude irregularity.
Principal investigators René Werner (left) of UKE and Christian Hofmann of Siemens Healthineers.
Principal investigators René Werner and Christian Hofmann reported that in a regular breathing scenario, both routine spiral 4D CT and i4DCT generated similar images. Spiral 4D CT images acquired during the breathing pause/irregular breathing frequency scenario contained interpolation artefacts that distorted the appearance of the central aluminium plate. The i4DCT-acquired data, however, did not show these artefacts. For the other two motion scenarios, i4DCT-acquired data also outperformed spiral 4D CT data, producing images of better quality with fewer artefacts.
The researchers point out that i4DCT will not generate artefact-free images for every patient and every breathing pattern. Because it relies on external breathing signals, a robust correlation of internal and external motion data is needed to produce artefact-free images.
The i4DCT scanning time was also longer: by 38%, 72%, 82% and 100%, respectively, for the four motion scenarios compared with conventional 4D CT. With the exception of the regular breathing scenario, beam-on time was also longer, but only by 13%, 20% and 25% compared with conventional 4D CT.
“The measurement results and acquired images support the conclusions of the [previously published] in silico studies and illustrate the considerable reduction in 4D CT image artefacts by real-world application of i4DCT and comparison to routine spiral 4D CT,” the authors write.
“In line with this statement, the i4DCT functionality has been integrated into Siemens go.Open Pro scanners and is called Direct i4D technology,” Werner tells Physics World. “Currently, the collaboration partners are working on a comprehensive phantom-based study to double check the Direct i4D performance for patient breathing patterns under real-world conditions. Moreover, together with other radiotherapy affiliations, the first 4D CT images of lung and liver patients have been measured and are under evaluation.”
Relaxed approach: Steven Savage on the shore of Lake Mälaren, north of Stockholm (Courtesy: Steven Savage)
I have spent my entire career researching materials technology, usually in an environment where national security is the raison d’être. I am now semi-retired, and in recent years I have become interested in the ethical impacts of new technology on society. I have lived and worked in the UK, the US and now Sweden, and my family and friends live in the UK and Belgium. I currently interact with colleagues in the US, Europe, Asia and Australia.
The pandemic has had relatively little impact on my life as a scientist. In this, I consider myself fortunate. I write technical reports, grant proposals and journal papers, and I participate in telephone and video meetings from my home office in much the same way as before. I go to my place of work when needed, which of late is less frequently since more of my colleagues are now also working from home. Virtual coffee breaks are a poor substitute for the real thing (and the keyboard gets sticky), but they are better than nothing.
A different path
Sweden has taken a much more relaxed approach to the challenges posed by COVID-19 than most other nations. While other states rapidly imposed mandatory countrywide quarantines and restrictions on non-essential travel, Sweden continues to follow a “softly, softly” policy – one that was even described (recently, in a quality daily newspaper) as a laissez-faire attitude. However, that is the policy of the government, which in my view (perhaps I am being unjust) seems to be a hive of (in)activity. In recent years this has become the status quo, so perhaps this is an example of normal attitudes being applied in abnormal times. In contrast, many employers were quick to allow staff to work from home even if they were healthy, and everyone was required to stay away if they showed any symptoms. Sweden is already highly “digitized” so perhaps this was easier for us than for other countries.
There are restrictions on gatherings of more than 50 people, and restaurants may only offer table service – no buffets or eating at the bar. There is a voluntary code of social distancing, which in the main does seem to be observed, although an increasing number of complaints are being made related to city-centre bars and restaurants, and a form of policing is now being implemented. Public transport continues to operate, and Stockholm, like other major cities, has experienced crowded buses and trains as services are cut back due to absences of drivers and other staff. Most domestic air travel has ceased, simply because there are no passengers. Shops remain open, albeit with fewer customers than usual. However, the hardware stores seem to be benefitting as citizens take on home improvement projects. Swedes are keen do-it-yourself enthusiasts! There has been little evidence of hoarding, and shelves in food stores remain well-stocked.
Worrying signs
The health service seems to be coping, although all efforts appear to be devoted to treating COVID-19 patients, with all other activities on hold. This is clearly worrying, especially for patients needing immediate treatment. Early fears that the hospitals, intensive care units and ventilator capacity would be overwhelmed seem to be unfounded, but having said that, and despite optimistic comments from some, there is little to suggest that the infection rate is dropping. It doesn’t help that testing is still relatively infrequent, although recently a few hundred random tests were performed to estimate the level of infection in the Stockholm population. Another worrying sign is that many care homes for the elderly have cases of coronavirus. The mortality rate in Sweden is significantly higher, by perhaps an order of magnitude, than in our neighbours Norway and Finland, both of which were quick to impose strict quarantine measures.
Because of these facts, there is increasing criticism, supported by national and international experts, of the Swedish strategy. Most worrying for me personally is the lack of transparency shown by Swedish authorities, primarily the Public Health Agency, which is releasing very little information other than correct but unhelpful statements like “very little is known for certain” and “numbers are unreliable because they may relate to different measurement methods”. I am worried, not so much for my own safety but for those more vulnerable.
Searching questions
There is a downside and an upside to any event, and the COVID-19 outbreak is no exception, as it presents an opportunity that even the maddest of mad scientists would not dare to submit for ethical review as an experiment. While we acknowledge the disastrous effects of the pandemic both on citizens and on national economies, we must also ask, what can we learn from this? Better still, what must we learn from this?
Lack of preparedness on the part of local, national and international authorities and healthcare organisations is a recurring theme, in Sweden and elsewhere (Finland, which has maintained a robust level of national preparedness, is a notable exception in our region). Shortages of simple personal protective equipment (PPE) such as masks, gloves, visors, and gowns, as well as devices such as ventilators, continue to cause concern. Sweden and much of the Western world has clearly taken the “just-in-time” economically optimized philosophy to an extreme. Emergency stocks of elementary equipment and medicine such as paracetamol and sedatives have been reduced or eliminated.
The weakness of this approach is now painfully clear. In military terms, the logistical supply line is interrupted. That a very large amount of PPE originates from a single source – China – is also a weakness, open not only to accidental catastrophes such as COVID-19 but also to deliberate geopolitical manipulation. This fact has been widely recognized for at least a decade, without any significant action being taken.
Also worrying is the fact that existing crisis management plans have, in at least some cases, never been implemented. There are newspaper reports of a regional crisis management plan (which included an emergency supply of PPE) from one Swedish region dating back to 2006, but this has still not been implemented. Similar reports are emerging from the UK.
Simple numbers, complex reality
Information is important; accurate and timely information even more so. Although China recently revised its initial number of deaths upward, indicating a reason to carefully examine the data, there seem to have been relatively few deliberate attempts to spread “false news” or “alternative facts”. Nevertheless, the information available has, in my opinion, fallen short of being adequate. We have been supplied with simple numbers: of infected patients, of intensive care patients, of deaths and more recently rates of infection, and the inevitable comparisons between different nations. Sweden, having chosen to apply a policy of voluntary social distancing, is frequently used for comparison, but since the numbers are based on different parameters they are likely to be misleading. Even deaths are unlikely to be reported accurately, since the cause of death may or may not be attributed to COVID-19, and there is latency in the reporting.
What especially concerns me is the psychological impact of the pandemic. I have little evidence of any attempts in any country to address this aspect, other than occasional platitudes that boil down to “don’t panic, we have the situation under control” – platitudes that are frequently and immediately contradicted by someone on the “front line”. The people of Sweden and other nations need reliable and timely information to allay their fears; open and transparent discussion of alternatives; and explanations for the actions taken. For me, “don’t panic” sends entirely the wrong message.
In this case, time will not tell which strategy (to quarantine or not) was right or wrong. Is Sweden following a sensible policy? There seems to be mounting evidence against this, but the question is far too complex for a simple yes/no answer. Which is most important – the national economy or human life? There is no right or wrong answer, simply varying degrees of better or worse solutions. What is important is to learn from the experience. There is sound scientific evidence that the risk for pandemics is increasing (see, for example, the World Health Organization’s 2019 annual report A world at risk) but as noted, there is little point in performing a risk assessment, developing crisis management plans and then putting it all on a shelf to gather dust.
In the meantime, I continue to work using video conference calls, telephone and email. These are increasingly essential tools – imagine the current crisis without modern communication! A truly horrifying thought. This crisis will pass, as have previous pandemics, natural and human-made catastrophes. Let us hope we take to heart those lessons which can be learnt and take precautions against making the same mistakes again. To quote one of the world’s greatest statesmen, “Never let a good crisis go to waste.”
The role of gravity in producing distinctive dribbles of wine called tears has been explained by scientists in the US. Andrea Bertozzi and colleagues at the University of California, Los Angeles have shown that the forces on a film of wine on the side of a glass create a “reverse undercompressive shock wave” that breaks up to form tears.
Tears – sometimes called legs or fingers – are best observed in higher alcohol wines at room temperature. Using a martini glass with a fixed wall angle also helps. To create tears, begin by covering the glass and swirling the wine. Then set down the glass and remove the cover after a few seconds. You should see a circular wave sloshing around the glass that climbs up above the meniscus. As it travels, the wave sheds small droplets, reminiscent in shape of human tears, that fall back into the glass and endure for much longer than the wave itself.
For more than a century, physicists have known that wine climbs the side of a glass in a process called Marangoni flow. In 1855, James Thomson (brother of Lord Kelvin) was the first to describe the effect. Ten years later the Italian physicist Carlo Marangoni wrote about the flow in a dissertation – and ultimately lent his name to the effect. Then in 1878, the American polymath Willard Gibbs published a theory describing Marangoni flow.
Marangoni flow occurs at the interface of two liquids with different surface tensions. With tears, the difference is created by the rapid evaporation of alcohol from a film of wine clinging to the inner surface of the glass above the fill line. The loss of alcohol increases the surface tension of the remaining liquid in the film relative to the wine in the glass below. This pulls liquid up from the glass, which eventually falls back down in dribbles called tears, legs or fingers. What had puzzled researchers is why the liquid forms tears, rather than flowing back down in a sheet or stream.
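In quantitative terms, the driving force is the surface-tension gradient along the film, which acts as a shear stress on the free surface and drags the liquid upwards. As a schematic relation (generic thin-film notation, not anything specific to the new study):

$$\tau = \frac{\partial \sigma}{\partial x}, \qquad u(z) \approx \frac{\tau\, z}{\mu},$$

where σ is the surface tension, x the distance up the glass, μ the viscosity of the wine and u(z) the upward flow speed at a height z above the glass wall.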
Upgraded class project
Although Marangoni flows have been studied for 165 years, the underlying theory was not complete. In 2019 Andrea Bertozzi was preparing a lecture on tears of wine and told Physics World: “I thought that I would do a fun lecture and I could even bring some wine to do a demonstration. As I was preparing my materials and going through the research papers in detail, I realized that there was a big gap in the literature”.
She adds, “Part of my lecture ended up being about what was missing and what we could do about it scientifically. My student Yonatan Dukler decided to make this topic his class project”. That project has now been extended and the results published in Physical Review Fluids with Dukler as the lead author and Hangjie Ji and Claudia Falcon lending a hand.
Reverse undercompressive shock
The four researchers realized that the current theory of tear formation did not do a very good job of describing the role of gravity, which pulls the liquid down the side of the glass in the form of tears. The team created a quantitative model that assumed a constant surface tension gradient up the side of the glass. They discovered that the thickness of the liquid film climbing up the glass plays a crucial role in the formation of tears.
If the liquid crept up the glass in a uniform thick film it would simply stream back down again, rather than creating tears. Instead, experiments and calculations done by Bertozzi and colleagues show that the liquid moves up in a bulge-like wave that leaves a thinner film behind it. This can be described as a “reverse undercompressive shock wave”, which is known to be unstable. This means that small inhomogeneities along the wave can cause it to break up into tears.
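Schematically, models of this type track the thickness h(x, t) of the film with a lubrication (thin-film) equation, in which the Marangoni stress τ pushes liquid up the glass, gravity g pulls it back down and surface tension σ smooths the profile. With the viscosity scaled out, it takes a form something like this (a generic sketch, not the precise equation solved in the paper):

$$\frac{\partial h}{\partial t} + \frac{\partial}{\partial x}\!\left(\frac{\tau}{2}\,h^{2} - \frac{g}{3}\,h^{3}\right) = -\,\frac{\partial}{\partial x}\!\left(\frac{\sigma}{3}\,h^{3}\,\frac{\partial^{3} h}{\partial x^{3}}\right).$$

Because the Marangoni term grows as h² while the gravity term grows as h³, a sufficiently thin film is carried upwards while a thick one drains back – the competition that lets a bulge-like wave advance up the glass while leaving a thinner film behind it.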
According to Omar Matar of Imperial College London, the research is “a significant result that illustrates the richness in physics and underlying mathematical structure of the tears of wine problem, which continues to be a source of fascination to scientists in the present day”. He adds that the work, “represents a departure point to a number of exciting future directions that allow us to explore such factors as the influence of the three-dimensional geometry of the glass, and the development of surface tension gradients up the glass, on the morphology of the tears”.
Our universe contains two types of star clusters. Open clusters are found in the discs of galaxies, and are formed of young stars that soon drift away into the environs of the galaxies’ spiral arms. Globular clusters, on the other hand, are compact and ancient, dating back to the dawn of the age of galaxies, and are formed of hundreds of thousands of stars packed into a space no more than 100 light years across.
Globular clusters exist in the halo around a galaxy, and as you might expect with so many stars so close together, they make wonderful subjects for Hubble portraits. A good example is this image of the globular cluster NGC 1866, which lives on the outskirts of the Large Magellanic Cloud. This cluster harbours a mystery. While models of the formation of globular clusters depict all their stars being born in one huge burst, some such clusters, including NGC 1866, seem to contain several generations of stars, based on how many heavy elements the stars contain. Hubble’s detailed observations are helping to identify such clusters – an important step on the road to figuring out their history.
Hidden beneath the expanding lobes of gas and dust in this multi-wavelength view is one of the most massive and volatile stars in the Milky Way. Eta Carinae was just another nondescript star until 1843, when it underwent a dramatic outburst and briefly became the second-brightest star in the night sky. It’s unclear exactly what prompted this explosive episode, but we do know that Eta Carinae – actually a double star system concealed at the epicentre of the two lobes – shed an enormous amount of mass from its outer layers in the process.
Hubble’s false-colour image combines visible light observations by its Wide Field Camera 3 with ultraviolet-light data from its Ultraviolet Imaging Spectrograph. It shows the presence of gas – magnesium in blue, and shocked nitrogen gas presented here in red – that could have been ejected by the star shortly before its outburst, and which could therefore provide clues as to what caused the tumultuous eruption.
Physicists operating a huge detector in Japan have confirmed previous hints that neutrinos behave differently to their antimatter counterparts. The researchers, who make up the T2K collaboration, point out that it is still too early to claim discovery of the long-sought leptonic charge-parity (CP) violation. But they say that the latest results impose the strongest constraints yet on the type of matter-antimatter imbalance that neutrinos can experience.
CP violation was invoked by Andrei Sakharov in 1967 as a way of explaining why the universe (or at least the part visible to us) appears to consist almost entirely of matter even though equal quantities of matter and antimatter ought to have been produced in the Big Bang. Previously it had been thought that particles and antiparticles would always behave in exactly the same way if their spatial coordinates were reversed. But Sakharov posited that this symmetry was broken in the early universe.
Such an asymmetry has been observed in the decays of certain quark-containing particles, namely kaons and B-mesons. However, the amount of CP violation observed in these processes is far too small to explain the huge imbalance that exists between matter and antimatter.
Very heavy neutrinos
The nearly 500-strong international T2K collaboration is investigating whether the shortfall might in fact be made up by neutrinos, which are leptons. The idea is that the decay of very heavy neutrinos shortly after the Big Bang would have yielded the CP violation needed to generate a matter-dominated universe.
Neutrinos come in three flavours – electron, muon and tau – and oscillate from one flavour to another as they travel through space as a result of their tiny but finite mass. CP-violation would mean muon neutrinos transform into electron neutrinos more readily than muon antineutrinos morph into electron antineutrinos, or vice-versa.
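In the standard three-flavour framework, the size of any such difference is often expressed as an asymmetry between the two appearance probabilities, which in vacuum is proportional, to leading order, to the sine of the CP-violating phase δCP (a schematic expression – the full formula also involves the mixing angles, mass splittings and matter effects):

$$\mathcal{A}_{CP} = \frac{P(\nu_{\mu}\to\nu_{e}) - P(\bar{\nu}_{\mu}\to\bar{\nu}_{e})}{P(\nu_{\mu}\to\nu_{e}) + P(\bar{\nu}_{\mu}\to\bar{\nu}_{e})} \;\propto\; \sin\delta_{CP}.$$

A value of δCP near ±90° would give the maximal asymmetry, while δCP = 0 or 180° would mean no CP violation at all.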
T2K uses a neutrino beam generated at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai on the east coast of Japan. Energetic protons strike a graphite target to create pions and kaons, which decay to create muon neutrinos and muon antineutrinos. The source can be adjusted to create beams that are mostly muon neutrinos or mostly muon antineutrinos.
Extremely pure water
The neutrinos are sent through the ground (which offers little resistance to neutrinos) towards the Super-Kamiokande detector underneath a mountain in Kamioka, nearly 300 km to the west. The detector comprises 50,000 tonnes of extremely pure water, within which a small fraction of the neutrinos interact with atomic nuclei, producing charged particles whose Cherenkov light is picked up by an array of 13,000 photomultiplier tubes.
Because electron neutrinos and antineutrinos cannot be measured simultaneously, the experiment is first run in “neutrino mode” and then switched to “antineutrino mode”. However, in both cases the detector and beam lines consist entirely of matter, as opposed to antimatter, which tends to bias the results in favour of enhanced neutrino, rather than antineutrino, oscillation. As such, the collaboration has to correct its measured results to identify any CP violation, which it does using data from magnetized detectors positioned near the graphite target at the source in Tokai.
Analysing events recorded between 2009 and 2018, the researchers report in Nature that they observed 90 electron neutrinos and just 15 electron antineutrinos. Even allowing for the effect of the purely matter-based components, this major imbalance points to CP-violation in neutrinos, they say. Indeed, the results suggest an almost maximum bias towards matter and come close to excluding any favouring of antimatter. Zero asymmetry between the two is ruled out with a confidence of 95%, as was the case when the collaboration last published an analysis in 2017.
“Undeniably exciting”
In a commentary piece to accompany the paper in Nature, Silvia Pascoli of Durham University in the UK and Jessica Turner of Fermilab in the US write that the “undeniably exciting” results could represent “the first indications” of a solution to the universe’s matter–antimatter asymmetry. But they argue that a true discovery of leptonic CP violation will require “more intense beams, larger detectors and better-understood experimental features”.
The T2K collaboration plans next to reduce systematic uncertainties by upgrading the magnetized detectors as well as collecting more data, while J-PARC intends to upgrade the accelerator and beamline in order to raise the intensity of the beam. T2K spokesperson Atsuko Ichikawa of Kyoto University in Japan says that the collaboration plans to carry out the upgrade within the next two years, although she adds that the schedule might slip due to the coronavirus pandemic.
Super-Kamiokande should then be superseded by the much bigger Hyper-Kamiokande. More than 70 m deep and nearly 70 m wide, Hyper-K will contain 260,000 tonnes of ultrapure water – allowing researchers to make more precise probes of CP-violation as well as look for proton decay. The project was approved officially by Japan in February and is due to start operating in 2027, but here too, says Ichikawa, the virus could lead to delays.
Also this decade researchers hope to switch on the Deep Underground Neutrino Experiment at the Sanford Lab in South Dakota, US. This will use several thousand tonnes of liquid argon to intercept a neutrino beam that has travelled 1300 km from Fermilab in Illinois.
Left: confocal microscopy of a stained brain tissue slice from a rat. Right: the corresponding dMRI slice of the rat’s brain, shown with a voxel-wise map of the MR-derived effective radii. (Courtesy: Jelle Veraart, department of radiology, NYU Grossman School of Medicine)
An international team of researchers has established a way to non-invasively measure the radii of axons, fine nerve fibres in the brain, using diffusion MRI (dMRI). Their proposed method showed good agreement with histological studies in both rodents and humans, and outperformed previous techniques in which reported measurements were an order of magnitude larger than histologically-derived axonal sizes (eLife 10.7554/eLife.49855).
Axons are the wire-like protrusions of a neuronal cell, involved in conducting electrical impulses away from the cell body. They are microscopic in diameter and, as bundles, comprise the primary form of communication of the nervous system. Clinical and histological studies have shown that axon radii can range from 0.1 µm to more than 3 µm in the human brain, and that this size, along with myelination, is responsible for the speed of neuronal communication.
In addition, clinical studies of neurodegenerative diseases, such as multiple sclerosis, have revealed preferential damage to smaller axons, while an electron microscopy study involving subjects with autism spectrum disorder showed a significant difference in axon size distribution compared with healthy controls.
Non-invasive axon radii quantification…
Clearly, axon radius is an important neuroimaging biomarker. Quantifying it accurately, however, has proven to be a highly challenging task to perform non-invasively. Nevertheless, a team of researchers from the Champalimaud Centre for the Unknown in Portugal, NYU Grossman School of Medicine in the USA and the Cardiff University Brain Research Imaging Centre (CUBRIC) in the UK set out to achieve just that.
The team used dMRI, a non-invasive and non-ionizing imaging modality that measures the random motion of water molecules in tissues, revealing details of the tissue microarchitecture. The key novelty of the approach is the researchers’ proposed method to separate MRI signals originating from different compartments of the probed tissue. More specifically, they modelled how the water signal behaves in different tissue types, and thereby managed to suppress signal arising from surrounding tissue outside the axons. In doing so, they could more accurately quantify the properties of the axons in the probed brain tissue samples.
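As a deliberately simplified illustration of the idea (a schematic two-compartment form, not the exact model used in the study), the measured signal S at diffusion weighting b can be pictured as the sum of intra- and extra-axonal contributions:

$$S(b) \;\approx\; f_{\mathrm{axon}}\, e^{-b\, D_{\mathrm{axon}}} \;+\; \bigl(1 - f_{\mathrm{axon}}\bigr)\, e^{-b\, D_{\mathrm{extra}}},$$

where f_axon is the fraction of water inside the axons. Because water outside the axons is far less restricted, the second term decays away much faster as b increases, so at strong diffusion weighting the remaining signal is dominated by intra-axonal water – and its residual diffusion perpendicular to the fibres is what encodes an effective axon radius.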
…shows unprecedented agreement with histology
To validate their work, the researchers first tested their model on rodents using high-resolution (100 x 100 x 850 µm) dMRI images acquired on a 16.4T MR scanner (Bruker BioSpin). After scanning, they collected 50 µm-thick slices from two rat brains, corresponding to the imaged volume, and stained them to highlight the axons. Finally, they used confocal microscopy to visualize and analyse the tissue slices. Their results showed that the median MR-derived effective axon radius was 3–13% larger than the median radius derived from histology, an error that could be due to shrinkage of the filaments through staining.
Histological validation of axon radii mapping using confocal microscopy shows good agreement with the proposed non-invasive dMRI method. (Courtesy: CC BY 4.0/eLife 10.7554/eLife.49855)
The second part of the study focused on human subjects, who were imaged using a lower resolution (3 x 3 x 3 mm) Siemens Connectom 3T MR scanner. Here, the team used histological values reported in the literature for comparison. Despite the coarser resolution, the researchers showed that their method was able to accurately estimate known axonal sizes with errors an order of magnitude smaller than those in previous MRI studies.
The researchers conclude that their study revealed “a realistic perspective on MR axon radius mapping by showing MR-derived effective radii that have good quantitative agreement with histology”. Thinking about the next steps, first author Jelle Veraart adds: “The non-invasive quantification of axon diameters using MRI allows clinicians and researchers to identify problems and developmental pathways that arise in the depths of the brain, driving forward treatment and understanding of development and disease progression.”
Spinal cord injuries often result from impacts such as car accidents, but they can also occur due to tumour growth and other non-traumatic causes. They can leave patients fully or partially paralysed and can affect the functioning of organs, leading to a myriad of health complications. Now, a group of researchers at the Materials Science Institute of Madrid (ICMM) in Spain is developing a novel treatment using graphene – a single layer of carbon atoms with unique mechanical and electrical properties.
Taking a biomaterials approach, the group is developing graphene-based foams that could be implanted in the spinal cord. These foams can act like a scaffold or trellis, assisting with the regeneration of neurological tissue within the healthy region surrounding the injury site. Research group leader María Concepción Serrano López-Terradas describes the project in this interview with Physics World, recorded before Spain’s nationwide lockdown began in March due to the COVID-19 pandemic.
Find out more about the innovative materials research taking place at the ICMM in this video profile we shared last week. Also take a look at the Physics World Nanotechnology Briefing, published in April 2020. This free-to-read collection celebrates how nanotechnology is playing an increasingly important role in applications as diverse as medicine, fire safety and quantum information.