Researchers at CERN have set new benchmarks for the timing performance of state-of-the-art systems for time-of-flight positron emission tomography (TOF-PET). Their tests show that the best photodetectors and scintillation materials can achieve a timing resolution far below 100 ps – a five-fold improvement on standard commercial systems. If implemented in the clinic, these advances would enhance the quality of PET images, enabling doctors to reduce the dose of radioactive tracer they administer to patients.
TOF-PET is a highly sensitive imaging technique that reconstructs a 3D view of tissues using precise measurements of the times at which two simultaneously-emitted gamma-ray photons arrive at a scintillator-based detector. These photons are produced when the decay of a radioactive tracer triggers electron–positron annihilations inside the patient’s body.
Most commercial TOF-PET systems, which exploit silicon photomultiplier photodetectors (SiPMs) and the fast scintillator lutetium oxyorthosilicate (LSO:Ce), achieve a timing resolution of around 500 ps for whole-body scans. For more localized imaging, 200 ps seems within reach. But a group of more than 40 scientists – including several authors involved in this work – has launched a competition called the 10 ps Challenge that aims to push the limits even further. Imaging on this timescale could deliver a 16-fold improvement in sensitivity and avoid the need for computational techniques to reconstruct the image.
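To see why timing matters, consider the standard time-of-flight relation: a coincidence timing resolution Δt localizes the annihilation point along the line of response to within Δx = cΔt/2. The short sketch below runs the numbers for the resolutions quoted in this article (illustrative arithmetic only, not a calculation from the paper).

```python
# The standard TOF relation: a coincidence timing resolution dt
# localizes the annihilation point along the line of response
# to within dx = c * dt / 2.

C_MM_PER_PS = 0.2998  # speed of light in mm/ps

def localization_mm(ctr_ps: float) -> float:
    """Positional uncertainty along the line of response, in mm."""
    return C_MM_PER_PS * ctr_ps / 2

for ctr_ps in (500, 200, 100, 58, 10):
    print(f"CTR {ctr_ps:>3} ps  ->  dx = {localization_mm(ctr_ps):5.1f} mm")
# 500 ps gives ~75 mm, while 10 ps gives ~1.5 mm -- comparable to a
# scanner's intrinsic spatial resolution, which is why ~10 ps imaging
# could make tomographic reconstruction largely unnecessary.
```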
Comparing photodetectors
In this study, reported in Physics in Medicine and Biology, Stefan Gundacker and colleagues have made the most precise measurements yet of the timing performance of leading scintillation materials and photodetectors. They first measured the intrinsic single-photon timing resolution (SPTR) of eight industrial and research SiPMs, with the best value of 70 ps achieved by the NUV-HD device from Italian research institute FBK. By combining this detector with a small crystal of LSO:Ce co-doped with calcium, they demonstrated an overall timing resolution of 58 ps, increasing to 98 ps for the 20 mm-long crystals typically used in TOF-PET.
The team then used the FBK device to measure the timing response of different scintillating materials, including bismuth germanate (BGO), which is cheaper than LSO:Ce and slightly safer for clinical environments. High-frequency readout electronics were able to capture extremely fast scintillation processes, revealing a pronounced peak in the light emitted from BGO within just a few picoseconds. This is caused by Cherenkov emission, and the team estimates that BGO emits 17 prompt Cherenkov photons from each gamma-ray interaction.
The measurements also show a direct correlation between the timing performance of the TOF-PET system and the intrinsic SPTR of the photodetector, with simulations indicating that this fast Cherenkov emission could improve the timing resolution of BGO-based systems if the SPTR of the photodetector can be reduced. For example, an idealized detector with an SPTR of 20 ps – which has been measured on research devices – would enable BGO systems to reach a timing resolution of around 30 ps for small crystals, compared with 158 ps measured for the FBK photodetector used in this study.
“We are still far from having a perfect photodetector in PET,” Gundacker tells Physics World. “But there are other intermediate solutions to investigate, especially in the case of BGO where we have shown that detecting the first photon with high time precision would already be sufficient.”
Pushing boundaries
The tests also reveal that BC422 plastic scintillators are even faster than LSO:Ce, although their low detection efficiency rules them out for PET applications. The most promising alternative is the ultraviolet emitter barium fluoride (BaF2), which achieved a timing resolution of 51 ps when combined with a UV photodetector from FBK – with further improvements possible by pushing the performance of the UV detector.
The team concludes, however, that current devices and materials will not be good enough for practical, commercial TOF-PET systems to break the 100 ps barrier, let alone the more ambitious 10 ps Challenge. To make further progress towards this target, researchers will need to investigate novel approaches such as quantum-confined systems, as well as better methods for collecting and detecting the light.
Some 20,000 optical scientists and engineers will be converging in San Francisco at the beginning of February for SPIE’s flagship Photonics West and BIOS events. Delegates will be spoilt for choice, with three international conferences presenting more than 5200 technical papers, and two world-class exhibits featuring the latest products from around 1400 international companies.
Each of the conferences will be headlined by impressive plenary speakers, including Nobel laureate Eric Betzig, David Payne from the Optoelectronics Research Centre at Southampton University, and Google’s Trond Wuellner – who promises to share his vision for the future of computing. A parallel programme for entrepreneurs and investors will feature the popular Startup Challenge, in which early-stage companies compete to win funding from some of the largest companies in the photonics industry.
Many of the exhibitors on the show floor will be launching their latest product innovations, some of which are highlighted below.
TOPTICA takes optical frequency measurement to the 21st significant digit
The DFC Core+ frequency comb (Courtesy: TOPTICA)
TOPTICA’s frequency comb DFC CORE+ has demonstrated world-record stability in a joint research project with the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. The frequency comb is designed for use in atomic clocks, which rely on probing optical transitions in cold atoms with very stable and narrow-linewidth laser light. The probed atoms provide the long-term stability and accuracy of the clock, but the most stable lasers have a different wavelength from the best atomic references. Frequency combs are used to transfer the stability from the wavelength of the narrow-linewidth laser to the wavelength of the cold-atom transition.
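The arithmetic behind that stability transfer follows from the generic comb relation: every line of a stabilized comb sits at fₙ = f_ceo + n·f_rep. The toy sketch below, with assumed, illustrative parameters (it does not describe TOPTICA’s specific stabilization scheme), shows how two lasers locked to different comb lines inherit a rigidly defined frequency ratio.

```python
# The generic comb relation: every comb line sits at
# f_n = f_ceo + n * f_rep, so two lasers locked to lines n1 and n2
# inherit a fixed, known frequency ratio once f_ceo and f_rep are
# themselves controlled. Parameter values below are assumed examples.

F_REP = 80e6  # repetition rate in Hz (assumed example value)
F_CEO = 20e6  # carrier-envelope offset frequency in Hz (assumed)

def comb_line(n: int) -> float:
    """Absolute optical frequency of comb line n, in Hz."""
    return F_CEO + n * F_REP

# e.g. lines near 282 THz (~1064 nm) and 429 THz (~698 nm):
f1 = comb_line(3_525_000)
f2 = comb_line(5_362_500)
print(f"f1 = {f1:.3e} Hz, f2 = {f2:.3e} Hz, ratio = {f2 / f1:.15f}")
```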
Tests by PTB and TOPTICA researchers have demonstrated that the DFC CORE+ can achieve a stability transfer at a record level of 10⁻²¹ over an averaging time of 10⁵ s. This was achieved by suppressing the influence of optical path-length fluctuations through a combination of active phase-stabilization and common-path propagation, a scheme that is technically simple and robust against environmental changes. The frequency ratio was measured with an accuracy of 9.4 × 10⁻²², equivalent to the 21st significant digit.
The advance paves the way not only for further improvements in atomic clocks, but also more sensitive gravitational wave detectors.
The DFC CORE+ will be displayed at BiOS at Booth #3209 and Photonics West at Booth #8209. An open access article describing the research is available for download, while more information about TOPTICA’s frequency combs can be found on the company’s website.
Motion systems deliver speed and precision
A gantry and hexapod for industrial photonics alignment and automation (Courtesy: PI)
PI will be showcasing the latest generation of its gantry systems and motion subsystems for automated high-speed photonics alignment and laser processing applications.
The company’s Silicon Photonics Alignment product line addresses the challenge of aligning multiple optical paths with multiple interacting inputs and outputs, each of which requires optimization. The automated alignment engines include three- to twelve-axis mechanisms, controllers with firmware-based alignment algorithms, and the software tools needed to achieve the accuracy required for markets such as packaging, planar testing, and inspection.
Hexapod six-axis parallel positioning systems are instrumental to fast alignment for silicon photonics due to their lower inertia, improved dynamics, and smaller package size, as well as higher stiffness and programmable pivot point. Fast, linear-motor-driven systems that exploit industrial motion controllers will also be shown.
Tunable mid-IR laser combines speed with performance
The MIRcat-QT mid-IR laser
DRS Daylight Solutions, a leading supplier of mid-infrared quantum-cascade lasers, will be featuring the MIRcat-QT, its flagship broadly tunable laser. Now incorporating the company’s proprietary ZeroPoint technology, the laser provides improved beam pointing accuracy and stability for applications such as nanoscale imaging, point-scanning microscopy, photothermal and photoacoustic imaging, stand-off detection, and single-mode fibre-optic coupling.
The MIRcat-QT offers tuning ranges approaching 1000 cm⁻¹ with wavelength coverage options spanning 3 µm to more than 13 µm. The system’s flexible, modular design allows factory configuration of up to four pulsed or continuous-wave/pulsed modules, plus the option to add or upgrade modules later.
Modules are available that can deliver output peak powers up to 1 W and/or average output power up to 0.5 W. Peak tuning speeds exceed 30,000 cm⁻¹/s, while a high-precision tuning mechanism provides wavelength repeatability of less than 0.1 cm⁻¹. MIRcat’s TEM₀₀ output beam quality enables high-efficiency fibre coupling, and the new ZeroPoint technology ensures this high efficiency is maintained across the entire tuning range.
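For readers more used to thinking in wavelengths, the sketch below converts the quoted wavenumber specifications using the standard spectroscopic relation ν̃ (cm⁻¹) = 10⁴/λ (µm). The arithmetic is illustrative only; the input values are simply the figures quoted above.

```python
# Unit sanity checks for the quoted specifications:
# wavenumber and wavelength are related by nu (cm^-1) = 1e4 / lambda (um).

def to_wavenumber(wavelength_um: float) -> float:
    """Wavenumber in cm^-1 for a wavelength in micrometres."""
    return 1e4 / wavelength_um

def wavelength_step_nm(wavelength_um: float, dnu_cm: float = 0.1) -> float:
    """Wavelength step (nm) equivalent to a wavenumber step dnu_cm."""
    return (wavelength_um * 1e-4) ** 2 * dnu_cm * 1e7  # cm converted to nm

print(to_wavenumber(3.0), to_wavenumber(13.0))  # ~3333 and ~769 cm^-1
print(1000 / 30_000)                            # a 1000 cm^-1 sweep: ~33 ms
print(wavelength_step_nm(3.0), wavelength_step_nm(13.0))
# the <0.1 cm^-1 repeatability spec corresponds to ~0.09 nm at 3 um
# and ~1.7 nm at 13 um
```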
To find out more about the MIRcat-QT system, visit DRS Daylight Solutions at Booth #2327.
Software speeds up the design of laser-based optical systems
BeamXpertDESIGNER models the propagation of laser radiation through an optical system (Courtesy: BeamXpert)
BeamXpert, a spin-off from the Ferdinand-Braun-Institut, Leibniz-Institut für Höchstfrequenztechnik (FBH), will be introducing its software BeamXpertDESIGNER at this year’s Photonics West.
BeamXpertDESIGNER allows laser users and developers to design optical systems based on laser radiation. It combines two simulation models that together offer real-time calculations and sufficient accuracy for most practical design tasks.
The first, extremely fast simulation algorithm enables real-time prediction of beam-propagation parameters such as the beam diameter and position, along with Rayleigh lengths, divergence angles and other properties defined in the ISO standards for lasers. The second model determines the aberrations that may degrade the beam quality, helping the user to choose the most appropriate optical components for the system.
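The kind of real-time calculation involved rests on the textbook Gaussian-beam relations that underlie the ISO beam parameters. The sketch below shows those relations in a few lines; it is a generic model with assumed example values, not BeamXpert’s proprietary implementation.

```python
import math

# Textbook embedded-Gaussian propagation: Rayleigh length, far-field
# divergence and beam radius versus distance for a beam of quality M^2.

def rayleigh_length(w0_mm: float, wavelength_um: float, m2: float = 1.0) -> float:
    """Rayleigh length in mm for waist radius w0 and beam quality M^2."""
    return math.pi * w0_mm**2 / (m2 * wavelength_um * 1e-3)

def beam_radius(z_mm: float, w0_mm: float, wavelength_um: float, m2: float = 1.0) -> float:
    """Beam radius in mm at distance z from the waist."""
    zr = rayleigh_length(w0_mm, wavelength_um, m2)
    return w0_mm * math.sqrt(1.0 + (z_mm / zr) ** 2)

# Example: 1 mm waist radius, 1064 nm, M^2 = 1.2 (assumed values)
zr = rayleigh_length(1.0, 1.064, 1.2)
theta = 1.0 / zr  # far-field divergence half-angle in rad
print(f"zR = {zr:.0f} mm, divergence = {theta * 1e3:.2f} mrad, "
      f"w(2 m) = {beam_radius(2000, 1.0, 1.064, 1.2):.2f} mm")
```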
BeamXpertDESIGNER is supplied with a lifetime license that includes support and updates during the first year. The software is quick and intuitive to learn, and is delivered with more than 16,000 components, a comprehensive glass library, and compatibility with ZEMAX formats.
Visit BeamXpert at Booth #4545-18 in the German Pavilion to test the software for yourself.
DPSS lasers for high-precision applications
Scottish laser manufacturer UniKLasers will be showcasing an expanding range of single-frequency diode-pumped solid-state (DPSS) lasers for high-performance applications such as metrology, spectroscopy, holography, quantum sensing and optical trapping.
The company will be presenting its second-generation Solo 640 series of lasers, which deliver impressive output powers of up to 1000 mW. Customer-focused specification enhancements include advanced remote-control operation and extended power and wavelength stability, enabling more than eight hours of non-stop operation.
Meanwhile, the Solo 780.24, Solo 689.4 and Solo 698.4 lasers have been designed for quantum applications such as cold-atom interferometry, gravimetry and atomic clocks. UniKLasers is a member of the Pioneer Gravity consortium, which is currently developing a commercial quantum gravimeter. This is the company’s fourth quantum project to be sponsored by Innovate UK, and within it UniKLasers is focusing on the development of higher-power lasers and new quantum wavelengths. The next anticipated release will be the Solo 813, the “magic wavelength” laser for use in strontium lattice clocks.
To find out more, visit UniKLasers at the UK Pavilion at Booth #5053. On Tuesday 4 February the company will present a spectral performance analysis of DPSS and ECD single-frequency lasers at the LASE & BiOS poster session, and will also provide an update on its high-power red laser at the Holography Technical event.
An Italian approach to CO2 lasers
The Blade Self-Refilling laser.
The Italian laser manufacturer El.En. is a pioneer in the development of rechargeable CO2 laser sources. Unlike conventional CO2 lasers, the company’s Blade Self-Refilling lasers are equipped with a special slot in which to insert the CO2 gas-mix cylinder, allowing an operator to easily replace the cylinder and regenerate the laser source in just a few seconds – which ensures that the laser is always working at its full potential.
The laser sources in the Blade Self-Refilling series also offer some of the highest energy efficiencies in their category. Power options range from 350 to 1200 W, with all versions except the most powerful supplied in the same form factor, which simplifies the engineering of systems based on the different power options.
Alongside its portfolio of laser sources, El.En. also supplies a series of high-performance laser-scanning heads. These include the Gioscan series of galvo motors that offer the fast acceleration needed to provide an immediate and precise response in all beam steering applications. As an independent producer that builds all of its products in-house, El.En. offers full technical assistance as well as the ability to build customized solutions for specific applications.
To find out more about this Italian company and its products, visit Booth #5470 at Photonics West or go directly to elenlaser.com
Researchers in the US have combined artificial intelligence (AI) with an advanced laser-based imaging technique to create a system that can identify different types of brain cancer from surgical samples with a similar accuracy to pathologists, but much, much faster. The test could enable surgeons to bypass the pathology lab and receive real-time diagnostic information during operations, to inform their decision-making (Nature Med. 10.1038/s41591-019-0715-9).
Every year some 15.2 million people worldwide are diagnosed with cancer. More than 80% of them will undergo surgery, and in many cases this involves removing and analysing a portion of the tumour during the operation. This provides a preliminary diagnosis, ensures that the specimen is adequate to achieve a final clinical diagnosis later, and helps guide surgical management.
In the US alone, more than 1.1 million tumours are biopsied annually. These are typically analysed using a histology workflow that is more than a century old. The tissue sample is taken to a laboratory, processed and prepared by skilled technicians, and then interpreted by a pathologist – all while the patient and surgical team wait in the operating theatre. But, both globally and within the US, there is a shortage of pathologists to provide such intraoperative diagnosis.
In recent years, Daniel Orringer, a neurosurgeon at NYU Langone, has been involved in the development of a novel laser-based technique, known as stimulated Raman histology, that provides rapid, high-resolution images of unprocessed biological tissue. By using laser light to excite the tissue, this technology generates contrast between the different components, such as lipids and proteins, revealing diagnostic features that are poorly visualized, or cannot be seen at all, with other histology methods.
Orringer and his colleagues have demonstrated that stimulated Raman histology can be combined with computer-aided diagnosis to classify brain tumours. However, they believe that matching the accuracy of the pathology laboratory in a clinical setting using this technique could be challenging – particularly with the current shortage of trained pathologists. But deep learning has already achieved clinical-level accuracy for image classification in other medical fields, such as ophthalmology, radiology and dermatology, so they wondered whether AI could help here too.
The researchers trained the deep convolutional neural network behind their AI tool on stimulated Raman histology images of more than 2.5 million samples from 415 patients. It was taught to classify tissues into 13 histologic categories, focused on the 10 most common brain tumours, and to offer a diagnostic prediction.
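In outline, such a tool maps an image patch to a probability over the 13 categories. The sketch below is a deliberately generic PyTorch classifier that makes this concrete; the study’s actual architecture, preprocessing and training pipeline are described in the Nature Medicine paper and are not reproduced here.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 13  # the 13 histologic categories mentioned above

class PatchClassifier(nn.Module):
    """Illustrative patch classifier, not the authors' network."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over the 13 categories

model = PatchClassifier()
logits = model(torch.randn(4, 3, 300, 300))   # a batch of image patches
prediction = logits.softmax(dim=1).argmax(dim=1)  # predicted category
```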
In a clinical trial at three medical centres in the US, the team tested the system on 278 patients undergoing brain tumour resection or epilepsy surgery. Brain tumour biopsies were collected from the patients and then split so that they could be sent for diagnosis via both the pathology laboratory process and the AI-based test.
The results for the two trial arms were very similar. The AI system classified 94.6% (264 of 278) of the tumours correctly, while the pathologists achieved a diagnostic accuracy of 93.9% (261 of 278). But imaging with stimulated Raman histology and AI diagnosis was significantly faster, providing a classification of brain tumour type in the operating room in less than 150 s. Typically, the pathology laboratory takes around 20–30 min.
There were also interesting differences between the errors made by the pathologists and the AI-based test. The AI correctly classified all 17 of the tumours that the pathologists diagnosed incorrectly, while the pathologists correctly diagnosed all 14 of the cancers that the AI misdiagnosed. The study authors say that this suggests that pathologists could use the new system to help them classify challenging specimens.
“As surgeons, we’re limited to acting on what we can see; this technology allows us to see what would otherwise be invisible, to improve speed and accuracy in the operating room, and reduce the risk of misdiagnosis,” says Orringer. “With this imaging technology, cancer operations are safer and more effective than ever before.”
“Stimulated Raman histology will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” adds Matija Snuderl, neuropathologist at NYU Langone, who was involved in the study.
As far back as I can remember I had an interest in how things worked, from items around the house, to the planets and stars. I’ve always enjoyed dismantling and building things, and having a working knowledge of the basics of physics helped with that. In my early years at high school I had a particularly good physics teacher. He was rather loud and outspoken on all subjects, not just physics, and his outbursts to students would be considered politically incorrect today. I actually found this rather refreshing. That, coupled with his fabulous practical demonstrations, kept me engaged and looking forward to physics lessons.
What did your BSc in physics focus on?
My degree was in applied physics, so the slant was towards practical applications, which suited me well. As I took a “sandwich” course, I spent a year in industry working for a company making mass spectrometers. This gave me a good grounding in some new areas including materials science, surface science, vacuum technology and electronics. I was even sent abroad to install an instrument at a steel works.
Did you ever consider a permanent academic career in the physical sciences?
It never occurred to me to stay in academia. By the time I graduated I was keen to get out into industry or the practical side of research and development. A recession was under way, so jobs were thin on the ground. I managed to get funding to do an MSc in surface science. From there I made a contact at the Royal Signals Radar Establishment (now QinetiQ) in Malvern, UK, where there was a renowned crystal growth department. When a vacancy arose in the materials characterization department, I applied and was offered the job piloting the two magnetic sector mass specs.
How did you get into technology journalism?
My work at Malvern was mostly with the semiconductor growth teams. As well as bulk-crystal growth there was a big effort in thin-film growth for semiconductor devices, and a lot of this was in compound semiconductors, which back then was a niche technology compared with silicon. A former colleague went to the US to work for a small publishing company that produced a magazine on the compound semiconductor industry. When that company was sold to IOP Publishing [which also publishes Physics World] I was asked if I’d like to work alongside the editor in the UK office.
I really enjoyed the work on Compound Semiconductor. At the time there was a buzz around compound semiconductors. The industry was gearing up to supply devices for the low-energy lighting revolution; emerging smartphone devices; and the roll-out of faster fibre-optic networks, which were starting to replace copper cable networks in towns and cities globally.
You retrained as an electrician – what were the challenges in doing that?
I started studying electrical installation courses out of interest through evening classes at my local college. By then my contract at IOP Publishing was coming to an end and I’d decided to work part-time for another publisher. I also had freelance editorial work for a government department and via business contacts that I’d made through the magazine. I was lucky because I had a reasonable income from this, and could work a little with a friend who was an electrician to gain important practical experience. I was able to grow the electrical business while still doing some freelance editorial work to plug the gaps.
Converting what’s studied in the classroom into designing and fitting actual installations is a steep learning curve. Then of course you need to build a customer base. I’ve been very fortunate to have some good regular business, including several companies that spun out of QinetiQ. I wanted to be self-employed for the flexibility it offered, but you also need great discipline and time management when you’re running your own business.
How has your physics background been helpful in your work, if at all?
It definitely has been helpful, because it gives you an intuitive understanding of what’s going on or what might be going wrong, as well as the ability to step back and think things out logically. Electricians spend most of their time battling against Ohm’s law. The wiring regulations are a minefield, but the industry simplifies this down to an array of standard methods for doing most tasks. Unfortunately, this also leads to rigid adherence to these without really considering their origins. I recently saved a customer a lot of money by calculating that the existing main earthing conductor for the installation was of a sufficient size when another company wanted to pull the building apart to get a bigger cable in because “that’s the size we always use in these sorts of installations”.
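That kind of check typically comes down to the adiabatic equation in the wiring regulations, S ≥ √(I²t)/k, which gives the minimum conductor cross-section that can carry a fault current I for a disconnection time t. The sketch below runs it with assumed example values – the fault current, disconnection time and k factor are illustrative, not figures from this case.

```python
import math

# The adiabatic sizing check: minimum cross-sectional area that can
# survive a fault of current I (A) for duration t (s), for a conductor
# material factor k. All input values below are assumed examples.

def min_csa_mm2(fault_current_a: float, disconnect_s: float, k: float) -> float:
    """Minimum conductor cross-sectional area, in mm^2."""
    return math.sqrt(fault_current_a**2 * disconnect_s) / k

# e.g. a 1.5 kA earth fault cleared in 0.4 s, copper conductor with
# k = 143 (a typical value for a separate PVC-insulated protective
# conductor in BS 7671 -- check the tables for the actual installation):
print(f"{min_csa_mm2(1500, 0.4, 143):.1f} mm^2")  # ~6.6 mm^2, so an
# existing 10 mm^2 earthing conductor would already be sufficient
```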
Any advice for today’s students?
Stay curious. Not just about your own particular niche of physics, but also about what others are doing. Try to move around within your organization, as I did in Malvern. It’s increasingly rare for anyone to stay in the same field or with the same organization for their whole career. A well-rounded and adaptable individual is far more employable.
All things are possible, it seems, when fundamental and applied physics are put to work solving the multidisciplinary science and technology challenges facing the food-manufacturing sector. Think new approaches for compelling product innovation and reduced costs, as well as the scientific insights that will enable the food industry to reimagine best practice and help ensure a more sustainable world. That, give or take a few embellishments, is the key take-away message from the Fourth IOP Physics in Food Manufacturing (PiFM) Conference in Leeds, UK, earlier this month.
“The PiFM conference is a unique forum, bringing together an unusually diverse range of academics and students with R&D scientists from the food-and-drink industry,” explained John Bows, R&D director at PepsiCo Global Snacks in Leicester, UK, and chair of the IOP PiFM group, which co-sponsored this year’s event along with the IOP’s Liquids and Complex Fluids Group. “We strive to educate physics academia on why food is so interesting [from a scientific perspective], while informing the food industry why they need physics,” he added.
In search of solutions
The meeting was the most international PiFM conference to date with speakers from Australia, China and Europe. Delegates who talked to Physics World commended the depth and breadth of the presentations – from the biomechanics of swallowing to new kinetic models of shear thickening to acoustic measurements in food processing (see box below). “It is good to know that there is a science cadre, by no means all physicists, who can address complex problems in the processing of the heterogeneous materials and almost all states of matter that make up foods,” noted Megan Povey, chair of the PiFM 2020 conference and a food physicist at the University of Leeds.
It appears that many of those complex problems land routinely on the desk of Martin Whitworth, technology lead for strategic knowledge development at Campden BRI, one of the world’s foremost food science and research centres, headquartered in the UK. In a call to arms to the assembled PiFM community, Whitworth showcased “a few industry problems perhaps amenable to physics solutions”. These included the need for enhanced diagnostic techniques to detect foreign bodies in food – especially impurities such as wood, plastic and glass fragments – as well as a requirement to evaluate the uniformity of thermally assisted high-pressure food processing (an emerging technique which uses pressures up to 600 MPa to reduce the thermal load of sterilization). Whitworth also highlighted the need to search for more stable and repeatable foaming capabilities in plant-based and vegan food products.
Our approach to physics must become far more inclusive if it is to make the contribution necessary to meet the tests we face as both a scientific and industrial community
Megan Povey
Given such wide-ranging demands for product and process innovation, it was disappointing that small and medium-sized enterprises (SMEs) were conspicuous by their absence. “It is important for us, as a special interest group, to address this issue because the food industry is dominated by SMEs, both in terms of employment and turnover,” Povey acknowledged. One option for next year’s event is collaboration with other learned societies – such as the Society of Chemical Industry – or food industry bodies like the Food and Drink Federation.
The next generation
The conversation at the meeting also turned to careers and professional development, with the role of mentorship a prominent talking point in a lively panel debate featuring five early-career scientists from research and industry. “[As a student] you need senior people to champion you for funding, as well as giving you independence and the opportunity to strike out,” explained Zachary Glover, a PhD student at the University of Southern Denmark.
His point was amplified by panel chair Beccy Smith, head of the modelling and simulation group at Mondelēz International, who noted the importance of identifying the right mentor – someone who will give credit where it is due. “They’re easy to spot at conferences,” she added, “as they’ll always recognize the efforts of their co-workers and students first.”
While other panellists fretted about the near-term industrial relevance of their PhD research, it fell to a more experienced voice – Alessandro Gianfrancesco of Nestlé Product Technology Centre in the UK – to provide the necessary reassurance. “Your PhD is about the training, about tackling a complex problem, developing your research approach and how you present that research,” he said. “You cannot predict what will be useful in 20 years’ time, so just keep an open mind and keep on learning.”
For her part, Povey sees greater openness – individually and collectively – as the only way to go. “PiFM 2020 for me demonstrated the unifying power of physics in helping to understand the world,” she concluded. “The diversity of participants, in particular, is something that we must build on for the future. Our approach to physics must become far more inclusive if it is to make the contribution necessary to meet the tests we face as both a scientific and industrial community.”
Brief tasters from the PiFM 2020 conference and poster sessions
Of couscous and chocolate
A UK-Dutch team presented experimental findings that point to rheology-based design principles for industrial granulation – a ubiquitous operation in food manufacturing in which a small amount of liquid is incorporated into dry powders. Examples of the process include so-called “wet granulation”, in which a minimal amount of liquid is added to produce matt solid granules, for example in couscous and baby food. In “overwet” systems, a larger amount of liquid is added and the mixture turns into a flowing suspension, such as with liquid chocolate. The team’s experiments, using an industrially realistic model powder called Spheriglass, indicate the two regimes of granulation “may be amenable to a single, unified description”. Daniel Hodgson (University of Edinburgh) received the best student presentation prize for this work.
Targeted emulsions
Particle-stabilized emulsions, also known as Pickering emulsions (PEs), are attracting significant research interest as “biodegradable delivery vehicles” for a range of active compounds and micronutrients. Andrea Araiza-Calahorra and colleagues at the University of Leeds described the use of novel protein-based soft-gel particles as stabilizers in a new class of “gastric-stable” PEs. They used complementary techniques – among them static and dynamic light scattering, cryo-scanning electron microscopy and gel electrophoresis – to compare the behaviour of the PEs pre- and post-digestion. The studies provide new design principles for PE-based delivery systems for compounds that require targeted intestinal release. Andrea Araiza-Calahorra from the University of Leeds received the best student poster prize for this research.
Age-appropriate food and drink
A major reason why older people become ill is that they no longer salivate properly, to the point that they stop enjoying their food and, in turn, stop eating altogether. Marco Ramaioli, a senior scientist at INRAE in Paris, reported work that he and his colleagues have carried out to understand the biomechanics of swallowing, with the aim of developing safer and more appropriate food and drinks for this group. Their mixed-methodology approach includes in vivo ultrasound observations, simplified fluid-mechanical analysis, novel biomimicking experiments and studies of the flow of a food bolus containing solid inclusions.
In the weeks since the Physics World team kicked off the new year by testing a pair of graphene headphones, we’ve received a steady stream of comments about our review and a related segment on our weekly podcast. A few people have asked our opinion of other graphene headphones, and one man went so far as to question whether the “graphene” label he found on an inexpensive pair of headphones was anything more than “misleading click-bait”.
I can’t judge any product I haven’t tried, and I also can’t judge a product’s graphene content without taking it apart and getting experts to analyse it. However, with those two caveats firmly in place, here are two facts to consider should you happen to be in the market for graphene headphones (and, by extension, graphene anything).
First, a lot of things contribute to how a pair of headphones will sound. The physical composition of the headphone drivers (graphene, PET, cellulose, or whatever) is only one factor. Others include the method by which those drivers create sound (this blog post explains a few of the possibilities, and their trade-offs); the quality of the other electronics; and simple things like how well the headphones fit over/in your ears. Some of these things are more expensive to optimize than others. The graphene headphones I tested are a high-end product with, it appears, a high-end price, so I suspect they are pretty good at the non-graphene-related aspects of headphone design – and that much of their cost comes from that, not from the graphene.
Second, graphene exists in many forms, with many price points. A lot of physicists are interested in ultra-pure, single-layer graphene, which has amazing electronic properties. This “physicists’ graphene” is difficult (and expensive) to make in macroscopic quantities. However, others are more interested in graphene’s mechanical properties, such as strength and rigidity. To get these properties, you don’t need ultra-pure single-layer graphene. You can get by with a cheaper type, which for argument’s sake I will term “materials scientists’ graphene” (this is an oversimplification, but it conveys the right feel). The proprietary graphene-based material in the headphones I tested was most likely in this category.
But even this type of graphene is expensive relative to a third type of graphene, which is cheap enough to be added in bulk to substances like paint or resin to improve their heat transport and/or electrical conductivity. As I understand it, this “engineers’ graphene” functions like a superior version of graphite, and manufacturers are selling it by the kilo (and maybe, soon, by the tonne).
I’m not trying to start a three-way brawl between physicists, materials scientists and engineers about which type of graphene is better. They all have their uses, and they all qualify as graphene. But here’s the problem: a product can advertise itself, accurately, as containing graphene even if the graphene it contains is not of a type or quantity that’s going to make a difference to its performance. What’s more, if an unscrupulous manufacturer wants to put graphite in its product and call it “graphene”, it’s hard for ordinary consumers to know the difference. To the naked eye, graphene and graphite both look like gritty black powders. You need more sophisticated testing equipment to distinguish between them, and between the various grades of graphene.
Certification is a huge issue for the graphene industry, and a lot of people are working on it. However, until there’s a strong framework for regulation, the next best thing is probably to look for independent endorsements by people and organizations who know what they’re talking about. The headphones I tried were endorsed by the co-discoverer of graphene, Kostya Novoselov, as making good use of the material. Since then, I’ve learned of a different make of graphene headphones that has been endorsed by an industry body called the Graphene Council. However, until someone gives Physics World its own product-testing lab and qualified technicians to run it, that’s about all I can say – except to add that there are some graphene products I definitely won’t be testing with my colleagues.
The effect of rapid rotation on a single quantum spin in a piece of diamond has been measured for the first time. Alexander Wood of the University of Melbourne and colleagues rotated the diamond at 200,000 rpm and used laser light and microwaves to measure the effect on the spin. The technique could be further developed to measure rotation on the nanoscale, say the researchers.
Diamonds contain impurities called nitrogen-vacancy (NV) centres that comprise a single electron-like spin that is very well isolated from the surrounding environment. The spin can be measured and manipulated using light and microwaves. As a result, NV centres have proven to be very useful in a wide range of applications from quantum sensors to quantum-information storage.
Spin echoes
Spin is quantized intrinsic angular momentum, which means that an NV spin should be affected if the diamond is rotated. To study this effect, Wood and colleagues used optically detected spin-echo magnetic resonance, a technique developed to detect and image small numbers of electron or nuclear spins in samples.
In the experiment, a piece of diamond is mounted onto a rotating cylinder and a magnetic field is applied along the axis of rotation. The measurement involves first putting the NV spin into a lower-energy state by firing a laser pulse at the diamond. Then the diamond is subjected to a series of microwave pulses, which rotate the direction of the NV spin. Finally, the energy state of the NV spin is read out by observing the fluorescent light that it emits.
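As a rough illustration of how such a pulse sequence works, the toy Bloch-vector model below propagates a spin through a textbook Hahn-echo sequence (π/2 pulse, free evolution, π pulse, free evolution, π/2 pulse). It shows the generic feature such experiments exploit: static dephasing cancels, while any phase that is unbalanced between the two halves of the sequence survives into the readout. This is a pedagogical sketch, not the authors’ analysis.

```python
import numpy as np

# Toy Hahn-echo model: spin state as a Bloch vector, microwave pulses
# as rotations about x, free precession as rotations about z.

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def echo_signal(phi_static, delta_phi):
    """Probability that the spin is read out in the flipped state."""
    s = np.array([0.0, 0.0, 1.0])             # laser-initialized spin
    s = rot_x(np.pi / 2) @ s                  # first microwave pulse
    s = rot_z(phi_static) @ s                 # free evolution
    s = rot_x(np.pi) @ s                      # refocusing pulse
    s = rot_z(phi_static + delta_phi) @ s     # evolution + extra phase
    s = rot_x(np.pi / 2) @ s                  # final readout pulse
    return (1.0 - s[2]) / 2.0

# The static phase cancels; only the unbalanced phase delta_phi survives:
for dphi in (0.0, np.pi / 4, np.pi / 2, np.pi):
    print(f"delta_phi = {dphi:4.2f} rad -> P(flip) = {echo_signal(2.7, dphi):.3f}")
```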
The team found that the probability that the NV spin ended up in a higher-energy state depended upon the angle between the diamond’s axis of rotation and the polarization of the applied microwave signal. This is just as predicted by theory, and the team say that the technique could be developed as a way of “probing rapid rotation and motion on quantum-relevant timescales”. Potential applications include sensitive torque detectors and studies of the fundamentals of quantum mechanics.
Delivery as planned: the ClearView 3D Dosimeter enables clinical users to visualize and verify the detail of complex SRS dose distributions. (Courtesy: Modus QA)
The growing clinical application of stereotactic radiosurgery (SRS) for the treatment of metastatic tumours in the brain presents a significant dosimetric and quality assurance (QA) challenge for medical physicists and their clinical colleagues. Put simply, the precision targeting inherent to SRS necessitates all manner of patient, machine and process-level checks to verify that conformal, high-dose radiation is delivered to the patient as intended – usually in one or a few fractions – while minimizing damage to surrounding healthy tissue and organs.
With this in mind, Modus QA, a Canadian supplier of QA products and services to radiation oncology clinics, is stepping up the commercial roll-out of its ClearView 3D Dosimeter, a non-diffusing, radiochromic hydrogel dosimeter designed specifically to support advanced radiotherapy techniques like SRS. Working in tandem with the vendor’s VISTA optical CT scanner, the dosimeter helps users to confirm that planned treatments are delivered accurately, visualizing the intricate detail of complex dose distributions for multiple-lesion dosimetry.
John Miller: “The goal is an integrated 3D dosimetry system that gives users more accurate measurements of dose and dose distribution.” (Courtesy: Modus QA)
“Gel dosimetry can be used to measure any 3D dose distribution,” explains John Miller, founder and co-owner of Modus QA, “though the most important application we see is for SRS treatment of metastatic tumours in the brain – a truly complex problem with multifoci targets distributed in 3D space.” Fundamentally, ClearView is a tool that enables medical physicists to meaningfully compare 3D dose distributions – measured versus calculated – and get them to a pass or fail ahead of SRS treatment. “It’s a necessary and sufficient test,” adds Miller.
Joined-up thinking
If that’s the back-story, what of the specifics? The ClearView 3D Dosimeter itself comprises an optically clear, low-scattering and colourless hydrogel matrix suffused with a radiochromic indicator dye. The dye turns purple after irradiation, with the change in optical attenuation throughout the gel directly proportional to the absorbed radiation dose.
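That proportionality is what makes quantitative readout simple in principle: a linear calibration converts measured attenuation changes directly into dose. The sketch below illustrates the idea with placeholder numbers; they are not ClearView calibration values.

```python
import numpy as np

# Radiochromic gel readout under a linear dose response: a two-point
# calibration converts attenuation-change maps into dose maps.
# All numbers below are placeholders, not ClearView specifications.

def dose_from_attenuation(delta_mu, slope_gy_per_unit):
    """Convert change in optical attenuation to dose in Gy."""
    return delta_mu * slope_gy_per_unit

# e.g. calibrate with a known 20 Gy irradiation:
mu_ref, mu_cal = 0.010, 0.090        # pre- and post-irradiation readings
slope = 20.0 / (mu_cal - mu_ref)     # Gy per unit attenuation change

measured = np.array([0.010, 0.050, 0.170])  # voxel attenuation values
print(dose_from_attenuation(measured - mu_ref, slope))  # [0., 10., 40.] Gy
```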
Spatially, while the radiochromic dose response of the ClearView gel is accurate at the submicron scale, the resolution of the dose image is ultimately determined by the spatial accuracy of the optical CT scanner used for readout. In this case, the latest iteration of the Modus optical CT scanner (VISTA 16) can image 0.25 mm isotropic voxels, though in practice images are more commonly acquired at 0.5 mm resolution and down-sampled to match the treatment plan.
Also worth noting is the joined-up approach that Modus has taken to ClearView and VISTA product development. VISTA, for example, is built with a convergent green light source (530 ± 10 nm) to minimize the scatter associated with cone-beam optical CT, while ClearView has been optimized for use with VISTA by increasing the clarity and reducing the scatter of the hydrogel. “The goal is an integrated 3D dosimetry system,” Miller explains. “The combination of the two products ultimately gives users more accurate measurements of dose and dose distribution.”
Into the workflow
When it comes to deploying ClearView and VISTA into the SRS QA workflow, Miller reckons an experienced user will need about 60 minutes to measure a dose distribution and compare it to the calculated dose distribution. The stepwise process includes acquisition of an optical CT scan of the gel before irradiation (the reference scan); irradiation of the gel with the planned treatment dose; a 45-minute wait for the gel chemistry to develop; followed by another optical CT scan (the data scan). Each scan takes less than five minutes, while the CT reconstruction is automatic.
“It’s the comparisons at this point that are crucial,” says Miller. Users can perform a gamma analysis to view a 3D gamma map, as well as calculate a conformity index between planned and measured isodose surfaces. In a multimet (multiple-metastasis) treatment, meanwhile, users can look at the centre of mass of each target and compare it with the plan, to ensure that the centre of each target coincides with the centre of each dose distribution. “You can then look at contours around that to make sure the shape of the dose distribution is correct,” he adds.
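The gamma comparison Miller refers to is a standard dosimetry metric (the gamma index introduced by Low and colleagues), which blends a dose-difference tolerance with a distance-to-agreement tolerance. The bare-bones sketch below evaluates it on 1D profiles for brevity; the same definition extends directly to the 3D distributions measured with the gel, and this is an illustration rather than Modus’s VistaACE implementation.

```python
import numpy as np

# Gamma index: for each evaluated point, the minimum combined
# dose-difference / distance-to-agreement distance to the reference.

def gamma_index(x_eval, d_eval, x_ref, d_ref, dta_mm=1.0, dd_frac=0.03):
    """Gamma value at each evaluation point (<= 1 counts as a pass)."""
    d_max = d_ref.max()                       # global dose normalization
    gammas = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dist2 = ((x - x_ref) / dta_mm) ** 2
        dose2 = ((d - d_ref) / (dd_frac * d_max)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.linspace(-10, 10, 201)            # positions in mm
ref = np.exp(-x**2 / 8)                  # planned dose profile
meas = np.exp(-(x - 0.3) ** 2 / 8)       # measured profile, shifted 0.3 mm
g = gamma_index(x, meas, x, ref)
print(f"pass rate at 3%/1 mm: {100 * (g <= 1).mean():.1f}%")
```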
The ClearView roadmap
Looking ahead, the priorities for 2020 are already nailed down for the Modus product development team. With the ClearView roll-out under way to new clinical customers, the emphasis now shifts to commercial release – slated for the autumn – of the work-in-progress VistaACE analysis software. “The software will effectively complete the product launch,” notes Miller, “enabling medical physics teams to use ClearView, VISTA and VistaACE on site as an integrated solution for 3D gel dosimetry.”
Beyond that, intriguing possibilities are coming into view on the ClearView development roadmap. A case in point is a concept called 3D dosimetry as a service (3DDaaS). “There’s no timeline for the launch of 3DDaaS as yet,” Miller admits, “but the idea is that this will be a remote accreditation service for QA of clinical procedures – for example, to support the commissioning of new radiotherapy equipment or new treatment techniques in the clinic.”
Right now, Modus is focused on accelerating clinical uptake and validation of ClearView beyond its current “early-adopter” institutions, which include the London Regional Cancer Program in Ontario and the University of Michigan. A key opportunity to engage with end-users directly will come in June at the International Conference on 3D and Advanced Dosimetry in Quebec City.
“Watch this space,” Miller concludes. “We’ll be there with at least the work-in-progress VistaACE.”
Using the ClearView 3D Dosimeter
Selling the benefits of gel dosimetry to the medical physics community is all about openness, claims John Miller of Modus QA. “Transparency is key,” he explains. “We’ve tested ClearView rigorously and are being upfront and open with all our specifications – acknowledging that radiochromic gels have their limitations with respect to linearity, range and energy and dose-rate effects.”
Benefits of the Modus QA ClearView 3D Dosimeter include:
Stability: chemistry is stable for more than 60 days prior to irradiation under recommended storage conditions. The signal is geometrically stable any time after irradiation (though for best dosimetric results, Modus recommends optical scanning between 45 min and 24 hr after irradiation).
Linearity: dose response is linear within the range 0 to 80 Gy, making ClearView suitable for SRS QA.
Spatial resolution: the gel is accurate to the submicron level.
Material properties: near-tissue equivalence provides a patient-like testing environment, while the low-scatter substrate is ideal for optical cone-beam CT scanning.
A full list of ClearView specifications is available here.
The new laser ultrasound technique was used to produce an image (left) of a human forearm (above), which was also imaged using conventional ultrasound (right). (Courtesy: Xiang Zhang et al)
Fully contact-free laser ultrasound (LUS) imaging has been demonstrated in humans by researchers at Massachusetts Institute of Technology (MIT), in collaboration with MIT Lincoln Laboratory. Xiang Zhang and colleagues used an infrared laser to generate sound waves at the tissue surface of volunteers’ forearms. A second beam detected the propagating sound waves by measuring how the subjects’ skin vibrated in response. The technique could be especially useful for imaging where physical contact is not tolerated, such as over wounds and on other sensitive areas (Light Sci. Appl. 10.1038/s41377-019-0229-8).
In conventional ultrasound imaging, an array of transducers is pressed against the skin directly or with a coupling gel to help transmit the acoustic waves into the tissue. The method is inexpensive, convenient and produces images in real time, but it has some disadvantages.
One significant limitation is the pressure that is typically required to maintain acoustic contact between the device and the target tissue. This limitation is exacerbated for contact-sensitive applications where such pressure would be too painful, such as for burn victims or trauma patients.
Another problem is the low degree of reproducibility, due to the fact that the operator usually defines the image orientation and field-of-view by manipulating the transducers manually. This means that patient images acquired at different times are difficult to compare, and treatment or disease progression cannot easily be tracked.
A variation on the technique – ultrasound tomography – addresses the issue of reproducibility, but the solution comes at the expense of convenience, as it involves part of the patient being immersed in a water tank. The problem of physical contact, meanwhile, is partly dealt with by photoacoustic imaging. In this method, the ultrasound pulses are generated within the tissue remotely by a laser, but the reflected signal is detected using conventional transducers on the skin.
LUS could tackle these problems simultaneously by combining a new approach to photoacoustic generation with an optical interferometer, allowing it to create and measure ultrasound waves from a distance. Conventional photoacoustic imaging cannot image deeply since the light is strongly attenuated in the tissue. Rather than seeking a way to increase the laser’s penetration, however, in their new approach Zhang and colleagues turned this bug into a feature.
“We actually rely on this high absorption to efficiently generate an acoustic source at the tissue surface, meaning we can convert the maximum amount of light into acoustic energy while maintaining human safety,” explains Zhang. “This allows us to image deeper than typical photoacoustics since we don’t rely on light to travel through the tissue; only acoustic waves instead.”
A simplified schematic of the laser ultrasound system. (Courtesy: Xiang Zhang et al)
The team found that they achieved the ideal balance of optical absorption, acoustic power and patient safety using 2 mm-wide, nanosecond laser pulses at wavelengths near 1500 nm. This setup approximates a disk-shaped transducer just beneath the tissue surface, producing a 60° ultrasound beam at 1.5 MHz – towards the lower end of the frequency range typically used for ultrasound imaging.
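The wide beam follows naturally from diffraction, since the acoustic wavelength at this frequency is comparable to the 2 mm source size. A one-line estimate (assuming a typical soft-tissue sound speed of about 1540 m/s, a value not quoted in the paper):

```python
# Rough numbers behind the beam geometry (illustrative arithmetic):
c_tissue = 1540.0  # assumed soft-tissue sound speed, m/s
f = 1.5e6          # acoustic frequency, Hz
print(f"acoustic wavelength ~ {c_tissue / f * 1e3:.2f} mm")
# ~1 mm, comparable to the 2 mm optical spot, so the effective source
# radiates a strongly diverging beam by diffraction.
```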
The researchers tested their LUS technique using a gelatin phantom, ex vivo pig tissue and four human subjects, comparing the results to those from a standard ultrasound imager. While LUS could not match the image quality provided by the conventional approach, it still successfully picked out the same soft- and hard-tissue features.
One aspect in which LUS is currently lacking is its inability to deliver results in real time, as images must be reconstructed from sequential single-point measurements. In this respect, Zhang expects the development of the technique to mirror that of conventional ultrasound imaging.
“Looking back historically, medical ultrasound began by sequentially moving a single transducer to form an image – similar to moving a single laser spot in LUS – and eventually scaled toward arrays of hundreds or even thousands of transducers in medical probes today. I believe a similar path is ahead for LUS,” says Zhang.
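Image formation from such scanned single-point records is conventionally done by synthetic-aperture “delay-and-sum” beamforming: each image voxel accumulates the recorded signal at the round-trip travel time from each source position. The sketch below shows the idea; it is a generic reconstruction, not necessarily the exact algorithm used in the paper.

```python
import numpy as np

C = 1.54  # assumed sound speed in mm/us

def delay_and_sum(records, t, xs, grid_x, grid_z):
    """records[i] is the A-scan recorded with the source/receiver at xs[i]."""
    image = np.zeros((len(grid_z), len(grid_x)))
    for rec, x_src in zip(records, xs):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                # two-way travel time from surface point to image voxel
                tof = 2.0 * np.hypot(x - x_src, z) / C
                image[iz, ix] += np.interp(tof, t, rec)
    return image

# toy demo: one point scatterer at (x, z) = (0 mm, 10 mm), 11 positions
t = np.arange(0, 30, 0.01)   # time axis in us
xs = np.linspace(-5, 5, 11)  # laser-spot positions in mm
records = [np.exp(-((t - 2 * np.hypot(x, 10.0) / C) / 0.05) ** 2) for x in xs]
img = delay_and_sum(records, t, xs, np.linspace(-5, 5, 21), np.linspace(5, 15, 21))
print(np.unravel_index(img.argmax(), img.shape))  # (10, 10): x = 0, z = 10 mm
```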
Progress along this path should be accelerated by a fortunate coincidence: as well as being ideal for LUS, 1500 nm is the wavelength favoured by the telecommunications industry, meaning that both new and mature optical technologies are readily available for translation. Even in its current state of development, however, the technique could find immediate applications where high-quality images are not strictly required, Zhang suggests.
“For now, LUS could be useful in binary measurements where features don’t necessarily need to be resolved at a high resolution; rather, a yes/no measurement is sufficient, possibly for detection of internal bleeding or fractures in painful areas,” he tells Physics World.
Physics World’s Laser at 60 coverage is supported by HÜBNER Photonics, a leading supplier of high performance laser products which meet the ever increasing opportunities for lasers in science and industry. Visit hubner-photonics.com to find out more.
A spectrometer that directly detects the vibrational “fingerprint” of molecules offers a sensitive new way of deducing a material’s chemical make-up. The device, which was developed by researchers in Germany, Saudi Arabia and Hungary, can sense the presence of substances at much lower concentrations than is possible with state-of-the-art commercial infrared spectrometers. It can also measure the spectra of samples in water, which is impractical with conventional absorption spectrometry because water itself is such a strong absorber of infrared light. The new method is thus particularly attractive for applications in biology and medical diagnostics.
When a material is irradiated with infrared light, its constituent atoms and molecules absorb energy at frequencies that depend on their chemical structure. A few picoseconds (10⁻¹² s) later, this absorbed energy dissipates as vibrations. Conventional infrared spectroscopy focuses on the absorption step. By analysing the light transmitted through the material, researchers determine which frequencies are absorbed, and thus which chemical species are present. The drawback is that some molecules absorb infrared light better than others, and even strongly absorbing substances may not produce detectable dips in the transmitted light if they are only present in small amounts.
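A quick Beer-Lambert estimate shows just how small those dips can be. The numbers below are assumed, illustrative values rather than figures from the study.

```python
# Beer-Lambert transmission T = 10^(-epsilon * c * L) for a dilute
# analyte; all parameter values are assumed for illustration.

def transmission(eps_l_per_mol_cm: float, conc_mol_per_l: float, path_cm: float) -> float:
    return 10 ** (-eps_l_per_mol_cm * conc_mol_per_l * path_cm)

# a micromolar analyte with a modest molar absorption coefficient,
# in a 100-um liquid cell:
T = transmission(200.0, 1e-6, 0.01)
print(f"transmitted fraction: {T:.8f}")  # a dip of only ~5e-6 of the light
```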
Electro-optical sampling
The new spectrometer avoids these pitfalls by focusing instead on the dissipation step. The researchers, led by Ioachim Pupeza and Marinus Huber of the Ludwig Maximilians University (LMU) and the Max Planck Institute of Quantum Optics in Garching, Germany, begin by irradiating their samples with an ultrashort pulse of infrared light. Because this initial excitation pulse lasts only a few femtoseconds (10⁻¹⁵ s), it delivers its energy to the sample within two oscillations of the light field. Once this pulse has passed, but – crucially – before the target molecules stop vibrating, the researchers apply a second ultrashort pulse of light, this time in the near-infrared region of the spectrum. This “gating” pulse carves out a slice of the electromagnetic radiation given off by the vibrating molecules – a technique known as electro-optical sampling. The researchers then analyse this vibrational signal to determine the sample’s molecular fingerprint.
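A toy time-domain picture captures the essence of the trick: the few-femtosecond excitation is over almost immediately, while the molecular “ringing” persists for picoseconds, so sampling the field only after the excitation has passed isolates the molecular response. All waveform parameters in the sketch below are illustrative, not experimental values.

```python
import numpy as np

# Toy model: a short excitation followed by a long, weak molecular
# ringing; gating out the early part of the trace leaves a spectrum
# containing only the molecular resonance.

t = np.linspace(0, 4e-12, 8192)                      # 4 ps time window
excitation = np.exp(-((t - 50e-15) / 10e-15) ** 2)   # femtosecond drive pulse
ringing = 1e-3 * np.exp(-t / 1e-12) * np.sin(2 * np.pi * 30e12 * t)
field = excitation + ringing                         # what a detector sees

gate = t > 150e-15                                   # sample only after the pulse
spectrum = np.abs(np.fft.rfft(field * gate))
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
print(f"peak molecular response at ~{freqs[spectrum.argmax()] / 1e12:.0f} THz")
```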
A key advantage of this method, Pupeza explains, is that it separates the molecular signal from various sources of background radiation – including the initial excitation pulse. “The fact that we can carve out very brief portions of our wave means that we can exclude the excitation and only look at the molecular response,” he tells Physics World. Pupeza adds that the technique is also coherent, meaning it is only sensitive to light that is in phase with the excitation, and not to random thermal vibrations.
The researchers tested their method, which they term field-resolved infrared spectroscopy (FRS), on several biological samples. In measurements of human blood serum, they detected changes as small as 500 ng/mL in the molecular concentration of certain chemicals – 40 times lower than is possible with a commercial infrared spectrometer. The researchers also obtained infrared spectra of live human cells in suspension and an intact willow leaf. Both materials would be hard to analyse with absorption spectroscopy because they absorb nearly all incident light.
The researchers, who report their work in Nature, hope to extend FRS in future experiments. “We would like to cover the entire infrared molecular fingerprint region to capture as many resonances as possible from complex samples,” Pupeza says. Another possibility, he adds, would be to combine FRS with a frequency comb, which could provide enough spectral resolution to analyse gaseous materials as well as solids and liquids.