The emergence of 2D materials requires technologies to characterize their properties. Optical micro-spectroscopic platforms such as the LabRAM Soleil offer both physical and chemical information in one system. The number of atomic layers, the effect of vertical or lateral heterostructures on the electronic properties, and the homogeneity of the structures can thus all be assessed.
In this webinar, Thibault Brulé and Agnès Tempez will highlight how photoluminescence and Raman microscopies can address the challenges of 2D materials. They will also point out how combining these micro-spectroscopies with AFM can deliver nanoscale resolution and a deeper understanding of these structures.
Thibault Brulé is a Raman application scientist at HORIBA France, working in the Demonstration Centre at the HORIBA Laboratory in Palaiseau. He is responsible for providing Raman spectroscopy applications support to key customers from various industries, as well as contributing to HORIBA’s application strategies. Prior to joining HORIBA in 2017, he conducted research on the characterization of proteins in blood using dynamic surface-enhanced Raman spectroscopy, and then applied this technique to monitoring cell secretion. Thibault holds an MSc from the University of Technology of Troyes, completed his PhD at the University of Burgundy and followed on with a postdoctoral fellowship at the University of Montreal.
Agnès Tempez is an application scientist at HORIBA France, working in the Nanoscopy group, connected to the Demonstration Centre at the HORIBA Laboratory in Palaiseau. She is responsible for providing applications support to key customers involved in nanoscopy and TOF-related research projects. Prior to joining HORIBA in 2006 as a project manager, she worked as a research scientist at Ionwerks, Inc. in Houston, Texas, developing time-of-flight mass spectrometry instrumentation for materials characterization and MALDI/ion mobility coupling for complex biological samples. Agnès holds a PhD in analytical chemistry from the University of Houston, Texas. She is the author of nine patents, one book chapter and more than 75 articles in peer-reviewed journals.
Circuits that carry signals via visible and infrared light rather than electric currents are desirable for many applications because they transmit data faster and use less energy. The problem is that current programmable photonic integrated circuits (PICs) are volatile, which prevents them from maintaining their programmed state, and they also suffer from high optical signal losses. A team of researchers in China has now succeeded in fabricating metre-scale single-mode waveguides that boast optical losses of only 0.03 dB/cm. The researchers also used their waveguides to construct optical true delay lines (OTDLs), which are important components of many photonic devices – including future quantum information processors and sensors.
Current fabrication techniques for PICs produce devices with highly variable final properties. This variability limits the techniques’ yield and reduces the devices’ configurability. Existing techniques also produce devices with a high surface roughness, leading to high optical losses.
The excellent optical and electro-optical properties of lithium niobate on insulator (LNOI) offer a possible way around this problem. The material has recently emerged as an attractive substrate material for PICs, and it shows promise for making circuits with lower losses, higher density and greater tunability than previous devices. However, while researchers had previously fabricated various photonic structures – including waveguides, microresonators and photonic crystal cavities – on LNOI, no group had succeeded in using it to fabricate high-quality, low-loss OTDLs.
Low-loss OTDLs on lithium niobate
A team of researchers led by Ya Cheng of East China Normal University has now done just that using a new technique called photolithography-associated chemo-mechanical etching (PLACE). The technique can produce smooth, metre-long waveguides on the LNOI that can be integrated with micro-electrodes, which allow the devices to be tuned electro-optically at a later stage.
Cheng explains that PLACE requires five major steps. The first step is to apply a thin coating of chromium to the top surface of a lithium niobate thin film using a technique called magnetron sputtering. Next, the researchers pattern the chromium film into a waveguide mask using space-selective femtosecond laser ablation. They then use a chemo-mechanical polishing (CMP) process to selectively remove the lithium niobate. The CMP process produces extremely smooth walls, leading to ultralow light propagation losses in the finished waveguides. The fourth and fifth steps involve removing the chromium mask layer by chemical wet etching and depositing a titanium oxide film on the fabricated lithium niobate waveguide as a cladding layer.
Ultra-low propagation losses
To measure the propagation losses in the OTDLs, the researchers used a high-precision loss measurement method that relies on comparing propagation losses in two waveguide arms of different lengths in a beamsplitter. They used a wavelength-tuneable laser as the light source, tuning it to a wavelength of 1550 nm and coupling it into the waveguide through a fibre taper. They first collected the light transmitted from the waveguide using a lens and then recorded its intensity with either a power meter or an infrared CCD.
The devices have propagation losses of just 0.03 dB/cm, meaning that about half of the laser beam’s power is preserved after propagating through a one-metre-long waveguide, Cheng explains. They are also reconfigurable thanks to the good electro-optical properties of lithium niobate.
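As a quick sanity check of those figures (an illustrative sketch in Python with made-up power readings, not the team’s analysis code), the decibel arithmetic works out as follows: comparing the outputs of two arms of different lengths cancels the coupling losses and gives the loss per centimetre, and converting the total loss back to a linear fraction confirms that 0.03 dB/cm over one metre leaves roughly half the input power.

# Illustrative check of the quoted loss figures (example values, not real data)

def loss_per_cm(p_short_dbm, p_long_dbm, len_short_cm, len_long_cm):
    """Propagation loss estimated from two waveguide arms of different length.
    Comparing the two arms cancels the coupling losses common to both."""
    return (p_short_dbm - p_long_dbm) / (len_long_cm - len_short_cm)

def transmitted_fraction(loss_db_per_cm, length_cm):
    """Fraction of optical power remaining after propagating length_cm."""
    return 10 ** (-(loss_db_per_cm * length_cm) / 10)

# Hypothetical readings from a 20 cm arm and a 100 cm arm of the beamsplitter
print(loss_per_cm(-3.0, -5.4, 20, 100))   # 0.03 dB/cm
print(transmitted_fraction(0.03, 100))    # ~0.50, i.e. about half the power after 1 m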
The researchers, who report their work in Chinese Physics Letters, say they now hope to reduce propagation losses even further by refining their fabrication technique. “We will also be looking into applications for the OTDLs, probably beginning with quantum PICs,” Cheng tells Physics World.
Take a rubber band, stretch it along its length, and it will shrink in the other two directions, getting narrower and thinner as you pull. The amount of “perpendicular contraction” that occurs is determined by the material’s Poisson ratio, which in such cases is a positive number. Some materials, however, do the exact opposite when stretched. Known as “auxetics”, they expand in one of the perpendicular directions and therefore have a negative Poisson ratio.
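In quantitative terms (the standard definition from elasticity theory, not anything specific to the materials discussed here), the Poisson ratio compares the transverse strain to the applied axial strain:

\nu = -\frac{\varepsilon_{\mathrm{trans}}}{\varepsilon_{\mathrm{axial}}}

For a stretched rubber band the axial strain is positive and the transverse strain is negative, so ν > 0; an auxetic expands transversely under the same stretch, giving ν < 0.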
The first artificial auxetic materials were made about 40 years ago but they also exist in nature. Some are complex biomaterials, such as human tendons and cat, cow and salamander skin. There are also inorganic auxetics, including palladium, copper, gold and other face-centred cubic metals, as well as certain zeolites such as natrolite (Na2Al2Si3O10). When stretched, these materials undergo a clever internal reorganization, forming voids that lower the overall density.
Synthetic auxetics take their lead from nature’s elegance, being carefully engineered to have a similarly porous internal geometry. Their ability to get thicker when stretched makes auxetics fascinating from a scientific and theoretical point of view. But they also have some cool everyday applications. The sole of Nike’s Flyknit running shoe, for example, has a macroscopic auxetic geometric structure. It expands when the runner’s foot hits the ground, reducing uncomfortable pressure points in the process.
Pain relief Auxetic structures are already used in Nike’s Flyknit running shoe. The unusual auxetics discovered by the University of Leeds team could lead to entirely new applications. (Courtesy: Nike)
Despite such successes, it’s fair to say that applications of auxetics in other areas have been more limited. Part of the problem is that many artificial auxetic materials have a porous, foamy structure, with the individual pores usually being bigger than a micron in size. An auxetic material can therefore expand only by a certain amount: any more and it will weaken and possibly collapse. But in 2018 Devesh Mistry – who was then one of my PhD students at the University of Leeds, UK – made a ground-breaking and entirely unexpected discovery.
Serendipitous success
At the time, Mistry was studying the mechanical properties of liquid-crystal elastomers – a rubber-like material based on the standard liquid crystals found in flat-screen TVs and mobile-phone displays. Liquid crystals are curious in that they flow (like a liquid) yet still retain some order (like a crystal). Subtle differences in the amount of order in these fluids create many different phases of liquid crystal, the simplest being the “nematic” phase.
Nematic liquid crystals usually have elongated, rod-like molecules, which all line up so that their long axes point roughly in the same direction, like knives in a cutlery drawer. How well the molecules are aligned with that overall direction (known as the “director”) is quantified by the “order parameter” S = ⟨3cos²θ – 1⟩/2, where θ is the angle between the molecules and the director. This parameter can vary from 1 (for a perfect crystal) to 0 (corresponding to a randomly oriented liquid) and even as low as –0.5 (negative order, which we’ll return to later). Most nematic phases have an order parameter of S ≈ 0.6, meaning there’s some fluctuation in the overall alignment of the molecules around the director.
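Those limiting values follow directly from the definition (a standard liquid-crystal textbook calculation, included here purely for illustration):

\begin{aligned}
\text{perfect alignment } (\theta = 0):\quad &\langle\cos^2\theta\rangle = 1 &&\Rightarrow\ S = 1,\\
\text{random 3D orientations}:\quad &\langle\cos^2\theta\rangle = \tfrac{1}{3} &&\Rightarrow\ S = 0,\\
\text{all rods perpendicular to the director } (\theta = 90^\circ):\quad &\langle\cos^2\theta\rangle = 0 &&\Rightarrow\ S = -\tfrac{1}{2}.
\end{aligned}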
1 Auxetic behaviour in liquid-crystal elastomers Liquid crystals found in mobile-phone screens or TVs usually consist of rod-like molecules that point in roughly the same direction, known as the “director”. However, liquid-crystal behaviour can also be obtained from long-chain polymers that have rod-like units strung along the backbone (a) or stuck out from the side (b). If the neighbouring chains are physically connected, you end up with a “liquid-crystal elastomer”, which has some unusual mechanical properties, as our group at the University of Leeds, UK, found when we strained a thin film of the material in the x direction (c). We discovered that its strain in the z direction (red dots in (d)) and the associated change in thickness fall as expected before suddenly increasing again. At this threshold strain of 0.9, the angle (purple squares in (d)) between the director and the x-axis drops from 90° (i.e. perpendicular to the strain) to 0° (i.e. parallel to the strain).
Now although most liquid crystals – including those in displays – contain small molecules, it’s also possible to obtain liquid-crystalline behaviour from long-chain polymers that have small, rod-like units strung out along their backbone (figure 1a) or stuck out from the side (figure 1b). When these rods line up, the polymers act like liquid crystals. What’s more, if the rods on one polymer chain are physically connected (or “cross-linked”) to more than one chain, you end up with the kind of liquid-crystal elastomer (LCE) that Mistry was studying.
Combining the elasticity of an ordinary elastomer (like rubber) with the self-organization of a liquid crystal, these soft materials have some unusual anisotropic mechanical properties. But because there was no chemist on our Leeds team at the time for him to draw on, Mistry decided to develop an LCE that he could easily synthesize himself using a known synthetic pathway and commercially available monomers – a “physicist-friendly” LCE as he put it. Using techniques borrowed from the liquid-crystal display industry, Mistry was able to make highly uniform thin films of his materials, in which the orientation of the rods can be controlled over large areas.
Strange behaviour
Having developed his new LCE material, which had rods as side groups, Mistry and a group of technicians from Leeds started building a special piece of equipment for testing its mechanical properties. The equipment was designed so that it could fit onto the stage of a polarizing microscope – one of the tools of the trade in liquid-crystal physics. Using the rig, Mistry was able to measure the angle of the director at a chosen point in the LCE, which indicates the amount of local liquid-crystalline order, and monitor how this changes as the strain is increased.
When he pulled the film in the x direction – perpendicular to the director, which pointed in the y direction (figure 1c) – nothing of note initially happened. The material simply stretched in a soft elastic fashion and the director remained about 90° to the stretching direction. However, once it was stretched beyond a threshold strain of 0.9, the director rapidly began to line up with the strain axis – in other words, the angle between the director and the stretching direction fell to 0°. By simultaneously measuring the dimensions of the LCE film, Mistry concluded that it was behaving as an auxetic material – it was getting thicker in the z direction, at 90° to the direction in which it was being stretched.
There were two possible explanations for this unexpected auxetic behaviour. The first, rather mundane, option was that the strain on the sample had simply lowered the material’s density, meaning that pores had formed in it as with all other ordinary auxetics. But when Mistry carried out further mechanical studies using cryo-scanning electron microscopy and atomic-force microscopy, it appeared that the volume – and hence density – remained unchanged, which meant that no nanometre-sized pores had formed.
That therefore left the other, more exciting, option, which was that the LCE’s auxetic behaviour was occurring at the molecular scale. Incredibly, Mistry had discovered a material that, when strained beyond a certain threshold value (0.9), thickened perpendicular to the direction in which it was being stretched without the formation of voids. Somehow the polymer was rearranging itself at a local molecular level to trigger auxetic behaviour (figure 1d).
What’s more, Mistry soon noticed that the threshold strain dictating the onset of auxetic behaviour coincided with what appeared to be an order parameter of zero – and potentially even less than zero. An order parameter less than zero might seem odd, but in this case it means that the rod-like units are randomly oriented in the x–y plane of the material, while the director itself points in the z-direction, out of the plane.
Further insights have been uncovered by Thomas Raistrick, one of my current PhD students. His quantitative Raman-scattering measurements have shown that the uniaxial order of the material (meaning it has only one axis of anisotropy) falls to zero as the material is stretched. But just before the threshold strain, biaxial behaviour emerges (meaning the material now has two axes of anisotropy). As the strain increases further, the material begins to return to a uniaxial state. We need to do more work, but it’s clear that a complex interchange of order and symmetry causes the auxetic response, with further insights hopefully emerging soon.
Applications ahead
What’s so interesting about this LCE’s auxetic behaviour is that it occurs at the molecular scale rather than relying on the behaviour of macroscopic pores found in other “first-generation” synthetic auxetic materials. It therefore does not become inherently weaker when stretched, which means it could be used, say, as body armour, where you need a material that can withstand a lot of impact. The material would then act as a shock absorber – thickening in response to the force, rather than becoming thinner like a standard positive-Poisson-ratio material.
Clear benefits This transparent liquid-crystal elastomer displays auxetic behaviour when stretched – not because voids form inside it (as with ordinary auxetics) but because the structure reorganizes internally at a molecular level. (Courtesy: Ethan Jull, University of Leeds)
These transparent molecular auxetics could also be useful in the automotive industry to protect car windows, which usually have several layers of glass separated by layers of polymer. When the window is hit, the induced strain makes the layers shrink, so they can become unstuck or “delaminate”. But if you had an auxetic material with a negative Poisson ratio, it would – if hit – expand against each layer of glass and stop any delamination from occurring. Molecular auxetics could also, for the same reason, be used in solar cells, which usually come with a 25-year warranty and need to stay robust for long periods.
Our team at Leeds, including Mariam Hussain, Richard Mandle, Ethan Jull, Keith Rollins and Peter Hine, is currently exploring these protection and delamination prevention applications. We believe that molecular auxetic LCEs could be game-changers – not just because they’re robust, transparent and thicken when stretched, but also because we can effectively tune these materials at the molecular level simply by adjusting their chemical make-up.
The author thanks Ethan Jull of the University of Leeds for the original draft of this article
Roger Penrose is one of the recipients of this year’s Nobel Prize for Physics. The British mathematical physicist was honoured for his discovery that black-hole formation is a robust prediction of the general theory of relativity.
In this interview with Physics World’s Tushna Commissariat, recorded in 2015, Penrose looks back to the early years of his career as a PhD student at the University of Cambridge. Penrose describes how he was inspired by courses given by eminent physicists including Paul Dirac and Hermann Bondi. But his career-long interest in black holes was triggered by attending a lecture by David Finkelstein on the nature of event horizons and the concept of a singularity.
Speaking at the “Celebrating 100 Years of General Relativity” event at Queen Mary University of London, Penrose also looks to the future of his field. He believes that in the long-standing quest to unify the theories of gravity and quantum mechanics, it is quantum theory that must be modified significantly beyond its current form. The (now) Nobel laureate refers to this process as the “gravitization of quantum mechanics”.
Stereotactic ablative radiotherapy (SABR), in which high radiation doses are delivered over just a few fractions, requires management of intrafraction tumour motion to ensure accurate and safe treatment. Such motion management is generally provided by dedicated commercial real-time tracking systems. But according to new research from Australia, both multileaf collimator (MLC) tracking and gating can provide real-time motion adaptation on a standard linear accelerator. These low-cost strategies have the capability to make SABR treatments more accessible.
Writing in Radiotherapy and Oncology, researchers from the ACRF Image X Institute of the University of Sydney and four other Australian cancer treatment centres evaluated radiation doses delivered to 44 prostate cancer patients when using MLC tracking and gating with a standard linac. The patients were enrolled in the TROG 15.01 SPARK trial, which examined the use of kilovoltage (kV) intrafraction monitoring (KIM) to measure tumour position during treatment.
During the study, led by Paul Keall, professor of medical physics at the University of Sydney Medical School and director of the ACRF Image X Institute, and Jarad Martin, a radiation oncologist at Calvary Mater Newcastle, the researchers delivered 49 fractions using MLC tracking and 166 fractions using beam gating and couch shifts. They performed motion tracking with KIM, which uses the linac’s on-board kV imager to acquire patient images during treatment. Intrafraction KIM-guided motion adaptation was then performed using MLC tracking for 10 patients or gating for 34 patients. Of the 166 gated fractions, 65 included prostate motion that exceeded established thresholds and required treatment interruptions.
KIM uses the kV imager on standard linacs to image a patient, and automatically segments the implanted fiducial markers to determine the 3D position of the tumour in real time. The real-time position information can be used to guide motion adaptation, to improve the accuracy of radiotherapy. (Courtesy: Julia Johnson)
The researchers estimated the radiation doses that would have been delivered without motion adaptation and compared these with doses delivered when employing MLC tracking or gating. To evaluate the efficiency of gated treatments, they also calculated the time required to gate and perform a couch shift for each fraction.
Gated treatments proved to be efficient, taking between 5 and 19 min to complete, compared with 2 to 17 min for MLC tracking. Couch shift interruptions lasted between 1 and 4 min. Because KIM calculated the new couch position, the treatment team only needed to enter the new values into the treatment system.
First author Emily Hewson.
Both MLC tracking and gating delivered similar radiation doses, and both delivered doses closer to the treatment plans than if no motion adaptation strategy had been used. “Both methods were effective at improving dose delivery accuracy that is crucial for high-dose treatments,” says first author Emily Hewson. “While MLC tracking had a slight dosimetric improvement compared with gating, this difference was small, so either method could be used to provide accessible SBRT treatments for prostate cancer.”
Hewson notes that the variance of dose differences from the treatment plan was larger with gating than MLC tracking for the bladder and rectum. “Data suggest that while both strategies would perform similarly on average, gating would result in doses that deviated more from the plan for the worst cases,” she adds.
The researchers suggest that MLC tracking might have been more accurate if a leaf width smaller than 5 mm had been used. “Smaller MLC leaves offer higher beam adaptation accuracy, which would benefit the small, slow motion that the prostate undergoes,” explains Hewson. “Another alternative to improve accuracy would be to integrate MLC tracking with a real-time couch tracking method in the future to compensate for the finite MLC leaf width.”
Addressing the complexity of each strategy, Hewson says that treatment planning and implementation were comparable, but that the underlying software and quality assurance processes for MLC tracking are more complex than gating.
“MLC tracking requires complex software to automatically adapt the treatment beam in real time, requiring more commissioning. Also, each patient plan requires pre-treatment dosimetric quality assurance,” she explains. “But implementing MLC tracking during treatment simply requires the treatment team to open the MLC tracking software; adaptation is automated. Gating requires more intervention by the treatment team if tumour motion exceeds the designated threshold.”
The researchers say that the accuracy of each adaptation method will be limited by the accuracy of the tumour localization method. The 3D tumour localization accuracy of KIM in the trial was quantified to be 0.0±0.5 mm, 0.0±0.4 mm, and 0.0±0.5 mm in the anteroposterior, left–right, and superior-inferior directions, respectively.
“KIM provides real-time tumour position in six degrees-of-freedom using fiducial markers that are MR-compatible and smaller than beacons used with some commercial systems,” the team write. “And because KIM utilizes the on-board kV imager that is already equipped on modern linacs, it potentially will allow widespread implementation of SABR.”
“Our implementation of KIM to monitor tumour motion, combined with either gating or MLC tracking, improves the availability of intrafraction motion adaptation for all clinics with standard treatment machines,” says Hewson. “One of the major barriers to implementing real-time adaptive radiotherapy in many countries has been a lack of finances and resources. The adaptive methods we compared could potentially overcome these obstacles and bring intrafraction motion adaptation into standard clinical practice at any cancer treatment facility that treats patients using a modern linear accelerator.”
A device described as the world’s smallest ultrasound detector has been created by Vasilis Ntziachristos and colleagues at the Technical University of Munich and Helmholtz Zentrum München. The extremely sensitive device can image structures smaller than individual living cells and is made using inexpensive and readily available silicon-on-insulator technology. With further optimization, the team says their detector could be mass-produced for use in a broad range of imaging applications.
Traditionally, ultrasound detectors use piezoelectric transducers to both broadcast high-frequency sound and also pick up sound that has reflected from target objects – using the reflected signal to create an image. The spatial resolution of an ultrasound image can be improved by shrinking the size of the transducers, but this can drastically lower the sensitivity of the system.
Recently, optical detection techniques have been used to get around this resolution problem. One approach has been to detect changes in the resonant properties of an optical cavity that are caused by ultrasound waves. But so far, even the most advanced miniaturization techniques have not succeeded in confining light to dimensions smaller than about 50 microns, placing a constraint on the resolution that can be achieved.
Silicon-on-insulator technology
Ntziachristos’ team has improved on these designs using silicon-on-insulator technology, which can be fabricated with techniques widely used in the semiconductor industry. The researchers developed a “silicon waveguide-etalon detector” (SWED). The waveguide incorporates a periodic arrangement of Bragg gratings, each separated by spacers, with one grating replaced by a cavity. A reflective layer of silver is then deposited on the end of the waveguide.
When Ntziachristos and colleagues pumped a continuous-wave laser into the SWED, they found that incident ultrasound waves could induce characteristic intensity variations in the light reflected off the silver layer. Furthermore, the high contrast between the cladding and cavity material enabled far better light confinement than had been achieved previously.
With a sensing area of just 220 × 500 nm, the SWED is a factor of 10 smaller than the diameter of a blood cell and 10,000 times smaller than previous resonator-based sensors. The resulting spatial resolution made it possible for Ntziachristos’ team to image structures 50 times smaller than the wavelength of the ultrasound used to obtain the images – a capability called super-resolution imaging. At the same time, the SWED is 1000 times more sensitive than current optical devices and some 100 million times more sensitive than piezoelectric detectors of the same size.
Such a significant improvement in both sensitivity and resolution means that the SWED can fit onto a chip just half a micron in size. This opens up a wealth of opportunities for improvement in both medical and industrial imaging. With further optimization, the device could soon be integrated into mass-produced, extremely dense ultrasound arrays capable of picking out ultra-fine details in materials and biological tissues. It could also be used to study the fundamental properties of high-frequency sound waves and their small-scale interactions with matter.
What are the benefits of the CLS to the Canadian user community when compared to using facilities outside of the country?
The Canadian science community has always had access to foreign facilities, including several dedicated beamlines at the now decommissioned Synchrotron Radiation Center in Wisconsin, in the 1980s. But having a reliable and dedicated source for Canadian scientists has allowed for comprehensive, robust, and long-term research projects to be conducted, helping Canada maintain and secure scientific prominence in key scientific areas, including nanomaterials, agriculture, and protein crystallography, among many others.
The CLS was also built to specifically address Canada’s science needs and strategic priorities, with dedicated infrastructure for research in health, agriculture, energy and the environment, and advanced materials, while facilities in other countries have understandably been designed to support those nations’ strategic interests.
Additionally, building a light source for and in Canada has encouraged the development of previously unavailable expertise in the design and manufacture of synchrotron components. Several companies across Canada now compete to deliver services and equipment to other synchrotron development projects around the world.
Finally, while of course we hope that scientific endeavour around the world remains collaborative, open and unaffected by political winds, it is important that our country maintains a reliable advanced light source for Canadian scientists to be able to access.
What are some of the challenges of running a national facility in a country where major population centres are separated by huge distances?
Canada is the second largest country in the world by area. Indeed, the distance between Toronto and Vancouver, two of its major cities, is almost the same as from London to Cairo. Perhaps Canadians are simply used to this, but we have all grown accustomed to calculating time zones (six across the country) for conference calls and adjusting in-person meeting schedules (pre-COVID-19) to allow everyone to recover from jetlag.
View from the top Isabelle Blain (left) and Marie D’Iorio sit on the board of directors at the CLS. (Courtesy: Canadian Light Source)
The CLS is located at the University of Saskatchewan in Saskatoon – which is not a major metropolitan area and is some distance from any large cities. Has this relative isolation been a problem? If not, how has it been overcome?
Because of Canada’s expanse, the CLS’s location would have presented a bit of a challenge regardless of the site selected. Being located in the centre of Canada has allowed for centralization, giving equal access to the theretofore largest provincial user groups (Ontario to the east and British Columbia to the west) but also, and importantly, it fuelled the growth of Saskatchewan’s provincial synchrotron user community. Indeed, Saskatchewan grew from a single synchrotron user to more than 300 in 10 years, attracting research funding, faculty and graduate students from around the world.
Additionally, Canadian grant councils, especially the Natural Sciences and Engineering Research Council, have continuously funded travel for users to access the CLS in Saskatoon. Although travel within Canada is known to be expensive compared to travel within Europe and the US, for instance, Canadian synchrotron users have been largely successful at accessing travel funding to use the CLS.
The benefits of having a synchrotron in a smaller community also include easy access from the airport, affordable accommodation, and lower overall ancillary costs.
Has the presence of the CLS in Saskatoon strengthened science in that city and elsewhere in the Prairie region of Canada?
The CLS caters to approximately 1000 scientists from academia, government and industry per year, approximately a third of them from the Prairies.
The CLS helps to train and educate hundreds of graduate students per year, attracts some of the most talented scientists in the world to Canada, fosters collaborations between Canadian and local scientists and colleagues at facilities around the world, and has created technologically advanced jobs and unique scientific and technical expertise in our province and country.
The CLS also strengthens Canada’s and Saskatchewan’s international scientific reputation through publications in academic literature (more than 5000 papers to date), through hosting international scientific conferences, and by bringing media attention to scientific discoveries.
The CLS collaborates with a number of other initiatives in the Prairies including the Protein Supercluster, the National Research Council’s Aquatic and Crop Resource Development Research Centre, and the Global Institute for Food Security.
It is important that Canada maintains a reliable advanced light source for Canadian scientists to be able to access
Are there any plans to build complementary facilities in Saskatoon? For example, a neutron source that would make Saskatoon the “Grenoble of the Prairies”?
The University of Saskatchewan, the CLS’s owner, is a leader in the effort to build a new dedicated neutron source for Canada. It would be advantageous of course to have it near the CLS, in terms of the complementary nature of the research and the enormous possibilities for scientific cross-pollination, but there are no immediate plans to build a neutron source on our campus.
However, the proximity to the university – as well as the federal agriculture department, nuclear science facilities, and an innovation park renowned for its plant biotech companies – has created a cluster known for its global-leading agriculture expertise, which culminated in the recently funded Protein Industries Canada (PIC), an industry-led, not-for-profit organization created to position Canada as a global source of high-quality plant protein and plant-based co-products.
What will the future bring for the CLS? Are there any major upgrade plans?
The CLS is a third-generation facility and already brighter, faster and stronger fourth-generation light source facilities are being built around the world. While the CLS will remain globally competitive and a crucial resource for the Canadian scientific community for at least a decade, the national light source science community is developing plans for the next generation of Canadian synchrotron-enabled science.
Based on consultations with international machine-design experts as well as extensive engagement with the Canadian user community, a conceptual design report for CLS 2.0 is in development. CLS 2.0’s beam will be 700 times brighter and more coherent than the current CLS. A brighter, faster and smaller light beam will enable scientists to see samples much more clearly and collect better data much faster, allowing for better scientific outcomes.
CLS 2.0 will be among the best light sources in the world and will enable the Canadian scientific community to remain a global leader.
What are some of the physics research highlights from the CLS over the past 16 years?
Physics is the backbone of everything we do as a science facility – the accelerator and rings operate thanks to the expertise of our physicists and engineers – and it is a huge part of the research programme. Our Far-Infrared beamline, for example, is a powerful tool for studying molecular vibrational dynamics, as Ohio State University and University of New Brunswick scientists showed by demonstrating the effects of quantum monodromy on the spectrum of cyanogen isothiocyanate.
Our many X-ray beamlines can be put to several useful applications in physics research, most prominently in the recent discovery and exploration of charge density waves in cuprate superconductors.
Vast expanses Inside the CLS experimental hall, which welcomes scientists from across Canada and beyond. (Courtesy: Canadian Light Source)
Recently, our REIXS beamline was used by the Canadian Space Agency and NASA scientists to test and validate the performance of window shielding for an upcoming satellite launch, by ensuring that X-rays within the desired range, but no infrared light, could pass through.
Researchers from the University of Toronto and King Abdullah University of Science and Technology have overcome a key obstacle in combining the emerging solar-harvesting technology of perovskites with the commercial gold standard – silicon solar cells. The result is a highly efficient and stable tandem solar cell with the perovskites applied as a liquid solar ink. The CLS was used to show that the solution-processing treatment left the perovskites’ crystal structure untouched, leaving their basic function intact.
The REIXS beamline, one of the top X-ray scattering beamlines in the world in quantum materials research, was used to study a conductor-to-insulator phase change in samarium nickelate, a quantum material known as a strongly correlated electron system. The dramatic phase change means that the material can be used as a very sensitive detector. Indeed the team from the CLS, Argonne National Laboratory, Rutgers University, the National Institute of Standards and Technology, the Massachusetts Institute of Technology, Columbia University and the University of Massachusetts was inspired by an organ near a shark’s mouth called the ampullae of Lorenzini, which is capable of detecting small electric fields from prey animals.
This of course is just a small subsection of the physics research at our facility, which also covers hydrophobics, catalyst structure, new imaging techniques and many other areas.
Ancient observations, modern explanation Robert Alicki (left) and Alejandro Jenkins (right). Alicki holds a bust of Thales, the pre-Socratic philosopher who described magnetic and “amber” effects as evidence of a material’s soul. (Credit: Maria Alicki)
Shuffling around on a carpet to give someone an electric shock might seem like the oldest trick in the book, yet scientists know surprisingly little about why it happens. “I believed – like I think most physicists – that these phenomena were understood by the experts,” says Robert Alicki, a mathematical and theoretical physicist at the University of Gdansk, Poland. “But it was not the case. It was still an open question.”
Thanks to Alicki and his colleague Alejandro Jenkins of the Universidad de Costa Rica, the mystery surrounding triboelectricity (as the “charging by rubbing” effect is known) may be clearing up. According to Alicki and Jenkins, a major barrier to understanding triboelectricity is that physicists tend to view the phenomenon in terms of electrostatic potentials, even though “from a potential effect, you are never going to sustain a current that is going around a circuit,” Jenkins says. “It’s like the problem of perpetual motion.”
Alicki and Jenkins formulated their alternative description by incorporating the concept of pumping into a new, quantum model of a system undergoing triboelectric processes. “Pumping can replenish a potential, but it is not describable by a potential,” Jenkins explains. “It can do something that no potential can do, and that is to drive something around on a closed path.”
Using this pumping-based model, the pair successfully reproduced several experimentally observed characteristics of triboelectricity, such as its dependence on material surface and geometry and on the speed of rubbing. In particular, the model accurately predicts that the most electrically negative and electrically positive materials will have symmetrical maximum charge densities when rubbed – something that models based on electric potentials cannot explain. The new model also predicts a maximum tribovoltage in terms of the sliding velocity of the two surfaces, which Alicki and Jenkins say could be tested using an experimental set-up with sufficient control over a constant sliding velocity.
From lasing bosons to fermions
While Alicki has been working on quantum thermodynamics for decades, Jenkins is a more recent recruit, having started out in high-energy theory. As their paths converged – Jenkins has just begun a fellowship at Gdansk’s International Centre for Theory of Quantum Technologies (ICTQT) – they discovered that they shared an interest in systems found in motors and engines that operate away from equilibrium, where energy is irreversibly converted from one form to another. While such systems are the bread and butter of engineers, and Alicki and collaborators started working on them as far back as the late 1970s, Jenkins says that on the whole, they have attracted less attention from theorists than systems at equilibrium, fluctuating around equilibrium or relaxing to equilibrium.
At first, this common interest in out-of-equilibrium systems led Alicki and Jenkins to formulate a mathematical description of “superradiance”, or the enhanced effects of radiation associated with rotating objects. Such effects were first described in 1971 by Yakov Zel’dovich, whose suggestion that superradiance ought to apply to a spinning gravitational mass led to follow-up work by Jacob Bekenstein and Stephen Hawking on the thermodynamics of black holes.
By describing rotating systems in terms of quantum fields, and treating the moving object as a heat bath, Alicki and Jenkins showed how work could be extracted via stimulated emission, similar to a laser’s operation. But while their laser analogy offered a new perspective on such systems, the underlying process, while exotic, was already pretty well-understood. It was only later that they realized that their formulation of a quantum field and two heat baths could lead to something “qualitatively new”: a description for the motion that drives an active current of fermionic electrons from one material to another in the humble triboelectric effect.
“The soul of inanimate objects”
The pair built up their model by defining Hamiltonians with creation and annihilation operators for electron states on the surface of the moving material (where the population inversion takes place) and within the interior of the two rubbed materials (which act as the heat baths). They then defined the pumping of the system in terms of the rate of change of the populations of these electron states. Although the Pauli exclusion principle forbids fermions such as electrons to exhibit superradiance, Alicki and Jenkins were able to show that with the bulk bodies of the two surfaces acting as two heat baths, a motion-induced population inversion of fermions could nevertheless result and sustain a macroscopic current.
While magnetic and triboelectric effects have been known to scientists since antiquity – in the 6th century BCE, the pre-Socratic philosopher Thales of Miletus referred to them as “evidence of a kind of soul” – Jenkins notes that “the interesting point is neither can be described classically”. The need for quantum mechanics to explain the behaviour of permanent magnets was established by Niels Bohr and Hendrika Johanna van Leeuwen over 100 years ago, and Jenkins says that his and Alicki’s latest work shows that the same is true of triboelectricity. Although pumping and work cycles exist in classical thermodynamics, the pair insist that only a quantum treatment can make sense of electrons’ fermionic behaviour as they move between surfaces in the triboelectric effect. In effect, Alicki says, “Quantum mechanics is the soul of inanimate objects.”
Alicki and Jenkins are now considering ways to further investigate dry friction to explore how it relates to the triboelectric effect. They are also interested in understanding details of energy transduction in active devices such as batteries, solar cells and thermoelectric generators, as well as active processes in various applications from astrophysics and cosmology, to fundamental physics.
New laureates: Roger Penrose, Reinhard Genzel and Andrea Ghez have won the 2020 Nobel Prize for Physics. (Courtesy: IOP Publishing/Tushna Commissariat; CC-BY-SA H Garching; UCLA/Christopher Dibble)
The prize is worth 10 million Swedish krona (about $1.1 million) and half goes to Penrose, with Genzel and Ghez sharing the other half of the prize.
The Nobel Committee cites Penrose “for the discovery that black hole formation is a robust prediction of the general theory of relativity”, and Genzel and Ghez “for the discovery of a supermassive compact object at the centre of our galaxy”.
After the announcement was made this morning by the Royal Swedish Academy of Sciences, Ghez answered questions remotely from the US.
Doubt and excitement
When asked what went through her mind when she first thought there was a huge black hole lurking in the middle of the Milky Way, Ghez replied “The first thing was doubt that you’re really seeing what you think you’re seeing. Doubt and excitement – that feeling that you’re at the frontier of research.”
On being the fourth woman to win the physics Nobel prize, Ghez said “I’m thrilled to receive the prize and take very seriously the responsibility with being the fourth woman to win the Nobel prize. I hope I can inspire other young women into the field – it’s a field that has so many pleasures. And if you’re passionate about science there are so many things that can be done.”
Since the 18th century, physicists have speculated about the existence of objects so massive that even light cannot escape their gravitational pull. However, it was not until the early 20th century, when Albert Einstein created his general theory of relativity, that scientists had the tools to investigate black holes with mathematical precision.
Can black holes form?
But even then, there was confusion over whether a black hole could form in nature. One concern at the time was the idea that any departure from perfect spherical symmetry of an object could prevent it from collapsing to a singularity – a single point in space and time. This was an important consideration because rotating stars do not have spherical symmetry.
In 1965 Penrose developed new mathematical tools for describing how a star could collapse to a black hole and devised a rigorous proof that the formation of a black hole is entirely consistent with general relativity. In particular, he introduced the concept of the “trapped surface” – a closed 2D surface with the property that all light rays orthogonal to the surface converge when traced toward the future. A trapped surface is formed in the early stages of the gravitational collapse of a star and once it has formed, the system must collapse to a singularity, creating a black hole. Crucially, Penrose showed that this applies irrespective of the symmetry of the collapsing object.
As well as being the first major contribution to general relativity since Einstein, Penrose’s work inspired generations of astrophysicists and astronomers to work towards observing black holes.
“Renaissance in relativity”
“It was Penrose, more than anyone else, who triggered the renaissance in relativity in the 1960s through his introduction of new mathematical techniques,” says the UK’s Astronomer Royal, Martin Rees.
Also in the mid-1960s, astronomers and astrophysicists were beginning to think that light emitted from bright regions at the centres of some galaxies was created by matter falling into black holes that were millions or even billions of times more massive than the Sun. However, verifying that these active galactic nuclei (AGNs) contained black holes proved to be very difficult because telescopes did not have the resolution to distinguish between a black hole and a tight cluster of stars – which could also be lurking at the centres of galaxies.
A way around this problem is to study the motions of stars that orbit close to the AGN. If the stars are orbiting a black hole, their speeds should have a specific relationship with their distance from the black hole – as do planets orbiting the Sun. However, if the stars are orbiting a cluster of stars, a different speed–distance relationship is expected.
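That relationship is the familiar Keplerian one (a standard result, quoted here for illustration rather than taken from the prize citation): for stars orbiting a single compact mass M, the circular orbital speed is

v(r) = \sqrt{\frac{GM}{r}} \;\propto\; r^{-1/2},

so the speeds fall off with distance. For an extended cluster, by contrast, the enclosed mass M(<r) grows with radius, and the speeds fall off more slowly, or can even rise, with distance.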
Highly elliptical orbit
Teams led by Ghez and Genzel studied a star that takes about 16 years to orbit the AGN at the centre of the Milky Way. The star has a highly elliptical orbit and gets to within 17 light-hours of the AGN. Independent analyses of the star’s motion by both teams suggest that it is orbiting an extremely compact object with a mass of about 4 million Suns. The only reasonable interpretation of this is that there is a supermassive black hole at the centre of the galaxy.
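That figure can be roughly reproduced with Kepler’s third law (a minimal sketch; the semi-major axis of about 1000 AU used below is an assumed round value for illustration, not a number quoted above):

# Rough enclosed-mass estimate for a star orbiting the Galactic Centre,
# using Kepler's third law in solar units: M/M_sun = a^3 / T^2 (a in AU, T in years)

period_years = 16.0         # orbital period mentioned in the article
semimajor_axis_au = 1000.0  # assumed round value for the star's semi-major axis

mass_solar = semimajor_axis_au**3 / period_years**2
print(f"Enclosed mass ~ {mass_solar:.1e} solar masses")   # ~3.9e+06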
Laura Nuttall, who studies black-hole mergers at Portsmouth University told Physics World “It’s great to see Penrose, Ghez and Genzel recognized with the Nobel prize. Penrose is synonymous with black holes. His work in proving how black holes form, as well as their centre being a singularity, has opened so many fields, including that of searching for gravitational waves.”
She adds, “Ghez and Genzel’s work has also inspired many, such as the Event Horizon Telescope, which only released an image of a supermassive black hole last year. It’s wonderful that their work is very much taken as a given today – of course black holes form from the collapse of matter and of course there’s a black hole at the centre of the galaxy. It’s easy to forget that this has not always been the case!”
Ghez was born in 1965 in New York City, US. She received a BS in physics from the Massachusetts Institute of Technology in 1987 and a PhD from the California Institute of Technology in 1992. After a year at the University of Arizona, she moved to the University of California, Los Angeles in 1994, where she has remained since.
Genzel was born in 1952 in Bad Homburg vor der Höhe, Germany. He studied physics at the University of Freiburg before completing a PhD in radio astronomy at the University of Bonn in 1978. He then moved to the US working first at the Harvard-Smithsonian Center for Astrophysics until 1980 and then the Space Sciences Laboratory at the University of California, Berkeley until 1985. After a year as a professor at the University of California, Berkeley, he became a director of the Max Planck Institute for Extraterrestrial Physics in 1986. Since 1999 Genzel has held a joint appointment between the Max Planck Institute for Extraterrestrial Physics and the University of California, Berkeley.
Penrose was born in 1931 in Colchester, UK. He did a BSc in mathematics at University College London before completing a PhD in algebraic geometry at the University of Cambridge in 1957. After spending time at Princeton and Syracuse universities in the US in 1959–1961, he returned to England to King’s College London before heading to the University of Texas at Austin in 1963–1964. Penrose then moved to Birkbeck College, London until 1973 before heading to the University of Oxford, where he has remained since.
Electrodes fixed to an inflated balloon catheter (Courtesy: John Rogers, Northwestern University)
A research team led by engineers at the George Washington University and Northwestern University has developed a new surgical tool containing advanced flexible electronics that could improve diagnosis and treatment of cardiac diseases.
Balloon catheters are often used during minimally invasive surgery or ablation procedures, where they are relied upon to carry out measurements and perform therapeutic functions when inserted through small incisions. They can also be inserted into the heart to treat cardiac arrhythmias by locating and ablating the region of tissue causing the arrhythmia. Currently, however, most balloon catheters are rigid, which means they cannot conform well to the soft surfaces in the heart. In addition, these devices can only perform one function at a time, requiring doctors to use multiple catheters throughout a procedure.
Using their experience in flexible and stretchable electronics, the researchers sought to create an elastic system that conforms to tissue surfaces and can act as both a diagnostic and therapeutic device in one.
Flexible arrangements
The device is made up of stretchable gold interconnects sandwiched between flexible polyimide sheets to form a flexible surface. The catheter is not only flexible but can also stretch by up to 30% in both directions without damage to the material.
The researchers employed existing manufacturing techniques commonly used in the semiconductor industry to produce each array on a temporary silicon wafer. They then transferred the arrays to the soft elastomeric surface.
One of the features that makes this catheter unique is its multilayer design, with each layer having a unique purpose. The layer on the outside, in contact with the tissue, contains electrodes that carry out electrical readings and electrical stimulation of tissues. The next layer down, separated by an insulating layer of polyimide, contains temperature sensors. These could allow surgeons to track changes in the temperature of tissues in specific areas. Finally, at the bottom is a layer containing pressure sensors, which measure local forces between heart tissue and the device.
New device could change surgery
The team tested the balloon catheter using computational models, plastic heart models, and real human and animal hearts. They found that the catheter had advantages over current devices in both physical form and functionality.
The multilayered nature of the new catheter means that a wide range of diagnostic and therapeutic functions can be integrated into one device, allowing doctors to perform several measurements simultaneously and map them to specific areas. This opens up the possibility that the device could automatically regulate properties like temperature throughout surgery.
“We have taken new breakthrough materials and fabrication techniques typically employed by the semiconductor industry and applied them to the medical field, in this case cardiology, to advance a new class of medical instruments that will improve cardiac outcomes for patients and allow physicians to deliver better, safer and more patient-specific care,” says Igor Efimov, a senior author of the study.