Borophene – a sheet of boron just one atom thick – can be stabilized in air by bonding its atoms with hydrogen, researchers in the US have discovered. The new technique was developed by Mark Hersam at Northwestern University and colleagues, who found that hydrogenated sheets of borophene (called borophane) oxidized far more slowly in air than pure boron sheets. Their approach could enable researchers to finally realize many of the proposed applications of borophene, which were previously seen as impractical outside the lab.
In its atomically thin 2D form, boron has a diverse array of crystal lattice structures. Collectively named borophene, these sheets have many desirable properties, including high mechanical strength, flexibility and phonon-mediated superconductivity. Like carbon-based graphene, these 2D materials hold the potential to revolutionize many aspects of electronics. However, unlike graphene, borophene is much trickier to fabricate into practical devices.
While graphene can be produced by simply peeling away layers of graphite, borophene must be synthesized directly on a substrate: a process first demonstrated by Hersam and colleagues in 2015. Unlike graphene, borophene rapidly oxidizes when exposed to air, which destroys its conductivity. This means that any experiments on the material must be carried out in ultra-high vacuum conditions, severely restricting the integration of borophene within practical devices.
Chemical functionalization
Previously, chemical functionalization by adding different atoms has been widely used to fine-tune the electronic properties of materials including graphene. Among the resulting products is graphane, in which the carbon atoms are bonded with hydrogen. Inspired by this process, Hersam’s team exposed borophene to atomic hydrogen in ultra-high vacuum to produce sheets of borophane featuring boron atoms bonded to hydrogen in several different ways.
The researchers then used a combination of atomic-scale imaging, spectroscopy, and theoretical calculations to determine the diversity of crystal lattice structures of their new material. Overall, they identified eight distinctive bonding patterns, each of which retained the desirable traits of borophene. The team also showed that their process could be entirely reversed through the thermal desorption of hydrogen – returning the boron to its original pure state.
Outside the vacuum chamber, Hersam and colleagues found that the oxidation rate of borophane was two orders of magnitude lower than that of borophene – demonstrating far higher stability in air. This resilience at standard temperatures and pressures could now significantly improve the prospects for the practical use of atomically thin boron outside the lab. Applications could include batteries, sensors, solar panels and quantum computers. If achieved, the team predicts, these could bring about a revolution in electronics comparable even with the advances enabled by graphene.
Antihydrogen atoms have been laser-cooled for the first time, paving the way for precision studies that could reveal why there is much more matter than antimatter in the universe. The cooling was done by an international team of physicists at CERN in Switzerland, who used a new type of laser to cool the antiatoms and then measured a key electronic transition in antihydrogen with unprecedented precision. Their breakthrough could lead to improved tests of other key properties of antimatter.
In every process ever observed in the laboratory and almost every process predicted by the Standard Model of particle physics, the creation of a particle is always accompanied by the creation of its antiparticle. Conversely, when a particle and its antiparticle meet, the two annihilate. One indisputable fact, however, is that we live in a universe that is made almost entirely of matter – raising the question of how lots of matter was created without an equivalent quantity of antimatter at the Big Bang.
In the Standard Model, the physical properties of a particle (such as an electron) appear to be equal and opposite to its antimatter equivalent (the positron) – electrons and positrons have the same mass but opposite electrical charge, for example. Therefore, looking for tiny differences between particles and their antimatter equivalents could shed light on the matter-antimatter asymmetry in the universe. One way of doing this is to make and study antihydrogen, which comprises a positron and an antiproton.
Annihilation problem
As with ordinary hydrogen, the quantum properties of antihydrogen become clearer at low temperatures. Cooling antiatoms like antihydrogen, however, is far from straightforward. Many techniques for cooling matter are simply unavailable: sympathetic cooling, in which the atoms lose energy by colliding with different atoms, is not feasible as they would annihilate. Evaporative cooling, in which all but the very coldest atoms leave the trap, taking the energy with them, is currently impossible because antiatoms are so hard to produce. “It’s just not an option with antimatter,” says Jeffrey Hangst of Aarhus University in Denmark, who works on the Antihydrogen Laser Physics Apparatus (ALPHA) experiment at CERN. “We don’t have the numbers; we don’t have the density.”
One possibility is Doppler cooling, which works – paradoxically – by exciting the atoms. If the atoms are irradiated with a laser frequency just below that needed to excite an electronic transition, an atom moving towards the beam will see the radiation blue-shifted and may absorb a photon. When this excited state decays, it emits more energy than it originally absorbed, cooling the sample. This technique is widely used with other atoms but faces a problem with hydrogen – the one atom whose antimatter counterpart has so far been produced. The only suitable transition – the Lyman-alpha transition between the 1s and 2p orbitals – involves light at vacuum ultraviolet wavelengths around 121 nm. However, there are no practical lasers operating in this region, and efforts to develop a continuous-wave 121 nm laser had foundered after years of attempts.
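In a simplified, non-relativistic picture (added here for illustration rather than taken from the ALPHA paper), the condition for an atom moving at speed v towards the beam to be Doppler-shifted into resonance with a laser tuned below the transition frequency ν₀ is

\[ \nu_{\mathrm{laser}}\left(1 + \frac{v}{c}\right) \approx \nu_0 , \]

so only atoms moving against the light are preferentially excited, and each absorbed photon removes momentum \( \hbar k = h/\lambda \) from that motion.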
“A lot of grief”
For the new work, fellow ALPHA member Makoto Fujiwara from TRIUMF in Canada suggested they try pulsed laser cooling and, together with colleagues, set out to build a device that generated 121.6 nm laser pulses from 729.4 nm continuous-wave laser light: “In retrospect it seems like kind of an obvious thing to do,” says Fujiwara, but Hangst says Fujiwara “took a lot of grief from some of our colleagues when he proposed this and when they started building the laser”.
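For context, the two wavelengths quoted above differ by a factor of almost exactly six, consistent with the pulses being generated at the sixth harmonic of the continuous-wave seed light (for example by frequency doubling followed by frequency tripling). This is an inference from the numbers rather than a detail stated here:

\[ \frac{729.4\ \mathrm{nm}}{6} \approx 121.6\ \mathrm{nm}. \]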
The researchers then designed a cylindrical magnetic trap with transparent ends. At one end, they injected antiprotons from CERN’s antiproton decelerator. At the other, they added positrons. After several hours, around 1000 antihydrogen atoms had accumulated in the centre of the trap. The researchers then used their laser to cool the atoms. They do not report a final temperature in their paper, as the atoms had not reached thermal equilibrium, but the sharpened Lyman-alpha peak revealed that the atoms were moving more slowly than had previously been achieved.
Einstein’s equivalence principle
The researchers next measured the frequency of the transition between the 1s and 2s orbitals in antihydrogen: “It’s the thing we understand best in hydrogen, it’s measured absolutely to a precision of about 10⁻¹⁵,” says Hangst, “and that’s the thing we want to compare with antihydrogen.” Their new results show an improved precision from cooling and they intend to report a comparison with hydrogen in future work. The team also wants to study other properties of antihydrogen, starting with Einstein’s equivalence principle, which says that matter and antimatter behave the same under gravity.
Fujiwara describes the team’s success as “revolutionary” and Vladan Vuletić of the Massachusetts Institute of Technology (who was not involved in the work) agrees: “The main challenge with cooling hydrogen or antihydrogen has always been…the generation of laser radiation at such short wavelengths with the necessary spectral purity…You’re building this on top of this very complex experiment: you first need to produce the antiprotons; you need to trap them together with the positrons in an electromagnetic trap; you need to neutralize them into antihydrogen and then on top of all that you need to bring in your laser cooling.”
The Japanese semiconductor pioneer Isamu Akasaki has died at the age of 92. His work in the late 1980s and early 1990s led to the development of blue light-emitting diodes (LEDs), which soon found a wide range of applications from low-energy light bulbs and mobile-phone displays to televisions. For this work Akasaki shared the 2014 Nobel Prize for Physics with fellow Japanese-born researchers Hiroshi Amano and Shuji Nakamura.
Akasaki was born in Chiran, Japan, on 30 January 1929 and graduated from Kyoto University in 1952. After receiving a PhD in electronics in 1964 from Nagoya University, he moved to Matsushita Research Institute Tokyo before returning to Nagoya in 1981 where he remained for the rest of his career. From 1992 Akasaki held a joint position with Meijo University, which is also in Nagoya.
It was at Nagoya and Meijo where Akasaki conducted much of his Nobel-prize-winning research. The first red LED was created in the early 1960s and researchers then managed to create devices that emitted light at ever-shorter wavelengths, reaching green by the end of that decade. However, creating devices that could deliver enough blue light was a struggle – even though doing so was essential for making a white-light source, which requires red, green and blue LEDs.
Crystal maze
At Nagoya in the 1980s, Akasaki and Amano focused on making blue LEDs from the compound semiconductor gallium nitride (GaN) given that it has a large band-gap energy corresponding to ultraviolet light. Yet they needed to overcome several challenges, including the ability to create high-quality crystals of GaN with good optical properties. To do so they used metal-organic vapour-phase epitaxy techniques to deposit thin films of high-quality GaN crystals onto substrates.
Another issue was learning how to dope GaN so that it becomes a “p-type” semiconductor, which is crucial for creating an LED. Akasaki and Amano noticed, however, that when GaN doped with zinc is placed in an electron microscope, it gives off more light than if undoped, which suggested that electron irradiation improved the p-doping.
This effect was later explained by Nakamura, who was based at the Nichia Corporation and was working independently on GaN blue LEDs. In the early 1990s both groups then used their high-quality, p-doped GaN to make high-brightness blue LEDs, achieved by combining it with other GaN-based semiconductors in multilayer “heterojunction” structures. Today, GaN-based LEDs are used in back-illuminated liquid-crystal displays in devices ranging from mobile phones to TV screens.
In 2014 Akasaki, along with Amano and Nakamura, was awarded the Nobel Prize for Physics for “the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”. Akasaki was awarded many other prizes during his career, including the Japanese Order of Culture in 2011 and the Queen Elizabeth Prize for Engineering in 2021. He died on 1 April from pneumonia.
A few years ago, atmospheric physicist Grant Allen and his colleagues were using drones to measure methane emissions from a fracking site in Lancashire in the north-west of England. But next door to the shale-gas operation was a dairy farm and the researchers wondered if they could also measure the methane produced by the cows. So while the animals were in the barn being milked, the researchers flew their drone system in the fields outside.
“They have about 150 cows and once you put them all inside a box, like a barn, they become a condensed system that you can model as a point source of emissions,” says Allen, who is based at the University of Manchester, UK. It is then possible to measure the concentration of the methane that is downwind with a drone. “And if you know the wind speed and you’ve got the measurement of the concentration,” Allen continues, “you can do some clever maths to calculate what the emission flux is in grams per second from the herd as a whole – that way you can get an average emission per cow.”
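As a rough illustration of the “clever maths” Allen describes, a minimal mass-balance flux estimate might look like the sketch below. The function name, the uniform-plume assumption and all of the numbers are hypothetical, used only to show the shape of the calculation; a real retrieval would integrate the measured concentration enhancement over the sampled plume cross-section.

```python
def herd_emission_flux(conc_enhancement_ppm, wind_speed_m_s,
                       plume_width_m, plume_height_m,
                       n_cows, temperature_k=283.0, pressure_pa=101325.0):
    """Rough mass-balance estimate of a methane flux from a point-like source.

    conc_enhancement_ppm : mean CH4 enhancement above background across the plume
    wind_speed_m_s       : wind speed carrying the plume through the sample plane
    plume_width_m/height : size of the plume cross-section sampled by the drone
    """
    R = 8.314          # J mol^-1 K^-1, ideal gas constant
    M_CH4 = 16.04e-3   # kg mol^-1, molar mass of methane

    # Convert the ppm enhancement to a mass concentration (kg m^-3),
    # treating air as an ideal gas at the given temperature and pressure.
    mol_per_m3 = pressure_pa / (R * temperature_k)
    conc_kg_m3 = conc_enhancement_ppm * 1e-6 * mol_per_m3 * M_CH4

    # Flux = concentration enhancement x wind speed x plume cross-sectional area.
    flux_g_s = conc_kg_m3 * wind_speed_m_s * plume_width_m * plume_height_m * 1e3
    return flux_g_s, flux_g_s / n_cows   # total and per-cow emission (g/s)

# Hypothetical numbers: a 0.1 ppm enhancement over a 50 m x 10 m plume in a 3 m/s wind
total, per_cow = herd_emission_flux(0.1, 3.0, 50.0, 10.0, n_cows=150)
print(f"herd: {total:.2f} g/s, per cow: {per_cow * 1000:.2f} mg/s")
```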
Cattle are responsible for a huge amount of greenhouse-gas emissions. Globally the livestock sector accounts for the equivalent of seven gigatonnes (7 × 10¹² kg) of CO2 every year, according to the United Nations. This is around 15% of anthropogenic emissions – a similar proportion to cars. On a commodity basis, beef and milk from cows are responsible for the highest proportion of these emissions. And almost 40% of that seven gigatonnes is methane produced by fermentation in the stomachs of ruminants – mainly cows.
Over the course of a week, the fracking site that Allen and his colleagues were monitoring released more than 4 tonnes of methane – equivalent to the environmental impact of 142 trans-Atlantic flights. But this was linked to a single event: operations to clean out a 2.3 km-deep shale-gas well. While sources of methane like this are sporadic, cattle belch methane all year round. “If you compare it over a few days, the fracking site was emitting a lot more per unit time over that period,” Allen explains. “But if there were no other emissions from the fracking site for the rest of the year, then the cumulative flux from a dairy herd of 150 cows for a whole year is more.”
Droning on The drones used by Grant Allen of the University of Manchester are tethered using tubes connected to spectrometers on the ground. The drones did 22 surveys downwind of a point-source of methane gas, released from a regulated cylinder with a flowmeter. (Courtesy: Grant Allen)
Fermenting plants, burping methane
Cows and other ruminants eat grass, straw and other fibrous plants that are simply indigestible to most other animals. To extract nutrients from the complex carbohydrates – particularly cellulose – in these plants, the animals ferment them in a special stomach chamber known as a rumen. In this oxygen-free environment, microbes (mainly bacteria) break down the complex plant material. But as this process occurs, it produces a vast amount of hydrogen.
As the hydrogen builds up, the cow turns to another group of bacteria-like micro-organisms known as archaea. These bugs use the hydrogen as a source of energy but produce methane as a by-product, a process known as methanogenesis. And as this gas builds up, the cow belches it out, which is good for the cow, but not the planet, because methane is a potent greenhouse gas. Although it only survives in the atmosphere for a decade or two, over a 20-year period it has more than 80 times the global-warming potential of carbon dioxide, according to the Intergovernmental Panel on Climate Change.
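The dominant methane-forming route in the rumen is hydrogenotrophic methanogenesis, in which the archaea combine the accumulated hydrogen with carbon dioxide – a standard piece of biochemistry spelled out here for clarity rather than taken from the interviews above:

\[ \mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}. \]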
Ruminating on ruminants Ermias Kebreab from the University of California, Davis is studying ways to reduce methane emissions from cattle. (Courtesy: UC Davis/Katherine Kerlin)
This makes methane a good short-term target for tackling climate change, says animal scientist Ermias Kebreab, director of the World Food Center at the University of California, Davis. While reducing carbon dioxide needs to be the long-term focus, it will take a while for us to see the effects of this as carbon dioxide persists in the atmosphere for centuries. “But the effect of slowing down or reducing methane will be felt in a decade or so,” explains Kebreab. And given that they are responsible for more than 40% of anthropogenic methane emissions, livestock are a good place to start.
Improving productivity is key to reducing emissions. According to Kebreab, you can make animals produce more protein – milk or meat – per kilogram of feed with a combination of genetics and good nutrition. For example, he explains, there are cows in low-income countries that produce around 4–5 kg of milk per day. But if you were to cross breed those with Holstein-Friesian cows – which are renowned for their high milk production – you could get 20 kg of milk per day, while maintaining some of the advantages of the local breeds. Indeed, over the last five decades, a focus on breeding and meeting nutritional requirements has cut methane emissions per litre of milk by about 50% in the US. There are now fewer dairy cows than half a century ago, but they each produce more milk (figure 1).
1 Feeding a growing population Changes (relative to 1950) in total milk produced, milk production per cow, total number of cows and dairies and methane produced per kilogram of milk in the California dairy industry. (Data source: Ermias Kebreab)
Seaweed supplement
To cut emissions even further, Kebreab’s recent work has focused on using seaweed as a feed additive. He and others have shown that adding various species of seaweed to a cow’s food can reduce methane production by as much as 90%. “It’s absolutely crazy,” Kebreab exclaims, adding that there is minimal processing of the seaweed before it is given to the cattle. “When it’s collected, we freeze dry it to make sure that the active ingredient is stable. After that, you just crush it into a powder, which is added to the feed,” he explains.
The seaweed inhibits methanogenesis. The archaea in the cows’ rumen rely on a chain of enzymes to convert hydrogen and carbon dioxide into methane, but various compounds in the seaweed appear to interfere with some of these enzymes. This means the microbes are unable to complete the process and much less methane is produced. Other compounds have also been found to have similar effects. The Swiss agritech company Mootral claims that its garlic-based additive gives a 40% reduction in methane production.
In his research, Kebreab measures the methane produced by the cows using a system called GreenFeed. This trough-like machine contains food to entice the cattle and then measures the gases they breathe and burp out while they are eating. There are also other devices for measuring the emissions of individual animals, such as respiration chambers and hand-held spectroscopy devices. While these systems are accurate, using them to measure large numbers of animals is expensive and time consuming. To get around this, animal scientists use the measurements from small numbers of animals to create models of emissions from different feeding systems that can be applied to whole herds. Increasingly, however, people are exploring ways to measure whole herds. That makes sense, as the variability in emissions between one cow and another can be huge. Such large-scale measurements can also help confirm models based on emission measurements from individual animals.
Taste the difference A small amount of macro red algae, a type of seaweed, has been mixed with molasses and added to cattle feed to study its effect on cattle emissions. (Courtesy: Gregory Urquiaga/UC Davis)
In the UK, Allen’s drones are tethered to the ground by a 150 m-long tube. As they fly downwind of the methane source, they pump air down through the tube to a spectrometer, which identifies different gases based on their spectral signatures (Atmos. Meas. Tech. 13 1467). “We’re essentially measuring on the ground, but we’re measuring air that’s been brought down from where the drone was at that time,” Allen explains.
To test their mathematical models and accurately measure methane fluxes, Allen and his team developed a method that involved the controlled release of methane from a cylinder in a field. To ensure there was no cheating, the person who flew the drone, analysed the data and calculated the methane emissions was kept unaware of how much methane was in the cylinder, or the rate at which it was released. The result was a good correlation between the measurements and the known methane release. But achieving accurate results requires multiple flights adding up to a few hours of flight time.
Whole herd, or individual cow
Phil Garnsworthy, head of animal sciences at the University of Nottingham in the UK, is researching breeding cattle that are genetically predisposed to produce less methane. He says that the microfauna of the rumen varies from one cow to another, and appears to be fixed by genetics. “You can take the rumen contents from one cow and put it into another and after about two to four weeks, the population of bugs will go back to what it was before,” he explains.
“The cow has control over her rumen microbes. Breeding for low methane is breeding for a particular population of microbes in the rumen.” While respiration chambers provide accurate data, they are impractical for large-scale measurements of methane emissions, says Garnsworthy. That’s because when measuring methane emissions for breeding purposes, the cows need to be confined in the chamber for about three to seven days to get decent figures.
In the Nottingham study, the dairy herd is milked by robot. The cows, which wear chips for identification purposes, each come in three times a day and feed while being milked. To provide long-term methane measurements from individual animals Garnsworthy and his colleagues decided to try installing gas analysers in the feeding trays – a tube near the animal’s nostrils connected to an infrared spectrometer. “The first cow stuck its head in, and we suddenly saw this massive peak in methane about once every minute,” Garnsworthy says, “and we thought that must be it breathing in and out. Then we realized that it breathes in and out more often than once a minute, and that it was belching methane.”
The measurements from this technique are comparable with respiration chambers, the researchers found (Animals 10.3390/ani9100837). Garnsworthy says there are sectors of the scientific community who think his technique is “rubbish”, arguing that it is not as accurate as techniques like respiration chambers. But he believes that these other methods are not measuring cows under normal commercial conditions.
“They say our technique is so variable that it is not accurate enough for individual cows, but what we can do is measure them thousands of times and then get a pretty precise average,” he explains. And as this method involves measuring all the animals, individual measurements can be combined to provide methane emissions for the whole herd. That’s good because researchers can then compare emissions from farms or herds without having to measure individual animals. But, as Garnsworthy cautions, if you are interested in comparing animals for breeding purposes or conducting nutritional experiments, the observational unit needs to be individual animals. “It just depends what you want the data for.”
Elsewhere, researchers have been looking at systems that can monitor herds for longer periods, or even continuously. One option is to use point source lasers that criss-cross the field to measure emissions from the herd. In 2014 Richard Todd and colleagues at the US Department of Agriculture used this spectroscopy technique as well as the GreenFeed breath analysis system to measure emissions from 50 cattle grazing 26 hectares of Oklahoma grasslands. Three lasers scanned 16 paths over the prairie, while the cattle were fitted with GPS collars to track their locations.
Weather conditions and data on the positions of the cows allowed the researchers to measure upwind and downwind gas concentrations to track methane emissions from the cattle. They found that the laser-derived estimates of methane emissions were comparable to those from the GreenFeed system, although the open-path lasers tended to overestimate emissions, while the GreenFeed method tended to underestimate them.
Sensitive scanning
Other scientists are looking at even more sensitive techniques. In 2019 Daniel Herman of the National Institute of Standards and Technology (NIST) demonstrated that dual frequency combs can measure emissions from cattle herds (Conference on Lasers and Electro-Optics 10.1364/CLEO_AT.2020.AW4K.1). A frequency comb is a very precise spectroscopy instrument, in which lasers emit a continuous train of very brief, closely spaced pulses of light covering millions of different frequency peaks. The individual peaks look like the teeth of a comb – giving the tool its name. These laser pulses act as markers that allow the detector to measure the spectral signature of any material through which they have passed, with incredible precision. Dual frequency combs use two combs with slightly different tooth spacing. This creates an even more sensitive device that acts like hundreds of thousands of laser spectrometers working together.
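In standard comb notation (a textbook relation rather than a detail of the NIST instrument), the optical frequency of the n-th tooth is fixed by just two radio frequencies – the repetition rate f_rep and the offset f_0 – and beating the comb against a second comb with a slightly different repetition rate maps each optical tooth down to a detectable radio-frequency beat note:

\[ \nu_n = f_0 + n\,f_{\mathrm{rep}}, \qquad f_{\mathrm{beat},\,n} \approx n\,\Delta f_{\mathrm{rep}}. \]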
2 Detecting methane leaks Researchers led by Daniel Herman at the National Institute of Standards and Technology in the US have used a mobile dual-frequency comb laser spectrometer to measure bovine emissions. The spectrometer sits in the centre of a circle that is ringed with retroreflecting mirrors, with laser light from the spectrometer (yellow line) passing through a gas cloud and striking the retroreflector before being returned directly to its point of origin. The data are used to identify leaking trace gases (including methane), leak locations and emission rates. (Courtesy: Stephanie Sizemore and Ian Coddington / NIST)
In 2018 another NIST team, led by physicist Ian Coddington, had already demonstrated that a portable dual-frequency comb could be used to detect methane and other emissions outdoors, with extreme precision and over large areas (Optica 5 320). In field tests designed to simulate emissions from oil and gas production, Coddington’s team was able to measure methane emissions of 1.6 g per minute from a kilometre away (figure 2). Herman and colleagues used two dual-frequency combs on opposite sides of a feedlot containing around 400 cattle. One comb was downwind of the pen and the other upwind, to measure gas concentrations as air flowed in and out of the pen. The downwind system detected increases in methane, carbon dioxide and ammonia.
According to Allen, there are advantages and disadvantages to both the fixed-laser-based systems and drones. While the fixed systems can monitor emissions continuously, they often rely on the wind blowing in a certain direction, so there can be a lot of dead time with emissions being missed. Also, methane is buoyant and can rise quite rapidly. The ground-level lasers can miss these plumes, while the drones can cover that vertical dimension and be positioned to account for wind direction. However, drones are expensive, require someone to fly them and can only monitor for short periods at a time.
It is, however, also possible to measure methane emissions using aircraft. This is what Ray Desjardins, an atmospheric scientist at Agriculture and Agri-Food Canada, has spent decades working on. He explains that every twentieth of a second, the equipment onboard an aircraft measures the concentration of different gases in the air, as well as the vertical motion of the air. “Basically, if there is a difference in the concentration between the air going up and down you can easily calculate the emission of a gas,” he says. But aircraft-based measurements can struggle to measure specific sources of methane. Recent work by Desjardins found that measurements of agricultural methane emissions, particularly from animal husbandry, are much more accurate when the area being surveyed is less than 10% wetland (Agricultural and Forest Meteorology 10.1016/j.agrformet.2017.09.003).
Desjardins says that the aircraft technique is used to check if a farm’s inventory of greenhouse gases is accurate. “That’s what we’ve mainly used it for at this point.” Eventually, he says, it may be a way to reward farmers and give them credit for using certain greenhouse-gas mitigating techniques. “It might be a way to verify that what they say they’re doing, they’re doing,” he explains.
Kebreab also thinks that rewarding farmers could be a good way to incentivize them to cut methane emissions. Mitigation can be expensive and an added cost on already strained finances. “Having protocols that would help them recoup the money they’re going to be spending on buying whatever technology is available would be very, very helpful,” he concludes. “If the technology actually allows them to improve their productivity, then that’s a win–win situation.”
Example MR images from three paediatric brain tumour patients. First column: T1-weighted images after injection of gadolinium contrast agent; second column: T2-weighted images; third column: ADC (apparent diffusion coefficient) maps calculated from diffusion-weighted images. (Courtesy: CC BY 4.0/Sci. Rep. 10.1038/s41598-021-82214-3)
Brain and spinal cord tumours are the second most common cancers in children, making up about 26% of all childhood cancers. Many of these tumours are found in a region of the brain called the posterior fossa, with the most common site being the cerebellum. There are three main types of such brain tumours – ependymoma, medulloblastoma and pilocytic astrocytoma – and as the treatment and outlook for each is different, accurate identification of tumour type is important to help improve surgical planning.
The most widespread method used in current clinical practice to characterize these tumours is to acquire brain MRIs, which are then assessed by radiologists. However, this qualitative analysis is often challenging, due to the overlapping characteristics of these three types of cancer. This makes diagnosis difficult without the added confirmation of biopsy. One possible solution is to use diffusion-weighted imaging, which measures the random motion of water molecules in tissues, revealing details of the tissue microarchitecture. This more advanced MR technique can provide quantitative information regarding the tumour, in the form of apparent diffusion coefficient (ADC) maps, with the aim of improving diagnosis.
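For a single diffusion weighting b, the ADC in each voxel follows from the usual monoexponential signal model – the textbook definition, included here for clarity rather than taken from the study itself:

\[ S(b) = S_0\,e^{-b\,\mathrm{ADC}} \quad\Longrightarrow\quad \mathrm{ADC} = -\frac{1}{b}\,\ln\!\frac{S(b)}{S_0}. \]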
AI-based paediatric tumour classification…
A UK-based multi-centre study, led by the University of Birmingham and including researchers from the University of Warwick, focused on analysing ADC maps of paediatric brain tumours of the posterior fossa. The aim was to accurately identify the tumour type, without the need for biopsy. To achieve this, the group employed machine learning techniques and showed that it is possible to discriminate between the three most common types of paediatric posterior fossa brain tumour. They report their results in Scientific Reports.
Andrew Peet from the University of Birmingham.
Andrew Peet from the University of Birmingham explains: “When a child comes to hospital with symptoms that could mean they have a brain tumour, that initial scan is such a difficult time for the family and understandably they want answers as soon as possible. Here, we have combined readily available scans with artificial intelligence to provide high levels of diagnostic accuracy that can start to give some answers.”
…in a large-scale multi-centre study
The study involved 117 patients from five primary treatment centres across the UK (Nottingham, Newcastle, Great Ormond Street Children’s Hospital London, Alder Hey Liverpool and Birmingham Children’s Hospital), with MR images provided by 12 different hospitals (including the local hospitals where the children had their first scans) and a total of 18 different scanners. The images were analysed by a paediatric neuroimaging expert who manually drew regions-of-interest (ROIs) around the tumours. The researchers then extracted ADC values from the tumour ROIs and extracted a range of metrics from the ADC histogram.
The group used these features as input to two machine learning classifiers, a linear model called naïve Bayes (NB) and a non-linear model called random forest (RF), and trained them to noninvasively discriminate between the three most common types of paediatric posterior fossa brain tumours. The RF method achieved the best overall classification accuracy of 86.3%, while the NB classifier had the highest classification rates for ependymomas of 80.8%.
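A minimal sketch of this kind of pipeline is shown below, using scikit-learn's GaussianNB and RandomForestClassifier on per-tumour ADC-histogram features. The file names, feature set, labels and cross-validation setup are illustrative assumptions, not the study's actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical inputs: one row of ADC-histogram metrics per tumour ROI
# (e.g. mean, median, skewness, kurtosis, selected percentiles) and a
# label for each of the three tumour types.
X = np.load("adc_histogram_features.npy")   # shape (n_patients, n_features)
y = np.load("tumour_labels.npy")            # 0 = ependymoma, 1 = medulloblastoma, 2 = pilocytic astrocytoma

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```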
The authors report that these accuracies are not as high as previously seen in other studies. However, they note that their study is “much larger than the aforementioned studies with a more heterogeneous data input with regards to hospitals, scanners and acquisition protocols”. In fact, “previous studies using these techniques have largely been limited to single expert centres,” adds Peet. “Showing that they can work across such a large number of hospitals opens the door to many children benefitting from rapid noninvasive diagnosis of their brain tumour. These are very exciting times and we are working hard now to start making these artificial intelligence techniques widely available.”
Theo Arvanitis from the University of Warwick.
“Using AI and advanced magnetic resonance imaging characteristics such as ADC values from diffusion-weighted images can potentially help distinguish, in a noninvasive way, between the main three different types of paediatric tumours in the posterior fossa,” explains co-author Theo Arvanitis from the University of Warwick.
“If this advanced imaging technique, combined with AI technology, can be routinely enrolled into hospitals, it means that childhood brain tumours can be characterized and classified more efficiently, and in turn means that treatments can be pursued in a quicker manner with favourable outcomes for children suffering from the disease,” Arvanitis adds.
I grew up in Calcutta, India, and did my schooling there. My dad was a materials engineer – a “metallurgist”, in those days – who graduated from the University of Manchester Institute of Science and Technology, and worked for a British company before ultimately starting his own business back in India. So I grew up in an environment of bold entrepreneurship as my dad built the family business – a materials-processing firm. Science and technology were very much part of my growing up.
I had a keen interest in what went on in the factories from a very young age. Not that I was deeply interested in physics when I was in school – I was more interested in music and acting at that age. But I ended up doing a natural sciences degree at Calcutta University, studying physics, chemistry and mathematics. Then when I was 21, I went into the family business. But within two or three years, I realized that wasn’t the place I wanted to spend my entire life. It didn’t feel right. The only way to get out of such a situation, particularly in India, was to become a student again – it was the most conflict-free way of saying I wanted to do something different. I ended up in the UK, doing materials engineering at Northumbria University. It was a very quick decision.
What was your experience like there, and what did that lead on to?
Northumbria was fantastic. I got a first-class degree and had an amazing time as well. As a mature student – by that time, I think I’d grown up a little bit – I was more interested in what was going on in the classrooms rather than in student politics. But also, I felt I had to prove something. It was the first time I came out of my comfort zone, in a way.
After that, I worked for a local company in North Tyneside, Elmwood Sensors, that was part of an international engineering group. That role was very much materials-science-based. In those days Elmwood Sensors had R&D activity at Durham University, so I got to know the university quite well. Durham kindly offered me a fully funded PhD scholarship, and I jumped at it.
My PhD was in materials science, based in a physics department. So I always say I did “the dirty end of physics”, really, in condensed matter. It was, again, a very enjoyable experience. It’s a great city to be in and once you’re in Durham, you never leave Durham. I took the PhD as a job. I finished within three years, and I got a prize for the best thesis for condensed-matter physics at Durham as well, which was nice to have.
How did you get from there to helping to found Kromek?
When I was doing my PhD, I fancied going to work in the City. The world of finance fascinated me. So, as I was finishing, I started to apply for jobs and I had one lined up in investment banking in London. But when I finished, I went travelling for four months with my wife. And while we were travelling, Durham was trying to spin this business out. The founder, Max Robinson, decided to invest some money in this spinout and they were looking for somebody to lead it.
I was contacted by Durham because they knew I had an interest in entrepreneurship and I loved science and technology. So I decided to give it a go. In May 2003 I returned to the UK, and that month the company operationally started. We had one patent that the university wanted to commercialize, and were based in the technology-transfer offices at Durham. So I had a room, a secondhand computer and a piece of paper. That’s how Kromek started.
What were the early years of Kromek like, and how did you get to where you are now?
Kromek came out of a roughly 20-year research history at Durham University. In the mid-1990s, as a leader of a European funding consortium, Durham developed the intellectual property to grow cadmium telluride. That’s when the technology part began, and then in 2003 that was commercialized.
In the early days of Kromek, the business model was simple. We’d make a lot of these materials and scale it up. But we changed tack once we started to get an understanding of the market; we began to raise money and we realized it would be better to add more value to our offering. So we started developing electronics and application expertise. In 2013 we acquired one of the largest cadmium zinc telluride manufacturers in the US – so we are now a UK and US business, with 50% of our workforce in the US.
Max Robinson also had expertise in security imaging, which is how we got involved in that. Our first product happened to be a bottle scanner to detect liquid explosives – a product that is still used today in many airports. We also started seriously engaging with the US Department of Defense in 2008, which led to us protecting New York City and other cities against dirty bombs through large networked solutions for security.
We started making a spectrometer in 2009 – the GR1, which still remains the world’s smallest room-temperature gamma-ray spectrometer, at the size of a matchbox. In 2011 we had just begun importing and distributing in Japan when the Fukushima disaster happened. The GR1 was used in the disaster aftermath because it was so small and very high resolution – ideal for getting into the right spots to categorize material for reuse or shielding, for example.
So there are some very exciting technologies that have been developed in Kromek. We do medical radiation detection but we also do security – protecting ports, borders and cities – working with a fantastic customer base around the world.
What kind of skills are you looking for when hiring employees at Kromek?
The breadth of Kromek’s technology is considerable – we offer the whole platform. So we are interested in electronics engineers, semiconductor scientists, semiconductor engineers, process engineers, photonics scientists, core physicists and more. We do systems modelling and algorithm development for our software. We are increasingly getting into artificial intelligence on a number of fronts – around bomb detection and in other parts of our portfolio. We’ve got a team of data scientists and AI specialists who are delivering AI solutions for our products.
Of course, we also need all the other important functions – sales, marketing, digital marketing, account management, product management and all the associated services that go with running a company. Core skills are important, but I’m always looking for personalities and what people bring to the business, as well. Will they fit into the team? Are they able to contribute and interact with other people? Emotional quotient is important.
What advice would you give to young, early-career scientists considering launching a spinout company from their university?
Entrepreneurship is an interesting journey. It is like learning to fly while you’re flying, so the challenges can be very big. Taking a realistic view of your financing is extremely important. Starting out with a vision of what it is you’re trying to build is key. And you have to believe in it completely.
Particularly with technology businesses, you have to bring a lot of people along with you, fixed on that vision, and go through the ups and downs that come with it together – it’s not going to be a straight line. When you start out, you have nothing except an idea. You’ve got to make that idea work, and then you have to prove that people are willing to pay to buy it.
Surround yourself with the best people you can afford. My philosophy is very simple: everybody in the business is better skilled than me, and they’re better at doing their job than I am. Your job as chief executive is about creating the vision, leading, co-ordinating and making sure all those experts and experienced people you’ve got with you are driven towards a common goal. Don’t try to be an expert in everything yourself because you’re not.
Science is about breaking new frontiers, inventing new things and increasing the knowledge that exists each day. When you come to the other side of industry and entrepreneurship, you are generally trying to convert that science into usable products or services. It’s about solving somebody’s problem using the science – and that means that you have to be completely customer-focused.
Having a piece of technology and having a successful business are two very different things, because technology is only one element of a successful business. Making sure you’ve got a good team involves not just brilliant scientists and brilliant people, but a group of people who gel with each other, work well together and have a common goal. Also, without taking risks, you’re not going to succeed. In innovation, there’s a clear understanding that if you haven’t failed, then you haven’t tried. You learn more from failures than when you succeed. It’s an exciting journey that nothing else can replace.
What is a day in the life of Arnab Basu like?
It varies from day to day. I have my internal responsibilities as the chief executive, working with my team. I’m a people person, so – in our previous norm – I’d be going from desk to desk talking to people, to understand what’s going on and trying to solve problems or enable and encourage my employees. I used to travel a lot, so I spent a lot of time with customers and working on related issues. That’s what really gives me a buzz: sitting in front of a customer and learning what they’re trying to achieve and how we can help them. I also spend time on strategy, in terms of looking forward and seeing what technology and what products we have, what markets they fit in and how we can expand them. It’s really wide-ranging stuff. It is all market-driven and market-focused, really.
Handy handheld The D5 from Kromek is the world’s smallest radioisotope identification device. (Courtesy: Kromek)
How has the global pandemic altered the operation of your business?
COVID-19 has presented some unprecedented challenges. We’re still a relatively small business, operating on four sites, but with a global customer base. Some areas were badly affected. For example, in our medical-imaging segment, no hospital was focusing on installing new scanners while they were trying to understand how to handle the crisis.
We were lucky in a way, because we had taken steps towards adapting to new ways of working throughout the previous year. We did a pilot of homeworking 10 days before the first lockdown came into effect – but that was not because we had a grand vision of what was going to happen, it was just luck. We have learned a lot in the last year, and we’ve adapted. We’re a manufacturing business so we still have to have people in to do things in labs, to make stuff and build prototypes. You just can’t do that remotely, so we had to adapt our working conditions.
Of course, commercially, the business did suffer in the first half of 2020. But Kromek’s technology products are still needed in the market for very simple reasons. COVID hasn’t got rid of cancer, so early detection of cancer will still make a difference in people’s lives. The need to protect against terrorism hasn’t gone away, so products in that arena are still in demand. This growth market remains strong for us and will continue to be so.
What we are doing in biodetection is especially pertinent – developing a broad-spectrum monitoring system for airborne pathogens, on a very wide area basis. This is something that doesn’t exist today. These systems could sit in places where people are coming in and going out of countries, sampling air all the time. Any new variants, any new mutants, any novel viruses could be picked up and reported at an early stage.
Innovation will be key for us as a country, as a world and as a company. We must make sure we keep innovating, solving problems and providing solutions. That helps commercially in business – but it also aids humanity. That’s what the ultimate aim of science is and always should be.
What is your advice for students today – in a time when the future may seem bleak?
The world might be bleak today, but opportunities are borne out of situations like this. Fundamentally, the need for science, innovation and good people to drive the world forward will not change. Any physics graduate already has the advantage of being able to choose from a very wide spectrum of careers, whether you want to go into finance, product development, AI, data science or many other things.
I would say try to discover what you really want to do because it’s important that you pursue a career that you’re passionate about. I think the current generation of students are much more socially aware and put a lot more importance on the value of what they’re doing than my own generation. Our first priority was getting a mortgage and buying a car – we were much more materialistic, I think. That brings a different perspective and also a different way of looking at your career.
I think being a student now is not such a bad thing. COVID-19 will be behind you when you actually enter the job market. And there are some valuable skills that this pandemic has pushed people to learn. So use those, focus on those. Ultimately, skills are always needed.
Your first degree was in physics. What sparked your interest in the subject?
I enjoyed physics in high school. My physics teacher, Dan Smalley, instilled a great deal of confidence in me, and the way he applied the concepts of the field to the real world initially sparked my interest. To no-one’s surprise, after also taking advanced placement (AP) physics with Smalley, I decided to pursue physics further in my undergraduate career. In both high school and university, I wanted to find a field that had a mixture of science, health and medicine, but was uncertain of what satisfied that combination. A physics and public health double major at university was the closest combination to meet my interest.
While I was an undergraduate I took organic chemistry courses and I enjoyed them, which led me to apply to graduate programmes in chemistry. I told myself if I got into at least one programme, I would absolutely go for it. Thankfully I did, and I chose to go to the State University of New York at Buffalo (SUNY Buffalo). I chose a principal investigator from listening to one of his seminars, where he spoke about nanomedicine and nanotechnology (encompassing the science and the biological applications) – and I realized that this was the perfect combination for me. I was fortunate to be selected as a graduate student and part of his laboratory based on merit.
What nanotechnologies are you working on?
Currently I am developing a nanoformulation that is suitable for the targeted therapy of hypoxic regions of glioblastoma multiforme – a type of brain cancer. Specifically, my project involves synthesizing a hierarchical nanostructure that consists of ultra-small core-shell lanthanide-based nanoparticles encapsulated with a PLGA-chitosan coating, which is then surface-modified with a hypoxia-targeting moiety.
The aim is to develop a drug delivery system that will be able to cross the blood-brain barrier and target hypoxic regions of glioblastoma. These hypoxic regions have low oxygen levels as well as low pH, and are resistant to chemo- and radiation therapy. Because of that, the tumours tend to metastasize, invade and relapse. My research goal is to improve the oxygen and pH levels within these hypoxic regions, while simultaneously providing dual imaging capabilities (e.g. MRI and CT), thereby making the chemotherapy more effective.
It is really hard for anything (beneficial or not) to actually cross that blood-brain barrier because that is what keeps out pathogens and bacteria from entering the brain. We developed in vitro models, which suggested that a good percentage of my nanoparticles have crossed the barrier. The next step is determining if it works in vivo with animal studies.
You mentioned that your principal investigator made a positive impression on you from the start; what do you enjoy about working in his lab?
Part of it is how he presents his research; thoroughly breaking down complex ideas. Working in his laboratory, I am mentored by physicists, biologists, engineers, medical doctors and a variety of different experts. His laboratory is multifaceted, and the research projects conducted have a variety of interdisciplinary chemistry topics that overlap with medicine. The overlap with medicine is my priority interest, again incorporating science and medicine.
You founded Black in Nanotech Week, which ran for the first time in December 2020. Can you describe the event?
The virtual event included conference-style presentations from faculty members so participants could learn about different applications of nanotechnology. We ran Q&A sessions with experts from academia and industry and we also had graduate admission recruiters detailing their nanotechnology programmes, degree options offered, the application process, and the eligibility requirements. Plus of course, participants shared their own research in nanotechnology. One of the highlights of the event was guest speaker Hadiyah-Nicole Green, who is one of the few Black women to have earned a physics doctoral degree and the first person to cure cancer using laser nanotechnology. She was willing to share her experiences and resources with the participants.
Black in Nanotech Week ended on a Saturday with a self-care event: a 45-minute yoga and meditation session. As scientists and as humans in general, we tend not to take breaks from our research or from our work. So, it is always important to step back and take care of ourselves and our mental health.
What is next for Black in Nanotech Week?
We plan to expand Black in Nanotech into an organization that highlights and celebrates Black voices in science and specifically nanotechnology. Our goal is to emphasize the importance of representation to younger individuals who are in middle and elementary school. To expand the percentage of Black individuals in STEM, recruitment efforts must start at the younger educational level.
Mentorship is clearly important to you, is that why you decided to start Black in Nanotech Week?
Yes, as I move forward with my career, I continue to find myself too often the only Black woman in a room full of bright minds, especially in nanotechnology. Unfortunately, there are very few who look like me in the field, and I, as a Black woman, want to help pave the way for other women of colour who are interested in nanotechnology. Thus, being a STEM diversity advocate is my way of changing the cold climate that marginalized groups consistently face in the scientific fields.
What are some of the challenges you come across while mentoring?
One of many problems is that we tend to start mentoring undergraduates when they are choosing to go to graduate school, but my goal is to begin even younger. Under-representation starts before the undergraduate and high school levels and I think mentoring should begin in elementary school – because that is where students in some communities are not seeing the opportunities in STEM.
When I was in high school, I was unaware of the many different scientific fields. I knew the basic subjects of chemistry, biology and physics, but STEM is more diverse than that.
You are also working on a project that is aimed specifically at Black women; can you tell us about STEMNoire?
I had the great opportunity to be on the planning council for STEMNoire, which is a research conference and a holistic wellness retreat for Black women in STEM. We realized that we wanted a conference that provides an open space to talk about our journeys, our successes, our failures and the obstacles that we have overcome without delegates feeling that they are in a minority. We wanted a group where Black women are the majority and are comfortable enough to share everything that we have experienced in our scientific fields.
In-person conferences are a bit difficult at the moment; are you planning online events for STEMNoire?
Yes, our inaugural conference was planned for summer 2020 in San Juan, Puerto Rico, but the COVID-19 pandemic occurred. Instead, we held a one-week virtual conference highlighting Black women in STEM and related fields in July 2020. Our official conference will be held in June 2021. The content is yet to be finalized. Participants can expect a variety of events, including various keynote speakers, poster sessions, and oral presentations – focusing on Black women’s current research work, expertise and experiences within their respective fields.
And for you personally, what are your plans after you finish your PhD?
I hope to be in a laboratory still working in nanotechnology – ideally in a pharmaceutical company or industry where I continue cancer-related research because it has definitely touched home. Many members of my family are cancer survivors including my dad, brother and grandmother. Thus, cancer research will always be dear to my heart. I would love to keep going in that field.
Large magnetic fields are usually a killer for superconductivity, but researchers in the US have now fabricated a material that simultaneously superconducts and exhibits the quantum Hall effect – a phenomenon that requires a strong magnetic field. The new material, made from nanoscale layers of gallium nitride and niobium nitride, could be used for “topological” quantum computing and to make more energy-efficient electronic devices.
The quantum Hall effect occurs when a current passing along the length of a thin conducting sheet gives rise to an extremely precise voltage across opposite surfaces of the sheet. This voltage only occurs when a strong magnetic field is applied perpendicular to the sheet, and it is quantized – that is, it can only change in discrete steps. Another consequence is that electronic states on the surface of the two-dimensional (2D) sheet are said to be “topologically protected”. This protection arises because electrons in these “edge” states can only travel in one direction, and they also steer around imperfections or defects in the material without backscattering. Since backscattering is the main energy-dissipating process in electronic devices, such protected states could be useful components in next-generation energy-efficient devices.
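The size of those discrete steps is set by fundamental constants alone. In the integer quantum Hall effect the transverse (Hall) resistance takes the values given by the standard relation below – a textbook result, not one specific to this work:

\[ R_{xy} = \frac{h}{\nu e^{2}} \approx \frac{25.8\ \mathrm{k\Omega}}{\nu}, \qquad \nu = 1, 2, 3, \ldots \]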
Another important benefit is that an edge electron with a certain momentum cannot scatter into a state with opposite momentum, since to do so it would also have to flip its spin. Topologically protected states might thus be ideal for quantum-computing applications, in which defects usually destroy information carried in the spin state of electrons.
Topologically protected superconducting electron states
In recent years, researchers have been trying to create heterostructures – 2D semiconductors grown on superconductors – in which the states of the electrons that make up the superconducting current are also topologically protected. This supercurrent comes from electrons with opposite spins that have paired up and can move through the material without any resistance below a certain critical temperature. Materials that can accommodate such supercurrents are limited, however, because the magnetic field required to produce the quantum Hall effect destroys superconductivity – either by breaking up the electron pairs or by trying to make both electron spins align in the same direction.
Nitride materials such as these are routinely employed in light-emitting diodes and transistors for products like smartphones and home lighting, and the team chose them in part because they are robust. However, they do contain more structural defects than other technologically important materials such as silicon. To reduce the number of defects, and thus create a higher-quality heterostructure, the researchers modified the growth process employed in their earlier study. This modification also allowed them to precisely engineer the position of the electrons in the GaN atop the NbN.
Narrow “window” of temperatures and magnetic fields
Using measurements of resistance versus applied gate voltage at a temperature of 390 mK, the researchers showed that superconductivity in the improved NbN layer could survive applied magnetic fields as high as 17.8 T. Meanwhile, the improved GaN semiconductor was of high enough quality to exhibit the quantum Hall effect at lower applied magnetic fields of 15 T. “Both these improvements mean the quantum Hall effect and superconductivity can occur at the same time in the heterostructure over a certain ‘window’ of temperatures and magnetic fields (that is, below 1 K and between magnetic fields of 15 to 17.8 T),” study lead author Phillip Dang tells Physics World.
According to the team, the new GaN/NbN heterostructure could be used in quantum computing and low-temperature electronics. Reporting their work in Science Advances, the researchers say they now plan to further investigate the interaction between superconductivity and the quantum Hall effect in this material.
Researchers in the US have shown that 2D materials can be used to create tunable optical second-harmonic generation. They achieved this by rotating layers of hexagonal boron nitride relative to each other, creating an efficient and controllable optical response. This could have advantages in a range of laser-based applications, the researchers say, and could provide a compact way of generating entangled photons for quantum information processing and computing.
Optical frequency conversion is a nonlinear process in which materials are used to generate light at colours, or frequencies, different from that of the input light. This can be exploited to generate light at frequencies for which there is no convenient laser source and is used in many applications, such as quantum photonics, super-resolution imaging and optical sensing. Second-harmonic generation is a common type of frequency conversion in which two input photons are combined to produce one photon with twice the energy.
Ubiquitous green laser pointers, for example, use second-harmonic generation to frequency-double infrared lasers to green, which is simpler and cheaper than producing green laser light directly. But like most nonlinear optical processes, second-harmonic generation relies on crystals, and the rigid structure of crystals makes controlling the output signal – or changing any aspect of it – difficult, because their optical properties cannot easily be adjusted. “If you wanted to make [a green laser pointer] a different shade of green, for example, that would be really hard,” explains James Schuck, a mechanical engineer at Columbia University.
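As a back-of-the-envelope illustration (the specific wavelengths here are typical values, not figures taken from this work): because two input photons combine into one photon of twice the energy, the output wavelength is half the input wavelength,

λ_out = λ_in / 2,

so infrared light at around 1064 nm inside a green laser pointer emerges as 532 nm green light.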
As well as adjusting the output frequency of a second-harmonic generation response, you might also want to modulate it, to make it brighter or dimmer. You can do this by adjusting the laser source, but it would be preferable to have a constant source and adjust the nonlinear response of the material itself. The output can also be modulated by applying a voltage to the crystal or hitting it with an ultrafast laser pulse.
However, Schuck and his colleagues wondered whether better tunable second-harmonic generation could be achieved using methods developed for “twistronics”, a technique in which two layers of 2D materials are twisted, or rotated, relative to each other. This rotation changes the electronic properties of the materials, with researchers being particularly interested in its effect on superconductivity and insulating behaviour. “It wasn’t a big leap for us to say well you probably change the optical properties too,” Schuck says.
In work described in Science Advances, the researchers rotated layers of hexagonal boron nitride to achieve highly tunable second-harmonic generation. They found that the technique enabled them to modulate the intensity of the second-harmonic output by an amount 10 times larger than is achievable by applying electric pulses or ultrafast lasers to conventional optical crystals. This high tunability also persisted over a broad band of frequencies in the visible spectrum.
To rotate two layers of boron nitride relative to each other, the researchers etched the top layer into cog-like shapes, or micro-rotators. They then used an atomic force microscope (AFM) to push on and shift this top layer. This allowed them to dynamically tune the symmetry of the layers, adjusting the optical response.
Boron nitride crystals are etched into micro-rotator shapes and pushed by AFM tips. (Courtesy: Nathan R Finney and Sanghoon Chae/Columbia Engineering)
The researchers have named this technique “twistoptics”. The concept works with many other 2D materials, Schuck says. However, boron nitride is particularly good because it can produce a second-harmonic generation response over a wide range of input frequencies.
Schuck tells Physics World that twistoptics could be used to generate entangled photons for applications in quantum optics and quantum information. This process is essentially the reverse of second-harmonic generation: a high-energy photon sent into the material splits into two lower-energy photons that are quantum entangled.
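Energy conservation in this reverse process – a relation from standard nonlinear optics rather than a result quoted in the new study – requires the energy of the input photon to be shared between the two entangled photons:

ħω_pump = ħω_1 + ħω_2,

so in the symmetric case each entangled photon carries half the energy, and hence twice the wavelength, of the photon sent in.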
Crystals that can perform this down-conversion and produce entangled photons already exist, but they are quite large. By stacking multiple layers of 2D material, you could produce many twisted interfaces and thus a very efficient nonlinear optical response in a very thin material, Schuck says. As well as reducing the physical size of the source, this would provide more control over the output frequencies and intensities.
“We are trying really hard to see if we can create entangled photons,” Schuck tells Physics World.
In November 2019, 12 bottles of Chateau Petrus 2000 – a wine from Bordeaux, France, that would set you back a few thousand pounds per bottle – hitched a ride to the International Space Station (ISS). The bottles spent around 14 months in the microgravity environment before returning to Earth in January, after which they were sent to the Institute of Vine and Wine Science at the University of Bordeaux to be analysed – and, of course, tasted. On 1 March, one of the bottles was opened and blind-tasted by 12 connoisseurs alongside a bottle from the cellar. The team have now announced their highly anticipated preliminary results, finding that the wine from the ISS had “heightened floral characteristics” and was one to three years further “evolved” than the bottle that had stayed firmly on the ground. The team now plan to take a closer look at the biochemical properties of the space-aged wine, followed, of course, by more tasting.
Still on space: ever dreamed of becoming an astronaut? Well, now is your (latest) chance. Applications are open for the European Space Agency’s 2021 astronaut selection. As well as recruiting new members to the astronaut corps, ESA has also issued a vacancy for a “parastronaut”, which will involve taking part in a feasibility study for astronauts living with specific physical disabilities. “Representing all parts of our society is a concern that we take very seriously,” says David Parker, ESA’s director of human and robotic exploration. “Diversity at ESA should not only address the origin, age, background or gender of our astronauts, but also perhaps physical disabilities.” The closing date for applications is 28 May, and those who make the cut will then face a gruelling six-stage selection process, with successful candidates announced in October 2022. Ad Astra!