Contrary to traditional thinking that the atmosphere drives sea surface temperature and surface heat flux variability in the mid-latitudes, researchers from the US have shown that where surface temperature gradients are strong, the ocean drives this variability. The finding highlights the importance of resolving internal oceanic processes as well as atmosphere-ocean interactions in models.
Stuart Bishop from North Carolina State University and his collaborators at the National Center for Atmospheric Research showed that in regions away from these strong mean sea surface temperature gradients, the traditional idea of atmospheric forcing still holds.
Bishop and the team found that oceanic weather drives the surface temperature variability of the western boundary currents and the Antarctic Circumpolar Current in the Southern Ocean, particularly the Agulhas Return Current. By damping the sea surface temperature anomalies generated by eddy stirring with spatial and temporal smoothing, they showed that the oceanic influence on sea surface temperature variability increases with time scale but decreases with spatial scale. These regions transition from ocean- to atmosphere-driven once the smoothing scale exceeds roughly 500 km, though this figure varies widely geographically.
This result contrasts with previous work, in which mid-latitude sea surface temperature variability was found to be atmosphere-driven and the ocean passive. Many of these earlier studies, however, used rather coarse-resolution observational estimates of sea surface temperature and surface heat flux, or coarse coupled global climate models, so ocean eddies were not resolved. The researchers involved noted this limitation, finding their results applicable only to regions away from strong oceanic currents.
Many scientists feel it’s important to understand ocean-atmosphere energy exchange and to capture it in climate models, to represent both the mean state and variability of the climate system accurately. This study highlights the value in precise and detailed observational estimates as well as the importance of using eddy-resolving models to simulate ocean-atmosphere interaction, especially in regions with strong sea-surface temperature gradients.
The researchers used state-of-the-art surface heat flux and ocean temperature observational estimates. A simple energy balance model of coupled air-sea interaction and a lagged correlation between ocean temperature and surface heat fluxes helped discriminate between atmosphere-driven and ocean-driven variability.
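The sign-and-lag logic behind that diagnostic can be sketched with synthetic data. This is an illustrative toy, not the researchers' actual analysis: the red-noise processes, coefficients and the `lagged_corr` helper below are invented for demonstration, with the heat flux Q taken as positive upward (ocean losing heat). When the ocean drives variability, a warm anomaly leads an upward flux of the same sign; when the atmosphere drives it, a flux anomaly into the ocean (negative upward Q) precedes the warming.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def lagged_corr(x, y, lag):
    """corr(x[t], y[t+lag]); positive lag means x leads y."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

# Case 1: ocean-driven. SST anomalies arise from internal ocean
# "weather" (red noise); the warm ocean then loses heat upward,
# so upward flux Q follows SST with the same sign.
sst = np.zeros(n)
for t in range(1, n):
    sst[t] = 0.9 * sst[t - 1] + rng.normal()
q_ocean = 0.8 * sst + 0.5 * rng.normal(size=n)

# Case 2: atmosphere-driven. Random atmospheric flux anomalies
# force the SST: a downward flux anomaly (negative upward Q)
# slowly warms the mixed layer.
q_atm = rng.normal(size=n)
sst2 = np.zeros(n)
for t in range(1, n):
    sst2[t] = 0.9 * sst2[t - 1] - 0.1 * q_atm[t - 1]

print("ocean-driven:  corr(SST leads Q) =", round(lagged_corr(sst, q_ocean, 1), 2))
print("atm-driven:    corr(Q leads SST) =", round(lagged_corr(q_atm, sst2, 1), 2))
```

In the first case the correlation at positive lag is strongly positive; in the second it is negative, which is the asymmetry the researchers exploited to separate the two regimes.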
Who inspired you to study physics? Perhaps you had a great teacher or a supportive parent. But how might it feel if you’ve got a sibling who’s also into the subject? Would they be your rival or would the two of you support and nurture each other?
These issues facing “sibling scientists” are the cover feature of the August issue of Physics World magazine, which is now out. It turns out that sibling scientists are generally a force for good, especially when the elder child acts as a mentor and guide – often providing information, support and advice to the younger sister or brother.
I wonder, in fact, if we should do more to encourage boys and girls who are already in thrall to physics to persuade their siblings into the subject too. Of course, our feature isn’t an exhaustive scientific study, so do tell us if you know of other examples of sibling science.
Remember that if you’re a member of the Institute of Physics, you can read Physics World magazine every month via our digital apps for iOS, Android and Web browsers.
Ability to adapt: Huub Janssen. (Courtesy: Janssen Precision Engineering)
What led you to start Janssen Precision Engineering (JPE)?
I did my degree in mechanical engineering at the Eindhoven University of Technology in the Netherlands, and directly after finishing my studies I worked for a couple of large companies – first for ASML, which makes machines for the fabrication of transistors and silicon wafers, and then for Philips in its liquid-crystal displays business. At Philips, I was responsible for the equipment in two factories, and I did that for about six or seven years, but then a big reorganization within the company meant that the more advanced of the two factories I worked on was going to be closed. This meant that the majority of the interesting work I did would fall away because the other factory was really old-fashioned, so that helped convince me to go for a new challenge. Also, when I was still working for Philips, they asked me to set up projects for external companies, and I thought, “Okay, if I can do this within Philips, I can also do it on my own.”
How did you get funding for the business?
I had some savings, of course, but I didn’t get any external funding, and I also had no orders when I started – I really had to look for customers. However, if you are starting a project-based company there’s not a huge investment involved in finding a desk with a computer where you can do mechanical drawings, and a small space to organize and build mechanical and electronic parts.
How has JPE changed over the years?
Initially, I did project work for semiconductor companies that wanted to move instruments and machines very precisely, in the sub-micron and nanometre range. That was 26 years ago. At that time, I was designing components that could operate in an ambient environment, rather than for vacuum or cryogenics. But maybe 15 or 20 years ago, the semiconductor industry began to shift to a vacuum environment. This was a new challenge, and I like technical challenges. Then, several years later, I got a request from a client in astronomy who wanted a mechanism that would work at cryogenic temperatures, in a liquid-nitrogen environment. We had no experience with those conditions, but we had a brainstorming session, came up with a concept and discussed this with the client. Their reaction was, “Okay, that could work, but we would like to see a proof of concept.” So we built a demonstration unit, proved that it worked at low temperatures, and then went on to develop a complete instrument called a Configurable Slit Unit for the Gran Telescopio Canarias. It’s a kind of diaphragm where you move a mask to block certain stars from the sky, and you can use a computer to change the mask position. It’s a very complex instrument, with lots of actuators and sensors operating in a vacuum and cryogenic environment.
That project took us more than 10 years, with some stops and milestones in-between, and during that time we got a request from scientists who had read that we had experience of positioning objects in cryogenic temperatures. But we had been working at 80 K and they wanted something that would work at 4 K. Well, we never say no! And since then we have been getting more and more requests in that area. A lot of physicists are struggling with instruments in a cryogenic environment – it’s really technologically challenging, so we try to come up with solutions that are at least one step ahead of what people can buy off-the-shelf.
Have you had any significant setbacks? If so, how did you get past them?
As you may know, the semiconductor industry is very cyclical – it has ramped up and ramped down at least three or four times in the past 20 years, and it does it in an extreme way. At times, semiconductor companies will not outsource any work at all, or very little, and then at other times the ramp-up is incredible. You need a lot of flexibility, but you can’t really say to your employees, “Okay, now I need to employ two people, but later I will need 20 or 30.” So I have to find other interesting work, maybe even unpaid work, to give them during the down periods.
Did one of those downturns help lead you into scientific instrumentation?
Yes, I think that’s a good point. You need a mixture. In astronomy, most projects are long-term (sometimes even too long-term, in my view), so the phasing is different from the semiconductor industry.
Any advice for someone thinking of starting their own company in this field?
In this field, I would say that you have to start with a good product, by which I mean you need to make a technological breakthrough in a place where people need it. So I don’t think you should take an instrument that is already on the market and try to improve it, because there is very little chance that you can beat a competitor who already has their product and market organized. But technology breakthroughs alone are not sufficient. They may be enough to get started, but then you have to be aware of the market potential and get it well aligned with your product. Otherwise you will not earn back the money you spent developing it.
Space is not a friendly place for instruments to operate. Cosmic radiation fries their electronics; they have to cope with extreme variations in temperature; and for missions venturing close to the Sun, the solar wind can gradually erode their exposed optical surfaces. Consider Solar Orbiter, a European Space Agency mission designed to observe the Sun from relatively close quarters. The spacecraft’s elliptical orbit will periodically take it to within 42 million kilometres of the Sun, where it will reach temperatures as high as 790 K. A sunshield will provide some protection, but Solar Orbiter’s 10 instruments – including an ultraviolet spectrometer called SPICE (Spectral Imaging of the Coronal Environment) – will still have to cope with intense heat. Furthermore, several of these instruments need to be actively chilled to as low as 213 K.
Some planetary environments are even harsher. Venus, for example, has a surface temperature of 730 K, a surface pressure nearly 90 times higher than Earth’s, and a corroding, acidic atmosphere. Only a handful of spacecraft have ever successfully landed there, and those that did survived only a few hours inside bulky pressure vessels before succumbing to the unforgiving conditions. If we want to send spacecraft there again, we will need to find alternative solutions.
The combination of several different extreme conditions poses tough challenges for instrument designers. However, careful planning, design and testing can minimize the hazards, while ingenious modelling can help scientists and engineers identify potential problems early. For example, one of the most important tasks at the beginning of Solar Orbiter’s design phase was to develop a thermal model of the spacecraft. This model indicates which parts of the spacecraft become hottest, which parts remain coolest and which areas can safely be used to shed heat. The thermal models for each of the instruments on a spacecraft must complement all of the others, as well as the craft’s overall thermal profile. Care has to be taken to ensure that one instrument does not shed excess light and heat in a way that warms its neighbours. Only when this is achieved is it possible to begin building the instruments. “It’s a high-risk business, so you have to show feasibility right from the start,” says Martin Caldwell, an instrument designer at the UK’s Science and Technology Facilities Council’s RAL Space, where SPICE was built and tested. “No instruments get designed or built without knowing they are going to survive.”
Hiding from the heat – or not
Several techniques exist to boost instruments’ chances of surviving harsh environments. Most of the instruments on Solar Orbiter, for example, sit behind a 3.1 × 2.4 m sunshield built from multiple layers of insulation foil sandwiched between two black-plated surfaces. The instruments can’t hide from the sunlight entirely, of course, because they need to make observations, so the shield contains small doors leading to feedthroughs that allow sunlight into the instruments. Because the shield itself can expand with heat, these doors are deliberately oversized so that lines of sight to instruments are not blocked.
SPICE is designed to allow 30 W of sunlight to enter via its feedthrough. Light then strikes its main mirror coated with boron carbide, which reflects ultraviolet frequencies onto the instrument’s diffraction grating and then onto its detectors. Visible and infrared frequencies, which carry much of the heat, pass straight through the mirror and back out into space. In this way, the mirror and its exit path to space help to cool the instrument “passively”, as opposed to “active” cooling, which is typically provided by louvers or radiators filled with liquid helium. “It’s amazing how, with just passive cooling, you can get very different temperatures around a spacecraft,” says Caldwell.
Controlled environment: Inside the GEER chamber at NASA’s Glenn Research Center. (Courtesy: NASA Glenn Research Center)
What happens when there is no way of hiding from the heat, like on Venus? The short lifespan of previous landers has helped dissuade space agencies from sending more such craft to its surface, but that could be about to change. Scientists working at NASA’s Glenn Research Center have developed new circuitry that could potentially survive for months, if not years. Electronics built from the semiconductor silicon carbide (SiC) have been in use on high-power devices for some time now and are well known for operating at high temperatures. However, says Philip Neudeck of the Glenn Research Center, what most people consider “high temperature” is only around 400 K to 520 K – still a far cry from Venusian conditions. “There have been much briefer prototype demonstrations of silicon-carbide electronics at Venusian temperatures, but what hadn’t been demonstrated, until our recent work at NASA, is their long-term durability, especially within Venus’ atmospheric chemistry,” he explains.
Neudeck’s team has shown that SiC electronics have what it takes to survive on Venus. After building a test model with an integrated circuit chip containing just 24 transistors, Neudeck’s group used the Glenn Extreme Environments Rig (GEER) to simulate Venusian conditions and put the SiC electronics through their paces. This 1.2 m long stainless-steel chamber has a volume of 800 litres and can simulate not only the pressure and temperature on Venus, but also the 10 most abundant chemical constituents of the planet’s corrosive atmosphere with an accuracy down to mere parts per million. The rig can also be adapted for other environments, including those in the atmospheres of Jupiter and Saturn, and it can achieve even higher pressures than those found on Venus, albeit at lower temperatures. In a way, it’s the reverse of a vacuum chamber.
No instruments get designed or built without knowing they are going to survive
Martin Caldwell
“It’s no small feat to replicate those conditions on Earth, especially inside a chamber this size,” Neudeck says. His group had access to the GEER chamber for a straight run of 21 days and at the end the SiC electronics were still operating at full capacity. In comparison, the current record survival time on Venus is a mere two hours and seven minutes, achieved by the Soviet Union’s Venera 13 probe in 1982. The ultimate aim, Neudeck says, is to develop more complex electronics that can function indefinitely.
Thermal vacuum testing
At RAL Space, testing is also carried out in vacuum chambers built on a similar scale to the GEER. The chambers at its facility in Oxfordshire range from 0.5 m in diameter to a 5 m diameter chamber recently built by the Dutch company Schelde Exotech and designed specifically for instrument calibration testing. A second chamber, 5 m in diameter and 6 m long, is being manufactured by Cadinox, which won the tender from Added Value Solutions – a Spanish firm with offices near RAL Space. Several even larger vacuum chambers, up to 8 m in diameter, are also being considered for the future, according to Giles Case, manager of the assembly, integration and verification facility at RAL Space.
Such large vacuum chambers are “pretty bespoke”, Case says. The specifications for chambers intended for spacecraft and instrumentation testing can vary, he explains, but in general they are constructed from electro-polished stainless steel that reduces outgassing. Beyond that, individual specifications may include how much weight a chamber can carry, how many instruments can be mounted within it and how many doors and viewing points it has. A 5 m vacuum chamber costs around £10m, Case says, and the high cost is one reason why there are only around 10 large facilities for testing spacecraft and their instruments in Europe. By expanding its vacuum-chamber capability, he argues, RAL Space will help the UK to “access the wider space market”.
Tough tech: The integrated circuits made from silicon carbide before they entered the GEER test chamber (top) and after spending three weeks inside it (bottom), when they were still operating at full capacity. (Courtesy: NASA)
Final testing in a vacuum chamber can take weeks, as it did for SPICE in March and April 2017. Once inside the chamber, SPICE was mounted on a platform that acts as a temperature interface, controlled by a black-painted thermal shroud that surrounds the instrument under testing. In addition to a bright lamp to simulate sunlight, the temperature interface mimics the operating temperatures that SPICE will experience in space. This set-up is capable of producing temperatures as low as 93 K or as high as 373 K. SPICE was tested at 350 K – 20 degrees higher than its intended operating temperature.
Setting an instrument up inside a vacuum chamber is complicated, so to save time and effort, Caldwell explains that thermal tests are generally conducted alongside other performance trials. For SPICE, those tests include an ultraviolet test with a narrowband lamp to ensure the instrument was correctly calibrated and focused (some of SPICE’s ultraviolet testing had previously also taken place at the Metrology Light Source in Germany). There’s also a shake test and a long bake-out to clean the instrument. The vacuum chamber itself had to be extremely clean because even the smallest amount of organic molecules sticking to the optics could cause darkening when exposed to ultraviolet light. To that end, the instrument had to undergo constant gas purges to keep the optics clear of contamination.
It’s amazing how, with just passive cooling, you can get very different temperatures around a spacecraft.
Martin Caldwell
When the testing is finished, an instrument typically remains in the chamber while the observing ESA specialists analyse the results. “Very often they might want extra measurements,” Caldwell says. “So it makes sense to keep it in there until we’re ready for delivery.” After a month inside the chamber, SPICE was delivered to Airbus in Stevenage where, on 21 May 2017, it was mated to Solar Orbiter’s spacecraft structure – the culmination of five years’ worth of development at RAL Space. Once the other instruments are added, the entire spacecraft will undergo final thermal testing inside a vacuum chamber to compare with the previous testing performed on the earlier thermal model. If all of the results check out correctly, the orbiter will blast off from Earth and head towards the Sun in October 2018.
Revise and adapt
Neudeck also has big plans for 2018. By then he hopes to be testing heat-tolerant SiC electronics that have 10 times more transistors than the first prototype, which is roughly the same electronic complexity as the integrated circuits aboard early solar-system missions such as the Pioneer and Viking probes. However, that doesn’t necessarily mean a new Venus mission is on the horizon. Echoing Martin Caldwell’s warning that space is a high-risk business, Neudeck emphasizes the need to minimize that risk by subjecting the instrumentation to considerable scrutiny. “We first have to critically evaluate, improve and understand the technology, and be able to produce and qualify it in sufficiently large testing volumes, before we can actually fly it,” he says. “But there’s no doubt we are laying the path.”
Commercial interests could help accelerate development in this area. Aerospace companies are interested in using Neudeck’s technologies for sensors that sit inside the hot areas of jet engines to monitor and control the rate at which fuel is burnt. Additional interest comes from the energy and automotive industries. “We’re looking to leverage these commercial interests to get these electronics to Venus faster,” says Neudeck.
SiC electronics could potentially also find other space-based uses. In addition to coping with very high temperatures and pressures, the electronics should also theoretically exhibit some resistance to radiation, although tests still need to be conducted to prove and quantify the radiation advantage. If it can be proven, though, then it could ultimately make it possible for spacecraft to explore the harsh radiation-belt environment around Jupiter. As recently as October 2016, NASA’s Juno probe suffered a glitch as radiation damaged its onboard electronics, causing it to miss an opportunity to take data during a close fly-by of Jupiter. SiC electronics might not only let spacecraft survive around the giant planet, but also permit deeper ventures into its atmosphere.
When it comes to high-risk environments in space, borrowing from what has come before can sometimes be the safer option. “One of the things that made Solar Orbiter possible was learning from the BepiColombo spacecraft,” Caldwell says, referring to ESA’s upcoming Mercury mission, which will also launch in 2018, but which has been in development for longer than Solar Orbiter. “In the space business, everybody is desperate to rely on things that have been done before, that are available and safe to use,” he adds. However, as the SiC electronics show, sometimes the old technology simply isn’t good enough. In those cases, instrument designers must start again, and reinvent a better, tougher wheel.
Helium is the second most abundant element in the universe, but here on Earth it is classified as a scarce and non-renewable resource. It is also much in demand, with applications including medical cryogenics (as of 2016, around 30% of helium consumed in the US went to equipment such as MRI scanners), arc welding, leak detection and superconductivity. The combination of scarcity and high demand has pushed prices skyward: the price of grade-A helium (refined to 99.997% purity) has risen by around 175% over the past 10 years, to $7.21/m³, and even crude helium is now worth around $3.75/m³ – 30 times more than the equivalent volume of crude methane.
In many markets, numbers like these would make investors salivate. Yet so far, all helium-rich fields have been discovered accidentally by companies searching for petroleum. This lack of intent has contributed to a great deal of wastage: we know that many natural gas production companies are unwittingly venting helium, either because they fail to recognize the value of this minor component or because they are unaware of its presence. Indeed, our discovery of one such case produced the idea for our current research project.
What is needed is a helium-prospecting industry powered by people who understand both the economic value of this rare material and how to explore for it. When oil and gas companies prospect for petroleum, they benefit from a well-developed exploration strategy that helps them assess a new basin or area in a systematic way. Unfortunately, there is currently no equivalent methodology for helium.
My colleagues and I are seeking to address this gap in our knowledge. In a joint research venture between Durham University and the University of Oxford in the UK, with the financial support of the Norwegian state oil company Statoil, we have used well-established hydrocarbon exploration protocols as a template to develop a similar strategy for helium. This work has enabled us to begin to answer questions about how and where helium is generated; how it is released from potential source rocks; how it then travels significant distances from source rocks to potential traps; and how helium-rich gas accumulates in the shallow crust, as well as how these accumulations can be destroyed by natural processes. The project is still in its early stages, but we believe our research is an important step towards diversifying the helium supply, so that when another supply crisis occurs, we are better prepared.
Know your helium
There are two stable isotopes of helium: ubiquitous helium-4, which constitutes 99.999% of helium gas, and rare helium-3. Helium-3 is used in neutron detectors and is also a candidate fuel for power generation through nuclear fusion. It is often referred to as “primordial helium”, since the bulk of it was trapped in the Earth’s mantle during the planet’s formation. The more common isotope, helium-4, is mainly produced by the alpha decay of uranium-235, uranium-238 and thorium-232 in the Earth’s crust, which has led to it being called “radiogenic helium”.
Different rock types produce varying amounts of helium-4, controlled by the original concentrations of uranium and thorium and the age of the rock. Some of the highest accumulations of helium are found in large, stable continental blocks known as cratons that formed in the early Precambrian (up to 4.28 billion years ago), such as the Canadian Shield. Helium is found in many other geological systems, including groundwater, ancient brines, fluid inclusions in ore deposits, hydrothermal fluids, igneous intrusions and rocks, oil-field brines, lakes, ice sheets, oceanic sediments and coal measures. To date, however, only a few natural hydrocarbon gas fields contain helium in sufficient concentrations for its extraction to be considered commercially viable.
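As a rough illustration of why uranium- and thorium-rich rocks matter, the radiogenic helium-4 production rate of a rock can be estimated from standard decay constants and the fixed number of alpha particles (helium-4 nuclei) each decay chain emits. The snippet below is a back-of-envelope sketch, not part of the study described here: the linearized rates neglect parent decay over the interval, and the example granite composition is merely typical.

```python
# Radiogenic helium-4 production in rock from alpha decay of U and Th.
N_A = 6.022e23            # Avogadro's number, atoms/mol
LAMBDA_238 = 1.55125e-10  # decay constants, 1/yr
LAMBDA_235 = 9.8485e-10
LAMBDA_232 = 4.9475e-11

def he4_production(u_ppm, th_ppm):
    """Helium-4 atoms produced per gram of rock per year."""
    n_u = u_ppm * 1e-6 / 238.0 * N_A     # total U atoms per gram of rock
    n_238 = 0.99275 * n_u                # natural isotopic abundances
    n_235 = 0.00720 * n_u
    n_232 = th_ppm * 1e-6 / 232.0 * N_A  # natural Th is essentially all Th-232
    # Alphas emitted per full decay chain: 8 (U-238), 7 (U-235), 6 (Th-232)
    return (8 * LAMBDA_238 * n_238 +
            7 * LAMBDA_235 * n_235 +
            6 * LAMBDA_232 * n_232)

# Illustrative upper-crustal granite: ~4 ppm U, ~15 ppm Th
rate = he4_production(4.0, 15.0)   # atoms per gram per year
cm3_stp = rate / 2.687e19          # convert atoms to cm3 at STP
print(f"{rate:.2e} atoms/g/yr  ~  {cm3_stp:.1e} cm3 STP/g/yr")
```

Rates of this order, sustained over billions of years in a stable craton, are what make old Precambrian basement a plausible helium source.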
The first economic quantities of helium were discovered in 1903 in Dexter, Kansas, US. Analyses of this gas – which was initially referred to as “wind gas” because it was non-flammable – found that it contained approximately 82.7% nitrogen and up to 1.84% helium by volume. This concentration of helium was unusually high, as most natural gas fields contain helium only in trace amounts (≤0.05%). Significant discoveries in eight other US states soon followed, with some of these fields containing up to 10% helium by volume – significantly above the threshold at which helium is considered economically extractable (0.3%). Helium reserves have also been found in Algeria, Canada, China, Germany, Hungary, India, Kazakhstan, Pakistan, Poland, Qatar, Romania, Russia and the Timor Sea. However, none of these new helium reserves currently match the concentrations, number of occurrences or estimated reserve volumes associated with the older US fields.
Finding new sources
From a simplistic sourcing perspective, older rocks will generally have had more time to produce and accumulate helium than younger ones. Therefore, a logical first step in a helium exploration protocol is to identify viable Precambrian-aged crystalline terrains in areas that have remained relatively tectonically stable for long periods of time. However, age is not the sole determinant of a viable source rock. The helium also needs to have been released from the rock – a process known as primary migration.
Helium migration: Once released from the source rock, helium and nitrogen can interact with groundwater (1), which carries the dissolved gases as it ascends. When the groundwater contacts a pre-existing gas cap containing methane or carbon dioxide (2), the nitrogen and helium partition out of the groundwater into the gas cap (3).
The primary migration process begins when particles in uranium and thorium-bearing minerals undergo radioactive decay, releasing alpha particles (helium-4 nuclei). The energy associated with this process produces “fission tracks” through the mineral, and helium atoms can readily diffuse along these tracks. While it is not yet known how efficient this escape process is, once helium has diffused out of the mineral it will accumulate in fluid inclusions and fractures within the source rock. For helium to then migrate out of the low-permeability source rocks into overlying layers, another input of energy is required. This typically comes in the form of heat and pressure from tectonic events such as rifting, mountain building or volcanic activity.
We do not fully understand the primary release process; however, we do know that the bulk migration of helium is enhanced by the presence of a carrier fluid or gas. In natural gas formations, high helium concentrations are always associated with high concentrations of nitrogen. The converse is not true: many nitrogen-rich natural gas sources contain only trace amounts of helium, because there are multiple sources of trapped nitrogen in the crust. Analyses have shown that radiogenic helium is consistently associated with nitrogen that has an isotope distribution characteristic of a source in the crystalline basement, indicating that the nitrogen is likely carrying helium out of source rocks.
Early in our research, we sampled well gases from existing helium-producing areas in the midwestern US and southern Canada. More recently, we worked with a helium exploration company, Helium One, to sample helium-rich gas seeps in the Tanzanian section of the East African Rift. All samples were rigorously analysed for information about the gas composition and the isotopic composition of the separated helium, other noble gases and nitrogen. This gave us an idea of the interaction between helium and its associated nitrogen carrier, and how they migrate out of source rocks. However, we still needed to account for the way in which helium and nitrogen move (sometimes hundreds of kilometres) and collect into geological “traps” – a process known as secondary migration.
Our studies indicate that once helium and nitrogen are released from the source rock, they interact with groundwater in overlying strata (see figure above). Once enough helium and nitrogen are dissolved in the groundwater, they are able to form a separate nitrogen- and helium-rich gas phase as the groundwater ascends to the surface and becomes depressurized. We think this is the mechanism for the near-pure nitrogen-helium gas fields found in North America.
Other helium-producing fields have a more complex makeup of gases. Some (such as LaBarge/Riley Ridge and Doe Canyon in the US) are known to contain primarily carbon dioxide, and others (like the North Dome in Qatar, Hassi R’Mel in Algeria and the Hugoton-Panhandle in the US) are rich in methane. The presence of significant concentrations of carbon dioxide or methane in these helium-rich natural-gas trapping structures gives us another clue as to the possible mechanisms behind helium trapping. If groundwater containing dissolved helium and nitrogen comes into contact with a pre-existing natural gas cap then helium and nitrogen will preferentially move from the groundwater into the gas cap.
But there is a complication. Because carbon dioxide and methane are both prevalent in the subsurface, there is a high risk that in locations where helium and nitrogen accumulate, the helium concentration may be diluted by large amounts of these other gases – potentially to levels that are not worth extracting commercially. In formations where we are relying on methane and carbon dioxide to strip dissolved helium and nitrogen, we need to establish a zone where the degree of dilution is “just right”. As an example, volcanic activity is a known source of high-carbon-dioxide gas fields. Therefore, in the right geological setting, the thermal aureole associated with magma emplacement may provide the heating needed to release helium from its source. However, if the trap is too close to the volcanic centre, the only gas you are likely to find there is carbon dioxide – whereas further from the volcanic source you are more likely to see nitrogen-rich helium gases like the ones that occur in the Tanzanian seeps we sampled.
Once helium has migrated into a gas-trapping structure, the preservation of helium in that trap depends on the rate at which helium is supplied to the deposit and the efficiency of the seal or trap to contain the gases. Trap destruction (caused by weathering, erosion or tectonic events) or a leaky seal (usually caused by the pressure in the reservoir exceeding the pressure in the overlying caprock) will result in helium being lost from the trap.
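The balance described here between supply and seal leakage can be caricatured as a one-box model, dH/dt = S − kH, whose steady state H* = S/k shows how a leaky seal caps the helium a trap can hold. All numbers below are hypothetical, chosen only to illustrate the scaling; this is not a model used in the research.

```python
# Minimal one-box model of a helium trap: the inventory H grows with a
# supply rate S and shrinks through seal leakage at fractional rate k.
# All parameter values are hypothetical, for illustration only.

def trap_inventory(supply, leak_rate, t_myr, steps=10000):
    """Integrate dH/dt = supply - leak_rate * H from H=0 (forward Euler)."""
    h, dt = 0.0, t_myr / steps
    for _ in range(steps):
        h += (supply - leak_rate * h) * dt
    return h

S = 1.0      # supply, arbitrary units per million years
k = 0.01     # fractional leakage per million years (a tight seal)
h_tight = trap_inventory(S, k, 1000.0)       # inventory after 1 Gyr
h_leaky = trap_inventory(S, 10 * k, 1000.0)  # same trap, seal 10x leakier

# Leakage caps the inventory near the steady state S/k, so the
# leakier trap ends up holding roughly 10x less helium.
print(h_tight, h_leaky)
```

The point of the sketch is that over geological time the inventory forgets its history and settles at S/k: doubling the supply or halving the leak rate has the same effect on how much helium a prospector finds.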
Next steps for helium exploration
Based on this preliminary helium exploration methodology, we have identified the Tanzanian region of the East African Rift as a potential helium-rich system. The region contains an ancient craton that has been perturbed by a much younger rifting event and, as a bonus, some of the gas seeps in the region are already known to be nitrogen-helium rich.
But our database and geochemical analyses of the helium-rich fields are lacking, especially when compared with the years of accumulated data for petroleum exploration. Continuing to sample and analyse natural helium occurrences so as to better understand and classify migration and accumulation processes is essential. Nevertheless, we believe we have taken an important step towards securing the future of our helium supply.
My physics degree at the University of Surrey, UK, included a “sandwich year” placement and one of the places I applied to was CERN. To maximize my chances of getting a job there, I wrote down whole areas that I would be interested in working in, including cryogenics. My interview with the cryogenics group at CERN didn’t go so well (I had actually forgotten what the word “cryogenics” meant), but somehow they offered me the placement anyway, and I loved it. I worked on the Large Hadron Collider magnets, looking at low-conductivity, high-strength materials such as carbon fibre and doing heat-transfer tests at cryogenic temperatures. By the end of the year I was hooked.
What did you do after you finished your undergraduate degree?
I did a PhD at the University of Southampton, UK, researching a type of cryocooler called a pulse tube refrigerator. Most commercial cryocoolers are of the Gifford–McMahon type, which use pistons to expand and contract gases; every time you expand the gas, it cools and you can, in effect, “store” the coolness in a particular area, which you then attach to whatever sample you want to cool down. The pulse tube is different in that it doesn’t have a physical piston moving, just a volume of gas, so there are fewer things that can break and less vibration because you don’t have mechanical things moving at high speeds. Pulse tube refrigerators are now a mature technology – you can buy them off the shelf – but at the time they were still very much in development.
What’s driving the adoption of technologies like that?
The great thing about coolers is that you can switch them on and leave them. You don’t have to continuously refill them with liquid helium and liquid nitrogen – you just press a button and as long as they’ve got electricity and water they keep on going for months, if not years. Another big driver is the cost of helium. Geoscientists have found more helium reserves, but they’re a long way from being brought online. That’s why everybody is turning towards cryogen-free cooling where they can. But liquid helium also has some big advantages. It’s got a lot of cooling power and it’s still relatively cheap to produce, even though the price has been going up. Plus, you need an awful lot of electricity to run cryocoolers. You need electricity to produce liquid helium too, but once it’s cold, it’s cold.
What are you working on now?
I work at ISIS, which is the UK neutron and muon source at the STFC’s Rutherford Appleton Laboratory. Our division provides support for the scientists and users coming to do experiments; we provide the equipment that gets very cold, very hot or reaches very high pressures. One system we’ve designed and built is a low-temperature stress rig – a sort of cold box that you can use to cool your sample, apply mechanical stress to it and then image it with neutrons. It has been used recently to do stress tests on high-temperature superconducting tape, which meant passing a current through the sample while cooling it and applying a tensile load, all while on a neutron beamline. In those circumstances, nothing is simple, not least because when you’re cooling things, they move and contract, and the properties of the material change. For example, many materials that are fine at room temperature become brittle once you cool them down.
Another system we’re working on is a low-temperature sample changer. Most of the time if you want to pull a low-temperature sample out of the neutron beam, you have to warm everything up, put your new sample in and then cool it back down again, which takes quite a long time. We’ve got a sample changer in production that works by attaching the sample to a little parachute, so that we can blow out the sample with helium, catch it with a robot arm, change it, drop in the new sample and let the gas out so that it sails down a tube where the temperature sensor makes an electrical connection at the base. At that point it gets cooled very quickly because it’s a small object and there’s minimal heat transfer. It’s similar to the sample changers used in some types of nuclear magnetic resonance machine. But there are lots of different technologies that have to work together: you’ve got the robots at one end, you’ve got the cryogenics, you’ve got the gas and the vacuum. As is often the case, cryogenics is only one part of a multidisciplinary system.
Aside from uncertainties in liquid-helium supplies, what are the other barriers to progress in cryogenics?
Here in Oxfordshire, we have a lot of cryogenics companies, and our biggest issue is the lack of skilled labour. Cryogenics is a specialized field and until recently you’ve not really been able to study it, either at school or at a technical college. We have helped to change that, and there’s now a university technical college down the road that will be training pupils up in cryogenics and vacuum use. There’s also a BTEC module (that is, a vocational qualification for 16-year-olds) in cryogenics and vacuum technology. We’re trying to make it part of the broader curriculum because cryogenics and vacuum are fundamental to so much of industry and technology. But even so, there’s a shortage of skilled labour. Cryogenics is very dependent on technical skills such as welding – you have to be able to build cryogenic vessels to certain safety standards.
The other issue I’d note is that most cryogenic companies are relatively small, and while they’re very good at what they do, they often don’t have the resources to expand. So if something new comes along, most of them won’t leap up and say “Right, okay, we’ll do that, we’ll develop it” because it’s too much of a risk for them. There’s just not the investment and the infrastructure here in the UK to support some of the big cryogenics projects coming up.
What would help with that?
It would help if there were funds available for riskier projects. Even in academic research, money is given for a specific project, and you cannot fail on that project, so you have to play it safe. And in industry, they can’t do basic development because they have to produce something and sell it – they can’t “play around” because they only get paid for what they deliver. I think that’s probably one thing that’s keeping cryogenic technology and superconductivity technology from growing as it should: industry can’t do development because they haven’t got money for it, and often academics can’t do it either because we’re not really here to develop things – yet if we don’t, industry can’t either. It’s especially hard in this economic climate, where there isn’t any spare money. Government officials like to put money in and get something out, just like in industry, but it’s quite hard when you’re developing technology because you don’t necessarily get the results as quickly as expected.
The British Cryogenics Council recently celebrated its 50th birthday. How has the field changed since it was founded?
Fifty years ago, the cryogenics industry was in its infancy. That’s not to say it was all about fundamental research, but superconductivity was still relatively new; the techniques of making superconductors work in practice were in the early stages; and cryogenics was very academically based. I think that’s been the biggest change. Cryogenics is an enabling technology now rather than a research area. Scientists are still using cryogenics because you can look at atoms and molecules more easily if you slow them down, but it’s also become an industrial tool, used in factories and MRI scanners as well as in particle accelerators.
What do you see as the major trends in cryogenics for the next 10 years?
There’s still a lot of development to be done on superconductivity, especially with high-temperature superconductors. At the moment, a lot of high-temperature superconducting materials are difficult to work with – you can’t roll them out into nice tapes like you can with low-temperature superconductors – so there’s more research needed to help them reach their potential. There are also some interesting “green” applications of cryogenics that may become more important. The so-called “hydrogen economy” has the potential to be huge for the cryogenics industry. If you’re going to run vehicles off hydrogen instead of fossil fuels, you’ll need a lot of hydrogen, and you’ll need to transport and store it either as a high-pressure gas or as a liquid, bearing in mind that hydrogen reacts with many metals, causing embrittlement with long-term exposure. Then there’s the question of how you get the hydrogen into vehicles. If you have liquid-hydrogen filling stations, you’ll have to move liquid hydrogen from an underground tank up and into the vehicle while keeping it at a low temperature. All of that will involve cryogenic technology.
“From the moment that Averil Macdonald started her icebreaker session, I knew that the programme we had put together was going to resonate with the fantastic, global audience that we had been able to bring together,” says Wilkin. “The Sun shone on us as well for that first all-important evening, facilitating the start of a week in which good practice and frustrations could be shared honestly.”
A key theme of ICWiP was that across all cultures, women are still in the minority in physics – and this only gets worse further up the academic pipeline. Discussions took place all week about how to combat the contributing factors such as unconscious bias and stereotyping, and it will be interesting to see how these translate into future practice. “I hope that further Project Juno type activities will be implemented across other nations,” says Wilkin, “as this has given us an infrastructure within which we can persuade people that they can speak out about workplace practices which are detrimental to all, and particularly women.”
For Wilkin, the “adrenalin peak” came when they veered from the published programme for a surprise address from Malala Yousafzai. “There were many damp eyes among the delegates, and also the university co-ordinator,” explains Wilkin. “I felt lost for words listening to her on the podium. Her powerful message will be remembered by all – and her commitment to our cause, which meant she fitted us into a phenomenally busy flight schedule.”
Another highlight for Wilkin was when she welcomed the delegates in the opening address. “I needed to remind myself that this predominantly female audience really was a physics one! This in itself was incredibly empowering. As I stood there I also felt that this was part of demonstrating ‘a working physics mother’ rather directly to my daughter who was in the audience as part of our fantastic student guide team.”
The conference will hopefully meet for the seventh time in 2020 and Wilkin hopes to see ideas and strategies for helping women get further through the academic pipeline – “I believe in a number of developing countries we have improved the early-stage career and we need to ensure that we have working practices in place to help these women thrive, and succeed rapidly.”
Until then, keep an eye out over the coming months for a feature on the great Dame Professor Jocelyn Bell Burnell, who received the Institute of Physics President’s Medal at ICWiP, and for a couple of podcasts on women in physics, which will include interviews with ICWiP delegates.
A special thanks must go to IUPAP, the Institute of Physics, the University of Birmingham and sponsors for organizing, hosting and funding such a great event. I’ve used the word inspirational a lot over these blogs, but it really is the best word to describe ICWiP. I am looking forward to ICWiP 2020 and hope to see you there!
Titan has molecules that may link together to form membranes resembling those of living organisms on Earth. The presence of acrylonitrile – also known as vinyl cyanide (C2H3CN) – on Saturn’s largest moon has been confirmed by an international team using data from the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile.
Titan’s atmosphere mostly comprises nitrogen, along with carbon-based molecules such as methane and ethane. While scientists suggest this chemical composition is similar to Earth’s primordial atmosphere, temperatures on the Mars-sized moon average –179 °C – so cold that its lakes, rivers and seas are made of liquid methane.
Hollow spheres
In 2015, scientists at Cornell University in the US predicted these extreme conditions could allow vinyl-cyanide molecules to link together and form sheet structures similar to the lipid bilayers found in living cells on Earth – the main component of a cell’s membrane. The researchers proposed that, like lipid bilayers, these sheet structures could form tiny, hollow spheres called “azotosomes”, which could act as small storage or transport containers. However, scientists had been unable to definitively confirm the chemical’s presence on Titan among the moon’s array of carbon-rich molecules.
Now, using archival data from ALMA, Maureen Palmer of NASA’s Goddard Space Flight Center in the US and colleagues have identified that significant quantities of vinyl cyanide are indeed present on Titan. “The presence of vinyl cyanide in an environment with liquid methane suggests the intriguing possibility of chemical processes that are analogous to those important for life on Earth,” says Palmer.
Astrobiologically relevant
The team suggests that the chemical is probably most abundant in Titan’s stratosphere, where it likely cools, condenses and rains onto the surface. This means that the moon’s second largest lake, Ligeia Mare, could have 10 million azotosomes in every millilitre of liquid – in comparison, coastal ocean water on Earth contains roughly a million bacteria per millilitre.
“The detection of this elusive, astrobiologically relevant chemical is exciting for scientists who are eager to determine if life could develop on icy worlds such as Titan,” says team member Martin Cordiner. “This finding adds an important piece to our understanding of the chemical complexity of the solar system.”
The first Africa-led experiment at CERN is currently taking place at the Geneva-based lab’s ISOLDE facility. Researchers from the University of the Western Cape (UWC) in South Africa are studying the isotope selenium-70 to better understand how the shape of its nuclei relates to its energy levels. The group hopes that its presence at CERN will be a source of inspiration for other African scientists.
“The first thing it tells people in South Africa is that if we – a historically disadvantaged institution – can do it, any university in South Africa can do it,” says experimental lead Nico Orce of UWC. Determined to share the opportunity with as many colleagues as possible, Orce assembled a team of 11 people, which is more than the usual number for this type of experiment on ISOLDE. The team has nicknamed the experiment Ubuntu, a Xhosa word meaning “I am, because we are.”
Radioactive particles on tap
CERN’s Isotope Mass Separator On-Line (ISOLDE) facility provides the experiment with a low-energy beam of selenium-70 nuclei. These are smashed into a platinum target, which puts the nuclei into an excited state. By observing the gamma rays given off as the nuclei decay, the team can calculate the shape of the nuclei in the excited state. Results will test fundamental nuclear models and may also be relevant to nuclear astrophysics.
“UWC has a real battle to get funding and Nico has jumped through so many hoops to get here,” says David Jenkins of the University of York in the UK who co-led the experiment. “I wanted to get them involved at ISOLDE and help build the research expertise in the team.”
Capacity building
In supporting the African physics community, CERN works with other organizations including the Institute of Physics, which publishes Physics World. Both organizations support the African School of Physics, which seeks to build the continent’s capacity in fundamental research and its applications. In a related programme, when CERN upgrades computing equipment in its control centre it donates servers and other kit to African nations including Ghana and Senegal.
Perhaps the biggest impact of this first Africa-led experiment at CERN will be its symbolic importance in Africa and beyond. “Follow our lead” is the message the group wants to convey. “I’m so proud to be the only woman on this experiment and I feel like I’m representing all other women from Africa,” says MSc student Senamile Masango. “I would like to motivate all other women as well to come to science, to come to physics.”
A second South Africa-led experiment is already scheduled at ISOLDE. A team led by Hilary Masenda from the University of the Witwatersrand will study the lattice sites, charge and spin states of iron using Mössbauer spectroscopy.
The above video is courtesy of Christoph Madsen and CERN.
By Sarah Tesh, reporting on the International Conference on Women in Physics in Birmingham, UK
Have you ever thought about why, when asked to indicate your gender on a form, “male” comes above “female”? It’s not alphabetically first, so why is it listed first? I had never questioned this myself until Jocelyn Bell Burnell pointed it out in her Institute of Physics (IOP) President’s Medal lecture. This is an excellent example of bias in our day-to-day lives – while each one of us may believe we are fair and unprejudiced, we cannot always control what our brains do and many of us are unconsciously biased without meaning to be. Unfortunately, this is one of the factors holding back women in physics.
Bias, stereotyping and harassment were major topics during the International Conference on Women in Physics (ICWiP) last week at the University of Birmingham in the UK. Many delegates at the conference have experienced these issues to varying degrees and several of the talks focused on ways to combat them.
Maria Teresa Lago from the University of Porto in Portugal suggested that to improve the situation, it is important to address the question of fair competition, rather than gender balance directly. But in order to make the competition fair, you have to remove the obstacles unrelated to hard work or someone’s potential in physics – the bias and stereotypes regarding gender, race, religion and sexuality.
During two “Cultural perception and bias” workshops, we heard from Angela Johnson of St Mary’s College of Maryland in the US, and a panel of four UK physicists – Ruth Oulton from the University of Bristol, Emma Chapman from the Royal Astronomical Society, Jaimie Miller-Friedmann from the University of Oxford and the IOP’s Improving Gender Balance manager, Jessica Rowson. In both sessions, the speakers presented examples and statistics to get the discussion rolling.
Johnson noted that in her experience “being white made it easier to walk through doors” but being a woman and gay presented challenges. Some academic scenarios benefit the dominant social group while being harmful to the others. For example, studies have found that students who are working against a stereotype while taking an exam face extra pressure and perform less well than those who fit the stereotype. As the stereotypical physicist and dominant group in the field is the white male, other groups experience problems.
Battling bias: Angela Johnson kicked off the workshops (Courtesy: Sarah Tesh)
Research has found that white women in America are 12.5% more likely than their male counterparts to have their e-mails ignored when applying to doctoral programmes. For black women, the likelihood was 29.8%, for Indian women 37.7% and for Chinese women 77.0%. In a similar study, researchers sent out CVs for lab-based jobs that were identical in every way except for the name of the applicant. As well as John being chosen over Jennifer, “white-sounding” names got 50% more call-backs than “black-sounding” names. This happened for all fields of science and all role levels. It also didn’t matter what gender the employer was – women can have unconscious bias against women.
Interestingly, Miller-Friedmann has found that successful women in physics, such as those who might be the employer, take on “masculine” identities in response to professional isolation during their careers. In interviews, these women began by listing their achievements, from school grades to university degrees, and stated explicitly how they were different from “girly girls”.
Bias and stereotyping can cause toxic working environments that, if allowed to fester (or if people are just plain horrible) can manifest in bullying and harassment. Chapman highlighted that in the US, 1 in 20 female undergraduates and 1 in 6 female postgraduates in astronomy have reported sexual harassment from a teacher or adviser. Meanwhile, in the UK, the Guardian reported in March that, despite 300 sexual-harassment claims in six years at 120 universities, only 37 alleged perpetrators left or changed jobs as a result, many simply moving to a different university.
Oulton, meanwhile, conducted a survey investigating the perceptions of gender and bias in a European research consortium. She found that all the women in the group had experienced harassment during their careers in the form of “intrusive comments about their body or unwanted sexual touching”. But their male counterparts were unaware of how common this was and fixated on their own perceived vulnerability to false accusations of harassment.
Despite these shocking findings, the true scale of sexual misconduct in academia is unknown, partly because people don’t know what to do in response. In her session, Johnson split us into groups and gave each a real-life case study, asking “what would you do?”. The cases involved racist and sexist comments, bullying, unwanted sexual attention and being cut off by colleagues because of differing gender identity. While most of us thought it best to talk to someone senior about the problem, Johnson stressed the importance of someone speaking up at the time – whether the individual, a colleague or direct senior. But that comes down to confidence and in each incident the subject had remained silent because they did not want to make things worse.
Problem solving: the panel and audience could have discussed strategy for hours (Courtesy: Sarah Tesh)
So, the question remains about how to counter all these problems and what strategies are needed. One key point is that it’s not just down to women to make changes; men need to be aware of the problem too. Worryingly, Oulton’s above-mentioned survey found that men and women “significantly disagree about the severity of inequality of opportunity in science and engineering” – half of the men do not think women are at a disadvantage despite evidence to the contrary. They are therefore unaware of the role they play in encouraging bias and inequality. However, all agreed that those in authority need to provide clearer guidelines and action plans, and be seen to take “swift and fair action”.
Chapman proposes a “top-down change from the bottom-up” involving “grass-roots changes” such as codes of conduct and unofficial support networks, unconscious-bias training and awareness of gendered language at a higher level, and then policies for when things go wrong. However, “[the policies] need a lot of work and this change needs to come from the top”, Chapman points out.
Groups such as The 1752 Group, of which Chapman is a member, work with academics, student unions, universities, support services and national organizations “to conduct research that will lead to the development of best practice guidelines for the higher education sector”. The 1752 Group’s six strategic priorities for addressing staff-student sexual misconduct can be found online.
As well as taking action in higher education, there’s strong agreement that more needs to be done to remove bias and stereotypes at school level. While girls and boys may be equally interested in science at a young age, the attitudes of teachers and parents can hold girls back. For example, boys can dominate the classroom and get praised for hard work, while girls are praised for neat work. The Improving Gender Balance project suggests that to make a significant difference to students’ perceptions, work needs to be done across the whole school to challenge gender stereotypes. Rowson pointed out that good practice in the science department with regards to girls and STEM may be negated if gender lines are then enforced in other subjects, in break times, or in extracurricular activities. The project recommends strategies such as appointing a senior staff member as gender champion, providing teacher training in unconscious bias, highlighting positive role models and rethinking science clubs so that they don’t put girls off.
This only covers the tip of the iceberg regarding the problems women face in physics and the strategies under way to combat inequality. Change needs to happen so that, as Chapman puts it, “we can all just get on and do some science, thank you very much”.