The temperature of supercooled liquid water below –40 °C has been accurately measured for the first time. Researchers led by Robert Grisenti at the University of Frankfurt in Germany have measured the resonance peaks of light waves inside microscopic water droplets to reliably calculate a temperature of –42.55 °C. Their work is an important step towards producing more reliable climate models, which rely on understanding how ice forms in the atmosphere.
It has been known for centuries that, under the right conditions, water can be supercooled well below 0 °C without freezing. However, as the temperature decreases, the rate of ice-crystal formation increases rapidly, which has made it difficult to determine the lowest possible temperature of liquid water. This has limited the study of supercooled water to temperatures of about –38 °C and higher.
Light scattering
Grisenti’s team combated ice formation by creating a jet of micrometre-sized water droplets in a vacuum. On this scale, evaporation cools the water in a droplet much faster than ice can form. “The easiest way to determine the temperature of a spherical droplet was to measure its size, which can be accurately determined by exploiting the presence of an interference pattern in the light scattered by the droplet,” Grisenti tells Physics World.
Inside spherical droplets of certain diameters, standing light waves form, producing distinctive resonance peaks in the optical spectrum of the droplet. If a droplet freezes, its shape becomes irregular and standing waves can no longer form.
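The principle can be sketched with a toy calculation (ours, not the team’s full Mie analysis): a whispering-gallery mode of integer order l is supported when roughly l wavelengths fit around the droplet’s optical circumference, so a shrinking droplet shifts every peak to shorter wavelengths. The refractive index and radius below are assumed values for illustration.

```python
import numpy as np

# Toy whispering-gallery resonance condition for a spherical droplet:
# a mode of integer order l fits when l * lam ~ 2 * pi * n * r.
# (Illustrative only; the real analysis uses full Mie theory.)
n = 1.34      # assumed refractive index of supercooled water
r = 6.0e-6    # assumed droplet radius: 6 micrometres

def resonance_wavelengths(radius, l_orders):
    return 2 * np.pi * n * radius / l_orders

l_orders = np.arange(80, 84)
before = resonance_wavelengths(r, l_orders)
after = resonance_wavelengths(0.99 * r, l_orders)  # droplet 1% smaller

for l, b, a in zip(l_orders, before, after):
    print(f"l={l}: {b*1e9:.1f} nm -> {a*1e9:.1f} nm (shifted to the blue)")
```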
Shifting peaks
The team found that the resonance peaks shifted to smaller wavelengths over time. This meant the droplets were shrinking, and therefore remained liquid as they cooled. By calculating the amount of heat lost in evaporation, they could precisely quantify a supercooled water temperature of –42.55 °C for the coldest droplets.
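The link between shrinkage and cooling follows from a simple energy balance: the latent heat carried off by the evaporated mass must come out of the droplet’s thermal energy. The sketch below is our back-of-envelope version of that reasoning, not the team’s detailed evaporation model, and the material constants are assumed round numbers.

```python
# Energy balance for evaporative cooling: m*c*dT = -L*dm, and since the
# droplet mass scales as r^3, dm/m = 3*dr/r, giving dT ~ -3*(L/c)*(dr/r).
L = 2.6e6         # J/kg, latent heat of vaporization (assumed value)
c = 4.2e3         # J/(kg K), specific heat of liquid water (assumed value)
dr_over_r = 0.01  # a 1% shrinkage in radius

dT = -3 * (L / c) * dr_over_r
print(f"temperature change: {dT:.1f} K")  # about -19 K per 1% of radius
```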
The conditions created in the experiment closely resemble those in the Earth’s upper atmosphere. In the future, the research could lead to a better understanding of atmospheric ice formation, and to more reliable predictions for the Earth’s changing climate.
Geophysicist Terry Wallace has become the 11th director of the Los Alamos National Laboratory. With a budget of $2.5bn, Los Alamos currently has almost 12,000 employees and contractors. Taking up office on 1 January, he succeeds nuclear physicist Charles McMillan, who announced his plan to retire last September.
Wallace, 61, completed a BSc in geophysics and mathematics at New Mexico Institute of Mining and Technology before earning a PhD in geophysics at the California Institute of Technology. He joined Los Alamos in 2003, after two decades as a geophysicist at the University of Arizona. Prior to his promotion to director, he served as principal associate director for global security and headed programmes on applying science and engineering to national and global security threats.
“Wallace’s unique skills, experience, and national security expertise make him the right person to lead Los Alamos in service to the country,” says Norman Pattiz, chair of Los Alamos National Security LLC (LANS), a private consortium that has managed the lab for the Department of Energy since 2006.
Three tasks
One of Wallace’s first jobs will be to tackle the safety issues that were highlighted late last year as well as settling into a new management contract for the lab. LANS failed to have its contract extended beyond the current financial year after it received poor performance reviews related to safety matters. “We do incredibly complex and dangerous work, and with that comes a degree of risk,” Wallace told Physics World. “We need to do everything we can to mitigate that risk, and that’s part of our ongoing commitment to safety and security.”
That commitment is one of three main issues that Wallace sees as requiring his immediate attention. The other two involve understanding other countries’ progress in nuclear weaponry, as well as gaining a better understanding of emerging threats. “It takes a nuclear weapons lab to find a nuclear weapons lab,” he says. “It’s not just North Korea. We also need to understand the Russia and China programmes – what they’re doing and why.” Beyond nuclear weapons, Wallace points to the challenges from cyber and terrorist attacks, which can cost lives and debilitate nations. “How do you protect a country’s citizens, borders, and economy?” he asks. “You do that by using cutting-edge science.”
While Los Alamos is also involved in creating new vaccines as well as developing new imaging and analytical techniques, Wallace maintains that national security matters remain the prime focus of the lab. “Technology has become an important tool in deterrence, but it’s also a tool that can be used against us and that we must guard against,” adds Wallace. “The questions we’re being asked are big and bold and difficult. Los Alamos is unique in that we can answer those questions. That’s why we’re here and that’s what I want us to do.”
That, at any rate, was what one historian told me when I mentioned I was working on a database with imaging capabilities. Databases, I was informed, are booby traps for historians. Convenient boxes for storing data, they only return reshuffled versions of what you put in. History is about the unboxable motives and meanings that give rise to events. I was ceasing to be a true historian, my colleague concluded, and about to become a big-data wonk.
I did not mean to go to the dark side. It was forced on me by my attempt to understand how research develops at synchrotron light sources – central facilities that produce multiple tunable and intense beams of X-rays. Half a century ago, researchers from academia or industry who needed information about a process or material would run an experiment at one of a synchrotron’s many beam lines. They would take the results back to their labs, generate new questions and then do another experiment. A historian could easily describe this process, which consisted mainly of short and simple feedback loops, by consulting the facility’s annual reports.
No longer. The research at synchrotrons – and at other materials-science facilities with a large user base – is teeming with webs of instruments, industries and interests. A team from China, say, could be doing research on an instrument built by a German company that’s been installed temporarily at a US synchrotron, working alongside art historians, chip makers and government scientists. The same instrument can support several different research programmes simultaneously, while the same programme can use different instruments at the same facility.
What we historians like to call the research “space” at a synchrotron facility is therefore less like a set of feedback loops and more like a rapidly changing ecosystem.
How to get a grip
I recently spoke to someone who had worked at Brookhaven National Laboratory’s National Synchrotron Light Source (NSLS) in its early days in the mid-1980s. He described an evening at the bar with friends placing bets on which research carried out at a synchrotron source would be the first to get a Nobel prize. Like his buddies, he’d bet on some aspect of condensed-matter physics, probably involving superconductivity. “Nobody ever dreamed it would be for structural biology!” he exclaimed.
Once a minor player at synchrotron sources, structural biology unexpectedly zoomed in importance – and has claimed all synchrotron-related Nobels given out so far (four, I count, all in chemistry). But how can a historian discover why, with hard data to back it up? Ditto other seemingly simple questions, such as: how has the ecosystem evolved over the years? Which instruments and techniques drive the ecosystem? Which industries put the most into this ecosystem and get the most out? What research paths lead to applied products? How might such paths be improved?
Answering such questions will not only help us to understand the history of physics and physics facilities, but also improve how we plan research and train new scientists. Unfortunately, finding the answers by reading all the NSLS annual reports on the shelf in my office – several feet’s worth of dense information – would be a time-consuming, if not Sisyphean, task. So to see if digital tools could help, I contacted Elyse Graham, a Stony Brook colleague who is a professor in the booming field of “digital humanities”. I wanted to see how to identify key markers, mine data about them from available sources, and then put that information in a database. By mapping and imaging that information – not in spreadsheet style but in the way sociologists map and image demographic information – I hoped to guide my understanding of research ecologies.
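To make the idea concrete, here is a minimal sketch (our illustration, with invented example records, not the actual project database) of how mined markers might be linked and mapped: institutions, beamlines and research topics become nodes in a network, and simple centrality measures start to reveal the shape of the ecosystem.

```python
import networkx as nx

# Hypothetical records mined from annual reports: (institution, beamline, topic)
records = [
    ("Univ A", "beamline X1", "structural biology"),
    ("Univ A", "beamline X2", "structural biology"),
    ("Company B", "beamline X2", "catalysis"),
    ("Museum C", "beamline X3", "art history"),
]

G = nx.Graph()
for institution, beamline, topic in records:
    G.add_edge(institution, beamline)  # who used which instrument
    G.add_edge(beamline, topic)        # what research it supported

# Which nodes sit at the centre of this (toy) research ecosystem?
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(central[:3])
```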
Having already used software tools to analyse James Joyce’s Ulysses, Graham was hugely encouraging and we began to collaborate. Many other historian colleagues, however, were sceptical. Databases, they warned, are confined to fixed categories. The way they select and process data is not transparent. What’s more, databases can take over your thinking by leading you to pose only those questions that can be solved by consulting them.
But digital humanists, I discovered, know about these dangers and have been trying to address them for some time by establishing better research practices and designing superior tools. In any case, to say that databases are confined to fixed categories is not a real objection, Graham pointed out – after all, the same is true of traditional sources of historical information. Even those annual reports were assembled by lab administrators who picked the types of information useful for their purposes.
As for the lack of transparency, it’s possible for the structure and operation of databases to be made clear enough to users, even if this requires historians to acquire a new skill that Graham calls “production literacy” – describing, designing and using the software for digital tools. Finally, if a database is designed with care, it can not only re-discover existing relationships but also unearth new patterns and relationships, such as in the research ecology at a synchrotron. Historians can then pose new questions to new informants – not just scientists but technicians, programme planners and grant administrators too.
The critical point
A database and its imaging capabilities, in short, is like having a highly flexible and sophisticated map. Maps don’t do your moving for you, but allow you to move more confidently in complex and unfamiliar landscapes. Databases will never take over the function of historians because unravelling the motives, meanings and interpretations that underlie scientific events will always be a job for humans. But today, those humans who take on this task will often need to rely on big-data tools. Now that I’m on the dark side, I’ve finally seen the light.
Covering the commercial side of physics has its challenges. Because physics is such a diverse subject, people who train as physicists find their way into a host of different industries. Once there, they tend to blend in with graduates of other scientific disciplines, who are both more numerous and more likely to have their field in their job title: “physicist” is a relatively uncommon title compared to, say, “engineer”. It also doesn’t help that companies, unlike universities, almost never encourage employees to set up official, publicly accessible websites with contact info and details of what they’re working on right now.
But just because something is difficult doesn’t mean it shouldn’t be done, and one of Physics World’s resolutions for 2018 (right behind laying off the biscuits and getting more exercise) is to put more emphasis on covering industrial and applied physics. As part of that, we’re introducing a new column in the magazine that will explore the interactions between physics, industry and business in general. In this way, we hope to raise the profile of physicists in industry and, by extension, to emphasize the value that physicists bring to the commercial sector.
This new column has an author: step forward James McKenzie, a physicist and CEO of the smart-LED lighting manufacturer PhotonStar LED group who is also vice-president for business at the Institute of Physics (IOP), which publishes Physics World. After some discussions at last year’s IOP Business Innovation Awards (an annual event that honours businesses that have achieved success based on applications of physics), he kindly agreed to get the column off the ground.
What this column doesn’t have, though, is a name – and we’d like your help in choosing between ideas on our shortlist.
One possibility would be to name the column after a physics concept with links to business, industry or commerce. Many regular features in Physics World magazine already have names of this type, including Robert P Crease’s “Critical Point” column, which explores the relationships between physics, philosophy, art and society. Although “Critical Point” is named for the point on a pressure–temperature phase diagram where phase boundaries vanish, Crease (as regular readers know) also makes plenty of “critical points” in the more literary or philosophical sense. In that spirit, we suggest the name Transactions – a word associated with database operations, exchanges of money or goods, and the published reports of scientific societies (plus a somewhat niche interpretation of quantum mechanics).
The alternative is to name the column after a notable physicist whose work spanned both fundamental and applied research, or whose career epitomized the relationship between physics and innovation. McKenzie’s suggestion in this category is Joseph Swan, who patented an electric light bulb 10 years before Thomas Edison, but then, for a range of reasons, failed to commercialize it, and is seldom now recognized as the device’s inventor. “Being first doesn’t mean success in business,” McKenzie observes. In fact, he adds, the process of starting a business is a bit like a swan taking off from a river or pond – “It looks graceful from a distance, but up close and under the water it’s a frantic struggle to get airborne.”
If you have a favourite of these two suggestions (or an idea of your own – but be warned, we’ve ruled out Physics McPhysicsface and its kin), please let us know by e-mail at pwld@iop.org. Alternatively, you can simply vote for your preferred option on our Twitter poll. Finally, if names aren’t your thing, but you’d love to see McKenzie address a particular topic in a future column, get in touch. With your help, we’ll stick to our New Year’s industrial resolution.
UK hydrogeologists have identified the unsaturated zone, which sits above the water table, as a major store of nitrate, especially in well-established agricultural regions. The result is a lag, potentially spanning decades, between nitrate application and its release into drinking-water supplies.
Matthew Ascott and colleagues at the British Geological Survey and Lancaster University modelled global nitrate leaching from 1900 to 2000, focusing on the unsaturated zone sandwiched between the soil and the groundwater table. They estimated peak global storage of nitrate in the unsaturated zone at 605–1814 teragrams (Tg). This could amount to 7–200% of the biologically available nitrogen in soil.
Reactive nitrogen has more than doubled in terrestrial systems since the industrial revolution, largely due to agricultural and industrial intensification. When nitrate enters surface waters, eutrophication can result. This can be catastrophic for aquatic ecosystems, reducing oxygen and light availability. Human health is also at risk. Babies are particularly vulnerable to methaemoglobinaemia, a blood disorder that can result from ingesting nitrates; the World Health Organisation recommends a maximum nitrate concentration in drinking water of 50 mg/l.
Traditional nitrogen budgets assume a steady state with no nitrate accumulation over time. Ascott and colleagues’ model, in contrast, indicated that globally nitrate “shows a substantial and continuous increase” in the unsaturated zone.
The team identified three clusters of catchments exhibiting distinct storage behaviours (see image). Cluster 1 saw consistently increasing nitrate storage until 2000, indicative of ongoing development and intensification, as found in Africa, South East Asia and South America. Cluster 2, typical of the US and Europe, exhibited a similar pattern until around 1985, when catchment measures were implemented. Cluster 3 showed no geographical coherence and less predictable storage behaviour: initially, nitrate storage increased rapidly, then plateaued, indicating a shorter travel time through the unsaturated zone.
Although cluster 3 basins would likely respond quickly to changes in catchment management, clusters 1 and 2, which account for a majority of the globe, showed a lag between nitrate entering the system and emerging in springs and extraction sites.
“The time lag between the soil zone and groundwater means that it may take decades for the impact of the change in management to actually be seen in groundwater,” said Ascott. To date, few nitrate pollution studies consider lags in the unsaturated zone. Ascott hopes the study will “provide a step towards integrating this process into policy” on an international scale.
Gravitational-wave signals from binary supermassive black holes could be lurking in data being gathered by the Gaia space telescope. That is the claim of a team of astronomers led by Christopher Moore at the University of Cambridge, who have proposed how Gaia’s astrometric measurements could reveal gravitational waves with extremely low frequencies. The researchers hope to analyse the data in a few years when the astrometric measurements are released by the Gaia team.
The earthbound LIGO and Virgo detectors have famously observed several gravitational waves (GWs) in recent years. Most of these ripples in space–time came from merging pairs of black holes weighing in at tens of solar masses. We know that much larger supermassive black holes (at millions or billions of solar masses) exist and could form binary systems that broadcast GWs. However, the frequency of these GWs would be extremely low – well below the minimum frequency that can be detected by LIGO and Virgo.
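To see why such signals sit far below the LIGO band, consider a rough Kepler’s-law estimate (our illustration, with assumed masses and separation, not numbers from the paper): a circular binary emits gravitational waves at twice its orbital frequency.

```python
import numpy as np

# Rough GW frequency of a supermassive-black-hole binary from Kepler's third
# law; for a circular orbit the GW frequency is twice the orbital frequency.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
pc = 3.086e16      # m

M = 1e9 * M_sun    # assumed total mass: a billion solar masses
a = 0.01 * pc      # assumed binary separation: 0.01 parsec

f_orb = np.sqrt(G * M / a**3) / (2 * np.pi)
f_gw = 2 * f_orb
print(f"GW frequency ~ {f_gw:.1e} Hz")  # ~2e-8 Hz, versus LIGO's ~10 Hz floor
```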
Billion stars
Since its launch in December 2013 by the European Space Agency, Gaia has been measuring the apparent positions of around a billion stars. In the early 2020s, it will release at least 80 readings for the positions of each star over 5–10 years, giving scientists a vast amount of data. Moore and his team realized that ultralow-frequency GWs would cause the apparent positions of distant stars relative to Earth to oscillate in subtle yet characteristic patterns.
Moore’s team explored this idea by simulating a mock set of Gaia data and injecting it with their predicted GW effects. Even after compressing the data by a factor of a million to reduce the required computational power, the team managed to recover the injected effects. The researchers are now confident that they are ready to begin analysing Gaia’s data once it is released.
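As a scaled-down illustration of that sort of compression (our toy example, not the team’s pipeline), one can average the astrometric offsets of many stars in each patch of sky into a single “virtual star”:

```python
import numpy as np

# Compress a million simulated per-star offsets into 1000 patch averages.
rng = np.random.default_rng(0)
n_stars, n_patches = 1_000_000, 1000

patch = rng.integers(0, n_patches, size=n_stars)  # sky-patch index per star
offsets = rng.normal(0.0, 1.0, size=n_stars)      # mock astrometric offsets

# Mean offset per patch: a thousandfold compression in this toy case
virtual_stars = np.bincount(patch, weights=offsets) / np.bincount(patch)
print(virtual_stars.shape)  # (1000,)
```

The point is that a gravitational wave imprints a pattern coherent across large angular scales, so patch averages should retain most of the signal while drastically cutting the computational load.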
The properties of a hypothetical 4D material have been simulated in experiments done by two international teams of physicists. One team used light to emulate the 4D quantum Hall effect (QHE) while the other did it with ultracold atoms.
The quantum Hall effect has been the subject of several Nobel prizes and occurs in very thin conducting layers that are essentially 2D in nature. When such materials are cooled to near absolute zero and subjected to a strong magnetic field, the Hall conductance is quantized and can change only in discrete steps. The QHE is a universal property of 2D conductors and can be seen in a wide range of materials – even when the samples are disordered.
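In standard notation (a textbook statement rather than anything specific to the two experiments), the quantization reads:

```latex
\sigma_{xy} = \nu \, \frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots
```

where e is the electron charge, h is Planck’s constant and each discrete step corresponds to a change in the integer ν.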
Theoretical novelty
While the QHE does not occur in 3D materials, in 2001 physicists predicted that it could also occur in systems with four spatial dimensions. But nature only has three spatial dimensions, so the idea of the 4D QHE had been a theoretical novelty – until now.
Mikael Rechtsman of Pennsylvania State University and colleagues built their 4D QHE system from an array of optical waveguides. The waveguides are closely spaced tubes that are etched through a single piece of glass using a powerful laser (see figure). By carefully positioning the waveguides in the array, the team created extra “synthetic dimensions” that emulate a 4D material.
Current of light
In their system, light played the role of electrical current, and the team showed that its transmission through the lattice was much like what occurs in a 4D QHE.
Meanwhile at the Max Planck Institute for Quantum Optics in Munich, Immanuel Bloch and colleagues created similar synthetic dimensions using a 2D array of ultracold atoms trapped by crisscrossing laser beams. Bloch’s team began with a regular square array of atoms. Then they applied additional laser beams in the plane of the array that were offset from the array’s symmetry axes. This created a complicated superlattice in which the atoms moved as described by the 4D QHE.
Both teams included Oded Zilberberg of ETH Zurich, who developed the theoretical basis for creating a 4D QHE in special 2D systems.
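The underlying trick can be illustrated with a standard toy model (ours, not either team’s code): in a 1D lattice with a modulated on-site potential, the modulation phase acts as one extra, “synthetic” dimension; sweeping it pumps states across the lattice, and combining two such pumps in a 2D system yields the four dimensions needed for the 4D QHE. All parameters below are arbitrary illustrative values.

```python
import numpy as np

# 1D Aubry-Andre-Harper lattice: the phase phi of the on-site modulation
# plays the role of a synthetic dimension.
def aah_hamiltonian(n_sites=55, J=1.0, V=0.5, b=(np.sqrt(5) - 1) / 2, phi=0.0):
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = -J                 # nearest-neighbour hopping
    for i in range(n_sites):
        H[i, i] = V * np.cos(2 * np.pi * b * i + phi)  # modulated on-site term
    return H

# Sweeping phi over a full cycle implements a topological pump:
for phi in np.linspace(0, 2 * np.pi, 5):
    energies = np.linalg.eigvalsh(aah_hamiltonian(phi=phi))
    print(f"phi = {phi:.2f}: lowest levels {np.round(energies[:3], 3)}")
```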
Quasicrystal devices
Rechtsman believes that the 4D QHE simulations are more than just esoteric curiosities and could have practical applications. He points out that quasicrystals – materials that are crystalline but have no repeating unit cells – can have “hidden dimensions”. Their structures, he says, “can be understood as projections from higher-dimensional space into the real, 3D world”, adding that this higher-dimensional physics could form the basis of new types of photonic devices.
The journey to the Fukushima Daiichi nuclear-power plant is a truly sobering experience. When we visited the infamous site in September 2017 our scientific curiosity was dampened by the deserted villages we passed along the way. It was clear from the belongings left behind that the inhabitants had moved out in a hurry following the disastrous earthquake, tsunami and nuclear accident that took place in March 2011. Around 470,000 people had to be evacuated from this region, 154,000 of whom had to leave because of the damaged power plant. With vegetation covering houses and cars, nature has now taken over these abandoned settlements.
These villages are being slowly decontaminated – with some even declared safe – and roadside solar-powered radiation dosimeters have been installed to give the public confidence that things are on track. But despite this, we got the sense that official information remains mistrusted. Combined with the fear of radiation and a lack of jobs, this mistrust makes past residents reluctant to return, and the villages they once lived in remain deserted.
We watched these empty houses pass by from a bus operated by TEPCO (the Tokyo Electric Power Company, which runs the reactors) as we were driven the 8.5 km from the security checkpoint to the Fukushima Daiichi plant itself. Upon entering the site, we passed large tanks of contaminated water and vast treatment plants designed to remove the radionuclides from the water that is used to cool the remains of the reactor cores. There were numerous large diggers and trucks, many so contaminated that they have themselves become radioactive waste. Then, as we descended a small hill overlooking units 1–4 of the plant, we saw for ourselves the destruction and large-scale construction work being carried out.
Measured approach: Bruce Drinkwater (right) and Rob Malkin by one of the many solar-powered dose meters located around Fukushima prefecture. This one, in Naraha, shows 0.158 µSv/h at a distance of 13 km from the damaged reactors. (Courtesy: Bruce Drinkwater)
Disaster zone
It is worth recalling the cause of this disaster. On the afternoon of 11 March 2011, a massive magnitude-9.1 earthquake occurred roughly 70 km off the east coast of Japan. The fourth most powerful earthquake of modern times, it created a giant 15 m tsunami that hit the Fukushima Daiichi plant about an hour later. While the plant’s reactors stand 12 m above sea level, the emergency diesel generators that supply back-up power to the cooling systems are only 5 m above the water line. At this lower level, the power of the tsunami was more apparent – large storage drums were distorted by the impact of immense waves, and the emergency power supply and cooling systems of three reactor units were disabled.
Units 1 and 3 subsequently suffered meltdowns, and powerful hydrogen explosions caused extensive damage to the 1 m-thick reactor building walls. Unit 2 also suffered a meltdown but avoided a hydrogen explosion. As for unit 4, which was undergoing maintenance at the time, a hydrogen explosion was caused by gas flowing from unit 3 into the unit 4 reactor building.
While the decommissioning process will involve many stages, the first – a comparatively easy task – involves removing used fuel from the spent fuel pools located towards the tops of the reactor buildings. This was completed at unit 4 in 2014. The next task is to remove melted fuel debris, but this can only happen once its location and condition are firmly established.
Explosive event: The upper side of the unit 3 reactor building at Fukushima Daiichi was damaged by a hydrogen explosion. This area housed the spent fuel pool and the fuel handling machines. (Courtesy: TEPCO)
This is why we were in Japan. Our trip was part of a UK–Japan project to create new technology for exploring the hostile environment within the damaged reactors. Our part in this effort is to design new radiation-resistant ultrasonic imaging systems that will help provide more detailed information on the reactors’ current conditions, and allow the long and difficult decommissioning process to really begin. The work is being carried out in collaboration with Hiroshige Kikura and colleagues at Tokyo Institute of Technology, who are researching the robotic systems needed to venture into the inner reaches of this challenging and complex environment.
Our TEPCO bus parked between the badly damaged buildings of units 2 and 3, where the radiation levels for our visit peaked at 270 µSv/h (the annual limit for nuclear workers in Japan is 50 mSv). Typical radiation exposure varies widely depending on where you live, but in the UK it is 2.7 mSv per year on average (though the average in Cornwall is much higher, at 6.9 mSv per year) – at the peak rate we saw, an hour on site would deliver a tenth of a typical UK year’s dose. Naturally, we were not permitted to leave the vehicle and were advised not to stay long.
To get a view of what a nuclear-power plant is supposed to look like, we also travelled to the north-west coast of Japan for a tour of the Tsuruga plant, which was untouched by the 2011 earthquake and tsunami. Located on a peninsula bordered by tranquil beaches, the region is a picturesque holiday destination and the power plant lies partially hidden by forest-covered rocky terrain surrounding it on three sides. A 1.5 km tunnel took us through this mountainous interior of the peninsula, and we emerged next to the gleaming white-painted reactor buildings of the plant.
This site has two reactors: a pressurized water reactor (PWR) and a boiling water reactor (BWR). PWRs are the most common type of reactor found around the world and use pressurized water as a coolant to extract heat from the nuclear fuel. BWRs are also in widespread use, particularly in Japan, and operate at a lower pressure, which allows the coolant to boil and the resultant steam to drive the electricity-generating turbines directly. The PWR is still operational, though, like most nuclear reactors in Japan, it is currently offline, waiting for politicians to decide about the future of nuclear power in the wake of the earthquake and tsunami. The BWR that we had come to see is in the early stages of decommissioning and, crucially for us, is of a very similar design to Fukushima Daiichi’s reactors.
Inside a reactor’s heart
Gaining access to the BWR was, quite rightly, not a quick procedure. After exchanging business cards and taking lunch with our hosts, the Japanese Atomic Power Company (JAPC), we attended a briefing that – given mostly in Japanese – left us in some suspense as to exactly what we were going to be allowed to see. A JAPC bus then delivered us to the security and safety checkpoint where we were issued with entry passes, relieved of our shoes and checked in radiation scanners. At this point, we were also given personal dosimeters, which record total radiation exposure and ensure that nuclear-power-plant workers and visitors stay within strict annual dose limits. Next came a new outfit: light blue socks, gloves, hair net, helmet and even thermal underwear, topped with a light blue boiler suit as the outer layer. A second radiation check, some standard-issue blue shoes and, finally, the preparation process was complete. We passed through an airlock and crossed the threshold into the most tightly controlled area on the site – the reactor building itself.
The interior of the building was hospital-like in its cleanliness and the only sound was a low, persistent hum of machinery. We were led along a network of large and deserted concrete corridors that surround the primary containment vessel (PCV) – a concrete and steel structure enclosing the reactor vessel that is designed to contain any radioactive material in an accident. A very ordinary lift (amusingly almost identical to one in our university department) took us to the PCV’s entrance and, with no ceremony at all, we were ushered into the giant inverted-light-bulb-shaped containment vessel.
We entered at a level even with the bottom of the reactor pressure vessel (RPV), where the control-rod mechanisms are located. These rods control the nuclear reaction by absorbing neutrons, and are inserted and withdrawn by electrical drives from below the pressure vessel. Standing on the metal grating floor, we were immediately struck by the engineering complexity: pipes, valves and all manner of fixtures and fittings clutter the space, with large painted markers of 0°, 90°, 180° and 270° the only way of maintaining a bearing.
A spiral staircase brought us to the concrete basement of the PCV. At Fukushima Daiichi (figure 1), this is where the fuel from the exploded reactors now rests as a lava-like mess known as corium – an unpleasant mixture of radioactive fuel, the remains of various metals and the products of the chemical interaction between the fuel and the concrete. The 10 m-thick concrete floor and the metal shell below are the final barriers stopping that radioactive fuel leaking into the environment.
1 The flooded vessel Schematic of the reactor pressure vessels at Fukushima Daiichi. In the now-flooded concrete basements of these vessels, fuel and other debris await clean-up, but the area is far too radioactive for human access and even robots are destroyed in a short time.
We were now at the bottom of this light-bulb-shaped container, where more pipes and other metalwork fill much of the space, providing a rather claustrophobic environment. The RPV itself sits on a 4 m-diameter hollow concrete cylinder known as the pedestal, which we could just about see beyond the tangle of pipes. After clambering over large earthquake-isolation dampers, there in front of us was a small opening looking into the interior of the pedestal – an uncluttered space compared with the complexity surrounding it. We had arrived at our destination. This is what Fukushima Daiichi’s units 1–3 would have looked like a few seconds before the disaster. And as the pedestal sits directly below the RPV, it is thought that much of the fuel and other debris caused by the meltdown may be contained in this hollow space, although recent images seem to show that some has spewed out of the small openings. After the accident, the PCVs were flooded with water to a depth of 6 m to cool and moderate the ongoing reactions.
Deploy the robots
To date, a number of shape-changing robots, crawler robots and, more recently, a submersible robot have been sent to explore the highly radioactive areas of Fukushima Daiichi’s reactor buildings. However, access to the pedestal area itself is limited, and the extremely high radiation levels near the fuel make it particularly challenging. The crawler robots enter the PCV through a 10 cm-diameter tube known cryptically as X-100B. From here they are lowered about 2 m onto the metal grating floor of the PCV’s access level – the equivalent level at which we had entered Tsuruga’s PCV. From videos recorded by the first robots, it was quickly established that this floor has been damaged by heat but is mostly intact, apart from a gaping hole through which the molten fuel fell. During the most recent deployment in late 2017, a submersible robot known as Mambo was lowered through openings in the metal grating floor, into the water below, to explore the lower regions of the PCV outside the pedestal.
The robots tell us that the radiation dose levels around the metal grating floor are 10–20 Sv/h – enough to give a human a lethal dose within tens of minutes, and a world in which even specially designed robots and cameras don’t last long because of radiation damage to electronics and cabling. Outside the PCV, radiation levels are high relative to background, but low enough to conclude that the PCV has done its job and contained the fuel. Meanwhile, from the robots’ videos and images, we know that the uranium fuel melted through the bottom of the stricken RPVs and fell into the concrete basement. In some simulated scenarios, the fuel fell in a single large lump, while in others it was sprayed much more widely. However, the true state of the 200 tonnes of uranium is largely unknown, apart from the lower-floor images, which show some molten material outside the pedestal.
To survey the location and condition of the fuel, and then help with the eventual removal process, our research group is developing an ultrasonic array imaging technique. Our imaging arrays are made up of hundreds of small piezoelectric elements, each acting as an emitter and receiver of ultrasonic waves. In imaging mode, the arrival times of the return echoes are used to map the 3D scene in front of the device – much like how bats use echolocation to hunt their prey and avoid flying into objects. More elements and larger arrays produce better images, but given the space and access limitations inside the reactors, the system must be unusually compact – roughly as small as a coffee mug.
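The core idea is simple pulse-echo geometry, and a minimal sketch (our illustration of the general principle, with an assumed sound speed, not the group’s actual code) looks like this: ranges come from round-trip times, and an image is built by summing each element’s signal at the delay expected from each candidate point (so-called delay-and-sum beamforming).

```python
import numpy as np

c = 1480.0  # m/s, assumed speed of sound in water

def range_from_echo(t_round_trip):
    """Round-trip echo time to reflector range: d = c*t/2."""
    return c * t_round_trip / 2.0

def delay_and_sum(signals, fs, element_positions, point):
    """Sum each element's signal at its expected round-trip delay for `point`;
    true scatterer locations add up coherently across the array."""
    value = 0.0
    for pos, sig in zip(element_positions, signals):
        delay = 2.0 * np.linalg.norm(point - pos) / c
        idx = int(round(delay * fs))
        if idx < len(sig):
            value += sig[idx]
    return value

print(f"echo after 100 us -> range {range_from_echo(100e-6)*100:.1f} cm")  # 7.4 cm
```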
2 Sounding out the damage Bruce Drinkwater and Rob Malkin’s technique uses ultrasonic imaging to characterize structures and materials. (a) Imaging, using a 2.5 MHz array, of a sample of steelmaking slag, used as a simulant of the heterogeneous fuel debris expected within the reactor pedestal. (b) An ultrasonic image of the specimen showing surface features. (Courtesy: Rob Malkin)
Ultrasonic devices have two major advantages over optical camera systems in this kind of environment. First, the piezoelectric ceramics that they are made from are naturally radiation tolerant, which means that these devices can get much closer to the fuel, for example in the pedestal area, where the radiation levels are expected to be extremely high. Second, they can form images even in water that has been made optically opaque by fine particulates, a problem already encountered by the Mambo submersible robot.
Our imaging system will combine a reasonably standard sonar array for robot navigation and a novel high-frequency array for detailed materials characterization (figure 2). This second system will use the angular reflectivity and frequency spectral features of the ultrasonic echoes to distinguish the various materials that could be present. For example, metallic objects will produce high-amplitude short-duration echoes, whereas porous material such as corium will cause reduced amplitudes and more diffuse scattering. In some cases, it might also be possible to send the ultrasonic waves inside the debris and start to get a picture of its internal structure. Given the current uncertainties, whatever is discovered will be extremely valuable in planning the next steps and allowing decommissioning to begin in earnest.
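As a toy version of that discrimination step (ours, with invented thresholds and synthetic echoes, purely to illustrate the logic described above):

```python
import numpy as np

def classify_echo(envelope, fs):
    """Crude discriminator: metal tends to give strong, short specular echoes;
    porous corium-like debris gives weaker, longer, more diffuse ones."""
    peak = envelope.max()
    duration = np.count_nonzero(envelope > 0.5 * peak) / fs  # half-peak width
    if peak > 0.5 and duration < 5e-6:  # assumed thresholds
        return "metal-like: strong, short echo"
    return "porous/diffuse: weak, extended echo"

fs = 50e6                                # assumed 50 MHz sampling rate
t = np.arange(0, 40e-6, 1 / fs)
metal_echo = np.exp(-((t - 10e-6) / 1e-6) ** 2)         # sharp, strong
corium_echo = 0.2 * np.exp(-((t - 10e-6) / 8e-6) ** 2)  # weak, diffuse
print(classify_echo(metal_echo, fs))
print(classify_echo(corium_echo, fs))
```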
Fukushima’s future
Our visit to Japan gave us a real sense of what lies lurking in the flooded depths of the damaged Fukushima Daiichi reactors. It is without doubt one of the most challenging environments imaginable. We hope our ultrasonic measurement systems, when mounted on specialist robots, will help form a definitive picture of exactly what state the damaged reactors are in, as only once this picture has become clear can the long process of fuel removal and decommissioning really begin. For us, there is still more research to be done over the next few years, but there is a real prospect that the systems we are developing will be manufactured and sent into the damaged reactors. In some ways, we feel a bit like the scientists and engineers preparing to launch a satellite into space – on the one hand excited, but on the other aware that, once the system is deployed, much of what happens is out of our hands.
Robot mission: (left) Bruce Drinkwater with an experimental robot designed for traversing rubble. (right) Inside unit 2, the pedestal-area walkway directly underneath the reactor pressure vessel. This image from one of the robots shows potential deposits of molten fuel. (Courtesy: left Bruce Drinkwater; right TEPCO)
The decommissioning of the Fukushima Daiichi nuclear plant is a multi-faceted science and engineering challenge, in which progress can be made only by scientists and engineers, in academia and industry, working together across national and subject boundaries. So big is the challenge that it will be around for the next 30 years, if not more.
Timeline of a disaster
11 March 2011 Earthquake strikes 70 km off east coast of Japan, causing the emergency shutdown of Fukushima Daiichi and loss of mains power
Fukushima Daiichi is hit by a 15 m tsunami, disabling emergency cooling generators
12 March 2011 Explosion and radiation leak at unit 1
13 March 2011 Sea water injected into units 1 and 3
14 March 2011 Explosion at unit 3
Sea water injected into unit 2
15 March 2011 Explosion at unit 2
Fire and explosion at unit 4
17 March 2011 Helicopters pour water on units 3 and 4
16 December 2011 All reactors are in a state of “cold shutdown”
The design of block copolymers that self-assemble into various shapes has proved an attractive approach for nanoscale periodic patterning that can cover large areas. However, the various approaches that aim to control the alignment of these patterns often require several hours of annealing and can introduce defects, as well as limiting the block-copolymer materials that can be used. Now, researchers have shown that a mismatch in the coefficient of thermal expansion between a block copolymer’s substrate and an applied top coat can provide a shear stress that aligns patterns over large areas, with few defects, after mere minutes of annealing.
Order means little without control. When Sam Nicaise – now a postdoctoral researcher at the University of Pennsylvania – and colleagues led by Karl Berggren at MIT in the US started looking at ways to control block-copolymer self-assembly, they joined a throng of researchers who had tried working with viscous flow, electromagnetic fields and gradient annealing with similar aims.
Block copolymers join two or more chemically distinct polymer chains into a single molecule. In a way that resembles the phase separation of immiscible oil and water, block copolymers can microphase-separate, leading to the self-assembly of nanostructures.
“We had a hunch – we thought maybe we could provide shear stress with a top coat,” says Nicaise. Conventional block-copolymer films are bare, but research was starting to suggest that adding a top layer might make the self-assembled patterns align. Nicaise and his colleagues thought they might be able to take the approach further, not just inducing alignment but controlling its direction as well.
Other groups had added oils and organic top coatings to the block copolymers. “If you come from a chemical engineering and rheological perspective that looks at the flows as a function of time and temperature, soft top coats are the first step,” suggests Nicaise. “We come from electrical engineering – so we come from a different perspective.”
Instead he and his colleagues opted for inorganic silicon dioxide, which brought two advantages. Silicon dioxide can tolerate higher temperatures, so by going to higher temperatures they could increase whatever shear stress is caused by a mismatch in the coefficient of thermal expansion between the layers. It also makes the structures compatible with industry processes, which generally use high temperatures.
The hunch paid off. The top coat cracked, but the researchers found that shear stresses exerted on the block copolymer followed the cracks and the alignment followed the direction of the stress. As a result, cracks could be artificially scribed on the surface to control the direction of alignment.
The stresses generated exceeded 100 kPa – orders of magnitude larger than those required for the shear alignment of block copolymers. In addition, the MIT researchers only annealed their films for 10 minutes. Other approaches to aligning block copolymer self-assembly require hours of annealing – timescales that can cool industry interest. As Nicaise puts it, “To see it ordered after annealing in just a few minutes – that’s really exciting.”
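A back-of-envelope estimate (ours, with assumed round numbers, not the team’s finite-element model) shows why a CTE mismatch can plausibly generate stresses of this order: the mismatch strain Δα·ΔT, multiplied by an effective film modulus, sets the stress scale.

```python
# Thermal-mismatch stress scale: sigma ~ E * delta_alpha * delta_T
E = 1e7             # Pa, assumed effective modulus of the polymer film when hot
delta_alpha = 7e-5  # 1/K, assumed CTE mismatch (polymer vs silicon dioxide)
delta_T = 150       # K, assumed temperature excursion during annealing

sigma = E * delta_alpha * delta_T
print(f"thermal stress ~ {sigma / 1e3:.0f} kPa")  # ~100 kPa, the order reported
```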
Cheap models
The most common theoretical models for block-copolymer systems are either field-based or particle-based, both of which can be computationally expensive and time-consuming. Instead, the MIT researchers used a larger-scale numerical approach based on finite elements, which is able to capture the stresses inside the system.
Nicaise emphasises how useful the simplicity of their viscoelastic analysis is. His co-worker from the Alexander-Katz Group at MIT, Karim Gadelrab, who worked on the modelling side of the project, is enthusiastic about how much more quickly it can run simulations to capture the temperature-dependent mechanical properties of the block copolymer film and inorganic layers. “It gives an example of how researchers can use a less expensive technique to look at what patterns they can get with the block copolymers,” says Nicaise.
Opportunities for further development
As someone who has been looking at array fabrication and these kinds of films for nearly a decade, Nicaise sees a lot of room for improving the technique so that the alignment covers larger areas. At present, the alignment is best close to the cracks and the film remains well aligned within around 100 micrometres of the crack. In this respect it has potential for being a useful local patterning approach. “I am hoping some researchers can pair it with other techniques for shear alignment such as laser annealing for local annealing,” adds Nicaise.
That said, there are other advantages. While viscous flow and electromagnetic field approaches to impose alignment on block copolymer self-assembly are based on the properties of the polymers themselves – such as the dielectric coefficient or magnetic permeability – the shear stress from the top coat stems from the materials of the top coat and substrate, allowing more freedom in the polymers that can be used.
Nicaise suggests that fabricating chips with features below 10 nm is a possible application of the approach. In addition, by leaving the top coat on, it may be possible to stack several layers of scribed films and anneal them in one go, providing an approach to accompany other methods to produce 3D structures. There is also the possibility of working with how the block copolymer self-assembly interacts with nanolithographically patterned topographical templates.
Researchers at North Carolina State University have developed a mathematical model for computing radiation treatments that could substantially reduce side effects, while delivering the same results as conventional radiotherapy (Phys. Med. Biol. 63 015036).
Most radiotherapy treatments are fractionated, with the total radiation dose split into multiple treatments delivered over several days or weeks. Such fractionated regimes reduce radiation-induced damage to healthy cells, because delivering the same physical dose in multiple fractions allows healthy cells to recover between treatments. Current clinical protocols stipulate that patients receive the same dose in each treatment session, but this regime may not always be optimal.
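The standard way to quantify this sparing effect is the linear-quadratic model’s biologically effective dose (BED); the snippet below is a textbook illustration (not the paper’s optimization model) of how the same physical dose does far more biological damage when delivered in one fraction, with an assumed α/β ratio for late-responding healthy tissue.

```python
# Biologically effective dose in the linear-quadratic model:
#   BED = n * d * (1 + d / (alpha/beta))
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    return n_fractions * dose_per_fraction_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

# Same 60 Gy physical dose to healthy tissue (assumed alpha/beta = 3 Gy):
print(bed(30, 2.0, 3.0))  # 100.0  -- 30 fractions of 2 Gy
print(bed(1, 60.0, 3.0))  # 1260.0 -- one 60 Gy fraction, far more damaging
```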
“Different doses, carefully planned to minimize side effects, can be just as effective,” explained Dávid Papp, assistant professor of mathematics at NC State University. “However, the extent of this benefit has never been assessed. The algorithms we use now to determine the best personalized treatments don’t work when computing treatments with different dose distributions in different fractions.”
Papp developed and tested a “spatiotemporal fractionation” approach, in which different dose distributions are delivered in different fractions. By hypofractionating parts of the tumour while delivering approximately uniform doses to the surrounding tissue, such treatments can reduce the radiation dose to healthy tissue while maintaining effectiveness against the tumour.
In a proof-of-concept study, Papp and colleagues tested the plan against model slices of five different liver tumours, each representing a unique tumour size or location, to allow comparisons with actual clinical treatments.
“We wanted to see what the quantitative benefits of such a new protocol would be,” said Papp. “How much can you reduce the radiation’s effect on the liver while making sure that the tumour receives a consistent and effective dose? A reduction of 20% would reduce side effects enough to warrant a change in everyday clinical practice.”
The computed spatiotemporal plans reduced the liver dose by 13–35%, without compromising other clinical goals. Papp has now begun work on refining the model to make it more robust, with a view toward in vivo testing.
“Conventional radiation treatments don’t necessarily achieve maximum benefit,” Papp says. “Our protocol, by delivering a high single-fraction dose to parts of the tumour during each fraction and a consistent lower dose to the liver and other healthy tissue, could reduce patient side effects substantially while maintaining the same effectiveness as conventional treatments.”