Thinking of taking your e-reader on holiday this summer? Sitting around in the sunshine catching up on all the books you haven’t had time to read may soon be even more enjoyable thanks to a new reflective screen technology that works without a backlight. Developed by Andreas Dahlin and colleagues at Sweden’s Chalmers University, the technology is based on colour-changing nanostructures, and it could be a promising alternative to the energy-intensive digital screens currently employed in smartphones and tablets.
Conventional digital screens require a backlight to illuminate the text or images they display. Not only does this require extra energy, it also means that screens are sometimes too dim to be comfortably used outdoors – especially on bright, sunny days.
To overcome this problem, researchers have been exploring ways of incorporating so-called structural colours into the “electronic paper” of reflective displays. Materials that exhibit structural colour contain no dyes or pigments. Instead, they rely on nanostructures that reflect or scatter light waves of certain frequencies, and they do not fade over time – especially if the structures are made of less-reactive “noble” metals such as gold or platinum.
Not bright enough
One particularly promising technique for making structural colours is to combine metallic nanostructures with materials that are electrochromic – that is, they change colour when a small electrical voltage is applied to them. Until now, however, devices based on the strongly electrochromic material tungsten trioxide (WO3) often lacked colour purity (chromaticity) and were not bright enough for practical applications.
The Chalmers team explains that this lack of brightness (low reflectance) poses a serious challenge for designers of electronic paper because only a fraction of the display’s surface will show a given colour when using subpixels arranged side by side. To reduce the number of subpixels required, they needed to create electrochromic surfaces that can provide many different colours.
Inverted design
In their latest study, Dahlin and colleagues developed a new type of inorganic electrochromic nanostructure that has both a high reflectance and an excellent colour range. They did this by modifying the design of an ultrathin flexible material based on layers of WO3, gold and platinum that they previously developed in their laboratory. While this older design could reproduce all the colours an LED screen can display and required only a tenth of the energy of a standard tablet, the colours on this earlier reflective screen were not displayed with optimal quality.
The researchers have now reversed the thin-film layers within this structure in a way that allows all the electrical components to be “hidden” behind the reflective surface. This involved placing the electrically conductive material in the device underneath the pixelated nanostructure that reproduces the colours, rather than above it as was originally the case. “The new design means you look directly at the pixelated surface, therefore seeing the colours more clearly,” Dahlin explains. “They can be seen through a glass cover, which will make colour images much easier to see in a real device.”
Commercialization prospects
The researchers showed that their nanostructures clearly outperform the best colour e-reader on the market today in terms of both colour range/quality and brightness, and Dahlin is confident that a product containing the new technology could be developed commercially within a “couple of months if a large player on the market really decides to give it a try”.
“As well as smartphone and tablet screens, the main application would be colour-changing surfaces or displays for use in situations where light is high, such as outdoors during the day,” he tells Physics World. “The devices are not fast enough to show video, so if used in advertising, they would offer energy and resource savings compared with both printed posters and moving digital screens.”
The researchers, who report their work in Nano Letters, say they are now trying to make the same nanostructures using another process that wastes less gold and platinum in the preparation stage.
Before looking for an internship, Constantine Pelesis already knew that he wanted to go into nuclear medicine. “I was studying part-time for a Master’s degree in medical physics with the University of Surrey, while also working as a teaching assistant there.” In the first summer of his Master’s course he was looking for something that would give him some experience, help him to broaden his network, and provide some extra income. He therefore contacted the South East Physics Network (SEPnet), an organization that works across nine universities in south-east England to support physics students in finding placements.
Although SEPnet didn’t have any vacancies, it put Pelesis in touch with Adaptix, a company in Oxfordshire working on medical-physics devices, and he sent an e-mail to express his interest. Adaptix later called him when an internship position came up that was relevant for him, showing that it’s always worth expressing your interest. Even if there are no internships available at the time, situations can change and new opportunities are always emerging.
“Adaptix sent me a job description with the different projects it had available, and I applied with a cover letter and a CV,” Pelesis recalls. “I was invited to an interview with the chief science officer, who then became my supervisor when I was offered the internship.” After learning he had been successful, he moved to Oxford for the summer. “I loved getting to know Oxford and I did a lot of exploring while I was there,” he says. “I visited many places where they filmed Harry Potter, and I also explored the countryside around the city. Adaptix is just outside the city and my commute was a walk along the river every day.”
During the internship, Pelesis worked on computational modelling of X-rays. “I learnt how to use a new software package to do Monte Carlo simulations of electrons interacting with a metal plate,” he says. “These simulations predicted how much energy would be deposited when the electrons generated X-rays, and how this varied depending on the set-up of the equipment. It isn’t as good as doing a physical experiment, but it gives you some initial signs about which set-ups look most promising, so that fewer need to be tested experimentally.”
Pelesis emphasizes the importance of the soft skills he developed while he was there, through working as part of a team and giving presentations on his project. He recommends speaking to lots of people across a company to get a broader view and make the most out of an internship. “I talked to people from different departments about their roles and got an understanding of how a whole company works together. It’s also good for building up your professional network.”
After his internship, Pelesis stuck with his plan to go into medical physics, and now works for the National Health Service as a nuclear medicine clinical technologist at Singleton Hospital in Swansea. This involves carrying out various procedures, such as administering radiopharmaceuticals to patients and using gamma ray cameras to image their internal organs, to see if they are functioning properly. As a next step in his career, he wants to become a clinical scientist, which would involve more work on quality control of the medical machines and radiation protection, and less patient-facing work.
Pelesis advises prospective interns to not only ask their university departments about opportunities, but also search widely online and get a LinkedIn account. “It’s another way of finding opportunities and seeing what jobs are out there,” he says. “It also enables you to make the most out of your internship by keeping in touch with the people you meet there.” Pelesis is still in contact with his Adaptix supervisor, who has introduced him to other people who work with the NHS in Wales. “An internship is a great opportunity to learn from people and build up your network,” he says. “You never know where it might lead.”
Patient X-ray attenuation calculated from the DICOM header information of chest X-rays, compared with gold-standard CT size measures. (Courtesy: Hilde Bosmans)
More than 3.6 billion diagnostic imaging exams are performed each year across the globe, with medical radiation use accounting for 98% of the population’s dose from artificial sources. To keep track of this radiation burden, radiology departments employ dose management systems that extract information from X-ray exams to estimate patients’ radiation levels or flag suspicious dose outliers.
The size of a patient will influence their individual organ doses. But for projection radiography (two-dimensional X-ray imaging), dose management systems don’t usually have access to such information. Instead, they use conversion factors based on a reference patient, resulting in less accurate dose calculations.
To address this shortfall, researchers in Belgium have developed a metric to estimate patient size directly from the X-ray images. Importantly, the new metric only uses parameters available in the header of the patient’s DICOM files (which store medical images and related data). They describe the approach in Physics in Medicine & Biology.
“Having a structured, validated methodology for size estimation using DICOM header information paves the way for automated size-specific dosimetry in digital radiography,” explains Hilde Bosmans from the University Hospitals Leuven. “With the widespread adoption of dose management systems that track the patient’s dosimetric records, patient-specific effective organ dose calculations will allow for better data in terms of radiation-induced risks. This is valuable for all patients, and particularly for obese or thinner patients and patients that undergo X-ray exams regularly.”
Metric definition
Bosmans and colleagues proposed an attenuation metric related to the dose absorbed in the patient – based on the ratio of incident air kerma to detector air kerma – that correlates with patient size. They defined this metric for both thoracic and abdominal projection radiography, using 137 thoracic and 137 abdominal projection images as input data. These patients also had recent CT exams of the same body part, serving as the gold standard for patient size.
The researchers hypothesized that an attenuation metric, related to the ratio of detector air-kerma to incident air-kerma (DAK/IAK), would inversely relate to the patient size. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ac0d8c)
To establish the ground-truth patient size, the researchers used the CT scans to calculate the water equivalent diameter (WED) and water equivalent thickness (WET) of all patients. They then plotted these ground-truth WED and WET values versus the natural log of the attenuation metric, for both thoracic and abdominal scans. This generated four correlation curves that could then be applied to estimate patient size based solely on DICOM information from the projection radiographs.
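The correlation-curve step can be illustrated with a short sketch. This is not the authors’ code: the data points below are invented for illustration, and a simple linear fit of WED against the natural log of the attenuation metric is assumed (the actual curve form used in the study may differ).

```python
import numpy as np

# Hypothetical example: fit a correlation curve between the ground-truth
# water equivalent diameter (WED, from CT) and the natural log of the
# attenuation metric (DAK/IAK, from DICOM header data).
# These values are illustrative only, not from the study.
ln_metric = np.array([-2.1, -1.8, -1.5, -1.2, -0.9])   # ln(DAK/IAK)
wed_cm    = np.array([33.0, 30.5, 28.2, 25.9, 23.4])   # CT-derived WED, cm

# Least-squares linear fit: WED ~ a * ln(metric) + b.
# More attenuation (smaller DAK/IAK) should mean a larger patient,
# so the slope a comes out negative.
a, b = np.polyfit(ln_metric, wed_cm, 1)

def estimate_wed(dak: float, iak: float) -> float:
    """Estimate patient WED (cm) from detector and incident air kerma."""
    return a * np.log(dak / iak) + b

# Apply the curve to a new exam using only header quantities:
print(estimate_wed(0.2, 1.0))
```

The same procedure, repeated for WET and for the thoracic and abdominal data sets, would yield the four correlation curves described above.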
The team note that some of the DICOM fields required for this approach (exposure index, kerma–area product, exposed area and source–detector distance) are optional. “They are, however, usually available in the DICOM header or could be made available by the system vendor,” says Bosmans. “In several countries, it is even mandatory to have these data displayed. After all, they are important indices to monitor the practice and the equipment, and they can unravel problems or occasional malpractice.”
The researchers validated the technique’s ability to estimate patient size using four different radiography systems. For all devices, they examined X-ray exams from 50 new patients and used the correlation curves to estimate WED and WET values based on the DICOM information.
Three of the systems (Carestream’s DRX Evolution, Siemens’ Axiom Luminos dRF and Canon’s CXDI-11) included a standardized exposure index, in which the exposure index is linearly proportional to the detector air kerma (rather than being vendor-specific), thereby enabling consistent performance evaluation across devices and departments.
For thoracic exams on these systems, the differences between estimated and ground-truth WED were all within ±15%, with absolute differences of 4% on average. Estimated WET values had absolute differences of 8%, 7% and 7%, for DRX Evolution, Axiom Luminos dRF and CXDI-11, respectively. In the two systems used to perform abdominal scans, the average absolute differences between estimated and ground-truth values were 4% and 6% for WED, and 6% and 8% for WET.
The researchers also examined a system without standardized exposure index: the Triathlon DR from Oldelft. For thoracic exams, the technique underestimated WED and WET, with average differences from the ground truth of –36% and –57%, respectively. For abdominal scans, the algorithm gave similar results to the other systems, with deviations of 3% for WED and 5% for WET.
Improved accuracy
The researchers suggest that the new metric could enable individualized risk assessments with better accuracy than a generic conversion factor. They emphasize that their method is in principle applicable to all devices that acquire X-ray projections. Including patient size in a dose management platform would improve the dosimetric data and sharpen dose outlier management, reducing false positives from overweight patients and false negatives from underweight patients.
Bosmans notes that this development is part of a larger project to create, along with personalized CT dose estimates, automated personalized dosimetry for 2D projection imaging. The research is performed in collaboration with the medical software company Qaelum and supported by a Flemish VLAIO grant. The intention is to implement the automated size estimation metric into Qaelum’s dose and quality monitoring software.
“Ultimately, hospitals and patients will benefit from a total solution that will automatically evaluate the quality of the exam at the one side and provide advanced effective and organ dose estimations in X-ray imaging at the other,” she tells Physics World.
X-ray flares originating from behind a black hole have been observed for the first time – by an international team led by Dan Wilkins at Stanford University in the US. The wavelength-shifted X-ray flashes are believed to have originated as photons that collided with the black hole’s inner accretion disc, before being redirected towards Earth by the black hole’s colossal gravity. By observing the effect in more detail, astronomers could gain important insights into the immediate surroundings of black holes.
Just before material passes across the inescapable event horizon of a black hole, theories predict that it is superheated to millions of degrees, forming a rotating corona of plasma surrounding the black hole. Meanwhile, the black hole’s magnetic field is continually twisting, snapping and recombining as the plasma rotates. This magnetic activity imparts colossal amounts of energy to the plasma electrons, which produces intense, characteristic flashes of X-rays.
While these events have been widely observed, recent calculations by Wilkins suggest that we should also see smaller, delayed X-ray flashes. These X-rays are emitted behind the black hole from our perspective – but then reverberate off the inner surface of its orbiting accretion disc. Due to Einstein’s general theory of relativity, these echoes should be bent around the black hole, and magnified by its intense gravitational field.
Dimmer flashes
Furthermore, the orbital motion of the accretion disc means that X-ray photon wavelengths will be shifted to varying degrees, depending on where within the disc they reverberate from. As a result, these dimmer flashes can offer glimpses of an environment completely obscured from our view.
In their study, Wilkins’ team made X-ray observations of the supermassive black hole at the centre of I Zwicky 1 – a galaxy about 800 million light-years away – using NASA’s NuSTAR telescope and ESA’s XMM-Newton observatory. Just as Wilkins predicted, both telescopes clearly detected energy-shifted X-ray flashes that followed the brighter, larger flares – providing key evidence that X-rays from behind the black hole had echoed off its accretion disc.
Through future observations of the effect, Wilkins and colleagues hope that astronomers could learn much more about the physical processes taking place in black hole coronas – which have so far proven notoriously difficult to study. The ideal opportunity for these measurements will come with ESA’s Athena X-ray observatory, planned for launch in 2031. Featuring a far larger mirror than existing X-ray telescopes, the instrument will for the first time enable in-depth observations of X-rays originating throughout the entire coronas of black holes.
I once went to a New Year’s Eve party where I heard a graduate physics student apologize for leaving early. He was working on an accelerator experiment, he explained, and his shift was starting at midnight. To the astonishment of almost everyone, he seemed to be looking forward to getting back into the lab. I, though, was not surprised, having interviewed enough scientists to recognize their enthusiasm for regarding leisure time as a precious opportunity to work.
The history of science is full of discoveries by researchers supposedly on holiday. Harold Urey, a chemistry professor at Columbia University, famously discovered deuterium on Thanksgiving Day 1931 – a holiday when Americans cook and eat turkey and pumpkin pie, and hang out with relatives, including many they haven’t seen in a while. Not Urey, who sent the Physical Review a paper about his Thanksgiving Day work the following week, and won the Nobel Prize for Chemistry three years later.
In 1956 another Columbia researcher, Chien-Shiung Wu, and her husband were planning a vacation in the Far East to celebrate the 20th anniversary of their emigration from China. They had booked tickets on the Queen Elizabeth – but Wu backed out, and left her husband to take the trip alone. She wanted to take advantage of an opportunity that had arisen to investigate Tsung-Dao Lee and Chen-Ning Yang’s recent idea that parity conservation had not been experimentally tested in the weak interaction. By the beginning of the next year, Wu and her team discovered that parity was violated in the weak interaction, in one of the most surprising finds in the history of physics.
Another famous working-holiday find took place at the end of 1925, when the Austrian physicist Erwin Schrödinger, who was then married, went on a skiing vacation in Arosa, Switzerland with a girlfriend. He returned on 9 January 1926 with the rudiments of the wave equation that would revolutionize quantum physics, and earn him a share of the 1933 Nobel Prize for Physics (with Paul Dirac).
Moonlighting
I have also heard of physicists who have been caught secretly working while supposedly doing something else. The Israeli theoretical physicist Yuval Ne’eman, for example, was an active member of the Knesset in the 1980s, after helping to found a right-wing political party. However, he was often chastised by reporters and politicians after TV cameras swooped in on him during boring speeches and caught him doing physics equations.
In fact, I have often encountered the feeling that physicists view it as their right to work on holiday. I remember once asking a physicist about an interview he’d had for a job at Bell Labs. He told me that it had all gone well – until he’d asked if he’d be able to work in his lab over Thanksgiving. The interviewer hesitated, then finally promised him that he could. “And what about Christmas?” The interviewer hemmed and hawed, but could not commit. “I took the job anyway,” the physicist told me, frowning.
And even if you instruct physicists to take a break, they often don’t. During the summers in the early years of Brookhaven National Laboratory, scientists were expected to stop work early on Friday afternoons and go to a nearby beach on the south shore of Long Island. But I have it on good authority that on many such occasions some of them – including Lee and Yang, who were visiting the lab that summer – used sticks to write equations in the sand.
I recall speaking with one old-timer, whose name I have forgotten, who reminisced about the days when physicists travelled back and forth between the US and Europe by boat. They would typically have a blackboard installed in their staterooms – the old slate kind that you wrote on with chalk – and would use the journey of a week or so to let their imaginations fly. “I don’t know how today’s jet-setting physicists can do any serious thinking,” this person told me.
However, physicists do know how to relax. Albert Einstein liked sailing and the violin. Niels Bohr played football. And Robert Oppenheimer took postdocs to his New Mexico ranch, where he forbade them to talk of physics except when they had an eminent visitor. In one story, Oppenheimer took his guests horseback riding at midnight on a mountain ridge in a cold downpour in the middle of a lightning storm. Coming to a fork in the path, Oppenheimer said “That way it’s seven miles home, but this way it’s only a little longer, and it’s much more beautiful!”
But the passion with which physicists pursue these hobbies can have a dark side. The CERN experimental physicist Paul Musset, who was a musician and climber in his spare time, died in a mountaineering accident on Mont Blanc in 1985. Musset was 52, an active researcher who sometimes worked nearly through the night, and had been a candidate for the Nobel prize as a co-discoverer of weak neutral currents. I also recall one male experimental physicist confessing to me how ashamed and guilty he felt staying in the lab to do an experiment early in his career. Expecting imminent crucial results, he ended up missing the birth of his first child.
The critical point
But perhaps what I have described is a thing of the past, and physics in the 21st century has become so professionalized and bureaucratic that today’s practitioners more frequently view it as a drudge that they can’t wait to break free of. Or maybe, with e-mail and Zoom calls constantly connecting us with the world, holidays don’t even give physicists a break. So do you view holidays as opportunities to interrupt your work, or to intensify it? Send me your experiences and I’ll write about them in a future column.
There are many possible therapeutic options for treating patients with cancer. It would be incredibly helpful to be able to predict in advance which of these treatments might be successful. Thanks to research published in the Journal of Nuclear Medicine, we now have a new method to help determine whether a particular tumour might be successfully treated with iron-targeting cancer treatments.
An energetic solution
For cancer cells to endlessly multiply, they need a huge amount of energy. The machinery needed to generate that energy requires a lot of iron as a key building block. We have known for many years that cancer cells are hungry for iron. This has led to therapies being developed that target the significant iron reserves in tumour cells and turn this iron into a weapon to use against the cancerous cells.
Researchers at the University of California, San Francisco, have developed a way to assess the amount of available iron inside a cancer cell and therefore predict whether these new iron-targeting treatments might be effective on a particular patient’s tumour. This parameter has previously been impossible to measure.
“Iron rapidly oxidizes once its cellular environment is disrupted, so the intracellular form can’t be quantified reliably from tumour biopsies,” explains co-senior author Adam Renslo.
Instead, the researchers have developed a radiolabelled molecule that reacts with the available iron in cells, trapping the molecule inside them. The amount of this radiotracer retained in a cell is proportional to the amount of iron that was initially available. The radiolabel – a safe, detectable, radioactive fluorine-18 atom incorporated into the molecule’s structure – is already widely used in PET scans. When the fluorine-18 decays, it emits a positron, which can be detected outside the body using a PET scanner.
While testing the new method, the researchers found that the uptake of their labelled molecule correlated with the amount of an enzyme that processes iron in the cell – an indication that more iron is likely to be present.
They also found that the success of treating different cancer cell types with iron-targeting drugs could be predicted by the uptake of their labelled molecule. PET images of the brain of a mouse with an implanted tumour clearly revealed the tumour amid the surrounding tissue.
Co-senior author Michael Evans notes that iron dysregulation “occurs in many human disorders, including neurodegenerative and cardiovascular diseases, and inflammation”. He says that being able to use this new tool in patients, to determine how much of this free iron is present and whether treatments targeting it may be successful, represents “an important milestone towards understanding the therapeutic potential” of these treatments.
Researchers in the US have discovered an entirely new liquid phase that arises as ultrathin films of glass are deposited directly onto cooled substrates. Led by Zahra Fakhraai at the University of Pennsylvania, the researchers used an intense X-ray source to reveal extremely dense, highly stable structures within the films, which transitioned to more conventional bulk liquids above a certain temperature.
Glasses typically form as a material undergoes rapid cooling from its molten state. Below a certain transition temperature, molecules within this supercooled liquid (SCL) slow down, allowing the material’s structure to solidify. The result is a substance with similar properties to a crystalline solid, but with atoms in a disordered configuration that more closely resembles a liquid.
If the glass is subjected to temperatures higher than its transition temperature, thermodynamic effects can drive its molecular structure into a state of equilibrium over time as the material gradually relaxes into structures such as droplets and ordered crystals. Although this effect is limited in bulk glass, it presents more of a problem for ultrathin, nanoscale glass films formed from SCLs. Due to their low transition temperatures, these useful materials are prone to forming droplets and crystals as they age, inhibiting the capabilities of small-scale features.
Keeping molecules mobile
To circumvent this issue, Fakhraai’s team used a technique called physical vapour deposition (PVD). Here, solid films form directly from gases as molecules are deposited onto a substrate. By keeping the substrate just below the transition temperature of the glass, the team ensured that the molecules remained mobile enough to rearrange themselves, and thus to adopt more stable configurations as they relaxed to their equilibrium state.
The result was a highly stable glass film with a structure that conventional techniques could only have achieved through millions of years of ageing. To study the structure of this film in more detail, Fakhraai and colleagues used extremely powerful X-rays produced by the National Synchrotron Light Source II at Brookhaven National Laboratory.
Through this analysis, the team discovered that the PVD method produced an entirely new type of liquid. Arising within films between 25 and 55 nanometres thick, this liquid undergoes a phase transition to a typical bulk liquid at transition temperatures roughly 35 K cooler than ordinary SCL transitions. Intriguingly, this exotic new phase is extremely dense, with molecules more closely packed together than the researchers had thought was possible without applying immense pressures.
In future experiments, Fakhraai’s team hopes to study the parameters of this unique phase transition in more detail. The discoveries made could provide a deeper understanding of the behaviour of glasses as a whole. Subsequently, improvements to existing theories could serve as a predictive platform for developing new, more advanced glass-based materials.
Let’s start with the project. What were you doing up on the North Slope?
We wanted to try a technique called distributed acoustic sensing (DAS) under ice for the first time. The way DAS works is that you take a standard fibre-optic cable, like the ones used for telecommunications, and on one end of the cable you have a device called an interrogator that rapidly pulses laser light down the fibre. This creates a certain amount of backscattered light – there’s Rayleigh scattering, Brillouin scattering, Raman scattering and so on that all comes back to you. We’re interested in Rayleigh scattering, which comes about due to natural imperfections in the glass (changes in the density of the glass affect its refractive index) and doesn’t change the wavelength of the scattered light.
So you send a laser pulse down the fibre, and then you send out another pulse and compare what you get back. If the fibre has been strained or changed length in the meantime, there will be a phase shift between the two pulses. Then you can use your knowledge of the optical properties of glass to convert the phase shift into a strain rate – in nanometres per metre per second – that tells you what happened to the fibre between the two pulses. In essence, the fibre is like a 40 km-long (in our case) seismometer that sits inside a 4 m-deep trench (which protects the fibre from icebergs that might ground and catch on the cable) in the sea floor and measures vibrations in the Earth.
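The phase-to-strain conversion described here can be sketched in a few lines. All the constants below (laser wavelength, fibre refractive index, photoelastic scaling factor, gauge length) are nominal textbook-style assumptions, not values from this particular experiment.

```python
import math

# Minimal sketch of the DAS phase-to-strain conversion, under assumed
# nominal constants. A phase difference dphi between two successive
# pulse returns maps to strain over the gauge length G via
#   strain = wavelength * dphi / (4 * pi * n * xi * G)
# where n is the fibre refractive index and xi the photoelastic scaling
# factor accounting for strain-induced changes in refractive index.
WAVELENGTH = 1550e-9   # m, typical telecoms laser (assumed)
N_FIBRE    = 1.468     # refractive index of silica fibre (assumed)
XI         = 0.78      # photoelastic scaling factor (commonly assumed)
GAUGE      = 10.0      # m, interrogator gauge length (assumed)

def strain_rate(dphi_rad: float, dt_s: float) -> float:
    """Strain rate (strain per second) from the inter-pulse phase shift."""
    strain = WAVELENGTH * dphi_rad / (4 * math.pi * N_FIBRE * XI * GAUGE)
    return strain / dt_s

# A 1 mrad phase shift between pulses 1 ms apart, in nanostrain/s
# (i.e. nanometres of length change per metre of fibre, per second):
print(strain_rate(1e-3, 1e-3) * 1e9)
```

The units printed at the end are exactly the “nanometre per metre per second” figures quoted above, which is why DAS data are reported that way.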
The nice thing about this DAS technology is that it’s very, very difficult to get a traditional ocean-bottom seismometer or even hydrophones running in the Arctic environment. The transportation access is terrible, the weather can be dangerous – all sorts of things can go wrong. But now, instead of having a few ocean-bottom seismometers for a short period, we have 20,000 channels of seismic data.
How do you get that many channels?
Although the fibre-optic cable is continuous, the strain rate is measured at discrete intervals along it – hence the nanometre-per-metre-per-second units. For this experiment, the intervals were 2 m long, so over the 40 km length of the cable, that’s a total of 20,000 channels coming from this cable, all in an area that previously had virtually no seismic data at all. And if we want to do the same experiment next year to see if there’s been any change in the environment, this fibre is going to be in the same spot for decades – it was installed by a telecommunications company called Quintillion to bring the Internet to the North Slope of Alaska. We’re just piggybacking on it.
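The channel arithmetic above is simple, but a short sketch makes the mapping from channel index to position along the cable explicit, assuming the uniform 2 m spacing described.

```python
# Channel count for the Quintillion cable experiment described above:
# a 40 km fibre sampled at discrete 2 m intervals.
CABLE_LENGTH_M    = 40_000   # 40 km of fibre
CHANNEL_SPACING_M = 2        # strain rate measured every 2 m

n_channels = CABLE_LENGTH_M // CHANNEL_SPACING_M
print(n_channels)  # 20000 discrete sensing channels

def channel_position_km(channel: int) -> float:
    """Distance along the cable (km) of a given channel index."""
    return channel * CHANNEL_SPACING_M / 1000

print(channel_position_km(10_000))  # mid-cable: 20.0 km out
```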
What was it like working up there? I guess it was pretty dark in February.
Yes, the Sun wasn’t up for long during the day and it wasn’t very high above the horizon when it was, but I was surprised at how light it was. Everything’s covered in snow, so you get a lot of reflected light. The temperature, though, was –43 °C, with a wind chill of –60 °C, and we had a lot of blowing snow and ice fog when we drove from our accommodation in Deadhorse, Alaska to the cable landing station at Oliktok Point (the most northerly spot in North America that you can drive to, at around 70 degrees north latitude). But the great thing about DAS is that you can take most of the data from inside the landing station. You’re not huddled in a tent out on an ice sheet.
As for how the instruments coped with the conditions, the interrogator is certainly not indestructible. It has a narrow range of temperatures where it’s happy, and you can’t store it below 0 °C without damaging it. That was difficult because on the two-hour drive out to Oliktok, we had to have the instrument taking up a seat in the car – we couldn’t put it in the back of a pickup truck because by the time we got to the field site, it would have been ruined. The interrogator can overheat, too, which is a problem if you use it in the desert in Nevada.
Bundle up Plunging temperatures, freezing fog and COVID-19 restrictions all contributed to the challenges of collecting data in the Arctic. (Courtesy: Kyle Jones, Sandia National Laboratories)
Why Nevada?
I originally got interested in DAS because of a series of experiments I was involved in at the Nevada National Security Site, which is where the US did its continental nuclear weapons testing between the 1950s and the early 1990s. Nowadays, we’re doing chemical explosion tests for nuclear nonproliferation purposes, to help us discriminate earthquakes from explosions.
A few years ago, as an add-on experiment – not really expecting to get great data – I proposed using DAS to monitor these explosions. It worked beyond our expectations, and after that, it was like I had a new hammer and I was looking for nails sticking up: I wanted to find more ways to use this fantastic new tool. I had done some previous work in Alaska using traditional seismometers to listen to permafrost melting, so when the first examples of DAS on the sea floor came out late in 2019, I thought, “I wonder if there’s a telecoms network up in Alaska where I could try this under the sea ice?”
DAS worked beyond our expectations – it was like I had a new hammer and I was looking for nails sticking up
So I sent an e-mail to Quintillion – initially just a cold e-mail sent via the “Ask us about our Internet service” button on their website. But luckily, the person who was monitoring that address had studied geophysics at the University of Alaska, so when I said, “Hey, I’d like to do distributed acoustic sensing on your fibre”, I got an almost immediate response. I had no expectation I’d ever hear from them, but I did, and it was great.
Who else are you working with?
One of our partners on the Arctic project is Silixa LLC, the UK-based company that makes our DAS system. They do a lot of work with the oil and gas industry, which uses DAS for microseismic monitoring in wells and was an early adopter of the technology. But we take data that the oil industry doesn’t, so that strains the limits of DAS and helps Silixa develop their technology further. It’s already working well, though. The explosions I mentioned in Nevada involved 50,000 kg of explosives and the sensors were only 80 m away, whereas in the Arctic we’re listening to tiny ice quakes 40 km out in the ocean.
What are ice quakes?
Ice quakes can happen when ice that’s “fast”, or stuck to the shore, collides with free-floating ice on the sea. They occur within shore-fast ice and within sea ice, too, but they are more common at the interface because it’s an area of great friction – especially when the tide goes in and out, and the wind pushes on the ice as well. You can also get what are called basal-slip ice quakes, where bottom-fast ice grinds on the sea floor.
It’s interesting that we see ice quakes in our data because our sensors are not in the ice: the ice is at the surface, then there’s a layer of water, then the sea floor and then our cable. The physics of how these layers couple is complex. For example, ice is a solid, so it can carry both P (pressure) waves and S (shear) waves, but fluids like water cannot carry shear waves. That means you can tell the difference between ice quakes and earthquakes, because with earthquakes – which reach the cable through the solid sea floor – the system picks up shear waves that could not have travelled through the water.
What else do you hear when you listen under the Arctic ice?
One other thing we hear is what are known as flexural gravity waves. When the wind relentlessly pushes on the ice, it causes the ice to mound up, and then gravity takes the ice back down. That produces a long-period wave (at least, it’s long in seismic terms – it’s about 40 seconds) as the ice works and flexes. We see that in the data and it’s one of the things we’ll use to measure ice thickness. This is a difficult measurement to make, and a lot of the time it’s done from satellites with a resolution that is maybe not as good as we would like.
We hear ocean dynamics as well. The ice does not reach all the way down to the sea floor where the cable is, so we also see the effect of ocean currents under the ice. That is a very hard measurement to take, so data like this is gold for climate modellers – they can add it to their ocean circulation models.
What do you hope to learn in the longer term?
Our major goal is to gather more information about climate. We’d like to have a better idea of the intensity and distribution of Arctic storms, for example. We should be able to look up the ambient noise level of a storm and determine how severe it is. In the future, we’d like to combine our DAS work with measurements made by buoys, so that we can work out the transfer function between the strain rate of the fibre in the sea floor and the wave height of the buoys, either on ice or in the water. If we know that relationship and it’s robust, then we’ve essentially put 20,000 buoys in the ocean that aren’t going anywhere and we can come back to them year after year. The DAS fibre is going to be there forever in practical terms; at least, I’ll be retired before it’s gone.
That kind of data will help us understand ocean circulation and wave height. So if we find that wave height is increasing every year – and maybe it is and maybe it isn’t, but let’s say it is – then that will have big implications for coastal erosion, which is already a problem on the North Slope. The higher the wave height, the more energy is impinging on the shore, so if we know about it we can make better predictions of how long coastal communities like Utqiagvik can continue in their current locations before they have to move. Fundamentally, we’re listening to the rhythm of the climate.
A peerless edifice rises into a brilliant blue sky. Around a central, vaguely pyramidal tower, dozens of spires and turrets of assorted shapes and designs jut out between battlements and buttresses. Around the base runs a fortified wall, behind which a watchful dragon emerges from a body of water, while a lighthouse beams down from one side of the imposing structure.
This isn’t the design of a new Physics World head office [alas! – ed.], but that of a colossal sculpture that recently broke the Guinness World Record for the tallest sandcastle ever built. Crafted from 4860 tonnes of sand, spanning 32 m wide and rising to a height of 21.16 m, the castle (see photo above) was constructed by Dutch artist Wilfred Stijger and his 30-strong team of sculptors. It was built, with the aid of an elaborate wooden scaffold, in July 2021 in the Danish seaside village of Blokhus, in North Jutland. Thanks to a layer of glue applied to its surface after completion, the super sandcastle is expected to remain on display for visitors to enjoy until February or March next year, when the next heavy frost will set in.
However, the artist’s previous two attempts to break the tallest-sandcastle record failed after one edifice collapsed days before completion, and the other was foiled by a flight of protected shore swallows that had nested on the construction site. None of us are likely to build anything as ambitious when we’re holidaying on the beach, but is there anything that science can tell us about how to make the perfect sandcastle?
Arming his students with buckets and spades, Matthew Bennett, a geologist at Bournemouth University, sent them to the 10 most popular beaches in the UK at the time, with instructions to collect samples of sand from each. Once they had brought the material back to the lab, his team dried the sand, poured it into beakers, added water and turned each full container upside down. “We then piled weights on each ‘lab-castle’ top and noted the total weight [it could sustain] before collapse,” Bennett explains.
The key to a strong sandcastle, the team found, was to mix one bucket of water to every eight buckets of sand. That 8:1 volume ratio, which was the same for all 10 locations tested, is in fact roughly the same composition found on real beaches around the point where the water comes nearest to the shore at high tide.
According to Bennett, this perfect ratio ensures that the water helps only to bind the sand, rather than act as a lubricant. Too much water and your structure will flow and collapse, which is what happens when sandcastles meet their natural predator, the tide. Too little, on the other hand, and the sand crumbles.
In fact, the strength of a pile of sand depends on two factors. The first is the structure of individual grains. Those that are more angular and irregular will lock together better than grains that have become rounded by virtue of having been transported a long way – a process that abrades them through the action of wind and waves. It’s why sand containing lots of microscopic, angular fragments of broken seashells is good for building strong sandcastles, Bennett explains. The other, more important, factor is the amount of water held between them, with smaller grains holding more water.
Bennett’s study led him to name Torquay in the south-west of England as Britain’s best place for sandcastles, thanks to what he calls “its delightful red sand”. A close second is Bridlington in East Yorkshire, with Bournemouth, Great Yarmouth and Tenby all tying for third. “It was a simple but effective experiment,” Bennett recalls, explaining that he still uses the investigation as a fun way to engage people in geological concepts.
He admits, though, that any sand can, in principle, be used to make sandcastles – and that the selection of Torquay’s red sand as the “winner” of his 2004 study was in no small part down to its attractive aesthetic properties. It did help, however, that the sand in question originated more than 200 million years ago when Britain, which was then located in the interior of the supercontinent Pangaea, was part of a desert that outsized the Sahara. Torquay’s sand therefore has lots of fine grains, which Bennett says boosts its cohesive properties.
Bridges, not too far
For a physicist, a sandcastle is simply a structure made of a compacted granular material (sand) mixed with a liquid (water or seawater). But how does this water help sand grains to stick together? The answer lies in the surface tension of the films of water that form between the grains. Just as the surface of water in a test tube curves up at its edges due to adhesive forces between the glass and the liquid, so water forms tiny “capillary bridges” between sand grains. These bridges pull the grains towards each other, minimising the surface area between the water and air while increasing the surface area between water and the sand to which it is attracted.
Now, while an 8:1 ratio of sand to water might be the best for sculpting, it turns out that wet sand is still stable – i.e. acting as if it were a solid – over a wide range of water contents. There’s obviously something odd about the force that holds sand together, which is what inspired Stephan Herminghaus, a physicist at the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, to take a close look at the phenomenon.
Rather than studying sand itself, he and his team turned to a model system made of wet glass beads that were of similar size and shape to sand grains. Using X-ray microtomography – a technique that creates digital cross-sectional images of an object without damaging it – they were able to create 3D images of the beads and examine what happens as more water is added to the fake sand. The tiny capillary bridges, which initially link two separate grains, begin to get bigger and merge, forming increasingly complex structures that often look like a series of ring pulls from drinks cans stuck together (figure 1).
As the bridges grow, they come into greater surface contact with the grains, which increases the binding effect of the water thanks to its attraction to the sand grains. At the same time, however, the concave arch of the capillary bridge becomes less pronounced, reducing the negative pressure in the water. Since this negative pressure is what draws the grains together, reducing it weakens the pull between them.
The two effects balance out, meaning that the model sand retains the same stickiness as more water is added to it. However, the rule was found to break down once the water occupied around 15% of the sand pile, or 35% of the total available pore space between the grains. Beyond that limit, the integrity of the pile began to weaken.
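As a rough consistency check (our arithmetic, not a figure from the paper), those two percentages together imply the porosity of the bead pack: if water filling 35% of the pore space amounts to 15% of the total pile volume, the pores must make up roughly 43% of the pile.

```python
water_fraction_of_pile = 0.15    # water as a share of the total pile volume
water_fraction_of_pores = 0.35   # the same water as a share of the pore space
implied_porosity = water_fraction_of_pile / water_fraction_of_pores
print(f"implied porosity: {implied_porosity:.0%}")  # implied porosity: 43%
```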
You don’t need much water to create big, tall sandcastles: small capillary bridges act like a glue between grains of sand
“The remarkable insensitivity of the mechanical properties [of the pile of sand] to the liquid content is due to the particular organization of the liquid in the pile into open structures,” the researchers note in their 2008 paper (Nature Materials 7 189). In other words, we now know why you don’t need much water to create big, tall sandcastles: it’s all down to the small capillary bridges, which act like a glue between grains.
A thumping good time
But is there a theoretical limit on how high you could build a sandcastle? That was a question that Daniel Bonn, a physicist at the University of Amsterdam in the Netherlands, set out to explore with his colleagues in 2012. They did this by pouring different amounts of wet sand into plastic cylinders of various diameters, before cutting away the mould and seeing how high the columns could be made before they collapsed.
The team found that columns give way when they buckle elastically under their own weight. Given this, the researchers determined that the maximum possible height of a sand column increases in proportion to the base radius of the column to the power 2/3. Do the maths and you’ll see that to build a column of sand twice as high as your friend’s, you need to make its radius √8 ≈ 2.8 times as big. From measurements of the elastic modulus of the wet sand, meanwhile, they concluded that the optimum strength is achieved at a liquid volume fraction of around 1%.
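The “do the maths” step is easy to check: if the maximum height scales as the base radius to the power 2/3, then inverting gives the radius scaling as the height to the power 3/2, so doubling the height requires a factor of 2^(3/2) = √8 in radius.

```python
import math

def radius_factor(height_factor):
    """Factor by which a sand column's base radius must grow to scale
    its maximum stable height by height_factor, given h_max ∝ r**(2/3)."""
    return height_factor ** 1.5

print(round(radius_factor(2), 2))  # 2.83, i.e. √8
```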
Size matters By pouring wet sand into plastic cylinders, researchers at the University of Amsterdam in the Netherlands led by Daniel Bonn found that the maximum possible height of a sand column is proportional to the base radius of the column, to the power 2/3. (Courtesy: Mehdi Habibi)
That figure differs from the ratio Bennett found in his bucket-and-spade study, which is perhaps not surprising given that real sandcastles tend not to be cylindrical, as in Bonn’s study, but often more conical. After all, a cone shape maximizes the stability of a sandcastle structure, as a modelling study published by Wenqiang Zhang from Zhengzhou University in China revealed last year (IOP Conf. Series: Earth and Environmental Science 514 022071).
Asked about practical tips for budding castle sculptors, Bonn says that compaction is key to stability. That’s why professional sandcastle builders usually use machines called “thumpers” that mechanically compact sand before it’s used by repeatedly stomping down on the ground. Compacting sand helps to shorten its capillary bridges, making the sand stronger.
What’s also useful is polydisperse sand containing a wide range of grain sizes. While we tend to think of sand as being made just of quartz, for geologists the term refers to any particles of worn rock ranging in size from 62.5 μm up to 2 mm. Expert castle builders in fact often favour sculpting with “river sand”, which has even finer clay particles ranging in size from 0.98 to 3.9 μm. According to Bonn, river sand effectively puts small grains in the pockets between the large ones, thereby creating more capillary bridges and a stronger structure.
Clays, in other words, act as a “glue” between particles, even when there is little or no water. But if you haven’t got river sand, you can get a similar effect using seawater. As your sandcastle dries, salt crystals get deposited on the grains of sand, acting as a substitute glue. It’s one added advantage of building sandcastles at the seaside.
The sands of time
But even if there isn’t a nearby ocean to keep things wet, capillary bridges can form between sand grains as a result of vapour spontaneously condensing inside porous materials and between adjacent surfaces. Known as “capillary condensation”, this phenomenon can affect not only adhesion but also properties such as corrosion and friction in a wide variety of settings. In fact, the ancient Egyptians might even have benefited from making capillary bridges by pouring water onto sand in order to make it easier to pull heavy stonework across (see box).
Capillary condensation is usually described by an equation drawn up by the British physicist and mathematician William Thomson (later Lord Kelvin) in 1871. It links macroscopic properties such as pressure, curvature and surface tension, but the equation also holds at the microscopic scale. Indeed, it has proven to be surprisingly accurate even down to a scale of around 10 nm.
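For reference, the equation in question – in its usual modern form – relates the vapour pressure p at which condensation occurs over a curved liquid surface of radius r to the flat-surface saturation pressure p_sat, via the surface tension γ and molar volume V_m of the liquid (the sign convention below assumes a concave, wetting meniscus):

```latex
\ln\frac{p}{p_{\mathrm{sat}}} = -\frac{2\gamma V_{\mathrm{m}}}{r R T}
```

The smaller the radius r of the pore or gap, the further below 100% humidity condensation can occur – which is why water spontaneously fills nanometre-scale capillaries under everyday conditions.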
To investigate why this might be, a team led by the Nobel-prize-winning physicist Andre Geim at the University of Manchester recently fabricated the smallest capillaries possible. Some only one atom high, they were created from layers of atom-thin mica and graphite, separated by narrow strips of graphene that served as spacers. Geim and his team found that these tiny capillaries can accommodate only a single layer of water molecules within them (Nature 588 250).
Studying condensation in these capillaries, the team realized that the Kelvin equation remains an excellent description even at the molecular scale – even though at these dimensions, water changes its properties as its structure becomes more discrete and layered. “This came as a big surprise. I expected a complete breakdown of conventional physics,” says lead author Qian Yang. “But the old equation turned out to work well.”
According to the team, however, the quantitative agreement between the equation and reality is fortuitous. Capillary condensation under ambient humidities creates pressures of around 1000 bar – more than is found at the bottom of Earth’s deepest oceans. This pressure may hold the grains in a sandcastle together, but it also fractionally deforms the tiny capillaries in the researchers’ experiments, counteracting the altered properties of water at the molecular scale.
“Good theory often works beyond its applicability limits,” admits Geim. “Lord Kelvin was a remarkable scientist, making many discoveries but even he would surely be surprised to find that his theory – originally observed in millimetre-sized tubes – holds even at the one-atom scale. In fact, in his seminal paper Kelvin commented on exactly this impossibility. So our work has proved him both right and wrong, at the same time.”
Water like an Egyptian
(Courtesy: Sir John Gardner Wilkinson, 1854)
If building sandcastles doesn’t satisfy your construction itch, it turns out that sand and water can be used to help make far more elaborate structures. Writing in a paper published in 2014 (Phys. Rev. Lett. 112 175502), a team led by Daniel Bonn – a granular physicist from the University of Amsterdam – argued that the ancient Egyptians used water to harden desert sand. This tougher material allowed the Egyptians to move sledges bearing heavy stones about more easily when constructing pyramids and other colossal monuments.
The inspiration for this notion came from a roughly 3900-year-old mural that once adorned a wall in the tomb of Djehutihotep, one of the most influential nomarchs (or provincial governors) in Egypt’s Middle Kingdom, which ran from about 2050 to 1780 BCE. The decoration depicted a giant colossus of Djehutihotep – the height of four men – being pulled on a sledge across the desert by 172 workers (pictured above).
What’s interesting is that the figure standing at the front of the sledge in the mural is pouring water on the sand over which the giant statue is soon to be hauled, while two other workers replenish his supply. Egyptologists had long dismissed this curious action as a ritual, but Bonn and colleagues experimentally demonstrated that adding a certain amount of water to sand stiffens it by forming microscopic “capillary bridges” (see main text).
These bridges lower the friction coefficient of the sand, while also stopping sand from piling up in front of the sledge and preventing the sledge from sinking in. Specifically, the team found that the coefficient of dynamic friction halves when the water content of the sand reaches around 5%. Add any more, however, and the friction rises again, even surpassing the value for dry sand at a water content of 10%.
It gets everywhere
Examining the physics of sand and the capillary forces that hold it together is useful for more than just building the best sandcastle. For example, the imaging techniques developed by Herminghaus and his team to study glass beads can be applied to grain–liquid–air interfaces more broadly. That work could therefore yield practical applications away from the seaside – from stopping powders clumping up to improving our ability to anticipate and prevent landslides.
Nailing down the mechanical properties of wet sand can also inform construction efforts. After all, most roads, railways, houses and buildings are built on sandy soils, which need to be stable if such structures are to survive. Water can reinforce sand piles and make them more stable, but too much of it is a danger, as it reduces compaction.
As any civil engineer knows, building on uncompacted sand risks “quicksand”, a builder’s nightmare. Consisting of loose sand saturated with water, quicksand can initially appear solid. But because it is a non-Newtonian fluid, it liquefies when agitated – by a ground tremor, for example – forming a suspension and losing its viscosity, which allows objects to sink into it.
That’s a particular problem in the Netherlands, where Bonn is based, which has lots of quicksand on land reclaimed from the sea using dikes. Known as “polder”, this land cannot be built on immediately, forcing builders to have to wait several years for the sand to compact before starting construction. “If it is not compacted,” says Bonn, “you can sink away and get stuck in it.”
The perfect mix
So before you hit the beach, let’s recap. For a truly breathtaking sandcastle, select a location with a decent amount of finer-sized sand. Take wet sand from around the high-tide mark, which will give you the ideal 8:1 ratio of sand to water. Compact your material to increase stability. If you want a tall tower, aim for a wide base and a conical shape. Then simply unleash your creativity. You’ll have a masterpiece on your hands…until your structure is, inevitably, washed away by the incoming tide.
Stand-up paddle boarding is up near the top of my bucket list, so I was pleased to learn that researchers at the Fraunhofer Institute for Wood Research, Wilhelm-Klauditz-Institut, have created a board made from 100% renewable materials instead of the usual petroleum-based ones. What is more, the lightweight material that they have developed for paddle boards can be used in buildings, cars and ships.
Christoph Pöhler and colleagues created the material using balsa wood from old wind turbine blades. Yes, that’s right, those giant blades can contain up to six cubic metres of wood. When a blade reaches the end of its lifetime, the wood is often discarded and even burnt – which is a waste of a valuable resource.
Pöhler’s team has developed a way of separating the wood from the fibreglass that it is bound to. The wood is then pulverized to create a powder that is used to make a lightweight wood foam – a process that does not require the addition of an adhesive.
Flax reinforcement
The hard shell of the paddle board is made from a bio-polymer that is reinforced with flax fibres. The process has been patented and the researchers hope to have a demonstrator model available next year. One possible future use of the material is cladding for the thermal insulation of buildings.
Staying on the theme of sustainability, the folks at The Quantum Daily and Oxford Instruments have put together a nice video (see below) that looks at how quantum technologies could help in the fight against climate change. Examples include how quantum computers could someday rapidly and efficiently do complex calculations that could lead to the development of sustainable technologies – calculations that are well beyond the capabilities of even the most powerful supercomputers.
The video also looks at ways of reducing the energy consumption of quantum technologies themselves – which at the moment can be very large, especially for devices that must be cooled to near absolute zero.