A new and very efficient protocol for assessing the correctness of quantum computations has been created by Samuele Ferracin, Animesh Datta and colleagues at the UK’s University of Warwick. The team is now collaborating with experimental physicists to evaluate the protocol on nascent quantum processors.
Quantum computing has been advancing rapidly and physicists are now in the early stages of building quantum processors that can outperform supercomputers on certain computational tasks. However, quantum computations can easily be disrupted by environmental noise, which destroys quantum information in a process called decoherence. As a result, it is crucial to ensure that a quantum computer has done the required calculation and has not fallen prey to decoherence.
Datta explains, “A quantum computer is only useful if it does two things: first, that it solves a difficult problem; the second, which I think is less appreciated, is that it solves the hard problem correctly. If it solves it incorrectly, we have no way of finding out.”
Defeats the purpose
The usual way of checking is to run the same calculation on a conventional computer and compare the results. While this is possible for simple computations, even the most powerful supercomputers will not be able to check the work of future quantum computers. Indeed, using huge amounts of conventional computing power to do this defeats the purpose of developing quantum computers.
Ferracin and colleagues take an alternative approach that involves having a quantum computer run a series of simple calculations, whose solutions are already known. The protocol involves calculating two values. The first is how close the quantum computer has come to the correct result and the second is the level of confidence in this measurement of closeness.
These parameters allow the researchers to put a statistical bound on how far the quantum computer can be from the correct answer of a much more difficult problem.
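The statistical flavour of the approach can be shown with a toy sketch: interleave simple “trap” runs with known answers among runs of the hard computation, then use the observed trap failure rate plus a confidence margin to bound how wrong the hard computation’s output can be. This is only a schematic illustration of the general idea, not the Warwick team’s published protocol; the function name and the Hoeffding-style margin are assumptions made for the example.

```python
import math

def bound_output_error(trap_results, confidence=0.95):
    """Toy sketch: bound a quantum computer's error on a hard problem from the
    failure rate of simple "trap" circuits whose correct outcomes are known.
    `trap_results` is a list of booleans (True = trap gave the known answer).
    Hypothetical interface; not the published accreditation protocol."""
    n = len(trap_results)
    observed_failure_rate = trap_results.count(False) / n

    # Hoeffding-style one-sided margin: with probability `confidence`, the true
    # trap-failure probability lies below the observed rate plus this margin.
    margin = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2 * n))

    # Treat the bounded trap-failure probability as a (loose) proxy bound on how
    # far the output of the hard computation can be from the ideal result.
    return min(1.0, observed_failure_rate + margin)

# Example: 950 of 1000 trap runs return the known correct answer
traps = [True] * 950 + [False] * 50
print(f"error bound at 95% confidence: {bound_output_error(traps):.3f}")
```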
Having honed their protocol over the past several years, Ferracin and colleagues are now collaborating with experimentalists, enabling them to ascertain how well the scheme performs in real quantum computers.
Looking to the future, the team hopes that its protocol will allow quantum computers to do calculations that are inaccessible even to the most powerful conventional supercomputers. “We are interested in designing and identifying ways of using these quantum machines to solve hard problems in physics and chemistry, to design new chemicals and materials, and to identify materials with interesting or exotic properties,” says Datta.
Gallium-68 (68Ga) is a positron emitter that’s becoming established as a valuable diagnostic isotope, primarily for detection of neuroendocrine tumours (NETs). Such tumours do not metabolize glucose well – precluding their visualization via standard FDG-PET scans – but overexpress somatostatin receptors that bind, for example, to the PET agent 68Ga-Dotatate.
Currently, 68Ga is made using a germanium/gallium generator, in which 68Ga is created as its parent isotope 68Ge decays. Unfortunately, this approach only produces enough 68Ga for two or three patient scans per day. Canadian company ARTMS Products is developing an alternative technique, in which a low-energy cyclotron is used to create 68Ga from solid zinc-68 targets. The company has now demonstrated record-breaking, multi-curie levels of 68Ga production.
“The primary challenge with germanium/gallium generators is that they can’t seem to make the generators fast enough, and when they do, the output is relatively limited,” explains Paul Schaffer, founder and CTO of ARTMS. “The generators make about one patient dose for each elution. So even if they run two or three per day, that’s only two or three patient doses per day, which is an expensive proposition for many centres.”
To address this problem, ARTMS developed the QUANTM Irradiation System (QIS), a 68Ga production scheme that includes enriched 68Zn targets, a transportation device that attaches onto the port of an existing medical cyclotron and a send-and-receive station that terminates inside a shielded workspace. Schaffer notes that the system is manufacturer-agnostic and can be installed on any major cyclotron brand.
The hardware enables a technician to load a non-radioactive 68Zn target into the transportation system. Automated pneumatics and robotics then move the target to the cyclotron’s target port, where it is irradiated for two hours by a proton beam. This proton irradiation generates 68Ga within the target via the 68Zn(p,n)68Ga nuclear reaction. The irradiated target is then brought back to the shielded space, where the 68Ga can be extracted and purified for use in radiopharmaceuticals.
“What makes the ARTMS system unique is we’ve demonstrated that it can produce 68Ga at levels of 10 Ci – or 370 GBq,” says Schaffer. “This is 100 to 200 times more activity in a two-hour irradiation than a germanium/gallium generator can put out. This puts the problem not on the amount of gallium that you have but, with its 68 min half-life, the rush to use it all.”
He suggests that a hospital should be able to produce about a day’s worth of 68Ga and scan several patients following a single cyclotron run. “And at the end of the day, you don’t have any 68Ge or long-lived by-products to deal with,” he notes.
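A back-of-envelope check, using only the figures quoted above (370 GBq at the end of a two-hour irradiation, a 68-minute half-life), shows how quickly that activity disappears and why one cyclotron run maps onto roughly a working day of scanning. The script below is just this decay arithmetic, not ARTMS software.

```python
HALF_LIFE_MIN = 68.0   # 68Ga half-life quoted in the article
A0_GBQ = 370.0         # activity at the end of a two-hour irradiation (10 Ci)

def activity_gbq(minutes_after_irradiation, a0=A0_GBQ, t_half=HALF_LIFE_MIN):
    """Remaining 68Ga activity (GBq) a given time after the end of irradiation."""
    return a0 * 0.5 ** (minutes_after_irradiation / t_half)

for hours in (0, 2, 4, 6, 8):
    print(f"{hours} h after irradiation: {activity_gbq(hours * 60):7.1f} GBq")
```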
The record-breaking 68Ga production was achieved at Odense University Hospital. The team there demonstrated multi-curie production of two radiopharmaceuticals: 68Ga-Dotatate (known as NETSPOT) and a prostate-specific membrane antigen (PSMA) radiopharmaceutical for imaging prostate cancer. The Odense team is now embarking on a validation study in preparation for regulatory submission.
“We have demonstrated the production level, we’ve demonstrated the chemistry and we’ve demonstrated that 68Ga quality is consistent with regulatory standards, but we have not yet received regulatory approval,” says Schaffer.
He explains that other centres worldwide are working with ARTMS to achieve regulatory approval for the QIS in their respective countries. “We have a system in Zurich and one in Wisconsin, and also have systems being installed in Japan, the UK, Costa Rica and Toronto,” he says. “I think it’s in everyone’s interest to get this approved as quick as possible.”
Schaffer notes that the new production technology is intended to supplement, rather than replace, existing 68Ga generation methods. “There are going to be areas in the world that will continue to rely on reactor-produced isotopes. There will be areas where Ge-68 and Mo-99 generators just make sense,” he tells Physics World. “But there are many areas of the world that rely on cyclotron technology, and that’s where ARTMS wants to fit in, that’s our goal.”
Scientists finalizing plans for the world’s largest radio telescope are in a race against time to try and plug at least some of a €250m hole in the project’s finances. They are hoping to persuade new and existing member states of the Square Kilometre Array (SKA) to stump up more than €100m of additional funds within the next year to avoid reducing the instrument’s core scientific research, which one leading expert says would raise doubts about whether the project is worth building at all.
When built, the SKA should consist of several hundred mid-frequency radio dishes spread out across southern Africa alongside a few hundred thousand low-frequency dipole antennas located in Australia. This would allow astronomers to peer back to the first stars in the universe and study gravitational waves, among other things. However, a series of price rises means that the observatory’s design, agreed in 2015 and itself a much slimmed-down version of the original blueprint, now has a cost well above a cap of €691m that was imposed by member states in 2013.
At a meeting in Nice last week, Andrea Ferrara, an astrophysicist at the Scuola Normale Superiore in Pisa, and chair of the SKA’s science and engineering advisory committee, told representatives of the project’s 10 current member states that at least €800m will be needed to ensure the observatory’s core research remains intact. Speaking to Physics World, he pointed out that the current estimated total cost of €940m means that even increasing the funding to €800m would still lead to cuts. Things that might potentially be pared back, he says, include computing power as well as the low-frequency antennas, which would reduce the observatory’s resolution.
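The figures in the story fit together as simple arithmetic; the quick sanity check below uses only the numbers quoted above.

```python
cost_cap = 691          # €m, cap imposed by member states in 2013
current_estimate = 940  # €m, current estimated total cost of the agreed design
science_floor = 800     # €m, minimum Ferrara says core science requires

print(f"total shortfall:    €{current_estimate - cost_cap}m")     # the ~€250m hole
print(f"extra funds sought: €{science_floor - cost_cap}m")        # "more than €100m"
print(f"cuts still needed:  €{current_estimate - science_floor}m even at €800m")
```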
Tough questions
SKA director-general Philip Diamond says he is “working very, very hard” to reduce the funding gap. One avenue is trying to increase contributions from the countries that agreed earlier this year to set up an intergovernmental organisation to build the SKA – Australia, China, Italy, the Netherlands, Portugal, South Africa and the UK – and possibly also Canada, India and Sweden. He also hopes that new countries, such as Germany, France, Switzerland, Japan, South Korea and Spain, could be persuaded to join the project.
Diamond foresees the intergovernmental body, known as the SKA Observatory, existing as a legal entity by mid-2020, with governments then committing funding by November or December that year. He says that discussions with potential new member states “gives me confidence that we are considerably above” the current cost cap, although he acknowledges that none of these countries “are guaranteed yet”, adding that the existing countries “are all showing flexibility in considering looking for additional money”. Diamond admits, however, that they are investigating cuts as well as possibly delaying construction if the necessary funding is not available in a year’s time. Construction is currently envisaged to be complete by 2028, he adds.
According to Ferrara, any less than €800m for the SKA would make it “difficult to cover what would be considered transformative science right now”. As to whether it would be worth building the observatory at all if no new funding materializes, he replies that “that is a very tough question”. It would, he says, “mean building an instrument that will not deliver what it was supposed to do”.
Timeline: The Square Kilometre Array
2006 Southern Africa and Australia are shortlisted to host the Square Kilometre Array (SKA) beating off competition from Brazil and China. Due to be completed in 2020 and cost €1.5bn, the facility would comprise about 4000 dishes, each 10 m wide, spread over an area 3000 km across
2012 The SKA Organisation fails to pick a single site for the telescope and decides to split the project between Southern Africa and Australia. Philip Diamond is appointed SKA’s first permanent director-general replacing the Dutch astronomer Michiel van Haarlem, who had been interim SKA boss
2013 Germany becomes the 10th member of SKA, joining Australia, Canada, China, Italy, the Netherlands, New Zealand, South Africa, Sweden and the UK. SKA’s temporary headquarters at Jodrell Bank in the UK opens. SKA members propose a slimmed-down version of SKA known as SKA1. With a cost cap of €674m, it would consist of 250 dishes in Africa and about 250 000 antennas in Australia
2014 Germany announces it will pull out of SKA the following year
2015 Jodrell Bank beats off a bid by Padua in Italy to host SKA’s headquarters. India joins SKA
2017 Members scale back SKA again following a price hike of €150m, which involves reducing the number of African dishes to 130 and spreading them out over 120 km
2018 The first prototype dish for SKA is unveiled in China. Spain joins SKA
2019 Convention signed in Rome to create an intergovernmental body known as the SKA Observatory. The Max Planck Society in Germany joins SKA. New Zealand announces it will pull out of SKA in 2020
The distribution of snow on flat Arctic sea ice follows two distinct statistical patterns, suggesting that wind shapes the snowscape via two separate processes. A collaboration from Sweden, Canada, the UK and the US came to this conclusion after analysing correlations between snow thickness measurements over different length scales. The researchers propose that statistical patterns over distances of around 10 m represent dune formation, while interactions between snow dunes account for patterns at the 30–100 m scale. The results could help improve climate models, which do not fully account for the thickness and variability of snow cover on sea ice.
In the Arctic, fresh sea ice starts to form from around October. Before external stresses generate topographical features like cracks and pressure ridges, the ice is remarkably flat, and any unevenness in the snow that accumulates can only be due to the action of the wind. For this reason, first-year sea ice offers an ideal natural laboratory in which to test models of wind-blown snow distribution.
Woosok Moon, of Stockholm University and the Nordic Institute of Theoretical Physics, and colleagues found two sites where such conditions arose repeatedly over several years. At Dease Strait, near Cambridge Bay in northern Canada, they performed snow thickness measurements on first-year ice in the spring of 2014, 2016 and 2017. At Elson Lagoon on Alaska’s north coast, similar measurements were made in the spring of 2003 and 2006. The wind speed and direction were known at each site for the months preceding the measurements, so the researchers had a good record of the factors that determined how the snow accumulated there.
The team used nonlinear time-series analysis to characterize the statistical properties of the snow’s thickness over different length scales. As the distance between the measurements increased, they noticed a change in the statistical pattern that described the thickness fluctuations.
“We start by measuring the snow thickness in an area every 10 m, and then we do the same thing in another area. We find that the statistical characteristics of the two sets of measurements are very similar,” explains Moon. “But when we measure the thickness every 60 m instead, we see that this sample is different statistically from the 10-m sets.”
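The kind of comparison Moon describes can be sketched as follows. Synthetic data and generic summary statistics stand in for the team’s actual measurements and their nonlinear time-series analysis; the idea is simply that increments sampled at 10 m and 60 m spacing can have different statistical character.

```python
import numpy as np

def increment_stats(thickness, spacing_m, resolution_m=1.0):
    """Summary statistics of snow-thickness increments sampled every
    `spacing_m` metres along a transect recorded at `resolution_m` resolution.
    Generic illustration only, not the team's analysis code."""
    step = int(spacing_m / resolution_m)
    increments = np.diff(thickness[::step])
    kurtosis = ((increments - increments.mean()) ** 4).mean() / increments.var() ** 2
    return {"spacing_m": spacing_m,
            "std_m": float(increments.std()),
            "kurtosis": float(kurtosis)}

# Synthetic 5 km transect at 1 m resolution, standing in for a real measurement
rng = np.random.default_rng(0)
thickness = 0.3 + np.cumsum(rng.normal(0.0, 0.005, 5000))   # metres

for spacing in (10, 60):
    print(increment_stats(thickness, spacing))
```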
Usually, seeing such a change in the statistics of a system indicates that different phenomena govern the behaviour at each scale. Snow particles are transported by the wind in much the same way as sand grains are, and since the fluctuations in snow thickness at the 10-m length scale are similar to those seen in sand dunes, the researchers think that the same formation mechanism operates in both cases.
The cause of the statistics exhibited at longer measurement scales is more mysterious, but Moon and colleagues say that its noise characteristics mirror the self-organized criticality seen elsewhere in nature, such as in the distribution of different sizes of earthquakes along a geological fault line. This is the phenomenon whereby complex statistical patterns emerge from large numbers of simple interactions, which in this case are represented by the calving and merging of drifting snow dunes.
The complexities of snow accumulation in the Arctic are more than a statistical curiosity. Depending on the season, the snow layer can either promote or prevent the melting of sea ice. In the winter, thick snow acts as a blanket, trapping the ocean’s heat below and preventing the growth of new sea ice. In the spring and summer, the snow’s bright white surface reflects sunlight, keeping the ice frozen for longer – at least until melt ponds start to form, at which point the snow’s albedo drops and thawing accelerates.
“For proper quantification of the above physics, the most important properties of the snow are its thickness and its temporal and spatial evolution,” says Moon. “Now we understand the role of the wind on these properties, we can move to more complicated topographies and use our knowledge to better understand the influence of rugged surfaces.”
One of the main computing tasks at CERN is to sort through huge numbers of particle collisions and identify the ones that are interesting. What impact has this “big data” had on high-energy physics?
I’d like to invert the question, because rather than talking about the impact of big data on high-energy physics, I think it’s more interesting to talk about the impact of high-energy physics on big data.
High-energy physicists started working with very large datasets in the 1990s. Out of all the scientific disciplines, our datasets were among the largest, and we had to develop our own solutions for handling them – firstly because there was nothing else, and secondly because CERN operates within a specific social, economic and political framework that encourages us to spread work around different member countries. This is only natural: we’re getting a lot of money for computing from national funding agencies, and they naturally privilege local investments.
So, high-energy physicists were doing big data before big data was a thing. But somehow, we failed to capitalize on it, because we didn’t communicate this well at the time. A similar thing happened with open-source software. We have a philosophy of openness and sharing at CERN. We could have invented open-source and popularized it. But we didn’t. Instead, when open-source started to become widespread, we said, “Oh, yeah, that’s interesting. This is what we have been doing for 20 years. Nice.”
The problem is that we have very little room to capitalize on our ideas beyond fundamental physics. Our people are working day and night on experimental physics. Everything else is just something we do because we want to do physics. This means that once we’ve done something, we don’t have time to develop it further. Sure, Tim Berners-Lee came up with the concept of the World Wide Web when he was at CERN, but the Web was taken up and developed by the rest of the world, and the same thing happened with big data. Compared to the amount of data Google and Facebook now have, we have very little indeed.
How does CERN openlab help to tackle the computing challenges faced at CERN?
We identify computing challenges that are of common interest and then set up joint research-and-development projects with leading ICT companies to tackle these.
In the past, CERN has taken a sound engineering-style approach to computing. We bought the computing we needed at the most convenient conditions, and off we went. Nowadays, though, computing is evolving so quickly that we need to know what is coming down the pipeline. Evaluating new technologies after they are on the market is not good enough, so it is important to work closely with leading companies to understand how technologies are evolving and to help shape this process.
Another aspect of CERN openlab’s work involves working with other research communities to share technologies and techniques that may be of mutual benefit. For example, we are working with Unosat, the UN technology platform that deals with satellite imagery and which is hosted at CERN, to help estimate the population of refugee camps. This is a big challenge, and part of the problem is that it’s difficult – sometimes even dangerous – to count the actual number of people living in these camps. So, we are developing machine-learning algorithms that will count the number of tents in satellite photos of the camps, which refugee agencies can then use to estimate the number of people.
Future proofing: Carminati expects machine learning to play an increasingly important role in data analysis at CERN. (Courtesy: CERN/Maximilien Brice and Julien Marius Ordan)
What are some of the technologies you see becoming more important in the future?
We’re already using machine learning and artificial intelligence (AI) across the board, for data classification, data analysis and simulations. With AI, you can make very subtle classifications of your data, which is of course a large part of what we do to find new particles and elaborate on new physics.
The high-energy physics community actually started looking at machine learning in the 1990s, but every time we started to do something, we saw the tremendous possibilities, and then we had to stop for lack of computing power. Computers were just not fast enough to do what we wanted to do. Now that computers are faster, we can explore deep learning using deep networks. But these networks are very slow to train, and this is something we are working on now.
What about quantum computers? What impact will they have on high-energy physics?
This is very hard to say, because it’s crystal-ball thinking. But I can tell you that it’s important to explore quantum computing, because in 10 years we will have a massive shortage of computing power.
High-energy physics is in a very funny situation. We have two theories: general relativity and quantum mechanics. General relativity explains the behaviour of stars and planets and the evolution of the universe, and it is the epitome of elegance. It’s a beautiful theory, and it works: you use it every day when you use GPS on your smartphone. Quantum mechanics, in contrast, is a very complex theory, and although it’s pretty successful – the fact that we found the Higgs boson is a testament to its success – there are a lot of unsolved questions that it doesn’t answer. The other problem is that quantum mechanics does not work with general relativity. When we try to unify them, the outcomes are really weird-looking, and the few predictions that we make are not borne out by reality. So we have to find something else.
But how do you find that “something else”? Usually, you find something that is not explained by present theories of physics, and that unexplained thing gives theorists hints about how to proceed. So we have to find something that contradicts our current view of the Standard Model. However, we know that this cannot be a big thing, because every observation we make is more or less confirmed by the model. Instead, we are looking for something subtle. The old game was to find a needle in the haystack of our data. The new game is to look in a stack of needles for a needle that is slightly different. This new game will involve an incredible amount of data, and incredible precision in processing it, so we are increasing the amount of data we take and increasing the quality of our detectors. But we will also need much more computing power to analyse these data, and we cannot expect our computing budget to increase by a factor of 100.
That means we have to find new sources of very fast computing. We don’t know what they will be, but quantum computing may be one of them. We are looking into it as a candidate to provide us with computing power in the future, and also because there is one really exciting thing that you can do with quantum computing that you cannot do nearly as well with normal computing, and that is to directly simulate a quantum system.
What is the particle-physics community doing to meet its more immediate computing challenges?
We are moving forward along several axes. One of them is to exploit current technology as well as possible. It used to be that when computers got faster, it was because they had a faster clock rate. That kind of increase in power was readily useable, because you could just park your old program on a new machine and it would run faster. More recently, though, computing power has gone up because of increases in the number of transistors on a chip, meaning that you can make more operations in parallel. But to take advantage of this power you need to rewrite your programs to exploit this parallelism, and that is not easy.
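The shift Carminati describes – from faster clocks to more cores – can be illustrated with a generic toy example; this is not CERN software, and the per-item workload here is just a stand-in for an expensive per-event computation.

```python
from concurrent.futures import ProcessPoolExecutor
import math

def process_chunk(chunk):
    """Stand-in for an expensive per-event computation (purely illustrative)."""
    return sum(math.sqrt(x) for x in chunk)

def run_serial(data):
    return process_chunk(data)

def run_parallel(data, workers=4):
    # Split the work into independent chunks so several cores run at once;
    # the program has to be restructured to expose this parallelism.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    events = list(range(1, 2_000_001))
    assert math.isclose(run_serial(events), run_parallel(events), rel_tol=1e-6)
```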
We are also exploring different computing architectures, such as graphics processing units (GPUs), to see how well they can fit into our computing environment. And of course, there is the work we are doing on novel technologies such as machine learning, which could really improve the speed of certain operations.
The biggest prize, though, would be quantum computing. Quantum computers would be useful across our entire workload. But we will have to develop new ways of thinking to exploit them, and this is why it is so important that CERN openlab is working with different companies in this area. Can we imagine software that is independent of the type of computing we are using? Will we have to write a different program for each type of quantum computer? Or will we develop algorithms that can be ported onto different quantum computers? For the moment, nobody knows. An extension of C++ or Python that could be ported to different quantum computers is, for the moment, science fiction. But we have to think creatively in this direction if we want to assess the capabilities and opportunities.
CERN is expected to announce a delay to a major upgrade of the lab’s Large Hadron Collider (LHC) at a meeting at CERN tomorrow. Work began on the SwFr1.5bn (£1.1bn) High Luminosity Large Hadron Collider (HL-LHC) last year, with the revamped machine originally set to switch on in 2026. Physics World understands that a one-year delay is expected to be agreed so that the lab can plug a gap of around £100m that was expected to be contributed to the HL-LHC by non-member countries. The upgraded facility may not now start until 2028.
The HL-LHC upgrade is designed to increase the collider’s luminosity by a factor of 10 over the original machine. This requires a significant modification to the beam line around the two largest LHC detectors – ATLAS and CMS. The work will involve upgrading about 1.2 km of the 27 km ring by installing 11–12 T superconducting magnets and superconducting “crab” cavities – which reduce the angle at which the bunches cross – to increase the number of collisions at the two detectors. The upgrade also involves modifications to the detectors so that they can handle the increased luminosity.
Down for longer
Work on the HL-LHC began in 2018 during “long shutdown 2”, which will last until 2021 and will see the completion of most of the civil construction for the new machine. The LHC will then run at 14 TeV for three years before being switched off again for the components of the HL-LHC to be installed during “long shutdown 3”. This was due to begin in 2024 and be complete in mid-2026, after which the HL-LHC would have a month of commissioning before physics begins at the end of that year.
However, Physics World understands that CERN will now have to contribute around £100m more towards the upgrade – money that was expected to come from non-member countries. This move could lead to a delay to the start of long shutdown 3, which is now expected to begin in 2025 and potentially last for three years – rather than 30 months as planned. In this case, long shutdown 3 would finish at the end of 2027, with physics on the HL-LHC not beginning until early 2028. This potential schedule change was also included in slides by the Columbia University particle physicist Gustaaf Brooijmans at a US high-energy-physics advisory panel meeting in Bethesda, Maryland, last week.
A decision to delay the HL-LHC is expected to be announced following a meeting at CERN tomorrow.
Update 27/11/19: CERN says that no decision has been made on whether to delay the HL-LHC and adds that if any delay is announced, it would not be for funding reasons. They also deny that there is a funding gap of £100m. Lucio Rossi, head of the HL-LHC upgrade, told Physics World that an international independent review that recently examined the cost and schedule of the HL-LHC found that the dates “were solid” with no need to delay. He adds, however, that it may be necessary to collect more physics data on the LHC as well as extend the detector upgrades. In this case, it may be advisable to make LHC run 3 one year longer, which would shift the start of long shutdown 3 for the HL-LHC to 2025. “There is no delay,” adds Rossi. “Rather an optimization of the physics.”
Researchers have created a new terahertz radiation emitter with highly sought-after frequency tunability. The compact source could enable the development of futuristic communications, security, biomedical and astronomical imaging systems.
High bandwidth, high resolution, long-range sensing and the ability to visualize objects through materials make terahertz electromagnetic frequencies much coveted. However, the costliness, bulk, inefficiency and limited tunability of traditional terahertz emitters have stymied these promising avenues. The new combined laser terahertz source, a product of a collaboration between researchers at Harvard, the US Army, MIT and Duke University, paves the way for future technologies, from T-ray imaging in airports and space observatories to ultrahigh-capacity wireless connections.
“Existing sources have limited tunability, not more than 15-20% of the main frequency, so it’s fair to say that terahertz is underutilized,” explains co-senior author Federico Capasso from Harvard University. “Our laser opens up this spectral region, and in my opinion, will have revolutionary impact.”
The team has now described the theoretical proof and demonstration of this widely tunable and compact terahertz laser system (Science 10.1126/science.aay8683).
Perfect partnership
Capasso is no stranger to laser technology. He invented a compact tunable semiconductor laser, the quantum cascade laser (QCL), which is used commercially for chemical sensing and trace gas analysis. The QCL emits mid-infrared light, the spectral region where most gases have their characteristic absorption fingerprints, to detect low concentrations of molecules.
But it wasn’t until a conference in 2017, when Capasso met Henry Everitt, senior technologist with the US Army and adjunct professor at Duke University, that the idea formed of applying the widely tunable QCL to a laser with terahertz output.
Everitt, alongside Steven Johnson’s group at MIT, theoretically calculated that terahertz waves could be emitted with high efficiency from gas molecules held within cavities much smaller than those currently used on the optically pumped far-infrared (OPFIR) laser – one of the earliest sources of terahertz radiation. Like all traditional terahertz sources, the OPFIR was inefficient with limited tunability. But, guided by the theoretical calculations, Capasso’s team were able to use the QCL to dramatically increase the terahertz tuning range of a nitrous oxide (laughing gas) OPFIR laser.
“The same laser is now widely tunable – it’s a fantastic marriage between two existing lasers,” says Capasso.
Universal use
In initial experiments with the shoebox-sized QCL-pumped molecular laser – QPML – the researchers demonstrated that the terahertz output could be tuned to produce 29 direct lasing transitions between 0.251 and 0.955 THz.
Artistic view of the QCL pumped terahertz laser showing the QCL beam (red) and the terahertz beam (blue) along with rotating molecules inside the cavity. The figure shows emission spectra of each gas. The QCL is continuously tunable and the terahertz laser emits at discrete frequencies and different ranges depending upon the gas used in the laser. (Courtesy: Arman Amirzhan, Harvard SEAS)
It was Johnson and Everitt’s theoretical models that highlighted nitrous oxide as a strongly polar gas with predicted terahertz emission in the QPML. Similarly, a whole menu of other gas molecules has been predicted to generate terahertz radiation at different frequencies and over different tuning ranges. Using this menu, it should be possible to select a gas laser appropriate for almost any application.
“This is a universal concept, because it can be applied to other gases,” says Capasso. “We haven’t quite reached one terahertz, so next thing is to try a carbon monoxide laser and go up to a few terahertz, which is very exciting for applications!”
Both Capasso and Everitt are particularly keen to use their laser to look skywards and sensitively identify unknown spectral features in the terahertz region. The team is developing higher power terahertz QPMLs for astronomical observations, while also eagerly working towards other commercial applications.
Regular readers of the magazine will be familiar with “Lateral Thoughts” – Physics World’s long-running column of humorous or otherwise offbeat essays, puzzles, crosswords, quizzes and comics, all written by our readers – which appears on the back page each month. For this month’s special issue on “physics at the movies”, siblings Eugenia Viti and Ivan Viti have crafted a comic that tackles the tricky subject of physics on the big screen – can you name the two film bloopers depicted and the one movie that got the science right?
Eugenia is a cartoonist, illustrator and writer living in Chicago, US. Follow her on Instagram @eugeniaviti. Ivan has a PhD in physics and is working as an advanced development engineer at Hydro-Gear. He lives with his wife, three cats, two dogs and one goldfish
Scientists at Princeton University in the US have discovered that a material known as a Weaire-Phelan foam can act as an optical filter. As well as adding to our understanding of such foams, which have been studied for more than 130 years, the discovery might also spur the development of novel optical telecommunications devices.
Foams have a multitude of practical applications and are common ingredients in products ranging from chemical filters to heat exchangers. In mathematical terms, they are known for forming structures that minimize the surface area of the geometrical shapes, or cells, that make them up. It was this property that attracted the attention of the 19th-century Scottish physicist Lord Kelvin. In 1887, Kelvin proposed that the “luminiferous aether” thought to permeate all of space might have a foam-like structure. He then attempted to find the most efficient way of filling a 3D space by sub-dividing it into interlocking cells of equal volume and minimizing the surface area between cells. The resulting bubble-like structure came to be known as a Kelvin foam.
Extra efficient space-fillers
More than a century later, the Irish physicist Denis Weaire and his student Robert Phelan improved on Kelvin’s conjecture by putting forward an alternative arrangement that requires even less surface area. Although a Weaire-Phelan (WP) foam looks superficially similar to the disordered froth of soap bubbles or the head on a glass of beer, it is in fact a precisely structured arrangement containing two types of cells (as opposed to just one in Kelvin’s original proposal) of equal volumes. Twenty-five years after its discovery, a WP foam remains the most efficient space-filling ordered bubble foam known to exist.
3D photonic networks of foam edges
Much research has been done on WP foams and many of their physical properties are well-understood. The Princeton team of Michael Klatt, Paul Steinhardt and Salvatore Torquato, however, took a different approach, studying the optical properties of 3D photonic networks made from the edges of a WP foam, a Kelvin foam and another type of dry crystalline foam known as C15. Foams such as these contain very little liquid, and their edge structures are characterized by a set of relationships known as Plateau’s laws. These laws dictate that the borders of individual cells within the foam meet in sets of four, with tetrahedral bond angles equal to around 109° at each vertex. In principle, it would be possible to turn such edge structures into photonic networks by solidifying the foam and coating it with a dielectric material.
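For reference, the roughly 109° angle quoted above is the tetrahedral angle arccos(−1/3), which a one-line check confirms:

```python
import math

# Tetrahedral bond angle at each foam vertex: arccos(-1/3) ≈ 109.47 degrees
print(math.degrees(math.acos(-1/3)))
```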
A photonic band gap
The team simulated Maxwell’s electromagnetic wave equations for these structures to determine how they behave when light passes through them. The calculations, executed by Klatt on the supercomputing facilities of the Princeton Institute for Computational Science and Engineering, were computationally intensive and relied on analysing the foams with a software tool called Surface Evolver, which optimizes shapes according to their surface properties. The results showed that all three foams have refractive indices that vary on the length scale of electromagnetic waves such as visible light, giving rise to photonic band gaps. The presence of such gaps affects how light or other waves propagate through the material, allowing some wavelengths to pass through while completely reflecting others.
Band gaps are typically measured in percentages that indicate the size of the frequency gap relative to the gap’s central frequency. According to the researchers’ calculations, the Kelvin foam has a band gap of 7.7%, while that of the C15 foam is 13%. The WP foam has the largest photonic band gap of the three, at 16.9%. These figures are comparable to or greater than the band gaps found in self-organizing photonic crystals such as synthetic opals. The band gaps of all three foams are also highly isotropic, meaning they do not have strongly directional properties. This could be useful for designing photonic waveguides and other optical circuits.
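As a quick illustration of how those percentages are defined, the snippet below computes the gap-to-midgap ratio; the band-edge frequencies are hypothetical values in arbitrary units, chosen only to reproduce a 16.9% gap.

```python
def gap_to_midgap_percent(f_low, f_high):
    """Band-gap width as a percentage of the gap's central frequency."""
    return 100.0 * (f_high - f_low) / ((f_high + f_low) / 2.0)

# Hypothetical band edges (arbitrary frequency units) giving a 16.9% gap
print(f"{gap_to_midgap_percent(0.92, 1.09):.1f}%")
```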
The rise of “phoamtonics”
The researchers say that their calculations open up a host of possibilities for future work on WP foams and similar materials, in a field they dub “phoamtonics” (from “foam” plus “photonics”). One possibility would be to use these foams to transport and manipulate light, for example in telecommunications applications. At present, much of the data travelling across the Internet is carried by glass optical fibres, but when it reaches its destination, the photonic signal is converted into an electrical one, with an associated loss of speed and precision. Torquato suggests that photonic bandgap materials could guide the light much more precisely than conventional fibre optic cables, and might even serve as optical transistors that perform computations using light.
The finding also expands the range of 3D heterostructures available for photonic applications beyond photonic crystals, quasicrystals and amorphous networks, Torquato adds. “While the WP foam does have a smaller band gap than other well-known materials like ordered diamond networks (31.6%), it might offer some advantages thanks to its multifunctional properties,” he tells Physics World.
Flexible glass that does not shatter on impact could soon be made using insights from a study of a glass-like material made from aluminium oxide. Erkka Frankberg at Tampere University in Finland and colleagues have come to this conclusion after studying the molecular mechanisms that prevent cracks from forming in the material.
Glass has lots of very useful properties including optical transparency, durability and low electrical conductivity. However, the inherent brittleness of the material has prevented it from finding a wider range of applications.
Glass is brittle because it has no way of dissipating mechanical energy effectively when it is deformed by external forces. Instead, the energy accumulates around microscopic defects. This leads to localized concentrations of stress and eventually to the propagation of sharp cracks and shattering.
Blunter cracks
This weakness is particularly pronounced in traditional glasses made of silicates (silicon oxides), which form rigid tetrahedral structures that encourage the propagation of sharp cracks. In principle, brittleness in glass could be overcome by blunting the tips of cracks as they propagate – something that occurs in ductile materials. If achieved, this would give glass far higher mechanical strength, and make it less likely to fail because of defects.
Frankberg’s team created a new type of glass from aluminium oxide (alumina) using a technique called pulsed laser deposition. This was a significant challenge because the material normally occurs in a crystalline form, rather than in an amorphous glassy state. In contrast to silicates, amorphous alumina can deform irreversibly at room temperature. Through a combination of transmission electron microscopy and molecular dynamics simulations, the team explored the mechanisms by which this deformation occurs.
The team discovered that molecular bonds in amorphous alumina are up to 25 times more likely to break and reform when distorted than bonds in silicate glass. This allows mechanical stresses in the material to relax. Furthermore, localized strain events within the material can accumulate into ductile flows instead of concentrated stresses, allowing blunter cracks to form. The team then showed that this “viscous creep” mechanism allows amorphous alumina to endure far higher strains without fracturing. Indeed, they were able to make the material elongate by up to 100% in the most extreme scenarios.
Despite the excitement surrounding their findings, Frankberg’s team acknowledges that they looked at idealized samples of amorphous alumina that were free from defects. This means that commercialized products incorporating the material are currently an unrealistic prospect. However, the researchers say that their results provide important guidelines for developing generalized strategies for tailoring the mechanical properties of oxide glasses.