UN secretary-general António Guterres urged world governments to adopt a transformational approach to tackling the climate emergency – speaking on Monday at the opening ceremony of the UN Climate Change Conference (COP 25) in Madrid, Spain. “We need a rapid and deep change in the way we do business – how we generate power, how we build cities, how we move and how we feed the world. If we don’t urgently change our way of life, we jeopardize life itself,” he said.
Running from 2 to 13 December, COP 25 is a crucial meeting ahead of the 2020 UN Climate Summit in Glasgow where nations will be expected to present updated climate plans, in accordance with the 2015 Paris agreement. Guterres urged nations to replace words with actions in order to meet the Paris target of limiting global temperature rise to 1.5 °C above pre-industrial levels by the end of the century.
“Catastrophic” effect
“10 years ago, if countries had acted on the science available, they would have needed to reduce [carbon] emissions by 3.3% each year. We didn’t and today we need to reduce emissions by 7.6% each year to reach our goals,” he said. Guterres says that on current trends, there will be global heating of between 3.5 and 3.9 °C by the end of the century. “The impact on all life on the planet, including ours, would be catastrophic,” he said.
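Those percentages compound year on year, which is why a decade of delay more than doubles the required annual cut. The short sketch below illustrates the arithmetic; only the 3.3% and 7.6% rates come from the speech, while the baseline level and end dates are assumptions made here for illustration.

```python
# Illustrative only: how an annual percentage cut compounds. The 3.3% and
# 7.6% rates are from Guterres' speech; the baseline emissions level and
# the 2030 end point are assumptions made here for the sake of arithmetic.
def emissions_after(years, annual_cut, start_level=1.0):
    """Relative emissions after cutting `annual_cut` (a fraction) each year."""
    return start_level * (1 - annual_cut) ** years

# Cutting 3.3%/yr from 2010, or 7.6%/yr from 2020, lands in a roughly
# similar place by 2030 -- but the late start demands a far steeper decline.
print(f"3.3%/yr over 20 years: {emissions_after(20, 0.033):.2f} of baseline")
print(f"7.6%/yr over 10 years: {emissions_after(10, 0.076):.2f} of baseline")
```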
We see everywhere a new dynamism, a new determination that makes me be hopeful
António Guterres
In a press conference later in the day, Guterres made it clear that he is cautiously optimistic that targets can be met, following the 2019 UN Climate Action Summit that he hosted in New York in September. “We see everywhere a new dynamism, a new determination that makes me be hopeful. I’m hopeful but not yet entirely sure because there is still a long way to go and we are still running behind climate change.”
American presence: Nancy Pelosi (seated third from right) and her US colleagues at COP 25. (Courtesy: James Dacey)
COP 25 was originally to be hosted by Chile, but due to political unrest the summit was switched to Madrid just a few weeks ago. Speaking at the opening ceremony, Spain’s acting prime minister Pedro Sánchez welcomed delegates to Madrid and made the case for more environmentally-focused economic policies. “Today we know that if progress is not sustainable, it’s not worth calling it progress. Today we have the scientific certainty that the human hand is behind the damage to the fragile balance that enables life on our planet,” he said.
“Handful of fanatics”
Sánchez also made the pointed comment that “only a handful of fanatics deny the evidence” of anthropogenic climate change. At a press conference later in the day, however, he denied that he was referring to any specific parties.
At a separate media event, the Speaker of the United States House of Representatives, Nancy Pelosi, insisted that the US is still committed to the goals of the 2015 Paris agreement despite President Trump’s formal request to withdraw from the accord. Accompanying the Democrat politician was a congressional delegation including members of the House Select Committee on the Climate Crisis, a body established earlier this year.
“By coming here we want to say to everyone: we’re still in, the United States is still in,” said Pelosi. “Our delegation is here to send a message that Congress’ commitment to take action on the climate crisis is ironclad. We must act because the climate crisis for us is a matter of public health – clean air, clean water for our children’s survival, our economy.”
Kathy Castor, chair of the select committee, spoke about plans to publish a climate action plan in March 2020 containing public policy recommendations. “We intend to follow the science. And we intend to ensure that vulnerable communities across America –and across the globe – have every opportunity to participate in this clean energy economy and transformation,” she said.
Nine years ago, controlling a beam of antihydrogen and trapping a mere 38 atoms of the stuff was enough to win Physics World’s “Breakthrough of the Year” award. Today, such achievements are practically routine, and members of the winning ALPHA and ASACUSA collaborations at CERN have helped to transform their singular technical achievement into a fruitful new sub-field of experimental physics.
“The whole purpose of the demonstration in 2010 was so that we could do measurements on antihydrogen,” says ALPHA’s spokesperson Jeffrey Hangst, a physicist at Aarhus University in Denmark. “I would go so far as to say that we have done what we promised we would do.”
Between 2010 and the end of 2018, when CERN’s antiproton source shut down as part of a planned upgrade to the Large Hadron Collider (LHC), scientists in ASACUSA and ALPHA used their antihydrogen beam and antihydrogen trap (respectively) to perform a series of ground-breaking measurements. Notable milestones include the first measurements of the 1S to 2S atomic transition frequency in antihydrogen and the most precise measurement of the antiproton/electron mass ratio in antiprotonic helium.
The aim of all these experiments (and others in the growing field of antimatter research) is to look for discrepancies between matter and antimatter. Any such differences would violate the Standard Model of particle physics, and might even explain the predominance of matter over antimatter in the observable universe. So far, neither ALPHA nor ASACUSA has seen definitive evidence for a matter/antimatter asymmetry, but the precision of their measurements is improving, and Hangst says that ALPHA’s latest upgrade should make it competitive with conventional hydrogen spectroscopy.
The success of ALPHA and ASACUSA has also inspired a new generation of antimatter experiments. Three collaborations are now developing ways to study how antimatter responds to gravity, and Hangst says that the oldest – an ALPHA offshoot called ALPHA-g – came tantalizingly close to producing a result in the weeks before the most recent LHC shutdown. A newer experiment, known as AEgIS, aims to study the effects of gravity on positronium atoms (electron–positron bound states) rather than antihydrogen, while a third, GBAR, received its first antiprotons in 2018.
All in all, the years since the 2010 breakthrough have been good ones for antimatter research, and Hangst looks back with pride on what he and his colleagues have achieved. “This is now a field,” he tells Physics World. “It used to be just speculation.”
The term “e-textiles” can refer to either electronic textiles or electrically integrated textiles, but generally it describes electronics integrated into a textile substrate. The language can be confusing because there are lots of different ways to do it. You can print conductive traces onto a plastic, laminate it to a textile and call that an e-textile, but in that case the circuit itself is not a textile at all. You can also take a conductive thread and knit it into a fabric with conductive traces and call that an e-textile, but that is more textile than it is traditional circuitry. So while the main definition is “electronic textile”, there are lots of different ways to make them, ranging from printed electronics to traditional textile-making techniques.
What are some applications for the types of e-textiles that interest you?
There are quite a few markets that you can apply e-textiles to, but one of the best ways to think about it is that e-textiles are a kind of circuitry that has unique mechanical properties. When you need circuitry that can function in an environment that is flexing around multiple axes; or that can exist on a substrate that can be creased and draped over an object such as a human body; or you need a circuit that spans a large surface area (which can sometimes be hard to achieve with traditional electronics), then an e-textile can be very useful. The applications that interest us are all driven by these mechanical properties.
When it comes to specific industries and use cases, e-textiles can be very useful for garment-based wearables in the medical sector. An example might be devices that are collecting electrocardiography (EKG) or electromyography (EMG) data. One of the big issues for those applications is that if you need to do long-term health monitoring, especially of your heart, you generally have to stick all these electrodes to yourself every day under your clothes, and that encourages people to avoid doing the monitoring. But if you can make those data-collecting devices comfortable and easier for people to wear, then they are more likely to wear them. That’s the general school of thought.
Now you see me A sample of the LOOMIA Electronic Layer used as a lighting strip. (Courtesy: LOOMIA)
We’re also interested in e-textiles for outdoor gear applications. Here, the guiding principles are heating for comfort, lighting for safety (see photo) and being able to do both of those things in materials that are flat and flexible. An example might be a bicycling uniform, which you’d want to be well-lit for safety, or a tent, where you might want to have heating as well as lighting. A final application that sometimes surprises people is the automotive industry. Some cars already have heated seats, which are wired applications of e-textiles, but you can also have textile-based user interfaces to the car or controls on the dashboard.
What are the technical challenges in making these materials? Presumably they need to be washable…
Oh, definitely. Especially for the garment-based applications. The challenge here is that a washing machine is really, really hard on any material, not just e-textiles. You’re basically submerging something in water and whipping it around at high speeds and agitating it in the presence of a detergent. So when you’re trying to make a circuit washable, you’re trying to make it extremely robust and mechanically stable so that it’s not impacted by the force of the machine – but an e-textile also needs to be flexible and soft, or it won’t work well in a garment. Those two things are fighting against each other, because the more mechanically stable a material is, often the less soft it is. You’re trying to find a balance.
Dry cleaning is also an interesting challenge. Sometimes people think it’s less aggressive, but in fact dry cleaning uses perchloroethylene, which is a chemical that wants to eat away at lots of materials, and especially at lots of materials used for electronic textiles. So it’s not really a good option either.
Producing e-textiles is also a hard challenge. They require a lot of manual labour right now, and there’s not really a super well-known or fool-proof way of making them, like there is for making printed circuit boards (PCBs). If you want a PCB, it’s a lithography process, everybody does pretty much the same thing, and it comes out working every time. But that doesn’t exist for the e-textiles space yet.
What approaches have you used at LOOMIA to try to solve these problems?
One of the main things was to decide which properties we were going to sacrifice. For example, our e-textile is not stretchable. The reason for that is that when a circuit stretches, it often impacts the electrical resistance, and we wanted a textile that was highly conductive, with consistent electrical properties. That meant we had to choose a material that wasn’t stretchable. We tried to refine the list of features down to the ones that mattered most.
Circuits that scrunch E-textiles like this sample from LOOMIA incorporate foldable, flexible circuits. (Courtesy: LOOMIA)
We’ve also developed a manufacturing process that easily translates from prototyping to production. Machines for automated knitting are very expensive, and they require a lot of training to use. They’re incredible if you have that training and money, but we wanted to opt for something that didn’t require as much of either, so that it would be easy to go from prototype to production and back whenever we’re customizing our circuit.
How did you get interested in this field?
My background’s actually in fashion, and I was always really interested in how garments were put together. Then I needed a website to show off my work, so I started taking programming classes, and I discovered that it’s a similar process: you have these pieces of code that you put together to make a website, just like you have pattern pieces that you put together to make a garment. That got me interested in software and the tech world, and I started doing artist residencies that involved a lot of hardware projects.
This was at the beginning of the 2010s, and a lot of the things I ended up doing were, in essence, really early applications of electronic textiles. I saw that there was a need for a material that would enable wearable electronic textiles to be scaled up, so that they became real products rather than prototypes or proofs-of-concept. That was how I started down the research track to our main product, the LOOMIA Electronic Layer, which I developed in collaboration with our chief technology officer, Ezgi Ucar (see photo below). I was doing an artist residency for a software company, Autodesk, at the time, looking particularly at digital fabrication techniques for electronic textiles, which involve mixing up conductive inks. And when I learned there were some companies interested in these inks, I thought, wow, maybe there’s potential here.
E-textile experts LOOMIA founder and technical lead Madison Maxey (left) with chief technology officer Ezgi Ucar. (Courtesy: LOOMIA)
What made you decide to start your own company?
I am very interested in materials as enabling technologies, and it just seemed like the most interesting thing I could do was to try and make an enabling technology of my own. It helped that e-textiles is an emerging industry where a unique perspective can be an asset – it’s not like the software industry, for example, where you really need to be an expert to do anything useful. I’d also worked a little bit in fashion, and I wasn’t excited about continuing down that path – I enjoy working in the tech industry and on tech-related projects much more. So being able to work at the intersection of textiles, materials and technologies was a good fit for my interests.
How did you get funding?
In 2013, I did a Thiel Fellowship, which is a programme that gives young people money to build new things instead of going to university to study. By the time my fellowship ended, I had worked on wearable technologies for some pretty big-name companies, and that background, plus doing a fellowship that venture capitalists recognize as a good source of talent, meant that people were willing to take meetings with me when I started asking for funding. It also helped that I started LOOMIA (which was originally a studio called The Crated – we changed the name when we got investors) at pretty much the peak of the “hype cycle” for wearable tech, so people were really interested in making investments. We didn’t raise a tonne of money, but it was definitely something I couldn’t have funded on my own.
Sometimes, having too many networking opportunities can be distracting.
Madison Maxey
Speaking of the hype cycle, what do you think is coming up next for e-textiles as an industry?
I’ve seen reports suggesting that e-textiles are heading towards the “plateau of productivity” part of the hype cycle, and I personally think that’s true. The applications we’re seeing now are much more functional. They have good ideas behind them. The people doing them seem like they have a plan. One of the things we always ask is, why do you need e-textiles? Why can’t you use a standard circuit on a PCB? And the use cases we’re looking at now really do benefit from having an electronic textile inside the system, so I feel we’re getting to that productive plateau. At least, I’m hoping we are, because I’ve kind of made my bet on this space!
Looking back to when you started the business, what do you know now that you wish you’d known then?
I wish I would have known more about how to manage a start-up, how to hire the right people and how to make good financial decisions. Sometimes, the things you’re encouraged to do by the investment community aren’t the most important things to be doing in the early days of a start-up, and I wish I would have known that earlier.
One example is that people tell you to network – go to lots of events, meet lots of people. And yes, when you’re actively raising money, it does help to meet people and be at the top of their minds. But in general, you should focus on the product or technology you’re developing, because that’s the way the work gets done. Sometimes, having too many networking opportunities can be distracting.
The other example is that sometimes you’ll be encouraged to hire the most senior people you can find. The problem with that is that working on a team of three or four people really isn’t for everyone. Someone who’s very senior and very experienced may be used to working with a lot of infrastructure, and in a start-up they won’t have that. So I think you should look for people who are very motivated and self-directed as well as talented. It’s very easy to go for the vanity factor of saying, “Oh, this person used to work at Google, and now they’re here.” That’s the kind of thing investors want you to have. But I don’t think it’s necessarily an indicator that that person is the best for your team.
As an artist and textile designer, what’s on your materials wish list? What properties would be really amazing for the work you do?
Silicone and thermoplastic polyurethanes (TPUs) are some of the best materials for insulating electronic textiles, and TPUs in particular might help with making e-textiles washable. A TPU with a really strong adhesive that helps hold things in place mechanically during a washing process would be helpful for us. In addition, we’re always looking for good epoxies and conductive glues that combine elasticity with mechanical strength. Finally, we’ve recently been looking for piezoelectric sheets of material, which so far we’ve only been able to find in one place. So we’d be pretty interested in hearing about that if any materials people have something they’re working on.
Everyone loves a good book over the holiday period, but if you’re stuck for choice, why not check out the December issue of Physics World magazine, which contains our annual bumper reviews section.
Kites, wings, drones, hoops: the December 2019 issue of Physics World is now out.
There’s a cracking autobiography by quasicrystal pioneer Paul Steinhardt, a wonderfully written examination of the power of mathematics in physics by Graham Farmelo, the strange crossover between psychics and physics, and more besides.
Plus don’t miss Margaret Harris’s great feature on airborne wind energy – a potentially powerful new green-energy source – and Edwin Cartlidge on the potential of muons as a means to inspect canisters of nuclear waste.
• Empowering new scientific voices – Rose Mutiso and Jessamyn Fairfield say that public engagement not only makes science more accessible but also helps it to be more diverse and collaborative
• Winning starts – The story of Research Instruments and its founder, Mike Lee, is a fascinating lesson in adaptability. It also comes with a wonderful coda, as James McKenzie explains
• Paper tools – Feynman diagrams reveal why the tools theorists use are as important as the theories themselves, writes Robert P Crease
• Harnessing the wind – From Caribbean islands to the windswept coasts of northern Europe, a new way of generating renewable energy is taking shape. But will it ever reach the mainstream energy sector? Margaret Harris explores the promise and the challenges of airborne wind energy
• Muons: probing the depths of nuclear waste – Having used them to look through rock, physicists are now exploiting muons to peer inside canisters of radioactive waste. The ability could prove very handy for nuclear inspectors, as Edwin Cartlidge reports
• Occult arts and sceptical sciences – Philip Ball reviews Physics and Psychics: the Occult and the Sciences in Modern Britain by Richard Noakes
• Kamchatka or bust: an unlikely quest – Hamish Johnston reviews The Second Kind of Impossible: the Extraordinary Quest for a New Form of Matter by Paul J Steinhardt
• Mathematical mindset – Matin Durrani reviews The Universe Speaks in Numbers: How Modern Maths Reveals Nature’s Deepest Secrets by Graham Farmelo
• A relative journey – Ian Randall reviews Einstein on the Run: How Britain Saved the World’s Greatest Scientist by Andrew Robinson
• Extremely absurd and incredibly fun – James Kakalios reviews How To: Absurd Scientific Advice for Common Real-World Problems by Randall Munroe
• Beyond biology – JV Chamary reviews Superior: the Return of Race Science by Angela Saini
• The tempestuous genius of Fritz Zwicky – Andrew Robinson reviews Zwicky: the Outcast Genius Who Unmasked the Universe by John Johnson Jr
• Making a difference – Jude Dineley catches up with three early-career scientists whose work outside the lab is helping improve the academic environment for others
• Once a physicist – Havovy Cama is the global purchasing skills-development and training manager at Cummins, where she develops online e-learning resources for purchasing professionals
• Dark digits – Inspired by this year’s revolutionary image of a black hole, Physics World reader Michael Metcalf has created a variant of a traditional sudoku so that it begins with a central shadow surrounded by a bright region.
Researchers have designed a brain–computer interface framework to study changes in neuronal patterns through a series of motor-imagery or visual-spelling tasks. (Courtesy: J. Physiol. 10.1113/JP278118)
An international team of researchers has shown that just one hour’s use of a brain–computer interface (BCI) induced spatially specific changes in the brain’s neural connections. This finding raises the possibility of creating a therapeutic system tailored for patients suffering from brain disorders or cognitive impairment (J. Physiol. 10.1113/JP278118).
A BCI is a device that collects, analyses and translates human brain signals into commands that can be understood and processed by a computer. Over the past few decades, research in this field has increased, opening up possibilities for many clinical applications. For example, a BCI that’s able to accurately translate human intentions could help restore the independence of severely disabled individuals.
One question that researchers are now keen to answer is whether these systems could have an impact on the participant’s brain. This phenomenon, in which neuronal connections are significantly affected by using the BCI device, is called BCI-induced brain plasticity.
Researchers from the Max Planck Institute for Human Cognitive and Brain Sciences, TU Berlin and the Public University of Navarra have attempted to answer this exact question. In their study, they looked at whether two different types of BCI systems, one involving imagining physical movements and one involving visual stimulation, can cause signs of neural plasticity. MRI scans taken after one hour of using either of the two BCI approaches showed both structural and functional changes in the brain areas controlling these actions.
Decoding the brain
The team designed a pipeline with structural and functional brain MRI scans recorded before and after the BCI sessions. Each MRI session involved acquisition of an anatomical scan, a functional MRI (fMRI) scan with a motor imagery task and three resting-state functional MRI (rsfMRI) scans. The team chose to perform functional scanning as it reveals areas of higher brain activity, by detecting changes associated with blood flow and metabolism. The fMRI acquisitions were task-based, while the rsfMRI scans were task-free and enabled analysis of functional connectivity.
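To get a feel for what “functional connectivity” means in practice, here is a minimal sketch using synthetic data (not the study’s own pipeline): connectivity between two brain regions is commonly estimated as the correlation between their rsfMRI time series, and it is changes in such correlation matrices, before versus after the BCI session, that an analysis like this looks for.

```python
# Minimal sketch of rsfMRI functional-connectivity analysis on synthetic
# signals. Real pipelines work on preprocessed BOLD time series; the core
# idea -- correlating region signals -- is the same.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200
shared = rng.standard_normal(n_timepoints)       # common slow fluctuation
region_a = shared + 0.5 * rng.standard_normal(n_timepoints)
region_b = shared + 0.5 * rng.standard_normal(n_timepoints)
region_c = rng.standard_normal(n_timepoints)     # an unrelated region

# Rows are regions; corrcoef gives the pairwise connectivity matrix.
connectivity = np.corrcoef(np.vstack([region_a, region_b, region_c]))
print(np.round(connectivity, 2))
# Regions a and b co-fluctuate (large off-diagonal entry); region c does
# not. Comparing such matrices recorded before and after a BCI session is
# one way to detect changes in functional connectivity.
```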
The BCI session involved the use of an electroencephalogram-based BCI system that measures and translates brain signals into commands that can be understood by a computer program. One group of 21 volunteers underwent a motor-imagery experiment, while the other group, which consisted of 19 volunteers, performed a visual-spelling task. During the motor-imagery sessions, the subjects were instructed to imagine moving their right hand or feet, while during the visual-spelling sessions, they had to spell out phrases by picking out letters shown on a computer screen.
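As a rough illustration of how an EEG-based motor-imagery BCI turns brain signals into commands, the sketch below uses synthetic signals: imagining a movement suppresses the sensorimotor “mu” rhythm (8–12 Hz), so a drop in mu-band power can be mapped to a command. The sampling rate, signals and threshold are all assumptions for illustration, not the study’s actual parameters.

```python
# Toy sketch of a motor-imagery BCI decision: estimate 8-12 Hz "mu" band
# power in an EEG window and map low power (imagery suppresses the mu
# rhythm) to a command. All numbers here are illustrative assumptions.
import numpy as np

fs = 250                              # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)         # one 2-second analysis window

def mu_band_power(eeg, fs, lo=8.0, hi=12.0):
    """Mean spectral power in the mu band for one EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

rng = np.random.default_rng(1)
# "Rest": a strong 10 Hz mu rhythm plus noise.
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
# "Imagery": the mu rhythm is suppressed (event-related desynchronization).
imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

THRESHOLD = 2000.0                    # illustrative decision boundary
for label, window in [("rest", rest), ("imagery", imagery)]:
    power = mu_band_power(window, fs)
    command = "select/move" if power < THRESHOLD else "do nothing"
    print(f"{label}: mu power {power:.0f} -> {command}")
```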
Clear differences in brain activity
After collecting all the data, the team analysed the MRI scans to see whether the BCI sessions had affected the brain’s structure and function. The results were promising: in the visual-spelling group, structural MRI scans showed signal increases in the grey matter of occipital and parietal areas (brain regions involved with visual tasks); in the motor-imagery group, the sensorimotor cortex showed significant differences from the control scan.
The researchers also analysed the rsfMRI data for post-BCI session variations and observed significant clusters of modulated functional connectivity in the respective cortical areas. Finally, they note that the task-based fMRI scans showed no significant differences for the visual-spelling group, but revealed increased activity in the left sensorimotor cortex of the motor-imagery group in scans where the subjects were asked to imagine moving their right hand.
The results show that an hour of BCI training, on subjects who have never been exposed to this technology, has an effect on the brain’s structural and functional plasticity. More work is needed to figure out whether these changes translate into long-term consolidation, but for now, the emerging question is whether these devices can be used for subject-specific therapy.
As this study revealed the high spatial specificity of the BCI effects, the authors think that there is potential in “tailoring BCI-based therapeutic approaches individually to, for example, stroke patients, according to the individual patient’s lesion location”.
Plastics made from carbon- and hydrogen-based monomers can be highly flammable, and once they ignite, they produce flammable gases that can fuel a fire further. For this reason, many materials in this class – including polystyrene, one of the world’s most widely-used plastics – cannot be employed in building construction unless they are made flame-retardant or concealed behind barriers such as drywall, sheet metal or concrete.
Now, however, researchers in Spain have found that a polystyrene that incorporates ultrafine particles of iron in a mesoporous silica matrix is much less likely to burst into flames or emit smoke when heated. The researchers’ technique, which also slightly increases the glass transition temperature of polystyrene, might be used to improve the thermal-oxidative stability and fire retardancy of polymers in general. This is important because accidentally ignited foamed polystyrene materials have led to serious incidents in the past, including fires at Düsseldorf International Airport and in the Channel Tunnel.
Nanofillers improve thermal stability and fire-retardant properties
Past research has shown that polystyrene’s thermal stability and fire-retardant properties improve when nanofillers are incorporated into the material. In one previous work, De-Yi Wang of the IMDEA Materials Institute in Madrid and colleagues showed that a mesoporous silica known as SBA-15 is a particularly good candidate in this respect thanks to its tuneable pores that can be functionalized with other compounds. In polystyrene that contained SBA-15 modified with cobalt oxide (Co3O4), for example, volatile organic chemicals produced when the composite material was heated became trapped in the pores and were then only released gradually – improving the material’s thermal stability.
In their new work, Wang and colleagues began by adding dopamine hydrochloride to powdered SBA-15 and leaving the solution to react for 12 hours, during which time the dopamine polymerized into polydopamine (PDA). They cleaned and filtered the resultant product (denoted SBA-15@PDA) before drying it at 80°C overnight.
The team then added an aqueous solution of ferric nitrate – Fe(NO3)3 – to the SBA-15@PDA and mixed the two components for 24 hours using a magnetic stirrer. This ensured that the Fe3+ ions completely diffused into the pores of the SBA-15 and coordinated with the PDA structure. After several further processing steps, they hot-pressed the composite SBA-15@PDA@Fe into different shapes so that they could test its thermal and combustive behaviour.
Analysing volatile organics and combustion behaviour
The researchers analysed the volatile organics generated after subjecting their test shapes to heat. They did this using Fourier-transform infrared spectroscopy coupled to a thermogravimetric analyser, in which the samples are heated in a crucible from room temperature to 800°C at a rate of 10°C per minute. They also studied the material’s combustion behaviour by measuring the so-called limiting oxygen index (LOI) and by performing the cone calorimeter test (CCT).
Compared to simple polystyrene composites containing only SBA-15, those containing SBA-15@PDA@Fe had a stronger affinity for aerobic volatiles than anaerobic ones, the researchers report. This has the effect of delaying the release of oxidatively decomposed products and thus improves thermal-oxidation stability. What is more, the SBA-15@PDA@Fe improved the LOI (by 1.7%), meaning the altered material needs a more oxygen-rich atmosphere to keep burning. The glass transition temperature of the material (that is, the temperature at which it changes from a rigid, glassy state to a softer, rubbery one) was also around 10°C higher than that of pure polystyrene, again showing that the composite is more thermally stable.
The tantalizing new form of matter mentioned in the subtitle refers to the quasicrystal – a material in which atoms are arranged in a well-defined structure that does not have translational symmetry. In the opening chapters of his book, Steinhardt explains why the 1982 discovery of the first quasicrystal by materials scientist Dan Shechtman came as a huge surprise, because it violated well-established rules of crystallography.
In other words, quasicrystals had been impossible – but for Steinhardt, it was a “second kind of impossible” that was based on scientific assumptions that were not immutable. Impossible was also the word used by scientific heavyweights including Richard Feynman and Linus Pauling to describe Steinhardt’s quest to develop a scientific theory of quasicrystal formation, and to find naturally occurring examples of the material (Shechtman’s samples had been synthesized in the lab). “There is no such thing as quasicrystals, only quasi-scientists,” Pauling is reported to have said.
The first part of the book is devoted to the extraordinary effort that Steinhardt and a few of his students put into developing a theoretical framework for how quasicrystals could form in a liquid as it solidifies. In addition to using cardboard and plastic models to show that a 3D quasicrystal arrangement of atoms is possible, his team had to explain why the atoms would create extremely complex quasicrystals, when they could instead form much simpler crystalline structures.
Steinhardt’s early work convinced him that quasicrystals could form in nature, and so he and a few colleagues devised a computer algorithm to search databases of minerals for evidence of quasicrystals. Analysis of the top candidates was disappointing, so all he could do was put out a call for mineralogists around the world to be on the lookout for these materials.
That call was made in a paper published in 2001, but Steinhardt had to wait until 2007 for the next big breakthrough. It came in an e-mail from Luca Bindi, a mineralogist at the University of Florence who had also developed an obsession with quasicrystals, one that matched Steinhardt’s. Having read Steinhardt’s paper, Bindi had found a candidate. Incredibly, a year later in 2008, a quasicrystal was confirmed among samples held in a Florence museum’s collection of minerals.
That mineral was labelled khatyrkite and was believed to have been gathered on the Kamchatka Peninsula in the far east of Russia. But there was a problem – two of the US’s leading geoscientists said in no uncertain terms that they believed the sample was not natural, and was very likely to be a bit of slag from an industrial metallurgical process.
After some international sleuthing that reads much like a detective thriller, Steinhardt and Bindi were crestfallen to discover that it was going to be very difficult to prove that the mineral actually came from Kamchatka. The sample was apparently gathered by a shady scientist and former Soviet apparatchik, who asked Steinhardt for an exorbitant amount of money to establish provenance. When Steinhardt refused, the former apparatchik warned other Russians against helping with the search.
Other characters Steinhardt encountered included a recalcitrant widow in Amsterdam whose late husband sold the sample to the museum, and the mysterious “Tim the Romanian”, who apparently acted as a go-between for the Dutch dealer and the Russians. The bizarre and highly unreliable provenance of the sample caused a serious rift between Steinhardt and his geoscientist colleagues – who remained concerned that the quasicrystal might be artificial and temporarily quit Steinhardt’s team.
Despite these doubts, in 2009 Steinhardt and colleagues published a paper in Science (10.1126/science.1170827) announcing the discovery of the first ever naturally occurring quasicrystal. With that mission accomplished, many scientists would have left it there. However, an oxygen-isotope study of the sample then revealed what Steinhardt and others had suspected for some time – that the quasicrystal came from space, as a meteorite. What is more, the team had located the Russian scientist Valery Kryachko (a former underling of the apparatchik) who had found the sample in Kamchatka.
Despite being a theoretical physicist who had never spent a night in a tent, he found himself leading an expedition to a remote part of Russia in search of the remains of a meteorite
So as far as Steinhardt was concerned, there was only one thing he could do. Despite being a theoretical physicist who had never spent a night in a tent, he found himself leading an expedition to a remote part of Russia in search of the remains of a meteorite. Incredibly, the team was able to dig enough tiny grains out of the chilly Kamchatka mud to begin to answer important questions about where and when the quasicrystals came from.
Throughout the book, Steinhardt is effusive with praise and respect for his colleagues, but his characterizations do verge on the mawkish. After a few hundred pages, it becomes clear that every new collaborator mentioned in the book will turn out to have a brilliant scientific mind, a critical eye and a prodigious appetite for work. Despite his praise and respect for his team – many of whom were not physicists – there is also a whiff of “theoretical physicist knows best”, and it feels as though Steinhardt sees his discovery as an example of how the pure and disciplined thought processes of a theoretical physicist can cut through the noise and confusion of other disciplines.
The Second Kind of Impossible is a book that I could not put down because it was fast-paced and had genuine surprises in every chapter. It also provides an insight into the professional life of an “A-list” physicist at an Ivy League university. Steinhardt seems to have little trouble gaining access to the best materials characterization facilities in the US. He manages to gain funding from an unnamed private benefactor for an expedition that some experts had described as “hopeless”. Moreover, a study that Steinhardt describes as only producing “dud” results is written up in a paper published in Physical Review Letters. Many jobbing physicists would be thrilled to publish their best ever results in this high-status journal. Ultimately, Steinhardt deserves his place on the A-list because he was right about naturally occurring quasicrystals and science is the better for it.
A technique for remotely entangling ions of strontium much more accurately and at far higher rates than previously possible has been unveiled by physicists in the UK. The team says that their scheme paves the way to scalable quantum computers made from multiple ion traps that are linked to one another via photonic interconnects.
Quantum computers promise to greatly outperform even the most powerful conventional computers on certain tasks. While some progress has been made, many challenges remain – including how to achieve the quantum entanglement of large numbers of quantum bits (qubits).
Trapped ions offer a way of generating qubits with very low levels of noise, and therefore maintain the quantum coherence that is required to perform calculations. Indeed, the quantum states of ions have been made to persist for over 10 minutes. Each ion is held in a vacuum using electric fields and is suspended over a micro-fabricated chip. Manipulated by laser beams, the ions can then be placed in a superposition and entangled with their neighbours.
Wiring and laser beams
Although the coherence times of rival technologies based on bulk matter are often far shorter – superconducting qubits, for example, generally last for less than a thousandth of a second – ion traps are relatively slow and are limited in the numbers of qubits they can store. This is because it becomes increasingly difficult to accommodate the wiring and laser beams needed as more qubits are added.
As such, researchers are exploring ways of connecting ion qubits in different traps. In the latest work, Christopher Ballance and colleagues at Oxford University have shown how to link trapped ions by entangling them using the photons they emit when excited by a laser beam. This technique was first realized by Chris Monroe and colleagues at the University of Maryland in the US, and now the Oxford group has boosted both the rate and fidelity of the entanglement by collecting more of the photons given off by the ions and by limiting imperfections in the emission process.
Their experiment involves generating a sequence of very short laser pulses, splitting each pulse in two and then directing each half of that pulse to an ion of strontium-88. Each of the excited ions then decays to a superposition of two different energy levels, causing it to emit a photon whose polarization is entangled with that of the ion. The train of photons emerging from each half of the experiment is then focused by a lens and fed into a length of fibre-optic cable.
The ions are entangled by directing the photons that emerge from the fibres onto a beam splitter, with the output from that monitored by two detectors. It is when both detectors click that the ions become entangled. The quality, or “fidelity”, of the entangled state is obtained through a Bell-state measurement. With the ions separated by 5 m of optical fibre, Ballance says that the technique provides entanglement over a sufficiently long distance to network many quantum computers together.
As they report on the arXiv server, the researchers found that they could generate, on average, 182 entangled ion pairs per second, with a fidelity of 94%. This compares to a rate of just five entangled pairs per second that was achieved by Monroe’s group in 2014, and a mere 0.001 pairs per second in 2007.
“A big deal”
Monroe says the Oxford result “is a big deal, and the most recent demonstration of the fast-improving rate of off-chip quantum communication between ions”. He reckons it should be possible to push the rate well beyond 1000 entangled pairs per second, at which point, he says, “it is approaching the speed of local ion-ion operations and therefore useful for scaling.”
In fact, Ballance reckons that it might be possible to improve the latest rate by a factor of up to 100, in part by replacing the 5 cm-diameter lenses currently used to direct photons with reflective surfaces that can be placed much closer to the ions – and therefore collect more light. As with classical computers, he says that the aim is to “get to the point where the interconnect is not the bottleneck”.
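The payoff from better light collection is quadratic, because a successful heralding event needs a detected photon from each of the two ions. Here is a back-of-the-envelope sketch using assumed numbers, not the team’s published parameters:

```python
# Why photon collection dominates the remote-entanglement rate: heralding
# needs a detected photon from *each* ion, so success scales as eta**2,
# where eta is the per-ion probability of collecting, transmitting and
# detecting a photon. All numbers below are illustrative assumptions.
ATTEMPT_RATE = 1e6   # entanglement attempts per second (assumed)

def pair_rate(eta, attempt_rate=ATTEMPT_RATE):
    """Entangled pairs/s; the 1/2 reflects that a linear-optics Bell-state
    measurement can herald at most half of the photon-pair events."""
    return 0.5 * eta ** 2 * attempt_rate

print(f"eta = 2%:  {pair_rate(0.02):6.0f} pairs/s")   # ~200 pairs/s
print(f"eta = 20%: {pair_rate(0.20):6.0f} pairs/s")   # 100x more
# A 10-fold gain in collection efficiency -- for example from optics placed
# much closer to the ions -- buys a 100-fold gain in entanglement rate.
```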
Ballance says that the qubits’ long coherence times could make them ideal computer processors, as well as memories. To make his point he refers to Google’s recent report of “quantum supremacy”, which involved carrying out an operation claimed to be impossible on a classical computer in any reasonable amount of time. Although describing the result as a “milestone” in quantum computing, he argues that it also demonstrated the limitation of superconducting qubits – given their short-lived quantum states. The company’s researchers, he says, “have to improve memory to increase performance”.
Another possible use of remotely entangled ion qubits, adds Ballance, is the deployment of high-precision quantum sensors over large areas.
Rainer Blatt works on trapped ions at the University of Innsbruck in Austria and describes the latest work as “a nice technical achievement” that provides “good progress on the way to real-world applications”. But he cautions that in future the technique will have to be applied in the presence of more ions, perhaps to link two quantum computer nodes or to develop a quantum repeater. “There will be certainly more technological problems before such devices are readily available,” he says.
“I thought I’d better throw some cold water on that fire; it’s fine for it to smoulder, but we shouldn’t let it overheat,” writes Strassler.
When will we be able to slip on a flying suit and soar in the sky like Tony Stark in Iron Man?
Physics student Daria Stekolnikova has written a book called The Flying Humans to answer that question. She is trying to raise a little over £3000 to get the book published and is looking for backers.
It looks like a donation will get you a signed copy of the book.
I’m sure that by now just about everyone has seen that bizarre video showing Elon Musk unveiling the Tesla pickup truck. The infantile design of the vehicle – a four-year-old could draw a better truck – combined with the easily-shattered “Tesla Armor Glass” made me think that the event was some kind of joke. But apparently it was for real.
Rhett Allain has looked at the physics of why the truck’s window broke so easily. You can read more in Wired.
“A tour de force” is how physicist Boris Blinov from the University of Washington described research carried out at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, in 2009. For decades physicists had dreamt of building a quantum computer that can solve problems faster than a conventional counterpart. Then in August 2009, a NIST team led by Jonathan Home unveiled the first small-scale device that could be described as a quantum computer. The work represented a huge step forward – so much so that we chose it as the very first Physics World Breakthrough of the Year in 2009.
Building up to the breakthrough, Home’s team had used ultracold ions to demonstrate separately all of the steps needed for quantum computation – initializing the qubits; storing them in ions; performing a logic operation on one or two qubits; transferring the information between different locations in the processor; and reading out the qubit results individually. But in 2009, the group made the crucial breakthrough of combining all these stages onto a single device. Home’s set-up had an overall accuracy of 94% – impressive for a quantum device – but not good enough to be used in a large-scale quantum computer.
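To see what a minimal two-qubit logic operation looks like in the abstract, here is a short state-vector sketch (a textbook illustration, not a model of the NIST ion-trap hardware): a Hadamard gate followed by a CNOT turns the initialized state |00⟩ into the entangled Bell state (|00⟩ + |11⟩)/√2, whose readout gives 0 or 1 on both qubits with equal probability.

```python
# Textbook sketch of one- and two-qubit logic on a state vector -- the kind
# of operations the NIST device combined on a single chip. Not a model of
# the ion-trap hardware itself.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                   # controlled-NOT gate
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)    # initialize |00>
state = np.kron(H, I) @ state                    # Hadamard on qubit 1
state = CNOT @ state                             # entangle the qubits

# Readout probabilities for |00>, |01>, |10>, |11>
print(np.round(np.abs(state) ** 2, 3))           # -> [0.5 0. 0. 0.5]
```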
Several major technology firms have entered the field or ramped up their efforts too – notably Google, Microsoft and IBM – while the Canadian firm D-Wave Systems has been selling quantum computers with an increasing number of “quantum bits”, or qubits. Its latest model has 2000.
The most recent and – to date – highest-profile result was the announcement in October that Google had used a 53-qubit quantum computer to reach “quantum supremacy” – a term denoting that a quantum computer can solve a problem in a significantly shorter time than a conventional (classical) computer. Although Google’s machine outperformed a classical computer only on a very specific problem, the move marked a major milestone for the field.
A decade on from the world’s first quantum computer on a chip, we have now entered the era of quantum supremacy or “quantum advantage” as some would rather have it. The coming decade promises to be equally exciting.