
Room at the bottom

Everyone has heard of Silicon Valley, but few really understand how it became the home of the global computing industry. Before the 1980s, the area’s technological economy was dominated by the development and manufacture of magnetic recording storage, and it has been said, somewhat tongue-in-cheek, that it was then more of a “Rust Valley” due to the prevalence of various ferrous–ferric oxide mixtures employed in this industry. The label “Silicon Valley” did not gain currency until later, when Fairchild Semiconductor and its descendants Intel and Advanced Micro Devices began to commercialize their metal-oxide-semiconductor field-effect transistor (MOSFET) technology, in which the semiconductor is usually silicon – thus transforming both the valley and its worldwide image.

Moore’s Law celebrates the life and career of a scientist who played a major role in these developments. In the geek world, Gordon Moore is best known as the progenitor of “Moore’s Law”, the empirical observation (made in 1965) that the density of MOSFETs on an integrated circuit would double every 18–24 months. This doubling has indeed occurred more or less on schedule. The book’s subtitle describes Moore as a “quiet revolutionary” and the first word is certainly accurate; Moore is definitely not a superstar who attracts the kind of press promotion received by the likes of Bill Gates, Steve Jobs and, most recently, Elon Musk. But I prefer the description “quiet hero”. In his own industry, Moore has been to his colleagues what Steve Wozniak was to Steve Jobs at Apple Computer – the real font of technical (not sales) innovation behind their respective enterprises.
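
To get a feel for what such a doubling rate implies, here is a back-of-the-envelope sketch (my own illustration, not anything from the book), taking the roughly 2300 transistors of the 1971 Intel 4004 as an assumed baseline and a 24-month doubling period:

```python
# Back-of-the-envelope Moore's-law extrapolation.
# Assumes a 24-month doubling period and the Intel 4004
# (1971, ~2300 transistors) as the baseline device.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2300
DOUBLING_PERIOD_YEARS = 2.0

def transistors(year):
    """Predicted transistor count per chip in a given year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: {transistors(year):,.0f} transistors")
```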

The hardback book I now hold in my hands is some 4 cm thick and contains much more material than can be absorbed in one or two evenings of reading. To summarize, it describes how Moore was born and raised in the San Francisco Bay Area; attended local universities in San José and Berkeley; graduated from the latter in 1950 with a degree in chemistry; and obtained a PhD in that discipline from the California Institute of Technology. Following postdoctoral studies at Johns Hopkins, Moore joined William Shockley at Beckman Instruments in California, but in 1957 he and seven other young researchers broke with the notoriously difficult Shockley and accepted financial support from an entrepreneur, Sherman Fairchild. Over the next 10 years, their new company, Fairchild Semiconductor, pioneered the development of MOSFET devices, but not their successful commercialization. That began in 1968, when Moore and Robert Noyce founded the company that became Intel – arguably one of the most successful American enterprises of the later 20th century.

Narrating this tale takes up most of the book, which is replete with moving family memorabilia and corporate intrigue. An example of the latter was Intel’s uneasy alliance with IBM, which Moore engineered in the early 1980s. With demand for IBM’s line of personal computers and mainframes exceeding its in-house manufacturing and development resources, it purchased, temporarily, a 15% interest in Intel to assure continuity of supply. Ordinarily, such a purchase could have fallen foul of US antitrust legislation, but at the time, IBM mainframes underpinned a large number of US defence and intelligence resources. This led to concerns that if the company had to source parts for its machines off-shore (particularly in Japan), it could engender a security risk. Hence, IBM was assured that its temporary funding of Intel would not be subject to antitrust action.

So much for biography. Now let’s put on our physicist hats. Just how did Moore’s law come to be, and when will it be repealed? The basic concept behind MOSFETs was revealed in patents filed by two physicists, Julius Edgar Lilienfeld in the US and Oskar Heil in the UK, in 1926 and 1935 respectively. (Perhaps these dates should be the real “t = 0” for Moore’s law.) So why did it take almost four decades for the device to be realized in practice? Developing ancillary tools for fabrication took time, of course, but lack of demand was also a factor. Simply put, it took a while for the window of “conventional” technology (the vacuum tubes, junction transistors and bulk diodes that underpinned the devices that emerged after the Second World War) to slam shut, and for demand for faster and smaller “1” and “0” switches to take off. The micro- and nano-scale “wrenches”, “hammers” and “pliers” (actually vacuum deposition chambers, X-ray and electron diffraction, lithography and an alphabet soup of other technologies) required for manufacture already existed in the tool sheds of academic research institutions, US national laboratories and a few hi-tech companies (notably IBM and Bell Labs), but it took the opening window of economic promise to get these tools off the shelf.

So, was the inevitability of Moore’s law foreseen in the basic physics of MOSFETs and of the tools needed for their commercialization? I would argue that it was, and here Richard Feynman deserves a lot of credit. In 1959, well before Moore’s 1965 speculation, Feynman gave his now-famous lecture “There’s Plenty of Room at the Bottom” (a play on the title of the 1959 film Room at the Top). In the lecture, Feynman pointed out that our known laws of materials physics more than allowed for the evolution of the micro- and nano-fabrication techniques that gave rise to Moore’s law. And the rest is history.

Well, almost. On current trends, MOSFET dimensions will approach atomic scales within a decade, and the last section of Moore’s Law (entitled “All Good Exponentials End”) discusses this problem. Keep in mind we’re talking physics here, not economics. Today, all computers, whether in the cloud or in your pocket, are based on the Turing–von Neumann stored-program concept using “irreversible” binary logic and switching devices. By “irreversible”, I mean that the storage technology is incapable of “remembering” whether it contained a 1 or 0 before its current state. In 1961 – barely a year after Feynman and four before Moore – Rolf Landauer of IBM postulated a thermodynamic limit on irreversible binary logic: erasing one bit dissipates at least kT ln 2 of energy, so the total dissipation scales as the number of irreversible switching events times kT ln 2. This single-bit Landauer limit was experimentally verified in a 2012 article in Nature.
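
To put a number on that limit, here is a minimal sketch (my own illustration, not from the book) of the Landauer energy for a single bit at room temperature:

```python
import math

# Landauer's principle: erasing one bit of information dissipates
# at least k_B * T * ln(2) of energy.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed room temperature, K

E_bit = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {E_bit:.2e} J")
# Prints ~2.87e-21 J (about 0.018 eV) -- many orders of magnitude
# below the energy dissipated per switching event in today's chips.
```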

So when will Moore collide with Landauer? This has been a point of debate for at least a decade, but unfortunately it is not clearly addressed in Moore’s Law. Some have suggested that Landauer’s limit could be overcome by storing and manipulating our 1s and 0s in a black hole – a sort of Feynman cellar, if you will. If we could somehow convey this to Feynman’s spirit today, his response might be, “Of course. There’s still plenty of room at the bottom…and the top of the universe as well!”

  • 2015 Basic Books $35.00 hb 560pp

Searching for life on other planets

The search for signs of extraterrestrial life looks set to be one of the most exciting scientific endeavours of the 21st century, and scientists have no shortage of places to look. Astronomers have already discovered nearly 2000 exoplanets, with many more expected. While most of these known exoplanets are gas giants that appear to be inhospitable to life, the discovery of Earth-like rocky exoplanets could come courtesy of the next generation of telescopes.

In this podcast recorded at the Canadian Association of Physicists Congress in Edmonton, Sara Seager tells physicsworld.com editor Hamish Johnston how astronomers are gearing up to use the James Webb Space Telescope – due to launch in 2018 – and other ground- and space-based facilities to look for water vapour, oxygen and other gases in the atmospheres of rocky exoplanets. These and other gases such as methane could indicate the presence of life on these distant worlds, but Seager points out that many measurements on many different exoplanets will be needed before we can say with reasonable certainty that life exists.

‘Metasheet’ blocks a narrow band of radiation, letting the rest pass

A “metasheet” that is extremely efficient at absorbing electromagnetic radiation in a very narrow band of wavelengths, while remaining transparent elsewhere in the spectrum, has been produced by researchers in Finland and Belarus. It was made by placing simple wire helices in strategic locations throughout the material, so that the helices absorb and dissipate energy contained in both electric and magnetic fields. The metasheet works for microwave radiation, and could be useful for making radiation detectors, telecommunications devices, energy-harvesting systems and even radar-cloaking devices. In principle, the design could also be modified to work for visible light.

The idea of a device that absorbs radiation at specific wavelengths is not new, but most existing devices reflect the unabsorbed radiation back to its source. This rules out many useful applications where transmission of the unabsorbed radiation is needed. To address this shortcoming, several groups have attempted to develop “Huygens’ metasurfaces”, which comprise arrays of sub-wavelength inclusions that scatter radiation only in the forward direction. If the resonant wavelength of the inclusions is chosen correctly, they can collectively dissipate radiation at a wavelength of choice. Radiation at other wavelengths is diffracted by the inclusions and its wavefronts are reconstructed to achieve transmission.

Previous designs have used different inclusions to absorb the electric and magnetic components of the incident radiation. While the different inclusions can be designed to have their central resonance peaks at the same wavelength, the absorption tends to fall away at different rates on either side of the peaks. This prevents other wavelengths from being perfectly transmitted and leads to undesirable reflections. One solution is to use a material that is “bianisotropic”, which means that it can interact with both the electric and magnetic fields of the incident radiation. While this solves the problem of mismatched interactions away from the resonance peak, previous metasurfaces based on this principle could only absorb one circular polarization of the radiation.

Handy helices

Earlier this year, Viktar Asadchy and colleagues at Aalto University in Finland produced a “metamirror”. This device is transparent to wavelengths away from a resonant wavelength, while reflecting the resonant wavelength at a specific angle. The team has now extended this work to produce a surface that absorbs radiation at a specific wavelength and dissipates its energy as heat.


The researchers made their resonators from helices of nickel–chromium wire, a dissipative material commonly used in electrical-resistance heaters. These helices are bianisotropic: when excited by incident electromagnetic radiation, they become electrically polarized along the helix axis and magnetically polarized azimuthally. Because of its chirality – a helix can twist with either a right-handed or a left-handed orientation – each helix is polarization sensitive, and therefore only absorbs light with a single circular polarization. The researchers therefore designed their metamaterial to include both right-handed and left-handed helices embedded in a plastic-foam substrate. This, they calculated, should produce metasheets that absorb light at a desired wavelength regardless of its polarization.

Manufacturing imperfections

They then tested the absorption of materials containing both single- and double-turn helical inclusions. At the resonance wavelength, they found that the single-turn helices absorbed 92% of the incident microwave radiation, whereas the double-turn helices absorbed 81%. These absorption figures, while impressive, are lower than the researchers’ theoretical predictions that single-turn helical arrays would absorb 96.5% of radiation and double-turn arrays 99.9%. The researchers attribute this difference to manufacturing imperfections.

The team is now looking to build on this and previous research to produce non-reflecting arrays of transmitters that can be stacked in layers. “For example, the first layer will transmit the wave in one direction or focus it at one point,” Asadchy explains. “The second layer, which will stay behind this first one, will focus a wave of another wavelength at another point. Then we can combine several layers of these transmit arrays because they are transparent at their non-operational wavelengths. With these multifunctional structures, we can achieve completely amazing properties.”

“I think that this work is very significant for our community because it points out a very new device, which is an invisible filter,” says Filiberto Bilotti of the University of Rome. He cautions, however, that while in principle the physics is scalable to work at shorter wavelengths, creating a practical material that works for near-infrared or visible light is “not that trivial” because the conductivity of metals drops at shorter wavelengths. As a result, a different strategy would be needed to create short-wavelength metasheets.

The metasheets are described in Physical Review X.

Spain and Chile will host next-generation gamma-ray observatory

Sites in Spain and Chile have been chosen to host the Cherenkov Telescope Array (CTA) – a huge gamma-ray observatory 10 times more sensitive than existing instruments, which will study supernova explosions, binary star systems and active galactic nuclei. Astronomers working on the project expect they will get approval at the end of the year to start building the arrays. It is hoped that the CTA will begin taking data at both locations by the end of 2020, with full operations by 2023.

High-energy gamma rays are generated in the most energetic events in the universe, and studying these messengers can reveal important information about the violent processes that created them. When a gamma ray interacts with a particle in the Earth’s atmosphere, it produces a shower of lower-energy particles. These particles travel faster than the speed of light in air, creating a cone of blue light akin to a sonic boom. Telescopes on the ground collect this Cherenkov radiation, which scientists then analyse to determine the energy of the original gamma ray and the direction it came from.
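
For readers who like numbers, a minimal sketch (my own, with an assumed sea-level refractive index for air of n ≈ 1.0003) shows how narrow the Cherenkov cone is and why only very energetic particles produce it:

```python
import math

n = 1.0003   # assumed refractive index of air at sea level
beta = 1.0   # ultra-relativistic particle, speed ~ c

# Cherenkov emission angle: cos(theta) = 1 / (n * beta)
theta_deg = math.degrees(math.acos(1.0 / (n * beta)))
print(f"Cherenkov cone half-angle in air: {theta_deg:.2f} degrees")

# Emission requires beta > 1/n, i.e. a Lorentz factor
# gamma > 1 / sqrt(1 - 1/n^2). For an electron (m_e c^2 = 0.511 MeV):
gamma_min = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
print(f"Electron threshold energy: {0.511 * gamma_min:.0f} MeV")
```

The half-angle comes out at about 1.4° and the electron threshold at roughly 21 MeV, which is why the light arrives as a tight pool on the ground that large telescope arrays can sample.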

The CTA will consist of two arrays. The smaller array – consisting of 15 telescopes of 12 m diameter and four of 23 m – will study the northern sky from the Spanish island of La Palma, off the Atlantic coast of North Africa. The larger observatory will have 70 telescopes of 4 m diameter, 25 of 12 m and four of 23 m. It will look toward the southern sky from Paranal in Chile’s Atacama Desert, and the first few small telescopes are likely to be deployed to the Chile site in mid-2016.

‘Excellent astronomical conditions’

According to CTA project manager and technical director Christopher Townsley, these two sites were chosen over other contenders for several reasons, including the level of existing infrastructure and the estimated long-term operating costs. “Both the sites chosen have well-established observatories nearby and proven excellent astronomical conditions,” Townsley explains. If for any reason the La Palma and Paranal site negotiations fall through, however, the group has alternative north and south locations in Mexico and Namibia, respectively.

The CTA will build on the technologies developed for current ground-based gamma-ray telescopes – such as the Very Energetic Radiation Imaging Telescope Array System in the US as well as the High Energy Stereoscopic System in Namibia – that use Cherenkov imaging techniques. Apart from being 10 times more sensitive than any rival, the CTA will also study a wider range of energies, from about 10 GeV to 300 TeV, although energies above 10 TeV will be accessible only with the southern site’s 4 m instruments.

Scientists plan to do two astronomical surveys with the CTA: one of the galactic plane that contains the galactic centre – a site swarming with high-energy sources – and the other of one-quarter of the full sky. The observatory could also shed light on dark matter, according to CTA spokesperson Werner Hofmann of the Max-Planck-Institute for Nuclear Physics in Heidelberg. He explains: “If dark matter is indeed made of neutralino particles in the TeV mass range, CTA is in the best position to detect this radiation – a discovery that would have tremendous impact and far-reaching consequences for astrophysics, particle physics and cosmology.”

The most exciting science that the next-generation gamma-ray detector can do, however, is uncovering new surprises. “When you build an instrument that is much more capable than existing instruments, and you’re exploring a new waveband,” says CTA co-spokesperson Rene Ong of the University of California, Los Angeles, “you’re going to discover something unexpected.”

Leading UK scientific organizations urge governments to tackle climate change head on


A joint statement on climate change from 24 of the UK’s foremost academic and professional institutions, including the Royal Society and the Institute of Physics, which publishes Physics World, has been released today. The statement recognizes that human activity is responsible for climate change, and the organizations are urging governments to take immediate action if they are to avert the serious risks posed by the changing climate.

The statement brings together a variety of institutions from across the sciences, social sciences, arts, humanities, medicine and engineering fields. These leading UK institutions say that to tackle climate change, governments worldwide, including that of the UK, must seize the opportunity at November’s UN Climate Change Conference in Paris to negotiate a legally binding and universal agreement on tackling climate change, based on the latest scientific evidence.

Physical view

“The scientific evidence that climate change is real, and that it’s caused by human action, is compelling. If we’re to limit its effects, then we have to act sooner rather than later,” says Institute of Physics president Frances Saunders. She adds that “physics is not only central to understanding the climate, it will also be fundamental to mitigating its effect and transitioning to low-carbon technologies”.

The statement warns that to truly limit global warming in this century to 2 °C relative to the pre-industrial period will require a transition to a net zero-carbon world by early in the second half of this century, and calls on governments to put in place the necessary policy and technological responses, while seizing the opportunities of low-carbon and climate-resilient growth. The organizations agree that the scientific evidence is now overwhelming that the climate is warming, and that human activity is largely responsible for this change through emissions of greenhouse gases.

Climate change poses risks to human beings and ecosystems the world over by worsening existing economic, environmental, geopolitical, health and societal threats, and by generating new ones. These risks range from the increased likelihood of extreme weather – the like of which is already being seen globally – with a rise of 2 °C above pre-industrial levels, to substantial species extinction and global food insecurity at or above 4 °C.

  • In 2013 Simon Buckle from the Grantham Institute for Climate Change spoke to Physics World to share his thoughts on the big impact of climate change. Watch more from our 100 Second Science video series

Physicists build universal optics chip

A group of physicists in the UK has made a programmable photonic circuit that can be used to carry out any kind of linear optics operation. The researchers say that the device provides experimental proof of a long-standing theory in quantum information, and could help speed the development of photonic quantum computers, as well as establishing whether quantum computers are fundamentally different from their classical counterparts.

The research builds on work carried out back in 1897 by the German mathematician Adolf Hurwitz, who showed how a matrix of complex numbers known as a unitary operator can be built up from smaller 2 × 2 matrices. A unitary operator provides a mathematical description of a linear optical circuit. This is any circuit that uses fairly standard optical components – such as mirrors, half-silvered mirrors and phase shifters – to route photons and cause them to interfere with one another. The operator has as many rows as there are output ports in the circuit and as many columns as there are input ports. With only one photon in the circuit, the probability that it travels from a particular input to a particular output is given by the squared modulus of the corresponding matrix entry.
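
As a toy illustration of that rule (my own sketch, using the 2 × 2 unitary of a 50:50 beam splitter, the simplest linear optical circuit):

```python
import numpy as np

# 50:50 beam splitter: a 2x2 unitary operator mapping two input
# ports to two output ports.
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitarity check

# A single photon enters port 0; the probability of it leaving
# port j is the squared modulus of the entry U[j, 0].
for j in range(2):
    print(f"P(input 0 -> output {j}) = {abs(U[j, 0])**2:.2f}")
```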

In 1994 Anton Zeilinger, then at the University of Innsbruck in Austria, and colleagues showed theoretically that since 2 × 2 matrices can describe the components used in linear optical circuits, such components can be configured to reproduce any unitary operator. More recently, Anthony Laing of the University of Bristol realized that by building a device capable of reproducing any unitary operator, it would be possible with that single device to carry out any linear optics experiment on the same number of input and output ports. Now, Laing and a number of Bristol colleagues have built and operated such a device.

Unitary operator on a chip

Laing designed the device so that it could sit on a single chip, explaining that problems of instability would make it close to impossible to build it using components on a lab bench. This is because any movement of a micron or more would prevent photons from combining coherently inside the circuit’s interferometers. Laing’s colleague Jacques Carolan points out that such “integrated optics” still poses challenges, such as how to fine-tune the waveguide couplers that act as tiny beam splitters, so that exactly half of the light from one of a pair of waveguides tunnels into the other. The researchers were able to overcome this particular problem by collaborating with scientists and engineers at the NTT Corporation in Japan.

The result is a silica-on-silicon device designed to fit onto a 6 inch wafer, comprising 15 interferometers and 30 electrically controlled phase shifters. The input is from six single-photon channels and the chip sends its output to an array of 12 single-photon counters. Christened a universal linear optical processor, or LPU (in analogy with a CPU), the device has been used by the Bristol researchers not only to prove the theory put forward by Zeilinger’s group, but also to demonstrate a number of specific applications.

One of these is the creation of a controlled-NOT gate, a crucial component in certain types of quantum computer. Such a gate takes two quantum bits, or qubits, as input, and flips the state of one of the qubits only if the other qubit has a value of one. Physicists had thought that such a nonlinear operation could not be implemented using purely linear optics. But the Bristol group has used its LPU to demonstrate the validity of a theory put forward by Raymond Laflamme, then at Los Alamos National Laboratory, and colleagues in 2001, which predicted that certain kinds of additional measurement could induce nonlinearities even in purely linear optics.
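
In matrix form (a standard textbook rendering, not the Bristol group’s specific photonic encoding), the controlled-NOT acts on the four two-qubit basis states like this:

```python
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>:
# the target (second) qubit flips only when the control (first) is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

basis = ["|00>", "|01>", "|10>", "|11>"]
for i, label in enumerate(basis):
    state = np.zeros(4)
    state[i] = 1
    out = CNOT @ state
    print(f"{label} -> {basis[int(np.argmax(out))]}")
```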

Boson sampling machine

Among the other measurements they have carried out, Laing and co-workers have also operated the LPU as a “boson-sampling” machine, a kind of simplified quantum computer that carries out one fixed task: working out the probability that photons arriving at a certain combination of input ports generate a particular output. Since boson samplers should be relatively easy to scale up, says Laing, they might allow quantum physicists to put the Church–Turing thesis to the test in the fairly near future. The thesis states that every kind of computation can be carried out on a Turing machine, which is essentially a classical digital computer. However, Laing points out that quantum computation appears to violate that idea.
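
The quantity a boson sampler estimates is classically hard to compute because it involves a matrix permanent. A minimal sketch (my own, for single photons and collision-free outputs, with a randomly generated unitary standing in for a real circuit):

```python
import itertools
import numpy as np

def permanent(M):
    """Permanent of a square matrix, by brute force over permutations."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

# Toy boson sampler: photons enter ports (0, 1) of a random 4-mode
# unitary; the probability of detecting one photon at each of ports
# (2, 3) is |Perm(U_sub)|^2 for the corresponding submatrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)  # QR decomposition yields a random unitary

inputs, outputs = (0, 1), (2, 3)
U_sub = U[np.ix_(outputs, inputs)]
print(f"P(outputs {outputs}) = {abs(permanent(U_sub))**2:.4f}")
```

Brute-force evaluation of the permanent scales factorially with photon number, which is precisely why a modest-sized boson sampler could outpace classical simulation.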

According to Laing, the LPU is completely reprogrammable and can be switched from one experiment to another in milliseconds. He says it could save experimentalists months building a particular linear optics experiment, operating it, and then dismantling it. He even suggests that it might prove to be an asset to theorists. “You don’t need to get your hands dirty using this,” he says. Carolan adds: “It’s exciting to think of all of the things the chip can do that we don’t yet even know about.”

‘Super-cool’, but limited

Paul Kwiat of the University of Illinois at Urbana-Champaign describes the latest work as “a tour de force of integrated optics engineering”. He points out, however, that the Bristol group doesn’t specify the fraction of photons that are lost in the LPU, and that the scheme still relies on external, nonlinear devices to supply and detect the photons. “It’s kind of like demonstrating a super-cool electric car but being limited by batteries that only allow you to drive 10 miles,” he says. “Once someone solves the battery problem, then the car will do amazing things.”

Laing acknowledges that the generation and detection of photons away from the chip introduces losses. The answer to this is to integrate all components, both linear and nonlinear, onto the same chip. “That is challenging,” he says. “But it is the direction that the field is moving in.”

The research is described in Science.

Balancing bicycles, looking back on Trinity, pricing up Pluto and more

By Tushna Commissariat

Mechanics was never my favourite topic when I was studying physics for my BSc, but I think I might have been more interested if we had looked at real-world situations rather than square blocks sliding down an inclined plane. A bicycle that carries on, sans rider, without toppling over for quite a long time, for example, would have got my attention. This is a rather well-known quirk of mechanics, though, and it isn’t even the first time we have discussed it on the blog. Indeed, Physics World‘s James Dacey, a keen cyclist, delved into the topic in 2011. This week, we spotted a new Minute Physics video on the subject, over at ZapperZ’s Physics and Physicists blog. Watch the video to get a good, if a tiny bit rushed, explanation of the three forces that come into play to allow a bicycle at a certain speed to zip along without its human companion. As the video suggests, all is not yet known about the secrets of free-wheeling bicycles, and I have a feeling that we will blog about them again in the years to come.


Physicist who brought symmetry breaking to particle physics dies at 94

The Japanese–American particle physicist Yoichiro Nambu died on 5 July at the age of 94. Nambu shared one half of the 2008 Nobel Prize for Physics, with the other half split between Makoto Kobayashi and Toshihide Maskawa. Nambu won his half of the prize for realizing in 1960 how to apply spontaneous symmetry breaking to particle physics.

Nambu achieved his breakthrough while working on how spontaneous symmetry violation can cause substances to become superconducting. His work inspired Peter Higgs, François Englert and others in the 1960s to develop the theoretical mechanism for the Higgs boson, which was discovered by CERN’s Large Hadron Collider (LHC) in 2012. A year later, Higgs and Englert shared the 2013 Nobel Prize for Physics for building on Nambu’s ideas to predict the existence of the Higgs boson.

In an interview with Physics World in 2004, Higgs acknowledged Nambu’s influence: “Although my name gets thrown around in this context, it was Nambu who showed how fermion masses would be generated in a way that was analogous to the formation of the energy gap in a superconductor.”


Commenting on Nambu’s Nobel prize, Frank Wilczek of the Massachusetts Institute of Technology told Physics World that “Nambu’s work was an act of imagination that was way ahead of its time”. Wilczek, who shared the 2004 Nobel Prize for his work on the strong interaction, added that “It introduced the idea that what we perceive as empty space is, to a deeper vision, a medium that complicates the motion of matter we observe.”

Nambu was born on 18 January 1921 in Tokyo and studied physics at the Imperial University of Tokyo from 1940 to 1942. Like many physicists of his generation, he then worked on military applications of radar. He completed a PhD at the University of Tokyo in 1952 and then worked briefly at Osaka City University before moving to the Institute for Advanced Study in Princeton, US. In 1954 Nambu arrived at the University of Chicago, where he spent the rest of his career and became an American citizen in 1970.

The STEM jobs paradox rumbles on

By Hamish Johnston and Margaret Harris

Are countries such as the UK, the US and Canada suffering from a shortage of scientists and engineers, or are scientists and engineers struggling to find jobs there? Our US correspondent Peter Gwynne reports that, according to a recent survey, physicists in that country can expect to be rewarded with handsome salaries if they work in industry – which suggests that their skills are in great demand. However, over in the New York Review of Books, an article on “The frenzy about high-tech talent” claims that “by 2022 the [US] economy will have 22,700 non-academic openings for physicists. Yet during the preceding decade 49,700 people will have graduated with physics degrees.”

In the past few years, Physics World has published several articles on the “STEM shortage paradox”, where reports of severe skills shortages in science, technology, engineering and mathematics (STEM) coexist with lukewarm – and sometimes borderline alarming – data on employment in these fields. Hence, conflicting reports on career prospects for physicists don’t really surprise us anymore (although this is actually slightly different to what we’ve seen before, in that rosy employment data are going up against a downbeat statement about demand, rather than vice versa). But even so, when two reports point in such different directions, it’s tempting to conclude that one of them must be wrong, or at least missing something important.


Bountiful buckyballs resolve interstellar mystery

It is official: buckminsterfullerene molecules, or “buckyballs”, exist in our galaxy – the Milky Way. The latest, unambiguous detection, by researchers at the University of Basel in Switzerland, not only confirms a 20-year-old prediction but also indicates that C60 might be ubiquitous in space.

It is nearly a century since scientists first detected certain absorption features in starlight from the Milky Way. Since then, these features, called diffuse interstellar bands (DIBs), have also been recorded in spectra from the interstellar medium of other galaxies. Until now, however, researchers had not identified which molecules are responsible for producing these bands.

In 1994 astrophysicists Pascale Ehrenfreund and Bernard Foing, who are now at George Washington University in the US and the European Space Agency, reported on two bands at 9632 Å and 9577 Å. These bands were thought to come from C60+ molecules because C60+ absorbs light at these wavelengths.

Band match

Now, thanks to new spectroscopy experiments on bound C60+ and helium at 5.8 K (the temperature of interstellar space), a team led by John Paul Maier in the Department of Chemistry at the University of Basel has found that the light absorption features of C60+ indeed match those of the two bands identified more than two decades ago. The researchers obtained their result by comparing the light absorption spectra of stars through diffuse interstellar clouds and the electronic spectrum of C60+.

“Scientists have known about DIBs for more than 90 years now and around 400 of them are currently listed in the literature,” explains Maier. “However, until our work on C60+, none of these bands had been unequivocally identified.”

The fact that a molecule as complex as C60+ is present in the interstellar medium (indeed, the two C60+ DIBs have also been observed in protoplanetary nebulae) indicates that it may be commonplace in space and stable in very hostile environments, Maier adds.

Maier notes that it has taken his team 20 years to measure the spectrum of C60+ in the gas phase. “Our motivation to undertake this study started in 1993, when we obtained the light absorption spectrum of C60+ in solid neon at 6 K,” he says. “The following year Foing and Ehrenfreund found the two DIBs that had wavelengths close to those in our laboratory measurements and suggested that the absorbing molecule (or ‘carrier’) in these DIBs was C60+. Confirming this theory has been no easy task and has required measuring the molecule’s absorption in the gas phase and at low temperature.”

Ultracold RF trap

The researchers developed a technique that allows them to trap ions with different masses in a radio frequency trap at ultralow temperatures. “Using such a process, we can trap thousands of C60+ ions,” explains Maier. However, because it is not possible to directly measure the absorption of such low concentrations of C60+, the team bound it with helium. “We knew from experiments that we had already carried out on small ions some time before that the helium has only a very tiny influence on the C60+ and that the absorption spectrum we measured would essentially be that of C60+,” he says.

They proved their assumption by attaching two helium atoms to each C60+ molecule and showing that any remaining shifts in the absorption bands were less than 0.2 Å – much smaller than the precision required when analysing the astronomical data.

The team measured the light absorption of the C60+-He complex by exciting a few hundred of the species in the ion trap with a laser. When excited, the helium atoms are removed. By quantifying the decrease in the number of C60+-He species in the trap, Maier and colleagues were able to obtain the absorption spectrum of C60+ alone. “When we then compared the wavelengths of the laboratory-measured absorption with the astronomical data, we found a perfect agreement – thus proving that C60+ is indeed at the origin of the interstellar absorption, and that it exists in space.”

Circle of life

Their results suggest that C60 is probably produced in dying stars that “push out” the molecule into the planetary nebula and into diffuse interstellar clouds, he adds. “The ionizing radiation present here means that the C60 predominantly exists as the positively charged ion C60+. Ultimately the diffuse clouds provide seeds for new stars to form and the C60+ is thus recycled back into them,” says Maier. “Astrophysicists will now ask themselves whether C60 and its ion are in fact responsible for the formation and presence of smaller molecules identified in the next ‘evolutionary’ phase of diffuse clouds – dense clouds.”

Carbon cousins?

In recent years, astronomical observations have found DIBs across the entire optical range, meaning that they abound in space. Indeed, they have recently been spotted in the Milky Way’s satellite galaxies, known as the Magellanic Clouds, and in other galaxies. The carrier C60+ molecules may consequently be crucial in the formation of organic material across the universe.

The Basel researchers say that they will be trying to find out if any of these DIBs are directly related to derivatives of C60+, such as those containing metals and other elements. “This possibility has already been put forward by the British scientist Harry Kroto (who was awarded the Nobel Prize for Chemistry in 1996 – shared with Robert Curl and Richard Smalley) shortly after C60 was discovered,” explains Maier. “As mentioned, experiments on this topic are very challenging and it took us 20 years to obtain the data we present on C60+ alone. It is now up to the next generation of scientists to continue with this work. I am due to become emeritus professor in a year’s time, so do not have much time left to have a go at solving such fascinating problems – another life is required!” he says.

The research is published in Nature.
