Phosphorene, a new 2D material that is a crystalline allotrope of phosphorus, could be ideal for making photodetectors that work across a range of wavelengths, from the visible to the near infrared. So say researchers in the Netherlands, who are the first to have studied how field-effect transistors (FETs) made from phosphorene respond to light of different frequencies.
Currently, most 2D materials research is focused on graphene, but the fact that this material lacks a direct electronic band gap between its valence and conduction bands means that scientists are now starting to look at other 2D candidates too. A band gap is essential for electronics applications because it allows a material to switch the flow of electrons on and off.
Transition materials
Among the promising newcomers are the transition-metal dichalcogenides (TMDCs). These materials have the chemical formula MX2, where M is a transition metal (such as molybdenum or tungsten) and X is a “chalcogen” (such as sulphur, selenium or tellurium). When they are scaled down from the bulk to monolayers, the materials go from being indirect band-gap semiconductors to direct band-gap semiconductors, which allows them to absorb and emit light efficiently and makes them ideal for use in a variety of optoelectronic devices.
However, there is a problem in that TMDCs respond relatively slowly to light. They also have a large band gap – roughly between 1.5 and 2 eV – and so are only suitable for device applications that work in the visible part of the electromagnetic spectrum. A material with a direct and small band gap, as well as a fast photoresponse, could therefore bridge the gap between graphene (a zero-gap semiconductor) and TMDCs with their large band gaps.
Black layers
Phosphorene, also known as few-layer black phosphorus, is one such material. It can be obtained by mechanically cleaving black phosphorus crystals, in the same way that graphene layers are mechanically exfoliated from bulk graphite. Although researchers have known about bulk black phosphorus since the 1960s, it is only very recently that they have tried to isolate single layers of the material. Just as in graphene, phosphorene’s atoms are arranged in a hexagonal lattice, but with its small, direct band gap (0.3 eV for the bulk material and between 0.33 and 0.81 eV for the device made in this work), phosphorene can quickly switch between insulating and conducting states. The material is also thin enough to confine charge carriers, so that charge flows quickly through the structure – something that leads to the high charge mobilities that are crucial for making ultrafast photodetectors and other electronic devices.
When exposed to visible and near-infrared light, FETs made from phosphorene show a photoresponsivity that reaches 4.8 mA/W, and they respond to light faster than both MoS2- and WS2-based photodetectors. Importantly, the material is also “ambipolar”, which means that it can conduct with both electrons and holes, and so can be used to construct p–n junctions. Finally, phosphorene’s hole mobility can reach nearly 300 cm2/Vs, which is about three to five times that of MoS2, while silicon’s hole mobility is just 100 cm2/Vs.
Light response
The researchers obtained their phosphorene by exfoliating bulk black phosphorus, and then used it to fabricate FETs in the lab. “In contrast to conventional FETs fabricated in the microelectronics industry, our transistors have their conduction channel exposed to allow light to reach the channel. By shining light of different wavelengths onto the channel, we can determine how the transistors respond to this light,” says Michele Buscema of Delft University of Technology.
According to the team, the FETs could make good optical sensors and solar cells. Phosphorene is particularly suited to detection in the near infrared – in night-vision imaging, for example – where TMDCs do not work because of their large band gap. The Delft team is now looking at exploiting the ambipolar behaviour of black phosphorus to build p–n junctions and solar cells.
Two fully functional optical memories on single chips have been fabricated by researchers in Japan. The devices use bistable optical cavities to store the bits, and allow multiple bits to be controlled simultaneously by the same waveguide. The researchers hope that, in future, such a memory could be used for optical logic operations to increase the speed of computation.
Today, optical fibres are the material of choice for transmitting data, thanks to their lower signal attenuation compared with copper wires and their much higher bandwidth. Currently, however, optical signals have to be converted into electronic ones for processing, and the output then has to be converted back into an optical signal. Such conversions consume energy and time, and fail to exploit the biggest advantage of optical transmission – that photons do not interfere with each other, meaning that several signals with different frequencies can travel down one fibre simultaneously in a process known as “multiplexing”. Photonic signals have to be “demultiplexed” before an electronic processor can deal with them, and so optical processors are of interest to many researchers.
Random memories
A key element in any processing unit is the random access memory (RAM), in which data are stored temporarily while the computer runs a program. A modern electronic RAM usually stores each bit of memory as the charge on a capacitor, and various optical equivalents have been proposed. In 2012 Masaya Notomi and colleagues at NTT Laboratories in Kanagawa, Japan, designed a four-bit RAM made from a photonic crystal – a periodic optical nanostructure comprising a network of holes that allows some wavelengths of light to propagate while blocking others. Inside the photonic crystal were four identical cavities that had two possible refractive indices – a pulse of light at the cavity’s resonant frequency would allow a switch between the two indices, while light at a different frequency would reveal the cavity’s state without disturbing it. By designating the two states as 0 and 1, the researchers created a readable and rewritable memory. However, each of the cavities had to be controlled by a separate waveguide.
Now, the same researchers have made the cavities much smaller and non-identical, allowing them to introduce multiplexing. They created two different types of optical RAM – one made from silicon and the other from indium phosphide and indium gallium arsenide phosphide. In each RAM, multiple cavities were arranged lengthways, with a single waveguide running past all of them. The researchers used computer modelling to work out exactly how to move specific holes in the photonic crystal so that each cavity had a slightly different resonant frequency. They could then send down the waveguide a “write” pulse containing the frequencies of whichever bits they wanted to switch, and only those cavities would respond.
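As a rough illustration of that wavelength-addressing scheme, here is a minimal sketch in which each cavity is modelled as a bistable bit with its own resonant wavelength. The bit count, wavelength values and simple toggling behaviour are simplifying assumptions made for illustration, not details taken from the NTT devices.

```python
# Toy model of wavelength-multiplexed addressing in an optical RAM.
# Assumptions (not from the paper): 8 bits, evenly spaced resonances,
# and a write pulse that simply toggles any cavity it is resonant with.

NUM_BITS = 8
BASE_NM = 1540.0
SPACING_NM = 0.23  # average spacing quoted for the silicon device

# Each cavity (bit) gets a slightly different resonant wavelength
resonances = {bit: BASE_NM + bit * SPACING_NM for bit in range(NUM_BITS)}
memory = {bit: 0 for bit in range(NUM_BITS)}  # all cavities start in state 0

def write_pulse(wavelengths_nm, tolerance_nm=0.05):
    """Toggle every cavity whose resonance matches a wavelength in the pulse."""
    for bit, res in resonances.items():
        if any(abs(res - w) <= tolerance_nm for w in wavelengths_nm):
            memory[bit] ^= 1  # flip between the two bistable states

# A single pulse on the shared waveguide switches bits 2 and 5 only
write_pulse([resonances[2], resonances[5]])
print(memory)  # bits 2 and 5 are now 1; every other cavity is untouched
```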
Stable lifetimes?
The silicon RAM contained 105 working cavities, all fabricated on a silicon crystal just 1 mm long, with the resonant wavelengths falling between 1540 nm and 1570 nm at an average spacing of just 0.23 nm. Unfortunately, the cavity states were stable for less than 10 ns – too short for a viable optical memory. The lifetime of the bits in the indium-phosphide-based RAM, however, was in principle infinite. Because indium phosphide is less well established in industry than silicon, the technology for manufacturing indium-phosphide components is less precise, and so Notomi and colleagues could only produce a 28-bit memory. Even so, they believe this provides a better blueprint for future research. “Our final goal is to produce better indium-phosphide systems by improving the fabrication accuracy,” says Notomi.
Martin Hill of the University of Western Australia in Crawley describes the paper as “a nice piece of work on a difficult area of photonics”. But he also points out that, at present, the switching speed of the optical cavities is lower than the switching speed of electrical transistors, and says that before the device becomes useful as a product, the researchers need a way of making the switching frequencies more predictable and reproducible.
Physics may aim for simplicity, yet the world it describes is a mess. There is disorder wherever we look, from an ice cube melting to the eventual fate of the cosmos. Of course, physicists are well aware of that untidiness and have long used the concept of “entropy” as a measure of disorder. One of the pillars of physical science, entropy can be used to calculate the efficiency of heat engines, the direction of chemical reactions and how information is generated. It even offers an explanation for why time flows forwards, not backwards.
Our definition of entropy is expressed by one of the most famous formulae in physics, and dates back over a century to the work of the Austrian physicist Ludwig Boltzmann and the American chemist J Willard Gibbs. For more than 20 years, however, the Greek-born physicist Constantino Tsallis, who is based at the Brazilian Centre for Physics Research (CBPF) in Rio de Janeiro, has been arguing that entropy is in need of some refinement. The situation, according to Tsallis, is rather like Newtonian mechanics – a theory that works perfectly until speeds approach that of light, at which point Einstein’s special theory of relativity must take over.
Likewise, says Tsallis, entropy – as defined by Boltzmann and Gibbs – works perfectly, but only within certain limits. If a system is out of equilibrium or its component states depend strongly on one another, he believes an alternative definition should take over. Known as “Tsallis entropy” or “non-additive entropy”, it was first proposed by Tsallis himself in a 1988 paper (J. Stat. Phys. 52 479) that has gone on to become the most cited article written by a scientist (or group of scientists) based in Brazil. So far it has clocked more than 3200 citations, according to the Thomson Reuters Web of Science.
To many who study statistical mechanics, Tsallis entropy makes for a much broader view of how disorder arises in macroscopic systems. “Tsallis entropy provides a remarkable breakthrough in statistical mechanics, thermodynamics and related areas,” says applied mathematician Thanasis Fokas at the University of Cambridge in the UK. In fact, Fokas goes as far as saying that subsequent work motivated by Tsallis’s discovery has been “a new paradigm in theoretical physics”.
Tsallis entropy has, though, been divisive, with a significant number of physicists believing he has not uncovered anything more general at all. But the voices of these detractors are fast being lost in the crowd of support, with Tsallis’s original paper being applied to everything from magnetic resonance imaging to particle physics. So are these applications exploiting a truly revolutionary theory? Or to put it another way: is Tsallis to Boltzmann and Gibbs what Einstein was to Newton?
Old concept
Entropy as a physical property was introduced by the German physicist Rudolf Clausius in the mid-1860s to explain the maximum energy available for useful work in heat engines. Clausius was also the first to restate the second law of thermodynamics in terms of entropy, by saying that the entropy, or disorder, of an isolated system will always increase, and that the entropy of the universe will tend to a maximum. It was not until the work of Boltzmann in the late 1870s, however, that entropy became clearly defined according to the famous formula S = kB ln W. Here S is entropy, kB is the Boltzmann constant and W is the number of microstates available to a system – in other words, the number of ways in which a system can be arranged on a microscopic level.
Boltzmann’s formula – so famous that it is carved on his gravestone in Vienna (as S = k log W) – shows that entropy increases logarithmically with the number of microstates. It also tends to class entropy as an “extensive” property – that is, a property, like volume or mass, whose value is proportional to the amount of matter in a system. Double the size of a system, for instance, and the entropy ought to double too – unlike an “intensive” property such as temperature, which remains the same no matter how large or small the system.
One example of entropy being extensive is a set of N coins. Each coin has two states that can occur with equal probability – heads or tails – meaning that the total number of states for the coins, W, is 2^N. That number can be entered into Boltzmann’s formula, and, given that an exponent inside a logarithm can be moved to the front of the logarithm as a multiplier, the expression simplifies to S = NkB ln 2. In other words, the entropy is proportional to N, the number of coins, or matter, in the system; by Boltzmann’s definition, it is extensive.
Boltzmann’s formula is not, though, the final word on entropy. A more general Boltzmann–Gibbs formula is used to describe systems containing microstates that have different probabilities of occurring. In a piece of metal placed in a magnetic field, for example, the spins of the electrons inside are more likely to align parallel than antiparallel to the field lines. In this scenario, where one state (parallel alignment) has a much higher probability of occurring than the other (anti-parallel alignment), the entropy is lower than in a system of equally likely states; in other words, the alignment imposed by the magnetic field has made the system more ordered. Nonetheless, the entropy here is still extensive: double the electrons, double the entropy.
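To put rough numbers on that (using the Boltzmann–Gibbs expression written out in the “Tsallis entropy defined” box below, and choosing illustrative probabilities of 0.5/0.5 for the unbiased spins and 0.9/0.1 for the field-biased ones), the entropy per spin drops from

```latex
S_{\text{unbiased}} = -k_\mathrm{B}\left(\tfrac{1}{2}\ln\tfrac{1}{2} + \tfrac{1}{2}\ln\tfrac{1}{2}\right) \approx 0.69\,k_\mathrm{B}
\quad\text{to}\quad
S_{\text{biased}} = -k_\mathrm{B}\left(0.9\ln 0.9 + 0.1\ln 0.1\right) \approx 0.33\,k_\mathrm{B},
```

and in both cases N independent spins simply give N times the single-spin value, so the entropy remains extensive.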
Wide benefits Tsallis entropy has been used to describe (clockwise from top left): fluctuations of the magnetic field in the solar wind; cold atoms in optical lattices; signs of breast cancer in mammograms; and particle debris generated at the Large Hadron Collider. (Courtesy: From top left: iStock/SERG_AURORA; I Bloch, MPQ; Chris Bjornberg/Science Photo Library; CERN/CMS Collaboration)
Unfortunately, it is not always possible to keep entropy extensive when calculating it with the Boltzmann–Gibbs formula, says Tsallis, and this, in his view, is the crucial point. He believes that entropy is extensive not just some of the time, but all of the time; indeed, he believes that entropy’s extensivity is mandated by the laws of thermodynamics. Calculations must always keep entropy extensive, he says – and if they ever suggest otherwise, those calculations must change. “Thermodynamics, in the opinion of nearly every physicist, is the only theory that will never be withdrawn,” Tsallis insists. “The demands of thermodynamics must be taken very seriously. So if Boltzmann–Gibbs entropy does not do the job, you must change it so it does do the job.” For Tsallis, thermodynamics is a pillar of physics and must not be tampered with at any cost.
As to why thermodynamics restricts entropy to being extensive, he says, there are two main arguments. One is a complex technical argument from large deviations theory, a subset of probability theory. But another, simpler, argument is based on intuition. Thermodynamic functions depend on one or more variables, which for most systems can be either intensive or extensive. However, it is possible to switch a function that depends on an intensive variable to a version that depends on a corresponding extensive variable, and also vice versa, by using a mathematical “Legendre transformation”. For instance, a Legendre transformation can switch a function for energy that depends on temperature to one that depends on entropy – and since temperature is an intensive variable, this implies that entropy must be correspondingly extensive. “The Legendre transformation is the basic mathematical ingredient that makes thermodynamics work,” says Tsallis. “And you quickly see that entropy must be in the extensive class.”
Systems in which the Boltzmann–Gibbs formula does not keep entropy extensive include those that are out of equilibrium, or where the probability of a certain microstate occurring depends strongly on the occurrence of another microstate – in other words, when the elements of a system are “strongly correlated”.
As an example of such correlation in statistics, Tsallis gives linguistics. Take four words almost at random, for example “one”, “many”, “child” and “children”, and you might expect to find, via probability theory, 4 x 4 = 16 possibilities for two-word phrases. As it happens, many of these possibilities are not permitted – you cannot say “one children” or “child many”. There are, in fact, only two syntactically correct possibilities: “one child” and “many children”. Grammar produces strong correlations between certain words, and so greatly reduces the number of allowed possibilities, or entropy.
There are other obvious examples in the physical world of strong correlations affecting entropy. In the presence of a whirlpool, for instance, water molecules do not take just any path, but only those that together give the overall resemblance of a vortex, because the molecules’ motions are correlated. And it turns out that in any system with strong correlations, the number of possible microstates, W, no longer increases exponentially with the number of elements, N, as it does in the coin example where W = 2^N; instead, it might, say, follow a power of N such as W = N^2.
This is a problem for the Boltzmann–Gibbs expression of entropy, says Tsallis, because N can no longer be brought out of the logarithm as a multiplier – only the fixed exponent 2 can. The formula is now written as S = kB ln N^2, which simplifies to S = 2kB ln N. In other words, entropy is no longer proportional to N; it is forced to be non-extensive. “If you keep using Boltzmann–Gibbs entropy, you are going to violate extensivity,” says Tsallis. “And I don’t want that.”
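A quick numerical sketch of this contrast (illustrative only, with entropy measured in units of kB) shows how doubling the system size doubles the entropy when W = 2^N but not when W = N^2:

```python
import math

# Entropy in units of kB: S = ln W for W equally likely microstates
def boltzmann_entropy(W):
    return math.log(W)

for N in (100, 200):
    S_independent = boltzmann_entropy(2 ** N)  # W = 2^N, e.g. N independent coins
    S_correlated = boltzmann_entropy(N ** 2)   # W = N^2, strongly correlated elements
    print(f"N = {N}: S_independent = {S_independent:.1f}, S_correlated = {S_correlated:.1f}")

# Going from N = 100 to N = 200 doubles S_independent (69.3 -> 138.6, extensive)
# but barely changes S_correlated (9.2 -> 10.6, non-extensive).
```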
A cloudy idea
None of this was clear to Tsallis back in 1985. At that time he was at a meeting in Mexico City about statistical mechanics, when the study of fractals was becoming fashionable. Fractals are shapes that can be broken down into parts, each of which retains the statistical character of the whole, and are found throughout nature in, for example, lightning bolts, clouds, coastlines and snowflakes. Look closely at one of the arms of a snowflake, for instance, and it is possible to discern features that resemble the snowflake’s overall shape.
A mathematical generalization of a fractal is a “multifractal”, which describes such hierarchical structures using probabilities raised to a power, q (that is, p^q). Tsallis describes how, during a coffee break at the meeting in Mexico City, he stayed behind in a room where another professor was explaining this concept to a student. “I couldn’t hear them,” he recalls, “but I knew they were talking about multifractals because of their writing on the blackboard – probability to the power q. And suddenly it came to my mind that that could be used to generalize Boltzmann–Gibbs entropy.”
Tsallis believes he instantly thought of entropy because the famous Boltzmann–Gibbs formula was always somewhere in his mind, “as it is for every statistical mechanist in the world”. But having written down a new formula, he did not know what, if anything, he had discovered. For two years he mulled over its implications, until a workshop in Maceió, Brazil, where he discussed it with two physicist colleagues, Evaldo Curado of CBPF and Hans Herrmann, who is now at ETH Zurich in Switzerland. “They were very stimulating, both of them,” Tsallis says.
From the discussion with Curado and Herrmann as well as with others around that time, Tsallis realized that his expression for entropy could be used to preserve the property’s extensive nature in cases when the Boltzmann–Gibbs formula makes it non-extensive – that is, in systems with strong correlations. Leaving Maceió, on a plane back to Rio, he performed calculations to convince himself that his formula worked, and then looked upon it with admiration. “I found it very cute, very pretty,” he recalls.
Making sense of disorder Constantino Tsallis feels that our conventional understanding of entropy, as developed by Ludwig Boltzmann and J Willard Gibbs, works only within certain limits and that for systems that are out of equilibrium or host to strong correlations his alternative definition should take over. (Courtesy: Centro Brasileiro de Pesquisas Físicas)
The new expression, called by him non-additive entropy and by others Tsallis entropy, derives its merit from the exponent, q, of the probability (see “Tsallis entropy defined” below). When the correlations in a system are weak or non-existent, q tends to one and the expression reduces to the standard Boltzmann–Gibbs formula. However, when the correlations in a system are strong, q becomes more or less than one to “bias” the probabilities of certain microstates occurring. The parameter q, which is now called the Tsallis index by proponents of the theory, is therefore a way of characterizing a system’s correlations – particularly how strong they are.
Three years after his formulation of non-additive entropy, in 1988, Tsallis published his Journal of Statistical Physics paper on the topic. For five years, few scientists outside Brazil were aware of it, but then its popularity skyrocketed – possibly due to research showing how non-additive entropy could be used in astrophysics to describe the distribution functions of self-gravitating gaseous-sphere models, known as stellar polytropes. Since then it has been used to describe, for example, fluctuations of the magnetic field in the solar wind, cold atoms in optical lattices, and particle debris generated at both the Large Hadron Collider at CERN in Switzerland and at the Relativistic Heavy Ion Collider at the Brookhaven National Laboratory in the US. In these cases, unlike Boltzmann–Gibbs entropy, Tsallis entropy is claimed to describe much more accurately the distribution of elements in the microstates; in the case of the LHC, these elements are the momenta of hadrons. More recently, Tsallis entropy has been the basis for a swathe of medical physics applications.
Defenders and detractors
Many people – notably the US physicist Murray Gell-Mann, who won the 1969 Nobel Prize for Physics for his theoretical work on elementary particles – agree that Tsallis entropy is a true generalization of Boltzmann–Gibbs entropy. But there are many detractors too, whose principal charge is that the Tsallis index q is a mere “fitting parameter” for systems that are not well enough understood.
Naturally, Tsallis disagrees. If the fitting-parameter accusation were true, he says, it would not be possible to obtain q from first principles – as he did in 2008, together with quantum physicist Filippo Caruso, who was then at the Scuola Normale Superiore di Pisa in Italy. Tsallis and Caruso showed that q could be calculated from first principles for part of a long, 1D chain of particle spins in a transverse magnetic field at absolute zero. The value of q, which was not equal to one, reflected the fact that quantum effects forced some of the spins to form strong correlations (Phys. Rev. E 78 021102).
This calculation required knowledge of the exact microscopic dynamics, which is not always available. In situations where the dynamics are not known, says Tsallis, q does indeed have to be obtained by fitting experimental data, but he claims that doing so is no different from how other accepted theories are employed in practice.
As an example, Tsallis cites the orbit of Mars, which could be calculated from first principles – but only if both the distribution of all the other planets at a given moment, and the initial conditions of masses and velocities, were all known. Clearly, he says, that is impossible. “For the specific orbit, astronomers collect a lot of data with their telescopes, and then fit that data with the elliptic form that comes out of Newton’s law [of gravitation], and then you have the specific orbit of Mars,” he adds. “Well, here, it’s totally analogous. In principle, we would always like to be able to calculate q purely from mechanics, but it’s very hard, so q often has to be obtained from fitting.”
Mathematical physicist Henrik Jensen at Imperial College London takes a more nuanced view. He says that, for many years, proponents of Tsallis statistics did in fact make their case by calling attention to its greater ability to fit to data. But this, he says, is no longer true. “In the last couple of years work…has demonstrated that one might arrive at Tsallis statistics from very general assumptions about how complex correlated systems behave,” he adds.
That the Tsallis index is merely a fitting parameter is not the only criticism, however. In 2003 physicist Michael Nauenberg at the University of California, Santa Cruz claimed that Tsallis statistics is, for various technical reasons, incompatible with the zeroth law of thermodynamics, which states that two systems at different temperatures placed in thermal contact will reach thermal equilibrium at some intermediate temperature (Phys. Rev. E 67 036114). “Boltzmann–Gibbs statistics leads to this law, but Tsallis statistics violates it,” says Nauenberg. Why that should be the case is a rather technical argument, but he claims that if a thermometer were made from a substance whose entropy could only be described with Tsallis statistics, it would not be able to measure the temperature of ordinary matter.
“Tsallis statistics is a purely ad hoc generalization of Boltzmann–Gibbs statistics,” Nauenberg continues. “But since the appearance of Tsallis’s paper, applications of the new statistics have been made, without any justification whatsoever, to virtually every system under the Sun. As a fitting technique it may have some merits, but it is not a valid generalization of Boltzmann–Gibbs statistics.”
Eugene Stanley, a statistical and econophysicist at Boston University in the US, believes Nauenberg’s criticism is misplaced. He says that the zeroth law of thermodynamics is an “important and quite subtle” point that is still being explored for systems with strong correlations. “I suspect that many people don’t have a clear idea about a very deep question such as the extended validity of the zeroth principle of thermodynamics. Up to now, everything seems consistent with the possibility that the zeroth principle also holds for [Tsallis] systems, which violate Boltzmann–Gibbs statistical mechanics.”
Certainly, not everyone is convinced by the new theory of entropy, and the debates look set to continue. But on the wall of his office, Tsallis has posters of both Einstein and Boltzmann – perhaps in the subconscious hope that he will one day be known for overturning conventional statistical mechanics, as Einstein’s special theory of relativity overturned classical mechanics.
“Any physicist is supposed to know that classical mechanics works only when the masses are not too small and not too fast,” says Tsallis. “If they’re very small, you have to use quantum mechanics, and if they’re very fast, you have to use relativity.” But with statistical physics being one of the pillars of contemporary physics – and an obligatory subject in physics degree courses all over the world – he feels that students should be taught its limitations. “They should learn where Boltzmann–Gibbs statistics works, and where it doesn’t.”
If Tsallis’s ideas hold sway, that equation on Boltzmann’s gravestone may soon need updating.
Tsallis entropy defined
Standard Boltzmann entropy, where the probabilities of all microstates are equal, is given by the classic equation S = kB ln W, where S is entropy, kB is the Boltzmann constant and W is the total number of microstates in the system.
If the system has lots of different microstates, i, each with its own probability pi of occurring, this equation can be written as the Boltzmann–Gibbs entropy S = –kB Σi pi ln pi.
Tsallis entropy, Sq, is claimed to be useful in cases where there are strong correlations between the different microstates in a system. It is defined as Sq = kB(1 – Σi pi^q)/(q – 1),
where q is a measure of how strong the correlations are. The value of q is either more or less than one in such systems – effectively to bias the probabilities of certain microstates occurring – but in the limit where q approaches 1, Tsallis entropy reduces to the usual Boltzmann–Gibbs entropy. The parameter q is called the Tsallis index by proponents of the theory.
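A minimal numerical sketch of this definition and its q → 1 limit (the example probabilities are arbitrary, and entropy is measured in units of kB):

```python
import math

def tsallis_entropy(probs, q):
    """S_q = (1 - sum(p_i**q)) / (q - 1), in units of kB, for q != 1."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def boltzmann_gibbs_entropy(probs):
    """S = -sum(p_i * ln p_i), in units of kB."""
    return -sum(p * math.log(p) for p in probs if p > 0)

probs = [0.5, 0.3, 0.2]  # arbitrary example distribution
for q in (2.0, 1.5, 1.1, 1.001):
    print(f"q = {q}: S_q = {tsallis_entropy(probs, q):.4f}")
print(f"Boltzmann-Gibbs limit: {boltzmann_gibbs_entropy(probs):.4f}")
# As q approaches 1, S_q converges to the Boltzmann-Gibbs value.
```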
Medical applications of Tsallis entropy
In recent years, one of the most active fields in which Tsallis statistics has been applied is medical physics. In 2010, for instance, medical physicist Luiz Murta-Junior and colleagues at the University of São Paulo in Brazil applied Tsallis statistics to magnetic resonance imaging (MRI), to help them to delineate different types of tissue in the brain. A loss of grey matter, for example, can be a sign of neurodegenerative diseases such as multiple sclerosis, which is why doctors turn to MRI to see how much grey matter there is relative to other tissues.
In any MRI scan, different tissues appear as different shades of grey, but each of these shades is actually made up from pixels with a range of different luminosities. The trick therefore is to work out the top and bottom thresholds in luminosity for each tissue – for instance, grey matter may contain pixels with luminosities between 20 and 90 on an eight-bit scale. This range corresponds to a certain value of entropy, since the greater the spread of luminosity values the greater the “disorder”. If there are just two different tissues in an MRI scan – grey matter and white matter – a scientist can analyse the image to determine the distribution of each tissue using an algorithm that adjusts two entropy variables until their total is a maximum.
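The maximization step can be sketched as follows. This is a minimal illustration of that kind of entropy-based thresholding, written with the ordinary Boltzmann–Gibbs (Shannon) form and an invented luminosity histogram; the Tsallis version used by the São Paulo group replaces the two entropy terms with their q-dependent counterparts.

```python
import math

def shannon_entropy(counts):
    """Entropy of a normalized histogram region (Boltzmann-Gibbs/Shannon form)."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def best_threshold(histogram):
    """Pick the luminosity threshold that maximizes the summed entropy
    of the two regions (two tissue classes) it creates."""
    best_t, best_H = None, -1.0
    for t in range(1, len(histogram)):
        H = shannon_entropy(histogram[:t]) + shannon_entropy(histogram[t:])
        if H > best_H:
            best_t, best_H = t, H
    return best_t

# Invented 8-bit luminosity histogram with two broad peaks (two "tissues")
hist = [0] * 256
for centre, height in ((60, 40), (160, 30)):
    for i in range(256):
        hist[i] += int(height * math.exp(-((i - centre) ** 2) / (2 * 15 ** 2)))

print("threshold between the two tissues:", best_threshold(hist))
```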
Obvious benefits A functional magnetic resonance imaging scan of a brain (left) that has been analysed first with conventional statistics (middle) and then with Tsallis entropy (right), which more clearly reveals different kinds of brain tissue. (CC-BY Braz. J. Med. Biol. Res.)
An algorithm based on Boltzmann–Gibbs entropy, and typical extensions of it, can do this. But according to Murta-Junior and colleagues, Boltzmann–Gibbs entropy does not allow for long-range correlations between pixels, which can arise in regions with complex, fractal-like shapes. The São Paulo researchers therefore turned to Tsallis entropy, and found that it could delineate grey matter from white matter and cerebrospinal fluid much more precisely (Braz. J. Med. Biol. Res. 43 77). “By accurately segmenting tissues in the brain, neurologists can diagnose the loss of grey matter earlier, and patients can be treated sooner with much better results,” says Murta-Junior.
In the same year as the São Paulo group’s research, electrical engineers at the Indian Institute of Technology Kanpur used Tsallis statistics to improve the detection in mammograms of mineral deposits known as microcalcifications, which are sometimes a sign of breast cancer. And in 2012 computer scientists at the Changchun University of Science and Technology in China again used Tsallis entropy with MRI, this time as an aid for image-guided surgery. This suggests that the debates about the fundamental validity of Tsallis statistics are scarcely deterring those wishing to make use of it.
A solid-state supercapacitor that works under great stresses and vibrations has been developed by researchers from the US. Unlike traditional supercapacitors, the new design does not delaminate under stress and could lead to a variety of practical applications, from more-efficient devices to renewable-energy storage.
Unlike batteries – which work through chemical reactions – supercapacitors store energy in the form of electrically charged ions, which are assembled on the surfaces of porous electrodes. Supercapacitors also have numerous benefits: they can charge and discharge in minutes – unlike batteries, which take hours – and have a much longer lifespan, lasting for millions of cycles rather than thousands. Their drawback, however, comes in their reduced storage capacity – to hold a given charge, most supercapacitors need to be much larger and heavier than an equivalent lithium-ion battery.
Heavy-weight storage
One idea to overcome this limitation lies in creating supercapacitors that act as both energy storage and structural support. By making the otherwise “dead weight” of structural materials do double duty, devices could exploit the rapid charging and long lifespan of supercapacitors without needing a separate internal power source. This concept of a “structural supercapacitor” could have many potential applications – for example, a laptop whose case acts as a battery, renewable energy stored within the walls of a house, or even a rapidly charging electric car that stores power in its own chassis. To be used as such a structural device, the supercapacitor would need to work under considerable stresses and vibrations. Traditional supercapacitors are ill-suited to this task – being layered, their electrodes and electrolytes are prone to separate when such forces are applied.
Load bearing The engineers suspended a heavy laptop from the supercapacitor to demonstrate its strength. (Courtesy: Vanderbilt Nanomaterials and Energy Devices Laboratory)
To overcome this problem, a team of researchers, led by Cary Pint from Vanderbilt University in Tennessee, has engineered a supercapacitor with better-integrated layers. The design features electrodes made from silicon wafers, the inner sides of which are electrochemically etched to create a surface covered in nano-sized pores. These are then coated with a protective, ultra-thin layer of carbon, before being vacuum-squeezed together around an ion-conducting polymer. This electrolyte seeps into the silicon’s nanopores, setting into a strong mechanical bond, which does not come apart easily.
In tests, the researchers found that the design offers an energy density of up to 10 W h/kg and operates perfectly even under stresses of 44 psi and vibrational accelerations over 80 g – the latter being greater than the forces acting within working jet engines. While designs for structural supercapacitors have been reported previously (mostly based around the use of carbon fibres), the team’s design is capable of 3 to 4 orders of magnitude more charge storage – comparable, in fact, with (regular) commercial supercapacitors. Pint points out that the design showed that its “performance is not compromised while we operate the device under compression, shearing, tensile stretching, high-amplitude vibrations, and impact forces”. He adds that the manufacturing process is simpler than for traditional supercapacitors and involves minimal cost, with both the component materials and the production process being relatively cheap. Furthermore, the constituent parts are both bio-friendly and non-flammable, thereby removing some of the safety concerns traditionally associated with lithium-ion batteries.
“The study is also a nice contribution to the larger area of research on mechanically robust electrodes for electrochemical energy-storage systems,” says Vadym Mochalin, a nanomaterial expert at Drexel University in Philadelphia, who was not involved in the research. “[This] will likely inspire similar designs for lithium-ion batteries, micro-supercapacitors on chip, sensors, and other devices.”
What does it mean to be a scientist from an ethnic minority background? Is it harder to get career breaks and to reach the top of a field? Can your background actually be a source of inspiration? Is it even useful to anyone to be discussing these questions?
These are among the issues touched upon in a new series of video interviews with 10 British scientists with minority ethnic heritage. The interviews were conducted by researchers at the British Library as part of a larger audio history project commissioned by the Royal Society called Inspiring Scientists: Diversity in British Science. You can watch all 10 interviews on the Royal Society website.
Particle physicist Roberto Battiston has been appointed president of the cash-strapped and demoralized Italian Space Agency (ASI). Previous agency boss Enrico Saggese quit the post in February following the start of corruption investigations against him and the organization has since been in the hands of a temporary commissioner. Battiston, a professor at the University of Trento, will try to restore confidence in the organization but will need to find a way of funding Italy’s contribution to the European Space Agency (ESA) while at the same time financing major domestic missions.
Saggese, an electronic engineer, resigned after police raided his office and home, together with those of six other people suspected of involvement in bribery. Prosecutors had opened investigations into a number of contracts awarded by ASI, and were also looking into a €1m holiday supposedly paid for with agency money and several dubious consultancies. Saggese told the then research minister Maria Chiara Carrozza that he was completely “extraneous” to the allegations and that he had stepped down “in the interest of ASI’s international prestige”.
Aerospace relaunch
The government appointed law professor Aldo Sandulli to lead the agency while the search for a new president got under way. Carrozza set up a three-person committee, coordinated by materials physicist Fabio Beltram, to produce a shortlist of candidates. The committee received 55 applications for the job, from which it drew up a list of five names, which, in addition to Battiston, included that of one-time ASI president Giovanni Bignami. On 16 May, new research minister Stefania Giannini announced her choice. Describing Battiston as “an excellent physicist”, Giannini said in a press release that his appointment would allow “the relaunch of the aerospace industry, which has an incalculable strategic value for our country”.
Spending his early career working on particle accelerators, Battiston carried out research that, among other things, contributed to the discovery of the W and Z bosons. For much of the last 20 years, in contrast, he has been using the knowledge gained with accelerators to develop space-based detectors for precision studies of cosmic rays. In particular, he has been deputy principal investigator for the Alpha Magnetic Spectrometer (AMS), a $2bn instrument attached to the International Space Station that is designed to search for dark matter by measuring fluxes of high-energy electrons and positrons.
Battiston, who moved to the University of Trento from the University of Perugia in 2012 to set up a new institute dedicated to astroparticle physics and technology, should be joined at ASI by a new board of directors within the next few weeks. His term as president is due to last for four years and can be renewed for a further four.
Budget management
Battiston feels that the technical and management experience that he has gained with the AMS project, which involves 600 physicists from 56 institutes from around the world, will stand him in good stead when it comes to his new position. But he recognizes that being in charge of a 240-strong research institute will not be easy. “I am satisfied to have been chosen,” he says, “but that satisfaction is mixed with a sense of how demanding the task will be.” Indeed, Battiston will have his work cut out in trying to manage the agency’s funding. The annual budget provided by the research ministry remains at €503m, having fallen from €570m in 2011. Of that, some €400m goes to ESA, which means that only around €100m a year remains for domestic activities.
Battiston told news agency Adnkronos that ASI’s funding is currently “unbalanced” and that a greater share of it should be dedicated to national projects. “It is not that the ESA programmes don’t give an industrial and scientific return to our country,” he is quoted as saying, “but Italy must also have its own strategic autonomy.”
ASI watchers say that there are not currently enough funds to launch the second generation of Italy’s COSMO-SkyMed, a group of satellites that provide Earth observation for both military and civilian purposes. In addition, they say that the agency will struggle to contribute its share of the costs needed to develop a new Ariane 6 rocket launcher as well as develop a new version of the Vega launcher.
Nathan Berkovits has a very simple office. There’s a desk with a computer, a crumpled sofa, a few chairs and a whiteboard covered with equations. But then you don’t really need anything too fancy if you’re a string theorist – apart from an innate mathematical aptitude for describing the fundamental interactions of nature in terms of 1D strings. In fact, the relatively low cost of theoretical physics has always made it a popular choice for emerging nations to kick-start their scientific efforts. Brazil is no exception: the Institute for Theoretical Physics (IFT) at São Paulo State University (UNESP), where Berkovits is based, dates back to 1951, when Brazilian science was just getting off the ground.
But for the past two years, Berkovits has had new things on his mind other than wondering why string theory only holds true in a 10D world or figuring out how to “compactify” the extra dimensions so that it tallies with our familiar 4D world. That’s because in February 2012 he took over as acting head of the first overseas offshoot of the renowned International Centre for Theoretical Physics (ICTP) in Italy. Rather cumbersomely known as the ICTP South American Institute for Fundamental Research, the ICTP-SAIFR has the same aims as its parent institute, which are to carry out top-quality theoretical physics, host scientific schools and workshops, and invite leading scientists to stay.
The ICTP-SAIFR is located on the first floor of the existing six-storey IFT building in the downtown Barra Funda district of São Paulo – the largest city in South America. It may look like many other university buildings, but the institute is remarkable in that it is the physical realization of a vision of the ICTP’s founding father Abdus Salam. When he set up the ICTP on Italy’s Adriatic coast in 1964, Salam wanted it to pursue world-class research and nurture scientists in the developing world, which it has done to great effect in the intervening 50 years. But Salam, who died in 1996, also wanted the ICTP to set up regional satellite centres in developing nations.
Salam’s vision was finally realized in 2012 with the founding of ICTP-SAIFR, which receives most of its budget from the São Paulo Research Foundation (FAPESP), with additional support from the ICTP and UNESP. Some 20 UNESP professors are affiliated to the IFT, and the first of a further five permanent faculty members of the new institute – particle theorist Eduardo Pontón from Columbia University – has already been recruited. These faculty are being sought out by a prestigious international search committee featuring star names such as string theorist Ed Witten, cosmologist Martin Rees and particle theorist David Gross. “We are interested in experienced people who can come and set up a new group of promising theorists at the start of their careers,” says Berkovits.
International appeal
The ICTP-SAIFR has so far run 10 international schools, eight mini-courses and seven workshops attended by some 1000 visitors. It is also home to nine full-time postdocs, one of whom is Riccardo Sturani, who moved to São Paulo from Italy in March 2013 after his contract at the University of Urbino ended. Sturani models cosmological sources of gravitational waves and calculates what these signals might look like if seen at detectors such as LIGO in the US and VIRGO in Italy. But with no previous connection with the ICTP, Sturani certainly would not be living and working in São Paulo were it not for the new institute. “The institute is very good – it’s active and you can discuss your work a lot,” he says. “The only drawback is that living in São Paulo is very hard for me – I’m not used to big cities and I don’t often get to see my friends back home.”
Despite the loud, noisy, big-city vibe, Sturani admits he would consider staying in São Paulo once his contract is up. In fact, the many charms of the city – including its friendly people and vibrant cultural life – can be hard to resist. Just ask the US-born Berkovits, who studied physics at Harvard University and did a PhD at the University of California, Berkeley before moving to Brazil in the mid-1990s. Back then, many home-grown Brazilian physicists still opted to further their careers by working abroad, so going in the opposite direction might have seemed an odd move for a young researcher. “Actually, I saw it as a challenge,” he recalls. “It felt like something new for me, plus I like Brazil and it’s enjoyable living here. People are optimistic, friendly and helpful, and see the country getting better.”
But one thing that has not changed in the 20 years since Berkovits first came to Brazil is that it is still a highly bureaucratic nation. Dealing with paperwork can be infuriating and time-consuming for university researchers, reducing their time for science. The bureaucracy can also mean that people get selected for academic posts for reasons that are not always transparent. But with its unique form of funding, the ICTP-SAIFR can “get round” the system, for example by letting Berkovits recruit scientists in a clear and rigorous way. And in a relatively conservative nation where there is a wide resistance to change, the presence of a new, outward-looking institute can only be a good thing for Brazil.
The technique of 3D printing, more accurately known as additive manufacturing, has enjoyed extensive coverage in the media in recent years. Consumers will play an increasing role in manufacturing the products they use regularly, it is often suggested, perhaps even designing those products themselves. One frequently cited scenario involves consumers using free Web-based packages or mobile-phone photogrammetry applications to specify a design, then submitting it to a commercial 3D-printer operator who will carry out the manufacturing step and ship the product directly to them.
To a limited extent, this picture is already a reality. The commercial availability of low-cost 3D printers over the past five years has certainly allowed consumers to operate home-based manufacturing technology. Such low-cost platforms, which have come about because of the expiry of patents covering earlier prototyping systems, work on a principle called “filament extrusion”, in which a thin polymer thread held at a temperature close to its melting point is squeezed out from a small hole and deposited in raster fashion on a substrate. Consumers are also beginning to take advantage of commercial 3D-printing services, which use established industrial laser-based systems that generally work on the “powder bed” principle: a high-power laser scanned across an extremely thin bed of powder fuses a slice of a 3D object together, and the process is repeated to build up a volume.
Consumer devices bear little resemblance to their industrial additive-manufacturing counterparts, however. They tend to be able to deal only with a handful of extruded thermoplastic materials and – despite the “you can make anything” hype – are greatly limited in the size, resolution and complexity of the parts that they can produce. Indeed, low-cost 3D printing makes up only a relatively small part of the overall 3D-printing industry: around 6% by revenue.
At the more established industrial end of the spectrum, however, the variety of printable material types has increased to include polymers, metals and ceramics. Such single-material addition is already an established manufacturing process in many sectors, including the hearing-aid industry, which manufactures millions of custom hearing-aid shells annually using stereolithography. Selective laser-melting technology has also recently been adopted by GE Aviation to produce metallic components in aero engines, while polymer laser-sintering technology is allowing firms such as FreshFiber to produce customized and complex designs for smartphone covers.
Researchers are now exploring 3D-printing techniques that not only allow complex objects to be manufactured at low cost from almost any material, but will also enable objects with different functionalities to be printed in a single step. This next generation of additive-manufacturing technologies, which is rooted in advanced material-deposition systems and optical techniques, is being investigated by the Engineering and Physical Sciences Research Council’s Centre for Innovative Manufacturing in Additive Manufacturing at the University of Nottingham in the UK, which is undertaking internationally leading work in multifunctional 3D-printing research.
Beyond the hype
It might seem that 3D printing is a novel technology. In fact, the underlying principle – in which raw material is deposited layer by layer without the need for moulds, tools or dies – was first commercialized almost three decades ago. Originally developed for rapid design studies and prototyping, the earliest 3D-printing systems used an ultraviolet laser to selectively photopolymerize a liquid photocurable monomer resin, repeating the process layer by layer until the part was completed. Successive innovations since then have led to the spectrum of additive systems we know today, including the low-cost extrusion-based machines as well as industrial systems based on fusing powders using infrared lasers.
As one of the leading research groups in this area, the 3D Printing Research Group at Nottingham is going beyond single-material 3D printing. Rather than incrementally improving existing single-material approaches, we are working on the next generation of systems capable of depositing multiple materials all at once. Doing so will enable the technology to leap beyond simple structural applications and allow the delivery of entire functionalized systems, such as 3D circuitry, drug-delivery systems and even biological tissue, within a single processing step.
We expect that multifunctional 3D printing will be commercially available within the next 15 years, but there are major technical challenges ahead. These range from the accurate and reliable simultaneous deposition of different materials to the design of components and how best to integrate different materials for a given function. The variety of disciplines involved reflects the vastly different functional properties that could be printed directly into structural products: electrical circuitry, optical tracks, embedded sensors, energy sources, displays, antennas, chemical agents and even biologically active structures are just some of the many targets. A large part of research in this area is therefore focused on the search for functional 3D-printing materials.
New design approach
The first challenge, however, is to create a design methodology that enables us to integrate printed structures with lattices that contain functional components. Constraints within traditional manufacturing approaches, such as injection moulding or CNC machining, mean that, in general, it is not possible to fabricate the optimal shape for a given task. Instead, we are turning to algorithms that seek out the optimal shape or composition for a device taking into account weight, aesthetics and functionality in a single processing step.
A promising technique for depositing structural materials, especially engineering polymers, is drop-on-demand ink-jetting. This can take two forms: directly jetting a material that is suspended or in solution; and “reactive jetting”, whereby a monomer and catalyst are deposited separately and polymerization occurs in situ during the manufacturing process. This minimizes waste, reduces the need for solvents and opens up a wider range of materials that can be printed. Our group has printed nylon using this approach, but the current focus is on engineering materials that include polymethyl methacrylate, polyimides and polyurethane.
We are also investigating materials and deposition processes for conductive and dielectric systems, including highly novel research into the ink-jetting of metals with a melt temperature up to 1500 °C. This work, to be carried out in collaboration with the Netherlands-based manufacturer Océ, can potentially overcome limitations with conventional printed-electronics materials (such as their low conductivities), which currently rely on inks that have been loaded with nanoparticles of active materials such as silver.
One of the most exciting developments in this field is the creation of multifunctional structures at the nanoscale, which demands further improvements in the deposition resolution. At Nottingham we are developing a multi-photon lithography system that can produce arbitrary 3D structures of the order of 100 nm for advanced micro- or nanoscale applications, and using optical-tweezer technology to functionalize the structures deposited (see box). By coupling these two systems we can open the door to new nanoscale devices and structures, for example by providing the sensory input for embedded data-processing capability.
Dial-up medicine
Over the coming years multifunctional 3D printing will move beyond fundamental research towards new manufacturing systems. Currently, we are at the stage of understanding the fundamental principles and challenges of multifunctional 3D printing, but it is clear that we need a joined-up approach between numerous disciplines – including physics, chemistry, optics, electronics, materials science, biology and pharmaceutics. To this end, we are working with the School of Pharmacy at Nottingham to explore the 3D printing of pills and multiple therapeutics designed to release drugs at a rate that is personalized to the patient. This would lead to a radically decentralized business model in which drugs or implants could be distributively manufactured. It is envisaged that a medical practitioner would be able to “dial up” a treatment that is then made to a bespoke specification in a room next door, offering personalized care to patients at the point of need.
Multifunctional 3D printing promises genuinely new kinds of manufacturing processes with advanced capabilities, and ultimately a new generation of optimized and highly functional products. Supported by innovative business models, these products will set new standards in performance, efficiency and user-friendliness.
Optical techniques carve nanoscale features
Printing functional nanoscale devices calls for sophisticated optical techniques. Two-photon polymerization (left) allows a finer resolution than is possible with more common single-photon absorption methods. Because each photon carries only half the energy required to drive the polymerization reaction, two photons must be absorbed within a very short time of each other. The probability of this happening is proportional to the square of the incident light intensity, so two-photon absorption occurs only in the most tightly focused regions of the beam – achieved using a high numerical aperture and a femtosecond-pulsed laser – resulting in high resolution and feature sizes of the order of 100 nm. By coupling this set-up with an optical-tweezer arrangement, whereby very small forces can be exerted on an object through the focusing of a laser beam (right), larger micron-sized objects can be drawn into the vat of material and encased within a matrix or attached to a surface. This offers the potential for functionalizing a surface or a material by combining it with particles prepared separately and then brought together.
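In a simplified picture (an illustrative scaling argument rather than a full treatment), the single-photon and two-photon polymerization rates scale as

```latex
R_{1\gamma} \propto I(\mathbf{r}), \qquad R_{2\gamma} \propto I(\mathbf{r})^{2},
```

so a region where the intensity is ten times lower than at the focus polymerizes roughly a hundred times more slowly under two-photon absorption, which is why the reaction stays confined to the tight focal volume of the femtosecond-pulsed beam.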
During the Second World War, the US military developed a method to distinguish real plants from enemy camouflage based on the way they react to light. Chlorophyll absorbs blue and red light while reflecting green, giving plants their colour. But plant cells also reflect infrared light, which is not energetic enough for photosynthesis, to prevent overheating. So although green camouflage looks the same colour as a plant to the naked eye, it appears dark on infrared film, whereas real vegetation appears white.
As remote sensing from satellites became possible in the 1960s, this principle was developed into an index called the normalized difference vegetation index (NDVI), which compares a surface’s reflectance in the red part of the visible spectrum with its reflectance in the near infrared. The NDVI also provides a measure of how healthy a plant is based on its photosynthetic activity: an increase in the reflectance of red light indicates that a plant is not photosynthesizing as effectively as usual, perhaps because of drought or lack of nutrients. However, satellite NDVI measurements are also affected by atmospheric conditions, satellite geometry and the angle of incidence of the Sun’s radiation.
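The index itself is simple – NDVI = (NIR – Red)/(NIR + Red), where NIR and Red are the measured reflectances in the near-infrared and red bands – so it can be computed directly from two reflectance readings. A minimal sketch, with made-up reflectance values for illustration:

```python
def ndvi(nir_reflectance, red_reflectance):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in the near infrared and absorbs
    red light, pushing NDVI towards +1; stressed plants and bare soil give
    lower values."""
    return (nir_reflectance - red_reflectance) / (nir_reflectance + red_reflectance)

# Illustrative reflectances (fractions of incident light), not measured data
print(ndvi(0.50, 0.08))  # vigorous canopy  -> about 0.72
print(ndvi(0.40, 0.20))  # stressed crop    -> about 0.33
print(ndvi(0.25, 0.22))  # mostly bare soil -> about 0.06
```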
Gauging a plant’s needs
To get round these complications, in the 1990s plant scientists and agricultural engineers at Oklahoma State University in the US developed ground-based optical sensors. Their aim was to use NDVI measurements to gauge a plant’s nitrogen needs, thereby allowing farmers to adjust the amount of fertilizer applied to a crop. Early sensors were passive and worked by measuring light reflected from the Sun, which varies during the course of a day and with the level of cloud cover. In 1998, however, researchers developed active sensors using LEDs as a red and infrared source, and by 2002 US firm NTech Industries had commercialized the technology under the trademark “GreenSeeker”.
Big business
Crop sensing has expanded rapidly in the last few years, with the acquisition of NTech by Californian firm Trimble in 2009 and the appearance of competitors such as Holland Scientific’s Crop Circle and Topcon’s CropSpec, which uses a laser diode rather than an LED source. In today’s systems, several sensors are mounted along a boom attached to a tractor and readings are used to adjust the levels of fertilizer being applied in real time. A typical four-sensor GreenSeeker system costs in the region of $10,000, but the resulting savings can be much bigger according to Trimble’s Anna Hebert. “A farmer in Oklahoma saved more than $32,000 in fertilizer costs in 2013 ($24 per acre) and more than $175,000 since he began using GreenSeeker in 2005,” she says.
The next move for agricultural optics appears to be upwards. Andreas Burkart at the Jülich Research Centre in Germany is leading a project to observe crops using a lightweight spectrometer made by US firm Ocean Optics aboard an unmanned aerial vehicle (UAV). “The dream of such an ‘eye in the sky’ is old,” says Burkart, “but now UAVs can be readily purchased and configured so that even a biologist like me can operate one easily.” The restricted payloads and flight times of UAVs make such systems more suitable for crop observation than treatment. Indeed, according to Michael Allen of Ocean Optics, the UAV device captures data about 20 times faster than traditional ground-based techniques do, without causing any disturbance to the crops. “That’s a huge advantage when you have to assess dozens of acres of cropland,” Allen told Physics World.
The next step, says the Jülich team, is to use spectral data to reconstruct a full phenotype, including the size of plants, and the number and colour of their leaves. A UAV could then quickly scan an area containing several different varieties of a crop, such as barley, and assess how each performs in different conditions. Trimble launched its first UAV system targeted at farmers in January.
Unlike ground-based active optical sensors, which carry their own light source, measurements from the air rely on reflected sunlight. But a recent project by a multidisciplinary group at the University of New England in Australia aims to get the best of both worlds by mounting off-the-shelf active optical sensors on ultra-low-level aircraft flying 3–5 m above the ground.
Detecting disease
Researchers are also developing more advanced sensors. For example, the small amount of red light re-emitted after being absorbed in photosynthesis, known as chlorophyll fluorescence, can provide a sensitive indicator of plant stress. Optical techniques have even been demonstrated that can detect plant diseases before they are apparent by any other means, allowing pesticides to be applied preventatively. By studying the reflectance spectra of sugar-beet crops infected with three different fungal pathogens, a team from the Institute for Crop Science and Resource Conservation at the University of Bonn in Germany has recently identified spectral features that are correlated with each disease.
“From online measurements of the fat content of milk to screening incoming materials for food manufacturing, optical solutions are becoming mainstream,” says Allen. “Miniaturization and ease of use will drive more farmers to optical technologies.”
A modern laptop is 100 times smaller, a million times faster and 10,000 times less expensive than the first general-purpose electronic computer built in 1946. This remarkable reduction in size and cost is largely down to semiconductor-fabrication methods that have been developed since the 1960s, which allow large and complex electrical circuits to be miniaturized onto tiny chips. But the technology is by no means limited to computers – the modern solid-state laser has benefited from a similar trend.
By combining state-of-the-art nanofabrication techniques with modern photonics and solid-state lasers, researchers are attempting to shrink some of the largest scientific instruments ever built: particle accelerators. This could lead to a new generation of compact and affordable machines for high-energy-physics exploration and X-ray science, and industrial devices for cancer therapy, food sterilization and security.
Conventional particle accelerators confine microwave energy in specially designed hollow metallic guides called radio-frequency cavities, which deliver a forward thrust to particles passing through them. As the microwave power is increased, the particles gain more energy. However, as is clear to anyone who has put a metallic object inside a microwave oven, the interior metal surfaces of a cavity begin to suffer electrical breakdown if the microwave power gets too high. This phenomenon leads to electrical arcs that can damage the machine, and limits the amount of energy that particles can gain over a given distance – a quantity called the accelerating gradient – to around 30 MeV/m. The only way to increase the energy of conventional accelerators is therefore to make them bigger.
In a research collaboration based at Stanford University and the SLAC National Accelerator Laboratory in California, US, we are using techniques from the microchip industry to make micron-scale accelerator structures from dielectric materials, which have high laser-damage thresholds and low ohmic losses. By accelerating particles using the energy from lasers instead of microwaves, gradients of billions of electron-volts per metre are possible. In principle, this would allow us to shrink conventional accelerators by a factor of 100 or more for a given particle energy.
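To get a feel for the numbers, the length of a linear accelerator scales roughly as the target particle energy divided by the accelerating gradient. The back-of-envelope Python sketch below uses an arbitrary 10 GeV target energy purely for illustration; it is not a design calculation.

```python
def length_needed(energy_MeV: float, gradient_MeV_per_m: float) -> float:
    """Rough accelerator length: target energy gain divided by gradient."""
    return energy_MeV / gradient_MeV_per_m

target_MeV = 10_000.0  # arbitrary 10 GeV example
print(f"conventional (~30 MeV/m): {length_needed(target_MeV, 30.0):.0f} m")    # ~330 m
print(f"laser-driven (~3 GeV/m):  {length_needed(target_MeV, 3000.0):.1f} m")  # ~3.3 m
```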
The idea of using light to accelerate particles dates back to the 1960s, and since then a variety of all-dielectric, high-gradient structures have been proposed. But it is only recently that it became possible to fabricate the first laser-accelerator prototypes, thanks to high-power solid-state lasers, optical fibres and photolithographic techniques driven by the communications and semiconductor industries.
An alternative approach that has made impressive strides in recent years is plasma acceleration. When an ionized gas (or plasma) is excited by an intense laser pulse, it produces a wave that accelerates a trailing bunch of particles – similar to a surfer riding on the wake of a motorboat. Plasma accelerators use intense lasers that provide millions of billions of watts of peak power to excite the plasma wave, with one excitation occurring every few seconds.
By comparison, photonic-microstructure-based accelerators would use off-the-shelf tabletop lasers to produce millions of pulses per second with a high energy efficiency. This large number of pulses potentially allows micro-accelerators to operate with beam currents comparable to those in conventional accelerators, despite their tiny size, and such devices could also be mass-produced inexpensively.
The first demonstrations of chip-based photonic accelerators were carried out by the SLAC–Stanford collaboration in 2013. Graduate students Edgar Peralta and Ken Soong fabricated the prototype structures and developed the precision laser optics required to test them, respectively. Using photolithography, more than 200 individual accelerators were etched into a pair of bonded silica wafers. Each of the 1 mm2 devices contains a channel as narrow as 400 nm, into which are etched a sequence of ridges and gaps. When illuminated by a laser, these ridges shape the laser field to produce a wave that can impart energy to a particle that arrives at the appropriate time (see figure below).
Because most applications require particle accelerators to operate at relativistic energies, we designed the micro-accelerators to produce a wave that will continuously accelerate particles travelling at near-light speeds. To demonstrate the effect, individual structures were carefully aligned in the path of a pre-accelerated electron test beam at SLAC, while being illuminated by a titanium-sapphire laser. Because the particles arrived in bunches that were longer than the 800 nm wavelength of the laser, some were accelerated and some decelerated depending on whether they arrived at the peak or at the trough of the accelerating wave.
Surfing a wave A computer simulation of particles in a microchip accelerator shows the accelerating (orange) and decelerating (blue) portions of the laser-induced wave in a nanostructured channel. (Courtesy: Ben Cowan, Tech-X Corp.)
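A toy version of this effect takes only a few lines: if the electrons are spread uniformly over a bunch much longer than the laser wavelength, roughly half sample the crest of the accelerating wave and half the trough. The bunch length and peak energy modulation below are assumptions chosen for illustration, not the experimental values.

```python
import numpy as np

wavelength = 800e-9     # laser wavelength (m)
bunch_length = 8e-6     # assumed bunch length, ten laser wavelengths (m)
peak_gain_keV = 30.0    # assumed peak energy modulation per structure (keV)

rng = np.random.default_rng(1)
z = rng.uniform(0, bunch_length, 10_000)                      # electron positions in the bunch
delta_E = peak_gain_keV * np.cos(2 * np.pi * z / wavelength)  # gain depends on arrival phase

print(f"fraction accelerated:  {np.mean(delta_E > 0):.2f}")   # ~0.5
print(f"rms energy modulation: {delta_E.std():.1f} keV")      # ~21 keV (peak / sqrt 2)
```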
High gradients
Our initial tests last year demonstrated accelerating gradients as high as 300 MeV/m (Nature 503 91), but by further narrowing the width of the accelerating channel to enhance the accelerating field and using materials such as sapphire that can withstand higher laser intensities, it should be possible to increase the gradient by a factor of 10 or more. This would allow us, in principle, to shrink the entire 3 km-long SLAC linear accelerator to the size of a room, or to create lower-energy medical accelerators on a single wafer that would fit in your hand.
Alongside our ongoing experimental efforts, Peter Hommelhoff at Friedrich Alexander University in Germany and co-workers recently used an approach similar to ours to accelerate lower-energy (28 keV) electrons with gradients of 25 MeV/m (Phys. Rev. Lett. 111 134803). The German group is also developing ways to fabricate sub-micron-sized particle sources that can be triggered using the same laser that powers the accelerators. Such a particle injector would remove the need for a conventional radio-frequency accelerator on the front end of the microchip accelerator, as was necessary in our experiments, and is vital for making on-chip devices that can accelerate electrons to sufficiently high energies.
These first demonstrations of chip-scale accelerators are important steps, but a complete accelerator is still some way off. To reach useful particle energies, many of these tiny structures must be fabricated in succession and illuminated by a sequence of laser pulses that are precisely timed so that the particles see a continuous energy gain as they travel forward. The arrival times of the electrons and the laser must be synchronized to within a fraction of an optical period, corresponding to a few tens of attoseconds. This could be done, for instance, by introducing on-chip light guides with precise lengths that selectively funnel light into successive structures.
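As a back-of-envelope check on that figure, the optical period of an 800 nm drive laser is under 3 fs, so a timing window of a small fraction of a period does indeed land in the tens of attoseconds. The 1% fraction assumed below is illustrative, not a quoted design tolerance.

```python
c = 3.0e8              # speed of light (m/s)
wavelength = 800e-9    # drive-laser wavelength (m)

period = wavelength / c        # one optical cycle, about 2.7 fs
window = 0.01 * period         # assumed tolerance of 1% of a cycle

print(f"optical period: {period*1e15:.2f} fs")   # 2.67 fs
print(f"timing window:  {window*1e18:.0f} as")   # ~27 attoseconds
```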
Another challenge is to avoid some of the particles being decelerated instead of accelerated. This requires that the incoming particles arrive in bunches that are separated by a distance equal to the wavelength of the laser and timed so that they all ride on the peak of the accelerating wave. Researchers at SLAC and Stanford have previously demonstrated a way to prepare the particles in this way using a laser to modulate the energy of the electrons followed by a magnetic compressor, which allows lower-energy particles to catch up with higher-energy ones. Accelerating these micro-bunched particles in unison at the peak accelerating field would also allow for monochromatic beams with a narrow energy spread, and we are now trying to integrate these approaches to design new particle sources that are optimized for testing micro-accelerators.
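A minimal sketch of that micro-bunching step is given below, with all beam and compressor parameters assumed purely for illustration: a sinusoidal laser-induced energy modulation followed by an energy-dependent path length in the compressor gathers the electrons into bunches spaced by the laser wavelength, which can be quantified with the usual bunching factor.

```python
import numpy as np

wavelength = 800e-9      # laser wavelength (m)
k = 2 * np.pi / wavelength
E0 = 60e6                # assumed mean beam energy (eV)
dE_mod = 30e3            # assumed peak laser-induced energy modulation (eV)
sigma_E = 5e3            # assumed uncorrelated energy spread (eV)
R56 = 0.5e-3             # assumed energy-dependent path length of the compressor (m)

rng = np.random.default_rng(2)
z = rng.uniform(0, 10 * wavelength, 100_000)                  # initial electron positions (m)
dE = dE_mod * np.sin(k * z) + rng.normal(0, sigma_E, z.size)  # laser modulation plus spread
z_out = z + R56 * dE / E0                                     # shift in the magnetic compressor

def bunching(positions):
    """Bunching factor at the laser wavelength, |<exp(i k z)>|."""
    return abs(np.mean(np.exp(1j * k * positions)))

print(f"bunching before compression: {bunching(z):.3f}")       # close to zero (unbunched beam)
print(f"bunching after compression:  {bunching(z_out):.3f}")   # ~0.5 for these assumed numbers
```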
Similar techniques have been proposed to build a powerful source of pulsed X-rays for advanced crystallography research, for instance. If on-chip laser-driven devices are made to rapidly deflect rather than accelerate particles, a sequence of alternating deflectors in series would form an “undulator” that forces electrons to wiggle back and forth to generate bursts of synchrotron radiation. Such micro-bunched beams could potentially deliver X-ray pulses with durations of tens of attoseconds, allowing scientists to probe atomic and molecular dynamics on timescales three to four orders of magnitude shorter than present-day “conventional” X-ray free-electron lasers, such as the Linac Coherent Light Source at SLAC.
Industry appeal
We estimate that within the next 5–10 years tens of millions of electron-volts could be generated in an integrated device fabricated on a single 15 cm diameter wafer. To reach the billions or trillions of electron-volts needed for high-energy physics exploration, we will need to carefully align many such wafers in succession with sub-micron precision. This is straightforward when nanofabricating multiple devices on a single wafer, but more challenging if individual macroscopic objects are to be aligned. Consequently, in the next decade microchip accelerators will probably find their first applications in industry and medicine, where the required particle energies are in the much lower range of 1–20 MeV.
Companies that build accelerators for these sectors are beginning to express interest in commercializing the technology. Although the machines used in industry and medicine are smaller than high-energy colliders, they can still be the size of a room, weigh tens of tonnes and cost millions of dollars. Microchip-based versions could be sufficiently small and portable to allow endoscopic direct electron-beam cancer therapy, for instance. They could also enable more affordable medical X-ray sources for the treatment of clogged arteries, computed tomography (CT) scans and high-resolution imaging of biological tissues.
Similar devices for industry could provide a valuable X-ray source for non-destructive testing of products and materials, for instance by identifying cracks and deformations in hard-to-reach locations, or enabling new processes in the microchip industry by using multiple-beam X-ray lithography to rapidly “print” nanoscale patterns onto integrated chips. Accelerators for X-ray security screening of cargo and aircraft could also be made more widely available and, being smaller, capable of scanning larger areas or moving targets.
A number of these nearer-term applications are beginning to be explored and although the technology is not yet ready to compete with high-energy machines such as the 27 km-circumference Large Hadron Collider at CERN in Switzerland, it is clear that photonic devices are a promising approach for the future of particle acceleration.