
Blood Moons, teachers who moulded the minds of great physicists and more

Jocelyn Bell Burnell and a quote about her high-school physics teacher Henry Tillott

This week has been an exciting and busy one at Physics World HQ, what with two Nobel prizes that included physics – the actual Nobel prize for physics, of course, as well as this year’s chemistry Nobel, which was given to three physicists. Since last week’s Red Folder was full of Nobel trivia and facts, I will only point you to two more interesting Nobel-related articles. The first is an excellent article on the Slate website, by one of our regular freelance authors, Gabriel Popkin, in which he looks at female physicists who deserve a Nobel. His list is in no way exhaustive, but does well to highlight some excellent work done by women that deserves recognition, so do take a look at “These women should win a Nobel prize in physics”. Also, Ethan Siegel from the Starts With a Bang! blog has written an excellent essay to silence any would-be naysayers about the worthiness of giving the Nobel to the researchers who developed blue LEDs. In “Why blue LEDs are worth a Nobel Prize”, he outlines the history of LEDs and talks about just how many applications they have today.


Butterfly’s colourful trick of the light recreated in the lab

A material that is capable of the “reverse diffraction” of light and that is inspired by butterfly wings has been made by researchers in the US. The structure combines features of the butterfly’s reverse-diffraction grating with a normal diffraction grating, giving it additional properties that are not present in the natural structure. The researchers believe its applications could range from solar cells and LEDs to security labelling.

Many animals have evolved elaborate colour schemes, often to attract mates or confuse predators. In 2010 Jean Pol Vigneron and colleagues at the University of Namur in Belgium, together with collaborators in Panama, discovered something curious about the iridescent spot on the wings of the male Pierella luna butterfly, commonly known as the forest-floor satyr. The colours are produced because the scales on the butterfly’s wing behave as diffraction gratings, albeit with a twist. In a normal diffraction grating, the light with the longest wavelengths diffracts the most; so as the viewing angle becomes more oblique, the colour changes from violet to green to red. The spot on the Pierella luna‘s wing, however, changes colour the other way round: it looks violet when viewed obliquely and red when viewed almost straight on.

Analysing the wing’s structure, Vigneron and colleagues found that the diffraction arises from scalloped ridge patterns on the surfaces of the wing scales: these scales curve upwards at the edges, forming diffraction gratings that are perpendicular to the wing. The function of this structure for the butterfly remains unclear, although its absence from females of the species suggests a sexual-signalling role (see “Drab butterfly reveals its hidden colours”).

Regular rows and columns

Now, researchers at Harvard University have replicated this structure using micron-scale silicon plates with scalloped edges, and have added their own twist. The scales on a wing are tens of microns apart, and are irregularly spaced, so there is no coherence between the light reflected by different scales. On their silicon-plate grating, however, the researchers arranged pillars of plates in a regular rectangular pattern, with rows separated by just 5 μm and columns by 10 μm (see image above). This periodicity allows the plates to behave collectively as a “hierarchical double-diffraction grating” – a normal diffraction grating in which each diffraction element is itself a reverse-diffraction grating.

The vertical diffraction elements (the scallops on the plates responsible for the reverse diffraction) are much closer together than the normal diffraction elements (the plates themselves), so the reverse-diffraction pattern is much more widely spread. “Because it’s a much smaller periodicity, you only see one order from it in the actual experimental results,” explains Harvard’s Grant England. The combined diffraction pattern shows multiple sets of diffraction fringes from the normal diffraction pattern, but the relative intensity of the colours in each set of fringes is modulated by the broad fringes of the reverse-diffraction pattern.
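To get a feel for why the finer grating spreads its pattern so much more widely, one can plug the two length scales into the standard grating equation, d sin θ = mλ. Below is a minimal numerical sketch: the 10 μm plate period is taken from the description above, but the 0.8 μm scallop period is an assumed illustrative value, since the actual figure is not quoted here.

```python
import numpy as np

def first_order_angle(period_m, wavelength_m):
    """First-order (m = 1) diffraction angle, in degrees, from d*sin(theta) = m*lambda."""
    return np.degrees(np.arcsin(wavelength_m / period_m))

plate_period = 10e-6     # plate (normal grating) spacing quoted in the article
scallop_period = 0.8e-6  # assumed illustrative value for the scalloped ridges

for name, lam in [("violet", 400e-9), ("green", 550e-9), ("red", 650e-9)]:
    print(f"{name:6s}: plates {first_order_angle(plate_period, lam):4.1f} deg, "
          f"scallops {first_order_angle(scallop_period, lam):4.1f} deg")

# The widely spaced plates pack their diffraction orders within a few degrees of
# the axis, while the sub-micron scallops throw a single broad order out to tens
# of degrees - which is why only one order from them appears in the experiment.
```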

The two diffraction gratings can be manipulated independently, giving the researchers more control over the combined diffraction pattern. By moving the plates closer together or farther apart, the positions of the diffraction orders from the periodic plates are shifted, while the superimposed reverse-diffraction pattern remains unaffected. Conversely, when the researchers shear the surface, thereby bending the plates without altering their spacing, the angle of incidence of light on the reverse-diffraction gratings is altered, while the incident angle on the normal diffraction grating stays the same. Each set of fringes stays in the same position but the relative intensities of the colours in each fringe change.

Multiple copies

The researchers used industrially compatible techniques to produce the bio-inspired gratings, etching a single master before producing as many copies as needed, and team member Mathias Kolle believes that the structure could have multiple technological uses. “In the butterfly case, it’s very likely a display feature,” he says. “For signalling purposes, it would be a uniquely identifiable structure, so the obvious thing we thought about was security printing.” He suggests that the technology could be used in authentication labels on banknotes and credit cards. Beyond that, he sees possible applications for light management in LEDs or photovoltaic cells.

Roy Sambles, a metamaterials scientist at the University of Exeter in the UK, says that “Maxwell’s equations are well known; the equations of diffraction are well known – there is nothing new in the diffractive optics.” He is more impressed, however, by the engineering. “You’ve got a multi-stacked diffractive structure in the vertical direction and then you’ve got diffraction gratings in both directions in the horizontal plane; so in some senses it’s a triple diffractive structure – which is really quite neat and quite a clever piece of technology,” he says.

The research is published in Proceedings of the National Academy of Sciences.

The argument for sticking with SUSY

As a theorist, I keep a “bucket list” of the phenomena I would most like to see observed in nature before I am shuffled off this mortal coil. Not long ago, my bucket list looked like this:

  • a Higgs boson;
  • gravitational waves;
  • superpartner particles;
  • direct evidence supporting superstring/M-theory.

The discovery of a Higgs boson in experiments at CERN’s Large Hadron Collider (LHC) was, of course, the joyous fulfilment of the first item on my list. Recently, it seemed I might be able to cross gravitational waves off as well, when scientists in the BICEP2 collaboration reported observing the imprint of these waves in the cosmic microwave background. Since then, a few flies have appeared in the stew, so to speak, but I would still be surprised if we do not see evidence of gravitational waves within a decade. As for evidence supporting string theory, in my view this is not as unlikely as some critics have argued, but I accept that it is far more of a long shot.

But what about the third item on the list? Superpartner particles or “sparticles” are predicted to exist under an extension of the Standard Model of particle physics that makes use of the principle of supersymmetry (or SUSY). SUSY is not a single model any more than Newton’s second law is a rule that applies solely to falling apples. Instead, SUSY is a framework for many models – which, indeed, adds to the complications of discovering it. Collider searches can only strive to observe the predictions of specific models, and even under many reasonable assumptions, numerous such models remain possible. Ruling out one or a few does not exclude a far greater multitude of others.

Giant sequoia

The common factor in all SUSY models is a prediction that each of the observed fundamental particles in the Standard Model has at least one very similar “twin” that differs from known particles only in mass and quantum spin. For example, the known force-carrying bosons (the photon, gluons, W and Z particles) all have quantum spins equal to one, so their sparticle twins must be fermions, which have spins equal to one-half. Similarly, the matter particles – electrons, neutrinos, quarks and so on – are all spin one-half fermions, so their sparticle twins must be bosons with spin equal to zero. Figure 1 shows the increase in particles that would occur, as a minimum, if a supersymmetric version of the Standard Model was found to describe nature.

In the years before the LHC was switched on in 2008, many physicists theorized (and rhapsodized) about the ease with which sparticles would be discovered in LHC collisions. So far, though, sparticles have been conspicuous by their absence. This has led to a “SUSY is dead” meme being promulgated among many interested parties – aided, perhaps, by the unreasonable expectations created ahead of the switch-on. My own views, however, have always been more cautious. In fact, in 2006 I publicly lamented the irrational exuberance that seemed to have swept the particle-physics community about the likelihood of early detection of SUSY. These views were disregarded, but – like the predictions of Cassandra in Greek mythology – they proved to be accurate. Now, as the LHC is being prepared to return to active operations, I welcome the opportunity to comment once more on the prospects for discovering SUSY at work in nature.

As with the hypothetical hunt for giant sequoia trees, finding evidence for SUSY depends on the observer looking in the right place

In my view, the current situation is akin to that of an explorer who, having scoured the eastern seaboard of North America, concludes that no groves of Sequoiadendron giganteum exist in the entire continental USA. As with this hypothetical hunt for giant sequoia trees, finding evidence for SUSY depends on the observer looking in the right place. My aim in this article is to explain why, despite the difficulty, the search for SUSY continues to be worthwhile, and why I disagree with colleagues who believe that it will never be found as an accurate description of nature. There is more of an issue than simply finding a SUSY model: the physics definition of the quantum vacuum state is also at stake.

The taming of the vacuum

To appreciate the attraction that SUSY holds for theorists, it helps to understand the theory’s history. Today it is often forgotten that SUSY did not emerge to answer problems in our understanding of nature or the Standard Model. Instead, it emerged in the 1970s from “blue sky” considerations of the mathematical properties of relativistic quantum field theory (QFT).

At the time, it was thought that all possible symmetries that could exist in QFT had already been enumerated. Indeed, a 1967 result called the Coleman–Mandula theorem had essentially asserted that physicists already knew all there was to know about symmetries, at least at the mathematical level. However, the theorem had a hole in it, and in 1971 two Soviet-era physicists, Yuri Gol’fand and Evgeny Likhtman, exploited this hole to discover a new possibility: a symmetry hidden in the mathematics of QFT that relates fermions to bosons and vice versa.

Gol’fand and Likhtman worked during a time when the Iron Curtain inhibited communication, and their discovery was essentially unknown in the West. A few years later, however, SUSY had a Western emergence thanks to Julius Wess and Bruno Zumino, who investigated the possibilities of mathematical symmetries between fermions and bosons in response to some studies (by Jean-Loup Gervais and Bunji Sakita) of the mathematical properties of early models of string theory. This new fermion–boson symmetry eventually became known as “supersymmetry”.

At this stage, theorists found supersymmetry attractive because it seemed almost magically to ameliorate a notoriously difficult problem with QFT. To get a feel for this problem, here is an analogy. Imagine blowing on the surface of a bowl of still water. Even if you blow very gently, you will still produce waves that can be seen to move across the surface. Now imagine carrying out the same action, but not too closely, with a pot of vigorously boiling water. In this scenario, it would be difficult to detect the waves created by your gentle blowing because the activity of different sizes of bubbles coming to the surface would tend to obscure them.

This is the essential problem of QFT. In a quantum universe, the lowest state of energy – the vacuum – is not empty. Instead, it is like the surface of the boiling liquid, with “virtual particles” that pop in and out of existence like bubbles. Hence, to describe the vacuum mathematically, we need a way to subtract the motions caused by the behaviour of the virtual particles. But ordinarily, the way theorists define the vacuum state in QFT is ambiguous. This ambiguity is most pointedly seen in a theoretical approach known as the “canonical method of second quantization”. So what is this “second quantization” about? Well, the “first quantization” is the one that replaced the Newtonian concept of the particle with the concept of wave–particle duality and used wavefunctions to describe particle behaviour mathematically. But within this first, familiar quantization, the number of equivalent particles described by wavefunctions is not allowed to change. This runs into a problem as it does not allow these equivalent particles to decay into others. What second quantization does is to fix this by extending the mathematics in a way that allows for the kind of particle creation and annihilation that takes place constantly in the vacuum.

When deriving the descriptions of the energy states associated with a QFT using the second quantized method, there comes a point at which one must ignore the existence of a certain constant, which has a value of infinity, in order to make numerical predictions. Ignoring infinities might seem like a bad idea, but in the 1950s, Richard Feynman, Sin-Itiro Tomonaga and Julian Schwinger showed that this piece of mathematical legerdemain is, in fact, possible. Their method is known as “renormalization” and it yields results that agree, to better than one part in a billion, with experimental data. In 1965 they were rewarded with the Nobel Prize for Physics.
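For the record, that “certain constant” can be illustrated with the textbook zero-point energy of a free field (a standard illustration, not a step taken from this article): every momentum mode contributes ½ħω to the energy of the vacuum, and summing over the infinitely many modes gives

```latex
E_{\mathrm{vac}} \;=\; \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}}
\;\propto\; \int_0^{\infty} k^2\,\omega_k \,\mathrm{d}k \;\to\; \infty ,
```

and it is this divergent constant that renormalization instructs us to discard before making numerical predictions.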

Sweeping infinities under the rug through renormalization made theorists uncomfortable, which is where SUSY entered the picture

Despite the fantastic success of renormalization, the process of “sweeping infinities under the rug” has long made theorists uncomfortable. Some began to search for alternatives, and this is where SUSY first entered the picture. It turns out that renormalization is a requirement of all known QFTs except those with the property of SUSY: QFTs with a SUSY component require fewer such subtractions, and some require none at all. It is for this reason that only SUSY QFTs have a well-defined vacuum state.
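The standard way to see why SUSY helps here – a textbook sketch rather than the author’s own derivation – is that fermionic modes contribute zero-point energy with the opposite sign to bosonic ones, so in a theory with unbroken supersymmetry, where every boson has a fermionic twin of the same mass, the contributions cancel mode by mode:

```latex
E_{\mathrm{vac}}^{\mathrm{SUSY}}
\;=\; \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}}^{\mathrm{(bosons)}}
\;-\; \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}}^{\mathrm{(fermions)}}
\;=\; 0 .
```

No infinite subtraction is needed to define the vacuum energy, which is the sense in which the SUSY vacuum is “well defined”.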

Throughout the 1970s, the link between SUSY and the renormalization problem drove much of the interest in the theory. The shift came in the early 1980s when members of several theoretical collaborations pointed out that the mathematics describing a particle–sparticle pair could be consistent even if the members of the pair had different masses. They arrived at this result by borrowing mathematical concepts that are essentially the same as those in the Higgs boson construction, and the effect on the theoretical community was significant; many people with no previous involvement in SUSY suddenly became very, very interested.

This shift in thinking was the genesis of the theory we now call the minimal supersymmetric Standard Model (MSSM). This mathematical construction is called “minimal” because it consists of the particles of the well-established Standard Model together with the minimal number of sparticles required by SUSY. Later, theorists also proposed a “next to minimal” supersymmetric Standard Model (NMSSM) that includes more undiscovered particles and sparticles, and other flavours of SUSY predict an even richer sparticle world. At this point – and not before – the Standard Model became part of SUSY’s story.

Gaps in the Standard Model

These historical notes show that it is inaccurate to regard SUSY as having been “invented” to solve problems with the Standard Model. However, from today’s perspective, some of the most prominent reasons for continuing to explore SUSY do indeed stem from attempting to fill gaps in the model. The Standard Model is like a finely wrought instrument, but it is silent on topics such as why the values of masses, mixing angles and coupling constants are what we measure them to be, and why distinct particles exist in such profuse numbers.

One of the most commonly cited problems with the Standard Model is that it lacks a compelling reason to introduce new elementary particles, such as WIMPs (weakly interacting massive particles), that could account for the behaviour of dark matter. The presence of so many superpartners in the MSSM provides a logical – indeed almost compelling – solution to this problem (see “Theories of the dark side”, July 2014).

1 Filling the gaps

Table of subatomic particles

This “periodic table” for matter particles and force carriers shows how SUSY theory expands the Standard Model by predicting fermionic partners (top left) for known bosons like the gluon, photon, Z and W particles (top right), along with bosonic partners (bottom right) for known fermions like the electron, muon and tau, the three neutrinos and the six quarks (bottom left). SUSY also predicts that the Higgs boson discovered at CERN is only one of at least five such bosons to exist in nature. These additional bosons (and their superpartner equivalents) are believed to have much higher masses than that of the “plain vanilla” Higgs. The L and R subscripts indicate that these particles (and antiparticles) are associated with left- or right-handed spin. Particles not yet detected lie on a blue background.

Another flaw with the Standard Model is that it does not provide any insight into the “gauge hierarchy” problem in QFT. This is a more technical issue, and it concerns how the coupling strengths of the three forces in the Standard Model – the electromagnetic force, the weak nuclear force and the strong force – vary with distance. For example, the strong force, which is described by quantum chromodynamics (QCD), is responsible for binding quarks together to form protons and neutrons. If the distance between two quarks is approximately the radius of a neutron (0.3 × 10^–15 m), then the coupling constant for this force, gs, is about 2.49. (You can think of gs as the QCD force’s equivalent of the electric charge in electromagnetism.) But if the distance between two quarks shrinks to one-tenth of the neutron’s radius, gs falls to about 1.48. This means that the closer quarks are to one another, the weaker the QCD force is that attracts them, and vice versa.

The phenomenon of the changing value of coupling constants is known as “running coupling constants”, and the discovery of its effect in QCD resulted in a Nobel prize being awarded to David Gross, David Politzer and Frank Wilczek in 2004. It occurs because in QFT, matter particles (such as the two quarks in this example) are always bathing in the sea of virtual particles that pop out of the vacuum (this was alluded to earlier with the “boiling surface” analogy). The presence of these virtual particles changes the forces to which the observable particles are subject, just like adding salt to pure water changes the buoyancy force. Their effect is distance-dependent, and whether the coupling constants get larger or smaller as the particles get closer together depends on the force studied. Forces that become weaker with decreasing distance are said to possess “asymptotic freedom”. By the same token, these forces must become stronger with increasing distance, a phenomenon known as “infrared slavery”.
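As a rough cross-check of the gs values quoted above, here is a minimal sketch of the standard one-loop running of the strong coupling. It assumes the usual conversion Q ≈ ħc/r between distance and energy scale and takes four active quark flavours, and it anchors the running to the article’s gs ≈ 2.49 at 0.3 × 10^–15 m rather than to a measured reference value.

```python
import numpy as np

HBAR_C = 197.327  # MeV * fm, used to convert a distance into an energy scale

def alpha_s(Q, Q0, alpha0, nf=4):
    """Standard one-loop running of the strong coupling from scale Q0 to Q (both in MeV)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * np.pi) * np.log(Q / Q0))

# Anchor at the value quoted in the article: g_s ~ 2.49 at r = 0.3 fm
r0, g0 = 0.3, 2.49
alpha0 = g0**2 / (4.0 * np.pi)   # alpha_s = g_s^2 / (4 pi)
Q0 = HBAR_C / r0                 # ~ 660 MeV

# Evaluate at one-tenth of that distance, r = 0.03 fm (Q ~ 6.6 GeV)
g1 = np.sqrt(4.0 * np.pi * alpha_s(HBAR_C / 0.03, Q0, alpha0))
print(f"g_s at r = 0.03 fm: {g1:.2f}")  # about 1.6, in the same ballpark as the 1.48 quoted
```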

Changes to the coupling constants at very small distances are important because, within the context of QFT, investigating what happens at small distances corresponds to investigating systems with very high energies. Our universe is one such system, and the moment when its energy density was greatest occurred around the Big Bang. The “running coupling constants” scenario implies that in the very early universe, the coupling constants, which characterize the three forces of the Standard Model, must have had very different values than they do now. Moreover, at times near the Big Bang, the differences in the masses of the bosons that carry these three forces became negligible. The reason for this is that the average energy in any location was, at that time, enormous. Thus, according to Einstein’s mass–energy equivalence, the masses of all of the Standard Model force-carrying particles (which all have spin-1) would have been tiny compared with the energy in the environment surrounding them, and we can ignore their comparatively tinier mass differences in such conditions.

Using the machinery developed by Sheldon Glashow, Helen Quinn and Steven Weinberg, one can explore the behaviour of the coupling constants in this high-energy regime according to the Standard Model and compare it with the behaviour expected if the Standard Model had a SUSY extension. In the basic Standard Model, there is no single energy at which all three constants join together to have the same value, but when a SUSY component is added, unification – meaning the point when they all have the same value – occurs at an energy of about 10^16 GeV (figure 2). Thus, including SUSY implies that the single force uniting the strong, electromagnetic and weak interactions near the time of the Big Bang would have undergone a single transition later, becoming three distinct forces (a discovery made by Savas Dimopoulos, Stuart Raby and Frank Wilczek).
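A minimal sketch of that unification claim is shown below. The one-loop running of the three inverse couplings is α_i^–1(μ) = α_i^–1(M_Z) – (b_i/2π) ln(μ/M_Z); the rough values at the Z mass and the Standard Model and MSSM beta coefficients are textbook numbers rather than figures from this article, and sparticle thresholds are crudely ignored by applying the MSSM coefficients from M_Z upwards.

```python
import numpy as np

MZ = 91.19  # GeV

# Approximate inverse couplings at the Z mass (GUT-normalized hypercharge)
alpha_inv_mz = np.array([59.0, 29.6, 8.5])           # U(1)_Y, SU(2)_L, SU(3)_c

# One-loop beta coefficients
b_sm   = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])  # Standard Model
b_mssm = np.array([33.0 / 5.0, 1.0, -3.0])           # MSSM

def alpha_inv(mu_gev, b):
    """alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2 pi) * ln(mu/M_Z)."""
    return alpha_inv_mz - b / (2.0 * np.pi) * np.log(mu_gev / MZ)

mu = 2.0e16  # GeV, roughly where the MSSM couplings are expected to meet
print("SM:  ", alpha_inv(mu, b_sm))    # three clearly different values - no common meeting point
print("MSSM:", alpha_inv(mu, b_mssm))  # all three come out near ~24, i.e. close to unification
```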

2 To unify, or not to unify uniquely

Chart of the physical forces at different distances between particles

The strengths of the coupling constants of the electromagnetic, strong and weak nuclear forces vary with the distance between particles (top scale) and, equivalently, with energy (bottom scale). In the low-energy world we see around us, and even in the higher-energy conditions produced in the current generation of particle colliders (dashed line), the values of these coupling constants differ. SUSY theory predicts that their values will converge, or unify, at an energy of about 10^16 GeV (right). Without SUSY (left), the convergence does not occur at a single point.

The final major complaint made about the Standard Model is that it is not “natural”. A useful way of thinking about what “natural” means in this context (and one I am borrowing from Tristan Hubsch of Howard University) is to imagine flipping a coin and observing how it lands. The “natural” expectation is that the result will be either “heads” or “tails”, but there is also a very small possibility that the coin will land on its edge and stay there. This would be a very “unnatural” outcome though, and if it occurred repeatedly, one would become suspicious that something else was going on.

Within the Standard Model, the observed mass of the Higgs boson is a bit like the coin landing on its edge; its mass seems unnaturally light when compared with all of the possible mathematical outcomes allowed in the model. The reason this lightness seems “unnatural” is to do with the fact that the Higgs boson (unlike all of the other elementary bosons we know about) is a spin-zero particle, and the mass of such particles is affected in a unique way by quantum fluctuations associated with the virtual particles mentioned earlier. Theorists like me worry about how the mass of a light Higgs boson can be stable in the face of these quantum fluctuations; although such stability is mathematically possible, within the Standard Model it is about as likely as a coin landing on its edge after many flips. However, if we add SUSY to the Standard Model, then the mathematics becomes technically “natural” – as if you made the coin balance on the edge by putting your finger on top and holding it there.

The serendipity quotient

If we accept these arguments about dark matter, gauge hierarchy and naturalness, then the Standard Model as currently conceived must be incomplete. But these phenomenological factors are not the main reasons that I support further investigations into SUSY. For me, the most important benefit of supersymmetric theories is the same one that got me excited about them back in the 1970s: they are the only QFTs that treat the quantum state with the lowest possible energy in an elegant way.

This brings me to a property I call the “serendipity principle”. Reality is both self-consistent and filled with extraordinarily large numbers of apparent coincidences. Accurate theories of nature should, I believe, also have these qualities. This guiding principle suggests that, given two formal mathematical approaches to describing a physical system (and in the absence of clear observational or experimental guidance), the approach that supports the largest number of robust, unexpected and serendipitous relations is the one most likely to provide a path for continued progress in the long run.

A spinning coin

The three problems I described in the previous section – dark matter, gauge hierarchy and “naturalness” – are all examples of SUSY’s high serendipity quotient. There are, of course, many others. One of the first examples of the serendipitous use of SUSY concepts was discovered by Edward Witten in 1981. In a universe described by Einstein’s general theory of relativity, energy must always be a positive quantity, and Witten showed this was true via a mathematical proof using SUSY. Prior to this, no such proof had ever been found for general relativity.

Another example concerns the possible extension of quark–lepton symmetry, which relates quarks to the family of particles known as leptons – the electron and its heavier cousins, the muon and the tau. When this symmetry – which explains why pairs of leptons appear in the Standard Model alongside pairs of quarks – is applied to a version of the Standard Model that possesses SUSY, the implication is that the Higgs boson discovered at the LHC is not the only one. Instead, there must be a minimum of five Higgs bosons. This might seem like an embarrassment of riches, but if at some future point a second Higgs boson is observed, it could be a sign of SUSY appearing in nature. (There are other mathematical explanations for multiple Higgs bosons, but in the absence of SUSY, these other explanations would come at the price of signalling that there are more forces in nature than we know about already.) The detection of these additional Higgs bosons could be difficult because most SUSY models predict that their masses would be considerably heavier than the “plain vanilla” Higgs boson found in the Standard Model. Still, one can be optimistic that, at some point, it might be possible to detect them.
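The “minimum of five” follows from a standard piece of degree-of-freedom counting in the MSSM, sketched here since the article does not spell it out: supersymmetry requires two Higgs doublets rather than the Standard Model’s single one, and of their eight real field components three are “eaten” to give the W± and Z bosons their masses:

```latex
\underbrace{2 \times 4}_{\text{two complex doublets}}
\;-\; \underbrace{3}_{\text{eaten by } W^{\pm},\,Z}
\;=\; 5
\qquad\Longrightarrow\qquad h,\; H,\; A,\; H^{\pm}.
```

The five that remain are two CP-even states (h and H), one CP-odd state (A) and a charged pair (H±).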

Quo vadis, SUSY?

Only careful observation of nature can bring the clarity needed in this field. As experimentalists at the LHC prepare for upgraded operations in the next year, they will take the lead in settling the question of SUSY. At the same time, we need to be alert to the work of scientists who are looking for indications of SUSY elsewhere in the cosmos, particularly those involved in the continued search for dark matter as well as other possible astrophysical anomalies. And of course, we theorists will continue, as we have done for decades (and with results applicable outside of SUSY, as in the example of Witten’s work), to develop new tools to enhance our ability to probe the Standard Model and QFTs.

The LHC tunnel at CERN

Ultimately, our science demands both patience and scepticism. In discussing the possibility that SUSY could be an accurate description of nature, neither “irrational exuberance” nor “irrational pessimism” should prevail. We as a community need to exercise particular probity in this because extreme views do a disservice to science among the general public, and we can only thrive with public support. The cancellation in 1993 of the US’s Superconducting Super Collider, with which we could have discovered the Higgs boson (and much else) more than a decade ago, stands as a stark warning of the need for clarity, discipline and an ability to enunciate clearly how our drive to understand nature fits within national and international aspirations.

Yet while faith is no substitute for observation in science, I remain convinced (to misquote Mark Twain) that the reports of SUSY’s death have been greatly exaggerated. If the universe is not supersymmetric at some fundamental level, it will mark the first time in several centuries when symmetry has failed to be a reliable guiding principle in uncovering the mysteries of our universe. Although this could happen, I suspect it will not. In the words of one early SUSY theorist, Peter van Nieuwenhuizen, “We hope that nature is aware of our efforts.” In the end, the property of SUSY that gives me the greatest confidence it will be vindicated in the long run is how distinctly it tames the behaviour of quantum fluctuations compared with every other version of QFT ever discovered. It is this power to tame the vacuum that suggests there is something unique about supersymmetric QFTs. For a theorist of my generation, the question of why the QFT vacuum is more stable for SUSY theories harks back to a similar question asked at the turn of the 20th century: “Why is the ground state of the hydrogen atom stable?”

How to inspire scientists in developing nations

 

By Matin Durrani

I’ve now returned to the UK from my visit to the International Centre for Theoretical Physics (ICTP) in Trieste, which has been celebrating the 50th anniversary of its founding. As part of those celebrations, the ICTP has created a special half-hour video documentary (above), which shows how scientists in various parts of the globe have not only furthered their own careers through visits to the ICTP, but have also used that experience to improve science back in their home countries.

The video, which I watched in Trieste, features scientists from everywhere from Nepal to Cuba, from Ethiopia to Peru, and from Cameroon to China – and, of course, from Pakistan, the home country of the ICTP’s founder Abdus Salam. Entitled From Theory to Reality: ICTP at 50, it was made by Italian film-maker Nicole Leghissa, who spent two months travelling around the world to the locations seen in the film.


Are ‘weak values’ quantum after all?

A technique known as a “weak measurement”, which allows physicists to measure certain properties of a quantum system without disturbing it, is being called into question by two physicists based in Canada and the US. The researchers argue that such measurements, and their counterparts known as “weak values”, might not be inherently quantum mechanical and might not provide any original insights into the quantum world. Indeed, they say that the results from such measurements can be replicated classically and are therefore not properties of a quantum system.

More than 25 years ago, Yakir Aharonov, Lev Vaidman and colleagues at Tel Aviv University in Israel came up with a unique way of measuring a quantum system without disturbing it to the point where decoherence occurs and some information is lost. This is unlike conventional “strong measurements” in quantum mechanics, in which the system “collapses” into a definite value of the property being measured – its position, for example. Instead, the researchers suggested that it is possible to gently or “weakly” measure a quantum system, and to gain some information about one property (such as its position) without disturbing a complementary property (momentum) and therefore the future evolution of the system. Although each measurement only provides a tiny amount of information, by carrying out multiple measurements and then looking at the average, one can accurately measure the intended property without distorting its final value.

Screening unwanted measurements

The process involves making a “pre-selection” by preparing a group of particles in some initial state, followed by weakly measuring each of the particles at some point in time. Then a second, “post-selection” set of measurements is made at a slightly later time. The results of the weak measurements will, on average, imply certain results for the post-selection measurements, but they do not determine them. Ultimately, by screening out the unwanted results, one is left with a “weak value”.

In the original theoretical paper published in 1988, Aharonov and colleagues consider measuring the spin of a spin-½ particle. First, an ensemble of particles all in a particular state, say spin up, is created – this is the pre-selection. Next, one would make weak measurements of the spin of the particles many times, but as “gently” as possible. A final measurement would be made, and particles that are not in the desired state are discarded in the post-selection process. Then, by combining all three measurements, one would be able to measure the state of the system, according to Aharonov and colleagues.

The paper, however, identifies a very strange property of weak values. If the weak measurement is done in a certain way, it is possible for the weak value of the spin to be 100 rather than one-half, which would be the outcome of a strong measurement. Aharonov and colleagues call this an “anomalous weak value” and the paper remains controversial.
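The “spin 100” figure comes from the Aharonov–Albert–Vaidman expression for a weak value, A_w = ⟨ψ_f|A|ψ_i⟩/⟨ψ_f|ψ_i⟩, which can land far outside the range of eigenvalues when the pre- and post-selected states are nearly orthogonal. Here is a minimal numerical sketch for a spin-½ particle, working in units where a strong measurement of the spin returns ±1 rather than ±½; the particular states are an illustrative choice, not the ones used in the 1988 paper.

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Pre-selected state: spin up along +x
psi_i = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Post-selected state: almost orthogonal to psi_i, tilted past it by a small angle
delta = 0.01  # radians; the weak value grows roughly as 1/delta
ang = np.pi / 4.0 + delta
psi_f = np.array([np.sin(ang), -np.cos(ang)])

overlap = psi_f @ psi_i
weak_value = (psi_f @ sigma_z @ psi_i) / overlap

print(f"weak value of sigma_z: {weak_value:.1f}")               # ~100, despite eigenvalues of +/-1
print(f"post-selection success probability: {overlap**2:.1e}")  # ~1e-4, i.e. such events are rare
```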

In 2011 Aephraim Steinberg and colleagues at the University of Toronto demonstrated the technique by tracking the average paths of single photons passing through a Young’s double-slit experiment. In recent years, this method of weak measurement has gained momentum and has been used in some quantum-information technologies, including quantum feedback control and quantum communications.

Weak understanding

Now, Christopher Ferrie of the University of New Mexico and Joshua Combes of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, are questioning the concept of weak measurement. Indeed, they are highly sceptical of the whole field, saying that, at the very least, what information is gleaned from a weak measurement is currently not understood.

There might be something genuinely quantum about weak values, but to my eye that’s not clear yet
Joshua Combes, Perimeter Institute for Theoretical Physics

“Weak values do not seem to be a property of the system in any way,” says Ferrie. He and Combes claim that while the idea of weakly measuring a system is fine, making pre- and post-selections is akin to having a set of data and just favouring a subset of it – meaning that any measurement made is a consequence of classical statistics, rather than a physical property of the system. “So long as there is some correlation between the second [weak measurement] and third [post-selection] steps, you will have an anomalous weak value,” says Ferrie. But such a correlation would mean that the original quantum system being measured is no longer sound.

To illustrate their point, the researchers have come up with a classical analogue of a weak value presented in the Aharonov paper by adopting the world’s simplest random system: a coin flip.

It can be imagined as a coin-flipping game in which one player, called Alice, flips a coin and only passes the coin on to the other player, Bob, if it is “heads”, which is pre-selection. Bob has no prior knowledge of the state of the coin and can only glance at it quickly to try to determine its state, which is the weak measurement. Bob then fumbles the coin so there is a small chance that it flips, which is the disturbance. Finally, he hands the coin back to Alice, who looks at its state and discards all coins that come back heads – which is post-selection.

Very occasionally, Alice will receive a coin that is tails. Because all the coins were pre-selected heads, she assumes that Bob had measured heads and then flipped the coin during the disturbance process.

If heads is given the value +1 and tails –1, and if Alice concludes that Bob flips one in every 100 coins, the mathematical operations outlined by Aharonov and colleagues suggest that the weak value for Bob’s weak measurements is 100. Like Aharonov’s “spin 100” weak value, this is an anomalous result, because the values assigned to heads and tails are +1 and –1, and one would expect the weak value to be somewhere between the two.

Ferrie and Combes say that their example shows that weak measurements are merely an artefact of classical statistics and classical disturbances, and they argue that when a classical explanation suffices, there is no need to invoke a quantum explanation. “Statistics can fool you,” says Combes. “We think this particular weak-value puzzle is a statistical question, not a fundamentally quantum question. There might be something genuinely quantum about weak values, but to my eye that’s not clear yet.”

Rainer Kaltenbaek of the Quantum Foundations and Quantum Information group at the University of Vienna found the general idea underlying Ferrie and Combes’s analysis very interesting. “In particular, it shows that there’s often still confusion about what to make of weak values,” he says. He points out other research, carried out by Franco Nori of the University of Michigan and colleagues, which Combes and Ferrie refer to in their current work. Nori’s team has interpreted Steinberg’s experiment in completely classical terms.

Referring to Steinberg’s experiment with the photon trajectories, Kaltenbaek says that a weak value can be calculated if one averages over many photons. “It only becomes complicated if you try to infer something from that for a single photon – in my opinion, it [a weak value] does not tell you anything useful at all for a single photon,” he says, adding that Combes and Ferrie’s recent research illustrates this point quite well.

Combes and Ferrie’s analysis is published in Physical Review Letters.

Celebrating the mind

Martin Gardner (1914–2010) is often called “the best friend mathematics ever had”.

Gardner earned that epithet primarily for the “Mathematical Games” column he wrote for Scientific American for a quarter of a century, from 1956 to 1981, as well as for his many other popular books and writings (see http://martin-gardner.org for a full list). According to the mathematician Colm Mulcahy, who chairs a committee marking the centenary of his birth on 21 October 1914, Gardner “revolutionized how mathematics and rationality were perceived by a sizeable chunk of educated people”.

Mulcahy compares Gardner’s impact to that of Carl Sagan on astronomy, Richard Feynman on physics and Steve Jobs on computers. Gardner indeed developed an immense and loyal following. From the early 1990s onwards, hundreds of followers – not just mathematicians but logicians, magicians, philosophers and puzzle-designers too – have attended a biennial “Gathering for Gardner” (or G4G) to share Gardner’s diverse interests (see July 2008 p20).

Notoriously shy, Gardner did not want a memorial service. So when he died in May 2010, his friends improvised. The previous G4G (the ninth) had taken place just two months earlier, and it felt repetitious to organize a similar event. Instead, Gardner’s followers hatched the idea of hosting a set of events simultaneously in different places on what would have been Gardner’s 96th birthday – 21 October 2010. They called it “Celebration of Mind”.

That October, some 66 events – including five in the UK – were held on six different continents. Some consisted of just three or four people trying games and puzzles in bars, others of meetings of dozens or more in homes, while one Nebraska puzzle company hosted more than a thousand people. The celebration was so successful that it was held again the following year, when the number of events increased to 70 and the reach extended to all seven continents, thanks to scientists at McMurdo Station in Antarctica.

In 2012 the mathematician-songwriter Vi Hart created four videos about “hexaflexagons” – the folding paper models to which Gardner had devoted his first column. The 26-year-old Hart, sometimes called “today’s Martin Gardner”, has a strong youth following, thanks to her engaging videos (www.youtube.com/user/Vihart). The hexaflexagon videos went viral, attracting millions of hits and promoting that year’s celebration, which had 156 events worldwide.

This year will see the usual commemorative markings expected at a centennial: an article about Gardner in Scientific American, tributes in magazines and websites devoted to mathematics, magic and puzzles, a possible Google Doodle (a daily variant of the search engine’s logo) on 21 October, and so forth. But Gardner’s real spirit lives in the Celebration events (www.celebrationofmind.org).

Inner secrets

Gardner’s secret was to allow non-mathematicians to experience the pleasure of mathematics by getting them to actually do it – to solve problems themselves rather than be told the right answers. He enticed people by interweaving frivolity and seriousness. “The frivolity keeps the reader alert,” Gardner wrote in Mathematical Carnival (1975), his seventh collection of columns. “The seriousness makes the play worthwhile.”

While mathematicians claim Gardner as their own (though his most advanced degree, a BA, was in philosophy), many of his writings are about physics. These include a book on relativity and a “Trick of the Week” feature that ran in The Physics Teacher from 1990 to 2002. In Mathematical Carnival, for example, Gardner asked what would happen if a doughnut-shaped piece of solid iron were heated – would the diameter of its hole get larger or smaller? Elsewhere in the same book, he asked readers to imagine what would happen if a child, sitting in the back seat of a car with all windows closed and vents off, were holding a helium balloon floating at the end of a string. “When the car accelerates forward,” Gardner asked, “does the balloon stay where it is, move backward, or move forward? How does it behave when the car rounds a curve?”

The answer to this question – no, I’m not going to tell you, that’s the point – is surprising, and I was shocked when I saw it happen. John Railing, a magician and Celebration organizer, told me he tried it recently while driving his young daughter to a birthday party. After dropping her off, he mentioned it to the other parents, and for a few minutes was thrilled to hear all the adults at the party talking physics. “It’s a shame you can Google this nowadays and find that people have posted videos,” Railing says.

Googling it is not the Gardner way. The Gardner way is to ignite your fascination so that you experience the pleasure of finding the answer yourself. Ultimately, the fascination of balloon behaviour in cars is more visceral even than the cerebral attractions of the twin paradoxes of relativity and the Alice and Bob entanglements of quantum mechanics.

The critical point

A few weeks ago, a major publisher sent me a message promoting a textbook on “scientific rationality”. This appeared to mean a formal set of rules governing the operation of an abstract and universal “mind” that could “think from nowhere” about a range of areas. Learning to practise science, the author seemed to think, was a matter of acquiring a method in a classroom, then going out in the world to apply it.

That’s a hoax – a fake picture of science hundreds of years old – that I’d never inflict on students. They’d learn more about science from a “Celebration of Mind” event, where they’d experience themselves in the act of encountering a real puzzle of a specific sort at a specific time. That teaches more about how scientists actually practise their craft than formalized rules ever could.

Martin Gardner, who knew that very well, was also one of the best friends physics ever had.

Chemistry Nobel awarded for super-resolution microscopy

The 2014 Nobel Prize for Chemistry has gone to Eric Betzig, Stefan Hell and William Moerner for developing super-resolution microscopy techniques based on the fluorescence of molecules. The prize is worth SEK 8m (£690,000) and will be shared by the three winners, who will receive their medals at a ceremony in Stockholm on 10 December.

Betzig is a US citizen and works at the Howard Hughes Medical Institute, Hell is a German citizen and is at the Max Planck Institute for Biophysical Chemistry in Göttingen, and Moerner is a US citizen based at Stanford University.

The trio won the prize for overcoming what had seemed to be an insurmountable barrier to using microscopes to see features in biological cells that are smaller than a few hundred nanometres across – the so-called diffraction limit. Hell took one approach to solving the problem, while Betzig and Moerner took a somewhat different route. Both techniques, however, involve “tagging” a relatively large biological molecule of interest with much smaller fluorescent molecules that glow briefly (or “blink”) after being illuminated with a pulse of laser light.

Suppressing fluorescence

Hell’s method involves firing two lasers at the sample. One (the exciting laser) is tuned to cause the molecules to fluoresce and the other laser is tuned to suppress fluorescence. The clever trick is that the suppressing light has a dark region in the middle of its beam, the size of which is defined by the diffraction limit. The exciting light, on the other hand, illuminates a spot with a size defined by the diffraction limit. The effect of overlapping these two beams is the emission of fluorescent light from a central region that is smaller than the diffraction limit. Indeed, the size of the region can, in principle, be made arbitrarily small by adjusting the relative intensities of the two lasers. An image is acquired by scanning the location of the central region across the sample.
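The scaling usually quoted for this trick – a standard expression for STED-type resolution, not one given in the article – is

```latex
d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}\sqrt{1 + I_{\max}/I_{\mathrm{sat}}}} ,
```

where λ/(2 NA) is the ordinary diffraction limit, I_max is the peak intensity of the suppressing beam and I_sat is the intensity at which the fluorescence is effectively switched off. Turning up the suppressing beam shrinks the emitting region, in principle without bound.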

The technique developed by Betzig and Moerner involves illuminating the sample with a weak laser pulse to ensure that only a tiny fraction of the fluorescent molecules will blink at a given time. This tiny fraction means that it is extremely unlikely that any of these blinking molecules are separated by distances less than the diffraction limit. Each molecule will emit a number of photons during a blink and these are detected as an intensity peak that has a normal distribution with a width that is limited by the diffraction limit. However, because the light is known to come from a single molecule, the location of the molecule can be placed with high probability at the centre of the normal distribution. Furthermore, the uncertainty in the location of the molecule falls as one over the square root of the number of photons detected. While an individual image only shows the locations of a few molecules, repeating the process many times allows a composite image of all the molecules to be created.
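A minimal simulation of the localization argument is sketched below, with assumed numbers (a 250 nm diffraction-limited spot width and the photon counts; neither is taken from the article). It shows the centre-of-the-peak estimate homing in on the molecule’s position with an error of roughly σ/√N.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 250.0  # nm, assumed width of the diffraction-limited intensity peak
true_x = 0.0   # nm, actual position of the single fluorescent molecule

for n_photons in (10, 100, 1000):
    # Each detected photon lands at a position drawn from the diffraction-limited spread;
    # repeat the experiment 5000 times to estimate the spread of the position estimate.
    hits = rng.normal(true_x, sigma, size=(5000, n_photons))
    estimates = hits.mean(axis=1)  # locate the molecule at the centre of the photon distribution
    print(f"N = {n_photons:4d}: localization error ~ {estimates.std():5.1f} nm "
          f"(sigma/sqrt(N) = {sigma / np.sqrt(n_photons):5.1f} nm)")
```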

Well deserved

Frank Jäckel of the University of Liverpool in the UK, who worked in Moerner’s lab in the 2000s, described his former colleague as a “great researcher” and said that the prize is “well deserved”. He points out that Moerner’s discovery in 1989 that just one molecule in a sample can be detected was an important breakthrough that led to the development of the super-resolution microscopy technique.

John Dudley, president of the European Physical Society, told Physics World that the work of Betzig, Hell and Moerner “has pushed back the accepted wisdom of the limits of optical resolution [and is] a timely reminder how we should constantly question received wisdom and pre-conceived ideas”. He added that “Sometimes what we think is impossible is possible with new technology and imagination.”

Eric Betzig was born in 1960 in Ann Arbor, Michigan. After obtaining a BS in physics from the California Institute of Technology in 1983, he received his PhD from Cornell University in 1988 for work on near-field scanning optical microscopy.

Betzig then moved to Bell Labs in New Jersey, where he continued to work on near-field optics for applications in biology and data storage. In 1994 he left Bell Labs to start a research and consulting firm called NSOM Enterprises, but two years later left to become vice-president of R&D in his father’s machine-tool company, Ann Arbor Machine Company.

Return to academia

In 2002 commercial failure left Betzig unemployed, so he founded another R&D consulting firm, New Millennium Research. Three years later he went back to academia, joining the Howard Hughes Medical Institute (HHMI) in Ashburn, Virginia, where he is currently a group leader at the HHMI’s Janelia Farm Research Campus.

Stefan Hell was born in 1962 in Arad, Romania. After receiving his Diplom degree from the University of Heidelberg, Germany, in 1987, he was awarded a PhD from the same university in 1990 for work on the imaging of transparent microstructures. In 1991 Hell moved to the European Molecular Biology Laboratory, also in Heidelberg, and two years later became a senior researcher at the University of Turku in Finland. In 1997 Hell moved to the Max Planck Institute for Biophysical Chemistry in Göttingen, becoming a director in 2002. Hell is also currently a division head at the German Cancer Research Center in Heidelberg.

William Moerner was born in 1953 in Pleasanton, California. In 1975 he was awarded degrees in physics, mathematics and electrical engineering from Washington University in St Louis and in 1982 received his PhD in physics from Cornell University for work on the vibrational relaxation dynamics of alkali halide lattices. After his PhD, Moerner worked at the IBM Almaden Research Center in California as a researcher and then project leader, before joining the University of California, San Diego in 1995. He – along with his research group – then moved to Stanford University in 1998.

Who was the real Abdus Salam?

Photo of members of Abdus Salam's family at the International Centre for Theoretical Physics in Trieste

Matin Durrani in Trieste, Italy

It’s now my third day here at the International Centre for Theoretical Physics (ICTP) in Trieste, which is celebrating its 50th anniversary in grand style. Two days ago we had a marvellous seven-course dinner at Duino Castle, including a hugely spectacular fruit-laden golden-jubilee cake, while yesterday there was a possibly even more sumptuous eight-course dinner hosted by the city that has been home to the centre for half a century.

But pervading all the events has been Abdus Salam, the Pakistani Nobel-prize-winning theoretical physicist who set up the centre in 1964. We know pretty much what Salam did from a scientific point of view, which was celebrated in his 1979 Nobel prize for unifying the weak and electromagnetic forces, but what exactly was he like as a person?


Isamu Akasaki, Hiroshi Amano and Shuji Nakamura win 2014 Nobel Prize for Physics

The 2014 Nobel Prize for Physics has been awarded to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for their development of blue LEDs. The prize is worth SEK 8m (£690,000) and will be shared by the three winners who will receive their medals at a ceremony in Stockholm on 10 December.

Akasaki is a Japanese citizen and works at Meijo University and Nagoya University. Amano is a Japanese citizen and works at Nagoya University. Nakamura is a US citizen and works at the University of California, Santa Barbara.

The prize citation honours the trio for “the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”. The now ubiquitous LEDs are used in a wide range of applications, from televisions to sterilizers, and do not contain the toxic mercury that is found in fluorescent lamps.

Three-colour blues

A source of white light needs LEDs that deliver red, green and blue light. The first red LED was created in the 1950s and researchers then managed to create devices that emitted light at shorter wavelengths, reaching green by the 1960s. However, researchers struggled to create blue light.

In the 1980s Akasaki and Amano, working at Nagoya University, and Nakamura, working at the Nichia Corporation, focused on the compound semiconductor gallium nitride (GaN), which could be ideal for creating blue LEDs because it has a large band-gap energy corresponding to ultraviolet light.

There were many challenges, however, in making useable LEDs based on GaN. One major problem was how to create high-quality crystals of GaN with good optical properties. This was solved independently in the late 1980s and early 1990s by Akasaki and Amano and also by Nakamura. Both teams used metalorganic vapour phase epitaxy (MOVPE) techniques to deposit thin films of high-quality GaN crystals onto substrates.

Doping discovery

Another seemingly insurmountable challenge facing the researchers was how to dope GaN to make it a p-type semiconductor, which is crucial for creating an LED. Akasaki and Amano noticed that when GaN doped with zinc is placed in an electron microscope, it gives off much more light. This suggested that electron irradiation improved the p-doping – an effect that was later explained by Nakamura.

Image of blue LEDs

The next step for both teams was to use their high-quality, p-doped GaN along with other GaN-based semiconductors in multilayer “heterojunction” structures. Nakamura was then able to create the first high-brightness blue LED in 1993.

Praising the laureates, the chair of the Nobel Committee for Physics, Per Delsing, said: “A lot of big companies tried to [develop blue LEDs] and they failed, but these guys persisted and eventually they succeeded.”

Today, GaN-based LEDs are used in back-illuminated liquid-crystal displays in devices ranging from mobile phones to TV screens. LEDs emitting blue and ultraviolet (UV) light have also been used in DVDs, where the shorter wavelength of the light allows higher data-storage densities. Looking into the future, UV-emitting LEDs could be used to create basic yet effective water-purification systems, because UV light can destroy micro-organisms.

Invention or discovery?

Over the past 10 years there have been three other physics Nobel prizes awarded for work with significant commercial potential: giant magnetoresistance in 2007; fibre optics and charge-coupled devices in 2009; and graphene in 2010. While most prizes are associated with more esoteric discoveries, like the Higgs boson, Alfred Nobel decreed in his will that the prize could also be given for an important invention in physics.

“Alfred Nobel would be very happy about this prize,” says Delsing. “[The blue LED] is really something that will benefit most people.”

David Gross from the Kavli Institute for Theoretical Physics at University of California, Santa Barbara, who shared the 2004 Nobel prize for his work on asymptotic freedom, is happy that in recent years both pure and applied research are being recognized. After addressing a meeting in Trieste to mark the 50th anniversary of the International Centre for Theoretical Physics, where he had stressed the importance of blue-sky research, Gross told Physics World that “Every five or six years the prize is awarded to an invention that has conferred a great benefit to humankind, such as the transistor, the laser and fibre optics. I think the existing ratio is just about right.”

Akasaki was born in Chiran, Japan, in 1929. He graduated from Kyoto University in 1952 and received his PhD in 1964 from Nagoya University.

Amano was born in Hamamatsu, Japan, in 1960. He received his PhD in 1989 from Nagoya University.

Nakamura was born in Ikata, Japan, in 1954. He graduated from the University of Tokushima in 1977 with a degree in electronic engineering and obtained a Master’s degree in the same subject two years later. He then joined the Nichia Corporation, a small company located in Tokushima on the island of Shikoku. Nakamura was awarded a PhD in 1994 from the University of Tokushima.

Quantum dances at the intersection of science and culture

I’m fascinated by the interactions between science and culture, which is what led me to the Brooklyn Academy of Music (BAM), which was hosting the US première of Quantum, a dance piece that had its debut at CERN, where it was created. The event was staged in a simple, black-box space, with the audience seated around a square floor in three rows with no proscenium. But it was an upscale black box, with elegant seating upholstered in a blue-and-gold metallic sheen. Four industrial lights were suspended from the ceiling by long cables.

The lights dimmed. When they came back on, six dancers paired in couples jiggled and jerked as if buffeted by Brownian-like forces. The overhead lights began moving in slow, silent circles, making it seem as if the stage itself were in motion. Symmetries appeared in some movements of the dancers, passing from couple to couple, while the music alternately crackled, chimed and sounded like static. The dancers ceased their pairings and began moving as a plasma-like whole. At one point they gathered together to create a sphere with their hands; their movements were shaping an object whose movements began shaping their own. The four overhead lights now began to move independently, making light splotches combine and recombine all over the dancers and floor – and it suddenly dawned on me that this kinetic lighting system, too, was part of the performance. (I hadn’t read the programme carefully beforehand.) The motions of dancers and lights eventually slowed to a halt. The light vanished, once again bathing the black box in darkness. The ensemble of wavelike movements of the particle-like dancers, I thought, had created an artistic whole.

André Schaller, Switzerland’s Ambassador and Consul General in New York, opened the reception afterwards by citing the piece as the product of a “creative collision” between art and science.

I ran into Gilles Jobin, who had choreographed Quantum during an artist’s residency at CERN. I asked him the following question: “If a fellow choreographer who knew nothing about the piece were to watch it, is there anything in the movement or structure of the work that might cause that person to say ‘That choreographer must have spent several months at a physics lab!’?” Gilles paused, then said “No.” The influence of the laboratory environment, he said, was in inspiring him to come up with certain kinds of what he called “movement generators”, or inspirations for the dancers to create their own movements. “For instance, all those symmetries – like ghost symmetries – that I didn’t even know existed!” he said. I asked him why he had chosen the work’s title. “I considered other names,” he said. “Basically, Quantum was just a convenient tag that referred to the context – the CERN laboratory environment – in which I had created the work.”

It was easy to pick out Julius von Bismarck, designer of the kinetic lighting system. His appearance – tall, shaved head, long flowing beard – is as unforgettable as his name. He had also been to CERN, and I asked him how, if at all, the laboratory environment had shaped the work. “Interference,” he said. “I thought a lot about the way light interacts with itself to form patterns. Also chaos – the way patterns can turn slowly to chaos but we still seek patterns in the chaos.”

Carla Scaletti, who composed the music, told me that a physicist working on the ATLAS experiment had provided her with some LHC data files, and that she had used the numbers in those files to control the parameters of her sound.

The six dancers in Jobin’s company were from five different countries; they included Catarina Barbosa from Portugal, the shortest dancer. I asked her if she had felt any difference between performing the work at BAM and at CERN. She told me that there definitely was a difference. The CERN performances were on a stage above the CMS detector, and it definitely felt like a “physics space”. At BAM, she said, it was a “dancer’s space”, more intimate.

For symmetry considerations, I thought I’d end the evening by tracking down the tallest dancer, a Brazilian. But by then the power of the quantum was weakening, I lost track of him, and I headed back to Manhattan.
