
The eerie silence

Whether or not we are alone in the universe is one of the great outstanding questions of existence. For thousands of years it was restricted to the realm of philosophy and theology, but 50 years ago it became part of science. In April 1960 a young US astronomer, Frank Drake, began using a radio telescope to investigate whether signals from an extraterrestrial community might be coming our way. Known as the Search for Extraterrestrial Intelligence, or SETI, it has grown into a major international enterprise, involving scientific institutions in several countries. Apart from a few oddities, however, all that the radio astronomers have encountered is an eerie silence. So is humankind the only technological civilization in the universe after all? Or might we be looking for the wrong thing in the wrong place at the wrong time?

SETI emerged from the postwar expansion of radio astronomy, and the dawning realization that radio telescopes possess the power to communicate across interstellar distances. A landmark paper published in 1959 in the journal Nature by Giuseppe Cocconi and Philip Morrison urged researchers to perform a systematic search of the sky for alien radio traffic (Nature 184 844). Drake took up the challenge, using the 26 m dish at Green Bank in West Virginia, and others around the world soon joined in.

Much of the activity is now co-ordinated by the SETI Institute in California, located close to the NASA Ames Research Center, which specializes in astrobiology. The research is almost all privately funded. The jewel in SETI’s crown is the Allen Telescope Array, a system of 350 small networked dishes currently under construction in Northern California and named after its principal benefactor, the Microsoft co-founder Paul Allen. To date, 42 dishes are operational. There is also a small optical SETI programme, which searches for brief flashes of laser light, and we should not forget the numerous amateur enthusiasts taking part in Internet-based projects like SETI@home.

The concept of SETI was greatly popularized by the late Carl Sagan, the charismatic Cornell University planetary scientist and author of the book Contact, which subsequently became a Hollywood movie starring Jodie Foster in the role of the star-struck radio astronomer who picks up an alien message. Sagan championed the notion that an altruistic civilization somewhere in the galaxy might be beaming radio signals at the Earth to bestow cosmic wisdom or establish a dialogue. It is an uplifting vision, but is it credible?

A major problem with Sagan’s thesis is that if there are any aliens out there, they almost certainly have no idea that the Earth hosts a radio-savvy civilization. Suppose there is an advanced alien community 500 light-years away – close even by optimistic SETI standards. However fancy their technology might be, the aliens will see the Earth today as it was in the year 1510, long before the industrial revolution. In principle they could detect signs of agriculture and construction works such as the Great Wall of China, and they might predict that we would go on to develop radio astronomy after a few centuries or millennia, but it would be pointless for them to start signalling us until they obtained positive evidence that we were on the air. This would come when our first radio signals reached them, which will not be for another 400 years. It would then take a further 500 years for their first messages to arrive. So Sagan’s scenario might be conceivable in another millennium or so.
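The timing here is simple light-travel arithmetic, sketched below; the 1910 start date for Earth's first radio broadcasts is an assumption chosen for illustration, not a figure from the article.

```python
# Round-trip contact timeline with a civilization 500 light-years away.
# Radio waves cover one light-year per year, so each leg takes 500 years.
# The 1910 broadcast date is an illustrative assumption, not a measurement.

DISTANCE_LY = 500             # assumed distance to the alien community
FIRST_BROADCASTS = 1910       # rough date Earth became "radio-loud"

aliens_learn_of_us = FIRST_BROADCASTS + DISTANCE_LY      # our signals arrive
their_reply_arrives = aliens_learn_of_us + DISTANCE_LY   # their answer returns

print(aliens_learn_of_us)     # → 2410, i.e. about 400 years from now
print(their_reply_arrives)    # → 2910, a further 500 years on
```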

Does this mean that SETI is a waste of time? Not necessarily. There may be other radio traffic we can detect. Unfortunately, the Earth’s biggest antennae are not currently sensitive enough to pick up television transmitters at interstellar distances, and unless the galaxy is teeming with civilizations frenetically swapping radio messages, it is exceedingly improbable that we would stumble upon a signal directed at another planet that simply passed our way by chance. A more realistic hope is that an alien civilization has built a powerful beacon to sweep the plane of the galaxy like a lighthouse. A beacon could serve a variety of purposes: as a monument to a long-vanished culture; as a way to attract attention and make first contact; as an artistic, cultural or religious symbol; or as the cosmic equivalent of graffiti. It might even be a cry for help, or, as with the humble lighthouse, a warning.

Over the years there have been many unexplained radio pulses. The most famous was the so-called Wow! signal, detected on 15 August 1977 by Jerry Ehman using Ohio State University’s Big Ear radio telescope. The signal lasted for 72 s (rather a long pulse), and has not been detected again. Ehman discovered it while perusing the antenna’s computer printout, and was so excited he wrote “Wow!” in the margin. The signal has never been satisfactorily accounted for as either a man-made or a natural phenomenon.

Unfortunately, current radio astronomy is not well adapted to evaluating putative beacons. The traditional SETI approach is to listen in to promising target stars for about half an hour each, simultaneously covering a billion or more 1 Hz channels in the low gigahertz (10⁹ Hz) frequency range. The output is then analysed using software that is able to identify narrow-band (sharp-frequency) continuous sources. If one is detected, then the astronomers carry out a series of checks to eliminate man-made signals, including pointing the telescope off and on the target to see if the signal fades and returns, and enlisting a distant back-up telescope for confirmation.
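The channelized search can be illustrated with a toy sketch: Fourier-transform a stretch of data into 1 Hz channels and flag any channel that towers over the noise floor. Every number here (sample rate, carrier frequency, detection threshold) is invented for illustration; this is not the SETI Institute's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1 s of broadband noise plus a weak narrow-band "beacon" at 1000 Hz.
n_samples = 2**16
sample_rate = 2**16                       # Hz; 1 s of data gives 1 Hz channels
t = np.arange(n_samples) / sample_rate
data = 0.05 * np.sin(2 * np.pi * 1000 * t) + rng.normal(0, 1, n_samples)

# Channelize: the FFT of 1 s of data has 1 Hz-wide frequency bins.
power = np.abs(np.fft.rfft(data)) ** 2
power /= np.median(power)                 # normalize to the noise floor

# Flag any channel far above the noise (threshold chosen arbitrarily).
candidates = np.flatnonzero(power > 20)
print("flagged channels (Hz):", candidates.tolist())
```

Even a carrier 26 dB below the noise in the time domain stands out sharply once the power is concentrated into a single 1 Hz channel, which is why narrow-band signals are the traditional SETI target.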

The problem is that this all takes time: a brief ping from a beacon cannot be crosschecked and might not recur for months or even years. It would probably be shrugged aside as being of natural origin, or simply left as a mystery. Ideally, a search for beacons would involve a dedicated set of instruments that stares towards the star-rich region of the Milky Way uninterrupted for years on end. This part of the galaxy is where the most ancient stars – and perhaps the oldest and wealthiest civilizations – are to be found. But a project of this magnitude is unlikely to be funded in the foreseeable future.

The Drake equation

When Frank Drake embarked on radio SETI, he wrote down an equation to quantify the expected number, N, of communicating civilizations in the galaxy. It is not so much an equation in the conventional mathematical sense, more of a way to quantify our ignorance. It is N = R* × fp × ne × fl × fi × fc × L, where R* is the rate of formation of Sun-like stars in the galaxy, fp is the fraction of those stars with planets, ne is the average number of Earth-like planets in each planetary system, fl is the fraction of those planets on which life emerges, fi is the fraction of planets with life on which intelligence evolves, fc is the fraction of those planets on which technological civilization and the ability to communicate emerges, and L is the average lifetime of a communicating civilization.
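Because the equation is just a product of factors, a calculator is a one-liner. The sample values below are illustrative guesses only, especially fl and fi; they are not figures the article endorses.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number N of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs only: fl and fi are the big unknowns, so any
# numbers placed here amount to quantified guesswork.
N = drake(R_star=1,    # Sun-like stars formed per year
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # Earth-like planets per planetary system
          f_l=0.1,     # fraction of those on which life emerges (guess)
          f_i=0.01,    # fraction of those evolving intelligence (guess)
          f_c=0.1,     # fraction developing communication (guess)
          L=10_000)    # lifetime of a communicating civilization, years

print(f"{N:.2f}")      # → 1.00: one such civilization, with these guesses
```

Varying fl alone over its plausible range, from "vanishingly rare fluke" to "cosmic imperative", swings N across many orders of magnitude, which is the point the equation makes.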

The Allen Telescope Array in Northern California and Frank Drake

Some of the terms, such as the fraction of stars with planets, can now be quantified rather well – astronomers estimate fp to be greater than 0.5. Moreover, NASA’s Kepler planet-finding mission, which was launched in March 2009, should soon provide some indication of how many planets are Earth-like, i.e. ne. However, the uncertainty in N is totally dominated by two huge unknowns: fl and fi. Scientists currently have no credible theory of life’s origin, so putting a probability on it is meaningless. When SETI began, it was widely believed that life on Earth was an incredibly unlikely fluke, a chemical accident of such low probability we would not expect it to happen anywhere else in the observable universe. Today, the pendulum of opinion has swung to the point where many astrobiologists declare that life arises easily and is almost bound to occur whenever a planet has Earth-like conditions. If they are right, then the galaxy should be teeming with inhabited worlds. The Nobel-prize-winning biologist Christian de Duve even goes so far as to call life “a cosmic imperative”.

Unfortunately, the hypothesis of biological inevitability, while fashionable, has no observational support at this time. There is one way we can test it, though, without actually discovering biology on another planet. If life really does pop up readily in Earth-like conditions, then no planet is more Earth-like than Earth itself, so surely it must have formed many times right here on our home planet. And how do we know it did not?

It turns out that nobody has really looked. Most terrestrial life is microbial, and biologists have only scratched the surface of the microbial realm. Many bizarre micro-organisms have been discovered, including the so-called extremophiles that thrive in conditions lethal to most known forms of life, but so far all of these organisms have turned out to belong to the same tree of life as you and me. However, this means little. Biologists customize their techniques to target standard life, so any microbes with a radically different form of biochemistry would tend to be overlooked. In recent months, however, there has been a surge of interest in searching for a second sample of life in the form of a “shadow biosphere”. This would be a hitherto overlooked domain of microbial life, populated by organisms with a radically different biochemistry, existing alongside (and perhaps even interpenetrating) the standard terrestrial biosphere; the shadow biosphere would have life, but not as we know it. The point is that if we found that life on Earth had started from scratch more than once, then the case for life as a cosmic imperative would be hard to ignore; it would be extraordinary if life had started on Earth more than once but not at all on other Earth-like planets.

Even if life is common in the universe, the likelihood of intelligent life – fi in the Drake equation – might still be very low. Biologists disagree sharply in their assessment of whether intelligence is an insignificant aberration, like the elephant’s trunk, or belongs to the category of traits such as wings and eyes, which fulfil such a basic biological role that they have been “invented” by evolution again and again. However, if life does get going elsewhere, at least it is in with a chance to evolve intelligence. So in my opinion, the big unknown in the Drake equation remains fl. Until we have a better idea of what that fraction is, any attempt to put a “reasonable” numerical value on N is fanciful.

Signatures of intelligence

Even if receiving a message for humankind turns out to be a forlorn hope, we might still accumulate evidence, perhaps indirect, that shows we are not alone in the universe. The only way we can deduce that intelligence exists, or has existed, beyond Earth, is through its technological footprint. Because we cannot know the specifics of highly advanced alien technology, this line of inquiry involves a great deal of guesswork. Also, an alien civilization might not make a deliberate attempt to be conspicuous, so traces of its activity could be very subtle and require sophisticated scientific methods to tease out.

Humans have significantly modified their planet in just a few thousand years, so it is not inconceivable that a multimillion-year technological community has made noticeable changes to its astronomical environment. Long ago, the physicist Freeman Dyson suggested that an energy-hungry civilization might create a shell of material around its host star to trap most of the radiation. If such a “Dyson sphere” existed, it would leave a distinctive signature in the infrared. Searches for these objects have been made, but so far have yielded nothing (see “The search for astroengineers”).

Extremophile microbe

Other large-scale astroengineering projects might involve adapting the host star in some way, thereby changing its spectral and thermal characteristics, and thus making it stand out as an anomaly to a sharp-eyed terrestrial astronomer. Even changes confined to a planet’s surface may be detectable in the not-too-distant future in the form of industrial pollutants or other weird molecules in the spectrum of the planet’s atmosphere. The Kepler mission should soon produce a tally of Earth-like extrasolar planets that would be a natural target list for a future space-based optical system with this capability. We must also be alert to the possibility that an alien community might produce very different by-products from humanity – perhaps ultra-energetic neutrinos in the peta-electron-volt (10¹⁵ eV) range or intense bursts of gamma-ray photons from matter–antimatter annihilation that would be too concentrated to come from any plausible natural source.

Easier to find would be traces of alien technology in our astronomical backyard. In 1950 Enrico Fermi famously noted that a space-faring civilization would be able to spread across the galaxy in a period of time far less than the age of the galaxy, and that it would require only one such expansionary community for the Earth to have been “taken over” long ago. The fact that aliens are not already here suggested to Fermi that they are not out there either – a conclusion that has since been dignified with the term “the Fermi paradox”.

There are many resolutions of the Fermi paradox in addition to the obvious one that there are no aliens. For example, space travel may be too costly or dangerous to be worth undertaking, or alien civilizations may inevitably self-destruct before they embark on colonizing other worlds. But a more interesting resolution worth testing is that interstellar migration has happened and is still happening, although in a more complicated manner than Fermi envisaged. Robin Hanson, an economist at George Mason University, has used an economic model of migration in which communities spread out from their home planet and colonize others – some consolidating, others moving on – to form a complex web of settlement and movement, in which there is always a fastest wave of migration at the “frontier” advancing into unexplored territory. Hanson’s model suggests a possible scenario in which a migration wave may have passed through our region of the galaxy but then moved on, perhaps leaving some telltale signs in the form of artefacts, industrial waste or mining activity.

When might this have happened? One of the hazards of reflecting on SETI is the temptation to think on too short a timescale. The solar system is a fraction of the age of the galaxy, and Earth-like planets may have existed billions of years before the Earth even formed. In the absence of any reason to the contrary, one should, to a first approximation, assume a roughly uniform probability distribution over many billions of years for the rate of emergence of technological civilizations. If so, then the expected historical date of a migration wave is not to be measured in thousands or even millions of years, but billions.

In other words, it is not inconceivable that the solar system was visited, say, three billion years ago. If the probability distribution actually rises with time, which seems more realistic, then it may be more accurate to think in terms of tens to hundreds of millions of years. But the chances of the Earth having been visited in the last few thousand years – the stuff of much science fiction – are exceedingly low: such a visit would represent an extraordinary coincidence. Why would our planet happen to be visited just as human civilization started to flourish?

Alien signs

If aliens had visited our planet a hundred million years ago, would any traces of their technology have survived to the present day? Anything on the Earth’s surface would be severely degraded by weathering, tectonic activity, glaciation and so forth. The scars of large-scale mining or quarrying might remain, however, albeit perhaps buried beneath rock strata and detectable only in careful geological surveys. An artefact deliberately buried on the Earth or, better still, the Moon, could easily have gone undetected. Radioisotopes from nuclear explosions or engineering might show up as geological anomalies.

Infrared image of a galaxy

Comets and asteroids would be a good source of raw materials, and might display signs of meddling, for example by anomalous absences or distributions of certain types. However, any surviving engineering structures in the asteroid belt would be very hard to spot without exhaustive searches. An advanced technology might also have exploited exotic energy sources, such as dark matter or magnetic monopoles. Monopole particles are predicted to have been made in copious quantities in the Big Bang but have never been detected, in spite of many searches. This puzzling absence is widely attributed to the theory of inflation, according to which the universe leapt in size by a huge factor in the first split second after the Big Bang, thereby reducing the density of monopoles to near-zero. But inflation is far from proved, and if monopoles do exist in the universe at some reasonable density, then they would make an ideal energy source. A north and a south monopole are each other’s antiparticles, and so could annihilate. Because their mass is predicted to be 10¹⁵ times that of a proton, the energy released per annihilation event would be enormous. If an alien technology had harvested all the monopoles in our region of space, it would be no surprise that we do not detect any. Of course, this explanation is highly fanciful, but it serves to illustrate the type of thing we need to watch for: the anomalous absence of something that should be there, or the anomalous presence of something that should not.
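The quoted mass makes the energy easy to check: each annihilation liberates the rest energy of both particles, E = 2mc², and with a monopole mass of 10^15 proton masses that comes to roughly 300 kJ from a single particle pair. A quick back-of-envelope check (constants rounded):

```python
# Energy from one monopole-antimonopole annihilation, using the
# quoted monopole mass of ~10^15 proton masses.
PROTON_REST_ENERGY_J = 1.503e-10   # ~938 MeV expressed in joules
MASS_RATIO = 1e15                  # monopole mass / proton mass

# Annihilation releases the rest energy of both particles: E = 2 m c^2.
energy_j = 2 * MASS_RATIO * PROTON_REST_ENERGY_J
print(f"{energy_j:.1e} J")         # → 3.0e+05 J per annihilation event
```

That is about the chemical energy of several grams of petrol, released by a single pair of particles, which is what makes monopoles such an attractive hypothetical fuel.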

One idea investigated by SETI researchers is to look for an alien artefact at the stable L4 and L5 Earth–Sun Lagrange points – pockets in space where an object can keep pace with the Earth as it orbits the Sun. A probe sent from an alien planet, or a monitoring device left after an expedition moved on, could be parked there without the need for orbital corrections. It has even been suggested that such a probe might attempt to establish contact with us by radio or via the Internet. Should that happen, the problem of the finite speed of light, which is a dampener for traditional SETI, would be circumvented.

As a final example of what we might look for, an alien expedition or migration wave may have tampered with terrestrial microbiology, perhaps creating its own shadow biosphere to assist with mineral processing, terraforming or energy production. Also, if the aliens really wanted to leave a message for posterity, implanting it in the genomes of micro-organisms might be a better strategy than sending out radio signals from a beacon. Using viruses or living cells as information repositories has many advantages: biological nanosystems are self-replicating and self-repairing, and have the potential to conserve information for millions of years. Some genes, for example, have remained largely unchanged for more than a billion years.
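A genome-as-archive scheme is easy to sketch: with four bases, each base carries two bits, so ordinary text maps directly onto a DNA sequence. The toy encoder below ignores every biological constraint (codon usage, mutation, reading frames) that a real implementation would face; it illustrates only the information capacity.

```python
# Toy scheme: 2 bits per base, so one ASCII byte becomes four bases.
BASES = "ACGT"

def encode(text: str) -> str:
    """Map each byte of text to four DNA bases (2 bits per base)."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in text.encode("ascii")
                   for shift in (6, 4, 2, 0))

def decode(dna: str) -> str:
    """Invert encode(): read bases back into bytes, four at a time."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return out.decode("ascii")

message = encode("WE WERE HERE")
assert decode(message) == "WE WERE HERE"   # round trip is lossless
print(len(message), "bases for a 12-character message")
```

At four bases per character, even a lengthy message occupies a negligible fraction of a microbial genome, which is what makes the idea of a biological time capsule at least arithmetically plausible.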

Post-biological intelligence

Speculation about SETI is bedevilled by the trap of anthropocentrism – a tendency to use 21st-century human civilization as a model for what an extraterrestrial civilization would be like. But when contemplating a multimillion-year technology, human categories are almost certainly misleading. Perhaps the most suspect assumption is that we would be dealing with flesh-and-blood beings. It is likely that biological intelligence is but a transitory phase in the evolution of intelligence in the universe; even on Earth we can predict the rise of “artificial” intelligence and glimpse a future in which engineered information-processing systems and genetically modified neural networks will be merged to create novel “thinking systems” that far outstrip human intellectual prowess.

What the priorities or technological demands of such entities might be we cannot know. The most powerful thinking systems in the universe might turn out to be instantiated in quantum computers – what the Nobel-prize-winning physicist Frank Wilczek has dubbed “quintelligence”. Such entities may be physically small, have negligible energy requirements and be located in intergalactic space to exploit its low temperatures. If so, the technological footprint of quintelligence would be so modest that we would never spot it.

The late author and futurist Arthur C Clarke once remarked that a sufficiently advanced technology would be indistinguishable from magic. It is therefore crucial that we expand our thinking about alien technology from mere extrapolations of human technology and begin looking for any system or process that displays the hallmark of intelligent manipulation. After 50 years of traditional SETI, the time has come to widen the search from radio signals. Using the full array of scientific methods, from genomics to neutrino astrophysics, we should begin to scrutinize the solar system and our region of the galaxy for any hint of past or present cosmic company.

SETI is enormous fun and of great interest to the public. The momentous nature of a positive result hardly needs to be spelt out. Unfortunately, the subject represents a level of speculation unusual even by the standards of contemporary theoretical physics, and it may turn out to be a wild-goose chase. Nevertheless, as Cocconi and Morrison pointed out in their trailblazing paper, if we do not bother to look for extraterrestrial intelligence, the chances of finding it are zero.

Talking physics in the Twittersphere

The conventional media – be it newspapers or TV – often do more harm than good in their portrayal of scientific issues. Indeed, there is little evidence that the mainstream media do much to improve the public’s understanding of science. Broadcasters still feel it is necessary to have someone to speak against climate change to have “balance”, while we regularly see newspapers attempting to show that everything we consume either causes or cures cancer. The Daily Mail, for example, has published at least 20 stories about cancer and wine alone in the last two years, with headlines such as “Drinking just one glass of wine a day can increase risk of cancer by 168%” (23 February 2009) and “People who drink a large glass of wine a day can reduce their chances of getting bowel cancer” (18 January 2010).

Scientists themselves have not helped. All too often, communication from scientists to the public has been either sparse or so laden with jargon that it fails to get any message across. There is also a failure to understand that most people expect science to come up with “the truth” – a black-and-white statement of fact. Science, hedged with probabilities and error bars, needs a much more detailed conversation with the public than either the media allow or scientists realize.

Interacting or broadcasting?

The rise of new media is now making it possible to establish a broader range of communication, and to make the link between scientists and the public more interactive. Broadly, these new opportunities split into two – personal broadcasting with feedback, and interaction with a community. (I am excluding from this discussion members-only sites, such as the Institute of Physics’ MyIOP, which are specifically designed for professionals only.)

Interacting with an online community can be done through a fan group on Facebook, such as the one run by Physics World, or a dedicated environment such as Nature Network. Here, discussions can be built round topics of interest, allowing interaction between scientists and the public in a relatively unstructured fashion. A good example of this is the chemistry forum on Facebook. Here you will find everything from discussions of chemistry as a career to requests for answers to questions such as “Why does silver chloride turn black when exposed to light?”.

Membership of such a community, however, usually implies more commitment than just reading and responding to a blog post, and as such is always likely to have a relatively small audience. The alternative – personal broadcasting with feedback – is better developed than communities and leaves the agenda to the scientists. One of the first effective methods for scientists to communicate directly with the public was Wikipedia, where the science articles are often surprisingly well written and accurate. The article on the Big Bang, for example, has plenty of good content with many opportunities to click through to access more detail on related material. The open nature of the Wikipedia format means that much more information can be provided than in a conventional encyclopedia.

Although Wikipedia remains an interesting way to explain an area of expertise, it lacks the opportunity for feedback. This is where blogs come into their own. A science blog gives a scientist or science writer the opportunity to really explore areas of interest and to put them across to the general public. And because readers can then attach comments, conversation emerges between them and the blogger.

Most blogs are arguably more narrowcasting than broadcasting, typically reaching tens or hundreds of readers. But a good, consistently written blog will build up a following and can start to reach a wider audience. And, crucially, that two-way flow in the comments makes for a much richer form of communication – for example, take a dip into Science Blogs for some good illustrations of this kind of interaction.

Twittering on

This two-way interaction is taken to the extreme in the microblogging site Twitter. It might seem that any medium limiting the author to 140 characters is far too basic to have a benefit for science communication, but Twitter has two huge things going for it. One is immediacy, the other a sense of connection that is not present in an ordinary blog.

The immediacy is evident when you look at the way that Twitter has been used successfully by, for instance, the Planck satellite team and CERN to keep their followers informed of developments. It is like a live news ticker-tape stream – but much more personal. As for the sense of connection, the power of Twitter is its two-way nature. Regular Twitter users, such as the UK science minister Lord Drayson, are much more likely to reply to you on Twitter than they are to respond to an e-mail.

The personal feeling of Twitter makes it ideal for breaking down barriers between the science community and the general public. And though the 140-character message length is limiting, it is actually no bad thing. It makes us think about how we phrase information – essential when communicating. And it is always possible to include a website link in the Tweet for people to access more information.

Science tweeting

Do blogs and Twitter give a real benefit to science? I believe so. You only have to look at the confusion caused by the conventional media’s dealing with a topic like the MMR (measles, mumps and rubella) vaccine and the allegations of its links with autism. Those in the know were able to go to Ben Goldacre’s Bad Science blog and get an understanding of the limitations of the science involved. In the same way, the Bad Astronomy blog has covered a range of issues in the physical sciences, and more general blogs like my own introduce many science topics.

The best thing for readers of science blogs and followers of science Tweets is that they get their information from those who really understand it, rather than through the filter of an arts-graduate news editor. And though the channel may only reach a small number of individuals, it is arguable that good science communication with the public has to be multi-tier: popular-science books giving the detail and the context; science magazines and news media giving the headlines and features; and interactive online science via blogs, Twitter and forums giving the fine detail and offering the ability to ask questions.

Of course, there is one significant problem with using the Internet to communicate science: anyone can write online about science subjects. If we are to encourage the public to get information in this way, then they also need to be aware of the importance of knowing where the information is coming from. There are two broad mechanisms for this.

The first is “official” blogging and Tweeting. If, for instance, I sign up for a Twitter feed from CERN, I am pretty sure I will get good information. The second is peer recommendation. I follow a law-related blogger called Jack of Kent who has been hugely informative on the libel trial of the physicist-turned-science-writer Simon Singh. Singh has been sued for libel by the British Chiropractic Association for an article in the Guardian where he claimed the organization promoted “bogus” treatments. The Jack of Kent blog and Twitter feed have provided detailed information on the case and rallied supporters. I do not know Jack of Kent nor is he “official”, but the strength of recommendation from others that I trust on their blogs and Twitter feeds made it likely that he would be a good source.

Benefits of blogs

So if the electronic media, particularly blogging and Twitter, are valuable, what should physicists be doing about it? We ought to encourage more scientists to make use of these media to communicate directly with the general public. It does not have to take up a lot of time. If you are at CERN or the European Space Agency, then you have a communications team to help, but there is no reason why individual scientists should not build their own following online.

However, before plunging in, would-be science communicators should get a feel for best practice. Internet science communication is most effective when informal and phrased with a good eye for what is interesting, rather than just technically correct. Such writing needs an enticing story-like structure (Twitter can then act as an attention-grabbing headline for a blog post on the subject). And perhaps most importantly, it needs to be phrased on a level with the readers, rather than talking down to them. Some science bloggers, for instance, do not hesitate to describe anyone who disagrees with them on evolution as stupid. Taking this attitude turns a wide section of the audience against the writer, whether or not they agree with the science. Being condescending is no way to communicate.

Without good communication, we get poor public awareness, lack of support for science funding, and, worst of all, no understanding of what science is and why it is so important. Internet science is a 21st-century version of the kind of personal communication that the Royal Institution manages so well in its public lectures. It is an opportunity that should not be missed.

The Majorana mystery

“For years after I first learned about Ettore Majorana, I wanted to write a book about his life… but I always deferred to an uncertain future the act of putting pen to paper. Then, one day I read a newspaper clipping and realized that the mystery was about to come full circle. Ettore had just turned one hundred, and a major discovery had been made in the deep waters near Catania. The time had arrived for the final unveiling of the Majorana legacy.”

With that spirit, the theoretical physicist João Magueijo presents A Brilliant Darkness: The Extraordinary Life and Mysterious Disappearance of Ettore Majorana, the Troubled Genius of the Nuclear Age. This lengthily subtitled work is a peculiar and intensely personal biography of Majorana, the Italian scientist who, in his short lifetime, was considered a genius on a par with Galileo Galilei and Isaac Newton by none other than Enrico Fermi. Today, Majorana’s name appears in many areas of cutting-edge physics research, ranging from elementary particle physics to applied condensed matter and mathematical physics. Such long-lasting contributions are certainly a testament to Majorana’s uncommon far-sightedness, while his disappearance after boarding a ship bound for Naples on 25 March 1938 plays – or at least ought to play – only a marginal role in explaining why he has captured the interest of so many scientists and historians.

Ettore Majorana was born on 5 August 1906 in Catania, Sicily, into a family with a rich scientific, technological and political heritage. The extended Majorana family included renowned scientists, jurists, members of the Italian parliament and university chancellors. In 1923 Ettore enrolled as an engineering student at the University of Rome, where he excelled and counted physicists like Giovanni Gentile Jr and the future Nobel laureate Emilio Segrè among his friends. Urged on by Segrè and Edoardo Amaldi, who at the time had recently changed subjects from engineering to physics, Majorana eventually agreed to meet Fermi; afterwards, he decided to switch to physics as well.

At the time, Fermi was head of Rome’s Institute of Physics, located on the historic Via Panisperna. Magueijo describes this set-up as “a kindergarten for geniuses: a group of young, extremely bright physicists led by Fermi [who] worked at an institute where they were given free rein”. Majorana made substantial theoretical contributions to the group’s research, and in 1928 – while still an undergraduate – he published his first paper, on atomic physics.

Somewhere along the line, however, Majorana developed an odd aversion to getting his work into print. He was to publish only nine articles from 1928 to 1933 (and one more in 1937), and most of these likely appeared only at the insistence of Fermi or others. Nevertheless, his uninterrupted theoretical activity during this period is well attested by his friends and colleagues, and, fortunately, a large part of this work has been preserved in his personal notes.

Although unknown in the 1930s, most of this work has now been published. It is therefore strange that Magueijo does not seem to take it properly into account in his biography. But even mentioning only two outstanding papers, as Magueijo does, is enough to reveal Majorana’s genius. For example, in his famous neutrino paper of 1937, “Symmetric theory of electrons and positrons,” he introduced the so-called Majorana neutrino hypothesis. When the paper appeared, it was revolutionary, because it argued that the antimatter partner of a given matter particle could be the particle itself – in other words, that a particle could be its own antiparticle. Moreover, with unprecedented far-sightedness Majorana suggested that the then-undiscovered neutrino could be such a particle. Today, many experiments are devoted to detecting some of the phenomena that arise from Majorana’s hypothesis, including neutrino oscillation and possible neutrinoless double beta decay. Yet according to Magueijo, Majorana considered the neutrino paper “a minor appendix” to another paper he published in 1932, which he regarded as his real masterpiece: “Relativistic theory of elementary particles with arbitrary spin”.

In light of these fundamental contributions – and many others not considered in the book – the author’s conviction that “a lot of Ettore’s apparent powers was mise-en-scène”, or a bit of an act, is quite amazing. Another surprising assertion (also unsupported by evidence) is that Majorana “vexed” Fermi on a psychological level. At one point, Magueijo writes that “Fermi felt humiliated by Ettore’s genius but also by his overall attitude towards science and life. Because beyond his light-hearted side, Fermi had a big complex. Not with regard to women, like Ettore, but with regard to science.” Several pages later, the author is even less kind, suggesting that “Fermi was intellectually a bit limited. He had great skills but, above all, relied on vast stores of energy, hard work, and determination: pure brute force. His imagination was lacking.”

The author’s own imagination does not fail him, either when he tries to explain physical concepts or, especially, when he outlines the character and personality of Majorana. Several presumed anecdotes are, in Magueijo’s telling, unprintable in a “family magazine” like Physics World, but in other respects the following passage is typical: “He was brought up by social outcasts and grew monstrously distorted, lacking social skills and independence, full of ineptitude. People like him – when they don’t become criminals, drug addicts, or psychopaths – can’t help being intellectually superior. But they’re ‘Frankenstein’, artificially gifted, clever ‘against nature’.”

Magueijo’s imagination also influences his discussion of Majorana’s mysterious disappearance from a Naples-bound boat. Rather than keep to known facts, Magueijo prefers to take suggestions from literary narratives, folk influences and so on. Yet in some ways suggestions are all we have; even the “major discovery in the deep waters near Catania” quoted in the opening of this review relates to neutrino experiments in the Mediterranean Sea and not to any new information about Majorana’s fate. As the author explicitly points out in his conclusion, “This book could not be more open-ended, which is why I had the drive to write it; I hate eternal truth. We don’t know what happened to Ettore, and we don’t know if the neutrino is Majorana. But who cares? As Einstein and Infeld once put it, science is a bad thriller, one in which we never get to know whodunit”. Certainly, it is a book that makes for unconventional reading.

Communicating science

A dozen young graduate students stand awkwardly in a line on stage. They look around hesitantly as Alan Alda prepares to lead them in an improvisation exercise. The 73-year-old actor, best known for his appearances in hit TV shows such as M*A*S*H and The West Wing, is trying to see if such exercises, more commonly associated with theatrical training, can help young scientists to improve their public-speaking skills.

A handful of senior scientists, sitting in the audience, note the discomfort of their junior colleagues. “My wife jokes that every scientist has a different form of Asperger’s,” one says, trying to lighten the mood in the theatre at Stony Brook University. Alda joins in. “A mathematician friend once told me that an extrovert mathematician is the one who looks at the other guy’s shoes!” he quips.

There can be few better improvisation coaches for scientists than Alda, whose distinctions include five Emmy awards for his role as M*A*S*H army surgeon Hawkeye Pierce. An energetic and prolific performer, he has been an actor, director, screenwriter, author, producer, political activist and creative consultant, ingeniously applying skills he has acquired in one area to another. He recast the role of performing artist, becoming a kind of itinerant, multifaceted public intellectual.

Science frontiers

As a youth in New York City, Alda’s playful imagination meshed comfortably with a passion for careful and systematic inquiry. At high school, he encountered the two-culture divide – “You had to take the art path or the science path,” he told me – and chose the former. Unlike other two-culture victims, however, he overcame the trauma, and began voraciously reading about science. Alda later combined his career and his love for science – first as host of the PBS TV show Scientific American Frontiers, which showcased advances in science and medicine for more than a dozen years from 1993, and later on Broadway as the star of QED (2001–2002), in which he played Richard Feynman.

In 2006 – the year after Frontiers ended – Alda came to Stony Brook to talk about his memoir, Never Have Your Dog Stuffed and Other Things I’ve Learned, at a joint meeting of the university’s film festival and writers’ conference. At dinner, he sat next to the then-president of Stony Brook, Shirley Strum Kenny, telling her that his experiences with Frontiers convinced him that scientists could – and should – do a better job expressing themselves in public. Kenny responded by inviting him to help develop a training programme.

“I was thinking about scientists writing for the media,” recalls Kenny, who had just founded a journalism school at Stony Brook. “But Alan was thinking more generally about all forms of communication skills, including verbal presentation.” Before long, Kenny had appointed a committee to brainstorm ideas for such a programme, which included scientists from Stony Brook and the nearby Brookhaven National Laboratory, as well as Howard Schneider, a former editor of Newsday and soon-to-be dean of the Stony Brook journalism school.

The group applied for – and last year won – a grant from the US government to establish a Center for Communicating Science (CCS) at Stony Brook that would train scientists to communicate more effectively in traditional and new media. Alda, meanwhile, began thinking of ways he could teach improvisation to scientists at the centre, and train others to do the same. Alda was able to try out his ideas in January 2008 among a group of 20 engineering students at the University of Southern California, having been invited by the journalist K C Cole, who recently wrote a biography of Frank Oppenheimer – the charismatic physicist-turned-science communicator who founded the Exploratorium in San Francisco (see “Between the lines”). Alda then returned to Stony Brook last summer and autumn to take part in seven pilot improvisation seminars at the CCS, which involved Stony Brook graduate students as well as researchers from Brookhaven.

Theatre games

“I had two reasons for wanting to teach improvisation to scientists,” Alda told me. “First, I had been changed by improvising as a young actor. Everyone is; they become more charismatic, more watchable. Second, in my work on Frontiers, I noticed that, to the extent that I could get scientists to be spontaneous – not to lecture but to give and take with me – they became more watchable. They were more alive, and easier to understand.”

Researchers tackle improvisation exercises

In teaching scientists about improvisation, Alda draws on the work of Viola Spolin (1906–1994), who pioneered the use of improvisation for educational purposes in the US. After carrying out a wide-ranging study of what makes human beings more responsive and spontaneous in different kinds of social situations, Spolin wrote a book – Improvisation for the Theatre – that is widely regarded as the “bible” of the theatre world. “It’s like basic research,” Alda once said of the book, in a remark quoted on the cover of its latest edition, which has, it says, “changed the theatre for generations”.

Spolin’s work is excellent for beginners and non-actors because it does not involve “acting”, but a step-by-step, rule-following process that tends to distract beginners from anxiety. “The Spolin method is not ‘Do this! Don’t do that!’,” explains Deborah Mayo, one of the Stony Brook theatre-arts professors who is working with Alda. “It’s more ‘What if?’. What if you were talking to your grandmother? What if you were talking to a child? It gets you away from thinking about yourself and puts you more in touch with the situation. You do it for a while, and your hands unclench, your feet uncross, you relax.”

Before Alda’s sessions at Stony Brook, the graduate students were videotaped giving brief summaries of their research work and its significance for the general public. Alda selected some of his favourite Spolin games, such as throwing invisible balls and imaginary tug-of-war, which helped turn the students’ attention to how they interacted with the group. Alda then built on those exercises with more elaborate games adapted to science. For instance, he would pair up students and ask one to explain their work to the other while pretending that the other person was someone with whom they had a particular relationship – such as a relative or funding official. Audience members would have to guess that relationship. The Stony Brook theatre professors also contributed their favourite exercises.

“My first reaction”, says Claire Allred, who recently finished a PhD in atomic physics at Stony Brook and is now a postdoc at Columbia University, “was ‘How are crazy improvisation exercises going to assist me in holding down a conversation with someone who doesn’t understand quantum mechanics?’ But in the end it made me stay focused on whom I’m talking to rather than what I’m talking about. Staying focused on whom I’m speaking to is important because it becomes easier for them to follow my explanation and be excited about it.”

After the workshop, students were again videotaped delivering another brief summary of their work. The results, which can be viewed online, are striking: the scientists are much more at ease and animated, explaining their research clearly and from a far more personal point of view. Alda encouraged students to attend more than one session, and wondered what else could be done to further improve their skills. “After viewing the tapes with them, I said, ‘Look, you are all scientists – help me figure a way to test and improve this!’.” He paused, and you could sense his inner imaginative artist collaborating with his inner rigorous scientist. “I’d like to check this out in a functional MRI. What’s happening in the brain? What else makes the same centres fire? If we knew that, could we develop ways to strengthen it?”

Communicating science

Alda’s improvisation sessions were so successful that elements from them have been incorporated into more elaborate workshops that Stony Brook’s CCS will be offering for the first time this spring. The first workshop is to be held on 9 April at Brookhaven – Alda plans to attend – with subsequent sessions on 14 May at Stony Brook University and on 24 May at Cold Spring Harbor Laboratory. Although participants are not required to take part in the improvisation seminars, these seminars are the workshops’ most distinctive and ambitious component.

Each will begin with a panel of journalists, scientists and public-information officers discussing various issues in science communication. Participants will then choose from a menu of seminars, such as advocacy writing (opinion pieces, letters to public officials and so on), dealing with traditional media, and interacting with new media, including podcasts, blogs and video presentations. Limited to 12 people, the three-hour improvisation seminar will include techniques for preparing and delivering talks.

The Stony Brook programme is not, of course, the first to try to improve scientists’ communication skills. The American Association for the Advancement of Science, for example, has held a dozen workshops since 2008 that have trained over 900 scientists and engineers in summarizing their work in non-technical language. The European Science Communication Network, meanwhile, helps scientists funded by the European Union to speak on radio and TV, and to make the best use of blogs, podcasts, interactive websites and other “new media”. It is holding a series of 10 three-day workshops in Dubrovnik, Croatia, from 12 July to 15 August. The Stony Brook centre, however, is unique for its improvisation sessions, and because it is not just a vehicle for conducting the workshops but will offer a menu of ongoing programmes.

The critical point

The mystery, though, is less how to train scientists to communicate their work with more passion than why such training should be needed in the first place. Given how passionate scientists normally are about their work, why are they not all as eloquent, enthusiastic and inspiring as Frank Oppenheimer?

“Good question,” says Alda. “We all learn ways of ex-communicating. For scientists, part of it is that they work with ideas so compressed that it takes a lot of study to understand them. Another part is that there’s a necessity in science to be cautious and exact, not to put personality ahead of evidence. But I feel that, along with precision, the more presence and clarity they can bring to speaking about their work, the better.”

No doubt a variety of other factors are also involved. Advocacy in science is often discouraged. Tenure committees evaluate publications, not communication skills. Scientists may fear media misrepresentation, with “Climategate” – the episode last November when a hacker stole and made public thousands of e-mails and other documents traded by climate scientists at the University of East Anglia – providing a recent example of routine scientific shorthand portrayed as incompetence or deception. And what scientist has the leisure to learn yet another skill?

But what Alda is after is not really adding another skill on top of what scientists now do. “Good communication is not a luxury and somehow apart from science,” he says, “but is actually the essence of science. Allowing the person behind the work to emerge doesn’t have to get in the way of rigour. What I’m really hoping for is to see a contagion of communication skills among scientists. I think it has the power to put the glorious excitement of science smack in front of our brains, where it ought to be.”

The value of simulation

For generations “theory” and “experiment” were the twin drivers of scientific research. In the past few years, however, a new tool has come into its own: simulation. Computer simulations have revolutionized many areas of science and engineering, and their increased use as an alternative to traditional, hands-on interactions with physical systems has been possible thanks to significant improvements in computer processing speeds and access to computing resources.

Yet according to the sociologist Sherry Turkle, all is not well in the world of simulation. In Simulation and Its Discontents – the last in her series of four books on the relationships between “things” and “thinking” – Turkle addresses some of the limitations of simulations and gives voice to those who have concerns about their use. She cautions that while simulations have been embraced by a number of scientific and technical communities, the use of simulations can also be associated with an increasing distance from physical reality.

This is an important topic that is worthy of serious discussion. But unfortunately, the manner in which Turkle addresses it does not inspire much in the way of thoughtful reflection. The first half of her book is an extended essay about studies she conducted at the Massachusetts Institute of Technology, first in the mid-1980s and then in the mid-2000s. These studies aimed to assess attitudes towards simulation and visualization; as such, it would have been valuable for her to quantify prevailing opinions during both time periods and to assess any significant changes over the intervening years. Instead, Turkle relies heavily on qualitative comments from anonymous study participants, and focuses almost exclusively on commenters’ reservations about simulation use, with little discussion of the positive aspects associated with the widespread integration of computers into research and practice.

For example, in one passage Turkle describes a user’s experience with an early computer-aided-design programme as “choosing among options on a computer menu” and quotes him as wishing that “his work felt more ‘his’”. This is an interesting perspective, but the book provides no sense of how widely these types of feelings are held. Readers cannot know whether they are hearing a serious issue frequently identified by many designers, a common but minor complaint readily accepted alongside the advantages simulations can offer, or simply the grievances of an uninspired designer happy to blame his problems on something other than himself.

For those of us who use simulations as part of our everyday work, Turkle’s depressing tone and lack of context is frustrating. It is also rather curious, because the second half of the book contains four largely positive “case studies” that describe the use of simulations in space exploration, architecture, oceanography and biology. These case studies (which are written by other authors, including William Clancey of NASA’s Ames Research Center) do provide some balance, but they also create a disconnect between the book’s two halves that Turkle does not address.

The studies’ scope is also rather limited. In this volume of the series, at least, Turkle’s primary focus is the use of simulations in data integration and visualization, but recent advances in fields ranging from biochemistry and weather forecasting to high-energy physics have relied on simulations in a much broader way: using computers to solve complex mathematical equations. Although it is important to verify the equations solved and the solution methods used, such computational research has opened up valuable new avenues of study within many fields. Simulation has even provided experimental test beds in some fields, such as cosmology, where real experiments are not possible.

Despite her book’s shortcomings, Turkle does raise some important questions, including a concern about users’ reliance on “black box” software that they do not fully understand. In an ideal world, all scientists and engineers would understand the inner workings of every technology they employ in their work. In practice this is seldom feasible. Nevertheless, science progresses by building on previous work, rather than by scientists re-inventing the wheel; in many cases, simply understanding the principles behind the wheel is sufficient. Knowledge at the individual level may have been lost, but this knowledge remains available to the scientific community – and the trade-off is that new knowledge can be added by allowing scientists to focus on newer, more specialized areas of research. Turkle also raises a concern about the blurring of the division between computer science and natural science. However, many scientific advances arise at the interfaces between disciplines, and the benefits of such interdisciplinary work are now widely recognized.

Questions are always asked when a new technology is introduced, and users quickly identify both advantages and disadvantages. Lecture-presentation software, for example, eliminates the need for printing transparencies, but it introduces the possibility of technology mismatches (and also allows speakers to modify their presentations up to the last minute – a mixed blessing). Hence, there will always be some individuals who prefer the older technologies. There is nothing wrong with making that choice, but while it is important to understand the potential problems that are avoided, it is also valuable to know what potential advantages are, in the same stroke, given up. Thus, although recognizing any technology’s limitations is always imperative, by sounding only cautionary notes without ever offering an alternative or acknowledging the many benefits, Turkle risks becoming a dinosaur rather than a visionary.

Between the lines: CERN special

The starship of zeptospace

The SI prefix “zepto” is rarely used. Few things in the human experience, it seems, are small enough to be measured in terms of 10^-21. But as Gian Francesco Giudice explains in A Zeptospace Odyssey: A Journey into the Physics of the LHC, this oddity may be due for a boom, thanks to CERN’s Large Hadron Collider (LHC). The LHC is the first instrument capable of exploring matter on scales of less than 100 zeptometres, a strange regime where phenomena like supersymmetry and extra dimensions may await discovery. Giudice’s book places this milestone into context, describing the construction of both the Standard Model of particle physics and the instruments designed to test it. The story begins with Maxwell’s theory of electromagnetism and passes rapidly through relativity, quantum chromodynamics and cosmology, finishing with a brief discussion of dark matter, dark energy and the multiverse (see p40, print edition only). It is a familiar narrative, but in this telling it is also engaging and frequently witty, speckled with original analogies and pithy quotations from physicists and cultural figures alike. Better still, it comes straight from the horse’s mouth: Giudice is a particle physicist at CERN, and much of the book is based on public lectures and seminars he has given “out of the desire to impart…some of the awe and excitement I personally feel at being part of this historic project”. The result is an excellent introduction to the LHC, enlivened with a few amusing detours – including the revelation that the LHC would cost about the same, kilo for kilo, if it were made of premium Swiss chocolate. Tempting, but we prefer the real thing.

• 2010 Oxford University Press £25.00/$45.00 hb 256pp

Better than magic

“Using magic in a physics laboratory is usually considered a poor choice, what with the rabbit fur and dove feathers getting into everything.” This practical attitude is typical of The Quantum Frontier, which focuses less on the “why” of particle accelerators like the LHC, and more on the “how”. After a quick discussion of the Standard Model and its possible extensions, author Don Lincoln offers an in-depth but mostly non-technical look at the LHC’s innards. His main focus is on the collider’s detectors, and he describes the basic workings of the four main ones (CMS, ATLAS, LHCb and ALICE) in some detail, while two smaller detector experiments (TOTEM and LHCf, designed to study proton diffraction and cosmic rays, respectively) receive a more cursory treatment. The result is a useful experimental companion to the many theory-oriented books on particle physics. It finishes with a short but interesting discussion of what might come after the LHC – marred only slightly by the fact that Lincoln, a scientist at CERN’s US rival Fermilab, clearly has a vested interest in ensuring that any successor facility is built close to home.

• 2009 Johns Hopkins University Press £13.00/$25.00 hb 192pp

In the CERN hot seat

Being a CERN director-general is not simply a case of being an expert in particle physics and accelerator design, as Herwig Schopper describes in The Lord of the Collider Rings at CERN: 1980–2000 (dreadful pun most definitely intended). As lab boss for most of the 1980s, Schopper also needed to handle billion-dollar budgets, manage rival teams of ambitious researchers, and deal with politicians wincing at the cost of the Large Electron Positron collider – the machine that was very much Schopper’s baby. Add the challenge of digging a 27 km tunnel under the Jura Mountains, and the need to calm local residents wary of what was mushrooming beneath their homes, and you can see why being CERN boss is no ordinary job. Particularly amusing are Schopper’s descriptions of official visits from the likes of former French President François Mitterrand, who had to be chauffeured into CERN in Schopper’s old Peugeot 600 because the car the director-general had obtained – a Mercedes – was German. A clunky style and the odd typo aside, the book will be essential material for professional historians looking back on CERN’s golden years.

• 2009 Springer £36.99 hb 211pp

Web life: Trailblazing

So what is the site about?

This year marks the 350th anniversary of the founding of the Royal Society, the world’s oldest continuously existing learned society. Since its beginnings in 1660 as a “College for the Promoting of Physico-Mathematical Experimental Learning”, the society has published some 60,000 papers in disciplines from physics to biology, spread over a handful of different journals. Trailblazing brings the daunting scale of this archive down to size with an interactive timeline of 60 papers, carefully selected to reflect the society’s history. Written by luminaries such as Newton, Faraday and Dirac along with lesser-known scientists, all of the papers are available to download free of charge, and each one is prefaced by comments from current scientific experts.

Can you tell me about some of the papers?

Readers will find a little bit of everything in this timeline, which begins with some rather cringe-worthy medical experiments from the late 1660s and continues through to a 2008 entry on geoengineering. Most of the papers are in English, although Alessandro Volta’s 1800 description of the construction of an electric battery is written in French. Among the other physics and astronomy entries, be sure to check out Newton’s 1671 paper containing his “New Theory about Light and Colors”, the 1752 account of Benjamin Franklin’s famous kite-flying investigations and Egon Orowan’s 1938 explanation of metal fatigue. This last paper covers an industrially important phenomenon first spotted by railway engineers more than a century before Orowan, a Hungarian-born physicist then based at Birmingham University in the UK, succeeded in describing it mathematically. While Orowan’s breakthrough may not be in the same league as, say, Maxwell’s unification of electricity and magnetism, it is nevertheless worthy of inclusion. History, after all, is made by good scientists as well as great ones.

What are some other highlights?

The earliest years of the timeline are an antiquarian’s dream, complete with archaic spellings and those elongated letter s’s that look like f’s. It is, however, amusing to see how little some other things have changed since then. In Edmond Halley’s report on the total solar eclipse of 1715, he notes with regret that “my worthy Colleague Dr. John Keill by rea∫on of Clouds ∫aw nothing di∫tinctly at Oxford but the End”. Intriguingly, it seems that Keill’s Cambridge counterpart Roger Cotes had other things on his mind that day – Halley delicately suggests that Cotes “had the misfortune to be oppre∫t by too much Company, ∫o that, though the Heavens were very favourable, yet he mi∫s’d both the time of the Beginning of the Eclip∫e and that of total Darkne∫s”. (Ah, those naughty academics!) Reading such delights online is not quite the same as reading them in the original manuscripts. However, the Web versions are a lot easier to find, and there is even a link where you can sign up to receive free e-mail alerts when new articles cite the timeline articles – although in Halley’s case, this seems unlikely.

What can I learn from the timeline as a whole?

One of the most striking trends that emerges when you look at papers from different eras is the gradual professionalization of science. In the early years of the Royal Society, for example, astronomical observations were mostly being performed by amateurs. By the time Norman Lockyer reported his 1898 discovery that the Sun’s corona is much hotter than its surface, the Reverend Whoevers and Lord Thingummies cited in Halley’s time were almost entirely replaced by Professor So-and-sos. The level of technical detail also increases dramatically. Funny letter s’s notwithstanding, any literate person could read Franklin or Faraday and understand exactly what these pioneers were doing – not least because technical diagrams were, it seems, rather more clearly labelled in the 18th and 19th centuries. Tellingly, the most recent pure-physics paper selected for inclusion in the timeline is Stephen Hawking and Roger Penrose’s 40-year-old work on black holes, but there is still hope for the future: the timeline stretches as far as 2050, and there is plenty of space for new entries.

That’s hella signatures

0.0000000000000001 helladollars?

By Michael Banks

“Yotta”, “zetta”, “exa” and “peta” could now be joined by a new number prefix, the “hella”, if a physics student from the University of California, Davis, gets his way.

Austin Sendek has started a petition on the social networking site Facebook to establish a new, scientifically accepted prefix for 10^27 (that is, 1 followed by 27 zeroes, or 1,000,000,000,000,000,000,000,000,000).

Yotta (10^24), which was established in 1991, is currently the largest prefix in the International System of Units (SI) – the world’s most widely used system of measurement – with zetta (10^21), exa (10^18) and peta (10^15) following close behind.

“Hella” comes from Californian slang for “very” or “a lot of”. Sendek says that by accepting the term the SI system can “not only rectify their failing prefix system but also honor the scientific progress of Northern California.”

The petition is gaining ground fast with over 200,000 signatures (or “fans” on the Facebook page) – or 0.0000000000000000000002 hellafans.

So what could you use the hella for? Sendek claims it could be applied in many “crucial calculations”, including the wattage of the Sun (0.3 hellawatts), or the number of atoms in a large sample (6.02 hellaatoms in 120 kg of carbon-12).
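The arithmetic behind those examples is easy to check. The short sketch below is our own illustration – the `HELLA` constant and `to_hella` helper are hypothetical names, not part of any standard library:

```python
# Sketch of the proposed "hella" prefix (10^27); names here are our own.
HELLA = 1e27

def to_hella(value):
    """Express a plain SI value in hella-units."""
    return value / HELLA

# Atoms in 120 kg of carbon-12: (120,000 g / 12 g/mol) x Avogadro's number
avogadro = 6.022e23
atoms = (120_000 / 12) * avogadro   # = 6.022e27 atoms
print(round(to_hella(atoms), 2))    # 6.02 hellaatoms, as Sendek says
```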

Sendek has not said what he would like to call the number for 10^-27 (10^-24 is the yocto). So physicsworld.com readers, any suggestions?

Junctionless transistor makes its debut

Researchers in Ireland have succeeded in making the first junctionless transistor. The device, which resembles a structure first proposed way back in 1925 but not realized until now, has nearly “ideal” electrical properties, according to the team. It could potentially operate faster and use less power than any conventional transistor on the market today.

Transistors are the fundamental building blocks of modern electronic devices – and all existing transistors contain semiconductor junctions. The most common type of junction is the p–n junction, which is formed by the contact between a p-type piece of silicon – doped with impurities to create an excess of holes – and an n-type piece of silicon, doped to create an excess of electrons. Other junctions include the heterojunction, which is simply a p–n junction containing two different semiconductors, and the Schottky junction between metal and semiconductor.

The number of transistors on a single silicon microchip has been increasing exponentially since the early 1970s, and has gone up from a few hundred to several billion today. As a result, transistors are now so tiny that creating high-quality junctions is increasingly difficult. In particular, it is very hard to change the doping concentration of a material over distances shorter than about 10 nm. Junctionless transistors could therefore help chipmakers continue to make smaller and smaller devices.

Patented in 1925

Now, Jean-Pierre Colinge and colleagues at the Tyndall National Institute of University College Cork have dispensed with the very idea of a junction and instead have turned to a concept first proposed in 1925 by the Austro-Hungarian physicist Julius Edgar Lilienfeld. Patented under the title “Device for controlling electric current”, it is a simple resistor and contains a gate that controls the density of electrons and holes, and thus current flow.

The team’s version of the device consists of a silicon nanowire in which current flow is perfectly controlled by a silicon gate that is separated from the nanowire by a thin insulating layer. The structure itself is very simple, looking a bit like a telephone cable that is fixed to a surface by a plastic clip (see figure). Crucially, there is no need to alter the doping over very short distances. Instead, the entire silicon nanowire is heavily n-doped, making it an excellent conductor. However, the gate is p-doped and its presence has the effect of depleting the number of electrons in the region of the nanowire under the gate.

If a voltage is simply applied along the nanowire, current cannot flow through this depleted region. According to Colinge, this region “squeezes” the current in the nanowire in the same way as the flow of water in a hose is stopped by squeezing it. However, if a voltage is applied to the gate, the squeezing effect is reduced and current can flow. The team also made a similar device with a p-type nanowire and n-type gate.
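The squeezing picture can be caricatured in a few lines of code. This is a toy on/off model of our own, with made-up parameter values; it is not the Tyndall team’s device physics:

```python
def drain_current(v_gate, v_threshold=0.5, conductance=1e-3, v_drain=1.0):
    """Toy model of the junctionless nanowire: below the threshold
    the p-doped gate depletes the n-doped wire and no current flows;
    above it, current grows with the gate "overdrive"."""
    if v_gate < v_threshold:
        return 0.0                                   # channel squeezed shut
    return conductance * (v_gate - v_threshold) * v_drain

print(drain_current(0.2))   # 0.0    (depleted region blocks the flow)
print(drain_current(1.0))   # 0.0005 (gate voltage relieves the squeeze)
```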

The most perfect of transistors

The structure is simple to build, even at the nanoscale, which means reduced costs compared with conventional junction fabrication technologies, which are becoming ever more complex. The device also has near-ideal electrical properties, adds Colinge, and behaves like the most perfect of transistors. This means that it hardly suffers at all from current leakage – the bane of conventional devices – and so could potentially operate faster and use less energy.

The Tyndall team says that it is now talking to some of the world’s leading semiconductor companies to further develop and possibly license its technology.

“Although the idea of a transistor without junctions may seem quite unorthodox, the word ‘transistor’ does not imply the presence of junctions, per se,” write the researchers in Nature Nanotechnology, where the work was published. “A transistor is a solid-state device that controls current flow and the word transistor is a contraction of ‘trans-resistor’.”

Both answers correct in century-old optics dilemma

For 100 years physicists have been struggling to reconcile two different formulations describing the momentum of light travelling through a transparent medium. One, put forward by German mathematician Hermann Minkowski in 1908, stipulates that light’s momentum increases when it enters a medium, while the other, advanced a year later by the German physicist Max Abraham, instead says that the momentum of light decreases. Now, Stephen Barnett of the University of Strathclyde in the UK has concluded that both formulations are in fact correct, with the difference essentially boiling down to whether one considers the wave or particle nature of light.

It is well known that when light enters a material medium it slows down in proportion to the refractive index, n, of that medium. Minkowski and Abraham wanted to know how light’s momentum changes as a result. Abraham calculated that the momentum of a single photon within the light is also reduced by a factor n, a result that agrees with our experience of everyday objects – as their speed drops, so too does their momentum. Indeed, a number of powerful arguments have been put forward over the years in support of this position. Prominent among these has been a simple proof based on Newton’s first law of motion and Einstein’s equivalence of mass and energy, which considers what happens when a single photon travels through a transparent block and transfers some of its momentum to the block, given that the motion of the system’s centre of mass-energy must remain constant.

Minkowski’s formulation, on the other hand, seems more natural from the point of view of quantum mechanics. As light slows down inside a medium its wavelength also decreases, but quantum mechanics tells us that shorter wavelengths are associated with higher energies, and therefore higher momenta. In fact, Minkowski’s approach suggests that the momentum of a single photon of light increases by a factor n as it passes through a medium. This result can also be supported by strong theoretical arguments, among them one that considers what happens when an atom moving at some speed through a medium absorbs a photon and experiences an electronic transition.
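In symbols, for a single photon of angular frequency ω, the two arguments above lead to the standard expressions:

```latex
p_{\mathrm{Abraham}} = \frac{\hbar\omega}{nc}, \qquad
p_{\mathrm{Minkowski}} = \frac{n\hbar\omega}{c}
```

In vacuum (n = 1) both reduce to the familiar ħω/c; inside a medium they disagree by a factor of n².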

Fundamental physical principles at stake

As Barnett points out, this problem has kept physicists interested for so long because it appears to put one or more fundamental physical principles at stake – on the one side Newton’s first law and Einstein’s famous E = mc2 and on the other the notion, familiar from de Broglie waves, that momentum is inversely proportional to wavelength.

“These two formulations reflect the fact that in different situations momentum does different things” – Stephen Barnett, University of Strathclyde

Both formulations have received experimental support, particularly that of Minkowski. For example, in 2005 Wolfgang Ketterle and colleagues at the Massachusetts Institute of Technology reported evidence in favour of Minkowski by transferring momentum from laser beams to matter waves that had been formed from a few million atoms cooled to just above absolute zero. However, in 2008 a group led by Weilong She of Zhongshan University in China passed a laser beam through a tiny filament of silica and found that the filament recoiled as the light exited, indicating, in accordance with Abraham, that the light gained momentum as it left the material.

According to Barnett, however, both formulations are correct. He says that the one put forward by Abraham corresponds to a body’s “kinetic momentum” – its mass multiplied by its velocity. Minkowski’s momentum, on the other hand, is a body’s “canonical momentum” – Planck’s constant divided by its de Broglie wavelength. “These two formulations reflect the fact that in different situations momentum does different things,” he adds. “In free space they coincide, but not when inside a medium.”

Don’t mix up the two

Physicists have known for some years that this distinction might explain the dilemma but have been unable to prove it. That is to say, they have been unable to reconcile the two different formulations with electromagnetic theory. Barnett overcame this problem when he realized that the two approaches cannot be treated in the same way mathematically – that of Abraham requires considering momentum as transferred by individual particles whereas that of Minkowski instead involves the commutation relationship between momentum and position, a wave property. “It is when you mix the two up that you get the problem,” he says.

“The question is: when is the particle momentum relevant and when is the wave momentum relevant?” – Ulf Leonhardt, University of St Andrews

This point is underlined by Ulf Leonhardt of the University of St Andrews in the UK, who says that, simply put, Abraham described the momentum of light as a particle whereas Minkowski described the momentum of light as a wave. As such, he agrees that both formulations are correct. However, he does not think that the debate is really over. “The question is: when is the particle momentum relevant and when is the wave momentum relevant? Are there cases when a mixture of wave and particle properties appear?” he asks. “When science answers one question, ten new questions appear.”

Barnett is also not entirely satisfied. “We now know that Abraham and Minkowski were both right,” he says. “But we don’t yet know why nature requires two momenta.”

The work is reported in Phys. Rev. Lett. 104 070401.

Copyright © 2026 by IOP Publishing Ltd and individual contributors