
Fruitloopery

A few years ago New Scientist's "Feedback" section, which often reports science-related foibles, carried a quote from the German politician Wolfgang Böhmer. Talking about his country's healthcare reforms, Böhmer was reported as saying "I can't see the quantum leap. But even if we proceed in smaller steps this would be a success." In commenting, the magazine's editors quipped "The statement has left us trying to think of a step that is smaller than a quantum leap. So far we haven't succeeded."

In response to a couple of readers' comments, the editors later corrected themselves. In physics, a "quantum leap" is the transition of a system from one state to another without passing through any intervening states, and it is not necessarily small. What they had meant, the editors said, was a leap of one "Planck length" – the smallest meaningful distance, below which quantum effects dominate and render the very concept of "distance" meaningless. Still, they were right that Böhmer's remark was strictly gibberish, because a quantum leap cannot be subdivided.

Apart from being mildly amusing, what this story illustrates is that "quantum leap" has taken on very different meanings for scientists and for non-scientists. In the 1980s the UK computer firm Sinclair launched an overhyped, supposedly game-changing machine called the Sinclair QL (for quantum leap), which it quickly abandoned. Quantum Leap was also the name given to a US comedy/science-fiction TV series of the late 1980s and early 1990s with a time-travelling protagonist – a holder of six doctorates, whose "special gift was quantum physics" – who took a different jump through space–time each episode. And in 2000 a camp called Quantum Leap Farm was founded in Florida to help disabled equestrians change their lives through building new relationships with horses.

Leaps and jumps

So how did “quantum leap” leap from scientific terminology applying to subatomic state transitions to an idiom meaning “big jump”? And is such popular use of scientific terms meaningful – or a disturbing mistake that must be corrected?

In the scientific world, the phrase “quantum leap” stems from Niels Bohr’s application of the quantum to atomic theory in 1912–1913. Bohr’s work implied that electrons do not have an infinite number of possible orbits about the nucleus, as planets do about the Sun, but a small selection. Electrons must leap or jump instantaneously from one possible orbit to another without tracing a path.

This idea – indeed, most news of quantum theory – did not reach the general public before the 1920s. Until then, in the popular press, the term “quantum” was used in its traditional meaning of “amount”, and applied to all aspects of human life – in expressions such as quantum of trade, quantum of naval strength, quantum of proof or of damages (in discussions of lawsuits), quantum of alms for the poor, quantum of wealth needed for a good life, and so forth.

After the development of quantum mechanics in 1925–1927, however, popularizations such as Arthur Eddington’s The Nature of the Physical World spread word of quantum theory among the public. The word “quantum” now became a metaphor for discontinuity – albeit, at first, for small discontinuities. In 1929, for instance, The Sun, a US newspaper, noted that modern life had become governed by things that click. “Clocks, obviously,” it wrote. “But also typewriters, adding machines, cash registers, speedometers, tachometers, stock tickers, automatic telephones, telegraph instruments – the whole tribe of appliances that operate by jerks are the masters of men who work. It is the reign of the quantum theory in industry [its italics].”

The clicks of such devices were made by small discontinuous transitions. But language has a “moment” (a tendency to twist things) of its own. As modern life encountered ever-larger discontinuous transitions – in the scales of things such as populations, budgets and military might – and needed a term for them, “quantum leap” had vitality and glamour. It was soon applied to any large, qualitative increase, especially of effort, money or military strength. The first entry for “quantum leap” in the Oxford English Dictionary refers readers to a physics definition; the second, for non-scientists, defines a quantum leap as “a sudden, significant, or very evident (usually large) increase or advance”.

Metaphor making

The change in meaning of “quantum leap” is not unique. Other scientific terms and phrases – including complementarity, uncertainty principle and catalyst – now name aspects of ordinary life. Meanwhile, ordinary words – not just quantum but also moment, force and gravity – have gone in the opposite direction and ended up as technical scientific terms. Such transformations generally happen via metaphors.

Metaphors contain two terms, a primary and a secondary. In “love is a rose”, for instance, love is the primary term, the meaning of which is being explored, while rose is the secondary term, used to elucidate the first. This is a “filtrative” metaphor, for it asks us to filter our perceptions of the primary term in the light of certain well-known features of the secondary (love, like roses, is pretty but thorny). The terms are not confused. A rose is not love; it remains in the garden, its identity unaffected. However, a new meaning has appeared – love’s rose-likeness – that allows us to understand our experience better.

Metaphors are particularly valuable when part of our experience is enigmatic – when the “correct” words are insufficient, and we need new ones even if technically incorrect. For example, in Here Come the Maples – a 1976 short story by John Updike – the protagonist Richard Maple ruminates about his decaying marriage when his thoughts are momentarily interrupted by a chance reading about subatomic discoveries (the italics are Updike’s).

“He…read, The theory that the strong force becomes stronger as the quarks are pulled apart is somewhat speculative; but its complement, the idea that the force gets weaker as the quarks are pushed closer to each other, is better established. Yes, he thought, that had happened. In life there are four forces: love, habit, time and boredom. Love and habit at short range are immensely powerful, but time, lacking a minus charge, accumulates inexorably, and with its brother boredom levels all.”

Maple does not think his marriage is subatomic physics. Still, he finds the physics vocabulary useful in understanding the marriage's dynamics. Maple is trying to fill in what he intuits but cannot say. He is confused, wants to understand, and uses the best tools he has at the moment – the words of an article he happened to have stuffed in his pocket. The article could have been about almost anything – economics, sports, theatre – and he would have seized on those terms rather than physics. What matters is the phenomenon to which the metaphor is pointing – here, Maple's marriage – not what is being used to point.

Expressions, however, can stop being metaphors when we forget about their origins, and cease to connect the expressions with the world from which they came. Think of the “bonnet” of a car – or what Americans call a “hood” – which no longer prompts us to think about head garments. Pointers can turn into names. But only certain laboratory terms make this discontinuous jump – this quantum leap – to becoming pointers and names. It generally occurs for scientific ideas that, as Eddington wickedly put it, are “simple enough to be misunderstood” or, rephrased more charitably, “simple enough to be suggestive”.

But scientific words can make this transition in other ways besides filtrative metaphors. In “creative” metaphors, the priority of the terms is swapped. In an extraordinary linguistic reversal, the secondary term deepens in meaning through the metaphor to subsume its previous meaning as well as that of the primary term. The pointer becomes the pointed at.

In physics, for instance, a “wave” originally meant something that took place in a medium. However, its metaphorical extension to light (which does not require a medium in which to move) and thence to quantum phenomena (where what moves are probabilities) changed its meaning. A “wave” is now not just a metaphor but the correct term for light itself, and for other things, such as probability variations, that it did not originally name.

A more troubling example is “complementarity”, Bohr’s term for the fact that particle and wave behaviour are simultaneously necessary yet mutually incompatible in the quantum world. This puzzling feature, Bohr thought, sprang from the fact that human beings have to be both actors and spectators when observing the microworld. Noting that this dual role of both acting and watching is also a feature of anthropology, biology and psychology, Bohr tried to extend complementarity to those domains. Had he been successful, it would have been a dramatic instance where concepts developed in subatomic physics could be applied non-metaphorically in the human sphere.

Yet his efforts met with mixed results, and are regarded today with embarrassment by many physicists. Indeed, after the physicist Alan Sokal mocked humanists in 1996 for delving into physics to support their ideas in a way that seemed ignorant at best and zany at worst – in what has come to be known as “Sokal’s hoax” – the historian Mara Beller published a 1998 article in Physics Today entitled “The Sokal hoax: at whom are we laughing?”. She cited remarks by Bohr – but also by Heisenberg and Pauli – to make the point that in this respect physicists could sometimes be as zany as humanists, and that there is no neat way to distinguish between the two.

Falling for fruitloopery

New Scientist refers to the pretentious and erroneous use of scientific words as “fruitloopery” – a term that itself originated in an especially weird filtrative metaphor. Froot Loops is a popular US breakfast cereal – small, garishly coloured, ring-shaped pieces with fruit-like flavours – introduced by the Kellogg Company in 1966. For a time “fruit loop” was US slang for a gay man, or a gay-friendly neighbourhood, but the phrase soon all but lost this connotation and came to mean someone or something lightweight, wacky and a bit pretentious.

In 2005 Mike Holderness, a freelance contributor to New Scientist, mentioned in an article “professional dissidents” who are given the oxygen of publicity by those science journalists who, he wrote, “divide all stories into precisely two sides that get equal space: too often the reality-based community versus fruitloops and/or special interests”. The word fruitloopery quickly grew into the in-house New Scientist term for the use of scientific words, such as quanta or tachyons, either wildly out of context or in a completely unverifiable way.

However, I think the term should be extended to any pretentious and erroneous use of scientific terms. Much self-help literature and amateur philosophy is studded with such abuses; one of my favourite examples comes from the actress Shirley MacLaine, who remarked that today’s physicists are suggesting “that the universe and God itself might just be one giant, collective ‘thought’”.

Physics seems to inspire more fruitloopery than other fields because, I think, of its cultural prestige. Those who link a sham product or woolly thought with physics principles are doing so deliberately, meaning to imply that it has an especially deep and secure grounding. Advertisers and actresses are not making mistakes – they are being fruit loops.

The critical point

Why are we troubled by Wolfgang Böhmer’s words but not those of Richard Maple? Why do we find the invocation of physics principles dangerous in self-help literature but not in the names of farms and films? The answer, I think, has to do with the intentions of the metaphor-makers – that is, not with the fact that a meaning is being transformed, but with why. No fruitloopery is involved if the metaphor-makers are aware of the genesis of the scientific term, assume the audience is also aware, and are genuinely trying to increase understanding. It is fruitloopery when the metaphor-makers are being deceptive or self-deceptive – when terms are used not as tools of knowledge or expression, but to peddle wares, impress the gullible or cloak one’s ignorance. The distinction, unfortunately, is harder to spot than it seems.

From childhood dream to lead space-walker

Drew Feustel is a 46-year-old US geoscientist who, in 2009, did something that very few human beings had done before him. Feustel left the Earth on a NASA mission to the Hubble Space Telescope, where he carried out a series of space-walks to repair this iconic instrument. He then returned to space in May 2011, when he served as the lead space-walker on Space Shuttle Endeavour’s final mission to the International Space Station.

Back on Earth, Feustel joined Physics World to give this exclusive interview about his experiences at the final frontier. Early in the interview, he gives a vivid account of the feelings he experienced in the final moments before his first take-off.

When the rocket lights, and it hits you in the back like someone’s smacked you with a frying pan, you realize you’re going to space

“You don’t really believe that the launch is going to happen because you’ve waited all this time to get into space. But when they get to 10 and then they keep going down to 1, you realize that somebody’s serious about putting you into space,” he says. “When they get to zero, and the rocket lights, and it hits you in the back like someone’s smacked you with a frying pan, you realize that now’s the time you’re going to space – and you’re no longer going to stay on the planet.”

Replacing Hubble’s batteries

Feustel was sent to the Hubble Space Telescope with six other astronauts as part of STS-125, the final manned servicing mission. The crew’s task was to replace the battery units and to make some upgrades to Hubble’s scientific instruments. Two years later, Feustel was part of the six-person crew aboard STS-134, which went to the International Space Station on the 25th and final flight of Space Shuttle Endeavour. This mission successfully delivered the Alpha Magnetic Spectrometer (AMS), an instrument designed to detect cosmic rays and to search for dark matter.

When talking about the Hubble mission, Feustel speaks of the crew’s awareness of the vital role Hubble has played in astronomy. “We believe it’s one of the most important scientific instruments that humans have ever built. It’s taught us about the solar system, the origins of space, the universe that we live in, and also about our future,” he says.

There are points in the interview when Feustel seems to exude an almost otherworldly sense of calm. But he also maintains a good sense of humour when describing his rare experiences, for instance when he talks about the space-walks he performed as part of STS-125 and STS-134. “The most important thing about space-walks is to not let go. Because you don’t want to separate yourself from the vehicle that has your ride back to the planet Earth,” he jokes.

Finding the path to space

But perhaps above all, Feustel comes across as an intensely focused individual who dreamed of going to space as a child and had the audacity to never lose sight of that dream. “I never knew what my path to spaceflight would be, but I knew that I believed it would be a part of my life,” he explains.

I never knew what my path to spaceflight would be, but I knew that I believed it would be a part of my life

Feustel says that his experiences in space have altered the way he views the Earth, particularly when it comes to environmental concerns. “Nowhere on the ground can you really see the atmosphere, except that we see the blue sky. But when you’re in space and you look down upon the planet, you can easily see that thin veil that separates us from the vacuum of space – and you realize how fragile it is and how important it is for us to protect the planet.”

Feustel did a degree in geophysics and earned his PhD at Queen’s University in Canada. After spending several years working in the mining and geological exploration industries, he was selected by NASA as a mission specialist in July 2000.

This interview was filmed at the AGU Fall Meeting 2011.

Invisibility cloaking goes thermodynamic

Researchers in France have shown how to isolate or “cloak” objects from sources of heat – a breakthrough that could help cool down electronic devices and thereby pave the way towards more powerful computers. They also show how the same technique could be used to concentrate heat, which might prove useful in advanced solar technologies.

Invisibility cloaks are based on the mathematics of transformation optics – bending light such that it propagates around a space, rather than through it – and were proposed by John Pendry of Imperial College London and Ulf Leonhardt of the University of St Andrews in 2006. Now, Sebastien Guenneau of the University of Aix-Marseille and colleagues at the French national research council (CNRS) have wondered whether a similar thing could be done with heat. Intuitively it might seem unlikely that the same mathematics could be applied to thermal diffusion, given that heat does not propagate as a wave but simply diffuses – yet the researchers found that the transformed equation works.

Adapting optics

To devise the specific transformations for a thermal invisibility cloak, they considered the heat from a hot object flowing from left to right in two dimensions, with the intensity of the heat flux through any region of space represented by the spacing of “isotherms” – lines of constant temperature in that region. The more closely spaced the isotherms, the higher the intensity of the flux. The researchers then transformed the geometry of these isotherms so that they went around, rather than through, a circular region to the right of the heat source, meaning that any object placed in this region would be shielded from the heat flow.
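For readers who want the mathematics, the trick can be sketched as follows. This is the standard transformation-physics recipe written in my own notation, not equations quoted from the paper:

```latex
% Heat conduction in the original coordinates:
\rho c \, \frac{\partial T}{\partial t} = \nabla \cdot \left( \kappa \, \nabla T \right)

% Under a coordinate map x \to x'(x) with Jacobian J, the equation keeps
% exactly this form, provided the material parameters are transformed as:
\kappa' = \frac{J \, \kappa \, J^{\mathsf{T}}}{\det J},
\qquad
(\rho c)' = \frac{\rho c}{\det J}
```

A map that sweeps points out of the central disc and into the surrounding annulus therefore translates directly into a prescription for the (generally anisotropic) conductivity of the cloak’s layers.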

The invisibility cloak needed to achieve this transformation would be a 2D ring built up from many concentric layers of varying diffusivity – a property that describes how quickly a material conducts heat relative to its heat capacity per unit volume. In their calculations, the researchers modelled a cloak with an inner radius of 200 µm and an outer radius of 300 µm, and then calculated how the heat flow around the cloak evolves over timescales of the order of milliseconds. Because these are the kinds of distances and times relevant to the operation of microelectronic devices such as transistors, the researchers believe that this kind of cloak could be used to protect such devices from unwanted temperature gradients.
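To get a rough feel for those scales, here is a minimal Python sketch of heat diffusing across a domain of the quoted dimensions, with a ring of contrasting diffusivity around the centre. A simple low-diffusivity ring like this merely shields the central disc rather than truly cloaking it – the real device needs the layered profile derived from the transformation – and all the numbers are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Explicit finite-difference solution of dT/dt = div(D grad T) on a 2D grid,
# with a weakly diffusive ring (inner radius 200 um, outer radius 300 um)
# embedded in a strongly diffusive background. Illustrative values only.
n, L = 80, 800e-6                        # grid points per side, domain size (m)
dx = L / n
x = (np.arange(n) + 0.5) * dx - L / 2
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

D = np.full((n, n), 1e-4)                # metal-like diffusivity (m^2/s)
D[(R > 200e-6) & (R < 300e-6)] = 1e-6    # polymer-like, weakly diffusive ring

T = np.zeros((n, n))
dt = 0.2 * dx**2 / (4 * D.max())         # well inside the stability limit
for _ in range(int(2e-3 / dt)):          # evolve for 2 ms
    T[:, 0] = 1.0                        # left edge held hot
    Dx = 0.5 * (D[:, 1:] + D[:, :-1])    # face diffusivities, x direction
    Dy = 0.5 * (D[1:, :] + D[:-1, :])    # face diffusivities, y direction
    Fx = Dx * np.diff(T, axis=1) / dx    # heat flux across x faces
    Fy = Dy * np.diff(T, axis=0) / dx    # heat flux across y faces
    T[:, 1:-1] += dt / dx * np.diff(Fx, axis=1)
    T[1:-1, :] += dt / dx * np.diff(Fy, axis=0)

print(f"temperature at the shielded centre after 2 ms: {T[n//2, n//2]:.3f}")
```

Re-running with the ring diffusivity set equal to the background (switching the shield off) shows how strongly a diffusivity contrast suppresses the heat reaching the protected region on millisecond timescales.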

Thermal isolation

At larger scales another possible application, says Guenneau, is shielding objects from thermal-imaging cameras. Warm objects such as humans or vehicles can be seen at night using infrared imagers because their black-body spectra peak in the infrared. Putting such an object inside the kind of invisibility cloak devised by the French group would mean isolating it thermally from the outside and therefore concealing any temperature difference between it and the local environment, making the technology of particular interest to the military.

Designing a heat concentrator, on the other hand – an idea that follows from an earlier proposal by Pendry to build concentrators for light – involves calculating the transformation that diverts the isotherms into a central region, rather than away from it. Such concentration of heat into a small space could prove useful in solar energy, says Guenneau, because it could improve the heat exchangers used, for instance, in concentrated solar-energy systems.

Cloak fabrication

Guenneau and co-workers are now collaborating with scientists at the University of Lille, France, to build the devices, which they hope to do within a matter of months. As Guenneau explains, it ought to be far easier to build a thermal cloak than an electromagnetic one, because the broad range of diffusivities needed to bend the path of heat such that it almost completely bypasses an object can be found in nature. Electromagnetic cloaking, on the other hand, relies on the fabrication of completely artificial materials made up of extremely small and complex structures.

The 20 concentric layers that make up both the cloak and concentrator will have to be made from a few different materials with various diffusivities, such as metal (which, being a conductor, is highly diffusive) and polymer (which is weakly diffusive). Testing the devices will then involve placing them next to a 500-µm-long resistor and imaging the resulting distribution of heat flux using a thermal camera. If these tests all go to plan, says Guenneau, the step after that would be to make 3D devices.

Other researchers agree that the cloak and concentrator designed by the French group could in principle be built. Tomas Tyc of Masaryk University in the Czech Republic says that their work benefits from a “rigorous adaptation of the method of transformation optics to the diffusion equation”. Pendry, meanwhile, says that possible applications might include “heat sinks that grab excess heat produced by a device, channel it away from sensitive areas and safely dump it into a heat bath”.

The research is to be published in an Optical Society of America journal.

Lab study could aid inkjet printing

A filament of liquid squirted from a nozzle will sometimes contract into a single drop and other times break up into many segments. Now, researchers in the UK have mapped the parameters that will lead a filament to break up, and they believe that this knowledge could help in the design of inkjet printers.

Inkjet printing requires single drops of ink to be deposited on paper, and for this reason engineers are keen to avoid conditions that encourage ink filaments squirted from an inkjet nozzle to break up. Theory identifies two crucial parameters. The first combines the liquid’s density, viscosity and surface tension into a single “Ohnesorge number”; the second is the aspect ratio, which describes a filament’s length in relation to its diameter. In general, longer filaments with a high aspect ratio, and filaments with a small Ohnesorge number, are likely to break up.
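As a concrete illustration of how these parameters combine, here is a quick sketch in Python. The function names and fluid values are my own assumptions for illustration, not the researchers’ code or data:

```python
from math import sqrt

def ohnesorge(viscosity, density, surface_tension, diameter):
    """Oh = mu / sqrt(rho * sigma * d): the relative importance of viscous
    forces to inertia and surface tension for a filament of diameter d."""
    return viscosity / sqrt(density * surface_tension * diameter)

def likely_to_break_up(length, diameter, oh):
    """The rule of thumb described above: long, low-Oh filaments fragment."""
    return (length / diameter) > 6 and oh < 1

# A water-like fluid in a 50-um-diameter, 500-um-long filament
oh = ohnesorge(viscosity=1e-3, density=1000.0,
               surface_tension=0.072, diameter=50e-6)
print(f"Oh = {oh:.3f}, likely to break up: {likely_to_break_up(500e-6, 50e-6, oh)}")
```

For these water-like values the Ohnesorge number comes out at about 0.02, and with an aspect ratio of ten the filament falls squarely in the break-up regime under this rule of thumb.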

However, experiments attempting to confirm this theory have been limited, according to Ian Hutchings and others at the University of Cambridge. To map break-up parameters, engineers have previously used commercially available printing nozzles, but these work only within a narrow range of liquid viscosities and at one filament diameter. “You can’t map anything – or you can map in a very small region,” says Alfonso Castrejón-Pita, a co-author of the latest study. “So [inkjet] companies have had to use trial and error.”

Specialized nozzle

The Cambridge researchers have therefore developed their own nozzle. It is larger than a typical inkjet nozzle, and has an iris – similar to the aperture blades on a camera – that can control the diameter of the squirted fluid. What is more, the squirting mechanism relies on a speaker-like electromagnetic coil, which can handle even very viscous fluids. To detect break-up, the researchers photographed the ejected filaments using a high-speed camera.

They found that for short filaments with an aspect ratio less than about six – that is, with a length no greater than six times the diameter – no break-up occurs. They also found that break-up did not happen for large Ohnesorge numbers, greater than one. The results broadly agree with previous theory, but contain data points that stretch into previously untested regions.

“It has always proved easier to carry out such studies on computer rather than in the laboratory,” says Osman Basaran, an engineer at Purdue University in Indiana, US, who has contributed to the theory of how liquid filaments break up. “Therefore, the new experiments furnish experimental observations that had been needed for nearly a decade, if not longer.”

Basaran adds that there are slight discrepancies between the Cambridge researchers’ results and the theory, and hopes that these will prompt further work. But not all theorists are impressed. Jens Eggers of the University of Bristol, UK, thinks that the experimental data offer no surprises, and disagrees with the researchers that there should never be break-up for Ohnesorge numbers greater than one. “I would bet almost anything that this is false,” he says. “In any case, there isn’t much evidence in the data to support the claim.”

Brian Derby, who researches inkjet printing at the University of Manchester, UK, is ambivalent about the value of the extra data points, which he says are too distant from the values used in real-world devices. “It’s easier to see what’s going on,” he says. “But you could argue that they’ve extended the range outside of practical interest.”

In fact, Castrejón-Pita says his group’s results may not be directly applicable to inkjet printing anyway because they used water–glycerol mixtures for their tests. Actual ink, he says, might exhibit so-called non-Newtonian properties, which would make it deviate from the trends that they have observed. But this is an avenue that his group can explore. “Now we’re going to work with real things, with a real commercial print head, to check how far our predictions can be followed,” he says.

The study is published in Physical Review Letters.

UK overtakes US in research impact

The UK has overtaken the US in terms of the quality of physics-research output, according to a new report by Evidence, a company owned by information-services provider Thomson Reuters. The report, Bibliometric evaluation and international benchmarking of the UK’s physics research, states that the UK is now second only to Canada when ranked on the quality of research papers, measured as the average number of times such papers are cited.

According to the report, commissioned by the Institute of Physics (IOP), which publishes physicsworld.com, the UK’s citation impact has jumped from 1.24 in 2001 to 1.72 in 2010, putting the country second in the world. Canada comes top of such a world ranking of research quality in physics with a citation impact of 1.75, Germany is third with 1.62, the US fourth with 1.60 and Italy fifth with 1.44.

“Over the past 10 years, the UK has either languished shortly behind the US or, in better years, been level,” says IOP president Peter Knight. “In 2010, however, we took a very encouraging lead on the US when the physics community’s research output is looked at as a whole. This should give the UK great cause for celebration.”

While the UK has been publishing more papers in physics – up from 5484 in 2001 to 6240 in 2010 – its share of world physics papers has decreased from 7.1% in 2001 to 6.4% in 2010, so it now ranks seventh behind Russia (7.3%), France (7.6%), Japan (9.6%), Germany (10.5%), China (18.6%) and the US (22%). While the US still publishes more papers in physics than any other country, it is rapidly being caught by China, which has seen its share of world physics papers balloon from 8.2% in 2001 to 18.6% in 2010. China is expected to overtake the US in terms of the quantity of physics-research output within the next couple of years.

Asia rising

The report states that the decrease in the UK’s output as a share of world papers is mostly down to the rise of other countries, particularly in Asia, that have been publishing an ever increasing number of papers in physics. Indeed, the report states that as well as China’s rapid rise, India has increased its share of physics papers from 3% to 4.6% between 2001 and 2010, while South Korea’s output has risen from 3.4% to 4.8% over the same period.

The report notes, however, that the decrease in the UK’s share should “not be interpreted as a decline in overall capacity”. “The UK’s physics-research base is performing strongly,” the report concludes. “If the UK wishes to remain globally competitive, it will need to maintain both research output and quality.” Knight adds that China’s rise should “sound a warning shot and keep us from complacency”.

Researchers make single-atom transistor

Researchers in Australia have created a single-atom transistor by planting an individual phosphorus dopant atom within a silicon sample to a spatial accuracy of plus or minus one lattice spacing. The research builds on earlier work by the same group on creating atomic-scale electrodes. While the transistor could aid the continued miniaturization of classical electronics, the researchers’ hope is that their device will ultimately help in the development of a functional quantum computer.

Moore’s Law

The transistor is basically an electronically activated switch and is at the root of all computing; without it, processors would be unable to perform the logical operations required of them. Moore’s Law, named after Intel co-founder Gordon Moore, predicts that the number of transistors that can be crammed onto a commercial integrated circuit will double approximately every two years. When Moore made his prediction in 1965, he expected it to hold true until 1975, by which time, he correctly suggested, there would be about 65,000 transistors on each chip. In fact, the law has proved uncannily accurate and still holds roughly true today, when chips carry billions of transistors. However, continued miniaturization requires the development of new manufacturing techniques, and – for Moore’s law to continue – devices will have to hit the single-atom scale around the year 2020.
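To see what the doubling rule implies, a quick back-of-the-envelope calculation helps. Seeding it with the roughly 2300 transistors of Intel’s first microprocessor in 1971 – my choice of baseline, not a figure from this research – lands within sight of today’s billions:

```python
# Moore's law as a simple doubling rule (the modern two-year form)
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Projected transistor count, doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2012):
    print(year, f"{transistors(year):.2g}")
# 2012 comes out at roughly 3e9 transistors - the "billions" quoted above.
```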

In earlier work, Michelle Simmons’ research group at the University of New South Wales in Sydney developed a technique for creating atomic wires inside crystals of bulk silicon by selectively removing individual lines of silicon atoms and replacing them with phosphorus. Phosphorus has one more electron in its outer shell than silicon, so replacing a silicon atom with a phosphorus atom within a silicon crystal introduces a free electron to the material and raises the local conductivity. The team used this technique to fashion nanoscale transistor electrodes in the crystal. It then placed a single phosphorus atom at the centre of the transistor. The result was an atomic-scale version of a field-effect transistor (FET).

A quantum transistor

The current passing between the source and drain electrodes of a classical FET increases smoothly with the voltage between the gate and drain electrodes. But the atomic-scale FET produced by the New South Wales group, in collaboration with colleagues at the University of Melbourne, University of Sydney, the Korea Institute for Science and Technology Information and Purdue University in Indiana, US, behaved in a quantum-mechanical manner, becoming conductive only when the potential difference was aligned precisely with one of the energy levels of the phosphorus atom. “You change the bias on the gate and as you change the bias you will access the energy levels of the atom,” explains Simmons. “You go from conducting to insulating, to conducting to insulating as you go through the atomistic energy levels of that single-atom device.”

Cryogenic laptops and quantum computers

Physicist and electrical engineer David Ferry of Arizona State University in Tempe, US, believes the work is “another interesting example of making a very small structure and placing phosphorus atoms where they want them on a surface”. But he questions whether a transistor that can carry only one electron at a time will ever run fast enough to be of much use to the electronics industry. There are also other practical difficulties with the device, such as the fact that it works only with cryogenic refrigeration. As Ferry says, “I don’t think you want to carry your laptop around at liquid-helium temperatures.”

Simmons accepts that the technology is not currently industrially compatible. “It is really a test of technology,” she says. “How far can you push things to deterministically make a single-atom device? Its long-term applicability to conventional industry is completely unknown: it just gives a marker in the sand that there is technology to be able to make it.”

The group’s main interest in using the transistor was to study the energy levels of the phosphorus atom within the silicon lattice, which the researchers hope to use as qubits in a quantum computer. “This is a transistor that we’ve designed so that we can look at the energy levels and check that we get agreement with what’s been theoretically predicted,” says Simmons. “In the computer, the phosphorus atoms will be essentially talking to one another in a lattice. You won’t necessarily have source and drain electrodes to each atom like you would in a conventional transistor for that device.”

The research is published in Nature Nanotechnology.

The STEM employment paradox

By Margaret Harris

I went to the University of Surrey last week for a science careers evening, and as I was chatting to some students afterwards, one of them asked a fascinating question. “We’re always hearing that the UK needs more graduates in STEM fields,” she said, using the ever-present acronym for science, technology, engineering and mathematics. “But if that’s true, why are so many of us struggling to find jobs?”

I’ve been asking myself the same question for some time. As Physics World’s careers editor, I receive many upbeat press releases touting the importance of STEM disciplines in building the knowledge economy, pulling the country out of recession and so on. But I have also watched, with impotent sympathy, as some of my scientifically trained friends search in vain for jobs. So what is wrong with this picture?


Cyclotrons make commercial quantities of technetium

Scientists in Canada are the first to make commercial quantities of the medical isotope technetium-99m using medical cyclotrons. The material is currently made in just a few ageing nuclear reactors worldwide, and recent reactor shutdowns have highlighted the current risk to the global supply of this important isotope.

Technetium-99m is useful for medical imaging because it emits only gamma rays and can be incorporated into a number of different molecules that target different types of tissue in the body. Today it is made in nuclear reactors by creating a radioactive isotope of molybdenum that then decays to technetium-99m.

The entire supply of the isotope for North America is made at the 60-year-old NRU reactor in Canada, which has experienced two extended, unscheduled shutdowns in the past decade.

As a result, the Canadian government challenged the nation’s scientists to develop a new method of making the isotope that would use the medical cyclotron accelerators found in many major hospitals. These cyclotrons are already used to make other isotopes, but making technetium-99m in commercial quantities using accelerators has evaded physicists since it was first proposed more than 40 years ago.

Right on target

Now a team including Paul Schaffer, head of the Nuclear Medicine Division at the TRIUMF accelerator lab in Vancouver, has cracked the problem after two years of hard work. The main challenge was designing a target of molybdenum-100 that produces significant amounts of technetium-99m when irradiated with protons from a cyclotron; efficiency is important because molybdenum-100 is extremely expensive. The team also had to come up with a way of extracting the isotope rapidly – it has a half-life of about 6 h – and in a chemical form that can be used in medical applications. Furthermore, because of the high cost of the target, it must be recyclable.
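The timing constraint is easy to quantify. Cyclotron production proceeds via the ¹⁰⁰Mo(p,2n)⁹⁹ᵐTc reaction, and once made, the technetium-99m decays with its six-hour half-life whatever happens next. This short sketch – mine, not the TRIUMF team’s – shows why extraction and delivery must be fast:

```python
from math import exp, log

T_HALF_HOURS = 6.0                  # approximate half-life of Tc-99m

def fraction_remaining(hours):
    """Exponential decay: N/N0 = exp(-ln(2) * t / t_half)."""
    return exp(-log(2) * hours / T_HALF_HOURS)

# Activity surviving various processing and shipping delays
for delay in (1, 3, 6, 12):
    print(f"{delay:2d} h delay -> {fraction_remaining(delay):.0%} of the activity left")
```

Half the activity is gone after six hours and three-quarters after twelve, so a production method is only commercially useful if the isotope can be extracted, purified and delivered well within a working day.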

“We took the principles of physics, chemistry and engineering that people have known for years, and used them to write a recipe for upgrading a cyclotron so it could be used to make technetium-99m,” explains Schaffer.

The team has shown that the method can be used on two different commercial medical cyclotrons in Canada – which means that it is compatible with many cyclotrons worldwide. The next step for the team is to gain regulatory approval for the cyclotron-made isotope to be used in medical procedures. This should take less than two years, according to the scientists.

Cyclotron-based production also fits with what many medical physicists see as the future of medical isotopes: it is expected that technetium-99m procedures will eventually be replaced by positron-emission tomography, which likewise uses isotopes made in cyclotrons.

Weighty issue – more on redefining the kilogram


By Tushna Commissariat

I know that many Physics World readers want to be up to speed when it comes to the delicate matter of redefining the kilogram using Planck’s constant (h). You will therefore be pleased to learn that a paper published today in the journal Metrologia gives a detailed description of how a device known as a “watt balance” could achieve an accurate value of h, with the required level of certainty, in the near future.

The international definition of the kilogram is currently based on a lump of platinum–iridium housed by our metrologist friends at the International Bureau of Weights and Measures (BIPM) in Paris. These guys aim to provide the basis for a single, coherent system of measurements throughout the world, traceable to the International System of Units (SI). But, unfortunately, periodic inspections of the lump – known as the International Prototype of the Kilogram – show that it has slowly been losing mass over time because of chemical interactions between its surface and the air, making it rather unstable and, hence, not the ideal basis for a standard.

The SI is the most widely used system of measurement for commerce and science, and comprises seven base units: the metre, kilogram, second, kelvin, ampere, mole and candela. To ensure that these units remain stable over time and are universally reproducible, they should ideally be based on fundamental constants of nature. The kilogram, however, is the only unit still defined by a physical artefact.

What a watt balance can do is provide a way of redefining the kilogram in terms of Planck’s constant, which relates the frequency of a photon to its energy. First proposed by Brian Kibble at the UK’s National Physical Laboratory (NPL) in 1975, the watt balance relates electrical power to mechanical power. By exploiting two quantum-mechanical effects – the Josephson effect and the quantum Hall effect – electrical power can be measured in terms of h.
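In outline, the principle works like this (a textbook sketch in my own notation, not detail from the Metrologia paper):

```latex
% Weighing mode: the weight of the test mass balances the force on a
% current-carrying coil sitting in a magnetic field (geometry factor Bl):
m g = B l I

% Moving mode: the same coil, moved at velocity v, generates a voltage:
U = B l v

% Eliminating the hard-to-measure factor Bl equates mechanical and
% electrical power:
m g v = U I

% U is measured against the Josephson effect (K_J = 2e/h) and I via a
% resistance calibrated against the quantum Hall effect (R_K = h/e^2),
% so the product UI - and hence the mass m - is tied directly to h.
```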

Last October, delegates to the General Conference on Weights and Measures (CGPM) agreed that the kilogram should be redefined in terms of h, but they stipulated that a final decision would be made only when there are sufficient consistent and accurate data to agree on an accepted value for h.

The Metrologia paper describes what needs to be done to achieve the required level of precision – an uncertainty of five parts in 10⁸ or better. It provides a measured value of h and an extensive analysis of the uncertainties that can arise during experimentation. Although these results alone are not enough, consistent results from other measurement institutes using the techniques and technology described in the paper would yield an even more accurate consensus value – and a change to the way the world measures mass, possibly as soon as 2014.

In the paper, author Ian Robinson of NPL details the work that the lab carried out from 2007 to 2009, including the many in-depth alterations he made to the device itself to weed out uncertainties and achieve the most accurate value possible. NPL’s watt balance is currently being used at the National Research Council in Canada, and Robinson is confident that it will deliver considerably greater accuracy, providing an accurate value of h within the next two years.

Currently the lowest uncertainty on a value of h – 36 parts in 10⁹ – has been achieved by scientists at the US National Institute of Standards and Technology (NIST), who have their own watt balance. Before it will allow the kilogram to be defined by h, the BIPM has asked for at least one experiment to achieve an uncertainty of 2 parts in 10⁸, with the other experiments agreeing on the value to within an uncertainty of 5 parts in 10⁸.

When I spoke to Robinson, he said that the NPL watt balance, now in its new home in Canada, is primed to take the most accurate readings. “Over the past six years or so, I have made innumerable changes to the apparatus; each part has been custom-built by Kibble and I, and all of this is explained in detail in the paper. All over the world, people are working very hard to find out where the discrepancies occur,” he says. This means that a consensus should be reached soon.

To find out more – about the changes to the apparatus, the series of measurements using different weights, and the unexpected uncertainty that cropped up just as Robinson was poised to achieve the same uncertainty level as the NIST value, along with the cause of the discrepancy – take a look at Robinson’s paper.

To learn more about redefining the kilogram, take a look at this Physics World blog, feature and video.

A waiting game

By Michael Banks

Artist’s impression of the proposed Square Kilometre Array site in Australia (Courtesy: Swinburne Astronomy Productions)

I was rather hoping for more when I opened a press release from the Square Kilometre Array (SKA) organization this morning.

SKA, costing €1.5bn, is a proposed ground-based telescope that will allow researchers to probe the first 100 million years after the Big Bang for clues about galaxy evolution, dark matter and dark energy.

SKA will consist of around 2000–3000 linked antennas spread from a central 5 km “core” containing about 50% of the collecting area out to far-flung stations as much as 3000 km away. The telescope will then have the same collecting area as a hypothetical steerable dish 1 km across.

Two rival bids are going head to head to host the telescope: one led by Australia and the other by South Africa. The Australian design calls for a core in the west of the continent, with out-stations stretching eastwards to New Zealand. The South African project relies on a core in the Karoo region of the Northern Cape province, with the array extending northwards to eight neighbouring countries, including Madagascar and Kenya.

A decision on where to site SKA was widely expected to be made in February, and now the independent SKA site advisory committee has just submitted its evaluation report and site-selection recommendation to SKA’s board of directors.

Unfortunately for us mere mortals, we will not know the contents of the report until a later date. The press release gives no hint of who may host SKA, saying only that the seven members of the SKA organization – which include China, Italy and the UK – will now have a “face-to-face” meeting in “late March or early April” to consider the report’s conclusions and possibly make a decision about the location of the site.

If no consensus is reached at that meeting, then the members will “agree on the next steps in the process”. This one may drag on for some time to come.

Copyright © 2026 by IOP Publishing Ltd and individual contributors