Condensed-matter physicists have their own particle zoo – a menagerie filled with familiar and exotic quasiparticles, including old favourites like holes and phonons and newer additions such as surface plasmon polaritons. Quasiparticles are excitations in a solid that behave like tiny particles and obey quantum mechanics. A phonon, for example, is a quantized sound wave that propagates through a crystal.
Now Charles Tahan and colleagues at the Laboratory for Physical Sciences just outside Washington, DC, have shown that the interaction between phonons and electronic excitations in certain semiconductors can be described in terms of a brand new quasiparticle called the phoniton.
The team studied phonitons in silicon doped with phosphorus. As a phonon moves through the material it stretches and squeezes the crystalline lattice such that an electron associated with a phosphorus atom absorbs the phonon’s energy and is promoted into a higher energy level. This electron then decays back to its original energy, re-emitting the phonon, which can be absorbed and re-emitted at another phosphorus atom. The propagation of this phonon/excitation hybrid through the lattice can be described as a quasiparticle they have called the phoniton.
So what use could a phoniton be? Because they combine the electronic and mechanical properties of a material, they could be put to work in mechanical sensors that detect vibration, strain or other movement. Looking further into the future, they could also find use in quantum computers that use phonons to store and process information.
If you don’t have access to APS journals, you can read a preprint of the article here. Before you click through to the PDF, look at the comments, where it says “Changed ‘phononitons’ to ‘phoniton’ by negotiation with PRL editors…”. I have to agree with the editors – “phononitons” is a real mouthful!
Most of us have at some point wondered whether time travel is possible. In Build Your Own Time Machine, author Brian Clegg uses this widespread interest as an excuse to explore the many avenues of time travel – or, more specifically, human perceptions and the scientific concept of “time”. While readers sadly will not find a chapter containing blueprints for a working time machine, Clegg does explain complex fundamental ideas such as relativity, thermodynamics and quantum theory in a simple and clear style, interspersed with chapters that cover everything from Plato and Aristotle’s view of time to the evolution of the modern-day calendar, cryogenics and wormholes. The book is peppered with quirky facts such as “A frequent flyer ages around one-thousandth of a second less than a counterpart on the ground after 40 years of weekly Atlantic crossings,” and it also features short biographies of famous people involved in the study of time. One especially nice twist occurs when Clegg draws the reader’s attention to the fact that they are constantly moving through time, even as they read his book. So although you will not have built your time machine at the end of the book, you will certainly have done some time travelling.
Marvin the cat and Milo the dog love science. In fact, they love science so much that they have burst out of their usual home on the back page of the Institute of Physics’ Interactions newspaper and into a bright little book called Marvin and Milo: Adventures in Science. Like the original cartoon series, which has appeared regularly in Interactions since 2004, the book features a series of fun, simple experiments in which Marvin and Milo illustrate basic principles of physics using ordinary household items. The pair specialize in counterintuitive demonstrations, so the results of their experiments often seem magical. A good example is experiment 26, in which Marvin shows Milo how to make water run down a string as it is poured out of a jug. Of course, as Marvin explains, this is not really magic – instead, it is a fluid-physics phenomenon called the Coanda effect – and the text on the facing page notes that the same principle can make flowing water “stick” to the underside of a sloping gutter. Like Marvin and Milo themselves, artist Vic Le Billon and author Caitlin Watson (who is the Institute’s head of public engagement) form a great team, and their book would make an excellent gift for the under-11 crowd.
2011 Macmillan £9.99hb 96pp
…or perhaps not
With its lurid green, yellow and red colour scheme, exclamation marks and comic-book styling, the cover of The Book of Potentially Catastrophic Science screams with excitement. “Smashing atoms! Making gunpowder!” it shouts. “Hey kids! Try these experiments at home!” Unfortunately, anyone who looks inside expecting to actually smash atoms or make gunpowder will be sorely disappointed; the book’s most dangerous experiment involves nothing more hazardous than boiling water in a paper cup. Author Sean Connolly’s explanations are clear and entertaining, and young scientists who manage to reset their expectations will certainly find some fun activities for a winter’s day. Still, the book’s most valuable lesson is that old cliché: you can’t judge a book by its cover.
2010 Workman Publishing $13.95hb 205pp
Holiday reading: a doggy conundrum, a time-travelling novel that delves into the history of physics and a cartoon explanation of radioactivity. (From left: iStockphoto.com/caracterdesign, Shutterstock/Eugene Ivanov, iStockphoto.com/bamlou)
Life, maths and everything
In the spring of 1977 Steven Strogatz wrote a letter to his former high-school calculus teacher, Don Joffray, about a mathematical puzzle known as a “chase problem”. Suppose four dogs start from the corners of a square with sides of length a, Strogatz suggested. If each runs (at the same speed) directly towards the one counterclockwise from it, how far will each dog have travelled when the four meet in the centre? The answer (it is a, the length of the square’s sides – can you figure out why?) was not important, but the letter proved to be the start of an extraordinary mathematical correspondence, one that would ebb and flow for nearly three decades. During that time, Strogatz would graduate, earn a PhD and become a professor of applied mathematics at Cornell University. Meanwhile, “Joff” would remain in the classroom for another 22 years, admiring his former star pupil’s exploits from afar, before eventually retiring to spend more time canoeing on his beloved Long Island Sound. For most of this period, the two men’s correspondence was almost exclusively calculus-based. The only problems they talked about were the sort you could solve with clever substitutions and a few pages of algebra. Eventually, though, things began to change: first gradually, like the continuous functions of calculus, then in a series of abrupt shifts that echoed phenomena in Strogatz’s own research on chaos theory. Strogatz’s chronicle of the two men’s correspondence, The Calculus of Friendship, is simultaneously a fun collection of mathematical puzzles and a moving, bittersweet account of how his relationship with Joff evolved. The hardcover version was first published in the US in 2009, but it is now available in paperback – so there is no excuse not to send your favourite mathematician a copy.
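For readers who want to check the dogs’ answer before looking it up, the standard symmetry argument runs as follows (a sketch of my own, not taken from the book):

```latex
% By the square's fourfold symmetry, each pursued dog always moves
% perpendicular to the line joining it to its pursuer. The separation
% r(t) between a dog and its quarry therefore closes at the pursuer's
% full speed v:
\frac{\mathrm{d}r}{\mathrm{d}t} = -v + v\cos 90^{\circ} = -v
% Hence r(t) = a - vt, the dogs meet at t = a/v, and each dog,
% running at speed v for that time, covers a distance
d = v \cdot \frac{a}{v} = a
```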
2011 Princeton University Press £10.95/$14.95pb 184pp
Cosmic soundbites
It sounds a neat idea: a book that answers 140 different questions about the solar system, our galaxy and the wider cosmos in segments of no more than 140 characters each (the standard length of Twitter messages). Tweeting the Universe: Tiny Explanations of Very Big Ideas by science writers Marcus Chown and Govert Schilling does pretty much what it promises to do, with bite-sized sentences about, for example, whether there is water on Mars (yes, but it is frozen), what happened before the Big Bang (not sure) and whether it is dangerous to fly through the asteroid belt (not really). While the character limit was no doubt a diverting and useful constraint for the authors – brevity is the soul of wit and all that – the resulting Tweets are far from poetic, with definite articles and conjunctions often lopped off to save space. Moreover, each question is answered in the form of a dozen or so Tweets, making the explanations more like 1700 characters long and spread over two printed pages. So why not, if there are two pages, use them all up with old-fashioned prose? In fact, the introduction, which is conventionally written, is probably the best part of the book. Still, if this book’s ploy, which emerged from a column Schilling writes for the Dutch newspaper De Volkskrant, encourages an interest in physics among the Twitter generation, it will have served a useful purpose.
Faber and Faber £12.99/$18.56hb 311pp
Adventures in the outside
In the year 2110 the world’s population is divided. After decades of fruitless activism, a separatist “Ecommunity” has cut itself off from the technologically advanced “Outside” world, seeking refuge from climate change and the degenerate values of modern society. Ecommunity members are free to grow their own food, worship nature and study ancient cultures – but their vaguely sinister hierarchy prohibits them from inquiring about science, because “prying leads to meddling leads to destruction”. As the premise of a science-fiction novel, this is promising stuff, but Zvi Schreiber’s Fizz quickly takes an unexpected turn. It turns out that the book’s eponymous heroine wants a bit more out of life than eco-hymns and harvest festivals; in particular, young Fizz is passionately curious about physics and astronomy. So when a loophole in the Ecommunity’s rules allows her to visit the Outside on her 18th birthday, she seizes the opportunity to learn more. What follows is a time-travelling adventure that takes Fizz from ancient Greece to 21st-century England as she seeks to understand how the physical world works. Along the way, she is aided by many of history’s most famous scientists, as well as a few well-chosen plot devices (for example, her time machine is thoughtfully equipped with a 3D printer that produces era-appropriate clothing). This mixture of popular physics and young-adult fiction is unusual, to say the least, but it works surprisingly well and readers will quickly find themselves caring about Schreiber’s characters and Fizz’s personal dilemmas.
2011 Zedess Publishing $11.95pb 516pp
Just add cartoons
“Physics with a chuckle is physics more easily remembered.” So says Richard Muller in his introduction to The Instant Physicist, which puts this principle to the test by pairing short discussions of non-intuitive physics with a series of cartoons. The book – which seems tailor-made for the stocking-filler season – concentrates on topics related to hot-button issues, including energy, climate science and (especially) radioactivity. Joey Manfre’s sketches add a nice touch, but cartoons aside, The Instant Physicist is essentially a slimmed-down version of Muller’s previous book Physics for Future Presidents (2008 W W Norton). Readers who crave a bit more substance from their festive reading should probably look there instead.
In 1510 Nicolaus Copernicus conceived the project that would become the major work of his lifetime: working out the mathematical details of a heliocentric universe. By 1539 his project was substantially finished, but not only was he reluctant to publish it, he had evidently abandoned all hope of completing it successfully. Aside from his fear of rejection at the hands of biblical literalists and Aristotelians, he had encountered numerous difficulties with some of his mathematical models. In the case of the planet Mercury, the problems seemed intractable. They had, in fact, arisen from Copernicus’s axiomatic adoption of the idea that planets and other heavenly bodies must follow the uniform, circular motions of celestial spheres, which meant that his models retained geocentric residues that even his considerable ingenuity could not dispel. The complications were discouraging, and Copernicus evidently realized that something had gone terribly wrong, but he could not figure out what.
Into this depressing state of affairs stepped a young Lutheran mathematician and astrologer called Georg Joachim Rheticus. He had heard of Copernicus’s theory while on a visit to Nuremberg, Germany, and decided to travel to Frombork in north-east Poland to learn the details directly from its creator. In the course of this strange and improbable meeting, Rheticus – who was completely convinced of the correctness of the theory – persuaded Copernicus to complete the remaining details and have the book published. Though Rheticus was unable to help the older man resolve the problems with his models (the solution would come some 70 years later, when Johannes Kepler replaced circular models with elliptical orbits), his enthusiasm was evidently so infectious that Copernicus let him complete the preparations for the book’s publication. Without Rheticus, On the Revolutions of the Heavenly Spheres might have completely disappeared.
When science writer Dava Sobel first heard about the meeting between Copernicus and Rheticus, and how it led to the publication of On the Revolutions, she imagined these events as a drama. Eventually, she wrote a play in two acts called And the Sun Stood Still that dramatizes the meeting, its context and Copernicus’s last days. As she explains in a foreword, however, her editor persuaded her to place this play between two narratives: one covering the life of the astronomer up to his meeting with Rheticus; and the second summarizing the circumstances of publication and its aftermath down to the present day.
The resulting book-play-book A More Perfect Heaven may or may not be a new genre, but the decision to mix fact and fiction magnifies the already inherent tensions of the story. Most notably, the narrative portions of the book acknowledge incomplete evidence, diverse interpretations, ambiguity and – in a word – uncertainty about the actual events. The dramatic part, however, requires Sobel to select from many possibilities and write in a way that is consistent with her selections.
Rheticus’s own biography suggested how Sobel should depict his initial meeting with Copernicus. When a young student, Valentin Otto, visited him in 1574, Rheticus remarked that he had been the same age as Otto when he first met Copernicus, and exclaimed “If it had not been for my journey, his work would never have seen the light of day” – so Sobel makes him a somewhat impetuous, excitable youth. Rheticus’s visit also coincided with an anti-Lutheran decree issued by Copernicus’s bishop in Frombork, so in the play Copernicus has to keep him hidden, with his close friend Tiedemann Giese aiding in the subterfuge.
More controversially, Sobel makes dramatic use of Copernicus’s relationship with his housekeeper, Anna. We do not know for certain whether she was Copernicus’s de facto wife – he consistently denied it – but Sobel leaves no doubt. Nor does she avoid Rheticus’s apparent homosexuality and alleged pederasty. In 1551 the father of a Leipzig student charged Rheticus with sodomy. The punishment for such a crime was “death by fire” so, rather than risk a trial, Rheticus fled Leipzig. After some months the university sentenced him in absentia to banishment for 101 years.
Historians may object to several details, but Sobel’s handling of these tensions is deft, light-handed and at times even humorous. Of all the decisions she makes, the most contentious for historians by far is her portrayal of Copernicus as a man totally unconcerned with astrology, even dismissive of it. The fact is that Copernicus was silent on the subject. Not a single astrological prognostication is attributable to him with certainty, whether in treatises with a different disciplinary focus, his correspondence or in comments related to his work as a medical doctor. Rheticus, by contrast, was adept at astrology, and promoted a sort of astrological interpretation of On the Revolutions in his own Narratio Prima (First Account) of 1540. Copernicus did not repudiate this interpretation, but Sobel nonetheless depicts him as disagreeing with Rheticus about astrology – albeit not in a way that would disrupt their collaboration.
The play is very short, and a performance would probably take less than an hour. I do not know whether anyone suggested that Sobel expand it, but had she wished, she could have done so by dramatizing other episodes in Copernicus’s life. His meeting in 1496 with his teacher in Bologna, the well-known astrologer Domenico Maria Novara, is a possibility, as are his decisions in 1510 to leave the retinue of his uncle, Bishop Lucas Watzenrode, and to embark on his heliocentric project.
As it stands, though, Sobel has given us accessible and readable accounts of Copernicus’s life that serve as book ends for one of the most fortuitous and fateful meetings of minds in the history of science. The play brings that meeting to life with wit and boldness. Adding the suggested preliminary scenes, however, would have made it into a full-length play, set up the dramatic encounter and denouement of the last two acts, and perhaps resolved in a more satisfactory way some of the tensions arising from Copernicus’s doubts and personal ordeals.
Over the last few years, interest in the science of cookery has blossomed. If you do an Internet search for "molecular gastronomy", you will find close to a million pages devoted to the topic. The impact of science on restaurant kitchens – and, to a lesser extent, domestic ones – in the 21st century has been highly significant, and Nathan Myhrvold's Modernist Cuisine is a book that clearly demonstrates the strength of this influence on restaurants worldwide.
I say that Modernist Cuisine is a book, but it is really much more: it is a true heavyweight tome, with more than 2400 pages split into six substantial volumes. Inside, Myhrvold and his co-authors, Chris Young and Maxime Bilet, describe how science has brought cooking into the 21st century. They explain the techniques, equipment and ingredients of modernist cuisine, offer a wealth of complete recipes for whole dishes and include a full kitchen manual to enable any cook to develop their own ideas. The level of detail and care that have gone into the book's preparation set it in a class of its own. Previous attempts to describe the production of food have never managed to demonstrate the depth of knowledge and insight on display here, which, when combined with a truly scholarly approach, make this the ultimate reference book for all chefs and aspiring cooks.
The book's authors are a distinguished group. Myhrvold was the first chief technology officer of Microsoft and the founder of its research division, Microsoft Research, but he left the firm in 1999 to pursue other interests, including his lifelong passion for food. My first contact with him came just over a year ago when he e-mailed me some questions concerning the use of liquid nitrogen to make ice cream; the fact that the ensuing months-long correspondence led to a single short paragraph in the final book indicates just how much careful research and attention to detail has gone into it.
Young, for his part, is a maths and biochemistry graduate of the University of Washington, whom I met when he was appointed as the first manager of the Fat Duck Experimental Kitchen (FDEK), part of Heston Blumenthal's restaurant in Berkshire, UK. He subsequently oversaw the FDEK's expansion to employ more than six full-time chefs, and also helped develop the recipes for the BBC TV series Heston Blumenthal: In Search of Perfection. The third author, Bilet, is a chef who worked at both the Fat Duck and other esteemed kitchens before joining Myhrvold's cooking lab and working on this project.
In a short review such as this, it is impossible to do full justice to this work. All I can hope to do is give a flavour of its scope and describe a little of how it quietly and clearly embeds a scientific approach into the kitchen and the processes of modern cooking.
The level of detail given over to cooking techniques can be illustrated with a couple of examples. One of my own favourite cooking methods, especially for meat, is sous vide. In this method, the food is sealed (under vacuum) in a plastic bag and cooked for a set time at a given, constant temperature in a water bath – the exact same water baths used in physics, chemistry or materials-science laboratories. This way, the texture can be controlled so that the food can be prepared to near-perfection every time. Myhrvold et al. explain the sous vide processes in detail, describing why and how meat is kept tender – but more importantly, they also provide a very detailed set of tables giving the ideal cooking temperatures and times for just about any cut of meat you can imagine. I was particularly impressed to see that they include in these tables the time required at each temperature to pasteurize the meat being cooked.
When it comes to something as apparently simple as using a grill (which – confusingly for British readers – is termed "broiling" in the US), the authors really go to town, explaining the processes of heat transfer and providing excellent and clear instructions that should allow anyone to find the "sweet spot" in their own grill. This "sweet spot" is the area of the grill where the heat input to the food being cooked is more or less constant right across the food, and the book's charts show readers how to find the region where the grill's power does not vary by more than 10%.
Then there are the superb instructions for using an incredible range of polysaccharides, proteins and enzymes to produce food gels. Such gels can be hot or cold, stiff or flaccid, brittle or fluid, and can be used to prepare peas with the texture of caviar, mangos that look like fried eggs, or drinks that are hot and cold at the same time. These culinary uses of substances that are more often thought of as additives to mass-produced foodstuffs are what many people think of when they hear the words "molecular gastronomy", and this has perhaps given the subject a bad name in some quarters. This is unfortunate, as those in the academic world who are involved with molecular gastronomy prefer to think of it as the pursuit of the science behind the question of what makes food delicious – or not.
In the work's later volumes, readers will find recipes that use all these techniques and more for a vast array of dishes. These range from the apparently commonplace, such as chicken tikka masala (albeit a true gourmet version, cooked sous vide with the breast and thighs prepared at different temperatures), to the seemingly exotic such as "abalone and foie gras shabu-shabu with yuba and enoki", which I will certainly be trying for myself as soon as I have the time.
But for me, the most important and useful parts of the book are when the authors explain how to get the very best out of the incredible array of equipment that is now available to use in the kitchen. Techniques of cooking at high or low pressures, the use of temperature-controlled baths, freeze-drying, and cooking in steam and combination ovens are all explained with remarkable clarity.
One of the most striking aspects of the book – and a clear illustration of the impact that science and scientists have had on modern cooking – comes in chapter 10, where the authors provide a list of the equipment they deem essential or useful in a properly equipped professional modernist kitchen. Of the top 10 items on the Must Have List, most are more familiar to physicists than they are to cooks. Who could imagine a kitchen without water baths (number 1 on the list); liquid nitrogen (2); vacuum pumps (7); magnetic stirrer/hotplates (9); centrifuges (9) and autoclaves (10)? Later, in the Handy Special Purpose Tools List, 8 out of 12 items come directly from the pages of scientific catalogues, including ultrasonic baths, rotary evaporators and vacuum ovens.
Although many readers will doubtless baulk at obtaining these various bits of kit, I found to my astonishment (and delight) that of the 53 items listed in the four tables of tools for the modern kitchen, I have at least 32. It is no wonder that I think this book will make my ideal Christmas present, and while my partner might feel it is too expensive, I should point out that it costs less per gram than good parmesan cheese.
Modernist Cuisine by Nathan Myhrvold, Chris Young and Maxime Bilet 2011 The Cooking Lab £395/$625hb 2438pp
For most physicists, there are two possible paths to fusion energy. The first is magnetic-confinement fusion, which involves using magnetic fields to trap a plasma that is then heated until it is hot enough for hydrogen ions to fuse – typically about 150 million kelvin. The other path is inertial confinement, whereby powerful lasers compress a dense hydrogen target to initiate fusion.
While these approaches are technically very different, they have one thing in common – they are being developed in huge and expensive projects. Magnetic confinement is being led by the ITER facility that is currently being built in France at an estimated cost of €16bn, while the National Ignition Facility (NIF) in California is pioneering inertial confinement through the use of 192 giant lasers to blast a pea-sized target.
There are some physicists, however, who believe that there is a middle ground towards practical fusion – one that combines magnetic and inertial confinement yet can be achieved at a fraction of the cost of either. One such person is the Canadian physicist Michel Laberge, who co-founded the company General Fusion to commercialize a fusion technique called "magnetized target fusion", or MTF. In 1990 Laberge received a PhD in plasma physics from the University of British Columbia, where he researched laser–plasma interactions. He then completed a postdoc in the same field at the Ecole Polytechnique in Paris and later at the National Research Council of Canada, where he used femtosecond lasers to study fast chemistry. This was followed by a nine-year spell developing technology for colour printing at the Vancouver-based firm Creo.
In 2002 Laberge and Creo colleague Doug Richardson left the company to create General Fusion, of which Laberge is now president and chief technology officer, and Richardson is the chief executive. Based in Burnaby, a suburb of Vancouver, the firm wants to build a $40m prototype reactor based on the principles of MTF. So far, the firm has raised more than $33m from a range of investors, including Amazon founder Jeff Bezos, the oil company Cenovus Energy and the Canadian government.
Hotter than the Sun
Like magnetic confinement, MTF begins with a plasma that is held in place by a magnetic field. This plasma would, however, be 1000 times denser than might be found in a reactor such as ITER – and therefore much less stable. But as long as the plasma sticks around for long enough that it can be compressed, it should be possible to achieve fusion. Several schemes have been proposed for how to do this squeezing. At one end of the spectrum, lasers like those used at NIF – but not necessarily as powerful – could be used. The Shiva Star experiment at the Air Force Research Lab in Albuquerque, New Mexico, for example, uses the electrical energy stored in a huge bank of capacitors to change a magnetic field very rapidly, which squeezes a metal tube onto the plasma. Other schemes, including that of General Fusion, involve the mechanical compression of the plasma using pistons.
A cycle in the proposed reactor would begin with the creation of a plasma of tritium and deuterium. The plasma is formed in an injector, which wraps it in a magnetic field creating something akin to a swirling smoke ring. The plasma is then transferred along a magnetic vortex to the centre of a rotating sphere of molten lead and lithium.
The complete reactor An artist's impression of the completed reactor. The two plasma injectors are at the top and bottom of the reactor and the sphere of molten metal is shown in red at the centre of the image. Surrounding the metal are the pistons. The plasma is compressed at the bright spot at the centre of the molten metal. (Courtesy: General Fusion)
The sphere is surrounded by about 200 pneumatic pistons, which will suddenly all push in on the sphere at exactly the same time. This, claims General Fusion, will create an acoustic wave that will travel through the molten metal and compress the plasma so much that it will become hot enough and dense enough for the deuterium and tritium nuclei to fuse together. The goal for the initial plasma is a temperature of 10⁶ K, at a density of 10¹⁷ particles/cm³. By contrast, an ITER plasma is expected to be about 150 million kelvin and have a density of about 10¹⁴ particles/cm³.
The large amount of heat produced by the fusion process would be absorbed by the molten metal and then recovered by passing the metal through a heat exchanger to generate steam that could in turn be used to make electricity. Some of the neutrons created during fusion will be absorbed by the lithium in the molten metal, which will create more tritium. This tritium will then be removed from the molten metal and used in future compression cycles. The entire process would be repeated with the injection of the next plasma.
Challenges ahead
General Fusion's current design for the reactor predicts that about 100 MJ of electrical energy per cycle could be created. Running the system at one cycle per second would generate power at 100 MW – which is about one-fifth the capacity of a small commercial nuclear power plant. The firm claims that a reactor could operate at this power for a year by consuming only 18 kg of deuterium and 60 kg of lithium.
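As a back-of-envelope check of the figures quoted above (my own arithmetic, not the firm’s model), the quoted energy per cycle and cycle rate do indeed give the stated output power, and the plasma densities mentioned earlier give the 1000-fold ratio over ITER:

```python
# Back-of-envelope check of the General Fusion figures quoted in
# the article (illustrative arithmetic only, not the firm's model).

MJ = 1e6  # joules per megajoule

energy_per_cycle = 100 * MJ    # ~100 MJ of electrical energy per cycle
cycles_per_second = 1          # one compression cycle per second

power_watts = energy_per_cycle * cycles_per_second
print(f"Output power: {power_watts / 1e6:.0f} MW")  # Output power: 100 MW

# A year of continuous operation at that power:
seconds_per_year = 365.25 * 24 * 3600
energy_per_year = power_watts * seconds_per_year
print(f"Energy per year: {energy_per_year / 1e15:.1f} PJ")

# Plasma-density comparison quoted in the article:
general_fusion_density = 1e17  # particles/cm^3 (MTF target plasma)
iter_density = 1e14            # particles/cm^3 (ITER plasma)
print(f"Density ratio: {general_fusion_density / iter_density:.0f}x")
```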
According to Laberge, the firm has to overcome two key challenges before the reactor can be a reality. The first is being able to create a plasma that will endure long enough in the reactor that it can be compressed. Currently, the firm can create a plasma that hangs around for about 50 µs, but Laberge says this must be boosted to at least 100 µs. The plasma would be injected from two identical sources on opposing sides of the sphere. Each source would produce a doughnut-shaped ring of plasma at a temperature of 10⁶ K that would be "blown" to the centre of the sphere much like a smoke ring. The two doughnuts would collide and combine at the centre of the sphere prior to compression. So far, the firm has built one such plasma injector.
The other big challenge, says Laberge, is creating a system of pistons, all of which must strike the sphere simultaneously to within about 10 µs to ensure that the plasma is compressed evenly. If it is unevenly squeezed, some of the plasma could leak out, preventing the target from reaching the correct density and temperature for fusion to occur. The firm currently controls its pistons using a feedback mechanism that measures the positions of the pistons and uses piezoelectric brakes to keep them all moving at the same pace. As a result, General Fusion can control the motion of a single piston to about 10 µs, and Laberge is confident that this figure can be reduced further.
A working fusion reactor would have to be built at a new location because the ceiling at the firm's Burnaby premises is too low. Laberge says that the company has its eye on a disused transformer-testing facility owned by BC Hydro. He does not think it will be difficult for the firm to get a licence to build the prototype facility, which would take about three years to construct.
While the reactor could be run in a proof-of-principle mode without tritium, Laberge says that the facility would require a very small amount of tritium – less than that used in a hospital – to achieve fusion. If the firm can solve these problems, Laberge and colleagues plan to embark on a fundraising campaign to raise $40m to build the prototype.
Just a stunt?
So what do plasma-fusion experts think of General Fusion's plans for MTF? "I believe that MTF has potential as a viable path to fusion energy," says Uri Shumlak of the University of Washington, who is familiar with the company's plans. However, he points out that the science is not as fully developed as magnetic- or inertial-confinement fusion. "MTF represents a higher risk but lower cost path that is worth pursuing in my opinion," he adds.
Plasma physicist Michael Brown of Swarthmore College in Pennsylvania agrees that MTF is a realistic path. But he adds that while the firm's plan to crush the plasma with pistons is in some ways simpler than the NIF approach of using lasers, it is also a big technological challenge because such a carefully timed implosion of liquid metal has never been achieved before. He also warns that forming a target plasma in a liquid-metal vortex is not an easy task. "There are a lot of plasma physics and piston-technology issues for them to work out," he says.
Brown also admits that MTF is "viewed as a stunt" by the majority of the fusion community. He adds that a one-time implosion might generate a burst of neutrons from fusion but a reactor is a different story. "Many fusion scientists view the magnetic-confinement approach, as embodied by ITER, as the next big step to a steady-state fusion reactor," he says.
David Ward, who works on magnetic-confinement fusion at the Culham Centre for Fusion Energy in the UK, says that while General Fusion's approach seems plausible and that he will be following the firm's results with interest, he will not be jumping ship from magnetic confinement just yet. "In the past, when people have proposed new approaches as a shortcut to fusion, these have always failed and the lesson we have learned is that we just have to do it properly."
Using an atomic force microscope (AFM) to zoom down to nanometre dimensions, researchers in the US have improved our knowledge of the friction between surfaces at geological fault lines. The experiments show that chemical processes can act to bond the surfaces together – a discovery that may lead to a better understanding of how earthquakes are triggered.
Understanding how these surfaces interact is crucial because earthquakes result from frictional instabilities along active fault lines. Stress builds up along the fault and a slip occurs when this stress overwhelms the friction, releasing the stored energy and triggering an earthquake. Geophysicists know that friction between opposing rocks increases the longer the surfaces remain in contact. This is known as frictional strengthening or "ageing" and has been observed in both natural and laboratory settings.
Quantity or quality?
Two competing theories have been put forward to explain frictional strengthening. First, the points of contact at the rock interface may grow and increase in area over time ("plastic creep") – this is known as the "quantity" argument. Second, chemical bonding along the fault can increase the contact strength – this is the "quality" argument. However, these mechanisms are difficult to study because they usually occur deep within thick layers of rock.
Now, a team of physicists and geologists has shed light on frictional strengthening by considering the problem from a nanoscale perspective. Robert Carpick and colleagues at the University of Pennsylvania carried out experiments using an AFM, dragging a tiny silica tip across a silica surface in order to mimic rock-on-rock friction. Silica was used because it is a major component of rock.
The frictional force between the tip and the surface was found to increase logarithmically with time on a timescale of about 100 s, as is observed for macroscopic rock interactions. The researchers then investigated the source of this strengthening by sliding the silica tip across surfaces of diamond and graphite. Because these two materials are chemically inert and do not easily form bonds with silica, any observed frictional strengthening would be caused by a change in contact area, not chemical bonding.
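Logarithmic ageing of this kind is often written as a Dieterich-style law, F(t) = F₀ + β ln(1 + t/t₀), in which static friction gains roughly the same increment for every tenfold increase in hold time. A minimal sketch of that behaviour – the parameter values here are illustrative, not figures from the paper:

```python
import math

def aged_friction(t_hold, f0=1.0, beta=0.05, t0=1.0):
    """Dieterich-style ageing law: static friction grows with the
    logarithm of the hold time t_hold (in seconds)."""
    return f0 + beta * math.log(1.0 + t_hold / t0)

# For hold times well above t0, friction gains a near-constant
# increment (about beta * ln 10) per decade of contact time:
gain_per_decade = aged_friction(1000) - aged_friction(100)
```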
Quantity ruled out at the nanoscale
The results were conclusive: the silica–diamond and silica–graphite interfaces showed almost no frictional strengthening, thus ruling out the contact-area mechanism. The researchers conclude that the silica–silica strengthening must be caused by chemical bonding at the interface, possibly via the formation of siloxane (Si–O–Si) bonds.
"Right now, most of the models of earthquakes are empirical and not predictive," says Carpick. "This adds a piece to the puzzle, and we think it's an important one because it proposes a specific mechanism to include in model friction: chemical bonding at the interface. With nanoscale experiments, new insights can be gained that have previously been elusive."
Although chemical bonding accounts for frictional ageing on the nanoscale level, it remains to be seen whether this mechanism can fully explain real-world, macroscopic rock processes.
Which process dominates?
"These results suggest that chemical bonding is dominant in chemically compatible surfaces, whereas other results distinctly indicate that the quantity of contacting surfaces plays a key role," Jay Fineberg of the Racah Institute of Physics in Jerusalem told physicsworld.com. "I would suspect that both effects are at play, and which of these dominates may well depend on how reactive any two contacting surfaces are. In any case, these are important issues that still need to be understood," he adds.
Carpick agrees that chemical bonding might not be the only contributor. "At the macroscopic level, interfaces between materials are extremely complex. We are not ruling out other mechanisms (such as growth of the contact area), but we are showing that chemical bonding is one mechanism that should be included in any comprehensive analysis," he says.
So how could we better understand the relative contribution of these two mechanisms? "We would need to look at the higher stresses, and also higher temperatures, that span the conditions found in geological systems," says Carpick. "We may see plastic creep at these higher pressures and temperatures, but the chemical bonding may also increase."
The colourful image shown right is an illustration of a new quasicrystal formation that has been discovered using computer simulations. The formation consists of hard triangular bipyramids, each composed of two regular tetrahedra sharing a single face, and is described in a paper in Physical Review Letters by Sharon Glotzer and colleagues at the University of Michigan in the US.
Prize-winning field
Quasicrystals – materials that have ordered but not periodic structures – were discovered by Daniel Shechtman in 1982, a find that won him the 2011 Nobel Prize for Chemistry.
Before Shechtman's discovery, it was thought that long-range order in physical systems was impossible without periodicity. Atoms were believed to be packed inside crystals in symmetrical patterns that were repeated periodically over and over again. But Shechtman found atoms in a crystal that were packed in a pattern that could not be repeated and yet had "10-fold" rotational symmetry. Since then, hundreds of different quasicrystals have been discovered, including icosahedral quasicrystals that have 2-fold, 3-fold and 5-fold rotational symmetry. There are also octagonal (8-fold), decagonal (10-fold) and dodecagonal (12-fold) quasicrystals that exhibit "forbidden" rotational symmetries within 2D atomic layers but that are periodic in the direction perpendicular to these layers.
Complex configuration
In this new work, the researchers, using Monte Carlo simulations, have found a degenerate dodecagonal quasicrystal that is formed when the constituent particles are at packing fractions above 54%. This new formation is only the second quasicrystal formed with hard particles and the first one that is degenerate.
Here, "degenerate" means that there is no unique lowest energy ground state of the crystal system that corresponds to a single unique configuration of the atoms. "Instead, there are a large number of configurations of these hard triangular bipyramids with the same overall energy and so they are called degenerate," explains physicist Rónán McGrath from the University of Liverpool in the UK, who has been studying quasicrystals for the past 12 years but was not involved in the work. The "hard particles", McGrath goes on to explain are particles that are purely geometric – there are no attractive or repulsive interparticle interactions, as would be seen if the particles had atomic content. Glotzer's group found a quasicrystal formation for hard tetrahedrals – the first of its kind – in 2009.
The team's calculations also showed that for higher packing fractions, a triclinic crystal formation is preferred. "Imagine covering a table top with dominoes. There are many ways of completely covering the table with no space between the dominoes – you could place all the dominoes horizontally, all of them vertically, or mix the two. From the point of view of the two 'lobes' of the domino [the two sections], all arrangements are identical. However, if you draw lines connecting the two lobes within a single domino, then each arrangement looks different. This is precisely the case for our particles, which are like 3D tetrahedral 'dominoes'," explains Glotzer.
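Glotzer's domino analogy can be made quantitative: the number of distinct ways to tile even a narrow strip with dominoes grows exponentially with its length, which is exactly the kind of extensive degeneracy the bipyramid packings display. A toy count for a 2 × n strip – a standard combinatorial result, offered here only as an illustration of the analogy:

```python
def tilings_2xn(n):
    """Number of ways to tile a 2 x n strip with 1 x 2 dominoes.
    The count obeys the Fibonacci recurrence: the final column is
    either one vertical domino (leaving a 2 x (n-1) strip) or a
    pair of horizontal dominoes (leaving a 2 x (n-2) strip)."""
    a, b = 1, 1  # tilings of 2x0 and 2x1 strips
    for _ in range(n):
        a, b = b, a + b
    return a

# Exponential growth: 1, 2, 3, ..., 89 for n = 1, 2, 3, ..., 10
counts = [tilings_2xn(n) for n in (1, 2, 3, 10)]
```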
"I think this work is really interesting because it goes some way towards helping us understand why and how quasicrystals form – the biggest open question in the field – by looking at purely geometric effects," says McGrath. According to Glotzer, the work highlighted the fact that entropy may play a much larger role in quasicrystal formation than was previously thought.
Historical path
McGrath also told physicsworld.com that he was intrigued by the new discovery as it follows in the tradition of the work carried out by Alan Mackay from the UK, who was one of the first to explore geometric effects in crystallography. "He was, in turn, a student of [pioneer crystallographer] John Bernal, who, in turn, was a protégé of Lawrence Bragg; so for me, there is a nice linkage right back to the origins of crystallography."
Over the years, quasicrystals have led to important discoveries in a variety of fields, from nanoscience to supramolecular chemistry. Photonic "metamaterials" based on quasicrystals may one day replace semiconductor devices in communication and information technologies, while quasiperiodic arrays of electronic spins could reveal new aspects of magnetism for spintronics and other "transformation optics" applications.
The most advanced and biggest craft ever to be sent to Mars is well on its way to the red planet after successfully taking off on an Atlas V rocket at the Kennedy Space Center in Florida at 10.02 a.m. local time on 26 November. The 900 kg nuclear-powered craft, dubbed Curiosity, is due to land on Mars on 6 August 2012 after a 9-month journey. It is designed to determine whether life could have ever arisen on the red planet, as well as to characterize the Martian geology and climate.
Costing $2.5bn, Curiosity will carry ten science instruments, including cameras and spectrometers, to determine the composition of the Martian surface and attempt to detect the chemical building blocks of life. The craft will use a drill and scoop at the end of its robotic arm to gather soil and powdered samples of rock, which it will then pass to its internal instruments.
Some of the instruments have never before been flown on a mission to Mars. One of these – ChemCam – will fire a laser with a wavelength of 1067 nm and a power of 10 MW to deposit 15 mJ of energy onto a millimetre-sized spot of rock, breaking it down. Curiosity will then collect and analyse the spectrum of the light emitted by the vaporized rock to study its chemical composition.
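The quoted figures also fix the pulse length: delivering 15 mJ at a peak power of 10 MW implies a pulse lasting about 1.5 ns. A back-of-the-envelope check (the duration is inferred from the numbers above, not a stated mission specification):

```python
energy = 15e-3      # pulse energy in joules (15 mJ)
peak_power = 10e6   # peak power in watts (10 MW)

# power = energy / time, so the implied pulse duration is:
pulse_duration = energy / peak_power  # seconds
print(pulse_duration)  # 1.5e-09, i.e. about 1.5 nanoseconds
```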
During its two-year mission, the rover will also characterize the radiation found near the surface of Mars, to assess the viability of human exploration and the shielding that astronauts would need. Unlike previous rovers, which relied on solar panels, Curiosity will be powered by a plutonium battery that generates electricity from the heat of radioactive decay.
"Curiosity is the first rover designed to be both field geologist and portable laboratory," says Aileen Yingst, deputy principal investigator for a camera mounted on the robotic arm of the rover – the Mars Hand Lens Imager. "It has many of the characteristics of other rovers, but it also has instruments that will allow it to look for evidence of carbon compounds in samples."
Rocket-powered landing
Curiosity's target landing area is the Gale crater – a 150 km-wide basin that is believed to be around 3.5 billion years old. Curiosity, which is five times as heavy as the Spirit and Opportunity rovers that have been deployed on Mars for the last seven years, will test a novel type of landing: it is too heavy to employ airbags to cushion its descent, as previous Mars rovers have done.
Instead, Curiosity will enter the Martian atmosphere in a pod, which will protect it from the intense heat. The pod will then deploy a parachute once it is within 10 km of the surface, after which the heat shield will fall off and a rocket-powered descent stage, dubbed a "sky crane", will use rockets to slow the craft's journey to the surface.
Once it is within 1.8 km of the surface, Curiosity, together with the sky crane to which it is attached, will separate from the pod and slowly descend to the surface. The sky crane will then lower Curiosity to the surface via three tethers as it hovers about 20 m above ground. NASA hopes the sky crane could be used in a future manned mission to the red planet.
"[Curiosity] will tell us critical things that we need to know about Mars; and while it advances science, we'll be working on the capabilities for a human mission to the red planet and to other destinations where we've never been," NASA administrator Charles Bolden said after Curiosity took off for Mars.
A new type of structure that converts red light into blue has been unveiled by researchers in the US. The conversion, known as frequency doubling or second-harmonic generation (SHG), is carried out by "nanocups" – tiny, artificially designed 3D structures. SHG is used in light sources and in metrology applications – and the researchers believe that the new structures could be adapted to achieve frequency doubling in parts of the electromagnetic spectrum where it is currently not possible.
SHG is a nonlinear optical process whereby two photons of the same frequency are converted into a single photon with twice the energy – and therefore twice the frequency or half the wavelength – of the initial photons. The process was first demonstrated in 1961 when researchers focused a ruby laser with a wavelength of 694 nm into a quartz sample and observed that the light that was subsequently emitted had a wavelength of 347 nm.
Today, SHG is typically produced in nonlinear media, such as certain optical crystals, and the effect is widely used by the laser industry, for example, to make green 532 nm lasers from a 1064 nm source.
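The bookkeeping behind SHG is simple: a photon's energy is E = hc/λ, so doubling the energy halves the wavelength. A quick sketch checking the conversions mentioned above:

```python
def second_harmonic(wavelength_nm):
    """SHG combines two photons of equal energy into one photon of
    twice the energy, so the output wavelength is half the input."""
    return wavelength_nm / 2

# The examples from the text:
ruby_1961 = second_harmonic(694)     # 347 nm, the original quartz demo
green_laser = second_harmonic(1064)  # 532 nm, the familiar green laser
```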
Heroes of the half-shell
Now, Naomi Halas and colleagues at Rice University in Houston have designed a new nonlinear optical structure for frequency doubling. Called a nanocup (or half-shell), it consists of a hemispherical nanoparticle of radius 60 nm made of non-conducting silica. A 35 nm-thick layer of gold is deposited onto the hemisphere's convex surface to create a cup-like structure. Such nanocups have "plasmonic resonances", which are collective oscillations of the metal's conduction electrons that can strongly interact with light at certain frequencies.
Halas' team has shown that the nanocups respond to both the electric- and magnetic-field components of light, and possess unique light-refractive properties. The Rice researchers succeeded in generating second-harmonic ultraviolet light from individual nanocups by tuning the magnetic plasmon resonance to the incoming laser light with a wavelength of 800 nm. They also found that they could increase the intensity of the SHG by tilting the nanocup with respect to the incoming laser light. The outgoing SHG signal at 400 nm was collected and analysed using a CCD camera. The team observed that the intensity of the SHG increases as the angle between the incident beam and the symmetry axis of the nanocup is increased (see figure).
Inaccessible wavelengths
"Our technique generates frequency-doubled light at efficiencies equivalent to those produced by conventional nonlinear optical crystals with the same thickness," says Halas. "Our work on nanocups could lead to the development of other, similar types of nonlinear optical materials that are designed to work at specific wavelengths of light, say in the infrared or ultraviolet, or at wavelengths that are currently inaccessible to existing nonlinear optical materials."
According to the team, photonic devices, such as optical parametric oscillators or amplifiers and electro-optic or acousto-optic modulators, could be made using these types of structures. The nanocups could also be integrated into silicon photonics for on-chip optical sources or for measurements in future work.
A vast bubble of hot, rarefied gas has been revealed as a source of cosmic rays – the mysterious particles that batter the Earth continuously. The observation of the so-called superbubble, measuring more than 100 light-years across, was made using gamma rays collected by NASA's Fermi satellite and sheds light on the origin of cosmic rays in regions of massive-star formation.
Cosmic rays are highly energetic protons, nuclei and electrons that arrive at the Earth from space. Ever since they were discovered in 1912 by the Austrian physicist Victor Hess, scientists have debated where they come from and how they are accelerated. Travelling at close to the speed of light, they can have energies many orders of magnitude higher than can be achieved in the most powerful accelerators on Earth.
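"Many orders of magnitude" can be put in numbers: the highest-energy cosmic rays observed carry around 10²⁰ eV, while the LHC accelerates protons to a few times 10¹² eV. The LHC figure below is included for context only (it is not from the original article), but the gap works out to roughly seven orders of magnitude:

```python
import math

cosmic_ray_max = 1e20  # eV, the highest-energy cosmic rays observed
lhc_proton = 7e12      # eV, roughly the LHC design energy per proton

orders_of_magnitude = math.log10(cosmic_ray_max / lhc_proton)
print(round(orders_of_magnitude, 1))  # ~7.2 orders of magnitude
```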
Many scientists believe that cosmic rays with an energy up to about 10¹⁵ eV are accelerated by the shock waves produced when a supernova (an exploding high-mass star) ejects material into space at very high speeds. Data analysed from NASA's Advanced Composition Explorer spacecraft in 2003 provided indirect evidence that at least some cosmic rays are accelerated within regions of massive-star formation, where about 80% of supernova remnants reside. The satellite measured the relative abundance of different isotopes within samples of cosmic rays reaching the Earth, and found that while four-fifths of this material resembles that found in our own solar system, about a fifth corresponds to material ejected by heavy stars.
More direct support
The latest research provides more direct support for this hypothesis. An international team of astrophysicists has analysed gamma-ray data recorded by the Large Area Telescope onboard the Fermi satellite, and has found an extended source of gamma rays emitted from within the region of the Cygnus constellation. The gamma-ray emission extends along a line measuring about 160 light-years, between two clusters of massive stars, one containing more than 500 massive stars and the other about 75.
Massive stars form inside dense clouds of gas, ejecting matter as stellar winds while they grow and, finally, when they explode as supernovae. The pressure of these ejections pushes gas away from the stars, creating cavities, or bubbles, around them. These bubbles can grow until they merge with bubbles from neighbouring stars, so producing superbubbles.
The Fermi researchers believe that the gamma rays they have observed are the result of cosmic rays being produced inside a superbubble that then interact with the gas and light contained within the bubble. Astrophysicists use such gamma rays to observe the behaviour of cosmic rays because, unlike cosmic rays, gamma rays are not deflected by the magnetic fields that permeate space, and therefore their origins are easier to pinpoint.
The idea that the Fermi satellite is seeing cosmic rays produced inside the Cygnus superbubble is given added support by the relatively large number of higher-energy photons it detected. This "hard" gamma-ray spectrum suggests that the cosmic rays were accelerated close to where they produced the gamma rays, that is to say inside the superbubble. The gamma-ray emission produced by cosmic rays in the neighbourhood of the Earth, in contrast, has a "soft" spectrum because the cosmic rays have travelled further from their sources and lost energy in the process.
First "firm proof"
"This is the first time we have firm proof of cosmic-ray sources inside massive-star-forming regions," says group member Luigi Tibaldo of the University of Padua in Italy. "This is an important step forward in the quest to understand the mystery of cosmic rays."
The next step is to work out exactly what is doing the accelerating. As Tibaldo explains, the culprit could be isolated shock waves generated by single supernova remnants, or else the collective action of many different shock waves. Shedding light on this question will involve higher-resolution observations of the Cygnus superbubble, as well as more refined models of superbubble acceleration and more data from other massive-star-forming regions, both within and beyond our galaxy, says Tibaldo.
Tibaldo does point out, however, that such superbubble acceleration could not solve the cosmic-ray mystery single-handedly. That is because the shockwaves produced by supernova remnants or clusters of massive stars do not pack enough punch to accelerate cosmic rays to the very highest energies at which they have been observed – 10²⁰ eV and beyond.
Alan Watson of Leeds University, who is not part of the Fermi team, believes that the latest results are "an important discovery" in cosmic-ray physics because, he says, "it is clear the researchers have established that there are freshly accelerated particles in the region of the superbubble", but argues that "questions remain", such as whether the particles being accelerated are protons or electrons. "Unfortunately, it does not seem that the problem of the origin of cosmic rays is going to be completely solved in time for the centenary in August 2012," he adds.