
‘Medusa front’ spotted in nanobatteries

 

Researchers at Sandia National Laboratories in New Mexico and the US Department of Energy’s Pacific Northwest National Laboratory are the first to observe how a nanobattery operates in real time using high-resolution transmission electron microscopy. The work could help make improved devices that will be used to power nanomachines and nanorobots of the future with applications in medicine and other fields.

Six years after Richard Feynman gave his famous lecture on nanotechnology, “There’s plenty of room at the bottom”, Twentieth Century Fox released a science fiction film called Fantastic Voyage in which a miniature spaceship just 1 µm in size travels inside a human body and removes a blood clot, all while evading the body’s immune system. Given that a typical human cell is about 10 µm across and the latest computer chips have feature sizes of just 45 nm, making such nanorobots is no longer out of the question. However, the biggest challenge will be how to power these autonomous nanomachines.

Lithium-ion batteries consist of two electrodes, the anode (negative) and the cathode (positive), separated by an electrolyte – a conducting material through which charged ions can move easily. During cell discharge, the positively charged lithium ions travel across the electrolyte to the cathode and so produce an electric current. When the batteries are recharged, an external current forces the ions to move in the opposite direction so that they can be stored at the anode.

Expanding anodes

Tin oxide is ideal for making the anode in lithium-ion batteries because it has a high energy density. However, upon charging, the material expands, which leads to cracking and reduced electrical conductivity, and eventually battery failure – a problem that is particularly serious in practical nanobatteries.

To investigate this expansion in detail, Jianyu Huang and colleagues made a working prototype of a nanobattery that comprises a single nanowire anode made of tin dioxide that can be charged and discharged. The battery also contains a specially designed ionic liquid electrolyte that can withstand the high vacuum of a transmission electron microscope (TEM) and a bulk lithium cobalt oxide cathode. The researchers loaded the device into the TEM to see exactly what happened when they applied –4 V against the lithium anode.

The team observed that when the tin dioxide nanowire was charged up, it swelled, twisted and then elongated (see video). These changes come about because of a process called lithiation, common to all lithium-ion batteries, in which ions extracted from the cathode during charging squeeze into the anode. By watching lithiation as it happens, Huang and colleagues hope to understand how the electrode accommodates the associated volume changes, which could help in designing more advanced nanobatteries. “This will help us understand why a battery fails following cyclic charging and discharging,” says team member Chongmin Wang.

Surprising behaviour

The fact that the anode nanowire elongates to nearly twice its size came as a surprise, says Huang. Normally, the wire should expand in the radial direction rather than along its length. The nanowire twisting during charging and discharging is also unexpected and astonishing, he adds. “Such behaviour must be taken into account if we want to design and build standalone nanowire batteries, because it is electrical shorting as a result of these transformations that leads to battery failure.”

The researchers didn’t stop there; they also recorded how the microstructure of the battery evolved during charging. They observed that it changed from being an ordered, crystalline solid to being disordered and amorphous. “We saw that a high density of mobile dislocations nucleate and are absorbed at the chemical reaction (or ‘Medusa’) front with the dislocation cloud serving as a precursor to solid-state amorphization,” explained Huang. Electrochemical solid-state amorphization is a poorly understood process by which a crystalline material changes to an amorphous material. Amorphization will degrade a device, and controlling it is crucial for how well batteries perform and how long they last.

“While we ran short of demonstrating a fully packaged nanobattery, we believe we have made an important step towards an important goal in nanotechnology – building a single-nanowire battery consisting of a nanowire anode and cathode, and nanoscale electrolyte and packaging,” Huang told physicsworld.com.

The research, which was reported in Science 330 1515, will ultimately help create batteries with high energy density, high power density and long cycle lifetimes.

CERN moves closer to antihydrogen spectroscopy

Physicists at CERN have taken a big step towards making the first spectroscopic measurements on a beam of antihydrogen atoms. The antihydrogen atoms, which consist of an antielectron orbiting an antiproton, were made by members of the lab’s ASACUSA group. The beams could be used to carry out the first detailed studies of the energy levels in antihydrogen.

Measuring in detail the energy levels in antihydrogen is important because the Standard Model of particle physics says they should be identical to those of hydrogen. Any slight differences in the “fine structure” of the levels compared to ordinary hydrogen could shed light on why there is so much more matter than antimatter in the universe.

The breakthrough comes just weeks after researchers in the ALPHA collaboration at CERN succeeded in trapping 38 antihydrogen atoms for about 170 ms. This was the first time antimatter atoms had been stored for long enough to measure their properties in detail and, taken together, the two results represent major advances in studies of antimatter.

Trapped in a cusp

The ASACUSA researchers, however, used an alternative technique for creating antihydrogen. Led by Yasunori Yamazaki of the RIKEN laboratory in Japan, they created their antiatom beams by combining antiprotons with positrons in a “cusp trap”.

The trap comprises 17 successive ring-shaped electrodes and two magnetic coils, which are wired to create magnetic fields in opposite directions (see figure). A cloud of positrons from a radioactive source is first sent into the trap, where it is held as a plasma. A cloud of antiprotons – created in a nearby accelerator – is then fired into the plasma to create the antihydrogen atoms.

Charged particles remain stuck in the trap, while neutral antihydrogen atoms are able to move further along the apparatus to a “field ionization trap”. At this point, antihydrogen atoms in highly excited Rydberg states, in which the positron lies very far from the antiproton, are ionized and their antiprotons are trapped.

Detecting pions

The trapped antiprotons are then released and quickly annihilate upon contact with the walls of the trap. Each annihilation event creates pions, which are easily spotted by a bank of detectors surrounding the trap. By comparing the number of antiprotons injected into the trap with the number of annihilations detected, the team estimated that about 7% of antiprotons combine to form antihydrogen.

The team is now trying to improve the way in which antihydrogen is extracted from the trap before passing it through a microwave cavity in which hyperfine transitions between atomic energy states should occur. Precise measurements of these transitions, which have not yet been carried out, could be used to study a fundamental quantum transformation known as the charge–parity–time (CPT) operation.

When applied to a physical system, a CPT transformation converts every particle to its antiparticle, reflects each spatial co-ordinate, and reverses time. Although there is currently no experimental evidence that CPT symmetry is violated, a violation could show up as a slight difference in the frequency of hyperfine transitions in hydrogen and antihydrogen atoms. The discovery of such a violation could also help physicists understand why there is much more matter than antimatter in the universe.

The work is reported in Phys. Rev. Lett. 105 243401.

Cosmic dark ages brought to light

It may look like a picnic table, but this humble piece of kit located in the Australian outback has revealed how the universe emerged from a period called the cosmic dark ages nearly 13 billion years ago. This was the time when the first stars and galaxies began to assert their influence on cosmic evolution, by bombarding the intergalactic medium with ultraviolet light until it had become a warm, ionized plasma.

The EDGES apparatus, a $30,000 radio antenna designed to detect the prominent 21 cm spectral signature of hydrogen, has allowed Judd Bowman of Arizona State University and Alan Rogers of the MIT Haystack Observatory in Massachusetts to place the first direct limit on how fast this cosmological phase transition took place.

“This result marks an observational milestone,” says Rennan Barkana of Tel Aviv University, who was not involved with the work. “Knowing when and how re-ionization happened teaches us a great deal about whatever population of objects existed 300–800 million years after the Big Bang, which could still turn out to be something more exotic than stars, such as massive black holes or even decaying dark-matter particles.”

Plunged into darkness

The universe became neutral and transparent about 380,000 years after the Big Bang, when it had cooled sufficiently for protons and electrons to combine into hydrogen atoms. Photons, which before recombination were unable to travel far in the plasma, flew out in all directions and have since been stretched – or redshifted – by the expansion of space to constitute the cosmic microwave background (CMB). There were no luminous objects at that time because matter had yet to clump together under gravity, but 100 million years later the first stars and galaxies began to punch holes in the darkness and their light set about reionizing the intergalactic medium – mostly hydrogen.

Since at these early times (or redshifts) hydrogen’s 21 cm spectral line is stretched to roughly metre wavelengths, EDGES was designed to pick up signals with frequencies between 100 and 200 MHz – in the VHF band. During three months of continuous observation, Bowman and Rogers looked for the highly redshifted 21 cm signal, which is expected to be extremely weak because it comes from transitions between the lowest energy states of hydrogen, in which the spins of the electron and proton are either aligned or anti-aligned.
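As a rough cross-check of the numbers quoted above (an illustrative sketch, not part of the EDGES analysis), the rest-frame 21 cm line sits at about 1420.4 MHz and is observed at a frequency reduced by a factor of 1 + z:

```python
# Rest-frame frequency of hydrogen's 21 cm hyperfine line, in MHz
REST_FREQ_MHZ = 1420.4

def observed_freq_mhz(z):
    """Frequency at which the 21 cm line emitted at redshift z is observed."""
    return REST_FREQ_MHZ / (1.0 + z)

# Reionization-era redshifts (roughly z = 6.5-13) fall in the VHF band,
# which is why EDGES targets 100-200 MHz
for z in (6.5, 9.0, 13.0):
    print(f"z = {z:4.1f}: 21 cm line observed at {observed_freq_mhz(z):5.1f} MHz")
```

At z = 6.5 the line appears near 189 MHz and at z = 13 near 101 MHz, spanning the band quoted in the article.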

Having painstakingly subtracted the much stronger low-frequency signals coming from the magnetized plasma in the Milky Way and nearby galaxies, not to mention interference from terrestrial TV and radio transmissions, the team concluded that reionization did not end abruptly but took place over a redshift interval larger than 0.06. That translates to a period of at least 5 million years.
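The conversion from a redshift interval to a time interval can be sketched with a matter-dominated approximation, in which the age of the universe scales as (1 + z)^(-3/2); the reionization redshift (taken here as z ≈ 8) and the cosmological parameters below are illustrative assumptions, not values from the paper:

```python
import math

H0_KM_S_MPC = 70.0      # Hubble constant (assumed)
OMEGA_M = 0.27          # matter density parameter (assumed)
KM_PER_MPC = 3.086e19   # kilometres in a megaparsec
SEC_PER_YR = 3.156e7    # seconds in a year

# Hubble constant converted to units of 1/yr
H0_PER_YR = H0_KM_S_MPC / KM_PER_MPC * SEC_PER_YR

def age_yr(z):
    """Approximate age of a matter-dominated universe at redshift z, in years."""
    return (2.0 / 3.0) / (H0_PER_YR * math.sqrt(OMEGA_M)) * (1.0 + z) ** -1.5

z, dz = 8.0, 0.06       # assumed reionization redshift; EDGES lower limit on the interval
dt = age_yr(z) - age_yr(z + dz)
print(f"A redshift interval of {dz} at z = {z} spans ~{dt / 1e6:.0f} million years")
```

With these assumptions the interval comes out at roughly 6–7 million years, consistent with the “at least 5 million years” quoted above.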

“This was expected in all theoretical models that forecast reionization by the first stars, but is now an observational statement,” says theorist Avi Loeb of Harvard University. “Most of the 21 cm community is involved in the construction of interferometers (whose cost is larger by two orders of magnitude) that image the sky as well, so it is gratifying to see that an innovative experiment succeeded in getting the first constraints.”

21 cm cosmology

Until now there have been two types of observational data on reionization: anisotropies in the cosmic microwave background as measured by WMAP and observations of the light from distant quasars. These data indicate that the universe was already fully ionized by redshift 6.5 (corresponding to about 1 billion years after the Big Bang), but Barkana says that studying hydrogen’s 21 cm emission is very promising for going back to the period 200 million years after the Big Bang. “21 cm cosmology is opening up a new observational cosmic window,” said Barkana. “At least one, perhaps two more cosmic transitions should have left spectral fossils.”

“So far EDGES has only set limits and not made a detection,” says Rogers. “We hope that we might detect the early hydrogen and put some real numbers on the red-shift, but it remains to be seen how far EDGES can go.”

The research is published in Nature 468 796.

Penrose strikes back in war of the cosmos

Do these concentric circles offer a glimpse of before the Big Bang?

By James Dacey

Roger Penrose is defending his claim that our universe did not begin with the Big Bang but instead continually cycles through a series of lifetimes, or “aeons”. He makes his latest case in a paper submitted to the arXiv preprint server yesterday.

The recent excitement began in November when Penrose, a University of Oxford physicist, made the sensational claim that he had glimpsed a signal originating from before the Big Bang. Working with Vahe Gurzadyan of the Yerevan Physics Institute in Armenia, Penrose came to this conclusion after analysing maps from the Wilkinson Microwave Anisotropy Probe (WMAP). These maps reveal the cosmic microwave background, believed to have been created about 380,000 years after the Big Bang and offering clues to the conditions at that time.

After scrutinizing over seven years’ worth of WMAP data, as well as data from the BOOMERanG balloon experiment in Antarctica, Penrose and Gurzadyan say they have identified a series of concentric circles within the data. These circles show regions in the microwave sky in which the range of the radiation’s temperature is markedly smaller than elsewhere. According to the researchers, the patterns correspond to gravitational waves formed by the collision of black holes in the aeon that preceded our own, and they published these claims in a paper submitted to arXiv.

The paper was quickly picked up by physicsworld.com and, in no time at all, the story was causing a big stir in the blogosphere. But not everybody agrees with Penrose’s outlandish claims and to date at least two other groups have published their own independent analyses of the same CMB data, and both have taken issue with the original conclusions. The first is a paper by Moss et al and the second is written by Wehus et al – both published on arXiv.

The disagreements are subtle – and I won’t pretend I fully understand them – but in essence both groups are saying that we should not be surprised by the circles, which can easily be explained by anisotropies in the CMB. The patterns, claims Wehus’ group, are fully consistent with the accepted inflationary model of cosmology: that the universe started from a point of infinite density, expanded extremely rapidly for a tiny fraction of a second, and has continued to expand much more slowly ever since.

Not ones to sit and sulk, Penrose and Gurzadyan have already hit back with a follow-up paper, published yesterday on arXiv. In the short article, they agree that the presence of circles in the CMB does not contradict the standard model of cosmology. However, the existence of “concentric families” of circles, they argue, cannot be explained as a purely random effect, given the pure Gaussian nature of their original analysis. “It is, however, a clear prediction of conformal cyclic cosmology,” they write.

The battle, it seems, is set to go on.

Mission overshoots Venus orbit

The Japanese Space Agency, JAXA, has launched an investigation into a failed attempt to put a craft into orbit around Venus. Akatsuki, which blasted off in May, was due to study Venus’s violent atmosphere and confirm if there are active volcanoes on its surface. However, yesterday the craft failed to reduce its speed sufficiently to fall into the gravitational pull of the planet. Astronomers will now have to wait over six years for the mission to return to Venus before attempting to put it into orbit again.

Akatsuki, which means “dawn” in Japanese, marks the first time JAXA has attempted to put a probe into orbit around a planet. In 1998 JAXA launched Nozomi to orbit Mars but it suffered electrical failures before it could be put into orbit. “An orbit insertion is perhaps the most difficult and most critical operation of any in a planetary mission,” says Håkan Svedhem, project scientist for the European Space Agency’s Venus Express, which was launched in 2005 and has been orbiting the planet since 2006. “It is in no way a simple activity.”

Sultry neighbour

Known as the Earth’s “sister planet” due to its similar mass and size, Venus orbits closer to the Earth than any other planet in the solar system. However, Venus’s climate is very different from ours. Its atmosphere contains mostly carbon dioxide and is a sultry 460 °C, with the high temperatures believed to be due to a “runaway greenhouse effect”. And, while Venus rotates at about 6.5 km per hour, its atmosphere rotates at a violent 360 km per hour.

Costing about $220m, Akatsuki would have operated in orbit around Venus for the next four and a half years with five onboard cameras. Two of these instruments, which operate in the near-infrared regime, would study the planet’s surface and the motion of clouds, as well as the size of particles that make up the clouds. A long-wave infrared camera, meanwhile, would measure the temperature at the “cloud top”, which lies about 65 km above the planet’s surface.

The final two cameras are an ultraviolet imager to measure sulphur dioxide at the cloud top and a lightning and airglow camera, which would capture lightning flashes that have never been observed on the planet before.

According to Svedhem there is not a great deal the spacecraft can now do as it awaits its return to Venus. “All the people who have worked on the project must be very disappointed now,” says Svedhem, adding that it is also disappointing for the Venus Express team because of the loss of the joint observations the two spacecraft would have carried out. It is not yet known when the investigation will release its conclusions.

Quark–gluon mania returns to CERN

After a successful run of eight months – including recent collisions that could shed light on the primordial universe – the last beam of 2010 was extracted yesterday evening from CERN’s Large Hadron Collider (LHC). Since 7 November the LHC has been colliding lead ions at energies of around 0.5 PeV, 80 times higher than those generated by the earlier proton collisions. This creates a subatomic blob so hot and dense that nuclear matter dissolves into its constituent quarks and gluons – a state of matter that dominated the universe shortly after the Big Bang.

The search for such a quark–gluon plasma (QGP) first hit the headlines in 2000 when “fixed target” heavy-ion experiments at CERN found evidence for a new state of matter – apparently scooping Brookhaven National Laboratory in the US, where a dedicated QGP machine called the Relativistic Heavy Ion Collider (RHIC) was just starting up. But then in 2005 RHIC announced that its quark–gluon gloop behaved not like a gas, as expected, but like a liquid with zero viscosity. Earlier this year RHIC physicists confirmed that the primary participants in the flow are indeed quarks and gluons, not hadrons.

With the LHC now having created conditions 14 times as energetic as those at RHIC, quark–gluon mania returns to Europe. After a few days of running, the LHC’s dedicated heavy-ion experiment ALICE found evidence for a hot, dense state that still flows like a fluid despite the higher temperatures (arXiv: 1011.3914v1), and revealed a marked increase in the number of particles created in the collisions (arXiv: 1011.3916v2). Taken together, says CERN, these results rule out some theories about how the primordial universe behaved.

Evolution of the infant universe

Theorist Thomas Schaefer of North Carolina State University says that linking the LHC collisions with the evolution of the infant universe is not straightforward, however. “We verified the basic picture (the quark–gluon plasma really exists), we learned very interesting things about it such as its perfect fluidity, but neither of these things directly affects the dynamics of the early universe such as big bang nucleosynthesis,” he told physicsworld.com.

Brookhaven’s Steve Vigdor says the ALICE results certainly suggest liquid behaviour with low viscosity, but he thinks it premature to claim that this “confirms” the near-perfect liquid picture. “The question at this point is what the magnitude of the shear viscosity of the matter is – how close is it to the conjectured lower quantum limit?” he said. “It’s taken much analysis of RHIC data to start to pin down this question quantitatively; LHC is not there yet.”

The two general purpose detectors at the LHC – ATLAS and CMS – have also brought new perspective on the quark–gluon state. At a seminar held at CERN last Thursday, representatives from ATLAS and CMS reported direct observations of “jet quenching” – when a collimated stream of hadrons created almost instantaneously from the decays of quarks and gluons is swamped as it traverses a dense quark–gluon state. “Jet quenching presumably teaches us about how energetic quarks and gluons interact in the QGP, and should help elucidate the quark–gluon correlations that lead to low-viscosity liquid flow,” says Vigdor.

ATLAS published its result last Monday (arXiv: 1011.6182) and CMS is expected to follow suit once the full heavy-ion dataset has been analysed. At CERN’s heavy-ion seminar last week, ALICE spokesman Jürgen Schukraft stated that the search for the QGP is essentially over, its discovery is well under way, and measuring its properties is just beginning.

QGP in proton collisions?

The LHC has added a further twist in the tale of the QGP. In July, when the machine was perfecting its main job of firing protons at each other, researchers on the CMS detector found that some of the debris from certain collisions containing a large number of particles was correlated – pairs of particles were flying out at angles which suggested they influenced each other at the point of the proton–proton collision. Members of the 3000-strong CMS collaboration claimed in September (arXiv: 1009.4122v1) that they had observed a “potentially new and interesting effect” reminiscent of similar features seen by experiments at RHIC that were interpreted as being due to the presence of hot and dense matter.

“Reminiscent is not a quantifiable scientific measure,” says RHIC physicist Michael Tannenbaum. “In contrast to the great physics that is the discovery of jet quenching at the LHC, which is very strong evidence that a QGP is also produced, claims for the discovery of a new effect in the CMS two-particle correlations are uninformed and inadequately researched.” In a comment about the CMS result (arXiv: 1010.0964v1), Tannenbaum lists several checks that must be made before evidence for a QGP in proton collisions can be claimed, for example concerning features of the QGP observed in gold–gold collisions at RHIC.


Tannenbaum’s co-author, Richard Weiner of the University of Marburg in Germany, says the CMS observation is in line with both RHIC’s findings and previous observations in particle physics made at CERN in the late 1970s, which were interpreted by some as possible evidence for QGP. “Most people are now convinced that a transition from the nuclear matter state to QGP has been seen in relativistic heavy-ion reactions,” he said. “At RHIC this effect has been interpreted in hydrodynamical terms, and the same interpretation applies to proton–proton reactions.” Weiner says that even for many heavy-ion people this is a surprise, yet on the other hand he says many particle physicists have difficulties in accepting the interpretation due in part to the ever-increasing specialization in high-energy physics.

CMS member Pierre Van Mechelen of the University of Antwerp says that the CMS collaboration just reported what it measured, and cautions that this is a new energy domain. “Many models seem to be able to explain the correlations qualitatively, but the real challenge is to reproduce the result seen by CMS in exact numbers,” he said.

Subtle business

Interpreting LHC collisions is a subtle business, though. Only a tiny part of a proton is quark; almost all of its mass comes from a sea of fluctuating gluons whose lifetimes at LHC energies are dilated to the point where proton collisions can be viewed as a clash of randomly configured gluonic “hot spots”. Experimentalists have to piece together the underlying physics of quark and gluon interactions, as described by quantum chromodynamics (QCD), from a bucket full of jets and junk produced almost immediately in the mêlée. “The CMS ridge is not predicted by any of the current, widely used QCD Monte Carlo models for proton–proton scattering,” says CMS deputy physics co-ordinator Guenther Dissertori. “For the non-heavy-ion people (the large majority), it was a complete surprise.”

Brookhaven’s Raju Venugopalan, who thinks a framework of high-energy QCD called a colour glass condensate can explain key features of both the CMS result and the ridge events seen in gold–gold collisions at RHIC (arXiv: 1009.5295v2), says that understanding the detailed structure of the CMS ridge provides a unique snapshot into the microscopic structure of visible matter. “Clearly, this novel phenomenon has triggered a rash of papers and will continue to do so, but few people have so far considered the systematics of the effect.”

Whether the CMS two-particle correlations are due to a colour glass condensate, a quark–gluon plasma, a rotating (arXiv: 1009.5229v3) or an exploding (arXiv: 1009.4635v1) deconfined quark–gluon state, or perhaps gluodynamic quantum entanglement (arXiv: 1010.4463v1), physicists ultimately have to be able to account for it if they are to disentangle signal from background when searching for new particles. The LHC beam may be down until February, when protons will be reinstated, but the task of interpreting its first year of data is far from over.

Not just a pretty face

Calabi–Yau spaces have become the poster child for string theory. These baroquely curved shapes adorn book jackets and conference notices galore, and you can even buy a plum-size model of one from the website of sculptor Bathsheba Grossman. But for most people – even most physicists, I suspect – such higher-dimensional origami is simply an object of awe. In part, this is because books and magazine articles about string theory for the general public gloss over what these shapes actually are. As a result, Calabi–Yau spaces have become something to gawk at, but never really grasp – rather like string theory in general. It has fallen to eminent mathematician Shing-Tung Yau himself, working with science writer Steve Nadis, to try to demystify them. Although their book The Shape of Inner Space is very uneven, it takes a big step towards making Calabi–Yaus more than so much eye candy.

Like the frame of a harp, the shape of space determines how the microscopic strings of string theory vibrate – giving rise, so the theory goes, to all the elementary particles and forces of nature. In the standard formulation of the theory, in order for strings to reproduce the observed properties of those particles and forces, space must have six unseen dimensions in addition to the three visible ones, and those dimensions must be finite in extent and crumpled into a Calabi–Yau shape. Basically, Calabi–Yaus are finite spaces with a high degree of symmetry. Not only do they satisfy the equations of general relativity for a vacuum, they also have a subtle geometric symmetry that is related to the physics concept of supersymmetry.
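In slightly more technical language (a standard characterization from the mathematics literature, not a passage from the book), the two conditions just mentioned can be stated compactly:

```latex
% Calabi's conjecture, proved by Yau: a compact K\"ahler manifold $M$
% with vanishing first Chern class, $c_1(M) = 0$, admits a K\"ahler
% metric whose Ricci tensor vanishes,
\[
  R_{i\bar{\jmath}} = 0 ,
\]
% i.e. the vacuum Einstein equations are satisfied. For the complex
% three-folds used in string theory, the holonomy group reduces to
% $SU(3)$, which is the geometric counterpart of supersymmetry.
```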

Such spaces were first conjectured by Eugenio Calabi in 1953, and Yau’s 1976 proof that they can exist was one of the major achievements that won him a share of both the 1982 Fields Medal and this year’s Wolf Prize in Mathematics. But the book covers the full range of Yau’s work and the broader research programmes to which it contributed, such as the positive-energy conjecture in general relativity (confirming that space–time is stable), the Poincaré conjecture and mirror symmetry. Not bad for someone who once failed a primary-school mathematics entrance exam and briefly dropped out of school altogether! And although the book never forgets to remind the reader how (justifiably) proud Yau is of his achievements, it also gives credit to others, including some (such as Alexander Givental of the University of California, Berkeley) with whom he has had priority disputes. It also acknowledges that luck and good timing play a big role in individual accomplishment.

Along the way, the book introduces readers to concepts seldom seen in material intended for the general public, such as parallel transport, holonomy, geometric flow and the ways that topology constrains geometry and vice versa. Above all, it is fascinating to see the story of string theory told from a mathematician’s point of view rather than that of a physicist. The book argues that the theory mended a schism that had opened up between the two disciplines. However, it is frank about the problems that string theorists have needed to confront, such as the question of which – if any – Calabi–Yau shapes are dynamically stable (the “moduli stabilization” problem). In an interesting take on the notorious controversy over the “landscape” of string theory, which calls into question whether string theory is capable even in principle of making firm predictions, Yau suggests that the problem may lie in an incomplete understanding of Calabi–Yaus. In fact, for all the attention paid to these shapes, they are not the only form that the extra dimensions of string theory might take.

The book is worth picking up just for the chapter that fleshes out the definition of Calabi–Yau spaces. But be warned: it does not make for light reading, and although it may be intended for a general readership, I doubt it will achieve this aim. Large chunks of it seem written for other mathematicians and physicists, for whom the explanations will come as a relief from the deadly theorem-proof-theorem-proof style of most mathematics texts. Beyond these circles, most readers’ heads are going to hurt.

One particular failing is that the book does not clarify when a word can be understood colloquially and when it has a specific technical meaning. Non-expert readers are not going to realize that “the Gauss curvature of a sphere” is not the same as “the total Gauss curvature of a sphere” and will therefore wonder what they have missed when told that one has a value of 1 (for a unit sphere) and the other 4π. What is more, two chapters pass between the definition of Gauss curvature and further discussion of the concept. This is far too long an interval for most readers to remember what it means.
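To spell out the distinction (a standard fact of differential geometry, not an excerpt from the book): the Gauss curvature of a sphere of radius r is the pointwise quantity 1/r², while the total Gauss curvature is its integral over the surface, fixed at 4π by the Gauss–Bonnet theorem:

```latex
\[
  K = \frac{1}{r^2}
  \quad\Longrightarrow\quad
  \int_{S^2} K \, dA = \frac{1}{r^2} \cdot 4\pi r^2 = 4\pi = 2\pi\,\chi(S^2),
\]
% so a unit sphere ($r = 1$) has Gauss curvature 1 everywhere,
% but total Gauss curvature $4\pi$.
```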

The book gives technical definitions that may be important to maintain rigour but will come across to non-specialist readers as unmotivated and impossible to parse. Even the introductory chapters take it for granted that readers already have an interest in geometry. Ironically, the later chapters are easier to absorb than the early ones, as though the authors realized in the process of writing the book that they had to tone it down. A chapter about vacuum decay seems positively fluffy compared with the dense earlier material.

It is also a little surprising that, after introducing Calabi–Yaus, the book does not venture to describe what they actually look like, and what properties are lost when a 6D structure is projected onto a 2D page. Yau writes that he likes geometry in part because it is the most visualizable of mathematical disciplines. Yet when it comes to visualizing his most famous achievement, all we get is a brief description three chapters later.

Like science itself, science writing is an incremental process. Sometimes it takes a succession of books and articles before scientists and writers hit upon the clearest explanations. By bravely attempting to explain areas of mathematics that no-one has ever tried to relate to the public before, The Shape of Inner Space takes a huge step forward in this process. Even readers who never manage to finish the book will benefit indirectly, as it will undoubtedly influence how string theory is taught and written about in the future. Until then, however, Calabi–Yaus will still be little more than pretty pictures.

Inside Penrose’s universe

A sketch of how the Big Bang is the crossover between one aeon and the next

In the years following his hugely successful book The Emperor’s New Mind, in which he argued that there is a quantum aspect to human consciousness, Roger Penrose has acquired a reputation among his peers for writing beautiful books that advocate controversial ideas. His latest book, Cycles of Time, is no exception. Its central idea is that one universe follows another in eternal recurrence on the grandest scale, and Penrose himself alerts the reader in his prologue (and epilogue) that this is a little crazy. However, the amazing facts revealed in quantum mechanics have turned “crazy” into an almost positive rather than pejorative epithet in physics, so perhaps the “health warning” should be taken with a pinch of salt.

Anyone who has heard Penrose’s recent lectures will have noted his passionate enthusiasm for his new idea. It came to him in the summer of 2005, when he was “depressing himself” by thinking of the wastes of time that stretch ahead of the universe according to the latest cosmological observations, which suggest an ever-accelerating expansion. He asked “Who will be around then to be bored by this apparent overpowering eventual tedium?” A thought occurred. If, by then, only massless particles are present (the rest having decayed), Penrose reasoned that eternity will pass in a flash since no proper time elapses at all for these voyagers along space-time light-cones. More significantly, it would mean that in the very distant future, the universe must have a particular structure much simpler than it would were massive particles present as well.

This idea, in turn, prompted Penrose to relate his decades-old ideas about the second law of thermodynamics to the geometrical structure of space-time that is implied by the observed accelerated expansion. Here, a quick primer may not go amiss. Einstein’s general theory of relativity is based on the curved, non-Euclidean geometry that Riemann introduced in 1854. This geometry has a part that describes measured angles (known as the “conformal geometry”) and a conceptually distinct part that describes distances. The point to understand about this is that angles, not distances, are what matter in determining the shape of something. One can, for example, imagine shrinking or magnifying the sides of a triangle, making it larger or smaller, while leaving the shape-determining angles unchanged.
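The split between angle and distance information can be stated compactly. In a conformal rescaling, the metric is multiplied by a smooth positive factor, which changes all lengths but cancels out of the expression for an angle (a standard textbook relation, given here for orientation rather than quoted from Penrose's book):

```latex
% Conformal rescaling: all distances change by the position-dependent factor \Omega
\tilde{g}_{ab} = \Omega^2 \, g_{ab}

% An angle is a ratio of inner products, so the factors of \Omega^2 cancel:
\cos\theta = \frac{\tilde{g}_{ab}\,u^a v^b}
                  {\sqrt{\tilde{g}_{cd}\,u^c u^d}\,\sqrt{\tilde{g}_{ef}\,v^e v^f}}
           = \frac{g_{ab}\,u^a v^b}
                  {\sqrt{g_{cd}\,u^c u^d}\,\sqrt{g_{ef}\,v^e v^f}}
```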

From about 1962 Penrose introduced a similar device into general relativity, but with a further twist that allowed him to “blow up” the Big Bang and black-hole singularities, where spatial distances shrink to nothing but matter densities and space-time curvatures become infinitely large. As he explains, his conformal, or angle-preserving, diagrams “bring into our finite comprehension the infinite regions of space and time” and fold out “those regions that are infinite in a different sense, namely the space-time singularities”. These diagrams have become an indispensable tool for relativists and play a key role in Penrose’s book. They can be used to depict the most salient features in space-time – in particular the violent singularities just mentioned, but also regions infinitely far away in space or time.

Above all, Penrose is concerned with the structure of our universe at its birth and in its very distant future. There is much evidence that suggests that its structure is “spacelike” at both extremes. This means that, in their essence, the “ends” of the universe are like our ordinary 3D space at a given instant of time. They may, however, have infinite curvatures – a spacelike singularity – or, alternatively, they may be blandly smooth.

Now we come to the second law, the hugely enigmatic nature of which Penrose has been emphasizing for decades. As he characterizes it, the Big Bang was a spacelike singularity with a conformal geometry that was utterly, improbably smooth compared with what one might reasonably expect. In contrast, the remaining part of the geometry – the one concerned with distances – was highly singular, since all distances shrink to nothing as one goes back in time to the Big Bang. Penrose’s preferred precise mathematical formulation of this state of affairs is down to his collaborator Paul Tod. According to it, one can in imagination extend our space–time beyond the Big Bang in a smooth, conformal manner.

You may still wonder what all this has to do with entropy and the second law of thermodynamics. Well, from the point of view of gravitational physics, the incredibly smooth conformal geometry at the Big Bang corresponds to very high order and extraordinarily low entropy. We and all the other increasingly intricate structures in the universe have been living off that order ever since. But how did this order get there? That was Penrose’s conundrum.

Penrose’s new inspiration came when he realized that the positive cosmological constant that is accelerating the universe’s expansion means that, from the conformal point of view, the universe will be spacelike at its end. Since the Big Bang singularity is also spacelike, it might be possible, provided certain conditions are satisfied, to match conformally the whimpering end of our universe to its explosive beginning. Their shapes could then dovetail, not just mathematically but in physical reality. However, Penrose rejected this “biting its tail” possibility on the grounds that it would introduce paradoxes of the “go back in time to kill your grandfather” variety and instead opted for what he calls a conformal cyclic cosmology. Here, one aeon succeeds another at a spacelike “crossover”, at which both universes’ conformal geometries – but not their scales – match. This process can be repeated eternally.

There are numerous problems to be overcome in this proposal, which involves a radical rethinking of Penrose’s own ideas about the second law. One serious difficulty is that it relies heavily on all particle masses, including that of the electron, becoming exactly zero in the very distant future. Many particle physicists will question that. But the biggest difficulty of all is that even if the shapes of the aeons match, how does the transition from an infinitely large scale before crossover to an infinitely small scale after crossover occur? This is where the argumentation and mathematics get tough.

Penrose effects the crossover with a scalar field, dubbed “phantom” before the crossover because it involves a purely mathematical conformal transformation. This field then becomes physical after crossover, and Penrose tentatively identifies it with the dark matter needed to explain the structure of galaxies and clusters of galaxies. Although the scalar field evolves deterministically, and in that respect is conventional, its transformation “at once” from being a purely mathematical object to a physical one has no parallel elsewhere in physics – unless one likens it to the notorious collapse of the wavefunction in quantum mechanics.

I have to say that the key idea strikes me as not so much crazy as implausible. Nevertheless, I think many people will, like me, be glad to have read this book. Penrose’s prose is wonderfully clear and concise (I admit to envy), and the “intelligent layreaders” who buy his books in such remarkable numbers despite all his equations will be stimulated by his lucid (and largely equation-free) discussion of the second law. Budding relativists will get an excellent introduction to conformal diagrams and much else. And I am sure sceptics will enjoy looking for defects in the arguments.

Let me end provocatively, like Penrose, for I share his fascination with conformal geometry. In one sense, I wish he had been more radical. Despite his great attraction to conformal geometry, Penrose still accords length a real physical role. But in fact we only ever observe angles, never lengths as such. Do we really need them?

  • 2010 Bodley Head £25.00 hb 320pp

A look on the bright side

When did we stop thinking that sunset marked the end of the day? Not as long ago as you might think: I am old enough to remember staying on a family farm where evening light came only from kerosene lamps and a home-made diesel generator that charged a stack of clapped-out car batteries. The generator was too noisy to run at night, and by morning the batteries could manage only a dim red glow. As a city child, I loved the romance of it all – the smell of kerosene still evokes warm memories – but Auntie Mabel no doubt couldn’t wait to get connected to the electrical grid. Proper electric light meant progress, status – and a lot less work.

Author Jane Brox has written several books about how farm life has changed over the centuries, and in Brilliant: The Evolution of Artificial Light she is interested above all in the intimate relationship between light and human possibilities. The result is a book that focuses on the social changes brought about by lighting technology rather than on the technical developments themselves. Short on technical detail and sometimes muddled about the engineering, it is long on human stories related to artificial light.

The plan is chronological, with attention devoted about equally to Europe and the US. We start in the caves of Lascaux, in southwest France, where startling images drawn by prehistoric humans present an immediate problem: how did they see to create them? Answer: limestone saucers found there were probably filled with oil or fat to fuel a luminous flame. This might have been a good point at which to answer another question: why are flames luminous? But Brox is not interested in this kind of thing. She is not a scientist, and the few technical explanations scattered through the book, as well as being distinctly odd in places (confusing motors with generators, for instance), have a slightly bolted-on feel, as if reluctantly inserted at the insistence of an editor.

Brox is good, however, at conveying the extraordinarily long time during which flames were (with the exception of oddities such as fireflies and rotting fish) the only source of light after the Sun had set. And also the fact that for nearly all of this time these flames were provided by burning fat or oil, just as they were in prehistoric times. There were a few footling improvements along the way: Aimé Argand’s hollow wick, invented in the late 18th century, burned fuel more efficiently; smelly whale oil (cue long passage about the heroism and wickedness of whaling) was replaced by relatively clean kerosene. But it was only when liquid fuels gave way to gas that anything approaching the amount of light we expect today could be achieved at an affordable price.

Once gas began to be piped to cities in the early 19th century, things changed quickly. Very soon, after 40,000 years of stasis, we had not one but two new kinds of light producers: electricity as well as gas. Brox documents the emergence of electric lighting dramatically, capturing the excitement of the AC/DC wars between Westinghouse and Edison, the harnessing of Niagara Falls and the expansion of the US grid into rural areas (aided by the federal government in a way that might now seem unacceptably socialist). Later, the US would pay a heavy price for the hasty and chaotic development of its grid, with blackouts starting in the 1960s and continuing to this day.

It is a curious fact that although lighting accounts for only about 10% of electrical demand, failure of the electricity supply is nearly always called a “blackout”. It seems we fear loss of light more than loss of power. Nevertheless, a total, intentional blackout was instituted in Britain in 1939 as the Second World War began, soon to be followed by blackouts of a different kind as the electrical systems of its cities were knocked sideways by bombing. Brox describes the London blitz vividly and sympathetically, though once again her tendency to concentrate on the dramatic and personal at the expense of her advertised subject rather gets the better of her.

Brilliant is an entertaining and thought-provoking book about a relatively neglected subject. It is beautifully written, if occasionally veering a bit close to the poetry of Walt Whitman, with Brox’s “…in the midst of the old quiet, where light had circumference again” faintly echoing lines such as “There in the fragrant pines and the cedars dusk and dim.” The book is not comprehensive: some important areas, such as vehicle lighting, are left out. On the other hand, it does tackle some of the problems thrown up by abundant artificial light: the dimming of stars, the disorientation of birds and, of course, the apparently unstoppable production of carbon dioxide by countless power stations.

Brox takes us up to the present day and even into that betting shop of popular science, the future, where we find clothes made from fabrics that store energy during the day and release it via LEDs after dark. She is on more secure ground discussing compact fluorescent lamps (CFLs) and ongoing consumer resistance to their unfriendly light. Once again, a bit more science would have helped here, contrasting the jagged spectrum of CFLs with the smooth, black-body radiation of a good old-fashioned light bulb.
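The missing contrast is easy to state: a filament at temperature T emits the smooth black-body spectrum given by Planck's law, whereas a CFL's output is concentrated in narrow mercury and phosphor emission lines. Planck's law (standard physics, supplied here rather than drawn from the book) reads:

```latex
% Planck's law: spectral radiance of a black body at temperature T
% h = Planck constant, c = speed of light, k_B = Boltzmann constant
B_\lambda(\lambda, T) = \frac{2hc^2}{\lambda^5}\,
    \frac{1}{\exp\!\left(\frac{hc}{\lambda k_B T}\right) - 1}
```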

While not as scholarly as Carolyn Marvin’s comparable classic When Old Technologies Were New (Oxford University Press, 1990), Brox’s book is equipped with a very full set of notes. And like any really serious book, it provides no pictures. As a result, it is not altogether clear who its audience might be, other than that overworked animal the intelligent layreader (perhaps someone who liked Dava Sobel’s more specialized Longitude). But Brilliant did the trick for me. Despite its irritating technical failings, it provided more than a nostalgic whiff of kerosene: an engrossing account of the engineering, entrepreneurial and political heroism that created today’s dangerously light-addicted world.

A glimpse of the future

As they gaze back in time, their giant telescopes collecting light that has travelled from distant stars and galaxies for millions or even billions of years, astronomers can sometimes seem fixated with the past. Chris Impey is different. Although the University of Arizona astronomer acknowledges that “science mostly answers the question of how things got the way they are”, in his latest book he is chiefly interested in the future. After all, he concedes, if we focus only on the past, “our job is only half done, as every good story needs an ending”.

This is the premise of How It Ends, and a compelling premise it is, too – after all, we are all fundamentally curious about how things end. In addition to its focus on the future, the book is also unusual in its field of study; Impey considers not just the ultimate fate of stars and galaxies, but also that of everything there is, from the smallest earthbound microbes to the entire universe. Hence the scope of the book is vast, criss-crossing disciplines such as biology, biochemistry, geology, physics, astronomy and cosmology.

In the first half of the book, the author examines the origins and future of life on Earth. As author of the popular-science book The Living Cosmos, Impey has long been interested in astrobiology, and in How It Ends he considers the future of all biological life on Earth, from humans and large mammals to bacteria and the ecosystem itself. This part of the book is a treasure-trove of intriguing information, from the relation between lifespan and mass for 1700 different species to a discussion of how biodiversity evolved over the past billion years. He then goes on to consider threats to the ecosystem, both from within (overpopulation, nuclear war, anthropogenic climate change and so on) and without (various hazards from asteroids to supernovae). This part of the book is reminiscent of earlier works such as Martin Rees’s Our Final Century, but the hazards are described in an engaging and upbeat style.

In the second part of the book, Impey moves to a bigger picture: from the future of the Earth to the fate of the Sun; from the future of the Milky Way to the fate of the universe itself. This section is a masterly jaunt through great swathes of modern astrophysics and cosmology. In particular, I suspect the author’s portrayal of the ultimate demise of the Sun and our galaxy will be unforgettable to many readers, as will his description of the mystery of dark energy and the possibility of a “big rip” end to the universe.

In different hands, the topic of this book could have been rather grim, but Impey employs a light-hearted and lucid style that renders it highly engaging. The book is packed with “I didn’t know that” moments, yet one never feels bombarded by the facts. Moreover, while the author notes that “scientists steer towards the boundary between what they know and what they don’t know because that’s where the excitement is”, he keeps the border between established science and speculation admirably clear in all instances. He also never lectures; indeed, there is a slightly irreverent tone throughout that makes the material very accessible.

Admittedly, the book does have some moments of opaqueness. The second half is clearer than the first, in part because its subject matter – astronomy and cosmology – is closer to the author’s expertise and pedagogical background. The narrative in the first part can also be a little disjointed, perhaps because the author tries to discuss too many things. I suspect that many will read this section one or two chapters at a time, rather than straight through.

There are also some isolated instances where a bit more explanation would have been helpful. For example, in considering the existence of extraterrestrial intelligence, Impey employs the famous Drake equation without showing it explicitly. This will render the ensuing discussion somewhat mystifying to those not familiar with the relation, which provides a way of estimating the number of alien civilizations in our galaxy. It is an extremely simple and interesting equation, and one wonders why it was left out.
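For readers left mystified, the Drake equation is indeed simple enough to state in one line (the standard form, given here for reference rather than reproduced from Impey's book):

```latex
% Drake equation: N = expected number of detectable civilizations in the galaxy
% R_* = rate of star formation;         f_p = fraction of stars with planets
% n_e = habitable planets per system;   f_l = fraction on which life arises
% f_i = fraction developing intelligence; f_c = fraction releasing detectable signals
% L   = lifetime over which such signals are released
N = R_{*} \, f_p \, n_e \, f_l \, f_i \, f_c \, L
```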

Elsewhere, in considering man-made threats to Earth, the possibility that CERN’s Large Hadron Collider could create a black hole is dismissed in two short sentences and a footnote. This is almost certainly an accurate representation of most physicists’ views on the issue; however, for a book aimed at a popular audience the argument seems a little terse. Indeed, readers without a physics background may be left with a feeling that something has been omitted.

In general, though, How It Ends is a very enjoyable read. Of course, there are few definitive answers to most of the questions posed, but the book covers a hugely diverse range of topics, with entertaining diversions along the way. A great many fascinating concepts jump off the page, including Milkomeda, the expected merging of the Milky Way with the Andromeda galaxy; the shadow biosphere, a theory that suggests that Earth may be inhabited by life forms so far removed from our present definitions of life as to be unrecognized; transhumanism, a movement advocating the use of technology to turn people into quasi-machines; and terraforming, the complex operation of turning a planet such as Mars into a habitable zone for humans.

All in all, Impey’s book is itself proof of the author’s contention that science comprises a great deal more than a collection of dull obdurate facts, but instead constitutes “a powerful narrative to help us organize and understand the world” – both its beginnings and, ultimately, its endings.

Copyright © 2026 by IOP Publishing Ltd and individual contributors