
Dirac medal honours charm-quark physicists

The charm quark was predicted in 1970 by John Iliopoulos and Luciano Maiani when, with future Nobel laureate Sheldon Glashow, they formulated the now-famous “GIM mechanism” in an attempt to understand the weak interaction. This quark – the fourth predicted to exist – is now known to carry a positive charge two-thirds that of the proton. “The GIM mechanism was a seminal contribution to the developing theory of the electroweak interaction,” David Gross, a member of the Dirac medal selection committee, told physicsworld.com.

Their theory was confirmed in November 1974 with the discovery of the J/Ψ particle – a bound state of a charm quark and a charm antiquark – at both the Brookhaven National Laboratory and the Stanford Linear Accelerator Center in the US. The discovery persuaded many physicists for the first time that quarks really exist.

Maiani says he is extremely honoured to win the medal. “Dirac has been my hero since the beginning of [my] physics studies,” he said. “I will never forget the impression made upon me by the hole theory of the positron, and reading his book – together with [Richard] Feynman’s – is the way I learned quantum mechanics.”

The Dirac Medal is awarded to scientists previously unrecognized by the Nobel prize, Fields medal or Wolf Foundation prize who “have made significant contributions to physics.”

Phoenix blasts off to Mars

The $420 million Phoenix mission, the result of an international collaboration led by the University of Arizona, US, is the first mission in NASA’s Mars Scout programme. It started life in 2003 as an attempt to revive the 2001 Mars Surveyor Lander, which was cancelled after the Mars Climate Orbiter and Mars Polar Lander failed in 1999. “We have worked for four years to get to this point, so we are all very excited,” said project manager Barry Goldstein at NASA’s Jet Propulsion Laboratory.

Phoenix will land using descent engines on a site in the northern hemisphere at 68.35° north latitude and 233.0° east longitude. Although such engines have not been used successfully since 1976 – NASA has recently favoured airbag systems – they enable the craft to carry the weight of seven different instruments, some of which had been mothballed from the cancelled Mars Surveyor Lander.

First, Phoenix will use a 2.35-metre-long robotic arm to dig into the surface and reach the icy layer residing a few centimetres beneath. Mounted on the end of the arm is a visible-light camera, which will provide high-resolution colour images of the soil and ice.

Samples delivered to the lander by the arm will be heated by a “thermal and evolved gas analyzer” to see how much water vapour, carbon dioxide and volatile organic compounds are contained in the soil. The samples will also be distributed to optical and atomic-force microscopes, which will examine mineral grains, while electrochemistry cells will be able to measure properties such as acidity or alkalinity, and a conductivity probe can check thermal and electrical properties.

Other instruments will look at the wider environment. During descent, a camera will record the geography around the landing site, and after Phoenix has landed a stereo camera will observe the local terrain in 3D. Finally, meteorological equipment will monitor changes in water abundance, dust, temperature and other variables.

“[Saturday’s] launch is the first step in the long journey to the surface of Mars,” said principal investigator Peter Smith of the University of Arizona. “We certainly are excited about launching, but we are still concerned about our actual landing, the most difficult step of this mission.”

NASA is still reviewing proposals for the second Mars Scout mission, due to fly in 2011.

Laser flips magnetic bit without any help

Most computers store data on magnetic hard-disk drives, in which the direction – “up” or “down” – of the magnetic moments in a small region of the disk corresponds to a binary bit. Data are read by a magneto-resistance element and written by flipping the moments with a magnetic-field pulse from a tiny coil; in magneto-optical storage, the bit is first heated with a laser to make the moments easier to flip.

The cost and complexity of data storage could be reduced significantly if data could instead be read and written using light alone. While magneto-optical drives already use light to read data from magnetic bits, a technique for writing data using only light had remained elusive.

Now, Theo Rasing and colleagues at Radboud University Nijmegen in the Netherlands along with researchers at Nihon University in Japan have shown that a single laser pulse can flip the magnetization of a 5 µm spot on a thin magnetic film from up to down and vice versa – without the need for an external magnetic field.

The pulse was only 40 fs long (1 fs = 10⁻¹⁵ s) – much shorter than the magnetic-field pulses used in hard drives, which cannot be made much shorter than about 2 ns. Indeed, the 40 fs switching time had been thought to be impossible: in 2004 another team of physicists had established an apparent lower limit of 2 ps on controlled magnetic switching.

The laser pulse was circularly polarized, which means that it acts like an intense but highly localized magnetic field within the material. Switching the pulse between left- and right-circular polarization flips the direction of this effective field.

The researchers did their experiments on an alloy of gadolinium, iron and cobalt, which is used widely in magneto-optic data storage devices. The team is now checking to see if the switching occurs in materials with higher coercivity, which could allow an all-optical memory to achieve the same storage density as a conventional hard drive.

Rasing has patented the write process and is confident that it will be commercialized. However, he admits that anyone wanting to build a hard drive using the technology would face a significant challenge: building a tiny laser that can produce an intense pulse of circularly polarized light and focus it down to a spot 50 nm in diameter – much smaller than the wavelength of the light itself. “But these are solvable problems,” he says.

Blog life: Cocktail party physics

Blogger: Jennifer Ouellette
URL: twistedphysics.typepad.com
First post: February 2006

Who is the blog written by?

Jennifer Ouellette studied English at university and says she “stumbled into writing about physics”, eventually becoming a full-time science writer for popular-science and trade magazines. She has written two books: Black Bodies and Quantum Cats and The Physics of the Buffyverse, the latter of which attempts to use the TV series Buffy the Vampire Slayer to illustrate concepts in physics.

What topics does the blog cover?

Ouellette says the title of the blog comes from a description of her first book, which was a series of short essays mixing concepts in physics with art, literature, history and pop culture. In a more literal sense, the blog contains several recipes for physics-themed cocktails, from the “Laser Beam” to the “Black Hole” (which is so-called because “after one of these, you have already passed the event horizon of inebriation”).

Who is it aimed at?

As an outsider coming into physics herself, Ouellette writes accessibly for non-specialists. But her blog is obviously read by physicists too – she met her fiancé Sean Carroll, a cosmologist at the California Institute of Technology, after they read each other’s blogs. Naturally, the engagement was announced via both Cocktail Party Physics and Carroll’s blog Cosmic Variance.

Why should I read it?

As you would expect of a professional writer, Ouellette’s entries are polished and humorous. Her enthusiasm for science and scientists is obvious; and as she is not confined to one research area, she is free to hop around fields, frequently visiting conferences in search of good stories.

How often is it updated?

Entries are less frequent than on other physics blogs – only one or two per week – but they are also much longer, reading more like fully fledged magazine columns than the usual rattled-off blog posting.

Can you give me a sample quote?

“I must confess to finding it easier to write about applications of physics rather than the basic science. But when I started covering the quantum dots area, I learned some useful things about the ‘electrons and holes’ effect that is critical not just to quantum dots, but also lasers and other semiconductor physics. This is not an easy thing for a layperson to visualize, although physicists toss those terms around like high-school slang.”

Experts

Expertise, argue Cardiff University sociologists Harry Collins and Robert Evans, is “the pressing intellectual problem of the age”.

Can this really be true? Surely everyone knows what an expert is: an authority to whom a layperson can comfortably defer for advice. Scientists, who are usually regarded as experts par excellence, could be forgiven for not seeing expertise as a problem, for they are routinely involved in dispensing and using expertise. It seems odd to claim that expertise might be more of an intellectual problem than, say, creating a unified theory of gravity, decoding the genome or understanding the early universe.

But expertise – what it is and what role it plays – is surprisingly difficult to describe when looked at carefully. The difficulties appear both inside science and out, but they are revealed most dramatically in the use of scientific expertise in law and politics. At a trial, each side digs up its own expert witnesses, all of whom say that they represent the scientific community and that the experts on the other side are untrustworthy hired guns – intellectual mercenaries if you like – with the jury left to guess which side to believe. In politics, controversies such as the existence of global warming and the safety of nuclear power generate conflicts over who selects experts, whose advice is tainted by ideology, and what the scope and limits of testimony should be.

Recently the subject of expertise has attracted much scholarly attention. A book entitled Rethinking Expertise, by Collins and Evans, is due out this autumn (University of Chicago Press), while their perspectives, along with others, are included in an anthology entitled The Philosophy of Expertise, a book co-edited by Evan Selinger and me (Columbia University Press). And Collins, who has edited a forthcoming issue of the journal Studies in History and Philosophy of Science on the subject, is hosting a workshop in Cardiff this month to discuss the role of expertise in everything from interdisciplinary physics projects to the implementation of technical advice about how to prevent dolphins from being snared in fishing nets.

A periodic table of expertise

Collins grew interested in expertise during his 30-year sociological study of the hunt for gravitational waves (Physics World December 2004 pp10–11, print edition only). One day while he was having lunch with some gravitational-wave physicists, Collins noticed that, although he had never studied physics beyond A-level, he was engaged in a seamless conversation with them. “They are never going to give me a job,” he told me recently, “but from the conversation it would have been hard to spot who was the physicist and who the outsider.”

The experience motivated him to draw a distinction between contributory expertise, possessed by active practitioners of a field, and interactional expertise, whereby someone can speak knowledgeably about a subject without being able to contribute new ideas to it. “It’s more than talking the talk but less than walking the walk,” he told me. “It’s like ‘walking the talk’.”

Collins first noticed interactional expertise in his own field, where sociologists often acquire it when they study how people in other fields behave. But he found that interactional expertise is also widespread inside science. It is essential, for instance, in projects where people have to interact across disciplinary borders, and where managers and peer-reviewers must make decisions in areas in which they are not trained. “There would be no science without interactional expertise,” Collins says. “It is impossible for every expert to possess every technical skill they need to work in a big collaboration.”

The concept proved so valuable that Collins and Evans set out to develop a systematic theory of forms of expertise, and compiled what they ambitiously named a “periodic table of expertises” – an attempt “to classify all the kinds of expertise that might be brought to bear on a technological problem”. It includes about a dozen types of expertise that range from categories such as language speaking that everyone must have to live in a society, to more specific, higher-level and domain-restricted categories such as contributory and interactional expertise.

But the pair’s most ambitious and controversial aim is ultimately to help facilitate the resolution of public controversies with a scientific dimension. When politicians try to resolve such controversies, two simple and tempting choices present themselves: let the public decide; or take the matter out of the public’s hands and assign it to specialists. “The first choice risks technological paralysis,” Collins and Evans write in their article in the expertise anthology, while “the second invites popular opposition”. They hope that their analysis of expertise can help define better the kinds of people who should be allowed to participate in decision making about the technical aspects of controversies.

The critical point

Would an improved understanding of expertise be sufficient to resolve debates about global warming and nuclear power? No, for technical aspects are generally not what drives such controversies: perception and prejudice, self-interest and utopian visions are vitally important in framing them.

Collins and Evans know this. They mean only to lay the groundwork for better institutional tools to handle controversies. At first glance their position appears drearily commonsensical: experts are likely to make better technical judgments than the rest of us; and governments should defer technical issues to experts even though experts are sometimes wrong. Still, some sociologists have severely criticized Collins and Evans for this view, which they see as undemocratic, elitist and a throwback to the days when the word of experts was unquestioned.

But the importance of starting on the right foot for solving the controversies that surround issues like global warming and nuclear power is what leads Collins and Evans to claim that expertise is “the pressing intellectual problem of the age”.

• The Studies of Expertise and Experience workshop is on 16–18 August in Cardiff, UK: www.cf.ac.uk/socsi/expertise

Fears over factoids

Did you know that when the Large Hadron Collider (LHC) comes online at CERN next spring, it could end up creating mini black holes that destroy the Earth? This is not something from a Dan Brown novel, but from a TV documentary broadcast as part of the BBC’s Horizon series in the UK on 1 May – a programme that has been running for 40 years and is supposedly the flagship of TV science in the country. Although the documentary itself was fairly measured, the producers began the programme with the black-hole claim and used it in their publicity for the show.

Physicists who recall superb Horizon documentaries of the past – for example, on the discovery of the W and Z bosons – will have been disappointed that such a marvellous project as the LHC should have been sensationalized in this way. It was disheartening that the programme makers felt the need to rehash these unnecessary concerns over black holes being produced in particle accelerators, which physicists had already dismissed before the Relativistic Heavy Ion Collider (RHIC) came online at the Brookhaven National Laboratory in 2000 (Physics World July 2000 pp19–20, print edition only).

Meanwhile, another Horizon documentary, broadcast on 10 April, claimed that one reason for sending humans to the Moon is so that we can mine it for helium-3 as a fuel for fusion power back on Earth. The need to bring helium-3 back from the Moon has even been briefly referred to in Physics World (May 2007 pp12–13, print edition only) and, more worryingly, has been presented to US congressional committees, including the Science and Technology Committee of the House of Representatives in 2004.

As a particle physicist, I am of course interested in the LHC; and as the chair of a working group set up by the British National Space Centre to look into the future of UK space science – including the possibility of humans returning to the Moon – I am also intrigued by the helium-3 story. Both of the claims bother me and, on investigation, each is revealed as an example of what I call “factoid science” – myths of dubious provenance that propagate, become received wisdom and could even influence policy. So what is the reality and what can physicists do to correct such misinformation?

Strangelet statistics

The story of the LHC as an Armageddon machine would be laughable were it not so serious. Aficionados of Dan Brown – whose novel Angels and Demons was set partly at CERN – might believe that the Geneva lab produces antimatter capable of making weapons of mass destruction. But I did not expect to find similarly outlandish statements used to promote Horizon. As the programme’s website puts it: “Some scientists argue that during a 10-year spell of operation there is a 1 in 50 million chance that experiments like the LHC could cause a catastrophe of epic proportions.” The site then invites the public to take part in a poll on whether the LHC should be turned on or not, based on this “probability”.

While the LHC will create the most energetic collisions ever seen on Earth, cosmic rays at these and even higher energies have been bombarding our and other planets for billions of years without mishap. When I asked the producers of Horizon where they had obtained the 1-in-50-million statistic, I was told it had been taken from a “reliable source”: Our Final Century by Cambridge University cosmologist Martin Rees. But when I read his book, it became clear that the programme’s research had sadly been incomplete. On page 124, Rees discusses a paper published in 1999 by CERN theorists Arnon Dar, Alvaro de Rújula and Ulrich Heinz that uses the fact that the Earth and the cosmos have survived for several billion years to estimate the probability of colliders producing hypothetical particles called “strangelets” that might destroy our planet (1999 Phys. Lett. B 470 142).

Rees fairly describes their conclusions as follows: “If the experiment were run for 10 years, the risk of catastrophe was no more than 1 in 50 million.” In other words, the chance of disaster is at most 1 in 50 million – an upper bound inferred from the fact that no disaster has yet occurred – which is rather different from saying, as Horizon does, that there is a “1 in 50 million” probability of a catastrophe happening from the moment the LHC switches on.
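One standard way to see why a survival argument can only ever deliver an upper bound is the statisticians’ “rule of three” – used here purely as an illustrative sketch of my own choosing, not the actual calculation of Dar and colleagues:

```latex
% Rule of three (illustrative): if a catastrophe has failed to occur
% in N independent natural "trials" -- say, aeons of cosmic-ray
% collisions -- then at 95% confidence the per-trial probability p obeys
\[
  (1-p)^{N} \ge 0.05
  \quad\Longrightarrow\quad
  p \le \frac{-\ln 0.05}{N} \approx \frac{3}{N}.
\]
% Survival data bound the risk from above; they never tell us that
% the risk actually equals 3/N -- or 1 in 50 million.
```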

Moreover, when Dar and colleagues wrote their 1999 paper, a committee of eminent physicists appointed by the Brookhaven lab was also investigating if RHIC could produce strangelets (arXiv:hep-ph/9910333v3). That study used not just information from cosmology but also data from collisions between heavy ions (albeit at lower energies than RHIC would obtain) to show that the chances of catastrophe are no more than one part in 10¹⁹.

Furthermore, these figures refer specifically to strangelets being produced at RHIC, as Rees makes clear, and have nothing to do with the question of whether we should risk creating black holes. Indeed, why does Horizon talk about black holes at all? The only reason can be that a theory does exist that posits that mini black holes could be produced in a collider. But if one mentions this theory, then one must include the whole of it, which clearly states that mini black holes pose no hazard whatsoever because they do not grow but evaporate and die.

As if any more evidence was needed that colliders are safe, CERN also set up an “LHC safety-study group” to see if its new collider could create black holes or strangelets. It concluded – in an official CERN report published in 2003 (CERN-2003-001) – that there is “no basis for any conceivable threat” of either eventuality, which is as near as science can get to saying zero. Unfortunately, the Horizon programme made no mention of these serious and time-consuming enquiries even though CERN’s press office gave the programme’s researchers a copy of the lab’s 2003 report. Instead, the public has been led to believe that scientists are prepared to embark on experiments that could spell the end of the planet.

Helium errors

Let me now turn to the helium-3 factoid. At most fusion experiments, such as the Joint European Torus (JET) in the UK, a fuel of deuterium and tritium nuclei is converted in a tokamak into helium-4 and a neutron, releasing energy in the process. No helium-3 is involved, so where does the myth come from? Enter “helium-3 fusion” into Google and you will find numerous websites pointing out that the neutron produced in deuterium–tritium fusion makes the walls of the tokamak radioactive, but that fusion could be “clean” if only we reacted deuterium with helium-3 to produce helium-4 and a proton.

Given that the amount of helium-3 available on Earth is trifling, it has been proposed that we should go to the Moon to mine the isotope, which is produced in the Sun and might be blown onto the lunar surface via the solar wind. Apart from not even knowing for certain if there is any helium-3 on the Moon, there are two main problems with this idea – one obvious and one intriguingly subtle. The first problem is that, in a tokamak, deuterium reacts up to 100 times more slowly with helium-3 than it does with tritium. This is because fusion has to overcome the electrical repulsion between the protons in the fuel, which is much higher for deuterium–helium-3 reactions (the nuclei have one and two protons, respectively) than it is for deuterium–tritium reactions (one proton each).

Clearly, deuterium–helium-3 is a poor fusion process, but the irony is much greater as I shall now reveal. A tokamak is not like a particle accelerator where counter-rotating beams of deuterium and helium-3 collide and fuse. Instead, all of the nuclei in the fuel mingle together, which means that two deuterium nuclei can rapidly fuse to give a tritium nucleus and a proton. The tritium can now fuse with the deuterium – again much faster than the deuterium can with helium-3 – to yield helium-4 and a neutron.
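To make the chain of reactions explicit, they can be written out with their energy yields – the figures below are standard textbook values rather than numbers from the Horizon programme or the websites in question:

```latex
% The fusion reactions discussed above (D = deuterium, T = tritium),
% with standard energy releases:
\begin{align*}
  \mathrm{D} + \mathrm{T} &\;\to\; {}^{4}\mathrm{He} + n + 17.6\,\mathrm{MeV}
    && \text{(JET-style fuel; fast, but the neutron activates the walls)} \\
  \mathrm{D} + {}^{3}\mathrm{He} &\;\to\; {}^{4}\mathrm{He} + p + 18.3\,\mathrm{MeV}
    && \text{(the ``clean'' reaction; up to 100 times slower)} \\
  \mathrm{D} + \mathrm{D} &\;\to\; \mathrm{T} + p + 4.0\,\mathrm{MeV}
    && \text{(breeds tritium in the mixed fuel, restoring D--T fusion)}
\end{align*}
```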

So by bringing helium-3 from the Moon, all we will end up doing is creating a deuterium–tritium fusion machine, which is the very thing the helium aficionados wanted to avoid! Undeterred, some of these people even suggest that two helium-3 nuclei could be made to fuse with each other to produce helium-4, two protons and energy. Unfortunately, this reaction occurs even more slowly than deuterium–helium-3 fusion and the fuel would have to be heated to impractically high temperatures that would be beyond the reach of a tokamak. And as not even the upcoming International Thermonuclear Experimental Reactor (ITER) will be able to generate electricity from the latter reaction, the lunar-helium-3 story – like the LHC as an Armageddon machine – is, to my mind, moonshine.

Rising pressure

Does any of this matter beyond raising the blood pressure of some physicists? All publicity is good publicity, some might say. But I believe we should all be concerned. The LHC factoid has now been repeated in the New Yorker and in various reviews of the Horizon documentary. Even some non-physics colleagues are asking me to explain what it is all about. If Horizon claims to be the flagship TV science series on which the public rely to form their opinions, I would hope that its researchers do their research, and that the editors then take due account of it.

The factoids about mining the Moon for fusion fuel and about the LHC as an Armageddon machine make for a cautionary tale. A decade from now it is possible that committees of well-informed scientists and rather less-well-informed politicians, with public opinion weighing on their minds, will be deciding on our involvement in mega-projects such as the next huge accelerator, human space exploration, or even a post-ITER commercial fusion plant.

Decision making driven by public opinion that is influenced by factoids already has a dire history in the bio-medical arena: the controversy over whether to give children a combined immunization against measles, mumps and rubella (MMR) being the most recent example. My advice is that if you see an error in the media, speak out, write to the editors and try to get corrections made. It is an opportunity to get good science in the news.

Islamic science

Most physicists, particularly those in western nations, probably do not give religion a great deal of thought. A minority of physicists do, however, have firmly held religious beliefs and think long and hard about reconciling those beliefs with their scientific knowledge, as our report on a recent meeting on “God and physics” in Cambridge makes clear (p10, print edition only). There is much to admire in their deep thinking, which has been recognized by the fact that physicists have won the Templeton prize for progress in religion six times in the last eight years.

But in Muslim nations, religion plays a far bigger role in everyday life than it does in the West. Indeed, today Islam is actually holding back scientific progress by placing too great an emphasis on studying and interpreting the pages of the Koran, as the leading Iranian physicist Reza Mansouri points out (see “A way forward for Islamic science”). Those students in his country who do study science at university tend to learn a very narrow curriculum by rote, rather than being encouraged to think for themselves. Low investment in science – even in oil-rich Gulf states – and restrictions on freedom of expression compound the problem.

It was not always thus. Muslim scholars made huge contributions in areas like astronomy, optics and mathematics between the 8th and the 13th centuries, with Islam encouraging rigorous intellectual enquiry. Why science in the Islamic world fell from grace is a topic of considerable debate among historians, with the advances made in Renaissance Europe certainly playing a part in halting progress.

But whatever the reasons, the key for Muslim nations now is to rebuild their scientific strengths through increased public funding – no mean feat when their governments fail to see the merits of such investment – and by encouraging links between scientists in those countries and in the West. Placing a greater focus on a few, world-class labs rather than spreading money thinly around will help too. These solutions are essentially no different to what is needed in other parts of the developing world. But given the great untapped potential in the 1.3 billion or so people who live in the Islamic world, that rebuilding – long though it may take – is a worthwhile task.

Once a physicist: Sergi Jordà


How did you first become interested in physics?

As a child I was always inventing weird artefacts or constructing houses out of balsa wood with the idea of becoming an architect. Then as a teenager I had a kind of ideological conflict, thinking that the nice architectural projects that were worth working on were mostly for the rich. Somehow this brought me into physics, although at that time, the subject didn’t mean much more to me than bricks slipping on inclined planes.

Where did you study physics and how much did you enjoy it?

I studied at the University of Barcelona in the early 1980s. The sad truth is that by concentrating on passing the exams, I didn’t have much time to enjoy the deeper concepts. Near the end of my studies I was tutored by a disciple of the Belgian chemistry Nobel laureate Ilya Prigogine and I started to really appreciate nonlinear thermodynamics, and complex and chaotic systems.

How and when did you become interested in computer music?

While studying physics, I also played the saxophone – a sort of free jazz – and in my third year at university I discovered that I loved computer programming. Then, in my fourth year, I came across a snapshot of an audio spectrogram on the back cover of an album by Laurie Anderson. I had studied the Fourier transform in an abstract way (no-one ever talked about sound during my five years of physics), so I could intuitively understand what the image was about: sound, and therefore music, could be “understood” by computers. I soon imagined that computers could be used for making music – even free jazz. And believing that computers were far better suited than me to repetitive and unexciting tasks, I gave up practising scales on the saxophone and started programming.

How did your career develop after you graduated?

By then I was already sure that I wanted to become a computer musician, although I didn’t know how to proceed. I first survived as a computer programmer, then started teaching programming in private schools. Meanwhile, I studied anything I could find on computer music and made my own music programs that I started using in performances. In the 1990s I worked on multimedia projects and computer art, before returning to academia to teach in the computer-science faculty and do research into real-time musical interaction between humans and computers.

What are you working on at the moment?

For the last four years I have been working on the reactable, together with Günter Geiger, Martin Kaltenbrunner and Marcos Alonso from the Music Technology Group at my university. The reactable is an electronic musical instrument conceived for collaborative computer-music performance and improvisation. It is based on a circular table around which several musicians share control of the instrument by rotating, moving and caressing physical artefacts on its luminous surface.

What are some of your career highlights?

In the 1990s I had a successful collaboration with Catalan theatre group la Fura dels Baus, which gave birth to FMOL, a software program for online musical collaboration that can be considered as the precursor of the reactable. But the reactable itself is by far my most successful creation and also the most accomplished. It is the fruit of 20 years of work in the field; and the fact that an artist such as Björk used it extensively for her last world tour is enormously gratifying.

How has your background in physics helped you in your career?

Somehow it made me feel confident about the potential of human knowledge and understanding. It gave me the illusion that anything, with the possible exception of humans themselves, can be understood – no matter how complex it seems or how long it may take.

Physicists at Aldermaston

The Atomic Weapons Establishment (AWE) is the home of the UK’s nuclear deterrent and is responsible for the entire lifecycle of the country’s warheads from research and design through assembly to in-service support and, finally, decommissioning and disposal. AWE also plays a vital role in national security and international monitoring of the Comprehensive Test Ban Treaty. Its core mission is to build and maintain the warheads for the submarine-launched Trident ballistic-missile system that forms the UK’s sole nuclear deterrent. It is also required to maintain the capability to design a warhead to replace Trident, should it ever be required.

In order to perform all these tasks, AWE carries out world-class science in some of the most challenging fields, including explosive detonation, hydrodynamics, high-strain-rate deformation behaviour, radiation physics and computer modelling. The AWE site at Aldermaston in Berkshire includes all the facilities needed to carry out this science – extremely fast supercomputers, areas for explosive trials and experimental facilities for high-energy-density physics. It employs researchers in not just nuclear physics, but all branches of the subject: from atomic and condensed-matter physics to astrophysics and quantum physics.

AWE is currently investing in its building and facilities in order to support Trident safely and reliably for the next 20 years. But behind all these facilities are the people. AWE currently employs 4300 staff and 1500 contractors across its sites in Aldermaston and nearby Burghfield, and it prides itself on recruiting only the best people in science, engineering and technology. Maintaining the UK’s nuclear deterrent is not textbook science – everything AWE does is innovative. It carries out experiments on materials under extreme temperatures, strain rates and pressures that are over in the blink of an eye. AWE needs technical experts in a wide variety of physics fields to be able to understand and model the phenomena of interest.

Explosive research

I applied to work at AWE after completing my undergraduate degree in physics at Lancaster University in 1993. I was looking for a career in physics research, and AWE seemed to offer everything I wanted. I was recruited into the hydrodynamics department as a research scientist and spent the first few years carrying out experiments in the explosive facilities researching the detonation of condensed high explosives. During this time AWE sponsored me to carry out a part-time MSc in numerical methods at the Royal Military College at Shrivenham. I also had the opportunity to publish my work and travel to many conferences around the world.

In 2000 my team and I moved to the theoretical-physics area on site, where I started leading a team of scientists looking into models of shear strength in condensed matter. At this time I also embarked on a company-sponsored part-time PhD in non-equilibrium thermodynamics. Within two years I was asked to lead the theoretical material-modelling group, consisting of some 25 staff – quite a leap for a mere scientist! In January this year I moved back to the hydrodynamics division to head up its science group of 50 full-time employees. I have lots of opportunities to develop in both technical and business matters, and I travel regularly to work with AWE’s international counterparts. No two days are ever the same.

Most of the physicists at AWE work within the Directorate for Research and Applied Science. The directorate has about 1100 staff, of whom some 600 are scientists – a mixture of physicists, chemists, materials scientists, computational scientists and mathematicians. There is a roughly even split between theoretical and experimental staff, although we all work closely together to deliver integrated programmes.

Although the science and technology we need is self-contained at AWE, we do have active external collaborations, consultancies and contracts. We work with UK universities to employ summer students and foster graduate and postgraduate research, thus helping to develop the scientists of the future. We also contract work out to industry, thereby helping to invigorate technological advances in the UK as a whole. We have regular audits of our technical work, which allow top academics and experienced workers in industry access to our work, and we take on board their suggestions for future research directions. There is also an ongoing peer-review process with our colleagues in the US weapons laboratories that allows the exchange of data, ideas and staff under the auspices of the 1958 US/UK Mutual Defence Agreement.

Continuing development

Many scientists at AWE say that working in a technical area at Aldermaston is like being back at university but with higher pay and greater job security. AWE encourages its researchers to publish externally, to attend and speak at prestigious international conferences, to write textbooks, to assist the research councils with paper reviews and funding approvals, and to advise and direct UK technical policy. Substantial funding is also allocated to blue-sky work, where it is relevant to the core business.

AWE has a recognized graduate scheme for all recently graduated scientists and engineers that lasts about two years. The focus is on learning more about the wider company and workplace skills. An attractive remuneration package goes along with this. AWE continues to take professional development very seriously beyond the first few years, with on-the-job mentoring by experienced staff, funding for higher education and postdoctoral work, and placements at international facilities. The company vision is to be “internationally recognized for science, engineering and technology”, and as such AWE constantly strives for excellence – mediocrity simply will not do.

Poincaré, Perelman and proof

The public’s view of the mathematician as a reclusive genius toiling away for years at an arcane problem is one that will not go away. The media have enthusiastically seized on this image since the depictions of Andrew Wiles in Simon Singh’s best-seller Fermat’s Last Theorem and John Nash in the film A Beautiful Mind. Last summer in Madrid it surfaced again in the form of Grigori Perelman and the Poincaré conjecture. Not that Perelman was in Madrid – that was exactly the point.

The occasion was the award of the Fields medals, the mathematicians’ equivalent of the Nobel prizes. King Juan Carlos was centre stage, flanked by political and mathematical dignitaries, as an audience of thousands strained to identify the dark-suited young men at the front waiting to receive the four medals. The second name to be announced was that of Perelman, “for his contributions to geometry and his revolutionary insights into the analytical and geometric structure of the Ricci flow”.

“I deeply regret”, the announcer continued, “that Dr Perelman has declined to accept the Fields medal”. Where was Perelman? Presumably he was back in his apartment in St Petersburg, working hard on the next challenging problem.

The events leading up to this point form the subject of this book by Donal O’Shea, a mathematician at Mount Holyoke College, Massachusetts. It would be impossible to describe the step-by-step evolution of Perelman’s result. What might his diary say? “Morning: proved A implies B; afternoon: B implies C; evening: found a counterexample to A.” So instead the author gives a broad history of this area of mathematics, tailored to culminate with the famous conjecture that states that “a simply connected three-dimensional manifold is homeomorphic to the sphere”.

Therein lies the challenge for the author: to explain both the concepts and the history for a general readership in the manner of Singh’s successful book on Wiles and Fermat. While Singh’s task was to explain number theory to the layperson, O’Shea’s is to deal with geometry. This should be easier and less dependent on formulae, but quite quickly it becomes clear that pictures and diagrams have their limitations when two dimensions give way to three. Fortunately, the author’s lively style carries the reader quite successfully through this short book.

Geometry for many readers means the ancient Greeks, and that is where O’Shea starts. The Poincaré conjecture concerns a “three-sphere”, the analogue of a sphere in 4D space. We cannot sit in four dimensions and look down on such an object, but nor could the Greeks see the two-sphere that is the Earth. Nevertheless, they not only knew our planet’s shape but measured its diameter. O’Shea also focuses on Euclid, in particular his clumsy “parallel postulate”. The discovery in the 19th century of non-Euclidean geometry where the postulate fails is a theme of the book: it led mathematicians to rethink what geometry consisted of. By 1850 the existence of different types of geometry was recognized, but they were all homogeneous – all points looked the same.

Then in 1854 came Bernhard Riemann, clearly the author’s hero. His controversial lecture of that year, “On the foundations that underlie geometry”, portrayed a different world: non-linear, non-homogeneous and existing in any number of dimensions. This was the concept of a manifold: a mathematical space that is constructed by “gluing together” local patches that are Euclidean into a more complex global structure. My local space, yours and everybody else’s join together to give a 3D manifold: it might be a three-sphere, it might be more complicated. O’Shea’s subtitle, “In search of the shape of the universe”, suggests that Perelman has solved that problem too, but that is hardly likely to be the case.

Henri Poincaré made his name in celestial mechanics and differential equations, but he was also one of the founders of algebraic topology. Topology concerns the properties of objects that are unchanged by continuous deformation. The wording “simply connected” in the Poincaré conjecture is such a property: it means that a closed curve can be continuously shrunk to a point. One can easily visualize doing this on the surface of a two-sphere like the Earth, but not on a doughnut, where a curve looping round the hole cannot be shrunk. In two dimensions the sphere is the only surface that is simply connected, and Poincaré’s famous conjecture of 1904 asked whether the same was true in three dimensions.
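In the standard notation of topology the conjecture compresses into a single line: the fundamental group π₁(M) records whether every loop in M can be shrunk to a point, and the precise statement also requires M to be closed (compact and without boundary):

```latex
% Poincar\'e conjecture (1904), now Perelman's theorem:
\[
  M^{3} \text{ closed},\quad \pi_{1}(M) = 1
  \quad\Longrightarrow\quad
  M \cong S^{3}.
\]
```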

As the field of topology developed in the 20th century, reputations were made and then broken by attempts to prove this conjecture. Finally it was proved in all higher dimensions, and only Poincaré’s original problem was left; it was clear that there was something special about three dimensions. The prevailing view of 3D topology changed radically in the 1980s when US mathematician Bill Thurston showed that many three-manifolds could be systematically broken up into pieces, each of which had a type of homogeneous geometry. He conjectured that this procedure should apply much more widely and it is this that Perelman actually succeeded in proving. It implies the Poincaré conjecture as a special case (just as Wiles’ proof of Fermat’s theorem was a special case of the Shimura–Taniyama–Weil conjecture).

What technique did Perelman use that evaded the other topologists? Not to use topology! He used instead the “Ricci flow equation” – a geometrical version of the heat equation pioneered by mathematician Richard Hamilton. Given an irregular distribution of temperature in a body, the heat equation describes the temperature at subsequent times. The flow of heat tends to smooth out the initial irregularities. Perelman’s idea was to start with an arbitrary manifold and “follow the flow”, thereby hoping to get something regular and homogeneous like a sphere or a non-Euclidean geometry. But it does not work like that, as Hamilton knew, because the solution blows up in finite time. Perelman showed that when this happens, you can modify the manifold in a controlled way and start the flow again, then repeat. Eventually you have either cut the space up into Thurston’s pieces or arrived at a homogeneous geometry – simple connectivity then gives you a sphere.
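For readers curious to see it, Hamilton’s equation is strikingly compact, and the analogy with the heat equation is plain once it is written down in its standard form:

```latex
% Hamilton's Ricci flow: the metric g_{ij} evolves so that curvature
% "diffuses away", much as temperature does under the heat equation.
\[
  \frac{\partial g_{ij}}{\partial t} = -2\,R_{ij},
\]
% where R_{ij} is the Ricci curvature of g. Positively curved regions
% shrink and negatively curved ones expand -- until the flow blows up
% in finite time and Perelman's controlled surgery is needed.
```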

O’Shea’s book describes well the progress and personalities involved in this long process, and sensibly puts anything vaguely technical into 45 pages of notes at the end. Disappointingly for a subject so geometrical, the illustrations are of poor quality. Maps are shrunk so much as to be illegible and many illustrations are idiosyncratic line drawings. On the other hand, O’Shea includes an intriguing engraving from The Divine Comedy – I had no idea that Dante had described the three-sphere.

The book goes on to describe other events in Madrid last summer, as articles in the press suggested that the proof might be incomplete and that other researchers were claiming credit. (Note the absence of the words “Poincaré conjecture” in the citation – even the Fields medal committee could not decide.) Such disputes in the abstract world of pure mathematics rarely make the news, but journalists were on hand and the leading character was intriguingly absent.

Perelman in fact never published his proof in a peer-reviewed journal, but instead placed his articles on the preprint server arXiv.org. He followed this up by answering questions in detail on a lecture tour of the US, and confirmation of the validity of his proof is now emerging from various research groups around the world.

Proof is a delicate issue and always has been: the venerated logic of Euclid’s Elements had to be adjusted by David Hilbert; Poincaré’s own work was littered with errors; and there was a famous (though soon corrected) gap in Wiles’ proof. But time, like the flow equation, smooths these out. The same is true of accreditation. Who was Euclid anyway? For all we know he may have been a committee. But who cares now? We possess the propositions, lemmas and theorems to appreciate and use. So when the barbarians have come and gone again, we will still have a theorem, not a conjecture, which tells us that a simply connected three-manifold is a sphere.
