
World record pulsed magnetic field


By Tushna Commissariat

Last week saw researchers at the Los Alamos National Laboratory (LANL), US, set a new world record for the strongest magnetic field produced by a “non-destructive” magnet. The scientists achieved a field of 92.5 T (tesla) on 18 August, and then surpassed their own achievement the following day with an impressive 97.4 T field. This meant they beat the previous record that had been held by a team in Germany, who achieved 91.4 T on 22 June 2011.

While the total duration of the pulse was 2.5 seconds, the magnetic field was held to within 1% of the peak field (at both 97.4 T and 92.5 T) for approximately 1 millisecond. The 97.4 T achievement was met with jubilation as four of the main researchers certified the data with their signatures before it was sent to the Guinness Book of World Records.

This advance improves our ability to create non-destructive magnetic fields – higher-power magnets routinely rip themselves to pieces due to the large forces involved – that serve as tools for the fundamental characterization of advanced materials such as graphene or high-temperature superconductors. Such fields also confine electrons to nanometre-scale orbits, revealing the fundamental quantum nature of a material.

LANL spokesperson James Rickman told physicsworld.com that “This is the largest non-destructive magnet ever created on Earth to our knowledge. Destructive magnets routinely generate higher fields for microseconds of duration – many thousands of times shorter. Destructive magnets are reported to have reached 2800 T and require the use of tens of kilos of high explosives.” For perspective, the Earth’s magnetic field is roughly 0.00005 T, a magnet used to move a large amount of metal (like a car in a junkyard) would be about 1 T and a medical MRI scanner would have a magnetic field of up to 3 T.

The researchers generated their field using a combination of two electrical pulsed-power systems. A massive 1.4 GW generator is used to power the outer coils of the magnet system, while a high-performance capacitor bank powers the inner sections of the electromagnets. Both power systems are run in a “pulse” mode of operation. Researchers Yates Coulter and Mike Gordon made the final preparations before the record was successfully achieved.

With this achievement, the Pulsed Field Facility at LANL will routinely provide scientists with magnetic pulses of 95 T, attracting researchers from all over the globe for a chance to use this technology. The team are now aiming for a 100 T field – a goal that groups around the world, including those in Germany, China, France and Japan, are also chasing.

X-rays control disorder in superconductor

It could be possible to make superconductor-based circuits simply by using a beam of X-rays to control the positions of dopant atoms within a suitable material. That is the prospect offered by new research carried out by physicists in Italy and the UK, which shows how tiny regions of a copper-oxide compound can be transformed into a superconductor by exposure to X-rays of a high enough intensity. With further development, the technique could be used to make circuits containing superconducting quantum interference devices (SQUIDs), which could find use in quantum computers.

Last year a team led by Antonio Bianconi of the University of Rome “La Sapienza” used X-rays to probe the structure of lanthanum copper oxide, which is a high-temperature superconductor. Some physicists believe that the copper-oxide compounds known as cuprates owe their superconducting properties to the way in which dopant oxygen ions are distributed in the layers between the copper-oxide planes. Indeed, Bianconi’s group found that when lanthanum copper oxide is superconducting, the oxygen ions display a fractal pattern. In other words, some of the ions arrange themselves into stripes, and while there are many such small ordered regions there are far fewer larger ones – a power-law distribution that is the hallmark of fractals and which is also seen, for example, in protein structures within living cells.

Order from disorder

In the latest work Bianconi and colleagues used X-rays to study the time-evolution of lanthanum copper oxide. Working at the ELETTRA synchrotron in Trieste, the team first heated up a sample of the material to about 50 °C to create disorder in the oxygen ions. Then they brought the sample back to room temperature and exposed it to an X-ray beam from the facility while using a CCD detector to record the X-rays reflected from the surface of the sample.

When the intensity of the beam was relatively low, the ions remained disordered. More precisely, the team found many small regions of order but no larger ones, which meant that the material was not superconducting. But once the intensity crossed a certain threshold the researchers discovered that larger regions of order began to emerge and, as a result, the material became superconducting. They also found that the time taken to reach this superconducting state depended on the intensity of the beam. “The superconductivity is like a plant,” says Bianconi. “It needs a certain minimum intensity of light and then it grows over time.”

Carving out wires

The researchers then narrowed down the X-ray beam so that it was just a tenth of a millimetre across and found that the fractal-like regions occurred only where the beam struck the sample. In other words, they say, a directional, narrow X-ray beam could be used to carve out superconducting wires and components within an otherwise non-superconducting sample of lanthanum copper oxide. And they say that this process could be repeated many times over on a single piece of the material, just as a compact disc can be written to multiple times. Whereas the surface of a CD is wiped clean and then written to using laser beams, in this case the material surface was restored to its disordered state using a flash of warm air, although Bianconi says that lasers could also be used.

According to Bianconi, this technique could be used to make circuits containing SQUIDs, which might form the basis of qubits inside a quantum computer. He believes this would be easier to carry out than the lithography currently used to manufacture SQUIDs because it would require no photosensitive mask and no chemicals to wash away that mask. “You shine the X-rays directly onto the active material and then move the X-rays around like you would a pen,” he says.

Bianconi says that his group hopes to make SQUIDs in this way using the facilities of the London Centre for Nanotechnology, run jointly by University College London and Imperial College. He also believes it should be possible to extend the principle beyond superconductors, to use X-rays to etch out electronic circuits more generally. Further into the future, he even envisages computers using the technique to modify their own circuitry in order to solve ever more complex problems. “Using more complex materials than silicon gives you this greater flexibility,” he says.

Climate scientist cleared of research misconduct

Pennsylvania State University (Penn State) climate scientist Michael Mann has been cleared of research misconduct following an inquiry by the Office of the Inspector General (OIG) at the National Science Foundation (NSF). The OIG agreed with the conclusions of a previous investigation by the university last year that “cleared [Mann] of any wrongdoing” and has now closed the case.

The charges against Mann stem from information in e-mail messages that were allegedly hacked from a University of East Anglia server and released late in 2009. Sceptics of human-influenced climate change charged that the “Climategate” documents indicated falsification and destruction of data, misuse of privileged information, and serious deviation from accepted research practices by Mann and other climatologists. Mann is best known for the widely accepted “hockey-stick graph” showing the recent surge in temperatures caused by climate change.

In July 2010 a panel assigned by Penn State cleared Mann of research misconduct and, as a matter of routine, sent a copy of its report to the OIG, which then requested additional information from the university and Mann to “review the report for fairness and accuracy”. Each US federal agency has an OIG that provides independent oversight of the agency’s programmes and operations, and Mann says the interview with the Inspector General’s office covered a range of issues related to “false allegations that had been made by climate-change deniers regarding the stolen e-mails.”

On 15 August the OIG released its findings, which exonerated Mann and agreed with the conclusion reached by the university panel. “Lacking any evidence of research misconduct, as defined under the NSF Research Misconduct Regulation, we are closing the investigation with no further action,” the OIG report says.

“I guess this is about the seventh investigation now that has confirmed that there is absolutely no evidence of impropriety either for me or any of my climate scientist colleagues,” Mann told physicsworld.com. “It should be the final nail in the coffin given the unimpeachability of the NSF Office of the Inspector General. But unfortunately, many of our detractors will simply conclude that the imagined conspiracy runs wider and deeper.”

Narrow questions

The decision, however, will likely not end more inquiries into the internal workings of the climate-change community. David Schnare, director of the Environmental Law Center at the American Tradition Institute (ATI), says that the NSF report “focused on a very narrow question and does not address a variety of questions about Mr Mann’s activities that remain unexamined not only by NSF but by the other bodies that have looked into these matters”.

The ATI, a conservative group that focuses on environmental issues, has called on the University of Virginia – where Mann worked from 1999 to 2005 – to release documents, including those not covered by the Freedom of Information Act. “The policy debate associated with climate change requires more than a few pronouncements claiming to be based on science,” Schnare adds. “To understand the validity and strength of the science, we need to understand the potential biases of the scientific work.”

On 23 August the University of Virginia released 3800 pages of e-mails, yet Mann takes strong exception to efforts to release more documents. “I hope that the University of Virginia will respect the privacy issues of not just me but the 30-plus other scientists who are involved and fully defend the exemptions that exist in the law to protect scientists from fossil-fuel industry hired guns engaging in fishing expeditions intended to embarrass, smear and malign honest scientists,” he says.

How to board an aircraft in a hurry

By Hamish Johnston

Before the days of the budget airline free-for-all, most aeroplanes were boarded in row-number blocks – with passengers seated at the rear section of the plane going first.

While this method is widely believed to be more efficient than everyone piling on at once (apparently it isn’t), some folks have long suspected that better schemes could be found.

There are two main ways in which passengers can interfere with each other and slow down the boarding process. In “aisle interference”, a passenger who is stowing a bag in an overhead locker prevents people from moving further into the plane. In “seat interference”, seated passengers move into the aisle to allow others to sit down, slowing down boarding.

In addition to block boarding, several other schemes have been proposed to minimize interference. These include “Wilma”, which begins with all window-seat passengers followed by all those seated in the middle and finally the aisle seats. Others, such as the method proposed in 2008 by Fermilab physicist Jason Steffen, take a more prescriptive approach, defining the precise order in which each passenger boards the plane.

In Steffen’s scheme, passengers are boarded back to front, but in such a way that adjacent passengers in the line are seated two rows apart (12A followed by 10A and 8A, for example). This is done to ensure that each person has enough room to stow their bags. Those in window seats are also boarded before middle and aisle seats.
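As a rough illustration of this ordering (a sketch, not Steffen’s actual procedure or code), the short Python routine below generates a boarding sequence in the spirit of the scheme for a 12-row cabin, assuming window seats are labelled A and F, middle seats B and E, and aisle seats C and D:

def steffen_like_order(n_rows=12):
    order = []
    # Windows first, then middles, then aisle seats.
    for pair in (("A", "F"), ("B", "E"), ("C", "D")):
        for parity in (0, 1):      # even-numbered rows from the back, then odd
            for side in pair:      # one side of the aisle, then the other
                order.extend(f"{r}{side}"
                             for r in range(n_rows, 0, -1) if r % 2 == parity)
    return order

print(steffen_like_order()[:6])    # ['12A', '10A', '8A', '6A', '4A', '2A']

Consecutive passengers within each wave are seated two rows apart, so everyone has clear overhead-locker space while stowing their bags.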

Now Steffen, along with the television producer Jon Hotchkiss, has done a series of experiments to try to work out which boarding method is the quickest. The experiment was filmed for a television programme called This vs That and you can watch the trailer above.

The measurements were made using a mock fuselage of a Boeing 757 aircraft in Studio City, California, that is normally used for film production. The “passengers” ranged in age from about 5 to 65 and were given hand luggage to load into lockers. The plane had 12 rows of six seats with one aisle running up the middle.

Using the traditional block boarding method, it took nearly seven minutes for all passengers to take their seats – more than two minutes longer than when the passengers boarded at random. The Wilma technique clocked in at just over four minutes, whereas Steffen’s method was the quickest, taking about three and a half minutes to fill the plane.

However, the most surprising result occurred when the plane was boarded in strict back-to-front order. This started with the passenger in the rear right window seat, followed by the rear left window seat, the rear right middle seat and so on. This one-passenger-at-a-time method took over six minutes to fill the plane – showing that a free-for-all can be more efficient than a highly regimented plan.

You can read a preprint describing the experiment here.

Zigzag nanowire regulates Brownian motion

Physicists in the US have created a magnetic trap that can contain microscopic particles despite their Brownian motion. The trap, which is based on a magnetized, zigzag-shaped nanowire, could help researchers to perform chemical or biological experiments in a microfluidic environment, where fluids are geometrically constrained to a submillimetre scale.

Microfluidics is a nascent field that involves shifting picolitre quantities of liquids through micron-width channels. The ability to perform measurements on tiny quantities is useful to many researchers in chemistry, biology and medicine who have to work with materials that are expensive or difficult to synthesize, such as new drugs. Moreover, several microfluidic systems can be incorporated together, allowing the creation of “lab on a chip” platforms for the study of many chemical processes at once.

A key requirement of microfluidics and nanotechnology in general, however, is the ability to manipulate the path of objects in the 100 nm to 10 µm range, where random, thermally driven movements – so-called Brownian motion – play a big role. Different techniques have been put forward, but each has drawbacks. For instance, optical tweezers can trap particles with the electric field created by a focused laser beam, yet this process can cause local heating. Meanwhile, dielectric tweezers operate by imposing an electric field between electrodes, yet these too can affect the local environment.

Magnetic zigzagging joysticks

Now, Aaron Chen and colleagues at Ohio State University in Columbus, US, have come up with a particle trap that may present a way around these difficulties. The trap consists of a magnetic wire made of iron and cobalt that the researchers pattern in a zigzag shape on a silicon surface. The researchers first apply a strong magnetic field so that the wire’s magnetization points towards or away from each vertex, generating monopole-like fields that act as magnetic traps at the vertices. They then apply weaker magnetic fields, which tune the strength of the trap and thereby change the behaviour of the particles.

The particles Chen and his group used were iron oxide encapsulated in a polymer, with a total radius of 0.28 or 0.6 µm. This composition lent the particles a superparamagnetic character, so that they could be magnetized in the trap’s relatively weak fields without displaying any remanent magnetization themselves. Using a CCD camera, the researchers saw that the particles stayed in the trap to within 100 nm. In other words, the trap could regulate a particle’s Brownian motion without pinning it down entirely.
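To get a feel for what regulating Brownian motion in a trap means, here is a toy simulation (with hypothetical parameters, and not the Ohio State group’s model) of a generic overdamped Brownian particle in a harmonic trap, integrated with a simple Euler–Maruyama scheme:

import numpy as np

kT = 4.1e-21      # thermal energy at room temperature (J)
gamma = 1.0e-8    # drag coefficient (kg/s) - hypothetical value
k = 1.0e-6        # trap stiffness (N/m) - hypothetical value
dt = 1.0e-5       # time step (s)
steps = 100000

rng = np.random.default_rng(0)
x = 0.0
positions = np.empty(steps)
for i in range(steps):
    # deterministic pull back to the trap centre plus a random thermal kick
    x += -(k / gamma) * x * dt + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma))
    positions[i] = x

# Equipartition predicts an rms excursion of sqrt(kT/k): stiffer traps confine more.
print("rms excursion (m):", positions.std(), "expected:", np.sqrt(kT / k))

With these made-up numbers the particle wanders by a few tens of nanometres – the same order of magnitude as the ~100 nm confinement reported – and tuning the trap stiffness changes that excursion, which is the essence of regulating, rather than freezing, the Brownian motion.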

Pros and cons

Stephen Russek, a physicist at the National Institute of Standards and Technology in Colorado, US, calls the work a considerable advance. “In addition to being able to localize and trap a particle at a particular site, Chen et al. have shown they can control its Brownian motion, which is an important step in controlling the reaction dynamics of [any] attached biomolecules,” he says. But, he adds, “The physics is classical and the main breakthrough is a technological one as opposed to [an] elucidation of new physical phenomena. The control of Brownian motion is just one of [several] stochastic fluctuations that need to be controlled to allow precise control of biological processes in vitro or in vivo.”

Lars Egil Helseth, an expert in magnetic traps at the University of Bergen in Norway, agrees that there are still drawbacks to the Ohio State researchers’ technique. “Their microstructure is fixed, and cannot be moved around at will to capture beads as one could do with optical traps and movable magnetic domain walls,” he says, which is a problem for the many applications that require movable traps. He also points out that the authors use a micron-sized structure, which prohibits confinement and control in very small volumes. “Although parts of the [experiment] are nice, I believe other solutions are required to meet the demands of biophysics, for example,” he adds.

Still, Chen and colleagues now plan to expand their technique by moving beyond control of just individual particles. “Investigating how multiple particles interact within a trap like this will be our next main goal,” he says.

The research was published in Physical Review Letters.

Now you see it, now you don't

By Michael Banks

Blink and it’s gone.

No, it’s not the latest in the search for the Higgs boson at the Large Hadron Collider near Geneva, but instead a slight difference in the mass between neutrinos and their antimatter counterparts, antineutrinos.

Neutrinos come in three “flavours” – electron, muon and tau – that change or “oscillate” from one to another as they travel through space.

It is generally thought that neutrinos and antineutrinos should have the same mass. Last year, however, results from the MINOS experiment at Fermilab, near Chicago, showed a 40% difference in the mass-related oscillation parameter measured for muon neutrinos and muon antineutrinos (which convert into tau neutrinos and tau antineutrinos, respectively) as they travelled from the accelerator to the MINOS detector some 735 km away in the Soudan mine, Minnesota.

The results were presented with a “confidence level” of around 90–95%, which in statistical terms is approximately “two sigma” (usually a “discovery” requires five sigma).
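As a quick reminder of how those numbers relate (an illustration, not part of the MINOS analysis), the two-sided Gaussian confidence level corresponding to n sigma is given by the error function:

from math import erf, sqrt

def confidence(n_sigma):
    """Two-sided Gaussian probability contained within +/- n_sigma."""
    return erf(n_sigma / sqrt(2))

print(f"2 sigma: {confidence(2):.4f}")   # ~0.9545, i.e. roughly 95%
print(f"5 sigma: {confidence(5):.7f}")   # ~0.9999994, the usual discovery threshold

So a two sigma result sits right at the 95% confidence level quoted above, while five sigma corresponds to better than 99.9999% confidence.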

Although the two sigma significance was small, the result was backed up three days later by a three sigma effect at another Fermilab experiment – MiniBooNE. That team saw a difference when muon neutrinos oscillate into electron neutrinos compared with the related process for muon antineutrinos.

Physicists noted that if the result turned out to be true it would not come as a surprise, but as an “overwhelming shock”.

But now it seems as though those fears have at least been partially allayed. After gathering twice as much data, researchers at MINOS announced yesterday at the Lepton Photon 2011 meeting in Mumbai, India, that they found the difference had dropped from 40% to 16%.

So it seems that there is still a disparity, but more data will be needed before we can be sure whether there is any mass difference between neutrinos and antineutrinos.

Milky Way stars born from intergalactic gas

Astronomers using the Hubble Space Telescope may have solved the mystery of how the Milky Way continues to spawn new stars at a consistent rate despite its diminishing gas reserves. They say the galaxy is being supplied by clouds of gas originating from outside of the Milky Way, and that these findings could help refine our knowledge of galaxy evolution.

The Milky Way currently converts 0.6–1.45 solar masses’ worth of gas into new stars every year, depleting the galaxy’s gas reserves. Yet the star-formation rate doesn’t seem to be dropping, which suggests that something must be replenishing the supply. Ionized high-velocity clouds (iHVCs) – conglomerations of gas that move too fast for their motion to be explained by the rotating disc of the galaxy – are a proposed source. One suggestion is that they could be remnants from the formation of the 30+ galaxies in the Local Group, drawn in by the Milky Way’s gravity. If they do originate beyond the galactic disc, and then fall onto it, they could be bolstering the amount of gas in the galaxy.

It is not clear how large these clouds are but they were first found when astronomers noted that some of the light from distant quasars was being absorbed by objects near the edge of the galaxy. However, the huge distances involved meant it was unclear whether the iHVCs were directly associated with the Milky Way’s halo – the diffuse sphere that surrounds the galaxy – or existed beyond it. In order to solve this problem, Nicholas Lehner and Jay Christopher Howk, of the University of Notre Dame, US, adapted the quasar technique.

“Instead of observing quasars, we observed stars within the Milky Way’s halo,” Lehner told physicsworld.com. The pair observed 28 halo stars with the Hubble Space Telescope, 14 of which showed absorption lines in their spectra similar to those seen in the original quasar observations – revealing the presence of an iHVC. The distances to these stars are well known, and because the absorbing cloud must lie in front of the star, each distance sets an upper limit on how far away the iHVC can be – placing the gas firmly within the galaxy’s halo.

Why doesn’t the galaxy run out of gas?

Knowing the distance is the first piece in a jigsaw. “The mass of the iHVC is proportional to the distance squared,” explains Lehner. Lehner and Howk then used the original quasar observations to model the likely distribution of these iHVCs across the sky. Knowing where they are, how much gas they contain and how fast they are moving allowed the pair to estimate how much gas should fall on the Milky Way per year. “We predict that between 0.8 and 1.4 solar masses of material from iHVCs falls onto the Milky Way annually,” says Lehner. Compare that to the 0.6–1.45 solar masses consumed in star formation every year, and there is a potential answer to why the galaxy doesn’t run out of gas: it is commandeering it from intergalactic space.
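The distance enters squared because the absorption measurement fixes the cloud’s column density, so the same cloud placed twice as far away must be four times more massive. A minimal sketch of that scaling (purely illustrative, with hypothetical numbers rather than Lehner and Howk’s actual values):

def cloud_mass(reference_mass, reference_distance_kpc, adopted_distance_kpc):
    # A fixed column density means the inferred mass scales as distance squared.
    return reference_mass * (adopted_distance_kpc / reference_distance_kpc) ** 2

m_ref, d_ref = 1.0e5, 10.0   # hypothetical: 1e5 solar masses if the cloud sits at 10 kpc
for d_kpc in (5.0, 10.0, 20.0):
    print(d_kpc, "kpc ->", cloud_mass(m_ref, d_ref, d_kpc), "solar masses")

Pinning down the distances with halo stars therefore pins down the masses, and with them the total infall rate.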

“They found the magic number,” Filippo Fraternali, who researches iHVCs at the University of Bologna, Italy, told physicsworld.com. “It is not 0.1 or 100 solar masses in-filling each year, but very close to one – this is an important result,” he adds. However, the result isn’t water-tight. “It is the right approach, but there are big assumptions that may change that final number quite a bit,” Fraternali explains. He would like to see a much bigger sample than the original 28. “It is hard to get good statistics on a sample of that size,” he says.

Now that they are confirmed halo objects, Lehner is planning just that. “We’re going to go back to the quasar database, which is much larger than our stellar sample,” he explains. “If we want to understand how galaxies evolve then we need to understand how this gas gets in and out of them,” he adds.

This research was published in Science.

The future of the James Webb Space Telescope


By Tushna Commissariat

With scientists and politicians debating the fate of the James Webb Space Telescope, this week we are asking your thoughts on the subject. Following cost over-runs, a US congressional committee has moved to cancel the $6.8bn James Webb Space Telescope, poised to be the successor to the Hubble Space Telescope. Should funding be reinstated or should NASA focus on other projects?

Do feel free to explain your position by posting a comment on the poll. You can vote on this poll on our Facebook page.

Results just in

Last week we asked you what you thought was the main benefit of studying physics at university. Options ranged from “Learning how the physical world works” to “Developing strong problem-solving skills”, “The wide range of career opportunities it can bring” and “The chance to play with some cool hi-tech equipment”. Among the 229 people who voted, “Learning how the physical world works” was the most popular with 137 votes, followed by “Developing strong problem-solving skills” with 63 votes. Interestingly, our “other” option, which encouraged people to let us know what reasons they had for studying physics that did not fall into any of the above categories, received 16 votes, with a few people pointing out that they chose physics to have a career in military research labs or, in one case, to “make something go boom”.

For some, like reader Craig Levin, it was more about the type of course one signs up for. “If you’re taking ‘Physics for Poets’, you get a whizz-bang tour of the universe and how it works. If you’re taking a lab course, you’re getting a more in-depth picture and picking up some problem-solving skills and a little bit of project management,” he sagely pointed out. A tongue-in-cheek comment from reader Russell Davies read: “I abandoned physics at age 18, because it appeared fraught with problems of limited career opportunities, limited income potential and a distinct lack of babes.”

Thank you for taking part in the poll and for taking the time to provide your thoughts. And don’t forget to vote in this week’s poll on our Facebook page.

Peering into the past of near-Earth asteroids

Eight years after the Hayabusa mission was launched to retrieve material from a near-Earth asteroid, the first scientific analyses of these rocks have been released. Among the findings published by an international team of scientists is the discovery that common chondrites found on Earth derive from the same origins as this stony “S-type” asteroid. The implication is that meteorites scattered across the globe may contain preserved information about the early solar system.

The Hayabusa mission, launched by the Japanese space agency JAXA in 2003, was designed to land on the Itokawa asteroid – a 500 m-long body that lies around 300 million kilometres from Earth – and return a sample to Earth by 2007. But after the probe landed on the surface of Itokawa to collect samples in November 2005, technical glitches – including being hit by a solar flare – delayed its return by an additional three years. The sample-return capsule eventually landed in the Woomera Prohibited Area in South Australia in June 2010, and the largely intact capsule, containing some 1500 extraterrestrial grains, was sent back to Japan for examination.

Parent rock

Analysis of the 40 particles that the researchers have examined so far confirms the belief that the most common meteorites found here on Earth, known as ordinary chondrites, are born from S-type asteroids like Itokawa. The main goal of the Hayabusa mission was to demonstrate that S-type asteroids are primitive solar system bodies that “record” the long history of early solar system events.

Tomoki Nakamura from Tohoku University, Japan, and colleagues are the first to analyse the loose surface material, or regolith, that was brought back, using electron microscopes and X-ray diffraction techniques. “In our experiments, we analysed single particles by applying different techniques in order to know different aspects of the [same] particle. We needed to show asteroidal material is identical to chondrite meteorites, because we already knew that the chondrites are most primitive materials in the solar system,” explains Nakamura, who is one of the lead authors of a series of six papers published in the journal Science this week. All six papers are by the same team, with different lead authors looking at various aspects of the research that were conducted with different techniques.

The lead paper states that the regolith samples show signs of impact shocks and a large amount of heating. This suggests that the asteroid underwent a thermal evolution as the interior slowly heated up and reached a peak temperature of 800 °C, before it cooled down again very slowly. Because many of the particles studied had experienced this high temperature, this suggests that the particles on the surface of the asteroid were once at considerable depths within its body. This led the team to conclude that the asteroid was initially a much bigger body that suffered a large impact. “The size reduction occurred during a heavy impact that broke the parent body of 20 km into small pieces; some of which gathered again to form the [current] 0.5 km asteroid Itokawa,” says Nakamura. The researchers now believe that the formation time of Itokawa goes back to the early solar system 4.5 billion years ago.

Spectrum of results

The other papers published by the group cover a host of topics – one looks at the oxygen isotope ratios and minor element abundances in Itokawa particles, while another ties S-type asteroids with chondrite meteorites. Another paper investigates evolution processes of the asteroid’s surface. This led to the finding that the particles on the surface were first formed by fragmentation of larger rocks. In addition, exposure to solar winds has changed the colour of the particles and seismic activity in smooth terrain has reduced their size gradually.

Another study compared the dust with regolith sampled from the Moon, showing that there are chemical differences between lunar dust and the Itokawa samples. The researchers attribute these differences to chemical alteration by space-weathering and meteoroid impacts on the asteroid surface. A final paper looks at the noble gas isotopes helium, neon and argon to map a history of irradiation from solar wind and cosmic rays on the asteroid’s surface. The team found that Itokawa is continuously losing its surface materials at the rate of tens of centimetres per billion years.

Nakamura says that he was rather surprised to see just how similar the asteroid particles were to chondrites. “When I analyse the particles I always feel I am analysing meteorites,” he said. He also said that the results have helped with better understanding of asteroid formation. “The particles recovered from the asteroid Itokawa are very small, mostly less than 0.1 mm. But through this analysis we now have a detailed understanding of the history of asteroid formation. So tiny particles gave us a big result.”

The quantum century

Books that try to explain quantum physics to the general reader have become a well established non-fiction (well, mostly non-fiction) genre. In this recent addition to the field, The Quantum Story: a History in 40 Moments, Jim Baggott plucks from the 20th-century history of quantum physics a set of “moments”, each one being the brief story of a significant discovery, tied to a single individual or a small group. He proceeds, largely in chronological order, from the year 1900 to just beyond 2000. It is an ambitious undertaking, and makes for a literally weighty book. But although Baggott goes at his task with unrelenting enthusiasm, the results are ultimately disappointing.

Baggott’s choice of “moments” is not the problem. His selections are relatively well balanced. Of the 40, a dozen are devoted to the development of quantum physics, another 12 to particle physics, 11 to the “meaning” of quantum physics (too many, in my opinion), and five to quantum cosmology and quantum gravity. The only missing moment, as I see it, is Fermi’s 1934 theory of beta decay, which set the stage for understanding that annihilation and creation of particles is the bedrock of all interactions, not just those involving the electromagnetic force. Apart from this omission, and the overemphasis on complementarity and “reality”, I cannot fault the selection.

More problematic is Baggott’s philosophical bent, which he reveals in the book’s preface. Here, he states that recent experiments “strongly suggest that we can no longer assume that the particle properties we measure necessarily reflect or represent the properties of the particles as they really are”. Baggott chooses to emphasize these words by repeating them on page 356 (except for the qualifying “strongly suggest that”). Indeed, questions about the nature of reality run throughout the book, and Baggott cannot quite free himself from the supposition that if an electron is emitted at A and absorbed at B, it must have existed as a particle at points between A and B. In fact, we cannot even say that it was always the same electron, much less that it had a definable existence or definable locations between A and B. All of which shows that opinion about what is reasonable and what is not reasonable, what is real and what is not real, remains part of what we call physics.

Another problem – although this is more a matter of taste – is the book’s style. If you love breathless prose, this is the book for you. Baggott’s account has physics lurching from crisis to crisis on a collision course with philosophy, and his leading characters burn with emotion as they are consumed by anger, ambition or bitter disappointment. For example, on page 158, I I Rabi is said to have been “incensed” when he famously asked about the muon, “Who ordered that?” Actually, he was joking. Similarly, in its early days, the quark model was not just questioned, it was, says Baggott, “treated with derision”.

Baggott’s end notes run to 29 pages and his bibliography to nearly 150 titles. He has obviously worked long and hard on this project, and many of his discussions reflect a depth of understanding. For instance, in “Moment 11” he explains well that waves in classical physics exhibit a kind of uncertainty. Similarly, his treatment of Bell’s theorem (Moment 31), although not easy reading, is accurate. And his discussion, in an “interlude”, of the famous meeting between Heisenberg and Bohr that took place in German-occupied Copenhagen in 1941 is balanced.

Despite this careful preparation, the book still contains quite a few physics errors, which is perhaps not surprising in a work by a non-physicist. Some are minor, such as a wrong definition of a mole (p17) or an incorrect explanation of the way that polarizing sunglasses work (p321). Others are a bit more substantive, such as the statement on page 28 that in classical theory an electron spiralling down toward a nucleus will lose speed, or the assertion on page 29 that electron energies in the Bohr atom increase in direct proportion to the quantum number n. Neither these nor any of the other errors are significant enough to undermine interest in the book.

A more serious problem, perhaps, is that most of the book’s errors are irrelevant. Even when the physics is correct – which is most of the time – does it matter? The discussions of physics in the book, although earnest, vary from barely comprehensible to completely opaque. Who, even among physicist readers, will understand the discussion of spontaneously broken symmetry groups in the 25th “moment” or mixing angles in the 28th? It matters little whether the physics is wholly accurate or not. The book doesn’t teach any physics. At best, it gives some kind of flavour of what it is all about. For some readers, that may be enough.

Basically, Baggott’s book is about people – from Max Planck, Niels Bohr and Albert Einstein in quantum theory’s early years, to Ed Witten, Lee Smolin and Anton Zeilinger in more recent times. On people, he does a good job (even if, as noted above, the descriptions are a bit breathless in places). If you read this book, you will learn about Pauli’s temper, Gell-Mann’s erudition, Bohr’s paternalism, Heisenberg’s ambition, Schrödinger’s dalliances, Glashow’s zest, Feynman’s playfulness, Hawking’s triumphs, Einstein’s stubbornness, Aspect’s determination, Zeldovich’s White Horse scotch whisky bet and more. But with 410 pages of text, there is a lot of book to wade through for the pleasure of these engaging profiles.
