
New technique nails distance to supermassive black hole

The distance from Earth to the black hole lurking at the centre of a distant galaxy has been determined to unprecedented accuracy by astronomers in Denmark, the UK and Japan. As well as giving us a better estimate of the mass of such black holes, the method could lead to a new cosmic distance scale, giving astronomers accurate and independent measurements of how fast the universe is expanding.

At the centre of most galaxies, including the Milky Way, lies a supermassive black hole that is typically 10⁵–10⁹ times more massive than the Sun. As matter is accelerated into the black hole, lots of radiation is emitted, creating an extremely bright object called an active galactic nucleus (AGN). Such supermassive black holes are of great interest to astronomers because their formation is related to the evolution of their surrounding galaxies.

Flickering lights

Deep within an AGN is a relatively compact accretion disc of material that generates ultraviolet (UV) light as additional matter is accelerated into the black hole. Some of this UV light travels directly to Earth, where it can be detected as a flickering signal. But beyond the accretion disc is a gaseous “broad line region” (BLR) and then a dusty torus – and some of the UV light travels towards the torus, where it stimulates the emission of infrared light in a process called reverberation.

Some of this infrared light will travel to Earth and be detected. So, by measuring the time delay between a flicker of the UV light and the same flicker in the infrared, astronomers can calculate how long it takes light to travel across the BLR. The radius of the BLR can then be calculated by multiplying this time by the speed of light.
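As a back-of-the-envelope version of that calculation (the 30-day lag below is purely illustrative, not a measured value), the radius follows directly from multiplying the delay by the speed of light:

```python
# Illustrative reverberation-mapping arithmetic: radius = c * time delay.
# The 30-day delay is a made-up example, not a value from the paper.
C = 2.998e8                    # speed of light (m/s)
SECONDS_PER_DAY = 86400.0
METRES_PER_PARSEC = 3.086e16

delay_days = 30.0              # assumed UV-to-IR lag
radius_m = C * delay_days * SECONDS_PER_DAY
print(f"BLR radius ~ {radius_m:.2e} m "
      f"= {radius_m / METRES_PER_PARSEC:.3f} pc")
```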

As the intensity of light emitted by such an object increases as the square of this radius, two AGNs with the same BLR radius have the same intrinsic luminosity – so if they differ in apparent brightness, the brighter one must be closer to Earth. This concept led Darach Watson and colleagues at the University of Copenhagen and the University of Queensland to propose a new way of measuring cosmological distances in 2011 (see “Active galactic nuclei measure the universe”). The team measured the BLR radii and brightnesses of about 30 AGNs, but the snag is that the technique can only determine which object is closer, not how far away each is from Earth.
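The step from brightness to relative distance is just the inverse-square law: if two AGNs are assumed to have the same BLR radius, and hence the same intrinsic luminosity, their flux ratio fixes the ratio of their distances. A minimal sketch with hypothetical fluxes:

```python
import math

# Two AGNs assumed to have the same BLR radius, hence the same intrinsic
# luminosity L. Inverse-square law: flux F = L / (4*pi*D**2), so
# D2/D1 = sqrt(F1/F2). Both flux values are illustrative only.
flux_1 = 4.0e-14   # W/m^2, hypothetical
flux_2 = 1.0e-14   # W/m^2, hypothetical

relative_distance = math.sqrt(flux_1 / flux_2)
print(f"AGN 2 is {relative_distance:.1f}x farther away than AGN 1")
```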

Simple trigonometry

To measure absolute – not relative – distances with the method would require an additional observation using a different technique. Now, however, Watson, together with Sebastian Hönig and colleagues at Copenhagen, Southampton and Kyoto Sangyo universities, has managed to measure the absolute distance using a simple relationship familiar to anyone who has studied trigonometry.

The team used the two telescopes of the Keck Observatory in Hawaii to observe the AGN at the centre of NGC 4151 – a galaxy that lies about 63 million light-years (or 19 megaparsecs) from Earth. Light from both telescopes, which are 85 m apart, is fed to an interferometer, which allowed the team to measure the tiny angle between the light arriving from the centre of the AGN and the light arriving from the outer radius of the BLR.

As the radius of the BLR is already known from a reverberation measurement, the distance to the AGN can be calculated by simply dividing the radius by the angle. Watson and colleagues were able to gauge the distance to the AGN to within an uncertainty of about ±2.5 megaparsecs, which is much better than measurements using other techniques.
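Putting the two measurements together is small-angle trigonometry: distance = radius/angle. A sketch with illustrative inputs chosen only to be of roughly the right order of magnitude (they are not the values from the paper):

```python
import math

# Small-angle distance estimate: D = R / theta.
# Both inputs are illustrative, not taken from the Nature paper.
MAS_TO_RAD = math.radians(1.0 / 3.6e6)   # milliarcseconds -> radians

radius_pc = 0.025     # assumed BLR radius from reverberation mapping
angle_mas = 0.27      # assumed angular radius from interferometry

distance_pc = radius_pc / (angle_mas * MAS_TO_RAD)
print(f"Distance ~ {distance_pc / 1e6:.0f} Mpc")   # ~19 Mpc
```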

In principle, once the absolute distance of one AGN is determined in this way, the absolute distances to other AGNs can be worked out simply using their brightness and BLR radii. “I’m really excited about this result because it couples so beautifully to our previous discovery of a way to measure relative distances with active galaxies,” says Watson.

Independent distance scale

The distance to faraway objects is currently determined using a complicated “cosmic distance ladder” that employs several different techniques that apply only over specific distance ranges. The new AGN technique, in contrast, could be used over a wide range of cosmic distances. As Watson explains, this could “avoid all the mess associated with the cosmic distance ladder”, giving astronomers “an entirely separate and totally independent set of cosmic-distance measurement tools using only active galaxies and nothing else”.

This in turn could yield a new and independent measure of the rate at which the universe is expanding, and ultimately an independent estimation of the age of the universe. Knowing the distances to AGNs will also help astronomers gain a better understanding of how these structures and their associated galaxies formed and evolved.

Watson and colleagues have submitted a proposal to do similar measurements on three other AGNs using the Very Large Telescope Interferometer in Chile.

The research is described in Nature.

Laser blast makes pure quantum dots

Quantum dots made of pure selenium can be made by simply firing a laser beam at selenium powder mixed into a glass of water. The easy and inexpensive process was developed by researchers at the University of Texas at San Antonio and Northeastern University in the US, and unlike other techniques, does not involve potentially toxic chemicals. The high-quality nanostructures could be used in two very different applications: as antibacterial agents and as light harvesters in solar cells.

Quantum dots are tiny pieces of semiconductor – such as selenium – that are typically tens of nanometres across. The size of a quantum dot dictates how its charge-carrying electrons and holes interact with light. As a result, they are of great interest to researchers trying to develop photonic technologies and especially solar cells. However, growing quantum dots that are pure and all the same size can be a challenge.
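The size dependence mentioned here is quantum confinement: squeezing a charge carrier into a box of width d raises its ground-state energy roughly as 1/d². A textbook particle-in-a-box estimate (not a calculation from the paper – the effective mass is a placeholder):

```python
import math

HBAR = 1.0546e-34    # reduced Planck constant (J s)
M_EFF = 9.109e-31    # placeholder effective mass (free-electron mass, kg)
EV = 1.602e-19       # joules per electronvolt

def confinement_shift_ev(d_metres: float) -> float:
    """Ground-state energy of one carrier in a 1D box of width d."""
    return (HBAR * math.pi / d_metres) ** 2 / (2 * M_EFF) / EV

for d_nm in (5, 10, 20):
    shift_mev = 1e3 * confinement_shift_ev(d_nm * 1e-9)
    print(f"d = {d_nm:2d} nm: energy shift ~ {shift_mev:.1f} meV")
```

Halving the dot size quadruples the energy shift, which is why tight control over particle size matters for optical applications.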

Green and easy

The researchers, led by Gregory Guisbiers in San Antonio, created their pure selenium quantum dots using a technique called pulsed laser ablation in liquids (PLAL), which involves simply firing a pulsed laser beam at a target – in this case selenium powder in water. “Our method is ‘green’ because it does not involve any dangerous solvents, only water, and there are no toxic adducts or by-products, like those often encountered in many wet chemistry processes,” explains Guisbiers. “It is also cheap and easy because we do not need a vacuum chamber or clean room – everything is done in a beaker of water.” The pure nanoparticles produced are also easy to collect and store because they are directly synthesized in solution, he adds.

This is the first time that selenium quantum dots have been synthesized using PLAL at ultraviolet and visible wavelengths, he says. These wavelengths are particularly interesting because they are better at reducing the size of particles compared with light at near-infrared wavelengths. Guisbiers and colleagues also showed that the crystallinity of the nanoparticles created by this technique depends on their size – that is, the smallest particles are crystalline while the largest ones are amorphous.

Antibacterial and anti-cancer

Selenium nanoparticles have antibacterial and anti-cancer properties, and could be used in medicine because the material is biocompatible and already exists in our bodies. However, nanoparticles need to be free of surface contaminants if they are to be employed in a biomedical setting – something that has proved difficult to achieve in the past.

The team, which has already tested its nanoparticles on E. coli, is now looking to see if they are efficient at killing other types of bacteria. “We are particularly interested in other bacteria involved in nosocomial diseases, like the methicillin-resistant Staphylococcus aureus,” Guisbiers says. “I’m told that [hospital-acquired infections] cause roughly 100,000 deaths every year in the US alone because bacteria are becoming more and more resistant to existing antibiotics. What’s more, these so-called super-germs are spreading worldwide, making this a major international health concern.”

The researchers will report their work in an upcoming issue of Laser Physics Letters. The team is also planning to incorporate the pure selenium quantum dots that they made into third-generation solar cells. “Indeed, since the element itself is a p-type semiconductor, when combined with an n-type semiconductor, we can build p–n junctions (the building blocks of all modern-day electronics) at the nanoscale,” adds Guisbiers.

A cabinet of invisible curiosities

Engraving of an 18th century magician in breeches and a powdered wig making objects disappear

“If you could be invisible, what would you do? The chances are that it would have something to do with power, wealth or sex. Perhaps all three.” This statement from Philip Ball’s book Invisible: the Dangerous Allure of the Unseen may sound cynical, but it is probably also accurate. After all, what would you do if you suddenly had a power that, for nearly all of human history, has belonged to the world of the obscure, the occult and the supernatural?

Ball’s book lures you into this world. In reading it, I was reminded of a Victorian museum: every chapter is full of weird and wonderful exhibits. In the first chapter you enter the courtyard of myths and sagas. There, the technology of invisibility is of no concern – gods, after all, can do anything – but the exhibition shows you the human reasons why someone might wish to become invisible. Yes, they are what you would expect: power, wealth and sex, or all three together.

Next comes the dungeon of the dark ages, filled with occult forces that can be mastered with complicated spells and magical ingredients (black cats, mirrors, poisonous plants and the like) if you have acquired the right secret knowledge, the Faustian tome, “with secrets crammed, from Nostradamus’ very hand”. The guardians of this world were the secret societies such as the Rosicrucians and Freemasons who lured their followers away from the path of the plain and obvious. One may be amused or confused by the actions of this dungeon’s inhabitants, but their crude attempts at natural magic were probably the beginning of the dream of mastering the unseen. However inadequate their means were at the time, in some sense they were the precursors of the scientific societies of today.

Once past the dungeon, one enters the chamber of ghosts and fairies – spiritualism was very much en vogue in Victorian times – followed by a large hall filled with the apparatus of invisible rays and waves. Here we find such curiosities as Röntgen’s X-rays, revealing skeletons in living people, and Marconi’s radio waves, carrying voices over vast distances. Sometimes technology and the desire for the supernatural mix, for in spiritualistic séances, Victorian ghosts communicate in a code inspired by the Morse code of the telegraph.

The next room is more brightly lit, illustrating the concept of unseen forces and particles forming the foundations of the world as we see it. On one wall, the portrait of Sigmund Freud guides you through a séparé to an antechamber dedicated to psychology and the subconscious, where seeing is not always believing. This is followed by the hall of fame of invisible novel and film characters, in particular H G Wells’ “Invisible Man” and his cousins. Here, too, is some nice physics, explaining how the invisible man disappears. The science of light – optics – has finally found a place in the museum.

The next hall is filled with magnifying glasses and microscopes, revealing the world of the microscopically small – the bizarre, monstrous forms of fleas and other insects that shocked people when they first saw them, but also the invisible world of bacteria and viruses. In making this world visible, scientists discovered the real causes of infectious diseases and could finally come up with effective remedies against them, from disinfectants to antibiotics, much to the benefit of mankind. But fears of the microworld still linger: what if a grey goo of rampant nanobots infects us all?

The last two chapters deal with camouflage, stealth and other forms of modern invisibility technology. Here, Ball points out how much attitudes have changed over time. A microwave cloaking device, for example, would not have impressed a medieval audience at all, as it is clearly visible. All it does is guide invisible electromagnetic microwaves around objects placed inside the device; it is only invisible to the already invisible. Only if you know from science that these invisible waves are as real as the world you see with your own eyes will you find this impressive. The fact that people have indeed been impressed by microwave cloaking shows the extent to which science has entered the public consciousness.

Invisible is filled to the brim with stories, anecdotes and gossip about people whose names you have probably never heard. It may amuse you to learn what both serious people and charlatans have believed in the past and how incredibly they went wrong. But equally, it may depress you, as you may wonder whether we are any wiser these days: “For while man strives he errs”, as Goethe’s Faust has it. For my taste, however, the book focuses too much on the dark, gothic side of invisibility and on the absurd errors of our unfortunate predecessors. Even in the darkest times, amidst the greatest confusion and error, there has been an invisible stream of reason, clarity and wonder that elevates those who follow it above the absurd. It is called science. As Steven Weinberg put it in The First Three Minutes, “The effort to understand the universe is one of the very few things which lifts human life a little above the level of farce and gives it some of the grace of tragedy.” In a book about invisibility, I would have preferred less of the farce and more of the science.

  • 2014 The Bodley Head £25.00hb 336pp

Social physics and antisocial science

Alex “Sandy” Pentland is a computer scientist with an impressive academic record and an even more impressive history of translating academic outputs into business and consultancy. To say he has entrepreneurial flair would seem to be an understatement; his previous book was a bestseller, and his career is sprinkled liberally with consultancies and spin-outs from his research group. His career defies easy categorization, but he calls the work that he does on network analysis and computational social science “social physics”. In his latest book, Social Physics: How Good Ideas Spread – the Lessons from a New Science, he outlines his vision of a discipline that has a history of infighting and intellectual land-grabbing.

The term “social physics” was originally coined in the early 1800s by the philosopher Auguste Comte, who hoped that a mechanistic science could help to unravel society’s complexities. When another scholar, the Belgian astronomer Adolphe Quetelet, started using the term for his own brand of mathematical social science, Comte decided he didn’t want to be a social physicist anymore and became the first sociologist instead. Whether this reflects worst on the egos and caprices of physicists or sociologists rather depends on the reader’s existing prejudices.

Since Comte’s day, attempts by political philosophers, mathematicians and computer scientists to create a definitive calculus of human society have failed in a variety of interesting ways, whether from lack of data, poverty of imagination or dogmatism of approach. The people who you might identify as social physicists nowadays would probably say they worked in complexity theory, network science, machine intelligence or another of the technically challenging, frequently data-led approaches sitting at the nexus of statistics, computational modelling and applied maths. It’s not so much a discipline as an enthusiastic club, formed largely of recovering physicists and computer scientists who take a quantitative approach to understanding people.

Pentland’s definition of social physics is certainly within this wheelhouse. His work focuses on the power of social networks – power to influence people to exercise and lose weight, to enable creativity, and to create “cities of tomorrow” in the mould of Jane Jacobs, the 20th-century journalist turned urban-studies activist. Many of Pentland’s studies are based on user-centred data, often gathered from sensors that are worn on the body and are designed to collect information about social interaction. These data include aggregate information, such as how often these individuals meet one another, but also more detailed evidence about conversational turn-taking and duration of speech.

This micro-level information is synthesized into living, breathing, second-by-second social networks in a manner unimaginable by Stanley Milgram when he carried out his field-defining “six degrees of separation” experiment nearly 50 years ago. With a little statistical magic, Pentland’s team at the Massachusetts Institute of Technology has been able to convert these rich data streams into concrete insights into the functioning of social networks, and create recommendations that have helped to transform businesses and public health projects.
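To give a flavour of what such an analysis looks like – as a toy illustration only, not Pentland’s actual pipeline – here is how time-stamped conversation logs might be turned into a weighted social network:

```python
from collections import Counter

# Hypothetical sensor log: (person, person, minutes of conversation).
contacts = [
    ("ana", "ben", 12), ("ana", "ben", 5), ("ben", "cho", 30),
    ("ana", "cho", 2), ("cho", "dev", 18), ("ben", "dev", 7),
]

# Build a weighted, undirected interaction network.
edge_weight = Counter()
for p, q, minutes in contacts:
    edge_weight[frozenset((p, q))] += minutes

# Simplest centrality measure: total conversation time per person.
strength = Counter()
for pair, minutes in edge_weight.items():
    for person in pair:
        strength[person] += minutes

print(strength.most_common())   # who sits at the centre of this toy network
```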

While the work they’ve done is impressive and engages wonderfully with the world it seeks to improve, there are areas where Pentland’s book runs aground. The worlds of data science, “big data”, “smart cities” and the “Internet of Things” are already having huge impacts on the social sciences – sociologist Emma Uprichard referred last year to the “methodological genocide” that is being visited on her subject – and so tying “social physics” to a specific branch of network theory, with an added dash of management science or social psychology, seems restrictive in scope.

Pentland also seems to flounder when it comes to the contentious political issues that working in social science almost inevitably generates. In particular, the ways in which he discounts the concepts of “markets” and “class” seem like a desire to sidestep the thorny issues that characterize divisions between the right and the left. He is prone to techno-Utopianism, and seems to take the view that creating and mediating social networks (perhaps via smartphones or sensing technologies) will solve the problems that plague the modern city. To support this theory, he cites Jacobs’ ideas of community urbanism (themselves partly inspired by the early discussions of complexity science by Warren Weaver), but I would have liked to have seen more of the scholarship that bridges Jacobs’ mid-20th century work and Pentland’s current research. References to this body of work are rather buried in the bibliography, and aren’t really discussed a great deal in the main text. He does, however, give special note to individuals when he discusses the work of his own PhD students – a significant gesture that I suspect many senior academics forget in the white heat of a book deal, and one that makes me rather warm to him as an author.

Some of the ideas that I found most exciting in Pentland’s work were relegated to later sections, such as his Open Personal Data Store, where users would store their personal data (not only biographical but real-time and location based) and decide who has access to it, and at what price. (For example, are the services provided by Facebook worth a certain loss of privacy? How much privacy?) This user-owned model of data could be nothing short of revolutionary. His work in analysing mobile phone data for the whole of Côte d’Ivoire is also fascinating and creates huge opportunities, as well as raising issues around globalism and inequality. This, however, is covered rather briefly.

Social Physics is an engaging and worthwhile read, and a good introduction to some of the ideas fizzing around the discipline. It focuses almost exclusively on Pentland’s own work, but does so in a readable and enthusiastic fashion. On the downside, it left me wanting to hear more of the stories behind the “thousands of hours of sensing” and “hundreds of gigabytes of data” these studies collected. We only really hear Pentland’s success stories, but what happened when things went wrong? What was unexpected? It’s these details and fallibilities that bring research stories to life. This wonderful flavour of science gives us new techniques to understand and tackle social problems, but these techniques raise their own questions – questions that are sometimes too easily dismissed in a zippy “tasting menu” that shows off Pentland’s particular flavour of social physics.

  • 2014 Penguin Press $27.95hb 320pp

Villainous physicists, Hubble’s cat and more

This week we heard about a possible new James Bond film villain and it’s none other than Stephen Hawking. According to this story in the Telegraph, he feels that his trademark wheelchair and computerized voice would lend themselves perfectly to the part. On the same note, we saw this interesting feature on the Wired website that looks at the history behind Hawking’s very recognisable voice. Last month, I was lucky enough to attend an early screening of James Marsh’s Hawking biopic The Theory of Everything, which includes a rather touching and funny scene of Hawking testing out his voice for the first time. You can read more about the film in the reviews section of the upcoming January issue of Physics World.


Physicist nominated as US defence secretary

Ashton Carter

US president Barack Obama has nominated physicist Ashton Carter as secretary of defence – one of the key positions in the US cabinet. With a DPhil in theoretical physics from Oxford University, the 60-year-old has extensive experience in government and academia, working predominantly on national security and military issues. His nomination will first require confirmation by the US Senate, but insiders indicate that Carter’s passage will be smooth. “I can’t imagine that he’s going to have opposition to his confirmation,” says Oklahoma senator James Inhofe, a Republican who is frequently at odds with the Obama administration.

Carter was not talking to the media – a tradition among government nominees until they have appeared before the Senate. However, former colleagues have praised Carter as capable of overseeing US defence policy. “He comes grounded in the conceptualization of a physicist – he is very good in policy areas,” Graham Allison of Harvard University’s Belfer Center for Science and International Affairs told Physics World. “Both his intellectual foundation and his work as a practitioner in government have been informed by evidence-based analysis, as opposed to hocus-pocus.”

From physics to the Pentagon

Carter earned a degree in physics and medieval history from Yale University in 1976, and won a Rhodes Scholarship to Oxford, where he chose to concentrate on physics, graduating in 1979. “My arrogant view at the time was that life would eventually teach me political science, sociology, psychology and even economics, but it would never teach me linear algebra,” he wrote in a short autobiography for the Belfer Center.

In 1981, following a year as a research associate in theoretical physics at Rockefeller University, and another in the Congressional Office of Technology Assessment, he joined the Pentagon as a civilian programme analyst. Three years later, he applied his background in physics to missile defence, dubbing former US president Ronald Reagan’s “Star Wars” nuclear-weapons defence system “unworkable”.

Between 1984 and 1993, Carter then held a variety of academic positions at the Massachusetts Institute of Technology and Harvard, focusing on international affairs and becoming an expert on defence issues and international security. He rejoined the Pentagon in 1993 as assistant secretary of defence for international security policy. There, he helped to secure nuclear weapons in Eastern Europe following the collapse of the former Soviet Union.

After some years in industry and academia, during which he wrote three books and contributed a multitude of journal and popular-science articles, Carter returned to government in 2009 as undersecretary of defence for acquisition, technology and logistics. In that position, he earned praise for his ability to guide the defence department through the government sequester – which forced all departments to reduce their spending – with minimal damage to military readiness.

Carter served as deputy to defence secretary Leon Panetta from 2011 to 2013. In that role, he identified and carried out cuts to the military budget mandated by Congress. Panetta compared him to Scotty, the engineer in Star Trek: “I worked on the bridge while he manned the engine room,” Panetta wrote in his recent book Worthy Fights: A Memoir of Leadership in War and Peace.

Challenges ahead

Carter was tipped to replace Panetta when the latter left the defence department two years ago, but the Obama administration instead selected former Nebraska Senator Chuck Hagel. Hagel’s retirement from the position, announced in November, has now opened the way for Carter, who is currently lecturing at Stanford University.

Once confirmed, Carter will face fresh challenges within a tightly controlled financial situation, including the struggle with Islamic State in the Middle East, the small but continuing US military presence in Afghanistan and Iraq, and maintaining a US military presence in Asia. “We are trying to manage on a lower budget at a time when the threat is not receding,” Carter told the Boston Globe in 2012. “The world hasn’t gotten any safer.”

Planck offers another glimpse of the early universe

Results of four years of observations made by the Planck space telescope provide the most precise confirmation so far of the Standard Model of cosmology, and also place new constraints on the properties of potential dark-matter candidates. That is the conclusion of astronomers working on the €700m mission of the European Space Agency (ESA). Planck studies the intensity and the polarization of the cosmic microwave background (CMB), which is the thermal remnant of the Big Bang. These latest results will no doubt frustrate cosmologists, because Planck has so far failed to shed much light on some of the biggest mysteries of physics, including what constitutes the dark matter and dark energy that appear to dominate the universe.

Planck ran from 2009 to 2013, and the first data were released in March last year, comprising temperature data taken during the first 15 months of observations. A more complete data set from Planck will be published later this month, and is being previewed this week at a conference in Ferrara, Italy (“Planck 2014 – The microwave sky in temperature and polarization”). So far, Planck scientists have revealed that a previous disagreement of 1–1.5% between Planck and its predecessor – NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) – regarding their absolute-temperature measurements has been reduced to 0.3%.

One of the biggest challenges for Planck is to separate dust and CMB polarization in frequency bands around 100–200 GHz where both emissions are nearly indistinguishable. With that in mind, Planck was designed to have a channel dedicated to the observation of polarized dust – the 353 GHz channel.

“With the proper extrapolation in frequency, the 353 GHz data can be used to clean the lower frequency channels and get to the pristine CMB signal,” says Marc-Antoine Miville-Deschênes at the Institut d’Astrophysique Spatiale in Orsay, France. “Interestingly, the 353 GHz channel is also bringing totally new information on the magnetic field of the Milky Way,” adds Miville-Deschênes, who is part of the Planck collaboration. In fact, he tells physicsworld.com that this is the first time that such images could be obtained and that “Planck is providing us with all-sky data. We will be able to answer fundamental questions that were raised more than 60 years ago about the role of the magnetic field in the formation of stars”.
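The cleaning Miville-Deschênes describes amounts to template subtraction: extrapolate the dust-dominated 353 GHz map down in frequency and remove it from a CMB channel. A toy single-coefficient sketch of the idea (a drastic simplification of the real pipeline, with an assumed scaling factor):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 10_000

# Toy sky maps: a common CMB signal plus frequency-dependent dust.
cmb = rng.normal(0.0, 1.0, n_pix)
dust = rng.normal(0.0, 1.0, n_pix)
map_143 = cmb + 0.05 * dust    # dust is faint at 143 GHz
map_353 = 20.0 * dust          # dust dominates at 353 GHz

# Assumed extrapolation coefficient from 353 GHz to 143 GHz; in reality
# it comes from a modified-blackbody model of the dust emission.
alpha = 0.05 / 20.0
cleaned = map_143 - alpha * map_353

print("residual dust rms:", np.std(cleaned - cmb))   # ~0 in this toy model
```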

Winnowing dark matter

Planck’s latest measurement of the CMB polarization rules out a class of dark-matter models involving particle annihilation in the early universe. These models were developed to explain excesses of cosmic-ray positrons that have been measured by three independent experiments – the PAMELA mission, the Alpha Magnetic Spectrometer and the Fermi Gamma-Ray Space Telescope.

The Planck collaboration also revealed that it has, for the first time, “detected unambiguously” traces left behind by primordial neutrinos on the CMB. Such neutrinos are thought to have been released one second after the Big Bang, when the universe was still opaque to light but already transparent to these elusive particles. Planck has set an upper limit (0.23 eV/c²) on the sum of the masses of the three types of neutrinos known to exist. Furthermore, the new data exclude the existence of a fourth type of neutrino that is favoured by some models.

Planck versus BICEP2

Despite the new data, the collaboration did not give any insights into the recent controversy surrounding the possible detection of primordial “B-mode” polarization of the CMB by astronomers working on the BICEP2 telescope. If verified, the BICEP2 observation would be “smoking-gun” evidence for the “inflation” of the early universe – the extremely rapid expansion that cosmologists believe the universe underwent a mere 10⁻³⁵ s after the Big Bang. A new analysis of polarized dust emission in our galaxy, carried out by Planck in September, showed that the part of the sky observed by BICEP2 has much more dust than originally anticipated, and while this did not completely rule out BICEP2’s original claim, it established that the dust emission is nearly as big as the entire BICEP2 signal. Both Planck and BICEP2 have since been working together on a joint analysis of their data, but a result is still forthcoming.

  • This article was updated on 5 December 2014.

A strong model, with flaws

John Moffat’s new book covers the history of the Standard Model of particle physics from its beginnings to the recent discovery of the Higgs boson – or, as Moffat cautiously calls it, the new particle most physicists believe is the Standard Model Higgs. But Cracking the Particle Code of the Universe isn’t just any book about the Standard Model: it’s about the model as seen through the eyes of an insider, one who has witnessed many fads and statistical fluctuations come and go. As an emeritus professor at the University of Toronto, Canada, and a senior researcher at the nearby Perimeter Institute, Moffat has the credentials to do more than just explain the theory and the experiments that back it up: he also offers his own opinion on the interpretation of the data, the status of the theories and the community’s reaction to the discovery of the Higgs.

The first half of the book is mainly dedicated to introducing the reader to the ingredients of the Standard Model, the particles and their properties, the relevance of gauge symmetries, symmetry breaking, and the workings of particle accelerators. Moffat also explains some proposed extensions and alternatives to the Standard Model, such as “technicolor”, supersymmetry, preons, additional dimensions and composite Higgs models as well as models based on his own work. In each case he lays out the experimental situation and the technical aspects that speak for and against these models.

In the second half of the book, Moffat recalls how the discovery unfolded at the Large Hadron Collider (LHC) and comments on the data that the collisions yielded. He reports from several conferences he attended, or papers and lectures that appeared online, and summarizes how the experimental analysis proceeded and how it was interpreted. In this, he includes his own judgment and relates discussions with theorists and experimentalists. We meet many prominent people in particle physics, including Guido Altarelli, Jim Hartle and Stephen Hawking, to mention just a few. Moffat repeatedly calls for a cautious approach to claims that the Standard Model Higgs has indeed been discovered, and points out that not all necessary characteristics have been found. He finds that the experimentalists are careful with their claims, but that the theoreticians jump to conclusions.

The book covers the situation up to March 2013, so of course it is already somewhat outdated; the ATLAS collaboration’s evidence for the spin-0 nature of the Higgs boson was only published in June 2013, for example. But this does not matter all that much because the book will give the dedicated reader the necessary background to follow and understand the relevance of new data.

Moffat’s writing sometimes gets quite technical, albeit without recourse to equations, and I doubt readers will fully understand his elaborations without at least some knowledge of quantum field theory. He introduces the main concepts he needs for his explanations, but he does so very briefly; for example, his book features the shortest explanation of gauge invariance I have ever come across, and many important concepts, such as cross-sections or the relation between the masses of force-carriers and the range of the force, are only explained in footnotes. The glossary can be used for orientation, but even so, the book will seem very demanding for readers who encounter the technical terms for the first time. However, even if they are not able to follow each argument in detail, they should still understand the main issues and the conclusions that Moffat draws.

Towards the end of the book, Moffat discusses several shortcomings of the Standard Model, including the Higgs mass hierarchy problem, the gauge hierarchy problem and the unexplained values of particle masses. He also briefly mentions the cosmological constant problem, as it is related to questions about the nature of the vacuum in quantum field theory, but on the whole he steers clear of discussing cosmology. He does, however, comment on the anthropic principle and the multiverse, and does not hesitate to express his dismay about the idea.

While Moffat gives some space to discussing his own contributions to the field, he does not promote his point of view as the only reasonable one. Rather, he makes a point of emphasizing the necessity of investigating alternative models. The measured mass of the particle-that-may-be-the-Higgs is, he notes, larger than expected, and this makes it even more pressing to find models better equipped to address the problems with “naturalness” in the Standard Model.

I have met Moffat on various occasions and I have found him to be not only a great physicist and an insightful thinker, but also one who is typically more up to date than many of his younger colleagues. As the book also reflects, he closely follows the online presentations and discussions of particle physics and particle physicists, and is conscious of the social problems and cognitive biases that media hype can produce. In his book, Moffat especially criticizes bloggers for spreading premature conclusions.

Moffat’s recollections also document that science is a community enterprise and that we sometimes forget to pay proper attention to the human element in our data interpretation. We all like to be confirmed in our beliefs, but as my physics teacher liked to say, “belief belongs in the church”. I find it astonishing that many theoretical physicists these days publicly express their conviction that a popular theory “must be” right even when it is still unconfirmed by data – and that this has become accepted behaviour for scientists. A theorist who works on alternative models today is seen too easily as an outsider (a non-believer), and it takes much courage, persistence and stable funding sources to persevere outside the mainstream, as Moffat has done for decades (and still does). This is an unfortunate trend that many in the community do not seem to be aware of, or do not see why it is of concern, and it is good that Moffat in his book touches on this point.

In summary, Moffat’s new book is a well-done and well-written survey of the history, achievements and shortcomings of the Standard Model of particle physics. It will equip the reader with all the necessary knowledge to put into context the coming headlines about new discoveries at the LHC and future colliders.

  • 2014 Oxford University Press £19.99/$29.95hb 256pp

Artistic influences

In the autumn of 1610, the Italian painter Lodovico Cardi (better known by his professional name, Cigoli) began his most challenging work: a fresco to adorn the inside of the dome of the Basilica di Santa Maria Maggiore in Rome. Based on a biblical scene, the fresco depicts the Virgin Mary as the “queen of heaven”, standing on the Moon, wearing a multicoloured robe and carrying a sceptre, with a halo of 12 stars floating above her head. A closer look, however, reveals something new in Mary’s otherwise familiar pose: the Moon beneath her feet is pock-marked with numerous crater-like features. Cigoli, it turns out, was friends with Galileo, and historians believe he either looked through one of Galileo’s telescopes or at least closely perused Galileo’s Sidereus Nuncius, published just a few months earlier.

Cigoli’s fresco, then, is an early example of art drawing inspiration from science. It would not be the last. True, as science became more specialized, its practitioners tended to work in ever-narrower domains – but the bridge between art and science was never fully severed, and at the turn of the 20th century the links once again seemed to strengthen. At that time, both science and art were grappling with new ways of seeing the world, and it may be more than a coincidence that the application of non-Euclidean geometry to physics – notably by Einstein in his general theory of relativity – came just as painters such as Pablo Picasso and Georges Braque were experimenting with bold new ways of interpreting the world on canvas. The movement they founded, known as cubism, challenged traditional ideas of space and time – and echoed the latest developments in mathematics and physics.

The interplay between science and the arts in the early 20th century is a subject that Arthur I Miller explored in his earlier (very good) book Einstein, Picasso: Space, Time and the Beauty That Causes Havoc. Miller, an emeritus professor at University College London, has now turned his attention to the last 50 years, with a particular focus on the art–science scene today. Colliding Worlds bears witness to what Miller sees as a new phase in the history of art – one in which boundaries between disciplines have become blurred. Many of the artists Miller profiles have benefited from collaborations with scientists or engineers; a few of them are scientists themselves. One of the artists that he speaks with declares that today’s art “is an offspring of science and technology”.

In this entertaining, thorough investigation, we meet dozens of artists taking the field in a myriad of new directions. There’s Harold Cohen, for example, whose computer program, named AARON, produces much-sought-after abstract paintings; Katharine Dowson, who uses lasers to sculpt glass into biologically accurate depictions of the human brain; and Tim Otto Roth, who uses data from the Hubble Space Telescope to project “the heartbeat of the primordial universe” onto the sides of buildings. There’s no getting around the impression that a few of them are oddballs: ORLAN is an artist whose raw materials include her own skin cells; before meeting her, Miller received an e-mail from an assistant reminding him that her name is to be spelled in all capitals. But to be fair, the world of science has no shortage of oddballs either.

One question that looms in the background is whether the art–science influence flows in two directions or just one. Even the word “influence” is offensive to some artists. Miller recounts how, when chairing an art–science debate, he offhandedly used the phrase “science-influenced” to describe some of the artists taking part, and was immediately attacked by the panellists for suggesting “a hierarchy of disciplines – that science was above art”. It’s not clear how fruitful the evening was; as Miller writes, “The panellists seemed to know little of what went on in the world of science outside biology. This held for the audience, too, the vast majority of whom were artists.” At any rate, the impression one is left with after reading Colliding Worlds is that “influence” is a perfectly good word to describe what’s happening. Moreover, the influence is primarily, if not exclusively, one-way: whether the artists Miller has profiled will admit it or not, they are, in fact, deeply influenced by science, with little evidence to suggest a similar kind of influence going the other way. The book’s subtitle suggests Miller himself is sympathetic to this view, although he does record a small handful of exceptions. In one case, an encounter with an artist led a physicist to study the interaction between laser beams and soap bubbles; in another, a biophysicist was inspired to investigate artificial photosynthesis.

Miller also tackles a more difficult problem: can science explain the appeal of art? More generally, is there a scientific basis for aesthetics? He quotes from Richard Taylor, a physicist who is also an artist: science, Taylor hopes, will “throw a narrow beam of light into those dim corners of the mind where great paintings exert their power”. Yet there is, at this point, no consensus on the matter. Even trickier is the question of whether art and science are driven by similar motivations – perhaps, Miller suggests, by a quest for symmetry, or, more generally, for “beauty”. A fully satisfying answer is too much to hope for, especially given how hard it is to even pin down definitions of “art” and “science” that everyone can agree on; ultimately Miller leaves the issue unresolved. (Even so, an analysis of Jackson Pollock’s drip-paintings, said to exhibit a fractal structure, makes for a fascinating intermezzo midway through the book.)

It’s hard to doubt Miller’s assertion that a new art movement is blossoming. He presents more than enough examples to make the case. Yet his prediction that art and science are on their way to a merger – an argument he mounts in the book’s final pages – must be taken with a grain of salt. Although both art and science will surely surprise us in the years and decades ahead, I suspect they will surprise us in their own, separate ways.

  • 2014 W W Norton £22.00hb 352pp

Elegant constructions

“In science, one plus one is always two; in art it can also be three or more.” This quotation from the artist Josef Albers appears early on in Beautiful Geometry, and it makes an excellent motto for this slyly humorous and – yes – beautiful book about the most visual branch of mathematics. A collaboration between a historian of mathematics, Eli Maor, and a mathematically inspired artist, Eugen Jost, the book contains 51 short essays about different topics in geometry, each of them accompanied by at least one full-colour artwork.

Artwork based on Steiner's porism, showing a 3x3 matrix in which each cell contains brightly coloured chains of circles inscribed inside larger circles

Throughout the book, these vibrant illustrations (a mix of computer-generated images and acrylic on canvas) function both as stand-alone artworks and as demonstrations of mathematical concepts – albeit with a dash of artistic licence from Jost, who is clearly a fan of Albers’ views on the sum of 1 + 1.

The journey through geometry begins with the Pythagoreans, the cult-like group in ancient Greece whose motto “Number rules the universe” reflected their belief in a mathematically ordered world. Their leader, Pythagoras, did not discover the theorem that bears his name, but he may have been the first to prove it, and his followers certainly used it to explore related geometric concepts. One such concept appears in Jost’s artwork “The (3,4,5) Triangle and Its Four Circles” (see image at top of article). At the heart of this image is a right triangle inscribed with an “incircle” that is tangent to all three sides. Outside the triangle are three “excircles”, each of which is tangent to one side of the triangle and to the continuations of the other two sides. The fact that the lengths of this triangle’s three sides form a “Pythagorean triple” (a trio of positive integers a, b and c for which a² + b² = c²) has a curious consequence: it means that the radii of the incircle and excircles must also be integers. This property, Maor notes, leads to some interesting relationships between the radii of the four circles; for example, the product of the radii of the two smaller excircles is equal to the area of the triangle, ab/2.
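Those radii follow from standard triangle formulae: with semiperimeter s and area T, the inradius is T/s and each exradius is T/(s − side). A quick numerical check for the (3,4,5) triangle (standard formulae, not code from the book):

```python
import math

a, b, c = 3.0, 4.0, 5.0    # a Pythagorean triple: a**2 + b**2 == c**2
s = (a + b + c) / 2        # semiperimeter
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula

r_in = area / s                                      # incircle radius
r_a, r_b, r_c = (area / (s - x) for x in (a, b, c))  # excircle radii

print(f"incircle {r_in}, excircles {r_a}, {r_b}, {r_c}")  # 1, 2, 3, 6
print(f"product of smaller excircle radii: {r_a * r_b} = area {area}")
```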

Artwork inspired by the golden ratio, composed of 12 tiles representing different ways the ratio appears in mathematics and nature

Such relationships delighted the Pythagoreans, but later developments in mathematics were less to their liking. As Maor explains, the discovery that √2 cannot be expressed as the ratio of two positive integers “brought about a serious intellectual crisis” among the group’s members.

Of course, knowledge of these new “irrational” numbers also brought about advances in geometry, including studies of the “golden ratio” that crops up in so many areas of art and nature (see image above). And even the Pythagoreans might have been pleased with a few of the modern pieces of geometry that appear later in the book. Another image (also above), for example, illustrates a theorem known as “Steiner’s porism”, which concerns the behaviour of chains of non-concentric circles. Its co-discoverer, Jakob Steiner, helped to revive “classical” geometry in the early 19th century and, despite its antiquity, this well-studied discipline turned out to have some new tricks up its sleeve.

Artwork inspired by Morley's Theorem, made up of a grid of nine tiles containing triangles or significant dates in Morley's life, with the inscription "In any triangle the angle trisectors lying near the sides intersect in an equilateral triangle" written around the edges

A good example is Morley’s Theorem (see image above), which involves angle trisection but could otherwise have come straight out of Euclid’s Elements. Even in classical geometry, Maor concludes, “surprises may still be awaiting us around the corner – or perhaps around the vertex!”
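The theorem is also easy to verify numerically: trisect each angle, intersect the pair of trisectors adjacent to each side, and measure the sides of the resulting inner triangle. A short sketch (my own illustration of the theorem, not code from the book):

```python
import math

def angle_at(p, q, r):
    """Interior angle at vertex p of triangle pqr."""
    ax, ay = q[0] - p[0], q[1] - p[1]
    bx, by = r[0] - p[0], r[1] - p[1]
    return math.acos((ax * bx + ay * by) /
                     (math.hypot(ax, ay) * math.hypot(bx, by)))

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def ray_intersect(p, d1, q, d2):
    """Point where p + t*d1 meets q + s*d2."""
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    t = (-(q[0] - p[0]) * d2[1] + d2[0] * (q[1] - p[1])) / det
    return (p[0] + t * d1[0], p[1] + t * d1[1])

# Any triangle will do; vertices must be listed counterclockwise.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
ang = {P: angle_at(P, Q, R) for P, Q, R in ((A, B, C), (B, C, A), (C, A, B))}

morley = []
for P, Q in ((A, B), (B, C), (C, A)):   # counterclockwise edges
    u = (Q[0] - P[0], Q[1] - P[1])
    v = (P[0] - Q[0], P[1] - Q[1])
    # The trisectors nearest side PQ: rotate the side into the
    # interior by one third of each base angle.
    morley.append(ray_intersect(P, rotate(u, ang[P] / 3),
                                Q, rotate(v, -ang[Q] / 3)))

sides = [math.dist(morley[i], morley[(i + 1) % 3]) for i in range(3)]
print(sides)   # three equal lengths: the Morley triangle is equilateral
```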

  • 2014 Princeton University Press £17.77/$27.95hb 208pp