
How to treat your inner geek

 

By Matin Durrani

The word “geek” used to be a bit of an insult, but to be labelled a geek these days isn’t such a bad thing after all. I think a lot of that’s due to the sheer power and pervasiveness of smartphones, software and IT — in fact, the top definition of “geek” over at Urban Dictionary is “The people you pick on in high school and wind up working for as an adult.” I also reckon the huge popularity of TV’s The Big Bang Theory has played its part in the reversal of fortune of the word, with many of us following the stories of Sheldon, Leonard and their geeky physics pals.


A very brief history of relativity

On 25 November 1915, as devastating war raged throughout Europe, Albert Einstein presented the paper Die Feldgleichungen der Gravitation (“The field equations of gravitation”) to the Prussian Academy of Sciences. This paper, the last in a series of four, marked the first consistent formulation of his general theory of relativity, a task that had challenged Einstein for almost a decade. The golden goose of theoretical physics, enticed to Berlin in 1914, had delivered a priceless egg.

Among physicists, the general theory is regarded as Einstein’s greatest work – the masterpiece that surpasses even his ground-breaking papers of 1905 on the atomic hypothesis, the light quantum and special relativity. Indeed, the general theory has long been considered one of the great triumphs of 20th century science, a tour de force that remains unsurpassed in terms of its originality, profundity and predictive power. By replacing Newton’s “action-at-a-distance” law of gravity with a revolutionary new view of gravity as a curvature of space–time, Einstein laid the foundations for our modern view of the world on the largest scales, from our understanding of black holes to the Big Bang model of the evolution of the universe.

In his 1905 theory of relativity (later named the “special theory”), Einstein’s insistence that the laws of physics must appear identical to observers in uniform relative motion led to the prediction that space and time are neither independent nor absolute. Instead, observers travelling at high speed relative to one another would experience a given interval in space and time differently. Long before this startling prediction could be verified by experiment, Einstein had embarked on the quest for a more universal theory of relativity – that is, a theory that could describe bodies in non-uniform (accelerated) motion. Anxious that the new theory would also include gravitational effects, he realized to his delight in 1907 that these two ambitions were one and the same. This great insight – the equivalence principle – set Einstein on the long and difficult road to the general theory of relativity.

In Einstein’s Masterwork: 1915 and the General Theory of Relativity, John Gribbin provides a timely, succinct and highly accessible account of Einstein’s greatest theory and its legacy today. A visiting fellow in astronomy at the University of Sussex, Gribbin is best known as a science writer of prolific output. Earlier titles include popular histories of modern science (Science: a History), quantum theory (In Search of Schrödinger’s Cat) and cosmology (In Search of the Big Bang), as well as several scientific biographies.

Here, he turns his attention to the story of general relativity, which he tells in the context of Einstein’s life and work before and after 1915. This is a logical approach in many ways. Apart from creating a narrative that is eminently readable, the science is presented in historical context in a manner that makes it easy to absorb.

For example, having described Einstein’s early life and undergraduate years, Gribbin discusses his early research in statistical mechanics. This work is often overlooked in popular accounts, but it set the foundations for Einstein’s pioneering papers of 1905. Similarly, the author explains how Einstein could not progress beyond his equivalence principle before the advent of Hermann Minkowski’s geometrization of special relativity – or before he had acquired sufficient mastery of differential geometry to apply Minkowski’s approach to curved geometries.

Einstein’s long road to general relativity has been the subject of much research in recent years by Einstein scholars such as John Stachel, Don Howard and Jürgen Renn, and accounts of the story have been given at a popular level in books such as Amir Aczel’s God’s Equation, Jean Eisenstaedt’s The Curious History of Relativity and Pedro Ferreira’s The Perfect Theory. Gribbin carves his own place in this literature; while his narrative is less detailed than that found in any of the books above, it deftly conveys the main points in Einstein’s journey to the general theory in characteristically clear and succinct prose.

It must be admitted, however, that in placing the story of general relativity in the wider context of Einstein’s life and work, the author risks telling a tale that has been told many times before, not least in a plethora of Einstein biographies. Indeed, there is considerable overlap with the author’s own 1993 book Einstein: a Life in Science, co-authored with Michael White.

This is no great problem in principle, due to the freshness of the writing and the author’s uncanny ability to convey profound scientific concepts in a few crisp sentences. However, there are undoubtedly times when the book feels more like a biography of Einstein than a biography of his greatest theory.

For example, the chapter that deals with the legacy of general relativity (from classic observational tests to the foundational role of the theory in modern astrophysics and cosmology) is curiously short, and is followed by a chapter describing Einstein’s life and science in his later years. It seemed to this reviewer that the “legacy” section would have worked better as the last chapter and could have been more substantial.

In particular, Gribbin’s discussion of the evolution of relativistic cosmology is extremely brief, given the central importance of general relativity in modern cosmology. While the “static” cosmic models of Einstein and Willem de Sitter are fleetingly mentioned, there is no discussion of Einstein’s resistance to the time-varying cosmologies of Alexander Friedmann and Georges Lemaître when they were first proposed (and no distinction is drawn between their very different approaches). The author also fails to distinguish between Lemaître’s 1927 model of cosmic expansion and his later hypothesis of an origin for the universe, and there is no discussion of Einstein’s conversion to time-varying models of the cosmos in the wake of Edwin Hubble’s observations of the galaxies. These omissions are a pity, as recent research has shown that Einstein’s cosmology offers many insights into his thoughts on both relativity and relativistic cosmology.

It is also puzzling that Einstein’s great search for a unified field theory is mentioned only very briefly, given the central role of general relativity in this long quest. As many science historians have noted, it was the great success of Einstein’s geometrical approach to general relativity that laid the foundation for his unshakeable conviction that the road to unification lay in further generalizations of the field equations (a conviction that was shared by Erwin Schrödinger). Finally, I found the author’s objection to the shorthand term “general relativity” somewhat ahistorical and was disappointed that the famous field equations Gμν = – κTμν were never shown.

However, these are minor criticisms that should not deter the reader from this excellent and informative book. Einstein’s Masterwork is a beautifully written and highly accessible account of the genesis of a great theory, a hugely enjoyable read that is highly recommended for physicists and the public alike.

  • 2015 Icon Books £10.99 hb 240pp

Imaging the polarity of individual chemical bonds

A new atomic force microscopy (AFM) technique, which would allow users to precisely detect and map the charge distribution within molecules, has been developed by researchers in Europe. The team has used its method to demonstrate the difference in bond polarity between two structurally identical but chemically distinct molecules. It hopes the technique could someday be used to map charge movement within photovoltaic materials – something that could potentially help to improve solar cells.

In 1991 a variation of AFM – dubbed Kelvin probe force spectroscopy (KPFS) – was invented, and is used to measure the charge distribution on a surface. As a nanoscale tip oscillates at a tiny distance from the surface, a bias voltage is applied between the two. The electrostatic interaction with the surface pushes and pulls on the tip, changing its oscillation frequency. By measuring the frequency at various points, one can calculate the potential difference between tip and surface, and thereby infer the surface distribution of electronic charge.
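To make that principle concrete, here is a minimal numerical sketch (an illustration of the general KPFS idea, not the instrument’s actual analysis software): because the electrostatic part of the tip–surface force varies quadratically with the applied bias, the frequency shift plotted against voltage traces out a parabola, and the voltage at its vertex gives the local contact potential difference. All numbers and function names below are illustrative assumptions.

```python
import numpy as np

def local_contact_potential(bias_voltages, freq_shifts):
    """Fit the parabola df = a*V**2 + b*V + c and return its vertex voltage.

    The electrostatic tip-surface force varies quadratically with bias, so
    the vertex of the frequency-shift-versus-voltage parabola sits at the
    local contact potential difference, which maps the surface charge.
    """
    a, b, _ = np.polyfit(bias_voltages, freq_shifts, 2)
    return -b / (2.0 * a)

# Illustrative synthetic data with a "true" contact potential of 0.25 V
V = np.linspace(-1.0, 1.0, 41)
df = -3.0 * (V - 0.25) ** 2 - 5.0 + 0.02 * np.random.randn(V.size)

print(f"Estimated local contact potential: {local_contact_potential(V, df):.3f} V")
```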

Chemically attached

In 2009 Leo Gross and colleagues at IBM Research Zurich in Switzerland showed that, by “functionalizing” the tip – making it chemically reactive by attaching a certain molecule to it – they could use AFM to obtain images of individual atoms within molecules. However, to measure the distribution of charge within individual chemical bonds, one needs to perform KPFS with the tip at sub-nanometre distances from the surface. Here, previously unexplained artefacts appear in the images, making KPFS unreliable.

In the new research, physicist Jascha Repp of the University of Regensburg in Germany and colleagues analysed the forces acting on the tip, to better understand the artefacts that appear in the images. The interactions between tip and surface have multiple origins – apart from the electrostatic force at play, there is also repulsion that arises due to the Pauli exclusion principle, which dictates that two identical fermions cannot simultaneously occupy the same quantum state. The closer the tip gets, the more significant the Pauli repulsion becomes. The catch is that, as the potential difference is raised and lowered, the molecules on the surface – and, more significantly, on the tip – are distorted, changing the tip-to-surface distance and hence the strength of the Pauli repulsion, leading to artefacts in KPFS images.

Tip-top solution

Repp’s team has now devised a solution, however. The researchers measured the rate of change of the tip’s oscillation frequency with height as they brought it closer to the surface. This frequency shift was due to the electrostatic interaction, Pauli repulsion and other forces such as van der Waals attraction. They then altered the potential difference between tip and surface, and repeated the process. The electrostatic interaction has a different mathematical dependence on distance from that of the other forces. Therefore, by comparing how the total force varied with distance at both voltages, the researchers were able “to better disentangle all these different contributions and…extract better only the electrostatic contribution”, explains Repp.
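The underlying logic can be sketched in a few lines (a hedged illustration of the principle only, not the team’s actual analysis): the Pauli and van der Waals contributions do not depend on the applied bias, so subtracting force–distance curves recorded at two different voltages cancels them and leaves only the change in the electrostatic term. The functional forms below are assumptions chosen purely for demonstration.

```python
import numpy as np

# Assumed model forces (illustrative forms only)
def f_vdw(z):      return -1.0 / z**6                 # bias-independent attraction
def f_pauli(z):    return 5.0 * np.exp(-8.0 * z)      # bias-independent short-range repulsion
def f_elec(z, v):  return -0.5 * (v - 0.3)**2 / z**2  # bias-dependent electrostatic term

z = np.linspace(0.3, 1.5, 200)   # tip-surface distance (arbitrary units)
v1, v2 = 0.0, 1.0                # two applied bias voltages

total_v1 = f_vdw(z) + f_pauli(z) + f_elec(z, v1)
total_v2 = f_vdw(z) + f_pauli(z) + f_elec(z, v2)

# Subtracting the two "measured" curves cancels every bias-independent force,
# isolating the difference in the electrostatic contribution
electrostatic_difference = total_v1 - total_v2
assert np.allclose(electrostatic_difference, f_elec(z, v1) - f_elec(z, v2))
```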

To demonstrate their technique, the researchers examined several simple molecules. In all cases, the results were closer than traditional KPFS to theoretical predictions. For example, they looked at the structurally identical molecules H12C18Hg3 and F12C18Hg3, showing for the first time that the C–H bonds in the first compound were polarized with the negative charge on the carbon atom, whereas the C–F bonds in the second were polarized the opposite way, as predicted. The team hopes that its method could someday be used to study changes in charge distribution as molecules become excited, thereby helping to optimize organic photovoltaic compounds.

“It’s an important piece of work and it’s given us key insights into electrostatic forces,” says physicist Philip Moriarty at the University of Nottingham. “The only reservation I have – and the authors themselves state this reservation – is that it involves quite a number of approximations. Nonetheless, the comparison with traditional KPFS data is pretty compelling.” He notes that another recently unveiled technique, which extracts the same information by functionalizing the tip with a quantum dot, achieves “comparable” resolution. Moriarty and Gross both believe the two techniques are complementary. “The other technique is very sensitive at longer distances where the lower-order electrostatic multipoles become more dominant,” Gross says. “When you do scanning probe microscopy, you often want high lateral resolution, and therefore it’s important to be able to interpret results that you get at close tip-sample distances. That’s the region which the current paper addresses.”

The research is published in Physical Review Letters.

GRAND plans for new neutrino observatory

A novel radio telescope, currently being designed by scientists in France, China and other countries, could shed light on some of the most violent cosmic phenomena in the universe. If built, the so-called Giant Radio Array for Neutrino Detection (GRAND) – comprising hundreds of thousands of antennas spread over an area just slightly smaller than the UK – would detect extremely high-energy neutrinos originating from deep space. According to the researchers, GRAND would be considerably cheaper to build than rival telescopes based on optical technology.

As neutrinos interact so weakly with other matter and can cross the universe unimpeded, they carry valuable information about the phenomena that created them. Unfortunately, because they are so elusive, huge detectors are needed to catch any on a reasonable timescale.

Radio array

The current leading facility, the $275m IceCube detector, uses thousands of photomultiplier tubes suspended deep within the ice at the South Pole. However, the high cost of photomultiplier tubes limits the overall volume of such a detector, thereby setting an upper limit on the energy of neutrinos that can be snared – especially because the most energetic ones are also the rarest. IceCube has bagged about 100 deep-space neutrinos since turning on in 2011 and looks set to capture several hundred more over the next decade, but is unlikely to detect any with an energy much above 10^15 eV.

Rather than detecting light, GRAND will pick up radio signals – because radio antennas are cheaper to make than their optical equivalents, a much larger detector can be built. The scheme relies on tau neutrinos interacting with nuclei as they travel up through the Earth, thereby generating tau leptons that produce showers of secondary particles as they decay in the atmosphere. The synchrotron radiation given off by charged particle showers would be detected by the antennas.

Olivier Martineau-Huynh at the National Institute of Nuclear and Particle Physics (IN2P3) in Paris, together with colleagues worldwide, has developed a basic plan for the detector. GRAND calls for one antenna per square kilometre, over an area of roughly 200,000 km². Each “detection unit” would consist of three perpendicular butterfly-shaped antennas (pictured) connected to a GPS unit that would time and plot the trajectories of incoming neutrinos. Antennas would be placed on or around mountains, which would make good sites to pick up the resulting radio waves, as well as being additional targets for incoming neutrinos.

The researchers say that their set-up would be sensitive to neutrinos with energies up to about 10^20 eV. Its design is such that about 100 “cosmogenic” neutrinos – those generated by cosmic rays as they interact with the cosmic microwave background on their journey to Earth and which, having an energy of about 10^17 eV, are probably out of IceCube’s reach – should be detected yearly.

According to Martineau-Huynh, GRAND’s observations should help to resolve long-standing questions about the nature and composition of high-energy cosmic rays, and exactly which energetic processes produce them. He says that thanks to an expected very high angular resolution, GRAND could also produce a sky map of astrophysical sources emitting very energetic neutrinos. “It will be the ultimate detector for neutrinos at high energies,” he says.

Cheap challenges

Before they can go any further, however, Martineau-Huynh and co-workers must show that they can indeed make the antennas cheaply. Their estimated price tag for GRAND is $100m (excluding the costs of the manpower involved), so each antenna must be built for no more than $500 – although Martineau-Huynh emphasizes that this is a very preliminary figure. He believes that mass production can push the costs right down, and hopes that one potentially thorny issue – how to transmit antenna data back to base – can be solved using cheap Wi-Fi technology already employed to monitor oil pipelines.
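The arithmetic behind that per-antenna figure is easy to check with the numbers quoted above (preliminary figures from this article, nothing more):

```python
# Back-of-the-envelope check of GRAND's per-antenna budget,
# using the preliminary figures quoted in the article
area_km2 = 200_000            # planned array area, at one antenna per square kilometre
antennas = area_km2 * 1       # detection units
total_budget_usd = 100e6      # estimated cost, excluding manpower

per_antenna_usd = total_budget_usd / antennas
print(f"{antennas:,} antennas -> at most ${per_antenna_usd:,.0f} each")
# 200,000 antennas -> at most $500 each
```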

Another major challenge will be screening out background interference. The researchers calculate that for every cosmic neutrino it detects, GRAND could register as many as a billion events generated by terrestrial sources – such as commercial radio transmissions or thunderstorms. To accurately distinguish their signal from the noise, the researchers are building a prototype array of 35 antennas in the Tian Shan mountains in north-west China. With results from the prototype array expected in 2018, and assuming funding can be obtained from China and other countries, Martineau-Huynh hopes that GRAND might switch on in 2022.

IceCube’s principal investigator, Francis Halzen, of the University of Wisconsin-Madison in the US, describes GRAND’s radio-based approach as “powerful”, but points out that it would not be sensitive to neutrinos with an energy below about 10^16 eV because such particles wouldn’t generate detectable radio waves. He also notes that IceCube’s continued non-detection of cosmogenic neutrinos is unfortunately “restricting the number of events an experiment like GRAND can detect in the future”.

A report on GRAND was presented at the 34th International Cosmic Ray Conference, held in the Netherlands earlier this month. A preprint is available on the arXiv server.

Physicists get a surprise when watching quasicrystals grow

The first “movies” of quasicrystal growth reveal that an unexpected error-correction process is involved in creating the curious structures. Taken by physicists at the University of Tokyo, the high-resolution images suggest that short-range interactions dominate the quasicrystal formation process – rather than long-range interactions, as had been previously thought.

Conventional crystals are characterized by repeating unit cells of atoms, which combine to create a simple crystalline structure with translational symmetry. This symmetry means that if an observer were to move from one unit cell to another, the surrounding crystal would look exactly the same. Such crystals grow via short-range, local interactions between unit cells, because growth consists of simply repeating a single pattern.

Quasicrystals, on the other hand, can be modelled using two unit cells – called Penrose tiles – that combine in a more complicated way to create a structure that has rotational symmetry but lacks translational symmetry. The growth of quasicrystals appears to require atoms to have long-range, nonlocal interactions with each other to create these complex patterns – something that does not have a physical explanation. The problem is that each atomic cluster in a quasicrystal should only have information about its immediate neighbours, and should therefore lack the long-range information necessary for quasicrystalline patterns to form. Indeed, many prominent scientists, including the Nobel laureate Linus Pauling, dismissed the idea of quasicrystals when it first emerged.
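A one-dimensional analogue helps to show how just two units can fill space in an ordered but never-repeating way using purely local rules. The sketch below builds the Fibonacci word from two “tiles”, L and S, by repeated substitution; it is an illustrative analogy only, not the Penrose tiling or any growth model discussed in the research.

```python
def fibonacci_word(generations: int) -> str:
    """Build a 1D quasiperiodic sequence from two 'tiles', L and S.

    Each generation applies the purely local substitution L -> LS, S -> L.
    The result is ordered but never repeats periodically - a 1D analogue of
    how two Penrose tiles can cover the plane without translational symmetry.
    """
    word = "L"
    for _ in range(generations):
        word = "".join("LS" if tile == "L" else "L" for tile in word)
    return word

seq = fibonacci_word(8)
print(seq[:21])                          # LSLLSLSLLSLLSLSLLSLSL - no repeating unit cell
print(seq.count("L") / seq.count("S"))   # ratio tends to the golden ratio, ~1.618
```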

Open question

While there is now little doubt that quasicrystals exist, how they grow has remained an open question since they were first discovered in 1984 by Daniel Shechtman, who bagged the 2011 Nobel Prize for Chemistry for his pioneering work on the subject.


Today, quasicrystals are used in everyday objects such as frying pans, surgical scalpels and razor blades, but their growth had remained poorly understood. Now, Keiichi Edagawa of the University of Tokyo and his team have used high-resolution transmission electron microscopy to directly observe the growth of a quasicrystal that is an alloy of aluminium, nickel and cobalt. “Our observations reveal a peculiar process of quasicrystal growth that is very different from the ideal growth models proposed so far,” Edagawa told physicsworld.com.

Growing grain

Edagawa and his team made an Al70.8Ni19.7Co9.5 quasicrystal with 10-fold rotational symmetry, meaning that rotating the pattern by integer multiples of 36° gives an identical pattern. The researchers heated the sample to a temperature of 1183 K, which caused atoms from one grain in the quasicrystal to detach and join another grain. The researchers used a high-resolution transmission electron microscope to record a movie of grain growth at 30 frames per second. “Our resolution is sufficient to resolve atomic clusters,” Edagawa says.

The team observed that errors were frequently introduced into the structure as the atomic clusters arranged themselves. However, these errors were consistently corrected over the course of a few seconds to restore the quasicrystalline order. Observing these errors was a surprise: theoretical models of quasicrystal growth had predicted that grains grow perfectly and always preserve the quasicrystalline order.

Six instances of “error-and-repair” events were observed over 15 s on one grain. The presence of these errors suggests to Edagawa and his team that the growth of quasicrystals is indeed governed by local atomic interactions; nonlocal interactions would ensure constant quasicrystalline order and obviate the need for error-and-repair events. “Local interactions appear to suffice to construct ideal quasicrystalline order,” the researchers reported, an assertion that is supported by recent Monte Carlo and molecular-dynamics simulations. The team believes that the assembly errors increase the free energy of the quasicrystal and that the subsequent rearrangements back to quasicrystalline order decrease the free energy again.

The observations are described in Physical Review Letters.

Moving meridians, Stradivarius violins, sunspots and more

The Prime Meridian and the modern reference meridian

 

By Tushna Commissariat

A visit to the Royal Observatory in Greenwich is incomplete without walking along the Prime Meridian of the world – the line that literally divides the east from the west – and taking some silly photos across it. But you may be disappointed to know that the actual 0° line of longitude lies nearly 100 m to the east of the marked meridian. Indeed, your GPS would readily show you that the line actually cuts through the large park in front of the observatory. I, for one, am impressed that the original line is off by only 100 m, considering that it was plotted in 1884. A recently published paper in the Journal of Geodesy points out that with the extreme accuracy of modern technology like GPS, which has replaced the traditional telescopic observations used to measure the Earth’s rotation, we can now measure this difference. You can read more about it in this article in the Independent.


Physicists isolate neutrinos from Earth’s mantle for first time

The first confirmed sightings of antineutrinos produced by radioactive decay in the Earth’s mantle have been made by researchers at the Borexino detector in Italy. While such “geoneutrinos” have been detected before, it is the first time that physicists can say with confidence that about half of the antineutrinos they measured came from the Earth’s mantle, with the rest coming from the crust. The Borexino team has also been able to make a new calculation of how much heat is produced in the Earth by radioactive decay, finding it to be greater than previously thought. The researchers say that in the future, the experiment should be able to measure the quantities of radioactive elements in the mantle as well.

According to the bulk silicate Earth (BSE) model, most of the radioactive uranium, thorium and potassium in our planet’s interior lies in the crust and mantle. Accounting for about 84% of our planet’s total volume, the mantle is the large rocky layer sandwiched between the crust and the Earth’s core. Heat flows from the interior of the Earth into space at a rate of about 47 TW, but one of the big mysteries of geophysics is how much of this heat is left over from when the Earth formed, and how much comes from the radioactive decay chains of uranium-238, thorium-232 and potassium-40.

Peering deep underground

One way to settle the question is to measure the antineutrinos produced by these decay chains. These tiny particles travel easily through the Earth, which means that detectors located near the surface could give geophysicists a way of measuring the abundance of radioactive elements deep within the Earth – and thus the heat produced deep underground.

Back in 2005 physicists working on the KamLAND neutrino detector in Japan announced that they had detected 22 geoneutrinos, while Borexino, which has been running since 2007, reported in 2010 that it had seen 10 such particles. Both detectors have since spotted more geoneutrinos and, taken together, their measurements suggest that about one half of the heat flowing out of the Earth is generated by radioactive decay, although there is large uncertainty in this value.

Italian adventure

The Borexino detector is made up of 300 tonnes of an organic liquid, and is located deep beneath a mountain at Italy’s Gran Sasso National Laboratory to shield the experiment from unwanted cosmic rays that would otherwise drown out the neutrino signal. Whenever an antineutrino strikes a proton in the liquid, the resulting inverse beta-decay reaction creates a characteristic flash of light. In the latest work, Borexino physicists have analysed a total of 77 detector events, with the team calculating – using data from the International Atomic Energy Agency – that about 53 of these antineutrinos were produced by nuclear reactors.

The remaining 24 geoneutrinos could have come from either the Earth’s crust or its mantle. However, scientists have a pretty good idea of how much uranium and thorium are in the crust, allowing the Borexino physicists to say that half of these geoneutrinos were produced in the mantle and the other half in the crust. Furthermore, the physicists can say with 98% confidence that they have detected mantle neutrinos – a much greater level of confidence than achieved in previous studies.

The team also calculated the heat generated by radioactive decay in the Earth and found it to be in the 23–36 TW range. This is larger than estimates based on assumptions about the amount of radioactive elements in the Earth, which are in the 12–30 TW range, and also larger than an estimate based on previous antineutrino measurements.
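Putting the figures quoted above together gives a rough sense of how much of the 47 TW flowing out of the Earth these measurements attribute to radioactivity. This is simple arithmetic using only numbers from the article:

```python
# Rough arithmetic using only the figures quoted in the article
total_heat_flow_tw = 47.0                            # heat flowing from the Earth's interior
radiogenic_low_tw, radiogenic_high_tw = 23.0, 36.0   # Borexino-derived radiogenic heat range

low_fraction = radiogenic_low_tw / total_heat_flow_tw
high_fraction = radiogenic_high_tw / total_heat_flow_tw
print(f"Radiogenic fraction of surface heat flow: {low_fraction:.0%} to {high_fraction:.0%}")
# roughly 49% to 77% - consistent with "about one half", but with large uncertainty
```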

The Borexino team also tried to work out what proportion of the geoneutrinos came from the uranium decay chain and what proportion from the thorium chain. Potassium decays were not considered because they are not expected to make a significant contribution to the numbers detected. The data are consistent with the currently accepted ratio of thorium to uranium in the Earth, although the uncertainty in the Borexino values is very large. More data, the Borexino physicists say, should let them make more precise measurements of the contributions of uranium and thorium to the heating of the Earth.

The study is reported in Physical Review D.

Physicist nominated to lead US Department of Energy’s Office of Science

US president Barack Obama has nominated physicist Cherry Murray for the role of director of the Office of Science in the Department of Energy (DOE). The Office of Science supports fundamental research in both the physical sciences and energy research. It also oversees 10 of the DOE’s 17 national laboratories. The nomination must now be confirmed by the US Senate.

Murray earned her bachelor’s degree and her PhD from the Massachusetts Institute of Technology. She then joined Bell Laboratories as a staff scientist and, during her 27-year tenure there, rose to become an executive managing research and development. She has since held several leadership roles in industry, academia and the national laboratory system, most recently as dean of Harvard University’s School of Engineering and Applied Sciences. Murray has also served on the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, and is currently on the Congressional Commission to Review the Effectiveness of the National Energy Laboratories.

High impact

The US Congress is in recess until after 7 September. Once it returns, the Committee on Energy and Natural Resources will review Murray’s nomination. If the nomination passes the committee, it will then be brought before the full US Senate. The confirmation process is a long one, and if Murray is confirmed as director of the Office of Science, she would not expect to begin her new position until December or January. Murray told physicsworld.com that she is “absolutely thrilled to be nominated and hopefully I will be confirmed. I am looking forward to a new experience”.

In the director role, Murray would have several priorities. “The office of science does a really good job getting community input for what is important,” says Murray, “and I would of course continue to do that if confirmed.” Another priority for her would be finding ways to increase collaboration between academia, industry and national laboratories. “I think it’s important for the nation. And, if confirmed, I actually think I can have some impact.”

The director role has been vacant for more than two years, although deputy director Patricia M Dehmer has served as the acting director of the Office of Science. In November 2013 physicist Marc Kastner was nominated for the position. After an initial vetting process lasting several months, his nomination stalled because the hearings did not take place before the make-up of the US Congress changed in 2014. Kastner has since taken the reins of the newly formed Science Philanthropy Alliance, which aims to increase privately funded basic science research.

How flowing galaxies revealed the immensity of the Laniakea Supercluster

An image showing a 2D slice of the supergalactic equatorial plane; the boundary of Laniakea is the closed orange curve. The white lines are velocity flow curves, where red denotes areas of high density and blue shows low density. The Milky Way is the black dot on the right side of Laniakea

By Brent Tully at the International Astronomical Union General Assembly in Honolulu, Hawaii

We know that we live on a planet in a solar system in a galaxy in a group of galaxies. But what do we know about our location in the universe beyond that? Some astronomers would answer that we live in the “Local” or “Virgo” supercluster of galaxies. However, the concept has been vague. In the interconnected “cosmic web” it has not been clear where one dense region of galaxies ends and another begins.

Rather than just looking at the distribution of galaxies, it is instructive to consider the motions of galaxies with respect to each other. On the grand scale, galaxies are flying apart from each other with the expansion of the universe. We have to cancel out that motion to see the residual “peculiar” velocities of galaxies that arise from local gravitational attractors.


Better science policy

The physicist John H Marburger, who died in 2011, had a unique perspective on science policy. He spent many years in the research world, first as president of Stony Brook University and then as director of the Brookhaven National Laboratory, before spending seven years as science adviser to US president George W Bush. Good science policies, he realized, don’t just involve deciding who gets how much money; they also depend on the basis on which those decisions are made.

Keen to give policy-makers a helping hand, Marburger called for the creation of a “science of science policy” that would offer them new tools, metrics and models. His initiative, first unveiled 10 years ago, is discussed in Science Policy Up Close (Harvard University Press 2015) – a collection of Marburger’s writings that I edited – and in a 2011 handbook he co-edited called The Science of Science Policy. Marburger’s thoughts are perceptive, but his ambitions are yet to be realized.

More than money

Early on during his time in Washington, DC, Marburger discovered that the amount of federal money invested in US research and development had been remarkably stable over the years, closely tracking the country’s gross domestic product (GDP). He therefore concluded that there’d be little point in pushing for a larger influx of federal money, but that what might improve things would be better ways to assess its impact.

Marburger had to confront the issue within months of taking office. The 2001 terrorist attack on the World Trade Center had led the Bush administration to restrict visas for foreign nationals, and Marburger was asked to speak to a National Science Board (NSB) taskforce about the “impact of security policies on the science and engineering workforce”. By then many scientists had denounced the restrictions as unduly harsh and harmful to US research. Marburger was sympathetic, yet all the evidence he could drum up was anecdotal. What little data he could find on the subject was of uncertain reliability.

“I am not at all confident,” he told the NSB, “that the right questions are being asked or answered to provide guidance for action. We have workforce data that I do not understand how to use, and we have workforce questions whose answers would seem to require more than merely data.”

The same problem, Marburger soon realized, plagued virtually all aspects of science policy. He was disappointed in the techniques used in the US to determine appropriations for research funding at the federal level, and found that science policy was mainly driven by advocates, anecdotes and intuition – rather than data and models. This lack of a scientific basis for policy-making, he felt, threatened the credibility of policy advice, and was what led him to call for a “science of science policy”. Marburger made the call at the 2005 Forum on Science and Technology Policy – an annual event held in Washington, sponsored by the American Association for the Advancement of Science (AAAS). He also made his case in an editorial in Science magazine that May (308 1087).

Crossover research

Marburger’s proposal had – and still has – its critics, who are sceptical that anything of the kind could be achieved given the varied and ever-changing state of the research ecosystem. But has it had any impact on science policy-making in Washington in the 10 years since he proposed it? I posed this question to Kaye Husbands Fealing, an economist who is chair of the School of Public Policy at the Georgia Institute of Technology, who had organized a session at the AAAS policy forum in Washington on the 10th anniversary of Marburger’s proposal.

“Definitely,” replied Fealing, who knew Marburger, having served as programme director at the National Science Foundation when he developed his initiative and having co-edited Marburger’s handbook. “It helped to get economists, sociologists, psychologists, political scientists and other academics interested in bringing ideas from their realms to bear on science policy.”

I asked her if these ideas always bore fruit. “Not always,” she replied. “There’s often a distance between what academics think is helpful and what policy-makers find useful.”

I then wondered if she knew of any examples in the wake of Marburger’s proposal of successful interactions between academics and policy-makers. “Yes,” she said, citing a 2006 paper by Harvard University labour economist Richard Freeman on the value of increasing graduate research fellowships. It included proposals that White House science policy-makers found convincing enough to implement.

“The neat thing,” Fealing told me, “was that this study was done by an academic for academic reasons, but ended up influencing policy-makers. I’m not saying this happens a lot. But Marburger influenced some academics to say of their work, ‘This could have implications for policy in these ways,’ and some policy-makers to be willing to listen and think, ‘We might be able to do this a bit differently.’ ”

The critical point

During a science policy roundtable at this year’s AAAS meeting, I asked my neighbour why science policy is so hard to study and improve. “It’s trophological,” he said. Unfamiliar with the term, I asked him what he meant. “Trophology means the study of food chains,” he explained. “Science funding involves nourishing strings of different animals – government, funding agencies, universities, labs, departments, consumers – belonging to many different food chains.”

I found the metaphor enlightening. Making science policy means having to create, maintain and improve research food chains. The animals you’re feeding have different – and changing – nutrition requirements and are themselves evolving, and how you feed one can adversely affect others. That, among other things, is what makes success in science policy so difficult to measure. And it is why Marburger’s bold ideas on policy metrics have a long way to go before they see reality.
