
Better science policy

13 Aug 2015 Robert P Crease
Taken from the August 2015 issue of Physics World

An ambitious programme to improve science policy decision-making is 10 years old. Robert P Crease investigates

The physicist John H Marburger, who died in 2011, had a unique perspective on science policy. He spent many years in the research world, first as president of Stony Brook University and then as director of the Brookhaven National Laboratory, before spending seven years as science adviser to US president George W Bush. Good science policies, he realized, don’t just involve deciding who gets how much money; they also depend on the basis on which those decisions are made.

Keen to give policy-makers a helping hand, Marburger called for the creation of a “science of science policy” that would offer them new tools, metrics and models. His initiative, first unveiled 10 years ago, is discussed in Science Policy Up Close (Harvard University Press 2015) – a collection of Marburger’s writings that I edited – and in a 2011 handbook he co-edited called The Science of Science Policy. Marburger’s thoughts are perceptive, but his ambitions are yet to be realized.

More than money

Early on during his time in Washington, DC, Marburger discovered that the amount of federal money invested in US research and development had been remarkably stable over the years, closely tracking the country’s gross domestic product (GDP). He therefore concluded that there’d be little point in pushing for a larger influx of federal money, but that what might improve things would be better ways to assess its impact.

Marburger had to confront the issue within months of taking office. The 2001 terrorist attack on the World Trade Center had led the Bush administration to restrict visas for foreign nationals, and Marburger was asked to speak to a National Science Board (NSB) taskforce about the “impact of security policies on the science and engineering workforce”. By then many scientists had denounced the restrictions as unduly harsh and harmful to US research. Marburger was sympathetic, yet all the evidence he could drum up was anecdotal, and what little data he could find on the subject was of uncertain reliability.

“I am not at all confident,” he told the NSB, “that the right questions are being asked or answered to provide guidance for action. We have workforce data that I do not understand how to use, and we have workforce questions whose answers would seem to require more than merely data.”

The same problem, Marburger soon realized, plagued virtually all aspects of science policy. He was disappointed in the techniques used in the US to determine appropriations for research funding at the federal level, and found that science policy was mainly driven by advocates, anecdotes and intuition – rather than data and models. This lack of a scientific basis for policy-making, he felt, threatened the credibility of policy advice, and was what led him to call for a “science of science policy”. Marburger made the call at the 2005 Forum on Science and Technology Policy – an annual event held in Washington, sponsored by the American Association for the Advancement of Science (AAAS). He also made his case in an editorial in Science magazine that May (308 1087).

Crossover research

Marburger’s proposal had – and still has – its critics, who are sceptical that anything of the kind could be achieved given the varied and ever-changing state of the research ecosystem. But has it had any impact on science policy-making in Washington in the 10 years since he proposed it? I posed this question to Kaye Husbands Fealing, an economist who chairs the School of Public Policy at the Georgia Institute of Technology and who organized a session at the AAAS policy forum in Washington to mark the 10th anniversary of Marburger’s proposal.

“Definitely,” replied Fealing, who knew Marburger, having served as programme director at the National Science Foundation when he developed his initiative and having co-edited Marburger’s handbook. “It helped to get economists, sociologists, psychologists, political scientists and other academics interested in bringing ideas from their realms to bear on science policy.”

I asked her if these ideas always bore fruit. “Not always,” she replied. “There’s often a distance between what academics think is helpful and what policy-makers find useful.”

I then wondered if she knew of any examples in the wake of Marburger’s proposal of successful interactions between academics and policy-makers. “Yes,” she said, citing a 2006 paper by Harvard University labour economist Richard Freeman on the value of increasing graduate research fellowships. It included proposals that White House science policy-makers found convincing enough to implement.

“The neat thing,” Fealing told me, “was that this study was done by an academic for academic reasons, but ended up influencing policy-makers. I’m not saying this happens a lot. But Marburger influenced some academics to say of their work, ‘This could have implications for policy in these ways,’ and some policy-makers to be willing to listen and think, ‘We might be able to do this a bit differently.’ ”

The critical point

During a science policy roundtable at this year’s AAAS meeting, I asked my neighbour why science policy is so hard to study and improve. “It’s trophological,” he said. Unfamiliar with the term, I asked him what he meant. “Trophology means the study of food chains,” he explained. “Science funding involves nourishing strings of different animals – government, funding agencies, universities, labs, departments, consumers – belonging to many different food chains.”

I found the metaphor enlightening. Making science policy means having to create, maintain and improve research food chains. The animals you’re feeding have different – and changing – nutritional requirements and are themselves evolving, and how you feed one can adversely affect others. That, among other things, is what makes success in science policy so difficult to measure. And it is why Marburger’s bold ideas on policy metrics have a long way to go before they become reality.

Copyright © 2025 by IOP Publishing Ltd and individual contributors