Research funding should reward unpredictability
Jan 7, 2008
Particle-physics experiments can cost hundreds of millions or even billions of dollars — and with that kind of money at stake, debates over which projects are worthy of funding can be heated and complex. While those who make big funding decisions try to be as objective as possible, the physicist Bruce Knuteson of the Massachusetts Institute of Technology in the US believes that agencies would have a much better idea of where to spend their money if they quantified the “scientific merit” of particular research proposals. He has shown how to calculate such a quantity and has also demonstrated that it is often lesser-known experiments that lead to the biggest discoveries.
In formulating scientific merit, Knuteson used the principles of information theory originally put forward by the American mathematician Claude Shannon in the 1940s. According to Shannon, the merit of a particular result from a scientific experiment is simply a measure of how improbable that result was considered to be before the experiment was carried out — in other words, how much new information it generates. The merit of a proposed experiment is then the probability-weighted sum of the merits of all its possible results.
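Shannon's idea can be sketched in a few lines of Python: the information content of a result is the negative log of its prior probability, and the expected merit of an experiment is the probability-weighted sum over its possible results (the Shannon entropy of the outcome distribution). This is an illustrative sketch of the general principle, not code from Knuteson's paper.

```python
import math

def surprisal(p):
    """Information content (in bits) of a result whose prior probability was p."""
    return -math.log2(p)

def expected_merit(outcome_probs):
    """Merit of a proposed experiment: the probability-weighted surprisal
    of each possible result, i.e. the Shannon entropy of the outcomes."""
    return sum(p * surprisal(p) for p in outcome_probs if p > 0)

# A near-certain confirmation carries almost no information (~0.08 bits)...
print(expected_merit([0.99, 0.01]))
# ...while a genuine toss-up carries a full bit.
print(expected_merit([0.5, 0.5]))
```

The formula makes Knuteson's point directly: an experiment whose outcome everyone can already predict scores close to zero, however large its budget.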
Results rarely clear-cut
To convert this concept into a form that could be used to assess research proposals in practice, Knuteson also took into account the fact that experiments do not always generate clear-cut results. So an experiment might have a certain merit if it can be used to look for a hitherto undiscovered particle, but this merit would be reduced if the experiment could not definitively rule the new particle in or out.
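One way to express this penalty for inconclusive results is as the mutual information between the hypothesis (does the particle exist?) and what the experiment actually reports: a perfectly decisive search extracts all the prior uncertainty, while a noisy one extracts much less. The parameters below (detection efficiency, false-positive rate) are an illustrative modelling assumption, not Knuteson's exact formulation.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def merit_with_noise(prior, eff, fake):
    """Mutual information (bits) between a yes/no hypothesis and the outcome.

    prior -- prior probability that the new particle exists
    eff   -- P(report 'seen' | particle exists)
    fake  -- P(report 'seen' | particle does not exist)
    """
    p_seen = prior * eff + (1 - prior) * fake
    h_result = entropy([p_seen, 1 - p_seen])
    # Entropy left in the result once the true hypothesis is known
    h_given = prior * entropy([eff, 1 - eff]) + (1 - prior) * entropy([fake, 1 - fake])
    return h_result - h_given

# A definitive search on a 50:50 question yields the full bit...
print(merit_with_noise(0.5, 1.0, 0.0))
# ...an inconclusive one (60% efficiency, 40% fakes) yields far less.
print(merit_with_noise(0.5, 0.6, 0.4))
```

The merit can never exceed the prior uncertainty about the hypothesis, which captures the reduction Knuteson describes: an experiment that cannot definitively rule the particle in or out is worth strictly less than one that can.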
Knuteson applied his thinking to assess both the merit of future experiments and the merit of past research (arXiv:0712.3572). In the former category he also took his figure of merit and divided it by the cost of the experiment to work out the project’s “bang per buck”. He found that the Large Hadron Collider, which is due to switch on at the CERN laboratory in Geneva later this year, has huge merit because it will be capable of so many potential discoveries. He pointed out that in general his measure of scientific merit rewards experiments that guarantee themselves surprises by pushing the energy boundary. “Even sporting a multi-billion dollar price tag, the LHC is reasonable value for money compared to alternatives,” he added.
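The "bang per buck" comparison amounts to ranking proposals by merit divided by cost. The names and numbers below are invented purely to show the mechanics of the ranking; they do not come from Knuteson's analysis.

```python
# Hypothetical experiments: (name, merit in bits, cost in $M).
# All figures are illustrative assumptions, not real proposals.
experiments = [
    ("Collider A", 12.0, 8000),
    ("Fixed-target B", 3.0, 150),
    ("Tabletop C", 0.5, 2),
]

# Rank by "bang per buck": figure of merit divided by cost.
ranked = sorted(experiments, key=lambda e: e[1] / e[2], reverse=True)
for name, merit, cost in ranked:
    print(f"{name}: {merit / cost:.4f} bits per $M")
```

Note that a raw merit ranking and a merit-per-cost ranking can disagree, which is why Knuteson reports both for future experiments.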
Regarding past results, Knuteson calculated that the unexpected discoveries of the tau lepton and the J/psi particle in the 1970s were far more worthwhile than the higher-profile but more predictable discoveries of the W and Z bosons and the top quark. He noted that the groups responsible for the latter discoveries were significantly larger, and therefore capable of promoting their results more widely. Knuteson also argued that a non-discovery of the Higgs boson at CERN would be far more worthwhile than the particle’s discovery, since a non-discovery would be more surprising.
According to Knuteson, funding agencies for high-energy physics should require grant applicants to provide a figure of merit for their proposed experiments, adding that doing so would “tighten” the often “woolly” arguments found in proposals. While limiting his analysis to high-energy physics (since, he says, it is a rare example of a field that does not have to take account of potential economic or social return), he concedes that even within the confines of his field the technique has its problems.
For one thing, the technique assumes agreement on what counts as a “qualitatively new” discovery; Knuteson points out, for example, that the discovery of a new type of meson might be unanticipated but would add very little to our understanding of nature. He also recognizes that some will object to the whole idea of trying to quantify the scientific merit of experiments. But this, he said, is already done by funding agencies, even if the process is implicit rather than explicit. He believes it is better if these evaluations are made “in the sharpest, most open, most quantifiable and scientifically best motivated framework possible.”
About the author
Edwin Cartlidge is a science journalist based in Italy