With its complex procedures, opaque evaluations and unconscious biases, applying for research funding is no mean feat. Dalmeet Singh Chawla investigates whether it is time to revamp the grant-funding process
Jessica Wade is hoping to apply for her first grant next year but is concerned about a lack of transparency. “You have no idea who’s evaluated you or what criteria they are using,” says Wade, a postdoctoral researcher in experimental physics at Imperial College London. Under the current system, she says, a researcher is more likely to get funded if they have been funded before, if they are from certain high-flying institutions or if they have big names on their applications. For Wade – a novice in the grant game – the current system feels “intrinsically unfair”.
Not that grant-application veterans have an easy ride. Senior scientists often gripe about how much time they spend applying for money rather than doing real research – and now there’s evidence to back them up. In 2013 Adrian Barnett, a statistician at the Queensland University of Technology in Brisbane, Australia, and colleagues published a study finding that researchers applying for money from the National Health and Medical Research Council of Australia took an average of 34 days to prepare each proposal.
A key part of any successful application is making it through peer review. The success rate at the Engineering and Physical Sciences Research Council (EPSRC), which funds many UK academic physicists, is around 32%. That’s fairly high in the funding world. Recent figures for the Biotechnology and Biological Sciences Research Council, for example, are around 24%.
The situation’s even harder if you’re from a minority group. A 2011 study commissioned by the US National Institutes of Health (NIH), for example, found that black applicants were 35% less likely to receive grants from the agency than white applicants. Since then, the NIH has invested $250m in diversifying biomedical science and examining its own internal biases. What’s more, one recent study in the Netherlands (PNAS 10.1073/pnas.1719557115) has shown that early success with grant applications increases the chances of success in later applications.
Elisabeth Pier, a data strategist at non-profit firm Education Analytics in Madison, Wisconsin, thinks funders should assess their own biases. Pier and her colleagues have shown that different grant reviewers evaluating the same applications generally have low levels of agreement.
Stuart Buck, vice-president of research at the private Laura and John Arnold Foundation – which mostly funds work related to criminal justice, public accountability, research integrity and education reform – says his organization doesn’t focus on how many papers a researcher has written, or which journals they are published in. Rather, the foundation carries out a more direct assessment. For instance, if the applicant is looking to carry out a randomized controlled trial (RCT) of a public policy issue, the foundation would check whether they had successfully run an RCT before. In recent years, the foundation has funded studies that attempt to replicate previously published findings. Investing in such studies can highlight the weaknesses of a discipline, such as the lack of data- or code-sharing among its researchers. The foundation also pays reviewers for their work, Buck says, with rates varying on a case-by-case basis.
EPSRC, on the other hand, doesn’t pay reviewers but has been working to recognize their efforts, agency executive chair Philip Nelson told Physics World. The council, which awards some £800m in grants a year, also monitors reviewers’ performance and makes sure no more than one reviewer recommended by the applicant is appointed. The agency sends reviewers’ comments back to applicants without disclosing the reviewers’ identities, so applicants get a say if they feel their proposals have been treated unfairly. Overall, he argues, the agency’s application success rate is not unreasonable.
Most funders, including EPSRC, rely on single-blind peer review of grant applications, where peer reviewers know the identities of candidates but not vice versa. The general argument for this system is that referees need to know the candidates’ history, to contextualize their new application with their previous work. But it means that researchers who have a good track record will have the edge over junior researchers with less experience. Some, therefore, think “double-blind” peer review – where both reviewers and applicants remain unnamed – would work better. EPSRC is experimenting with this model, but Nelson notes that evidence suggests there are no biases in the agency’s current system.
When it comes to manuscript peer review in physics, double-blind seems to be gaining ground. Last year, for instance, IOP Publishing (which publishes Physics World) carried out a trial offering the double-blind system as an option for two of its journals. It found that around 20% of submissions were filed as double-blind, with the model most popular among authors from India, Africa and the Middle East.
Paul Coxon, a materials scientist at the University of Cambridge, says the double-blind system is sometimes hard to implement, especially for smaller fields, where a reviewer may still be able to guess who the application is coming from.
Another, more recent, development in the scholarly publishing world is open peer review, where reviewers and authors know each other’s identities. Wade says she would prefer this system when applying for grants. “It would make people be less nasty,” she says. “But if it’s going to be any type of blind, then it should be double-blind.”
One alternative proposal is for experts to stop trying to pick the best research to fund and instead rely on a lottery to allocate funds. The New Zealand Health Research Council has been experimenting with such a system for its “explorer” grants, where a brief initial screen picks out “transformative and viable” proposals, which are then randomly allocated money. Lotteries would save a lot of time and eliminate all forms of potential bias, says Barnett. They would also allow more off-the-wall ideas to get funded, he says, which wouldn’t receive money under the traditional system. “Being rejected by a lottery is better than by a person,” he notes.
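For the curious, the sketch below shows how such a two-stage scheme might work in principle: proposals are first screened for basic viability, then winners are drawn at random from the viable pool. The data layout, the `viable` flag and the function name are illustrative assumptions, not the council’s actual procedure.

```python
import random

def lottery_allocation(proposals, slots, seed=None):
    """Two-stage sketch: screen proposals, then fund a random sample.
    Purely illustrative -- not any funder's real process."""
    rng = random.Random(seed)
    # Stage 1: keep only proposals judged "transformative and viable"
    viable = [p for p in proposals if p["viable"]]
    # Stage 2: draw the winners at random from that pool
    return rng.sample(viable, min(slots, len(viable)))

proposals = [
    {"id": "P1", "viable": True},
    {"id": "P2", "viable": False},
    {"id": "P3", "viable": True},
    {"id": "P4", "viable": True},
]
print([p["id"] for p in lottery_allocation(proposals, slots=2, seed=42)])
```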
A lottery is also being tested as part of the “Experiment!” initiative at the Volkswagen Foundation in Germany, which funds the humanities and social sciences as well as science and technology in higher education and research. Under the scheme, 120–140 projects are first pre-selected internally, out of which 15–20 grants are selected by a jury of scientists using a double-blind system and another 15–20 are selected by a lottery.
The luck of the draw
Many scientists think that existing funding systems are already pretty much a lottery, even if unintentionally. “Academic careers depend on luck,” says Coxon. “Maybe having something that is genuinely random and based on the luck of a draw has some sort of appeal.” But Wade prefers conventional peer review for her first grant application, noting that, whether or not she gets the grant, constructive feedback would help her with future applications.
Two years ago, computer scientist Johan Bollen of Indiana University Bloomington and ecologist Marten Scheffer of Wageningen University in the Netherlands proposed yet another funding model, in which researchers no longer apply for grants at all – instead, each receives an equal amount of funding annually, from which they must donate a fixed percentage to other scientists (see Physics World August 2016 issue). The model was criticized at first, but recently the Dutch parliament asked the Netherlands Organisation for Scientific Research to initiate a pilot project to test the idea. Wade, however, says Bollen and Scheffer’s system may introduce more bias, since researchers may simply pass money on to their friends rather than to those who most deserve it. She suggests instead reinvesting the “pointless money” left over at the end of a project, rather than spending it on unnecessary equipment.
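The mechanics of this “self-organized” model are simple enough to simulate. Below is a minimal sketch, assuming four researchers, an equal base grant and a 50% donation rate, with each donation going to a randomly chosen colleague; in the actual proposal researchers choose whom to support, and every parameter value here is a hypothetical stand-in.

```python
import random

def funding_round(funds, base_grant, donation_rate, rng):
    """One round of a Bollen/Scheffer-style model: each researcher
    receives the same base grant, then must pass a fixed fraction of
    last year's funding to a colleague. Recipients are picked at
    random here; the remaining share is spent on research."""
    names = list(funds)
    donations = dict.fromkeys(names, 0.0)
    for donor in names:
        gift = funds[donor] * donation_rate
        recipient = rng.choice([n for n in names if n != donor])  # no self-donations
        donations[recipient] += gift
    # Next year's funding: equal base grant plus colleagues' donations
    return {name: base_grant + donations[name] for name in names}

rng = random.Random(1)
funds = dict.fromkeys(["A", "B", "C", "D"], 100.0)  # everyone starts equal
for _ in range(5):
    funds = funding_round(funds, base_grant=100.0, donation_rate=0.5, rng=rng)
print(funds)
```

Swap the random choice for researchers’ real preferences and money starts flowing toward whoever colleagues favour – which is precisely the cronyism risk Wade identifies.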
Barnett has applied for a grant to develop another possible fix: using video applications to speed things up. But no matter the system, it seems people will find a way to game it. Barnett has heard of academics applying for grants after already doing the work, but before publishing it. “You can write a very good application if you’ve already done the work because you know what happens,” he says. With that money, they do new work, and repeat the process. Some academics, Barnett notes, also agree never to co-author papers together so that they can review each other’s papers and give each other favourable feedback.
Last year, the NIH discovered that some researchers involved in its funding process had violated its confidentiality rules. Earlier this year, the agency said it was re-evaluating 60 applications and had begun taking disciplinary action against academics who broke the rules.
“A consensus is building that funding should be less contingent on proposal submissions and peer review, should be less all-or-nothing, and should involve less overhead and less inequality,” says Bollen. “I think the future will be more about funding people and teams instead of projects.”