Scientific enterprise

Figuring out a handshake

12 Jan 2017

How can we fix the replication crisis in science? Bruce Knuteson offers a solution

[Image: handshake] Human forces: could an online knowledge exchange help to tackle the ongoing replication crisis in academic science? (Courtesy: iStockphoto/russssia)

In April 2008 I moved from the physics faculty at the Massachusetts Institute of Technology to a hedge fund in Manhattan with a box-seat view of the Great Recession. My 23rd-floor corner office near Times Square offered a full view of Lehman Brothers’ headquarters a few blocks away. The investment bank Bear Stearns had imploded a month before I arrived. Lehman Brothers collapsed in the cityscape visible through the window behind my monitor. A former treasury secretary occupied the office adjacent to mine. I had left the arcane world of particle physics for a world-class education on incentives in finance.

Throughout the financial sector, people get paid to take bets with other people’s money. Such gamblers stand to make a lot if they are right. But they stand to lose comparatively little if they are wrong. And investors lack the information needed to properly police those gambling on their behalf. This is all you need to know to confidently predict an unending series of financial crises that will call into question wealth you mistakenly think you have. Such a system inevitably self-organizes into a common bet. (Large markets in which nearly everyone is naturally on the same side – such as stock and real-estate markets – are particularly convenient for this purpose.) The common bet allows the gamblers to cash in for a few years, and provides them with herd protection when things fall apart.

Heads win big …

Halting financial crises therefore requires the elimination of such “heads win big, tails lose little” incentives. Any other purported fix is a joke. Regulation can push the problem around, but it cannot possibly fix an incentive system with such a deep structural flaw. The person managing your money gets paid well if they turn out to be right; they do not lose much if they eventually turn out to be wrong; and in the meantime that person collects a salary. Even if they know nothing, such a person has an incentive to place bets, an incentive to convince you they know something, and an incentive to genuinely convince themselves they know something. The last of these provides useful protection against any future charges of fraud.

How does any of this relate to science? Well, academics are incentivized to publish papers. The rewards are clear and present: publications and citations lead to grants, promotions and scholarly accolades. Penalties for being wrong are often comparatively fuzzy and distant. The scientist reaps rewards if they turn out to be right; they often do not lose much if they eventually turn out to be wrong; and in the meantime that person collects a salary. Even if they know nothing, such a person has an incentive to publish papers, an incentive to convince you they know something, and an incentive to genuinely convince themselves they know something. The last of these provides useful protection against any future charges of fraud.

This is all you need to know to confidently predict an ongoing replication crisis in academic science that calls into question “wealth” you mistakenly think you have: useful, accurate knowledge about how nature works. Halting the crisis requires a fundamental change to the underlying incentive. Any other purported fix is a joke. Regulation – in the form of peer review, transparency requirements and so on – can push the problem around, but it cannot possibly fix an incentive system with such a deep structural flaw.

This is uncomfortable to acknowledge. Most of us strongly prefer to assume people do the right thing, whatever their incentives. We prefer to think incentives do not matter, or at least do not matter much. In the financial sector, someone will eventually develop a compelling alternative to “heads win big, tails lose little” incentives. I do not know what that alternative will be, but it will probably be simple. It will involve material loss for those who turn out to be wrong. It will provide a welcome alternative for those of us who prefer to invest with people subject to carrots and sticks aligned with our interests.

Why do scientists not simply sell what they learn from their research?

In the case of science, the appropriate alternative occurred to me a couple of years ago, as I found myself wondering why scientists do not simply sell what they learn from their research. Selling information is tricky. If academic A wants to sell information to quant Q, A needs to convince Q that the information is useful (to Q) and accurate before revealing the information. This is usually hard to do, which is why scientists do not sell what they learn. It is not for lack of desire or potential customers – any such lack is a consequence, not a cause. Instead, it is because there is no mechanism for doing so.

An overcomplicated system has expanded in the resulting vacuum: dissemination through journals; quality control through peer review; funding by taxpayers rather than directly by consumers; assessment of value by citation counts. This system cumbersomely and ineffectively compensates for an extraordinary historical oversight: nobody, apparently, had ever bothered to figure out some sort of handshake allowing A to sell information to Q.
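
To make the difficulty concrete, here is a deliberately toy sketch, in Python, of one form such a handshake could take: an escrow in which A posts a bond that it forfeits if the information turns out to be wrong, so that Q need not trust A before paying. The class, names and numbers are illustrative assumptions only; this is not a description of how Kn-X actually works.

# Toy escrow-style handshake (illustration only, not Kn-X's actual mechanism).
# Seller A posts a claim plus a bond it forfeits if the claim is wrong;
# buyer Q pays into escrow sight unseen and is made whole if A is wrong.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Escrow:
    bond: float = 0.0            # deposited by seller A
    payment: float = 0.0         # deposited by buyer Q
    claim: Optional[str] = None  # revealed to Q only after Q commits payment
    settled: bool = False

    def post_claim(self, claim: str, bond: float) -> None:
        """A deposits a claim and a bond it loses if the claim fails verification."""
        self.claim, self.bond = claim, bond

    def commit_payment(self, payment: float) -> str:
        """Q commits payment without seeing the claim, then receives it."""
        self.payment = payment
        return self.claim

    def settle(self, claim_verified: bool) -> dict:
        """Split the escrowed funds once the claim is independently checked."""
        if self.settled:
            raise RuntimeError("already settled")
        self.settled = True
        if claim_verified:
            # A was right: A keeps its bond and earns Q's payment.
            return {"A": self.bond + self.payment, "Q": 0.0}
        # A was wrong: A forfeits its bond; Q is refunded and compensated.
        return {"A": 0.0, "Q": self.payment + self.bond}


if __name__ == "__main__":
    deal = Escrow()
    deal.post_claim("parameter X lies between 1.2 and 1.4", bond=100.0)
    info = deal.commit_payment(payment=50.0)   # Q now holds the information
    print(info)
    print(deal.settle(claim_verified=False))   # A was wrong: {'A': 0.0, 'Q': 150.0}

In this toy version the bond does the convincing: A only profits if the claim survives independent verification, and Q is made whole (and then some) if it does not.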

After figuring out the necessary handshake, I built Kn-X (pronounced like the word “connects”), an online knowledge exchange through which A can finally sell information to Q. Kn-X is simple. It involves material loss for those who turn out to be wrong. It provides a welcome alternative for those of us who prefer to base important decisions on information from people subject to carrots and sticks aligned with our interests. It might even address a crisis or two. www.kn-x.com
