Should Grants be Selected by Lottery?

Nick Arnosti


Vox Article

mBio Advocate


Contest Models Paper

Relate to my work with Matt

Also relates to money-burning paper: if quality is light-tailed, better to give out funds randomly than to try to pick the best.

One study, looking at high-quality proposals, found there was virtually no agreement on their merits: two different reviewers might reach vastly different conclusions about whether the same grant should be approved. Another analysis looked at successful grants and estimated that 59 percent of them could have been rejected due to random variability in scoring. Clearly, above some quality threshold, the process is deeply subjective and not a real measure of quality.
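A minimal simulation can make the "random variability" point concrete. The sketch below (all parameters hypothetical, not taken from the studies above) gives each proposal a true quality, adds independent reviewer noise in two rounds, funds the top 20 percent by observed score in each round, and then asks what fraction of round-one winners would have been rejected on re-review:

```python
import random

random.seed(0)

N = 100_000      # hypothetical pool of proposals
NOISE = 1.0      # reviewer noise (std dev), same scale as the true quality spread
FUND_TOP = 0.2   # fund the top 20% by observed score

# True quality, plus two independent noisy review scores per proposal
true_q = [random.gauss(0, 1) for _ in range(N)]
score_a = [q + random.gauss(0, NOISE) for q in true_q]
score_b = [q + random.gauss(0, NOISE) for q in true_q]

# Funding cutoff in each round: the score of the last funded proposal
cutoff_a = sorted(score_a, reverse=True)[int(FUND_TOP * N) - 1]
cutoff_b = sorted(score_b, reverse=True)[int(FUND_TOP * N) - 1]

funded_a = [s >= cutoff_a for s in score_a]
funded_b = [s >= cutoff_b for s in score_b]

# Among proposals funded in round A, what fraction is rejected on re-review?
flipped = sum(a and not b for a, b in zip(funded_a, funded_b))
print(flipped / sum(funded_a))
```

With reviewer noise comparable to the spread in true quality, a large share of funded proposals flip to rejected on an independent re-review; the exact fraction depends entirely on the assumed noise level and funding rate.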

Furthermore, all that time and effort doesn't even help the best grants rise to the top. Among grant proposals that are already pretty good, ratings are highly subjective: two scientists will arrive at profoundly different evaluations of the same grant, so whether it is approved or rejected is mostly a matter of chance. One study tested this by asking peer reviewers to review high-quality NIH grant applications as if they were making a grant decision, then computing the reviewers' inter-rater reliability, that is, how strongly their judgments were correlated. An inter-rater reliability above 0.7 is considered pretty good. The inter-rater reliability for grant evaluations? Near zero.
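One statistical reason reliability collapses on high-quality subsets is range restriction: once applications are pre-screened to the top tier, the remaining quality differences are small relative to reviewer noise, so even unbiased reviewers barely agree. A sketch under assumed parameters (hypothetical noise levels, not from the study above):

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

random.seed(1)
N = 50_000
true_q = [random.gauss(0, 1) for _ in range(N)]
r1 = [q + random.gauss(0, 1) for q in true_q]  # reviewer 1's noisy scores
r2 = [q + random.gauss(0, 1) for q in true_q]  # reviewer 2's noisy scores

full_pool = pearson(r1, r2)

# Restrict to applications already known to be high quality (top 10%):
cut = sorted(true_q, reverse=True)[N // 10]
idx = [i for i, q in enumerate(true_q) if q > cut]
top_only = pearson([r1[i] for i in idx], [r2[i] for i in idx])

print(round(full_pool, 2), round(top_only, 2))
```

On the full pool the two reviewers correlate substantially; on the pre-screened top decile the same reviewers, with the same noise, correlate far more weakly, which is the pattern the inter-rater reliability studies report.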

If we use a lottery, must ask: should outcomes be correlated across years? Mention relationship to hunting.