

Reducing grant application numbers: The shame stick or prize carrot?

By Adrian Barnett, February 11, 2015


Pope Boniface VIII commissioned the artist Giotto based on his ability to draw a perfect circle. Similarly, the ability of a statistician is in proportion to how well they can draw a normal distribution (although I don’t have the data to back this up).

Sometimes there are wonderfully quick and cheap ways to test ability, but most of the time testing ability is a tiresome and costly process.

Awarding scarce research funds to the most deserving applicants is a great example of a costly process.

Researchers spend an average of 38 working days preparing an NHMRC Project Grant proposal, but with a success rate of just 15%, over 500 years of researcher time went into failed applications in 2014.

This time would likely have been better spent on actual research.
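As a rough check on that figure, the back-of-the-envelope calculation below reproduces it in a few lines of Python. The number of proposals in the round and the working days in a researcher's year are illustrative assumptions, not figures from this post.

```python
# Back-of-the-envelope check of the "500 years" figure.
# Assumptions (not from the post): roughly 3,600 proposals in the round
# and roughly 230 working days in a researcher's year.
days_per_proposal = 38    # average preparation time (from the post)
success_rate = 0.15       # NHMRC Project Grant success rate (from the post)
n_proposals = 3600        # assumed number of proposals in 2014
working_days_per_year = 230  # assumed

failed = n_proposals * (1 - success_rate)
wasted_days = failed * days_per_proposal
wasted_years = wasted_days / working_days_per_year

print(f"{failed:.0f} failed proposals ≈ {wasted_years:.0f} researcher-years")
# prints: 3060 failed proposals ≈ 506 researcher-years
```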

Many applications are non-competitive and could possibly be culled early, saving time for both researchers and funding agencies.

Our analysis of the major health and medical scheme in Australia estimated that 61% of applications were never likely to be funded.

The ARC and NHMRC would both like to receive fewer applications, especially applications that have little chance of success. Higher application numbers make the already difficult job of finding qualified peer reviewers even harder and increase the overall administrative burden.

Both the ARC and NHMRC have tried to reduce application numbers by publishing institutional success rates. The theory is that institutions with a low success rate will be shamed into doing more internally to screen out non-competitive applications. However, institutions do not incur the major cost of applying; their researchers do, and researchers are motivated to submit applications even when they know there is only a small chance of success.

Researchers may also gain value from applying, or may be under pressure from their colleagues or institution to apply.

Using a prize

Our proposal to reduce application numbers is to reserve some of the funding pool as a prize that is awarded to the institutions with the highest success rates. This would provide a strong financial incentive for institutions to increase their success rate, and the logical change for most institutions would be to cull non-competitive applications (rather than trying to find more winners).

Under the current system the very real costs of applying are externalised to researchers and funding agencies. Institutions bear relatively low costs from high application numbers. A prize for high success rates imposes a cost on institutions for submitting large numbers of applications.

Two key decisions are how big the prize should be and how it should be awarded. The larger the prize, the greater the incentive to increase success rates and the greater the downward pressure on application numbers. Too large a prize could encourage institutions to submit only a handful of applications, which would increase the variance in funding success and may lead to boom and bust years for some institutions. To test the water, the prize proportion could be set at a low 5% in the first year, with increases in later years after monitoring the behaviour of institutions.

The prize would be allocated to institutions whose success rate is higher than the overall average. Institutions with a success rate below the average would get no prize money. For those above the average, their share of the prize would be proportional to the difference between their success rate and the average, so that those with a higher success rate are more rewarded.
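A minimal sketch of how that allocation rule could work is shown below, assuming a simple (unweighted) average across institutions; the institution names, success rates and prize pool are invented for illustration.

```python
# Sketch of the proposed allocation rule: institutions above the overall
# average success rate share the prize in proportion to how far they sit
# above the average; institutions at or below the average receive nothing.

def allocate_prize(success_rates: dict[str, float], prize_pool: float) -> dict[str, float]:
    average = sum(success_rates.values()) / len(success_rates)
    excess = {inst: rate - average
              for inst, rate in success_rates.items() if rate > average}
    total_excess = sum(excess.values())
    return {inst: prize_pool * e / total_excess for inst, e in excess.items()}

if __name__ == "__main__":
    # Hypothetical institutions and success rates
    rates = {"Institution A": 0.25, "Institution B": 0.18, "Institution C": 0.10}
    print(allocate_prize(rates, prize_pool=1_000_000))
    # Average rate ≈ 0.177, so only A and B share the prize,
    # with A taking by far the larger share.
```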

Rewarding quality

Our proposed system still rewards high quality, but at a lower cost because there would be fewer applications. The burden on peer reviewers would be reduced, which may increase the quality of peer review (see previous AusHSI blog). The time saved by researchers from submitting fewer applications could be re-invested into actual research. The largest time savings would be for those institutions that selected applicants early, before they had written a full application.

We believe a prize could be accommodated within many current schemes. For example, the Australian Research Council’s mandate is to “reward excellence”, and our proposal does that as it relies on peer review and rewards institutions that produce excellent ideas. The US National Institutes of Health’s mission is to “facilitate research […] cost-effectively”, and using a prize would likely be more cost-effective than the current system as it could greatly reduce the time spent by applicants and peer reviewers.

Potential problems

There are potential problems with using a prize. It relies on institutions to allocate the prize money to other good researchers. There would be a strong incentive for them to invest wisely to produce more excellent researchers capable of winning future prizes. Institutions could use the prize to fund riskier research ideas, which often do badly in conservative funding schemes but often have more scientific and public benefit.

A prize based system would put extra pressure on those researchers who are selected to represent the institution, and researchers are already under pressure when it comes to winning funding. Highly successful researchers could be head-hunted as their value increases.

Researchers who are not selected by their institution may be unhappy and argue that the new system prevents them from getting their good ideas funded. However, when success rates are under 15%, many good ideas already go unfunded. Any reduction in opportunity for some should be balanced against the greater good of a lower cost system that could increase overall scientific productivity. Many current funding systems are drowning under the weight of application numbers and require large increases in bureaucratic costs to cope.

The UK Engineering and Physical Sciences Research Council took the unpopular step of blacklisting unsuccessful researchers, although this has drastically reduced application numbers.

Using a prize could be viewed by some as an alternative way of blacklisting researchers, but with the institutions doing the dirty work rather than the funding agencies.

It is not certain how institutions would react to a prize. Institutions would only benefit if they were able to accurately cull applications with a low chance of success.

Given that researchers were unable to predict which of their own proposals would be successful, accurate culling may not be possible, in which case there would be little incentive to compete for the prize and institutions would do best to revert to submitting as many applications as possible.

Success rates have been published to encourage poorly performing institutions to submit fewer applications. But the shame of a low success rate is unlikely to be enough to change behaviour. Our proposal gives success rates real value that would likely change behaviour and reduce the number of applications submitted.

Postscript: Thanks to @s_palm on Twitter for pointing me to this article: http://www.nerc.ac.uk/latest/news/nerc/demand/. The UK Natural Environment Research Council is restricting the number of applications per institution based on their success rate. So using a stick instead of a carrot.

Adrian Barnett

@aidybarnett

To find out more, watch Adrian’s video on the topic.
