“Don’t bite the hand that feeds you.” This common wisdom has guided many behaviours, including those of European scientists. In the common rooms of university departments, many are lamenting the pressures of competitive funding mechanisms. Yet, vocal protests are rare. We argue that the rise of generative AI will speak to this common frustration, for it will render current decision-making on a big chunk of research funding obsolete.
Generative AI like ChatGPT is a promising and powerful, and hence dangerous, technology that should be regulated. The EU can play a leading role in this. Yet it should also be welcomed as a technology that challenges problematic existing practices, for instance standardised testing (ChatGPT easily nails the SAT). It urges us to pause and think: was what we were doing before actually worthwhile?
One of these practices, we argue, is competitive research funding.
Competitive funding is based on the idea that the best scientific projects should receive most of the available resources, and that this is best achieved by means of competition. Individuals and consortia write lengthy project proposals, according to a prescribed format, that are peer-reviewed by a mix of academics, practitioners, and consultants. The reviewers’ decision creates a cutoff point, separating the winners from the losers. This is often tragic, for there are usually many more proposals worth funding than there is budget available.
But do these competitive schemes really reward the ‘best’ research? Anyone who has reviewed proposals for the European Commission can see that there are indeed better and worse proposals. Yet much of the evaluation is determined by a project’s completeness and by the specific jargon that makes it fit a funding call. Some of it is box-ticking: have the right elements been addressed? Merely doing so already brings a project much closer to being funded. Some of it is about quality: are the proposed objectives, impacts, and communication measures consequential and credible?
As it happens, generative AI programmes like ChatGPT are good, and getting much better, at nailing these different aspects of proposal writing, and they do so efficiently. This has already led some researchers to adopt a chatbot as their writing assistant. For the more standard proposals, those that require less original research, generative AI may even take over the bulk of the writing process.
This raises a pivotal question: should lengthy proposal texts still be the basis for project selection?
What can we expect? For one, more proposals will be submitted. ChatGPT is, after all, a labour-saving device that significantly lowers the barrier to producing a 100-page document. More proposals mean more reviewing, and thus more overhead for funding programmes.
More importantly, with AI-generated text it is not clear what is actually being assessed. A proposal already functions as a proxy for the actual quality of research that has not happened yet. But at least thinking and planning efforts have gone into it, which lends proposal writing some intrinsic value. AI-generated text does nothing (or very little) in that regard.
Finally, while proposals focused on scientific excellence – like ERC projects – will likely not benefit enough from AI to come closer to being funded, more downstream funding efforts (concerning industry collaboration or specific societal challenges) carry an increased risk that a positive funding decision is based on a text mostly written by AI. In such cases, there can be no presumed link between the text and the supposed quality of the project, which makes the assessment itself meaningless and counter-productive.
Although funding agencies could use AI systems to detect AI writing, such detection is not very reliable and will almost certainly become more difficult in the near future. With further advancements on the horizon, it is questionable to what extent current practices of assessment can keep up.
It is likely, we believe, that they won’t keep up. Generative AI challenges the very practice of reviewing proposals for competitive funding. This is a threat, but also an opportunity, because it forces everyone – including policy makers – to think: what the hell are we doing anyway?
Let’s consider the biggest competitive research funding scheme: Horizon Europe. With a budget of almost a hundred billion euros, it shapes and affects the entire European scientific system.
Despite its undeniable positive impacts, the programme is fraught with problems.
First, it represents a form of reversed solidarity. While most EU programmes aim to redistribute resources from the stronger to the weaker shoulders, success in Horizon programmes often depends on strong institutional support that only richer institutions can afford. Second, project funding is always temporary, and many positions created through Horizon projects employ researchers on limited, precarious contracts. Third, researchers spend hundreds of hours per proposal; combined with the reviewing effort, this costs millions of euros. Time spent writing proposals also exacerbates the shortage of qualified research staff – already a huge problem in academia.
While generative AI problematizes the backbone of competitive funding, we might welcome it as a challenge to funding programmes like Horizon tout court.
How to proceed? One way forward would be to rethink the application and review process. Instead of consortia spending hundreds of hours on coordinated proposal writing, researchers could gather at coordinated events where writing takes place on the spot. Proposals would be shorter and more concise, and evaluation would be based on the synergy produced in the room.
Another way would be to recognize the inherently harmful aspects of competitive funding instruments. This would mean stepping away from competitive funding for research where originality is not a prime requirement (as it is with ERC projects). It would also mean taking the opportunity to revise the European funding scheme. Funding could be split between a primary and a secondary stream – the primary stream funding much-needed permanent research and teaching positions.
Generative AI is a wake-up call for research funding bodies to reconsider their current assessment practices. If the scientific community in Europe does this well, we can address some of the deepest problems faced by current-day academia. It’s an opportunity not to be wasted.