How (not) to incentivise open research

Lizzie Gadd makes the case for open research being required, not rewarded.

I recently attended two events: the first was a workshop run by the ON-MERRIT team, a Horizon 2020 project seeking to understand how open research practices might actually worsen existing inequalities; the second was the UKRI Enhancing Research Culture event, at which I was invited to sit on a panel discussing how to foster an open research culture. At both events the inevitable question arose: ‘how do we incentivise open research?’.

And given the existing incentives system is largely based around evaluating and rewarding a researcher’s publications, citations, and journal choices, our instinct is to look to alternative evaluation mechanisms to entice them into the brave new world of open. It seems logical, right? In order to incentivise open research we simply need to measure and reward open research. If we just displace the Impact Factor with the TOP Factor, the h-index with the r-index, and citation-based rankings with openness rankings, all will be well.

But to my mind this logic is flawed.

Firstly, because openness is not a direct replacement for citedness. Although both arguably have a link with ‘quality’ (openness may lead to it and citedness may indicate it), they are not quite the same thing. And it would be dangerous to assume that all open things are high-quality things.

So we can add open research requirements to our promotion criteria, but we are still left with the conundrum of how to assess research quality. And until an alternative to citations is found, folks are liable to keep relying on them as an easy (false) proxy. So we may think we’ve fixed the incentivisation problem by focusing on open research indicators, but we haven’t dealt with the much bigger and much more powerful disincentivisation problem of citation indicators.

Secondly, as I’ve argued before, open research practices are still unheard of by some, and the processes by which to achieve them are not always clear. Open research practices need to be enabled before we can incentivise them. Of course, related to this is the fact that some open research practices are completely irrelevant to some disciplinary communities (you’ll have a hard job pre-registering your sculpture). And undoubtedly those from wealthy institutions are likely to get much more support with open research practices than those from poorer ones. In this way, we’re in danger of embedding existing inequalities in our pursuit of open practices – as the ON-MERRIT team are exploring.

But in addition to these pragmatic reasons why we can’t easily incentivise open research by measuring it, there is a darned good reason why we shouldn’t turn to measurement to do this job for us. And that is that HE is already significantly over-evaluated.

Researchers are assessed from dawn till dusk: for recruitment, probation, appraisal, promotion, grant applications, and journal peer review. There is no dimension of their work that goes unscrutinised: where they work, who they collaborate with, how much they have written, the grants they have won, the citations they’ve accrued, the impact of their work, the PGRs they’ve supervised – it’s endless. And this, in combination with a highly competitive working environment, makes academia a hotbed for toxic behaviours, mental health difficulties, and all the poor practices we blame on “the incentives”. (Although Tal Yarkoni recently did an excellent job of calling out those who rely on blaming the incentives to excuse poor behaviours.)

If we’re looking to openness to improve our research culture, incentivising openness by measuring it feels pretty counterproductive to me. We don’t want to switch from narrow definitions of exceptionalism to broader ‘open’ definitions of exceptionalism; we want to move away from exceptionalism altogether. Adding open to a broken thing just leaves us with an open broken thing.

So how do we incentivise open?

Well, this is where I think we can learn from other aspects of our research environment. Because at the end of the day, open research practices are simply a set of processes, protocols and standards that we want all researchers to adhere to as relevant to their discipline. And we put plenty of these expectations on our researchers already, such as gaining ethical approvals, adhering to reporting guidelines, and following health & safety standards.

There’s no glory associated with running due diligence on your research partners, and following GDPR legislation won’t give you an advantage in a promotion case. These are basic professional expectations placed on every self-respecting researcher. And whilst there are no prizes for those who adhere to them, there are serious consequences for those who don’t. Surely this is what we want for open research? Not that it should be treated as an above-and-beyond option for the savvy few, but that it should be a bread-and-butter expectation on everyone.

Now I appreciate there is probably an interim period where institutions want to raise awareness of open research practices (as I said before, they need to be enabled before they can be incentivised). And during this period, running some ‘Open Research Culture Awards’ or adding ‘open research hero’ badges to web pages might have their place. But we can’t dwell here for long. We need to move quite rapidly to this being a basic expectation on researchers. We have to define what open research expectations are relevant to each discipline. Add these expectations to our Codes of Good Research Practice. Train researchers in their obligations. Monitor (at discipline/HEI level) engagement with these expectations. And hold research leads accountable for the practices of their research groups.

To my mind, the same applies to measuring open research at institutional level, for example in REF exercises. We should require HEIs to expect and enable discipline-appropriate open research practices from their researchers, and to evidence that they a) communicate those expectations, b) support researchers to meet those expectations, and c) are improving at meeting those expectations. That’s all. No tricky counting mechanisms. No arbitrary thresholds. No extra points for services that are just the product of wealth.

Of course, if we are going to monitor take up of open research at discipline and university level, we do need services that indicate institutional engagement with open research practices. But again I see this as being an interim measure, and more to highlight where work needs to be done than to give anyone boasting rights. When open research becomes the modus operandi for everybody, monitoring just becomes a quality assurance process. There’s no point ranking institutions on the percentage of their outputs that are open access when everybody hits 100%.

I know this doesn’t tackle the disincentivisation problem of journal impact factors, but open never did.  We have moved from a serials crisis (where the costs were high, the speeds were slow, and only a few could read them) to an open serials crisis (where the costs are high, the speeds are slow, and only a few can publish in them). To me this is a separate problem that could be fixed quite easily if funders placed far bolder expectations on their researchers to only publish on their own platforms – but that’s another blog post.

We all want open research, and we all want to fix the incentives problem because we see it as slowing our progress towards open research. But I think offering up one as the solution to the other is not going to get us where we want to go. Indeed, I think it’s in danger of exacerbating unhelpful tendencies towards exceptionalism when what we really want is boring old consistent, standards-compliant, rigorous research.

Campbell’s law rightly tells us that we get what we measure, but the inverse – that we need to measure something in order to get it – is not always true. In our rightful pursuit of all things open, I think it’s important that we remember this.

Elizabeth Gadd is Head of Research Operations at the University of Glasgow. She is the chair of the Lis-Bibliometrics Forum and co-Champions the ARMA Research Evaluation Special Interest Group. She also chairs the INORMS International Research Evaluation Working Group.

Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.

