Lizzie Gadd considers the practical implications of the responsible research evaluation requirement of the Wellcome Trust’s new Open Access Policy.
Responsible research evaluation is important. Really important. It’s not only quality-of-life important but life-or-death important to many over-worked, over-evaluated and under-resourced researchers. And now, for the first time, a funder is turning round and saying yes, we agree, this is really important. And not only are we committed to doing research evaluation responsibly, we’re only going to fund those who are doing the same. What’s not to like? You just can’t argue with the good intentions of the Wellcome Trust’s new Open Access policy. I do, however, have some concerns as a research policy manager about how the policy will work in practice, what the unintended consequences might be and, ultimately, whether it has what it takes to change the face of research evaluation.
Putting the principle into practice
I think the fundamental challenge that universities will face when implementing this policy is simply interpreting what it means. When you read the text (see below) it really isn’t clear whether HEIs are being called on simply to commit to the principle of assessing research outputs on their merits, or actually to act on that commitment. Do we need to promise to do our best, or prove we’ve done it? My instinct says the latter, but that’s not clear. It’s also unclear what we’d need to do to prove our commitment to this principle. Will they be checking our promotion criteria for mentions of journal metrics? Scouring department-level KPIs? Looking at individual researcher web pages for references to JIFs? Where does it start – and end? Having spent two years implementing a responsible metrics policy based on the Leiden Manifesto, I know that even with the best will in the world – the most responsible PVCR and a draconian responsible metrics wonk at the helm of publication policy – you will still find pockets of poor practice in some corner of your organisation. Changing culture takes a long, long time. And I would hate to think that any institution that is seriously committed to responsible metrics would fail the Wellcome Trust’s policy test because of the actions of a single misinformed member of staff.
Extract from the Wellcome Trust Open Access Policy
8. Wellcome is committed to making sure that when we assess research outputs during funding decisions we will consider the intrinsic merit of the work, not the title of the journal or publisher. All Wellcome-funded organisations must publicly commit to this principle. For example, they can sign the San Francisco Declaration on Research Assessment, Leiden Manifesto or equivalent. We may ask organisations to show that they’re complying with this as part of our organisation audits.
9. Researchers and organisations who do not comply with this policy will be subject to appropriate sanctions. These may include Wellcome:
• not accepting new grant applications
• suspending funding to organisations in extreme cases.
The critical policy wording challenge for me, though, is the requirement not even to “consider” an output’s journal title or publisher in the assessment of outputs. That might sound like an odd thing for a responsible metrics bod to say, and we’d certainly never judge an output based on journal metrics alone. However, in some disciplines, as part of a basket of metrics, we may use them to support expert judgement, on the grounds that they can be a good indicator of the visibility of a journal, in line with our desire to increase the visibility of our outputs. And we do so in accordance with Principle 1 of the Leiden Manifesto: “Quantitative evaluation should support qualitative, expert assessment”. This is the same Leiden Manifesto which ticks the Wellcome Trust’s OA policy box…
I suppose this is where it all seems to unravel a bit. Because it seems perfectly possible to meet the Wellcome Trust’s policy requirement by being a signatory to Leiden or another responsible metrics approach, and yet not actually meet the Wellcome Trust’s policy requirement of never considering the journal title in output assessment. Depending, of course, on exactly what this means. In a wonderful fit of irony, responsible metrics statements may themselves become bad metrics, failing to measure what the Wellcome Trust claims they measure.
“In a wonderful fit of irony, responsible metrics statements may themselves become bad metrics, failing to measure what the Wellcome Trust claims they measure.”
I guess this might be easily fixed one way or the other, but it does speak to a bigger concern I have around the birth of responsible metrics mandates, and that is whether it is desirable, or even possible, for funders to impose their principles on HEIs. Practices, yes, but principles? Aren’t principles things you choose for yourself? And if they’re externally imposed with sanctions, can they really be considered principles? Responsible metrics mandates seem to remove any intrinsic motivation universities might have to explore this area in depth, and any autonomy they might have had to decide how to implement responsible metrics. Instead it becomes a boring old compliance issue. Now I don’t doubt that the Wellcome policy will drive responsible metrics up the agenda for many institutions, and that has to be welcomed. However, I fear it may also lead to a lot of hastily-drafted bare-minimum policies and quick-fix DORA signing which will not encourage HEIs to think carefully about all the issues relating to responsible research evaluation.
An important case in point here is the fact that the Wellcome Policy targets only the responsible evaluation of research outputs and not researchers themselves. Again, you could extrapolate that the policy extends to both, but it would be an extrapolation because it is certainly not explicit in the text. To my mind, the assessment of researchers based on journal metrics, or any other poorly contrived publication metric, is just as problematic as the mis-judgement of research outputs, if not more so. As it stands, an institution can hand-on-heart adhere to the Wellcome Trust policy on research outputs, and continue to judge researchers on their H-index, M-index or other ageist, sexist, or racist publication-based indicators that take their fancy. This feels a bit wrong. Institutions that have thought more deeply about this will no doubt incorporate responsible evaluation of not only outputs, but researchers, research groups, departments and even universities into their policies.
“As it stands, an institution can hand-on-heart adhere to the Wellcome Trust policy on research outputs, and continue to judge researchers on their H-index, M-index or other ageist, sexist, or racist publication-based indicators that take their fancy. This feels a bit wrong.”
Does it have what it takes?
Now I appreciate that the Wellcome Trust’s policy is an Open Access policy, not a Responsible Metrics policy. And it is an attempt to wean HEIs off journal prestige with a view to recalibrating the academic reward system and ultimately freeing academics to publish in gold journals or platforms in line with Plan S. We all want to see this. But the big assumption here is that universities, in particular Wellcome- or cOAlition S-funded universities, own the reward mechanisms that motivate the whole of the academic enterprise. I’m not the first to point out the geographical limitations of this assumption. However, there is also a strongly-held belief that universities are somehow solely responsible for the drive towards journal-based evaluation. I would question this. Of course universities play a part, but as DORA itself points out, there are many other stakeholders: journal publishers, rankings that use certain journals as a prestige indicator, other funders who assess research performance using JIFs, and, most influential of all, academics themselves. I fear that for most academics, journal publication is not a route to receiving reward from their institution; journal publication is its own reward. And I fear we’re never going to break the addiction to journal prestige whilst journals still exist. But that’s probably another blog post.
I can’t imagine for a minute that the Wellcome Trust are not aware of all this, and despite my pontification here, I do applaud their efforts. Better to light a single candle than curse the darkness. However, that is rather how the policy feels: a small effort in a huge campaign. A nod in the general direction of responsible research evaluation, rather than a fully worked-out roadmap to get there. And whilst I can understand why that is in principle, as a policy manager, I really need this to work in practice. Let’s hope that as we move forward we’ll have a bit more dialogue, get a bit more clarification, and see funder efforts really start to shine a significant light on responsible research evaluation.
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the Lis-Bibliometrics Forum and is the ARMA Research Evaluation Special Interest Group Champion. She also chairs the newly formed INORMS International Research Evaluation Working Group.