In light of rumours that only DORA signatories will have access to UKRI funding in future, Lizzie Gadd explains why Loughborough has chosen an alternative path up the ‘responsible metrics mountain’, and why she believes all ‘mountain-climbers’ should be equally supported and rewarded.
As one of a small, but seemingly growing, group of UK universities who have given serious thought to signing the San Francisco Declaration on Research Assessment (DORA), but for legitimate reasons decided to take an alternative path towards responsible metrics, I have been increasingly concerned about rumours that signing DORA might be a prerequisite to UKRI funding in future. I should say up front that I have nothing against DORA. It was the first responsible metrics statement off the starting blocks, years ahead of the next entrant. It has attracted over 12,000 individual signers and almost 500 institutional ones. And with its recent investment in a Community Manager, DORA is doing more than any other body in the world to push forward the responsible metrics agenda. Full stop.
I have, however, been critical of 1) others who criticise non-signatories (there are good reasons we’ve not signed), and 2) those signing just to tick a responsible metrics box (responsible metrics is too important for that). As far as I’m aware, DORA folks agree with me on these two points.
As I’ve recently been asked a number of times why we have not signed DORA, I thought it might be helpful to lay out our reasoning and explain why we have instead developed our own principles based on the Leiden Manifesto. I do this not to create divisions, but to highlight that there are justifiable reasons why an institution might not sign DORA, but this does not mean they haven’t thought carefully about their responsible metrics approach. Quite the opposite. I think the point needs making, again, that institutions should retain the autonomy to implement responsible metrics in a way that fits with their own mission and values. The destination is the issue, not the path taken to get there.
The reasons we have not signed DORA are twofold. The first, and main, reason relates to the breadth of DORA's focus; the second is a technical point about the appropriate use of journal metrics at the level of the individual. So let's start with the breadth issues.
DORA was initiated by the American Society for Cell Biology in 2012 and has a fairly narrow focus on the abuse of journal metrics. As such, it has come under criticism for being too STEM-focussed (it repeatedly refers to ‘scientific research’). Although it has 18 recommendations in total, it targets only two specifically at institutional signatories. By contrast, the Leiden Manifesto contains ten principles which take a broader approach to the responsible use of all bibliometrics across a range of disciplines and settings. As an institution with arts, humanities and social sciences scholars, we felt the Leiden Manifesto’s broader remit was more appropriate to us. Importantly, the Leiden Manifesto makes this one of its key tenets: that research evaluation should align with an institution’s mission. I often make the point that universities are in danger of outsourcing their values to ranking bodies and funders, instead of doing the hard work of understanding what it is they value about research and researchers, and e-valu-ating accordingly. To my mind, it is unrealistic to expect the same research evaluation approaches and policies to serve all institutions equally, and I would say that any institution with arts and humanities scholars is not going to be well-served by DORA alone.
The breadth issues relate not only to disciplinary coverage, however, but also to types of evaluation activity. Loughborough University engages in bibliometric evaluation at a range of levels – from university KPIs, to school-based analyses, through to the provision of indicators to individuals as part of their annual appraisal. DORA’s focus on the misuse of the impact factor for assessing individual scholars doesn’t help us in setting our university-level citation performance KPI, our school-level collaboration indicators, or our appraisal data, which is not based on journal metrics at all. The Leiden Manifesto, on the other hand, does provide us with some principles here: keeping our data collection open and transparent, allowing those being evaluated to verify the data, and remembering to protect locally relevant research. This is enormously helpful to us as we seek to do all levels of evaluation responsibly and well.
There are two other important elements of the Leiden Manifesto we wouldn’t be without. The first is its call to recognise the systemic effects of indicators and assessment. This principle is currently driving a piece of work at Loughborough around our open research policies, having recognised that even responsible bibliometrics can have the unintended consequence of focussing on an ever-decreasing group of journals that appear to be increasing their Article Processing Charges in line with their journal metrics. The second is its call to scrutinize indicators regularly and update them. Bibliometric evaluation is a nascent and fast-moving field. We can’t afford to write our responsible metrics approach once and abandon it – we have to keep it under review. And this is another way it contrasts with DORA which, by its very nature as a signed statement, is fixed in its 2012 form.
Having pointed out the areas where the Leiden Manifesto extends beyond DORA, I should also say that there are issues that DORA specifically addresses that the Leiden Manifesto does not. The first of these is its call for institutions to “be explicit about the criteria used to reach hiring, tenure, and promotion decisions”. The second is a set of very important recommendations: to journal publishers, around removing artificial limits on the length of reference lists and opening them up to the community; and to metrics providers, around providing data under liberal re-use licences.
Of course, all these issues are mainly a matter of preference. Some institutions have both signed DORA and adopted the Leiden Manifesto. Others have just signed DORA, but gone beyond its recommendations in implementing a responsible approach. All of these positions are legitimate and entirely a matter for institutional judgement – not mine. However, for us there is a particular clause in DORA which is a sticking point, and that is the general recommendation:
“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
Now of course, we have no problem with the principle of not using a journal metric to assess the quality of an article or an individual’s contribution and would never do so. And a key element of our hiring practice is to ask individuals to select three outputs and provide DOIs or links so that they can be read or viewed. However, the implication of this statement is that institutions should not use journal-based metrics AT ALL in hiring, promotion or funding decisions. Now this could be the way it’s worded (DORA contains many complex sentences with compound clauses) but if our interpretation is correct, we couldn’t sign it. For whilst we wouldn’t use a journal metric in a hiring decision as a proxy for the quality of the paper, we do find journal metrics a useful indicator of a journal’s visibility. And improving the visibility of our research is a key part of our publication strategy.
So when we offer advice on choosing journals we suggest academics follow these three simple steps – in this order:
- Readership. Will your target audience find, read and use/cite work published in this journal?
- Rigour. Does the journal have high editorial standards, offer rigorous peer review and an international editorial board?
- Reach. Is it a visible outlet, with international reach and a liberal open access policy or option?
And we advise that field-normalised journal citation metrics such as SJR and SNIP may offer a window onto #3 (visibility and reach). In some disciplines they may also offer a window onto #2 (rigour). But they never offer a window onto #1 (readership) – and that is top of the list.
In line with this advice, we provide some academics with data on their journal metrics, alongside other indicators, as a starting point for a conversation at their annual appraisal around publication strategy. And some schools use them in recruitment as an indicator of journal visibility, but always in line with our responsible metrics approach, i.e., as one of a range of indicators, in conjunction with peer review, in appropriate disciplines, and understanding their limitations.
So there you have it. We may use journal metrics, responsibly, in hiring decisions. And far from being ashamed of this, I actually think metrics are a really important check and balance in our evaluative processes. Although peer review is held up as a gold standard, we know that it is fraught with problems around unconscious bias. Others have shown how publication metrics can expose this. Metrics have a level of objectivity that humans don’t. Both are inherently problematic, but together, they might provide us with our best chance of evaluating fairly and responsibly. This is Loughborough’s approach.
I hope we can move on from arguments about which path to responsible metrics is best, and move away from judging those who take one and not another, and certainly desist from offering funding only to those on a particular path. It’s not often we hear the adage “it’s not the journey, it’s the destination”, but in this case I think it’s apt. All institutions who’ve chosen to climb the responsible metrics mountain should be equally supported and rewarded, regardless of how they choose to climb it.
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the chair of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.
Unless otherwise stated, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.