DORA, the Leiden Manifesto & a university’s right to choose

In light of rumours that only DORA signatories will have access to UKRI funding in future, Lizzie Gadd explains why Loughborough has chosen an alternative path up the ‘responsible metrics mountain’, and why she believes all ‘mountain-climbers’ should be equally supported and rewarded.

As a member of one of a small, but seemingly growing, group of UK universities that have given serious thought to signing the San Francisco Declaration on Research Assessment (DORA), but for legitimate reasons decided to take an alternative path towards responsible metrics, I have been increasingly concerned about rumours that signing DORA might be a prerequisite for UKRI funding in future. I should say up front that I have nothing against DORA. It was the first responsible metrics statement off the starting blocks, years ahead of the next entrant. It has reached over 12,000 individual signers and almost 500 institutions. And with its recent investment in a Community Manager, it is doing more than any other body in the world to push forward the responsible metrics agenda. Full stop.

Image: Ascencion du Mont Blanc by twiga269, CC BY-NC

I have, however, been critical of 1) others who criticise non-signatories (there are good reasons we’ve not signed), and 2) those signing just to tick a responsible metrics box (responsible metrics is too important for that).  As far as I’m aware, DORA folks agree with me on these two points.

As I’ve recently been asked a number of times why we have not signed DORA, I thought it might be helpful to lay out our reasoning and explain why we have instead developed our own principles based on the Leiden Manifesto. I do this not to create divisions, but to highlight that there are justifiable reasons why an institution might not sign DORA, and that this does not mean it hasn’t thought carefully about its responsible metrics approach. Quite the opposite. I think the point needs making, again, that institutions should retain the autonomy to implement responsible metrics in a way that fits with their own mission and values. The destination is the issue, not the path taken to get there.

The reasons we have not signed DORA are twofold. The main reason relates to the breadth of DORA’s focus; a second reason is a technical point around the appropriate use of journal metrics at the level of the individual. So let’s start with the breadth issues.

DORA was initiated by the American Society for Cell Biology in 2012 and has a fairly narrow focus on the abuse of journal metrics. As such, it has come under criticism for being too STEM-focussed (it repeatedly refers to ‘scientific research’). Although it has 18 recommendations in total, it targets only two specifically at institutional signatories. By contrast, the Leiden Manifesto contains ten principles which take a broader approach to the responsible use of all bibliometrics across a range of disciplines and settings. As an institution with arts, humanities and social sciences scholars, we felt the Leiden Manifesto’s broader remit was more appropriate for us. Importantly, the Leiden Manifesto makes this one of its key tenets: that research evaluation should align with an institution’s mission. I often make the point that universities are in danger of outsourcing their values to ranking bodies and funders, instead of doing the hard work of understanding what it is they value about research and researchers, and e-valu-ating accordingly. To my mind, it is unrealistic to expect the same research evaluation approaches and policies to serve all institutions equally, and I would say that any institution with arts and humanities scholars is not going to be well served by DORA alone.


The breadth issues relate not only to disciplinary coverage, however, but also to types of evaluation activity. Loughborough University engages in bibliometric evaluation at a range of levels – from university KPIs, to school-based analyses, through to the provision of indicators to individuals as part of their annual appraisal. DORA’s focus on the misuse of the impact factor for assessing individual scholars doesn’t help us in setting our university-level citation performance KPI, or our school-level collaboration indicators, or the non-journal-metric-based data used in appraisals. The Leiden Manifesto, on the other hand, does provide us with some principles here: around keeping our data collection open and transparent, allowing those being evaluated to verify the data, and remembering to protect locally relevant research. This is enormously helpful to us as we seek to do all levels of evaluation responsibly and well.

There are two other important elements of the Leiden Manifesto we wouldn’t be without. The first is its call to recognise the systemic effects of indicators and assessment. This principle is currently driving a piece of work at Loughborough around our open research policies, after we recognised that even responsible bibliometrics can have the unintended consequence of focussing attention on an ever-decreasing group of journals that appear to be increasing their Article Processing Charges in line with their journal metrics. The second is its call to scrutinize indicators regularly and update them. Bibliometric evaluation is a nascent and fast-moving field. We can’t afford to write our responsible metrics approach once and abandon it – we have to keep it under review. And this is another way it contrasts with DORA, which, by its very nature as a signed statement, is fixed in its 2012 form.

Having pointed out the areas where the Leiden Manifesto extends beyond DORA, I should also say that there are issues that DORA specifically addresses that the Leiden Manifesto does not. The first of these is its call for institutions to “be explicit about the criteria used to reach hiring, tenure, and promotion decisions”. The second is a set of very important recommendations: to journal publishers, around removing artificial limits on the length of reference lists and opening those lists up to the community; and to metrics providers, around providing data under liberal re-use licences.

Of course, all these issues are mainly a matter of preference. Some institutions have both signed DORA and adopted the Leiden Manifesto. Others have just signed DORA, but gone beyond its recommendations in implementing a responsible approach. All of these positions are legitimate and entirely a matter for institutional judgement – not mine. However, for us there is a particular clause in DORA which is a sticking point, and that is the general recommendation:

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

Now of course, we have no problem with the principle of not using a journal metric to assess the quality of an article or an individual’s contribution, and we would never do so. Indeed, a key element of our hiring practice is to ask candidates to select three outputs and provide DOIs or links so that they can be read or viewed. However, the implication of this statement is that institutions should not use journal-based metrics AT ALL in hiring, promotion or funding decisions. Now this could just be the way it’s worded (DORA contains many complex sentences with compound clauses), but if our interpretation is correct, we couldn’t sign it. For whilst we wouldn’t use a journal metric in a hiring decision as a proxy for the quality of a paper, we do find journal metrics a useful indicator of a journal’s visibility. And improving the visibility of our research is a key part of our publication strategy.

So when we offer advice on choosing journals we suggest academics follow these three simple steps – in this order:

  1. Readership. Is the journal a place where your target audience will find, read and use/cite your work?
  2. Rigour.  Does the journal have high editorial standards, offer rigorous peer review and an international editorial board?
  3. Reach.  Is it a visible outlet, with international reach and a liberal open access policy or option?

And we advise that field-normalised journal citation metrics such as SJR and SNIP may offer a window onto #3 (visibility and reach).  In some disciplines they may also offer a window onto #2 (rigour).  But they never offer a window onto #1 (readership) – and that is top of the list.

In line with this advice, we provide some academics with data on their journal metrics, alongside other indicators, as a starting point for a conversation at their annual appraisal around publication strategy. And some schools use them in recruitment as an indicator of journal visibility, but always in line with our responsible metrics approach, i.e. as one of a range of indicators, in conjunction with peer review, in appropriate disciplines, and with an understanding of their limitations.


So there you have it. We may use journal metrics, responsibly, in hiring decisions.  And far from being ashamed of this, I actually think metrics are a really important check and balance in our evaluative processes.  Although peer review is held up as a gold standard, we know that it is fraught with problems around unconscious bias.  Others have shown how publication metrics can expose this.  Metrics have a level of objectivity that humans don’t.  Both are inherently problematic, but together, they might provide us with our best chance of evaluating fairly and responsibly.  This is Loughborough’s approach.

I hope we can move on from arguments about which path to responsible metrics is best, and move away from judging those who take one and not another, and certainly desist from offering funding only to those on a particular path.  It’s not often we hear the adage “it’s not the journey, it’s the destination”, but in this case I think it’s apt.  All institutions who’ve chosen to climb the responsible metrics mountain should be equally supported and rewarded, regardless of how they choose to climb it.


Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the chair of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.


4 Replies to “DORA, the Leiden Manifesto & a university’s right to choose”

  1. Even though I am currently chair of the DORA steering committee, I don’t want to get into ‘theological’ arguments about the differences between DORA and the Leiden Manifesto because they are both forces for good! Moreover, I am sure that Lizzie agrees with me that ultimately it is the development of good research assessment practices that matters, and I again applaud the work that she has been doing on that front at Loughborough.

    Nevertheless, I want to argue for a more expansive interpretation of what DORA means (as a declaration and an organisation) than is presented here.

    It is perfectly true that DORA was born in 2012 but it would not be correct to suppose that the declaration is any more fixed in time than the Leiden Manifesto. Although the first and most prominent recommendation of the declaration is stated negatively (“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”), the remaining 17 recommendations are almost invariably positive, encouraging adherents to think about and create good practice. It does not limit how they should do that. The declaration is not very long so I would encourage everyone to read it in full (https://sfdora.org/read/).

    Nor should it be supposed that DORA’s relevance is confined to the sciences; it has always aimed to be “a worldwide initiative covering all scholarly disciplines”. Admittedly, the work to extend that coverage has been lacking, but as the recently published roadmap makes clear, we have now placed a particular emphasis on extending DORA’s disciplinary and geographical reach (https://sfdora.org/2018/06/27/dora-roadmap-a-two-year-strategic-plan-for-advancing-global-research-assessment-reform-at-the-institutional-national-and-funder-level/). We are in the process of assembling an international advisory board from all corners of the globe. We would be glad to hear from arts, humanities and social science scholars about how DORA can help them to promote responsible research assessment in their fields.

    Lizzie draws a careful distinction between not using journal metrics to assess the quality of research outputs and using them to assess ‘visibility’. She is right to do so because there is a risk it might send a subliminal message to researchers. It would be interesting to hear from Loughborough’s researchers how they interpret the guidance on these points.

    This distinction is the basis of Lizzie’s argument that, because Loughborough wishes to incentivise its researchers to make their outputs more visible, they could not in good conscience sign DORA. I can see how that is an honest interpretation of the constraints of the declaration, but my own view is that it is too narrow. The preamble to the declaration lays out the argument for the need to improve the assessment of research ‘on its own merits’. This and the thrust imparted by the particular recommendations of the declaration show that it is the misuse of journal metrics in assessing research content – not its visibility – that is the heart of the matter. It seems to me that Loughborough’s responsible metrics policy is therefore not in contravention of either the letter or the spirit of DORA.

    In the end, as Lizzie rightly states, it is Loughborough’s call and, again, I am sure that Lizzie and I have in common a strong desire to promote good research assessment practices. I stand by what I wrote back in 2016, in a piece bemoaning the fact that so few UK universities had signed DORA:

    “I would be happy for universities not to sign, as long as they are prepared to state their reasons publicly. They could explain, for instance, how their protocols for research assessment and for valuing their staff are superior to the minimal requirements of DORA. It’s the least we should expect of institutions that are ambitious to demonstrate real leadership.”


  2. Thank you to Lizzie for a very valuable piece, and to Stephen for an important comment. As Lizzie said, what one takes DORA to mean depends on how one interprets the words and compound clauses. It looks as if Stephen encourages reading the General Recommendation as:

    “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles,
    • to assess an individual scientist’s contributions, or
    • in hiring, promotion, or funding decisions.”

    in contrast to the (more natural?) Loughborough reading of it as:

    “Do not use journal-based metrics, such as Journal Impact Factors,
    • as a surrogate measure of the quality of individual research articles,
    • to assess an individual scientist’s contributions, or
    • in hiring, promotion, or funding decisions.”

