Who is responsible for responsible metrics?

ARMA Metrics Champion and Lis-Bibliometrics Committee Chair Elizabeth Gadd provides the following summary of the Metrics SIG session at this year’s ARMA Annual Conference.

There are many stakeholders in the world of research evaluation by numbers, and the Metric Tide Report did a great job of teasing out who should be doing what. At the ARMA Conference this year, the Metrics Special Interest Group (SIG) brought together a fantastic panel of those stakeholders to get an understanding of how they have responded to the Metric Tide report: what is happening in the world of responsible metrics, and what needs to happen next?

The event kicked off with a great presentation by Ben Johnson of HEFCE, who until recently was on the Metric Tide Steering Group and is now Secretary to the Responsible Metrics Forum. He highlighted the key findings of the Metric Tide report: that whilst there was a broad correlation between metrics and peer review at university level, at the output level there were major variations; and that the plumbing just isn’t there yet for metrics to be robust.

He explained that the remit of the Forum for Responsible Metrics was to offer a high-level view on the infrastructure around metrics and to encourage cultural change in the use of metrics in universities and elsewhere. An action plan is currently being developed that includes some ‘Town Hall’ meetings to allow interested parties to contribute to the Forum and advance the agenda. The Forum recognises that whilst it comprises just 14 people, tens of thousands of people will be affected by its work. The notes and papers of Forum meetings can be accessed from the UUK website.

Dr Simon Kerridge, Director of Research Services at the University of Kent, was also on the Metric Tide Steering Group and spoke about the impact the report has had on the sector. He walked us through the recommendations relating to the various stakeholder groups, with a focus on the correlation analyses which led to the report’s hesitation about recommending that metrics should replace peer review in the REF. We were reminded that 12 is not bigger than 10 if the error bar is 3… He discussed the various options for institutions looking to establish or sign up to principles for the responsible use of metrics, stating that the Leiden Manifesto was almost an abstract for the Metric Tide Report.
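Simon’s error-bar point is worth pausing on, because it is the statistical heart of the report’s caution. A quick back-of-envelope sketch (mine, not from the talk — the variable names and ±3 symmetric error bars are illustrative assumptions) shows why a score of 12 cannot be confidently ranked above a score of 10 when each carries an uncertainty of 3:

```python
# Illustrative sketch: why 12 is not "bigger" than 10 if the error bar is 3.
score_a, score_b, error = 12, 10, 3

# The plausible range for each score, given its error bar:
range_a = (score_a - error, score_a + error)  # (9, 15)
range_b = (score_b - error, score_b + error)  # (7, 13)

# If the ranges overlap, the true values could easily be in either order,
# so the observed difference of 2 tells us nothing reliable.
overlap = range_a[0] <= range_b[1] and range_b[0] <= range_a[1]
print(range_a, range_b, overlap)  # (9, 15) (7, 13) True
```

In other words, any ranking built on such scores is ranking noise as much as signal — which is exactly why the report hesitated to let metrics replace peer review.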

Next up came a lively presentation from Professor Alan Dix of the University of Birmingham, who sat on the REF panel for UoA 11 in 2014. He described his fascinating analysis of the REF outputs submitted to that panel, in which he correlated the panel’s peer-review ratings for each individual paper with citation metrics. The results were, in his words, “scary”, revealing a five- to ten-fold bias against applied research versus pure research, and against pre- and post-1992 universities versus Russell Group institutions. So, whilst a theory paper had to be in the top 5% to get a 4*, an applied paper had to be in the top 0.05% to achieve the same rating. Alan concluded that whilst metrics are bad, people are far worse!

Finally, Chris James, Elsevier’s Product Manager for Research Metrics, gave a supplier’s perspective on the responsible use of metrics. His talk centred around four key themes: raising awareness of the types of available metrics; educating users on their responsible use; engaging with the research community to understand their needs; and providing tools and resources to help people choose the right metrics. He focussed in on the recent “basket of metrics” work undertaken by Elsevier and the development of the new CiteScore metric, which provides a free alternative to the Journal Impact Factor.

Discussion

Following the panel session, the attendees were invited to consider for themselves where the various responsibilities for responsible metrics lay. Despite time being limited, some interesting discussions were had, post-its were scrawled upon, and the following suggestions were made for the various stakeholder groups.

Policy makers. It was felt to be important that policymakers consider any data in context and not in isolation. A strong view was held that REF panel members should be given very strict guidance about the informal and formal use of metrics in the next REF, and should be offered training to help them understand the metrics they may be asked to interpret. This chimes with the suggestion made in the Lis-Bibliometrics community’s submission to the REF consultation that each panel using citation data should have a bibliometrician on the panel. Finally, it was suggested that, in the interests of the transparency so lauded by responsible metrics statements, UKRI should consider making output scores and citation scores available to all, so those being assessed can run their own analyses.

Academics. It was recognised that academics have special insight into the publication and funding environments in their own fields and should bring that knowledge to bear on the choice of metrics used to measure their own success. With this in mind, they should have a good case for using a particular metric before they use it.

Research Managers. It was thought to be important for all Research Managers to have an awareness of the various data sources used for generating metrics, their strengths and limitations, and how to adapt what is available to what is needed. It was suggested that the CASRAI Impacts Working Group might have a role to play here. Adherence to the ‘recipes’ offered by the Snowball Metrics initiative would be one way of generating responsible and benchmarkable metrics.

The importance of understanding variations in practice by field was discussed, not least with regard to publication and citation practice. It was also felt important for universities to recognise in their reward systems that academics may have strengths and weaknesses across their portfolio of activity (teaching, research, impact, etc.). A question was raised as to whether we should promote or police our responsible metrics approaches, with the balance of opinion coming down on the side of promotion! One supplier commented that the HE sector was doing particularly well in the area of responsible metrics and should be congratulating itself rather than beating itself up…

Suppliers. A critical piece of advice came from one attendee, who advised suppliers: “don’t flog things that encourage bad behaviour”! A discussion ensued about whether an indicator could be inherently bad, or just ill-defined or misused. One question related to journal metrics and whether we really needed any more, when there were badly needed metrics that normalise for other things, like gender and interdisciplinarity.

Responsible Metrics Forum. By this point we were running short of time, but it was thought to be important that the Forum focus not only on bibliometrics, but also on the other ways in which HE activity is measured.

Funders. Picking up on Ben’s earlier point, it was thought to be important that funders mandate the use of ORCID, and be very clear about the behaviour they want to see.

Summary and personal reflection

The sector has clearly made some moves forward in the responsible use of metrics since the publication of the Metric Tide Report, and the establishment of the Responsible Metrics Forum was significant in this regard. In terms of answering the question ‘who is responsible for responsible metrics?’, it was apparent that there is no single focus of responsibility, but that everyone has a part to play. However, there is currently neither ‘stick’ nor ‘carrot’ to motivate institutions or academics to engage with this area and, in my view, this is having a slowing effect. Despite the call not to ‘beat ourselves up’, it is true that only a handful of universities have either signed up to DORA or developed their own statement on the responsible use of metrics. And ensuring practices within our institutions actually align with those principles is a separate challenge. With one eye on the REF, I do wonder whether UKRI should itself be nailing its colours to the mast in terms of its approach to responsible metrics, and either signing DORA or, even better, developing its own statement on the responsible use of metrics. Where it leads, others may follow.

To find out more about ARMA visit https://www.arma.ac.uk/

2 thoughts on “Who is responsible for responsible metrics?”

  1. Thanks for this thorough report. I wanted to alert you to the Mellon-funded work we’re just beginning to do over at HuMetricsHSS (humetricshss.org), as I can see many parallels here.

    1. Hi – thanks for this! Yes, I’ve just started following HuMetricsHSS on Twitter and will be following your work with interest. Perhaps you’d be able to speak at a future Lis-Bibliometrics event? Please feel free to contact me directly if this would be of interest.
