HEFCE: The road to the Responsible Research Metrics Forum – Guest post by Ben Johnson

On Wednesday 8th February 2017, Imperial College made headlines by announcing that it had signed the San Francisco Declaration on Research Assessment (DORA), meaning that Imperial will no longer consider journal-based metrics, such as journal impact factors, in decisions on the hiring and promotion of academic staff. The decision followed a long campaign by Stephen Curry, a professor of structural biology and long-standing advocate of the responsible use of metrics.

At the end of last year, Loughborough University issued a statement on the responsible use of metrics in research assessment, building on the Leiden Manifesto.  This was followed, two weeks ago, by a statement on principles of research assessment and management from the University of Bath, building on the concept of responsible use of quantitative indicators. And, earlier in 2016, the Stern review of the Research Excellence Framework recognised clearly that “it is not currently feasible to assess research outputs in the REF using quantitative indicators alone”.

What these examples and others show is that the issue of metrics – in particular ‘responsible metrics’ – has risen up the agenda for many universities. As someone closely involved in the HEFCE review of metrics (The Metric Tide), and now secretary to the new UK Forum for Responsible Research Metrics, I am of course delighted to see this.

Of course, the issue of metrics has been bubbling away for much longer than that, as the Metric Tide report set out. University administrators, librarians and academics themselves have taken a leading role in promoting the proper use of metrics, with forums like the ARMA metrics special interest group promising to play a key part in challenging attitudes and changing behaviours.

In addition, as we have seen with university responses to the government’s HE green paper and to the Stern review, the wider community is very alive to the risks of an over-reliance on metrics. This was reflected in the outcomes of both exercises, with peer review endorsed in both the draft legislation and the Stern report as the gold standard for the assessment of research.

These developments are exactly the kinds of things that the new UK Forum for Responsible Research Metrics wants to see happening. This forum has been set up with the specific remit to advance the agenda of responsible metrics in UK research, but it’s clear that this is not something it can deliver alone – it is a substantial collective effort.

So what will the Forum do? Well, as the Metric Tide report states, many of the issues relate to metrics infrastructure, particularly around standards, openness and interoperability. The Forum will have a specific role in helping to address longstanding issues, particularly around the adoption of identifiers – an area of focus echoed by the Science Europe position statement on research information systems published at the end of 2016, which is itself a useful touchstone for thinking about these issues.

To support the Forum, Jisc are working hard on developing an action plan to address the specific recommendations of the Metric Tide report, with a particular focus on building effective links with other groups working in this area, e.g. the RCUK/Jisc-led Research Information Management (RIM) Coordination Group. This will be discussed when the Forum meets again in early May.

However, sorting out the ‘plumbing’ that underpins metrics is no good if people continue to misuse them. The Forum will therefore also take a complementary look at the cultures and behaviours within institutions and elsewhere: firstly, to develop more granular evidence of how metrics are being used; and secondly, to look at making specific interventions to support greater responsibility from academics, administrators, funders, publishers and others involved in research.

With that in mind, Universities UK and the UCL Bibliometrics Group, under the auspices of the Forum, will shortly be jointly issuing a survey of HEIs on the use of problematic metrics in university management and among academic groups. This will help identify the scale of any (mis)use of measures such as the JIF, but also help us better understand why initiatives like DORA have not been more widely adopted in the UK.

Of course, metrics have much broader uses than just measuring outputs – they are also used to measure people, groups and institutions. This is a key finding of the Metric Tide report, but one that is often overlooked when focussing very narrowly on output metrics. The Forum will also be focussing on this, seeking to bring people together across all domains.

To make a decisive contribution here, the Forum needs to have clout, and it is for this reason that the five partners (HEFCE, RCUK, Jisc, Wellcome and Universities UK) asked Professor David Price to convene and chair the Forum as a mixed group of metrics experts and people in positions of serious influence in their communities. This was a delicate balance to strike, and one that can only be successful if the Forum engages effectively with the various interested communities.

With that in mind, the Forum is planning to set up a number of ‘town hall’ meetings throughout 2017 to engage with specific communities on particular topics, and would very much welcome hearing from anyone interested in being involved in these or in engaging with the Forum in any other way. We will be announcing further details of these on the Forum’s web pages soon.

If you are interested in joining up with the work of the Forum throughout 2017, please contact me on b.johnson@hefce.ac.uk – I’d be delighted to hear from you.


Ben Johnson is a research policy adviser at the Higher Education Funding Council for England, secretary to the UK Forum for Responsible Research Metrics and a member of the G7 expert group on open science.

He has responsibility for policy on open access, open data, research metrics, technical infrastructure and research sector efficiency within universities in England. In recent years, he co-authored The Metric Tide (a report on research metrics), developed and implemented a policy for open access in the UK Research Excellence Framework (REF), and supported Professor Geoffrey Crossick’s project and report to HEFCE on monographs and open access. He is a member of the UK’s open data forum and co-authored the forthcoming UK Open Research Data Concordat. In addition to this, he is currently part-seconded to the Department for Business, Energy and Industrial Strategy to work on reforming the research and innovation landscape.

REF consultation: Lis-Bibliometrics response

The four UK higher education funding bodies are consulting on proposals for the next Research Excellence Framework.  Thank you to all Lis-Bibliometrics members who have contributed their thoughts on this.  Here is a draft response the Lis-Bibliometrics Committee intends to submit on behalf of the group.  If you have any last minute comments please contact me or share via the list as soon as possible.  We’ve decided to respond only to consultation question 18:

Q.18 Do you agree with the proposal for using quantitative data to inform the assessment of outputs, where considered appropriate for the discipline? If you agree, have you any suggestions for data that could be provided to the panels at output and aggregate level?

We agree that quantitative data can support the assessment of outputs where considered appropriate by the discipline.  Any use of quantitative data should follow the principles for responsible use of metrics set out in the Metric Tide and the Leiden Manifesto.

  • Disciplinary difference, including citation patterns varying by output type, must be taken into account.
  • Data should only be used if it offers a high standard of coverage, quality and transparency. Providing data from a range of sources (e.g. Scopus, Web of Science, Google Scholar) would allow the panel to benefit from the strengths of each source whilst highlighting the limitations.
  • Known biases reflected by bibliometric indicators (e.g. around interdisciplinary research and gender) should be taken into account.
  • A range of data should be provided to avoid incentivizing undesirable side effects or gaming by focusing attention on a single indicator.
  • Given the skewed distribution of citations, and the ‘lumpiness’ of citations for recent publications in particular, we recommend that measures of uncertainty be provided alongside any citation data (see the illustrative sketch after this list). At the very least, false precision should be avoided.
  • In addition to citation indicators, panels should take into account the number of authors of the output.
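As a purely illustrative aside (not part of the formal response), the sketch below shows one way uncertainty could be expressed alongside a citation indicator: a bootstrap ‘stability interval’ around the mean citation count of a small, skewed set of outputs. The citation counts, the 95% level and the resampling settings are invented assumptions for the example, not a prescription for how REF data providers would do it.

```python
# Purely illustrative: a bootstrap "stability interval" as one possible measure
# of uncertainty to accompany a mean-citation indicator. The citation counts
# below are invented; real REF data would come from the panels' data provider.
import random

def bootstrap_interval(citations, n_resamples=10_000, level=0.95, seed=1):
    """Percentile interval for the mean citation count, via resampling with replacement."""
    rng = random.Random(seed)
    n = len(citations)
    means = sorted(
        sum(rng.choices(citations, k=n)) / n for _ in range(n_resamples)
    )
    lo = means[int((1 - level) / 2 * n_resamples)]
    hi = means[int((1 + level) / 2 * n_resamples) - 1]
    return lo, hi

# A skewed, 'lumpy' set of recent outputs: mostly low counts, one outlier.
counts = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]
low, high = bootstrap_interval(counts)
print(f"mean = {sum(counts) / len(counts):.1f}, 95% interval = ({low:.1f}, {high:.1f})")
```

The point of the exercise is simply that the interval is very wide for a small, skewed set of counts, so reporting the mean as a single precise figure would be exactly the kind of false precision the list above warns against.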

Panels should receive training on understanding and interpreting the data and be supported by an expert bibliometric advisor.

We do not consider the field-weighted citation impact indicator appropriate for the assessment of individual outputs: as an arithmetic-mean-based indicator it is too heavily skewed by small numbers of ‘unexpected’ citations.  Furthermore, its four-year citation window would not capture the full citation impact of outputs from early in the REF period.  The use of field-weighted citation percentiles (i.e. the percentile n such that the output is among the top n% most cited outputs worldwide for its subject area and year of publication) or percentile bands (as used in REF2014) is preferable.  Percentile-based indicators are more stable and easier to understand, as the “performance” of papers is scaled from 1 to 100, but they can be skewed by large numbers of uncited items.
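To make the percentile idea concrete, here is a minimal sketch (our illustration, with invented citation counts and a hypothetical field/year reference set) of how a ‘top n%’ percentile might be derived for a single output; real data providers define reference sets and handle ties in their own documented ways.

```python
# Illustrative only: a "top n%" citation percentile for one output, computed
# against a hypothetical reference set of outputs from the same subject area
# and publication year. Tie handling here is deliberately conservative.

def top_percentile(output_citations: int, reference_citations: list[int]) -> float:
    """Smallest n such that the output sits within the top n% most cited
    outputs of its reference set (assumed to include the output itself)."""
    if not reference_citations:
        raise ValueError("reference set must not be empty")
    cited_at_least_as_much = sum(c >= output_citations for c in reference_citations)
    return 100 * cited_at_least_as_much / len(reference_citations)

# A typically skewed field/year reference set: many uncited items, one outlier.
field = [0, 0, 0, 0, 0, 1, 1, 2, 3, 5, 8, 13, 40, 250]
print(top_percentile(40, field))  # ~14.3: roughly the top 15% for its field and year
print(top_percentile(0, field))   # 100.0: every uncited item ties at the bottom
```

The second call illustrates the caveat above: when many items in the reference set are uncited, they all tie at the worst rank, which is why percentile distributions can be skewed by large numbers of uncited items.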

Output-level citation indicators are less useful for recent outputs.  Consequently, it might be tempting to look at journal indicators instead.  This temptation should be resisted!  Given the wide distribution of citations to outputs within a journal, and issues of unscrupulous ‘gaming’, journal metrics are a poor proxy for individual output quality.  Furthermore, use of journal metrics would incentivize the pursuit of a few ‘high impact’ journals to the detriment of timely, diverse and sustainable scholarly communications.

Use of aggregate-level data raises the question of whether the analysis is performed only on the submitted outputs, or on the entire output of the institution during the census period. The latter would provide a more accurate picture of the institution’s performance within the discipline, but automatically mapping outputs to REF units of assessment is extremely challenging.  Furthermore, it would be hard to disaggregate those papers written by staff who are not eligible for submission to the REF.

Katie Evans, on behalf of the Lis-Bibliometrics Committee

Note: This replaces an earlier draft REF consultation response posted on 1st March 2017.   

My double life: playing and changing the scholarly communications game. By Lizzie Gadd

I love my job.  I work as a “Research Policy Manager (Publications)” at Loughborough University, and I spend my time understanding and advising on how we can improve the quality and visibility of our research.  However, the strategies for achieving this aren’t as straightforward as you might think.  And increasingly I feel like I’m leading a double life, seeking both to play and to change the scholarly communication game.


‘Communication’ by Jackie Finn-Irwin CC-BY 2.0

 

What do I mean by this?  Well, the game we’re in is one where publications mean prizes. If others rate them (e.g. in the REF) or cite them (as measured by the University League Tables), you win. To be a winner, we know you need to produce quality research (of course), collaborate internationally (improves quality and visibility), and publish in those journals that are indexed by the tools that both expose your research to the world and, importantly, do the citation measuring for the aforesaid REF and University League Tables.  And although there is a huge backlash against using journal metrics as an indicator of the quality of the underlying research, there is no doubt that getting a paper into a journal with extremely rigorous quality standards still means something to academics and their peers.

So the current game is inherently tied up with journal publications.  And there are two well-rehearsed reasons why this is not a good thing. The first is that journals are expensive – and getting more expensive. The second reason is that journals are slow at communicating research results.  Publication delays of years are not uncommon. (There are of course other objections to journals, not least the murky waters about how an end-user may re-use journal content, but I won’t go into these here.)

This is why we need to change the game. And the best option we have for changing the game is to keep producing quality research and collaborating internationally, but to also create new means of scholarly communication that are neither expensive, nor slow.

Some might argue that you can have it both ways: publish in a journal which has a liberal green open access policy.  This will allow you to provide immediate access to the research through the preprint, and access to the peer-reviewed research through the postprint.  And to be honest, this is the compromise we currently go for.  But this form of open access is showing no signs of reducing the cost of subscriptions.  And not all journals have liberal green open access policies. And not all academics want to release their preprint until it has been accepted by a journal, in case the paper is rejected – so this rather defeats the object.

Now there are many alternative models of scholarly communication that ARE inexpensive and speed up publication.  These include preprint archives or ‘diamond’ open access journals that charge neither the author to submit nor the reader to read.  However, the problem is that these are not picked up by the citation benchmarking tools.  This is either because they are not journals at all (preprint archives) or because they are new entries to the market so have yet to prove their significance in the field and be selected for inclusion.

So what does a Research Policy Manager (Publications) do?  Well, it seems to me like I have two equally unsatisfactory options.  The first is to continue to play the journals game in order to ensure the citedness of our research is captured by the key citation benchmarking tools, but to encourage OA as a means of improving the visibility and discoverability of our work.  Whilst this isn’t going to speed up publication or reduce our costs, I think the lure of a high-quality journal may well be a driver of research quality – which is very important.

The second option is to dramatically shift our focus onto new forms of scholarly communication that will speed up publication rates and reduce our costs, such as preprint archives and diamond OA journals.  And by so doing, we’d need to hope that the well-documented citation advantage for immediately open research will do its thing. And that when the research is considered by the REF, the panels really will just focus on the content as they promise, and not the reputation of the vehicle it is published in.  Always bearing in mind that any citations the research does accrue will only be picked up by open tools such as Google Scholar and not the tools that supply the REF – or the league tables.

So to answer my own question – what does a Research Policy Manager advise in these circumstances? Personally, I try to live with one whilst lobbying for the other, and as far as possible seek to ameliorate any confusion faced by our academics.  This is easier said than done – certainly when faced with later-career academics who can remember a time when research was optional and where you published was entirely your business.  To now be faced with a barrage of advice around improving the quality, accessibility, visibility and citedness of your work, bearing in mind that the routes to these are often in conflict with each other, is a constant source of agony for both them and me.

I recognise that we have to play the game. Our reputation depends on it.  But we also have to change the game and provide quicker and more affordable access to (re-usable) research results. At the risk of sounding over-dramatic, the future may depend on it.

 

Elizabeth Gadd