REF consultation: Lis-Bibliometrics response

The four UK higher education funding bodies are consulting on proposals for the next Research Excellence Framework.  Thank you to all Lis-Bibliometrics members who have contributed their thoughts on this.  Here is a draft response the Lis-Bibliometrics Committee intends to submit on behalf of the group.  If you have any last-minute comments, please contact me or share them via the list as soon as possible.  We’ve decided to respond only to consultation question 18:

Q.18 Do you agree with the proposal for using quantitative data to inform the assessment of outputs, where considered appropriate for the discipline? If you agree, have you any suggestions for data that could be provided to the panels at output and aggregate level?

We agree that quantitative data can support the assessment of outputs where considered appropriate by the discipline.  Any use of quantitative data should follow the principles for responsible use of metrics set out in The Metric Tide and the Leiden Manifesto.

  • Disciplinary difference, including citation patterns varying by output type, must be taken into account.
  • Data should only be used if it offers a high standard of coverage, quality and transparency. Providing data from a range of sources (e.g. Scopus, Web of Science, Google Scholar) would allow the panel to benefit from the strengths of each source whilst highlighting the limitations.
  • Known biases reflected by bibliometric indicators (e.g. around interdisciplinary research and gender) should be taken into account.
  • A range of data should be provided to avoid incentivizing undesirable side effects or gaming by focusing attention on a single indicator.
  • Given the skewed distribution of citations, and the ‘lumpiness’ of citations for recent publications in particular, we recommend measures of uncertainty be provided alongside any citation data (see the sketch after this list). At the very least, false precision should be avoided.
  • In addition to citation indicators, panels should take into account the number of authors of the output.
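As an illustrative aside to the uncertainty point above: one simple way to avoid false precision for recent outputs is to report an interval rather than a single figure.  The Python sketch below uses entirely made-up citation counts and a hypothetical top_percentile helper, and bootstraps the reference set to put an interval around a citation percentile; a real implementation would draw its reference set from the citation database itself.

# Toy sketch (hypothetical data): report an interval for a citation percentile
# rather than a single, falsely precise figure for a recently published output.
import random

random.seed(42)

def top_percentile(citations, reference_set):
    # Smallest n such that the output is among the top n% most cited in its reference set.
    at_or_above = sum(1 for c in reference_set if c >= citations)
    return 100 * at_or_above / len(reference_set)

# Made-up citation counts for same-field, same-year outputs; recent years are 'lumpy',
# with many uncited papers and a few early-cited ones.
reference = [0] * 40 + [1] * 20 + [2] * 10 + [3, 3, 4, 5, 6, 8, 12, 20]
paper = 3  # citations received so far by the output being assessed

# Bootstrap: resample the reference set with replacement and recompute the percentile.
estimates = sorted(
    top_percentile(paper, random.choices(reference, k=len(reference)))
    for _ in range(2000)
)
low = estimates[int(0.025 * len(estimates))]
high = estimates[int(0.975 * len(estimates))]

print(f"Point estimate: top {top_percentile(paper, reference):.1f}%")
print(f"95% bootstrap interval: roughly top {low:.1f}% to top {high:.1f}%")

Presenting a range of this kind, rather than a bare number, makes the 'lumpiness' of early citation data visible to panel members.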

Panels should receive training on understanding and interpreting the data and be supported by an expert bibliometric advisor.

We do not consider the field-weighted citation impact indicator appropriate for the assessment of individual outputs: as an arithmetic-mean-based indicator it is too heavily skewed by small numbers of ‘unexpected’ citations.  Furthermore, its four-year citation window would not capture the full citation impact of outputs from early in the REF period.  The use of field-weighted citation percentiles (i.e. the percentile n such that the output is among the top n% most cited outputs worldwide for its subject area and year of publication) or percentile bands (as used in REF2014) is preferable.  Percentile-based indicators are more stable and easier to understand, as the “performance” of papers is scaled from 1 to 100, but they can be skewed by large numbers of uncited items.
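The skewness argument can be seen with a toy calculation.  The numbers and helper functions below are made up (a real FWCI calculation normalises by expected citations for outputs of the same field, year and publication type), but they show the point: adding a single very highly cited paper to a reference set collapses a mean-based ratio by roughly a factor of ten, while the same output’s citation percentile barely moves.

# Toy illustration (hypothetical numbers): a mean-based indicator in the style of
# FWCI is far more sensitive to one 'unexpected' highly cited paper than a percentile.
from statistics import mean

def mean_ratio(citations, reference_set):
    # FWCI-style ratio: the output's citations divided by the reference-set mean.
    return citations / mean(reference_set)

def top_percentile(citations, reference_set):
    # Smallest n such that the output is among the top n% most cited in its reference set.
    at_or_above = sum(1 for c in reference_set if c >= citations)
    return 100 * at_or_above / len(reference_set)

# Made-up citation counts for outputs in one subject area and publication year.
reference = [0, 0, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 30, 40, 60, 90, 150]
paper = 12  # citations to the output being assessed

print(mean_ratio(paper, reference))       # ~0.52: a little below the field mean
print(top_percentile(paper, reference))   # 40.0: in the top 40% of its reference set

# One extreme outlier joins the reference set (a single very highly cited paper).
reference_outlier = reference + [5000]
print(mean_ratio(paper, reference_outlier))      # ~0.05: the ratio collapses roughly tenfold
print(top_percentile(paper, reference_outlier))  # ~42.9: the percentile barely moves

This is why percentile bands let panels compare like with like without allowing one outlier to dominate the denominator, although, as noted above, large numbers of uncited items still distort the picture.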

Output-level citation indicators are less useful for recent outputs.   Consequently, it might be tempting to look at journal indicators.  This temptation should be resisted!  Given the wide distribution of citations to outputs within a journal, and issues of unscrupulous ‘gaming’, journal metrics are a poor proxy for individual output quality.  Furthermore, use of journal metrics would incentivize the pursuit of a few ‘high impact’ journals to the detriment of timely, diverse and sustainable scholarly communications.

Use of aggregate-level data raises the question of whether the analysis is performed only on the submitted outputs, or on the entire output of the institution during the census period. The latter would provide a more accurate picture of the institution’s performance within the discipline, but automatically mapping outputs to REF units of assessment is extremely challenging.  Furthermore, it would be hard to disaggregate those papers written by staff who are not eligible for submission to REF.

Katie Evans, on behalf of the Lis-Bibliometrics Committee

Note: This replaces an earlier draft REF consultation response posted on 1st March 2017.   

What are you doing today?

The Lis-Bibliometrics-commissioned, Elsevier-sponsored bibliometric competencies research project is seeking to develop a community-supported set of bibliometric competencies, particularly for those working in libraries and other related services.  You can take part by completing the bibliometrics competencies survey at: https://survey.shef.ac.uk/limesurvey/index.php?sid=27492&lang=en

To get a flavour of the variety of bibliometric work going on, I asked fellow Lis-Bibliometrics Committee members what they’re doing today:

“Today I’m helping a researcher clean up his very muddled and duplicated Scopus Author IDs and link his outputs to his ORCID iD. I’m also thinking about how best to benchmark the output of our Law school against our competitors for undergraduate students.” Karen Rowlett, Research Publications Adviser, University of Reading

“Today I’m discussing the release of our Responsible Metrics Statement (now approved by Senate) with our PVCR; running some analyses on SciVal which look at the impact of Loughborough’s conference publications on our overall citation performance; and presenting at a cross-university meeting aimed at exploring how to improve the visibility of our research.” Elizabeth Gadd, Research Policy Manager (Publications), Loughborough University

“Today I am preparing a presentation on Metrics for one of the teams, and working on analysing the Leiden Ranking data.” Sahar Abuelbashar, Research Metrics Analyst, University of Sussex

Meanwhile, I’m advising researchers on using citation metrics in grant applications.  What are you doing today?

Katie Evans

Research Analytics Librarian, University of Bath

In search of better filters than journal metrics

When I started working with bibliometrics I was aware of the limitations and criticisms of journal metrics (journal impact factor, SJR, SNIP etc.) so I avoided using them.  A couple of years on, I haven’t changed my mind about any of those limitations and yet I use journal metrics regularly.  Is this justifiable?  Is there a better alternative?  At the moment, I’m thinking in terms of filters.

[Image: high pile of hardcover books. CC BY Alberto G]

Martin Eve was talking about filters because of their implications for open access: the heavy reliance on the reputation of existing publications and publishers makes the journey towards open access harder than you might expect.  But it set me thinking about journal metrics.  Academics (and other users and assessors of scholarly publications) use journal metrics as one way of deciding which journals to filter in or out.  The scenarios presented in Dr Ludo Waltman’s CWTS Blog post “The importance of taking a clear position in the impact factor debate” illustrate journal impact factors being used as filters.

Demand for assessment of journal quality to fulfil filtering functions drives much of my day-to-day use of journal metrics.  Given the weaknesses of journal metrics, the question is: is there something else that would make a better filter?

Article-level citation metrics cannot completely fulfil this filtering function: in order for a paper to be cited, it must first be read by the person citing it, which means it needs to have got through that person’s filter – so the filter has to work before there are citations to the paper.  Similarly for alternative metrics based around social media etc.  A paper needs to get through someone’s filter before they’ll share it on social media.  So we end up turning to journal metrics, despite their flaws.

Could new forms of peer review serve as filters?  Journal metrics are used as a proxy for journal quality, and journal quality is largely about the standards a journal sets for whether a paper passes peer review or not.  Some journals (e.g. PLoS One) set the standard at technically sound methodology; others (e.g. PLoS Biology) also require originality and importance to the field.   Could a form of peer review, possibly detached from a particular journal, but openly stating what level of peer-review standard the article meets, be the filtering mechanism of the future?  Are any of the innovations in open peer review already able to fulfil this role?

Comments, recommendation, and predictions welcome!

Katie Evans

(Katie is Research Analytics Librarian at the University of Bath and a member of the LIS-Bibliometrics committee, but writes here in a personal capacity)