Guest Post: Implementing responsible research assessment – governance and rankings

Part two of a three-part blog summary of the discussion at the ‘Implementing responsible research assessment’ panel at the LIS-Bibliometrics 10th anniversary event, The Future of Research Evaluation, in March 2020. Some of the questions put to the panel and the panel’s responses have been collected and grouped by theme. The panel comprised Sarah Slowe (University of Kent), Steven Vidovic (University of Southampton) and Karen Desborough (Cardiff University), and this post addresses the questions the panel received about institutional rankings.

Q: How did you decide who implemented and/or led on responsible metrics implementation? 

This question has as many answers as there are institutions represented on the panel. Overall, it is a matter of taking a route that fits each institution.

  • At the University of Southampton, the Library brought the problem of research assessment to the attention of the Research Integrity Governance Committee, which endorsed the Library as subject experts to lead the implementation.
  • At the University of Kent, the question was taken to the Research and Innovation Board, which wanted the resource in place to implement the principles before committing to them. This led to the founding of the Office for Scholarly Communication. An academic-led internal board provided the resources and the appropriate positioning within the academic community and professional services departments to manage the implementation.
  • At Cardiff University, the Pro Vice-Chancellor for Research, Innovation and Enterprise is a passionate champion of inclusive research culture and took the question to the University Executive Board, which created a Dean’s post encompassing the leadership and implementation of DORA.

Q: If (some) world university rankings are compiled on the basis of irresponsible metrics – are we putting ourselves at risk by ‘stepping out’ of the metrics game? 

Across an institution there will be pressure to perform in the rankings. Colleagues will cite the role rankings play in attracting prospective staff and students, something we agree is valuable. However, paying attention to rankings and promoting them is visible support for the rankings system.

Despite widespread criticism, such as that set out in the Metric Tide report, rankings continue to be compiled by self-appointed organisations without appropriate or transparent methodologies. Academia has given weight to rankings by paying attention to them and by not publicly criticising them.

We are aware of an ‘arms race’ to improve positions in the rankings, but not every unit, department, faculty or institution in the world can be in the top 100, 20 or 10. We’re all playing the same game and barely affecting our positions. The rankings are altering institutional behaviour, and we’re investing significant time, resources and money into ‘improving’ our ranking.

This is problematic because there are two kinds of key performance indicator (KPI): the kind that describes your situation and the kind that influences your situation. Institutional rankings are best described as the first type, but institutions are misusing them as the second. We need to develop and promote the second type of KPI so that we can do well despite the rankings, not because of them.

To do this, we focus on the things that researchers can do: making outputs openly available, discoverable and linked to open data, and encouraging the use of ORCiD. We also encourage data cleansing in the systems that rankings use, ensuring that publications and authors are correctly associated with the institution. All communications regarding rankings should be transparent and accompanied by the methodology, and we should question the motivation behind any attempt to change institutional mission or practice in pursuit of ranking performance.


Q: How do you counter the argument that journal rankings or quartiles are a fair proxy for article quality because of the rejection rate or perceived quality of the peer-review process for ‘top’ journals?

There are many reasons an individual chooses a particular venue to publish in, and we can’t assume that an article appearing in a lower-ranked journal was rejected from higher-ranked ones. Those assumptions stifle interdisciplinarity and diversity, and they can be damaging to individuals’ careers. Also remember that ‘top’ journals are not immune to the same biases and misjudgements as any ‘not-top’ journal.

Q: Business schools are accredited and evaluated against a target list of journals. How do you discuss responsible metrics with this community? 

Business schools face particular challenges in relation to responsible metrics. We need to discuss the underpinning principles of responsible research assessment to help this community recognise the broader argument. The ABS ranking is meant to be a recommended publishing list, and it should only be used that way, if at all. The way we discuss the rankings can encourage colleagues to see the limitations of the list: reminding authors about audience, impact and reach helps them recognise that a 1 or 2* journal might be a more appropriate venue for high-quality research. Furthermore, interdisciplinary research may be more appropriately published in a journal not ranked by ABS at all!


Sarah Slowe is the Head of the Office for Scholarly Communication at the University of Kent. She has a background in research support and helps researchers maximise the dissemination of their work. Sarah has pioneered the responsible metrics culture at the University of Kent, with a focus on equipping researchers to use appropriate metrics about their own work and the work of others. Sarah is enthusiastic about researcher-focussed services, passionate about co-produced research and loves a challenge.

https://orcid.org/0000-0002-4951-8834

Steven Vidovic is the Open Research Development Manager at the University of Southampton. Steven is responsible for managing and informing the University’s open research strategy and also has interests in publishing ethics and research integrity. Steven holds a doctorate in palaeontology and has previously managed Biological, Earth and Environmental Sciences journals for an international publisher. He is currently the chair of DOAJ’s Advisory Board and contributes to their Council, and is a member of RLUK’s OAPP group.

https://orcid.org/0000-0002-4726-8018

Karen Desborough is the Responsible Research Assessment Officer at Cardiff University. She is responsible for supporting the implementation of the DORA action plan, working under the direction of the Dean for Research Environment and Culture. Karen’s role includes the delivery of training and information sessions around responsible research assessment and monitoring adherence to DORA principles.

https://orcid.org/0000-0002-8921-5985


Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.
