David Whyte from the University of Liverpool thinks his institution is making a mockery of established standards in research metrics. In this post he breaks down why he thinks so and how this could affect the broader research community.
I’ll begin by putting my cards on the table. This blog comes from a very particular perspective. I am an officer in the University of Liverpool branch of the Universities and Colleges Union (UCU), currently on strike to protect 32 jobs in the Faculty of Health and Life Sciences.
I believe that research metrics should not be used to sack people.
Some would argue with me on this point, I’m sure. But there are aspects of the Liverpool dispute that I’m sure almost everyone who works with research performance data would agree with me on. I am quite sure we could agree on the principle that researchers should not be sacked using research metrics that they have never been made aware of. I am equally sure that we could agree that research performance should be measured using robust standards, not hastily concocted measures that are designed to be used exclusively by managers for the purposes of redundancy selection.
None of the staff selected for redundancy at the University of Liverpool have been given any information about the way the metrics used to judge them were applied or calculated. Indeed, none have ever had any research performance issues raised in their PDRs or by line management. Neither were they given the opportunity to ensure the data applying to them was accurate. Indeed, the criteria used to assess them were designed by the same managers that sought to sack them in the first place.
The University announced in January 2021 that 47 academic staff had been selected for redundancy based on a combination of grant income targets and a citation score of >2, as measured by Elsevier’s Field Weighted Citation Impact (FWCI) indicator. Those criteria were then mitigated for individuals in a leading or specialist institutional role, and for those who had taken parental leave or long-term sickness absence (although in fact a number with extended sickness absence during the period considered, and one with two periods of maternity leave, remained in the redundancy pool). Evidence submitted to the University of Liverpool by the UCU showed that FWCI is meaningless when applied to individuals’ citation scores (we used an earlier blog published here as part of the evidence in this submission).
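To see why a per-person FWCI threshold is so fragile, it helps to sketch roughly how the indicator works: each paper’s citations are divided by the expected citations for papers of the same field, year, and document type, and an individual’s score is an average over their papers. The numbers below are entirely hypothetical (FWCI is computed inside Elsevier’s SciVal, not by this code), but they illustrate how a single well-cited paper can push a researcher over or under a >2 cut-off.

```python
# Illustrative sketch only: real FWCI values come from Elsevier/SciVal.
# All citation counts and expected values below are hypothetical.

def fwci(citations, expected):
    """Field Weighted Citation Impact of a single paper: actual citations
    divided by the world-average expected citations for papers of the
    same field, publication year, and document type."""
    return citations / expected

# A researcher with a small number of papers: one well-cited paper dominates.
papers = [(40, 5.0), (2, 5.0), (1, 5.0)]  # (citations, expected), hypothetical
scores = [fwci(c, e) for c, e in papers]
mean_fwci = sum(scores) / len(scores)
print(round(mean_fwci, 2))  # 2.87 -- above the >2 threshold, driven by one paper

# Drop the single outlier and the same researcher falls far below the threshold.
rest = scores[1:]
print(round(sum(rest) / len(rest), 2))  # 0.3
```

With only a handful of outputs, the denominator of the average is tiny, so one paper swings the whole score; this is the statistical instability behind the argument that FWCI is not meaningful at the level of individuals.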
UCU also submitted extensive evidence condemning the University for its failure to abide by established international standards of a “rounded qualitative” assessment. The authors of the Metric Tide, The Hong Kong Principles and the Leiden Manifesto supported UCU’s view in public. Indeed, the UK Forum on Responsible Research Metrics wrote to University of Liverpool Vice-Chancellor Janet Beer to inform her that “any narrow set of specific indicators of individual performance (including average research income or Field Weighted Citation Impact) cannot provide a methodologically rigorous, fair or responsible basis on which to evaluate or assess individual research performance.”
As a result of this evidence and pressure, the University published new redundancy selection criteria in May. When it did so, the University of Liverpool boldly stated that the criteria “are in keeping with the principles of DORA and this has been confirmed, in writing, by both DORA and Research England.” When staff at the University of Liverpool put this to Stephen Curry, the Chair of the Declaration on Research Assessment, at a public meeting on the 1st June, his response was:
“I have given an opinion based on information that is on Liverpool’s website… I was a little bit concerned that it was shared, I thought it was a private conversation…DORA is vulnerable to being played by either side.”
Researchers at the University of Liverpool are now seeking answers to how the University has been able to use the imprimatur of DORA without reproach.
This is explained by how the University of Liverpool has sought to assuage criticisms made by the international research metrics community. Following those criticisms, a hastily assembled “rounded qualitative assessment” was designed by senior management. In this so-called “qualitative” assessment, the University required two of the following four criteria to be met:
- Two or more publications assessed as world-leading in terms of originality, significance and rigour, as Lead, Corresponding or Senior Author
- Research Excellence Framework (REF) Impact Case Study Lead
- Evidence of significant non-research income (from continuing professional development, consultancy, or commercial activity)
- Known substantial contribution to teaching delivery (80% or more teaching load) or programme leadership
It does not take long for anyone with knowledge of research performance measurement to see that this is a sham qualitative assessment.
But the proof of the pudding…is in the data crunching. When this assessment was applied at the end of April, not one person from the original group of 47 was removed. The qualitative elements introduced in the second iteration of the redundancy selection actually saved no one!
This is a qualitative assessment that failed to take one single person out of the original pool. This is even more astounding when we consider that many of those in the pool are some of the most respected researchers in their field in the world. So how could this qualitative assessment fail to pick this up?
To understand the true nature of this rounded individual assessment, we need to look a little more closely at how it worked.
The first, the “qualitative” assessment of publications, was done in an impossibly short space of time by a panel of nine people without the necessary range of expertise. The expectation was that two papers should be assessed as 4* REF quality (read and scored by those who proposed the redundancies in the first place!). Even if the assessment could have been done impartially and rigorously, the level of expectation is clearly unreasonably high. “World-leading” has a very clear meaning in research assessment processes: it means that the paper can be expected to receive a REF grading of 4*. The University has never demanded this explicitly in any policy; indeed, in its published Research Policy Principles, reference is made to an aspiration of regular publication at 3* level, but this is not set out in prescriptive terms. Moreover, the University’s published core role expectations do not come close to demanding this level of research performance.
The second criterion, completion of an Impact Case Study (ICS), is something that is controlled by Research England’s REF rules. Under those rules, one ICS is submitted for every 10 to 15 research-active staff. Again, this cannot be considered anything other than setting the bar at a level that is unobtainable for the majority of teaching and research staff.
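The arithmetic here is worth spelling out. If submission rules allow roughly one ICS per 10 to 15 research-active staff, then by construction only a small fraction of staff in any unit can ever have been an ICS lead. A quick back-of-envelope calculation (the unit sizes are hypothetical) makes the ceiling explicit:

```python
# Back-of-envelope arithmetic: with one Impact Case Study allowed per
# 10-15 research-active staff, the share of staff who could possibly
# satisfy the "ICS Lead" criterion is capped at roughly 7-10%.
# Unit sizes below are hypothetical examples, not Liverpool data.
for staff, case_studies in [(10, 1), (15, 1), (60, 5)]:
    share = case_studies / staff
    print(f"{staff} staff, {case_studies} ICS -> at most {share:.0%} can be a lead")
```

However the ratio falls within the 10-to-15 range, around nine in ten research-active staff are excluded from this criterion before any assessment of their work takes place.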
The third criterion is effectively an extension of the original research grant income target and it includes data that we would have expected to see in any research grant income assessment. This is not a serious ‘qualitative’ measure.
The fourth sets out an expectation that if a researcher is teaching for 80% or more of their time, then this should feature in the assessment. Again, the bar has been set at an unfeasibly high level. The accepted custom and practice expectation at the University of Liverpool on teaching and research contracts is 40% teaching/40% research/20% administration. If the teaching load carried by anyone on a teaching and research contract approached anything close to 80%, major questions would be raised about the managers who had prevented those staff from fulfilling their contracted duties.
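Put against the contract split cited above, the threshold is transparently out of reach. A minimal sketch of the arithmetic (using only the figures already quoted in this post):

```python
# The standard teaching/research contract split cited in this post,
# versus the 80% teaching-load threshold in the fourth criterion.
standard_split = {"teaching": 0.40, "research": 0.40, "administration": 0.20}
threshold = 0.80

# The criterion demands double the contracted teaching allocation.
print(threshold / standard_split["teaching"])  # 2.0
```

In other words, meeting this criterion requires a teaching load twice the contractual norm, which is precisely why it functions as an exclusion rather than a measure.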
If this assessment was applied in my own Department, only 6% of research active staff would reach the bar. There can be little doubt that this “rounded” assessment is nothing of the sort. It is an assessment that is designed to be unobtainable for the majority of staff and to give managers the discretion to sack individuals at will.
This explains why no one was removed from the redundancy pool when these “qualitative” criteria were applied. It also explains why some of our best researchers and our best teachers have been caught up in this storm.
I, along with all of my colleagues who voted to take industrial action to stop this charade, believe the University of Liverpool is making a mockery of established standards in research metrics and is devaluing everything we do that sustains the research community. Its senior managers may think that by repackaging this scheme as “rounded” it will be perceived as such. It is now time for the academic community to stand up and challenge the violence that is being done by my employer to standards of responsible metrics. More than a thousand of us are on strike at the University of Liverpool to make sure that our senior managers do not get away with this.
David Whyte is an Officer with the University of Liverpool branch of the Universities and Colleges Union (UCU) and Professor of Socio-Legal Studies. Professionally, he engages with numerous campaign organizations and NGOs related to his research.
Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.