Robyn Price from Imperial College London briefs us on details of the THE Impact Ranking methodology
The fourth edition of the Times Higher Education (THE) Impact Rankings is due to publish in April 2022. It will surely be followed by participating institutions communicating their successful performance, while non-participating institutions may be considering whether to take part in future editions.
THE’s Impact Rankings are framed around the Sustainable Development Goals (SDGs), a set of 17 goals defined by the United Nations in 2015 that relate to complex international development and environmental issues. Whilst universities are not directly addressed in the UN’s SDGs, through their core subfunctions of research, education, operations, governance and external leadership they are essential to the SDG endeavour.
THE has translated the UN’s SDGs into targets against which it can measure universities. These are a mixture of traditional bibliometric indicators and indicators that assess organisational operations and policies. I broadly agree with this approach: a university wanting to be recognised for its research on the climate crisis (SDG 13: Climate Action) is also assessed on its carbon footprint, its climate change disaster planning, its commitments to carbon-neutral operations, and more. Prompting organisations to document their responses to the SDGs, in both their research and their values and management policies, is welcome. However, unpicking the methodology and reflecting on the nature of the ranking, I am left unsure whether this is really happening.
The bibliometric indicators
- Number of publications (worth between 7% and 13.55% for every SDG)
The total number of outputs matching terms relevant to the SDG in a Title, Abstract or Keyword search. It is not clear why the weight of this metric fluctuates between SDGs: research publications contribute 7% towards SDG 2: Zero Hunger, for example, but 13% towards SDG 8: Decent Work and Economic Growth.
From the perspective of respecting the value of all forms of research output, it is positive that this publication count metric is not limited to journal articles.
THE states that it will also include books, conference proceedings and trade publications, but are these output types indexed comprehensively enough by their source (Scopus) to contribute? The query terms used to match outputs to SDGs are made available, but the actual publication sets defined by THE for the rankings are not.
Joint authorship is respected, so an output with co-authors at different institutions counts towards each institution’s score. Even for hyper-authored papers, each institution receives the same full credit, with no weighting. This is the opposite of the THE World University Rankings, which use a fractional method to apportion authorship credit.
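The difference between the two counting methods can be sketched in a few lines of Python. This is an illustrative example with hypothetical papers and institution names, not THE’s actual implementation:

```python
from collections import defaultdict

def full_counts(papers):
    """Full counting (Impact Rankings): every institution on a paper
    receives one whole credit, however many co-institutions there are."""
    scores = defaultdict(float)
    for institutions in papers:
        for inst in sorted(set(institutions)):
            scores[inst] += 1.0
    return dict(scores)

def fractional_counts(papers):
    """Fractional counting (World Rankings): each paper's single credit
    is split equally among its contributing institutions."""
    scores = defaultdict(float)
    for institutions in papers:
        unique = sorted(set(institutions))
        for inst in unique:
            scores[inst] += 1.0 / len(unique)
    return dict(scores)

# A hypothetical paper co-authored across three institutions, plus one
# solo-authored paper from institution A.
papers = [["A", "B", "C"], ["A"]]
print(full_counts(papers))        # A gets 2 credits; B and C get 1 each
print(fractional_counts(papers))  # A: 1 + 1/3; B and C: 1/3 each
```

Under full counting, a hyper-authored paper with hundreds of institutions hands every one of them the same credit as a solo-authored paper would.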
- CiteScore (worth between 10% and 14% for some SDGs; absent from others)
The proportion of publications (as determined by the publication count above) that appear in the top 10% of all journals in Scopus by CiteScore, normalised by the total number of publications in the same period. CiteScore is Elsevier’s equivalent of the Journal Impact Factor: it counts the number of citations received by a journal over the preceding four-year period and divides this by the number of publications in that journal over the same four years. Some support CiteScore as a more robust alternative to the Impact Factor because of its longer citation window and more inclusive article types and journal indexing, but it is still a journal-based metric and should not be used to assess the quality of individual outputs or authors.
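The calculation itself is simple to sketch. The figures below are hypothetical, and this simplifies Elsevier’s exact rules about which document types are counted:

```python
def citescore(citations_in_window, docs_in_window):
    """Simplified CiteScore: citations received over a four-year window
    to documents published in that same window, divided by the number
    of those documents."""
    return citations_in_window / docs_in_window

# A hypothetical journal: 1,200 citations to 400 documents published
# across the four years 2018-2021.
print(citescore(1200, 400))  # 3.0
```

Like any such ratio, it describes the average for a journal, which says little about where any single article within it falls.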
- Field-Weighted Citation Impact (FWCI) (worth 10% for some SDGs; absent from others)
The number of citations received by an item, normalised by publication type, year of publication and subject. The mean FWCI of all an institution’s publications for the SDG is found, and a cumulative distribution function of a normal distribution is then applied, granting each institution a score from 0 to 100. FWCI is an unstable metric for small sample sizes and recently published outputs, which can result in skew from outliers.
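A minimal sketch of that final scaling step, assuming a cohort mean and standard deviation that I have invented for illustration (THE does not publish the parameters it uses):

```python
from math import erf, sqrt

def normal_cdf(x, mean, std):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1 + erf((x - mean) / (std * sqrt(2))))

def sdg_citation_score(inst_mean_fwci, cohort_mean, cohort_std):
    """Place an institution's mean FWCI within an assumed normal
    distribution of all institutions' means, scaled to 0-100."""
    return 100 * normal_cdf(inst_mean_fwci, cohort_mean, cohort_std)

# Hypothetical cohort with mean FWCI 1.0 and standard deviation 0.4:
print(round(sdg_citation_score(1.0, 1.0, 0.4)))  # 50: exactly average
print(round(sdg_citation_score(1.8, 1.0, 0.4)))  # two standard deviations up
```

Because the input is a mean, one highly cited paper in a small publication set can drag an institution’s whole 0–100 score upwards.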
- Scopus as the bibliometric data source
Scopus is not a comprehensive database of research outputs. Studies estimate that it contains approximately 58% of Google Scholar’s content, of which roughly 93% is written in English, predominantly from STEM disciplines. This bias will advantage some universities over others. It is also a separate, paid-for database, so only subscribers to Scopus will be able to attempt to replicate or model the bibliometric performance. Are these limitations acceptable for a measure of global development?
The non-bibliometric indicators
The other indicators relate to organisational management and policies that THE has deemed related to each SDG. For example, an indicator for THE’s SDG 16: Peace, Justice and Strong Institutions is ‘academic freedom policy’.
It makes sense to me that academic freedom is something that could be asked of a university wishing to be evaluated under this SDG; however, the methodology used to award points for the non-bibliometric indicators seems arbitrary and insufficient.
Using the same SDG example, an institution can score up to four points on ‘academic freedom’ by saying it has written a policy (one point), showing THE the policy (up to one point), putting the policy on the internet (one point) and telling THE it has been created or reviewed between 2015 and 2020 (one point). An evaluation schema for awarding the evidence worth “up to one point” is described in the methods.
Is this a meaningful and contextual appraisal of, in this example, the highly nuanced issue of academic freedom? Stephen Curry has described this perfunctory scoring method as “approximate and incomplete evaluations of a rich spectrum of endeavour”. In addition, THE does not provide any indication as to who performs this evaluation, so we are left to wonder whether assessment is carried out by a panel of experts, a data entry team, or a computer.
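The crudeness of the tally is easiest to see written out. This is my own sketch of the published point scheme; the function name and the half-point evidence fraction are illustrative, not THE’s:

```python
def academic_freedom_points(policy_exists, evidence_fraction,
                            policy_public, reviewed_2015_2020):
    """Tally the up-to-four points described for 'academic freedom
    policy': one point each for having, publishing and recently
    reviewing a policy, plus 'up to one point' for evidence shown."""
    if not 0.0 <= evidence_fraction <= 1.0:
        raise ValueError("evidence is worth 'up to one point'")
    return (int(policy_exists) + evidence_fraction
            + int(policy_public) + int(reviewed_2015_2020))

# A policy that exists, is online and was reviewed in the window, with
# evidence judged (by whom?) to be worth half a point:
print(academic_freedom_points(True, 0.5, True, True))  # 3.5
```

Three of the four points are binary box-ticks; the only graded judgement is the opaque evidence fraction.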
The composite score and rank
This method of aggregating scores of different things is a bit ‘pick and mix’. Institutions can submit to as many or as few SDGs as they like. On one hand, it is sensible that universities only submit in the areas where they have efforts and activities, and I agree with THE that this approach enables participation from institutions without the resources to return data on all 17. On the other hand, it means that institutions can ‘opt out’ of the SDGs they do not wish to draw attention to.
Those that submit to at least four SDGs, including SDG 17: Partnership for the Goals, receive an overall score and a place in the ranking. This is a composite aggregation of scores from separate and different SDGs into one overall score, and a fundamental weakness.
Richard Holmes writes, “The University of Sydney… is ranked for clean water and sanitation, sustainable cities and communities, and life on land… (this) includes supporting water conservation off campus and the reuse of water across the university. RMIT University, in third place, is ranked for decent work and economic growth, industry innovation and infrastructure and reduced inequalities… So, essentially THE is trying to figure out whether Sydney is better at reusing water than RMIT is at announcing policies that are supposed to reduce discrimination”.
The methodological weakness of the composite score has been commented on since the first instalment of the ranking in 2019, yet THE has chosen to retain it in every edition so as to be able to dish out awards for the most impactful institutions. The essentially competitive nature of a ranking system doesn’t exactly lend itself to ‘Partnership for the Goals’ either.
Access to evidence and data
None of the submitted data or evaluation evidence is made available. Subscribers to the paid-for THE DataPoints analytics product can see a very limited summary of the scores to benchmark between institutions. This lack of access to supporting data would not be considered acceptable research practice within many of the universities submitting to this ranking.
I recognise that this ranking gives institutions a framework through which to communicate their achievements on really important issues, but it is a disservice to the genuine intentions and investments of many participating universities that a flawed and competitive ranking has commoditised them.
As an interesting aside, THE’s owner, Inflexion Private Equity Partners LLP, details its environmental, social and governance management, but THE itself publishes no information on its own contributions towards the SDGs. This must make us question whether it is in the best position to judge others.
I hope that THE’s new Impact Rankings Advisory Board will contribute to meaningful methodological improvement.
From examining the 2021 and 2022 methodologies I do observe some improved rigour in the indicators of the 2022 issue, but the essential problems, the lack of access to evidence and data and the flawed composite score, remain unchanged since the first edition.
The number of participating institutions increases every year, more than doubling from the 450 entered in 2019 to 1,200 in 2021, with reportedly more than 1,500 institutions entering the 2022 issue. With each edition, THE becomes more successful in conflating performance in its Impact Rankings with actual impact in the world.
 Martín-Martín, A., Thelwall, M., Orduna-Malea, E. et al. Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: a multidisciplinary comparison of coverage via citations. Scientometrics 126, 871–906 (2021). https://doi.org/10.1007/s11192-020-03690-4
Robyn Price established a responsible bibliometric analysis and education service at Imperial College London. She is also interested in open and equitable research models. Previously, Robyn worked in the editorial teams of open access and subscription journals.
Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.