The research evaluation food chain and how to disrupt it

Lizzie Gadd describes how seeing research evaluation as a food chain where participants are both the evaluators and the evaluated may help us understand, and solve, some of the problems inherent within.

Research evaluation is often cited as the root cause of many problems facing scholarly communication today. The highest-profile problem, of course, is evaluation-by-journal-brand as the cause of journal-brand obsession. And there is much finger-pointing: at academics for refusing to give up these journal brands; at universities for recruiting based on journal brands; at funders and governments for funding based on journal brands; at data providers for indexing only these journal brands; and at rankings for using the data sources that index these journal brands. Everyone else seems to be the problem. And of course everyone is part of the problem. And when everyone plays a part in a problem, it's difficult to know exactly where to start solving it.

Of course, evaluation-by-journal-brand is not the only research evaluation problem we face, although it's the one most often discussed. We see impossible target-setting based on unfair expectations. We see the blind use of citation metrics with no disciplinary nuance or understanding. We have peer review unchallenged as the Gold Standard of research evaluation without any real acknowledgement of its limitations. The road to responsible research evaluation is fraught with many dangers.

However, the more I think about these issues and the many stakeholders involved, the more I sense there is a hierarchy here. Not all stakeholders have the same power to change the system. And as with any hierarchy, it might be that the quickest way to create change is to go straight to the top.

So what does this hierarchy look like? Well I’ve put together the following diagram which I’ve called the Research Evaluation Food Chain. Like all heuristics it has its limitations, but I think it provides a useful framework for understanding the world of research evaluation and how we might approach the problems within.

At the bottom we have researchers. Lots of them. And like many creatures in a food chain, they can be known to eat (evaluate) each other as well as being eaten (evaluated) by those further up the chain. In fact, the next species in the food chain, whilst technically labelled universities, are really just researchers who have amassed enough seniority to get to decide how the masses beneath them are evaluated. But when questioned, the individuals representing universities in this chain will usually claim that they only measure because they are measured. They are only placing expectations on their research staff to ensure the university gains enough gold or glory, so it can stay alive and continue to pay the aforementioned researchers. After all, it's 'dog eat dog' out there and research funding is a zero-sum game. And of course the finances in question are bestowed by research funders, often governments, who evaluate according to their own strategic aims and objectives. And one of those objectives, whether spoken or unspoken, is to climb or maintain the country's standing in the various international university league tables.

League tables, in this analogy, are at the top of the food chain; the Kings of the Jungle. (Although my Scandinavian colleagues experience them more as parasites, living off their hosts and offering no benefit in return). But for many of us, they are predators: they predate, but are not predated upon. The rest of the food chain might grumble and gripe, but there is nothing they can do. ‘The rankings are here to stay!’, they cry. And this is why: there is no challenger able to match their might. They have allies of course, as do all the predators (evaluators) in the food chain. In this case, the allies take the form of data vendors. I’ve depicted vendors as the sun that shines on all members of the food chain equally, but perhaps I could have depicted them as a rain cloud. I just can’t help seeing data vendors (if I might mix my metaphors for a moment) as the arms trade to the research evaluation food chain. It feels like they don’t care who they sell to, or what damage is done, as long as they stand to profit. And profit they do, with many companies who once described themselves as publishers or journalists now describing their primary business as data analytics.

“League tables, in this analogy, are at the top of the food chain; the Kings of the Jungle. They predate, but are not predated upon.”

Of course, there is resistance towards the unchallenged dominance of university rankings. "Vocal and creative grassroots efforts", as James Wilsdon's recent rant against the rankings described them. Brave souls who risk being torn limb from limb by the League Table Lions. Some challengers, like 'University Wankings', find courage only in anonymity. At the other end of the spectrum we see those who form alliances with the Lords of the Beasts and try to gain some benefit through association. We see this with institutions that buy the rankings data and host their events. Others in the food chain just try to keep their heads down and stay out of harm's way.

So how does viewing research evaluation as a food chain help us to understand it better? To my mind there are five things we can learn.

1) The relationship between players in the research evaluation ecosystem is complex, multi-layered and interlinked, and the behaviour of those in the chain is influenced by a range of external factors. Funders evaluate researchers as well as universities, and rankers will rank just about anything. As such, there is no obvious place to break the chain in order to fix the problems inherent within. Even removing the apex predator won't stop other 'species' from evaluating each other. Thus we can't hope to change poor research evaluation practice by focussing on only one stakeholder; a whole-system change is needed.

2) Thankfully, we know that disrupting a food chain at any point in the hierarchy will have a significant effect. Just as the consumed pass through the food chain, so do (in our diagram) the evaluated. Prey become predators as they get promoted and take on new roles. Today's researchers will be tomorrow's university leaders, funders, government advisors – even rankers. As those further up the chain consume those further down, it is to be hoped that the practices of those higher up the food chain will change, given time.

3) Those at the top of the food chain often scoff at the frustrations of individual researchers desperate to change the evaluation hierarchy. It feels almost as ridiculous as plankton seeking to have an influence over the actions of the hawk. But it shouldn't be forgotten that the actors on the first trophic level (known rather pertinently as 'producers' in food chain terminology) are utterly critical. Without them, there is no chain. If we care about the health of research, we should care more about the health of the researcher than any other part of the system. They are the foundation on which the whole hierarchy is built.

4) Just as the sun provides essential energy to fuel any food chain, so does the provision of data by vendors. To revert to my second metaphor: without weapons there can be no war. It is not the generation of myriad new metrics by bibliometric scholars that is the problem. It is their selection and widespread availability in vendors' products, usually offered without training or explanation, that causes damage as they work their way through the food chain. Unfortunately, whilst researchers struggle to influence up the food chain, influencing the 'sun' seems an almost Canutian task. And efforts to encourage data providers to provide metrics more responsibly (or not at all) often fall on deaf ears.

5) Finally, turning to the apex of our evaluation food chain, I think many of those further down, certainly those immediately below the apex, often fail to acknowledge who is really at the top. It is hard for seemingly autonomous entities (HEIs, funders, governments) to admit that they are motivated in no small part by the desire to impress an unappointed, ungoverned, unchecked predator. This is no doubt due to the complexities involved in this particular relationship (do they fight, ignore or befriend them?), as well as the lack of perceived power they have as individual entities to 'topple' them, and the fear of what might replace them if they are toppled (better the devil you know?).

“They don’t care who they sell to, or what damage is done, as long as they stand to profit. And profit they do…”

On this last point, I would like to finish by offering a solution to this particular problem with a somewhat gratuitous plug for some work I’m involved in. Those at the top of the food chain are only there because they themselves have no predators (evaluators). The obvious solution, therefore, is to add in an additional layer of evaluation over the top, namely, to evaluate the rankers. (I was interested to see comedian Katy Brand recently proposing a ‘Review the Reviewers’ blog for a similar reason – to take power away from an ungoverned and potentially predatory profession). This is something the INORMS Research Evaluation Working Group have suggested and are currently working on. The proposed output would be a way of scoring rankers against a set of community-agreed, responsible ranking principles. The proposed outcome, we hope, would be to have a positive influence on the research evaluation food chain, by putting some of the power back into the hands of the researchers on whom the system depends.

 


Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the Lis-Bibliometrics Forum and co-Champions the ARMA Research Evaluation Special Interest Group. She also chairs the INORMS International Research Evaluation Working Group.

Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.

3 Replies to “The research evaluation food chain and how to disrupt it”

  1. Dear Elizabeth,
    thanks for your text. I agree on the systemic effect of “pseudo-metrics” in evaluation, enabling every actor to reallocate their own responsibility to others or even to undefined entities like “science gaming” or “publish or perish”. I also agree on the impact of “available” data on the whole chain, as the objectified “citation count” gives an excuse to justify human judgment.

    Nevertheless, I think your “food hierarchy” solidifies relations that are not well proven.

    1/ There is not a unified “ranking” system, but rather fights over what has to be counted or excluded. Yes, there is an old tendency to consider publications, then articles only, as being important in most disciplines. But ranking institutions, universities and learned societies build their own rankings, often explicitly against other existing ones (see this decade-old article on SSH: https://halshs.archives-ouvertes.fr/halshs-00568746v2). Maths, computer science, high-energy physics and others don’t follow the JIF craziness, but still have formal or informal journal (and conference) rankings.

    2/ There is no proof of the real influence of “world rankers” on actual universities. While specialized rankings, especially in the US (see the classic Espeland & Sauder articles), have prompted adaptation from departments (including blatantly fake data), I don’t see it from “global players”. The “Norwegian system” of money distribution is much more important than these global rankers for universities in some European countries, while being much more complicated than just “outlet ranking”. The REF is of course the tricky example: it could (formally) be replaced by a bibliometric algorithm, because it mimics one with its very skewed distribution of money.

    3/ Many institutions, departments, funders… don’t “play this game” because they have their own policies (specific goals, audiences) or have decided explicitly not to play it (lack of interest, lack of resources…). Of course they are far less visible – from a “global point of view” – but they still exist and probably represent the vast majority of their respective populations. Even things like ERC grants can’t easily be linked to your “food chains”, so imagine national or local grants.

    To conclude, I think that unifying the chain is counter-productive, as it empowers players that are not that dominant, and that trying to “domesticate” them through “responsible” rankings would only give them more leverage. Every time an institution gives free “good data” back to Scopus or WoS, which is really “bad”, they don’t “correct” them but only validate their rankings.
