Robyn Price considers how systemic injustices in academia are present in, and perpetuated by, bibliometrics, and ways that the bibliometric community can address this.
Working in bibliometrics is difficult. The part that I, and I think a lot of people in this community, find difficult is the responsibility for data about research outputs and the people who produce research.
Bibliometrics requires applying an agreed value system onto research outputs, and thus onto the people who produce them. In this recent week of global action and anger against systemic racism, amid a backdrop of health and economic crisis, this difficulty has weighed on me more heavily than ever.
We try to count a real world of people (in the UK alone, HESA counts over 210,000 working academics employed in 2018/19), the organisations that employ them and the research they produce. So much of traditional research evaluation depends on this research being in the hegemonic output type (journal articles).
Journal articles are data entities that we analyse in search of a larger narrative about the production or performance of research. In reality, that narrative is complicated and not always accurate: co-authorship might or might not depict real collaboration and knowledge sharing between researchers; the discipline associated with the journal might or might not depict the exact intellectual space the work contributes to; the listed authors and their authorship positions might or might not relay the real story of how, or by whom, the work was created.
This gap between bibliographic data and the complex reality of what it represents will be familiar to anyone who works in this area. I have been further considering how this gap between the data and the reality, the ‘known unknown’ that we accept when using journal articles as a proxy for telling a human story of research, is exacerbated by issues of power structures and unequal representation in academia. This link was clarified to me through a discussion with Professor Chris Jackson (Imperial College London) who spoke of his perception and experience of structural biases in academia that perpetuate biases in the academic value system.
How can you ‘assess the impact’ of an article without knowing the ways its authors might have benefited from, been disadvantaged by, or even been excluded entirely by, systemic inequities that the article data can never tell you about? Some of the ways we treat bibliographic data directly strengthen biases. One example is conducting analysis using databases that we know to carry biases, such as of language or geography.
These limitations can be seen even when we just begin to identify researchers and their outputs: publication databases fall short of identifying outputs from all of our researcher cohorts equally, before we even start to derive metrics, responsible or otherwise, from them.
How can the bibliometric community tackle these issues? I have begun to consider how individuals in positions like mine have a responsibility to stop these issues perpetuating in our practice and community. What practical things can those in such positions do to help address them?
- Actively seek and critically read bibliometric research that examines publication and citation bias. Look at research on diversity and participation in higher education and research careers, equity in research funding, and research culture. Allow the findings to influence your own bibliometric practice.
- Examine diversity and equity in our own professional domain. We are accountable for how our professional group is run. We gatekeep jobs; provide education and training; curate conferences and events; consume or create bibliometric journals; and govern online spaces like social media, listservs, and blogs (such as my own role on this blog). All of this directly influences who joins the profession, and what they experience once inside it. Resources like LIS Decolonise, the CILIP Community, Diversity & Equality Group, SCONUL’s BAME experiences report, and the multiple advocacy strands of the ALA demonstrate work already being done in this area by the sector. Take note of professional groups, especially paid-for membership bodies, that don’t facilitate reflection or action.
- Hold commercial databases, metric tools and publishers accountable for their products. Build community-owned research infrastructure and tools in the hope of more equitable power and financial structures.
- Continue to build on the relationships and discussion spaces created by responsible metrics conversations, listening to the widest possible range of voices to examine practices and power structures in your own institution.
I invite anyone with other ideas for addressing this issue, or who wishes to share experiences or further reading material, to contribute through the comments or on the list.
Ishaq, M. and Hussain, A.M. (2019). BAME staff experiences of academic and research libraries. London: SCONUL. https://www.sconul.ac.uk/page/bame-staff-experiences-of-academic-and-research-libraries
Robyn is the Bibliometrics and Indicators Manager at Imperial College London. In this role she is responsible for managing a bibliometrics service with an emphasis on promoting responsible use of metrics. Previously, Robyn worked in the editorial teams of subscription and open access journals.
Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.