Truth behind the numbers?

Robyn Price considers how systematic injustices in academia are present in, and perpetuated by, bibliometrics, and ways that the bibliometric community can address this.

Working in bibliometrics is difficult. The part that I, and I think a lot of people in this community, find difficult is the responsibility for data about research outputs and the people who produce research.

Bibliometrics requires applying an agreed value system onto research outputs, and thus onto the people who produce them. In this recent week of global action and anger against systematic racism, against the backdrop of health and economic crisis, this difficulty has weighed on me more heavily than ever.

We try to count a real world of people (in the UK alone, HESA counts over 210,000 working academics employed in 2018/19[1]), the organisations that employ them and the research they produce. So much of traditional research evaluation depends on this research being in the hegemonic output type (journal articles).

Journal articles are a data entity that we analyse in search of a larger narrative about production or performance of research. This is something that in reality is complicated and not always accurate: co-authorship might or might not depict real collaboration and knowledge sharing between researchers; the discipline associated with the journal might or might not depict the exact intellectual space this work contributes to; the listed authors and their authorship position might or might not relay the real story of how or by whom the work was created.

This gap between bibliographic data and the complex reality of what it represents will be familiar to anyone who works in this area. I have been further considering how this gap between the data and the reality, the ‘known unknown’ that we accept when using journal articles as a proxy for telling a human story of research, is exacerbated by issues of power structures and unequal representation in academia. This link was clarified to me through a discussion with Professor Chris Jackson (Imperial College London) who spoke of his perception and experience of structural biases in academia that perpetuate biases in the academic value system.

Photo by Tingey Injury Law Firm on Unsplash

How can you ‘assess the impact’ of an article without knowing the ways its authors might have benefited from, been disadvantaged by, or even been excluded entirely by systematic inequities that the article data can never tell you? Some of the ways we treat bibliographic data directly strengthen biases. An example of this is using databases for analysis that we know to have biases, such as of language[3] or geography.

These limitations appear as soon as we begin to identify researchers and their outputs: the publication databases fall short of identifying outputs from all of our researcher cohorts equally, before we even start to derive metrics, responsible or otherwise, from them.

How can the bibliometric community tackle these issues? I have begun to consider how individuals in positions like mine have a responsibility to stop these issues perpetuating in our practice and community. What practical things can those in such positions do to help address these issues?

  1. Actively seek and critically read bibliometric research that examines publication and citation bias. Look at research on diversity and participation in higher education and research careers, equity in research funding, and research culture. Allow the findings to influence your own bibliometric practice.
  2. Examine diversity and equity in our own professional domain. We are accountable for how our professional group is run. We gatekeep jobs; provide education and training; curate conferences and events; are the consumers or creators of bibliometric journals; and govern online spaces like social media, listservs and blogs (such as my own role on this blog). All of this directly influences who joins the profession, and what they experience once inside it. Resources like LIS Decolonise; the CILIP Community, Diversity & Equality Group; SCONUL’s BAME experiences report[4] and the multiple advocacy strands by the ALA demonstrate work already being done in this area by the sector. Take note of professional groups, especially paid-for membership bodies, that don’t facilitate reflection or action.
  3. Hold commercial databases, metric tools and publishers accountable for their products. Build community-owned research infrastructure and tools in the hope of more equitable power and financial structures.
  4. Continue to build the relationships and discussion spaces created by responsible metrics conversations by listening to the widest possible range of voices to examine practices and power structures in your own institution.

I invite anyone with other ideas for ways to address this issue, or who wishes to share experiences or further reading material, to contribute through the comments or on the list.


[1] https://www.hesa.ac.uk/data-and-analysis/staff/working-in-he

[2] https://www.theguardian.com/education/2019/sep/12/look-at-how-white-the-academy-is-why-bame-students-arent-doing-phds

[3] https://arxiv.org/abs/2005.10732

[4] Ishaq, Dr M and Hussain, Dr AM (2019). BAME staff experiences of academic and research libraries. London: SCONUL https://www.sconul.ac.uk/page/bame-staff-experiences-of-academic-and-researchlibraries


Robyn is the Bibliometrics and Indicators Manager at Imperial College London. In this role she is responsible for managing a bibliometrics service with an emphasis on promoting responsible use of metrics. Previously, Robyn worked in the editorial teams of subscription and open access journals.

ORCID: https://orcid.org/0000-0001-5776-5256

Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.

5 Replies to “Truth behind the numbers?”

  1. Great post Robyn. I wholeheartedly agree, and I especially feel that our knowledge infrastructures need to change; this can only happen through healthy and sometimes uncomfortable dialogue with communities and people of color and other diverse backgrounds, and with communities in the developing world. In a lot of real-world examples, charitable organizations (typically run by white people from affluent areas of the world) show up and build something for those in lower socioeconomic areas and/or in the developing world; they determine what those communities need without even approaching or speaking to them, and often, it’s not what those communities need, and money, time, and energy are wasted; not to mention the effects of the poorly planned interactions.

    This happens in knowledge infrastructures as well; we’ve decided that the bibliometrics we are using for assessment from specific citation indices are the best way to assess researchers, despite the fact that these databases and metrics directly disadvantage researchers in the Global South by discouraging research in their own local communities and discouraging international collaboration on projects that would focus on locally relevant issues in the Global South (See Alperin, 2015: https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/bult.2013.1720390407, and Alperin, 2015: https://stacks.stanford.edu/file/druid:sr068mj0031/AlperinGeographicVariationAltmetrics.pdf).

    It’s amazing that we have equated “global excellence” with “Western and European excellence.” There are ways to overcome this, and though there is so much attention in academia on “infrastructure” as a buzzword, there is little attention paid to improving knowledge infrastructure for inclusivity and diversity, and there most definitely needs to be intentional focus on encouraging participation from diverse communities to plan and design tools (See Okune et al., 2018: http://dx.doi.org/10.4000/proceedings.elpub.2018.31 or https://web.archive.org/web/20190505040052/https://hal.archives-ouvertes.fr/hal-01816808/document – for some reason the document is not accessible on the archive right now).

    We need to mobilize and advocate for inclusion but also stand up against systemic bias and racism when it does occur, such as this example in which a journal was rejected for inclusion in Scopus despite qualifying under their requirements for acceptance, and with reviewers’ comments that were inaccurate; in addition, none of the reviewers listed had any expertise in the journal’s subject area of global health and epidemiology, yet they commented that it has “weak articles” and a “low citation profile” (just FYI, this OA journal’s name is Central Asian Journal of Global Health and it is published through the University of Pittsburgh). Read more here: https://crln.acrl.org/index.php/crlnews/article/view/24321/32136 Now, someone actually took the time and effort to write about this and communicate the struggle they encountered, but how many other examples are out there, especially from the Global South? Thanks for reading such a long comment. I hope others will comment and discuss these issues here.


    1. Thanks Rachel for your thoughtful response. I was not aware of the University of Pittsburgh journal case.

      Also, I think your comment that “there is so much attention in academia on ‘infrastructure’ as a buzzword, there is little attention paid to improving knowledge infrastructure for inclusivity and diversity” is really important. Related to this, I have noticed the big metric data vendors rushing to add UN Sustainable Development Goals data to their paid-for subscription products without externally acknowledging their own responsibility to the Goals of equity in education, innovation and strengthening the means for global partnership: all things that their multiple products in the research sphere, in some overt and some covert ways, discourage.

      I would like to add these resources and research pieces to a collection.


  2. I think a collection would be an excellent idea! Related to the UN Sustainable Development Goals, THE also began to add these to the rankings data (last year, I think?). And strangely and disturbingly (and I’m not even touching on the flawed methodology), they left out a lot of key goals and arbitrarily chose some over others. The missing SDGs are:

    SDG 1 – No poverty
    SDG 2 – Zero hunger
    SDG 6 – Clean water and sanitation
    SDG 7 – Affordable and clean energy
    SDG 14 – Life below water
    SDG 15 – Life on land

    See http://occamstypewriter.org/scurry/2019/05/20/unsustainable-goal-university-ranking/ for more information on their arbitrary selections and percentage/weighting.

    As to your question, I think perhaps we could outline our collection in a way that shows how products, rankings, and the availability of certain metrics and data discourage those areas/goals. It could be quite compelling, but will it change anything? I don’t know that it will; it seems that universities know better but are bound to their marketing and revenue goals.

