Lizzie Gadd considers whether “responsible” is something we do or something we are.
I’ve made the point a few times recently that responsible bibliometrics requires responsible bibliometricians. It is not just our systems and processes that need to be responsible, but us. We are the ones supporting these activities in our institutions, and I believe that our approach, our personal qualities and our attitudes are the route to ensuring that metrics are used responsibly. When I make this point I often reflect on how the Metric Tide report’s five principles of robustness, humility, transparency, diversity and reflexivity offer us a framework not just for how the measuring is done, but for who does it. I wanted to take the opportunity to unpack this here and make some suggestions as to what a responsible bibliometrician might look like.
We need to be as robust as we can in our evaluations and as clear as we can about their limitations. To achieve this we need to hone our data curation and analysis skills as much as possible and avail ourselves of every learning opportunity. Not many of us will have come to our roles with a statistical background, or indeed any understanding of bibliometrics. And not all of us will be full-time bibliometricians; many do this sort of work on top of an already busy workload. However, within the constraints of our role, and in proportion to the amount of time we spend on this type of work, we should seek to improve our knowledge. The Bibliometric Competencies are a good starting place to help you assess your current skills and any gaps you might have. LIS-Bibliometrics events can give you a good overview of a range of topics in one day, and The Bibliomagician, with its resources page, can be another useful source of intel. LIS-Bibliometrics have recently piloted a Statistics for responsible bibliometrics one-day course aimed at non-statisticians, and CWTS Leiden offer a range of well-regarded courses and summer schools. The field of bibliometrics is nascent and fast-moving. To really be robust, we need to wear permanent ‘L’ plates. Which brings me on to my second point.
I would say that humility is the critical characteristic of any bibliometrician. And not just because we have to keep learning, but because the foundations of citation analysis, particularly at the level of individuals or small groups, are simply not strong enough for us to get over-confident about its interpretation. We need to keep data in its place in our conversations and recognise that while we might have some numbers, it is our academic colleagues who have a career’s-worth of specialism in their fields. There are other ways of evidencing publication quality and visibility (e.g., prizes, awards, sales and reviews) that go beyond quantitative data, and we should always give academics an opportunity to supplement our data with these.
I’ve learned an incredible amount by listening to my academic colleagues and we truly achieve better outcomes by pooling our resources. For example, I may have data that show there is a citation cost to publishing conference papers. However, my academic colleagues will tell me that there is a huge networking and visibility benefit to attending. Together we can develop strategies that take both of these facts into account.
The wonderful Henk Moed has divided up bibliometric activities into four domains:
- Data – curating and understanding the data on which you are to run your analysis
- Analysis – designing and performing an analysis
- Evaluation – interpreting the meaning of your analysis
- Policy – making policy decisions based on the analysis, or that inform future analyses.
Some of us may only be involved in the data curation and analysis domains and in my experience, a competent bibliometrician should be able to undertake these with only the odd reference to academic colleagues. However, when it comes to evaluating the outcomes of those analyses and generating policy based on bibliometric data, I would say that this should only be performed alongside an expert in the field.
We need to be open not only about what we’re doing (e.g., an international co-authorship analysis) and how we’re doing it (data source and indicators), but also about why we’re doing it (to help monitor our international connectivity) and what we understand the limitations to be (international co-authorship is not the only dimension of international connectivity, and co-authorship is not a feature of all disciplines). Without the latter two elements, many misunderstandings can arise as to the drivers of bibliometric analyses (am I being judged?) and their consequences (will I be forced to work internationally?). We need to explain the policy drivers behind our activities and the assumptions on which we’re basing our analyses to reassure people that the data is being interpreted sensibly.
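To make the “what” and “how” concrete, an international co-authorship analysis of the kind mentioned above can be sketched in a few lines of Python. This is an illustrative sketch only: the paper records and country codes are invented, and real analyses would draw on a bibliographic data source and handle messy affiliation data.

```python
# Hypothetical sketch: share of internationally co-authored papers.
# Each record lists the countries of the paper's author affiliations.
papers = [
    {"title": "Paper A", "countries": {"GB", "DE"}},
    {"title": "Paper B", "countries": {"GB"}},
    {"title": "Paper C", "countries": {"GB", "US", "NL"}},
    {"title": "Paper D", "countries": {"GB"}},
]

# A paper counts as internationally co-authored if its authors'
# affiliations span more than one country.
international = [p for p in papers if len(p["countries"]) > 1]
share = len(international) / len(papers)
print(f"International co-authorship share: {share:.0%}")  # prints "International co-authorship share: 50%"
```

Stating the indicator this plainly also makes its limitations visible: a sole-authored paper written during an international visit, for example, would not register at all.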
Transparency also involves making ourselves as visible and available to colleagues and their feedback as we can. Take every opportunity to give presentations to academic groups and provide plenty of time for Q&A. The crowning principle of the Leiden Manifesto is “to scrutinise indicators regularly and update them”. We need to be open to amending our approach in the light of new evidence and legitimate challenges. In academia we are dealing with some pretty bright sparks. Whilst our analyses can be a helpful input into policy discussions and evaluations, we have as much to learn as we have to offer (see humility above), and an open and transparent approach can not only allay any fears they may have about the evaluation, but also leave you open to opportunities to improve it. Successful bibliometric evaluation is not done on our academic colleagues, it’s done with them.
Part of the joy of bibliometrics, but also one of the challenges, is the diverse subject communities that we seek to understand and provide bibliometric services to. As I’ve reported before, in one of my early meetings with our Associate Dean for Research in Art, English and Drama, he said that trying to describe the Arts and Humanities through numbers was like asking engineers to define their work through dance. We now no longer provide publication and citation metrics for Art, English & Drama. Other academic schools work with us to choose their own indicators. Even within a single academic school you have a wide variety of emerging and established discipline areas, pure and applied, with different publication practices and potential, seeking to engage with different academic and non-academic audiences. The responsible bibliometrician works with these various communities to understand the glorious diversity of publication and citation practices, and to reassure colleagues that we understand that one size does not fit all.
One of the important things to understand in relation to diversity is the limits of field normalisation. I admit that when I first started out I became a little over-confident about the use of field-normalised metrics, as though they were a magic wand that allowed me to compare any discipline area with any other. Field normalisation makes analyses fairer, but not completely fair. This is being worked on all the time and better field normalisation will come, but for now I’d say proceed with caution and advise accordingly. Ludo Waltman and Nees Jan van Eck have recently provided a really accessible overview of field normalisation which is well worth a read.
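The basic idea behind field normalisation is simple: divide a paper’s citation count by the average citation count of comparable papers in its field. A minimal sketch, using invented citation counts (real indicators also normalise by publication year and document type, which this deliberately omits):

```python
from statistics import mean

# Hypothetical citation counts for published papers in two fields.
field_citations = {
    "mathematics": [0, 1, 2, 3, 4],      # a low-citation field
    "biomedicine": [5, 10, 15, 20, 50],  # a high-citation field
}

def normalised_citation_score(citations: int, field: str) -> float:
    """Citations divided by the field's mean citation rate.

    A score of 1.0 means 'cited exactly at the field average'.
    """
    return citations / mean(field_citations[field])

# The same raw count of 4 citations is twice the mathematics
# average (2.0), but well below the biomedicine average (20.0).
print(normalised_citation_score(4, "mathematics"))  # 2.0
print(normalised_citation_score(4, "biomedicine"))  # 0.2
```

The sketch also hints at why normalisation is not a magic wand: the score depends entirely on how the field is delimited, and papers in emerging or interdisciplinary areas rarely sit neatly in one reference set.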
To me, the ability to recognise the potential and systemic effects of measurement on the entities we are measuring (and beyond) is up there with humility in terms of critical personal attributes of bibliometricians. As I’ve said before, if I had my way, every bibliometrician would have a post-it on their computer that reads “Metrics can kill people”. I know this sounds melodramatic, but it is ultimately what we are dealing with here. Academics have taken their own lives under the pressure to live up to quantitative targets. And if that isn’t the ultimate systemic effect of an indicator, I don’t know what is. The situation is particularly delicate when we’re measuring publications. The Latin root of the word “author” is “father”. And it has been noted before that research publications are almost literally academic offspring. When we measure an academic’s publications (quantity, citedness, co-authorship, visibility) it can feel to them like we’re judging their children. And you judge other people’s children at your peril…
When using bibliometrics for research assessment, we have to be fully aware of the possible effect that such measurements might have on people. In one sense this is where our personal qualities will have more power than our technical abilities. I remember a time when I was agonising about my bibliometric responsibilities and their potential effects to a colleague, also an Associate Dean for Research. I’ll never forget her response. She said, “Lizzie, someone will do this job. Better it were you and your conscience, than some calculator-wielding b*****d.”
Yes, we need to be robust in terms of our data and methods, humble in terms of the half-story the data is telling us, transparent about the what, where and why of our activities, and we need to embrace diversity, but at the end of the day, the best quality of a bibliometrician is kindness. We must never reduce people and the work of their hands to numbers.
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.
Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.