Having worked in UK academic libraries for 15 years before becoming freelance, I saw the rise and rise of citation counting (although, as Geoffrey Bilder points out, it should rightly be called reference counting). Such counting, I learnt, was called “bibliometrics”. The very name sounds like something librarians should be interested in, if not expert at, so I delved into what these metrics were and how they might help me and the users of academic libraries. It began with the need to select which journals to subscribe to, and it became a filter for readers to select which papers to read. Somewhere along the road, it became a measurement of individual researchers and a component of university rankings: such metrics were gaining attention.
Then along came altmetrics, offering tantalising glimpses of something more than the numbers: real stories of impact that could be found through online tracking. Context was clearly key with these alternative metrics, and the doors were opened wide as to what could be measured and how.
It is no surprise that after working in subject support roles I became first an innovation officer, then an institutional repository manager and then a research support manager: my knowledge of and interest in such metrics was useful in those contexts. Yet I was mostly self-taught: I learnt by playing with new tools and technologies, by attending training sessions from product providers and by reading the published literature. I’m still learning. The field of scholarly metrics moves on quickly as new papers are published, new tools are launched onto the market, university management and funders become more interested in the field, and scholars themselves respond.
It took a lot of time and effort for me to learn this way, which was appropriate for my career path, but it cannot be expected of all librarians. For example, subject or liaison librarians work with scholars directly, and those scholars might also be interested in metrics, especially those available on platforms that the library subscribes to. Yet these same librarians must also quickly become experts in changing open access practices, data management needs and other concerns of their scholarly population, whilst teaching information literacy to undergraduates, maintaining the library’s book collections in their subject area and keeping up their own knowledge of the disciplines they support. They have a lot of areas of expertise to keep up to date, as well as a lot of work to do. And there are new, trainee librarians who have a lot to learn from our profession. How can we save their time?
I began collaborating with Library Connect because that’s exactly what they seek to do: support busy librarians. Colleen DeLory, the editor of Library Connect, has her ear to the ground regarding librarians’ needs, and she has some great ideas about what we could use. I started by presenting in a webinar, “Librarians and altmetrics: tools, tips and use cases”, and went on to do the research behind an infographic, “Librarians and Research Impact”, about the role of librarians in supporting research impact. Another webinar followed, “Research impact metrics for librarians: calculation & context”, and then came the very latest and, in my opinion, most useful output of my work with Elsevier: our poster on research metrics.
Quick Reference Cards for Research Impact Metrics
This beautifully illustrated set can be printed as a poster, which makes a great starting point for anyone new to such metrics, or indeed anyone seeking to untangle the very complex picture of metrics that they have been glimpsing for some years already! You could put it up in your library office or in corridors, and you can also reproduce it on your intranet – just include a link back to Library Connect as your source.
You can also print our set out as cards, which would be really useful in training sessions. You could use them to form discussion groups by giving each participant a card and then asking people to form groups according to which card they have: is their metric one for authors, one for documents or one for journals? Some people will find that they belong to more than one group, of course! The groups could then discuss the metrics that they have between them, sharing their wider knowledge about metrics as well as what is on the cards. Do the groups agree which metrics are suited to which purposes, as listed across the top of the poster? What else do they know or need to know about a metric? Beyond such a guided discussion, the cards could be sorted in order of suitability for a given purpose, perhaps by sticking them onto a wall underneath a proposed purpose as a heading. The groups could even create their own “cards” for additional metrics to stick on the wall(s!), then visit each other’s listings after discussion… We’d love to hear about how you’re using the cards: do leave a comment for us over at Library Connect.
Of course our set is not comprehensive: there are lots of other metrics, but the ones chosen are perhaps those that librarians will most frequently come across. The aspects of the metrics presented on the poster and cards were also carefully chosen. We’ve suggested the kinds of contexts in which a librarian might turn to each metric, carefully crafted definitions of the metrics, and provided useful links to further information. We’ve also indicated what each metric applies to, be it a single paper, all of an author’s output or a serial publication. It was a truly collaborative output: brainstorming of the initial idea, research from me, then over to Colleen DeLory to coordinate the graphics and an internal review by Elsevier metrics expert Lisa Colledge, back to me to check it over, then to Library Connect again for proofreading and even a preview for a focus group of librarians. It has been a thorough production, and I’m very proud to have been involved in something that I believe is truly useful.
Freelance Librarian, Instructor & Copywriter