LIS Bibliometrics 2019 Event Summary and Outputs – Open Metrics and Measuring Openness

In this post, Dr Karen Rowlett summarises the recent #LisBib19 conference held on 29th January 2019. Many thanks to the hosts, organisers, speakers, panellists, sponsors and participants for a very interesting day; a link to the presentations is also included.

Torsten Reimer set the tone for our discussions by reminding us that metrics are about people’s lives and may affect or determine the opportunities that they are offered and the choices that they make. This can be a daunting responsibility for us at times when we are asked to provide analyses and reports. I think it has been very helpful to bring the mix of stakeholders together (Higher Education Institutions, funders, suppliers, publishers and business representatives were in attendance), particularly as we are often working in isolation. This pooling of ideas and support from the community is invaluable.

Image credit: CC BY Nitin Arya

We were reminded that we are a global community. Researchers are being evaluated and assessed by different means all over the world (with sometimes less than sensible approaches). It is up to us to promote and champion the use of appropriate and responsible metrics that measure what they claim to measure. The best way we can help is by changing attitudes to research evaluation and providing contextualised advice.

Open scholarship is a lofty aim that may require a huge carrot harvest to encourage our researchers to think about small changes they can incorporate into their working methods. Isabella Peters’s talk has given us all food for thought about how we measure openness in the transition to fully open scholarship. I’m looking forward to a day when we don’t need to assign numbers to publications, journals, institutions or researchers.


A dilemma that we face regularly is having to provide poor or simplistic metrics to cover the demands of funders or our institutions. In the workshop session on advice for early career researchers (ECRs), we touched on the moral problem of having to advise researchers that they might need to think about how the progress of their careers will be measured, rather than encouraging them to follow their passions. Our role is to support and educate researchers as well as those who employ and assess them.

I found it interesting that, at a gathering of bibliometricians, several of the talks suggested that we don’t actually need metrics (or that they are so flaky we shouldn’t use them, thanks to Ian Rowlands). Perhaps that is because we are more aware of the current limitations of the metrics we have available. Telling a story about a researcher’s, institution’s or funder’s research outputs and their impact, with just a sprinkling of responsible metrics to support the narrative, might be the best way forward.

Our stories can be made more compelling by the range of data that can now be tracked: data usage, altmetrics, shares, downloads and so on. It is up to us to interpret these metrics and to explain their relevance, limitations and significance to others. The range of sources and types of data is growing, and I have been impressed by the ingenuity of some of the presenters of our lightning talks, the ways they are drilling into publicly available data sets (some very ephemeral) and the tools they are developing to help us mine them. It is also great that the details of these tools are being published openly and the software is being made available for others to reuse. All these tools will provide extra detail for us to add to our narratives around outputs.
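As a small illustration of what mining these open sources can look like (a generic sketch of my own, not one of the presenters’ tools), the snippet below pulls a few pieces of openly available metadata for a single output from the Crossref REST API. The DOI and contact address are placeholders.

import requests

# A minimal sketch, not one of the presenters' tools: query the public
# Crossref REST API for openly available metadata about a single output.
doi = "10.1000/example-doi"  # placeholder DOI for illustration
resp = requests.get(
    f"https://api.crossref.org/works/{doi}",
    params={"mailto": "you@example.org"},  # placeholder contact address (polite-pool etiquette)
)
resp.raise_for_status()
work = resp.json()["message"]

# A few fields that could feed into a narrative around an output,
# alongside (never instead of) qualitative context.
print(work.get("title"))
print("Citations recorded by Crossref:", work.get("is-referenced-by-count"))
print("Licences:", [licence.get("URL") for licence in work.get("license", [])])

Nothing here is a metric in itself; the point is that the raw material for contextualised stories about outputs is increasingly open and scriptable.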

With the growth in data sharing, it was good to find out about the COUNTER initiative to track data usage. It is great to hear that we are learning from our experiences in providing article metrics and are generating better metrics for data usage from the outset. The involvement of the research community in this initiative is also beneficial. I was particularly encouraged by the call from Catriona MacCallum that we really can’t afford to make the same mistakes with data that we made with journals. I think we can all agree wholeheartedly that we do not need an impact factor for data sets.

A clear message from the day’s discussions is that we need to continue to work with the suppliers and vendors of metrics to improve the transparency and accuracy of the existing tools and help with the development of new ones. It is up to the bibliometric community to challenge the misuse of metrics and to make users aware that some of our proxy measures of quality are skewed and that common metrics may not account for subject-specific differences.

Presentation slides and audio recordings can be found here.


Karen Rowlett

Since completing her PhD many years ago, Karen has worked in the scholarly publishing world as a copy-editor, news writer and managing editor with subscription and open access publishers. She is now the Research Publications Adviser at the University of Reading, working with researchers on bibliometrics and open access and supporting them in those tricky decisions of where to publish their research outputs.

