The metric tidal wave – guest post by Karen Clews

As at most institutions, the release of the Metric Tide report in July 2015 came at a busy time for us. We were in the midst of a REF open access campaign, continuing our work to embed our CRIS (Pure) into institutional processes, and thinking about how to increase our research data management provision without breaking the bank. Despite all of this, the report actually came at a good time for us, as we were exploring the possibility of purchasing a research analytics tool and thinking about how we could use research metrics to support decision making.

Almost two years on, we are still learning. It is important to see this work as a long-term change programme, and change programmes can be unwieldy beasts. Initially we kept the use of our research analytics system (SciVal) relatively closed, working closely with a group of academic leads and building up super-users in professional services teams such as the International Office, the Planning Office and the Library. This gave us the space to think about how we wanted to approach the data, and what they were, and were not, telling us.

[Image: knowledge picture (CC BY Matthias Melcher)]

More recently we have taken the plunge and are promoting use of the system more widely, specifically to researchers across the institution. Responsible metrics practice dictates that transparency is key, but with this comes risk: suddenly we have more people using the data, and not all of them take these responsible practices into account. The benefits are obvious: the more researchers engage with the process, the better the data quality becomes. But we are pushing against embedded, historical attitudes that span far and wide. While I personally have pretty strong views on the pitfalls of the H-index (the largest number h such that a researcher has h papers with at least h citations each) as a measure of academic prowess, we can’t just ignore its existence. A lot of people know it and use it, so even if researchers don’t agree with it themselves, what choice do they have when job and funding applications require it?
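For anyone less familiar with the metric, here is a minimal sketch of how an H-index might be computed (purely illustrative; the function name and example figures are invented, and this is not part of any institutional tooling). It also shows one of the pitfalls: very different citation profiles can collapse to the same number.

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break     # sorted descending, so no later paper can
    return h

# Two quite different careers, one number:
print(h_index([50, 40, 30, 3, 2, 1]))  # 3 (a few highly cited papers)
print(h_index([3, 3, 3]))              # 3 (three modestly cited papers)
```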

Certainly, providing oversight and support without creating an atmosphere of restriction is something we are keen to get right. In addition, with such a range of metrics to choose from, and a lot of reading to do before you can come close to calling yourself an expert, making sure the right metrics are being used in the right way is an ever-evolving conversation.

Like I said, this is a long-term change programme and there is still a lot to do, but looking back we have started to make some real progress and even managed to learn a few things along the way!

  1. Think about what it means to your institution

Everyone is starting from a different place; the key is to know what you’re already doing and where your weak points might be, so you can set your goals accordingly. In our case, we knew research metrics were being used widely, but inconsistently, across the institution. One of the aims of this work was to think explicitly about how and why we wanted to use research metrics, and how we could share this understanding.

  2. Get academics involved early

We were keen that this work should not just be a Professional Services project, but should take a collaborative approach, bringing in expertise from across the institution. The benefit of this approach became clear early on, when we realised not everyone feels the same way about the H-index or the Journal Impact Factor. The range of views expressed in the group helped us form a much more nuanced message about the benefits and pitfalls of using metrics in certain ways.

  3. Lead by example

A key message of the report was the need for transparency around the use of metrics, and this applies to the use of indicators at all levels. For us this has meant thinking about how we articulate our key performance indicators to staff, and specifically which metrics are being used. With the institutional subscription to a research analytics tool, we have been able to democratise the use of metrics, allowing anyone to access these data themselves so that they can understand what is being measured and how it relates to them.

  4. Celebrate, don’t denigrate

We didn’t want to become the metrics police, seeking out and admonishing anyone not using strict, centrally prescribed methods. Instead, our approach has been to identify good practice and use it to build a dialogue.

  5. Know what success looks like

Behaviour isn’t going to change overnight: new metrics are always popping up, and ideas about how to use them are evolving all the time. For us, success was about igniting debate. Amongst the day-to-day worries about Pure, developing Impact, REF, open access, research funding and Brexit, we have started to get people thinking about the effect that using metrics in a certain way can have, and, importantly, to influence behaviour.

For anyone starting out on this work, or just thinking about where to go next, I hope these tips help. For us, the next step is diving into the world of alternative metrics, and understanding how they can support and broaden our use of research metrics.


Karen Clews, Research Information Manager, Strategic Planning Office, 124 Aston Webb Semi Circle, University of Birmingham, k.m.clews@bham.ac.uk, 0121 414 9134.
