Introducing the recently launched Metrics Toolkit – Stacy Konkiel outlines what it is, why it was created, and how it can be used by practitioners working in research impact management.
In January, The Metrics Toolkit launched to much interest from academics in the US and UK. The Metrics Toolkit is an online resource aimed at helping researchers and evaluators understand and responsibly use research metrics like journal acceptance rates, the Journal Impact Factor, and altmetrics in evaluation scenarios.
In this post, I’ll describe what the Toolkit is, why it was created, and how it can be used by practitioners working in research impact management. I’ll also share our next steps for expanding and improving the Toolkit, and ways you can get involved.
What the Toolkit is
The Metrics Toolkit explains 27 research impact metrics, including how they are calculated, their known appropriate uses and limitations, and ways to access them. The Toolkit is intended to be an impartial and literature-driven guide to using research metrics for both researchers and evaluators: researchers can find the metrics best-suited to help explain the kinds of impact their research has had, and evaluators can use the Toolkit to look up metrics they encounter in university department reports, grant narratives, promotion and tenure dossiers, and other professional advancement scenarios, to ensure that they fully understand them.
By design, metrics explained in the Toolkit can often be applied to research formats beyond journal articles.
The Toolkit also includes a number of examples of how researchers have used impact metrics in professional advancement scenarios, alongside general best practices for using impact metrics in one’s own reporting (e.g. “Present quantitative data in context and use appropriately normalized scores when possible”). For those who want to dive deeper into the meanings of metrics, we include a number of recommended readings.
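To make the "appropriately normalized scores" best practice above concrete, a field-normalized citation score divides a paper's raw citation count by the average citations of papers from the same field and publication year, so scores are comparable across disciplines. The sketch below is purely illustrative – the field averages and paper counts are made-up numbers, not real data:

```python
# Illustrative sketch of field normalization: a raw citation count is
# divided by the average citations for papers in the same field and
# publication year, yielding a score where 1.0 means "field average".

# Made-up field/year citation averages (for illustration only).
FIELD_YEAR_AVERAGES = {
    ("oncology", 2015): 24.0,
    ("history", 2015): 3.0,
}

def field_normalized_score(citations, field, year):
    """Return citations relative to the field/year average (1.0 = average)."""
    baseline = FIELD_YEAR_AVERAGES[(field, year)]
    return citations / baseline

# A history paper with 6 citations exceeds its field average by more
# than an oncology paper with 30 citations does.
print(field_normalized_score(30, "oncology", 2015))  # 1.25
print(field_normalized_score(6, "history", 2015))    # 2.0
```

This is why presenting raw counts without context can mislead: 30 citations is unremarkable in a high-citation field but exceptional in a low-citation one.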
Why we created the Toolkit
The Metrics Toolkit was envisaged by Heather Coates (IUPUI, Indianapolis, USA), a librarian who regularly encountered researchers who wanted help understanding the rapidly evolving world of research impact metrics, usually in the urgent weeks (or days!) before a grant application or tenure dossier was due.
Heather recognized the need for a “one stop shop” that faculty (and those who evaluate faculty) could consult whenever they needed help: a resource laying out impartial advice for using metrics appropriately, grounded in peer-reviewed scientometrics research and written in language that readers from all disciplines could understand.
Heather reached out to me and Robin Champieux (OHSU, Portland, OR, USA) given our long-standing interest in making research impact metrics easier to understand and use responsibly. Together, we secured funding and in-kind donations from each of our organizations and Force11 to build The Metrics Toolkit.
How it can be used
You can use The Metrics Toolkit to learn about specific metrics, which are documented thoroughly in our metrics explainer pages. You can look up metrics by name (Explore Metrics), or choose appropriate metrics based on the type of impact you want to demonstrate, the research output you’re gathering data for, and the discipline you work in (Choose Metrics).
Metrics explainer pages
The metrics explainer pages themselves include a number of important data points for each metric, including how the metric is calculated, appropriate and inappropriate use cases, known limitations, the transparency of the metric’s calculation, and the timeframe for which the metric applies. We base these pages on the peer-reviewed literature – nearly all statements link out to research that backs them up.
The Explore Metrics dashboard offers a simple overview of all the metrics included in the Toolkit. Using category buttons at the top of the dashboard, you can limit the metrics on display by the level at which they apply: author-level, book-level, journal-level, and so on.
The Choose Metrics page allows you to filter for metrics that apply to the type of impact you want to demonstrate, the type of research output (e.g. journal article, book) you want to find metrics for, and the discipline of the research being evaluated. You simply use the dropdown menus to make a selection based on the criteria you want (e.g. “Show me all metrics that can be used for the Arts & Humanities”). Choose Metrics makes it much easier to find the most appropriate metrics for particular use cases.
Next steps
Since launch, we’ve received a lot of great constructive criticism and encouragement, and we have our own ideas about how we want the Toolkit to grow. Soon, we’re going to:
- Expand the selection of citation-based metrics available in The Metrics Toolkit, adding explainer pages for co-authorship metrics and others;
- Improve the existing metrics pages, thanks to feedback from the community;
- Make the Disciplines for the Choose Metrics page more specific, so it’s easier for researchers to find more accurate metric recommendations; and
- Grow our network of experts who maintain The Metrics Toolkit, so as to ensure that new discoveries in scientometrics make it into the Toolkit as quickly as possible (more on this point below).
The Toolkit launched as a kind of “minimum viable product”, so there are a number of gaps in coverage that we hope to close soon. Which is where all you bibliomagicians come in!
Get involved
We’re looking for individuals who are enthusiastic about responsible metrics to join The Metrics Toolkit’s volunteer Editorial Board. Qualified Editors will have an accurate understanding of how metrics are used in evaluation scenarios, a willingness to regularly read scientometrics research articles and distill the most important information, excellent written communication skills (in particular, an ability to write clearly and without jargon), and the ability to meet monthly with those in the US and UK via video conference.
Editors will be responsible for regularly updating a small number of metrics explainer pages, identifying new metrics-related topics suitable for inclusion in the Toolkit, promoting The Metrics Toolkit amongst relevant audiences, and helping devise an ongoing strategy for the Toolkit’s development. We estimate that involvement will require an initial 2-3 hours per month upon joining the board, then an ongoing 1-2 hours per month.
We’ll be formally launching a recruitment campaign for our Editorial Board soon, but in the meantime, please do email us at firstname.lastname@example.org if you’re interested in applying.
Stacy Konkiel is the Director of Research & Education at Altmetric, a data science company that uncovers the attention that research receives online. Her research interests include incentives systems in academia and informetrics, and Stacy has written and presented widely about altmetrics, Open Science, and library services. She also currently chairs the Innovation committee of Library Pipeline and is building the Metrics Toolkit. Previously, Stacy worked with teams at Impactstory, Indiana University & PLOS. You can follow Stacy on Twitter at @skonkiel.
Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.