Book review: “Measuring research: what everyone needs to know”.

Lizzie Gadd reviews a new introductory text, “Measuring Research: what everyone needs to know” by Cassidy Sugimoto and Vincent Larivière.

The research community has been crying out for a decent primer on all things bibliometrics.  And now we have one. And it’s a good one – no doubt about that.  It’s well written by two people who know what they’re talking about.  Cassidy Sugimoto and Vincent Larivière are both Associate Professors in Information Science fields; Sugimoto at Indiana University in the US, and Larivière at the Université de Montréal in Canada.  For newcomers, this book provides a really solid grounding in the history, present-day practices, and challenges around the use of bibliometrics in research assessment.  In terms of the Lis-Bibliometrics Bibliometric Competencies, I would say the content easily addresses the Entry Level competencies and touches on many of the Core Level competencies too.  However, going back to the title, anyone expecting a broader overview of the full range of input, process, and output elements of “measuring research” will be a bit disappointed.  Apart from a short section on research funding and peer review, this book is about bibliometrics.  I’ve no problem with that, but I think there is a general trend towards conflating research assessment with bibliometrics that isn’t helpful.  As the authors themselves make very clear, bibliometrics should be used cautiously when measuring research, and in certain circumstances it shouldn’t be used at all.

Measuring research

In terms of content and delivery, this is where the book really shines.  It provides just the right level of detail around the foundational contributions of bibliometric greats such as de Solla Price, Lotka, Bradford and Zipf.  The reader is then whisked through the key bibliometric data sources (WoS, Scopus, Google Scholar), their coverage, strengths and weaknesses.  An enormous chapter (over half the book) called “The Indicators” then follows.  This provides short, digestible overviews of the sorts of things bibliometrics are used to indicate (productivity, collaboration, interdisciplinarity and impact) and the metrics used to indicate them (JIF, Eigenfactor, SNIP, SJR, CiteScore, Altmetrics and the h-index). They’ve cleverly managed to describe how these indicators work without putting people off with scary mathematical formulae or worked examples.  (Having said that, I do think there is space for some instructional videos on the calculation of some of the more complex metrics – SNIP, SJR, RCR, FCR etc. – if anyone wants to volunteer!)  The book closes with a short chapter on some of the bigger issues around measuring research using bibliometrics: issues such as the fact that “the scientific community produces, but does not control, the data that govern it”; the fact that “there is…no mechanism for holding data service providers accountable” for their data; and the rather stark conclusion that, as a result, “research administrators…must be held responsible for the appropriate use of the data”.  The authors sum up with some important points about the potential negative effect of measurement on scholarship (“making the capital generated more important than the science itself”) and with a call for research evaluators to “heed the mantra of the medical profession: first do no harm”.
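(A small aside of my own, and not something taken from the book: the h-index is simple enough to sketch in a few lines of code, which may help readers who like to see the arithmetic.  The snippet below just implements the standard definition – the largest h such that an author has h papers each cited at least h times – using a made-up citation list.)

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with h or more citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # there are still 'rank' papers with >= rank citations
        else:
            break
    return h

# A hypothetical record of six papers and their citation counts:
print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4
```

The more complex, normalised indicators (SNIP, SJR, RCR, FCR) don’t reduce to anything this compact – which is partly why I’d love to see those instructional videos.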

Having waxed lyrical about the content, I should say there are some topics I was expecting to see covered that weren’t.  There’s no mention of the increasing prevalence of World University Rankings, which measure research in weird and wonderful ways – many of them bibliometric.  These have a significant impact on universities, both reputationally and financially, and an understanding of their methodologies is, I think, an important bibliometric foundation.  There was also no real coverage of national research evaluation exercises (apart from an odd reference to the UK Research Excellence Framework on page 47, which seems to make the erroneous assumption that it is based on productivity).  This may be because the authors are lucky enough not to have to work in countries which are burdened by such things.  The other surprising omission, considering the book’s focus on responsible metrics, was the lack of any reference to the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and the Snowball Metrics initiative.  Again, this might be a cultural thing (in the UK this is pretty much all the bibliometric community are talking about), but these are important global initiatives.

Despite these omissions, I stick to my original claim: this is a good book.  It covers important ground in an enjoyable way.  I have a couple of suggestions for the second edition(!).  Firstly, as I can imagine the book being used as a ‘ready reference’ by practitioners, I think it would be helpful if key terms were emboldened (they weren’t all indexed).  And secondly, whilst I accept the authors’ stylistic rationale for “omitting explicit references from the chapters”, I think if you’re going to refer to studies that support your assertions (e.g. p117) you should really cite them.  Not only would this be very helpful to the reader, but I hear that citation analysis is quite a thing now…

To conclude, I think that Librarians and Research Office types who feel like they should know more about bibliometrics than they do will find this a reliable and accessible way of getting up to speed.  Even practitioners who’ve been working in this area for a while will find the book fills some gaps in their knowledge. (Our research has shown that fewer than a third of us who have LIS qualifications received any bibliometrics training as part of them.)  And what’s more: it’s a bargain. It’s not often I say that about bibliometrics books, but this costs the equivalent of two decent cups of coffee and a pecan slice.  And by the time you’ve consumed those, you might be the best part of the way through this book if you’re a quick reader.  It’s a very manageable 133 pages – an important consideration for the busy practitioner.  So to summarise: this book is a must-buy.  It provides useful, readable content at a good price, and you can’t ask for more than that.

Measuring Research: what everyone needs to know by Cassidy Sugimoto and Vincent Larivière was released in the UK by Oxford University Press in March 2018.  It retails at £10.99.

Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.

4 Replies to “Book review: “Measuring research: what everyone needs to know”.”

  1. Thanks for your review, Lizzie – it sounds like a good read. However, I think you got a bargain price for the book, as the website now states £47.99.

    1. Hi Chris, Thanks for the comment. It looks like a hardback copy is now available for £47.99 but the paperback is still on sale at £10.99! Thanks, Lizzie
