Measuring the magnificence of monographs

At Loughborough University we have recently been thinking about how we can use bibliometrics responsibly. Not surprisingly, our conversations tended to focus on journal and conference papers, which is where most citation databases concentrate their coverage. However, as part of this process the question inevitably arose as to whether there were also ways we could measure the quality or visibility of non-journal outputs, in particular monographs. To this end we thought we should explore this question with senior staff in Art, English, Drama, History and Social Sciences.

During our conversation, the Associate Dean for Research in the School of Art, English and Drama said he felt that assessing the arts through numbers was like asking engineers to describe their work through dance!  Whilst he was not averse to exploring the value of publication indicators, I think this serves to highlight how alien such numbers seem to those working in creative fields.

So what do we know about monographs?  Well, we know that the monograph as a format is not going away, although some publishers are now also offering ‘short form’ monographs (between a journal article and a full-length book in size).  We also know that monographs are not covered by the commercial citation benchmarking tools on which many of us rely for analyses.  They are of course covered by Google Scholar – but there are known concerns about the robustness of Google Scholar data, and there is no easy way of benchmarking it.  However, the biggest problem with citation analysis of monographs is not the lack of coverage in the benchmarking tools, but what a citation actually means in these fields.  In English & Drama, for example, citations are often used to refute previous work in an effort to promote new ideas (“So-and-so thinks X but I have new evidence to think Y”). So the question remains: is it possible to measure the quality and impact of monographs in a different way?

Well, as part of our conversation we explored some alternatives, which I’ll briefly run through here.

The Publisher

The most obvious choice of indicator is who actually published the work.  The Danish research evaluation system, for example, allocates publishers to tiers, and books published with top-tier publishers are weighted more heavily than books published with lower-tier publishers.  Whilst academics at Loughborough were not minded to formalise such a system internally, it was clear that they do make quality judgements based on publishers, with comments such as: “if a university DIDN’T have at least a handful of monographs published by the Big 6, that would be a concern.”  So quality is assumed because the process of getting a contract with a top-tier publisher is competitive, and the standard of peer review is very high. A bit like highly cited journals…

Book Reviews

Book reviews could serve to indicate quality – not only in terms of the content of those reviews and how many there are, but also where the reviews are published.  However, whilst there are book review indices, reviews can take a long time to appear.  Also, in the Arts & Humanities it’s unusual for a book to get a negative review, because the discipline areas are small and collegiate.  Essentially, if a book gets a review it means something, but if it doesn’t get reviewed, that doesn’t necessarily mean anything.  Just like citations…

Book sales

High book sales could be seen as an indicator of quality, and the beauty of sales is that they are numerical indicators, which bibliometricians like!  However, there is no publicly available source for book sales (that I’m aware of).  Also, sales can be affected by market size: books sold in the US will often outnumber those sold in the UK, simply as an effect of population size.  Sales are also affected by the print run – i.e., whether the book comes out as a short-print-run hardback aimed at libraries, or a large-print-run paperback aimed at undergraduates.  The former might sell few copies but be widely read; the latter might be widely sold, but never read!  So sales might be more an indicator of popularity than quality.  But the same could be said of citations….
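
If sales data were ever made available, the market-size effect could at least be adjusted for. Here is a minimal sketch of that idea in Python, using illustrative population and sales figures (not real data) to show how a per-capita normalisation can change the comparison between a US and a UK title:

```python
# Minimal sketch: normalising raw book sales by market size.
# All figures are illustrative, not real sales data.

MARKET_POPULATION_M = {"US": 330, "UK": 67}  # approximate populations, in millions

def sales_per_million(copies_sold: int, market: str) -> float:
    """Copies sold per million inhabitants of the given market."""
    return copies_sold / MARKET_POPULATION_M[market]

# A US title outselling a UK title in absolute terms...
print(round(sales_per_million(10_000, "US"), 1))  # 30.3 copies per million
# ...may still trail it once market size is taken into account.
print(round(sales_per_million(3_000, "UK"), 1))   # 44.8 copies per million
```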

Alt-metrics

Many altmetrics offerings cover books and provide a wide range of indicators.  One of particular relevance is the number of course syllabi on which a book is listed – although this is probably more likely to favour textbooks than research monographs.  It is also possible to see the number of book reviews in such tools, as well as other social media and news mentions.  However, altmetrics providers have never claimed that they measure quality, but rather attention, visibility and possibly impact.  But, at the risk of repeating myself, the same could be said of citations…

The problem for us at Loughborough was that none of these indicators met our criteria for a usable indicator, which we defined as:

  • Normalisable – Can we normalise for disciplinary differences (at least)?
  • Benchmarkable – Is comprehensive data available at different entity levels (individual, university, discipline, etc.) to compare performance?
  • Obtainable – Is it relatively simple for evaluators to get hold of the data, and for individuals to verify it?
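
To make the decision rule explicit: we treated these as a three-part checklist, and an indicator had to satisfy all three criteria to count as usable. Here is a minimal, hypothetical sketch of that logic in Python (the names and example values are mine, not a Loughborough tool):

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """Hypothetical checklist for the three criteria above."""
    name: str
    normalisable: bool   # can it be normalised for disciplinary differences?
    benchmarkable: bool  # is comprehensive comparison data available?
    obtainable: bool     # is the data easy to obtain and to verify?

    def usable(self) -> bool:
        # An indicator must meet all three criteria to be usable.
        return self.normalisable and self.benchmarkable and self.obtainable

# Illustrative assessment only: book sales lack a public, benchmarkable source.
book_sales = Indicator("Book sales", normalisable=True,
                       benchmarkable=False, obtainable=False)
print(book_sales.usable())  # False
```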

So to summarise: whilst there are legitimate objections to the use of non-citation indicators to measure the magnificence of monographs, most of those objections could also apply to citations.  The key difference is that we do have normalisable, benchmarkable and obtainable indicators for journal and conference papers; we don’t yet for books.  At Loughborough we concluded that measuring the magnificence of monographs can currently be done reliably only through peer review.  However, evidence of the sort presented here can be used to tell good stories about the quality and visibility of books at individual author and output level. And these stories can be legitimately told in some of the same places (job applications, funding bids, etc.) where we’d normally see citation stories.  Whether colleagues in the Arts, Humanities and Social Sciences will ever feel comfortable doing so is another question.



Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University.  She has a background in Libraries and Scholarly Communication research.  She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.

Outputs from Bibliometrics in Arts, Humanities and Social Sciences conference

Here are the links to presentations given at the recent #AHSSmetrics conference at the University of Westminster, 24 March 2017. Many thanks to all the presenters, and to the participants, for a stimulating day. For those who missed the event, Karen Rowlett has helpfully created a Storify of the tweets at https://storify.com/karenanya/bibliometrics-for-the-arts-and-humanities.

10.00 Welcome – Martin Doherty – Head of Department, Dept of History, Sociology & Criminology, University of Westminster

10.10 Opening Panel: How appropriate is bibliometrics for Arts, Humanities and Social Sciences? (Chaired by Katie Evans, University of Bath) – Peter Darroch (Plum Analytics), Professor Jane Winters (School of Advanced Study and Senate House Library), Stephen Grace (London South Bank University)

10.40 Citation metrics across disciplines – Google Scholar, Scopus and the Web of Science: A cross-disciplinary comparison – Anne-Wil Harzing (Middlesex University)

11.20 Tea & Coffee

11.50 Impacts of reputation metrics and contemporary art practices – Emily Rosamond (Arts University Bournemouth)

12.20 Bibliometrics as a research tool: The international rise of Jürgen Habermas – Christian Morgner (University of Leicester) (NB: presentation in person only)

1.00 Lunch (Kindly sponsored by Plum Analytics)

1.45 Workshop: Practice with PoP: How to use Publish or Perish effectively? (laptop with PoP software installed needed) – Anne-Wil Harzing

2.45 A funder’s perspective: bibliometrics and the arts and humanities – Sumi David (AHRC)

3.15 Bibliometric Competencies – Sabrina Petersohn (University of Wuppertal)

3.45 Tea & Coffee

4.00 Lightning talks:

4.30 Round Up by Stephanie Meece (University of the Arts London)

Why should a bibliometrician engage with altmetrics? Guest Post by Natalia Madjarevic

Last month, Barack Obama published an article in the journal JAMA discussing progress to date with The Affordable Care Act – or Obamacare – and outlining recommendations for future policy makers. Obama’s article was picked up in the press and across social media immediately. We can see in the Altmetric Details Page that it was shared across a broad range of online attention sources such as mainstream media, Twitter, Facebook, Wikipedia and commented on by several research blogs. We can also see from the stats provided by JAMA that the article, at time of writing, has been viewed over 1 million times and has an Altmetric Attention Score of 7539, but hasn’t yet received a single citation.

Providing instant feedback

Many altmetrics providers track attention to a research output as soon as it’s available online. This means institutions can then use altmetrics data to monitor research engagement right away, without the delay we often see in the citation feedback loop.

If President Obama was checking his Altmetric Details Page (which I hope he did!) he’d have known almost in real time exactly who was saying what about his article. In the same way, academic research from your institution is generating online activity – probably right now – and can provide extra insights to help enhance your bibliometric reporting.
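
For those who want to explore the data directly, Altmetric exposes a public details endpoint for individual DOIs. Below is a minimal Python sketch against that v1 API; the endpoint and the “score” field are documented, but treat rate limits and any other fields as assumptions to verify. The DOI used is that of the JAMA article discussed above.

```python
import requests

# Sketch: fetch public Altmetric details for a single DOI via the v1 API.
DOI = "10.1001/jama.2016.9797"  # Obama's JAMA article discussed above

resp = requests.get(f"https://api.altmetric.com/v1/doi/{DOI}", timeout=10)
if resp.status_code == 200:
    data = resp.json()
    print(data.get("title"))
    print("Altmetric Attention Score:", data.get("score"))
else:
    # The API returns 404 when it has no record for the DOI.
    print("No Altmetric record found (HTTP status", resp.status_code, ")")
```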


Altmetric, which has tracked mentions and shares of over 5.4m individual research outputs to date, sees 360 mentions per minute – a huge amount of online activity that can be monitored and reported on to help evidence additional signals of institutional research impact. That said, altmetrics are not designed to replace traditional measures such as citations and peer review, and it’s valuable to report on a broad range of indicators: altmetrics are a complement to, rather than a replacement for, traditional bibliometrics.

Altmetrics reporting: context is key

A single number – “This output received 100 citations” or “This output has an Altmetric Attention Score of 100” – doesn’t really say that much. That’s why altmetrics tools often focus on pulling out the qualitative data, i.e. the underlying mentions an output has received. Saying “This output has an Altmetric Attention Score of 100, was referenced in a policy document, tweeted by a medical practitioner and shared on Facebook by a think tank” is much more meaningful than a single number. It also tells a much more compelling story about the influence and societal reach of your research. So when using altmetrics data, zoom in and take a look at the mentions. That’s where you’ll find the interesting stories about your research attention to include in your reporting.
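
As a rough illustration of how a details-page response could be turned into that kind of sentence, here is a hypothetical sketch. The field names (e.g. cited_by_tweeters_count) mirror those I believe the Altmetric v1 API returns, but verify them against a live response before relying on them.

```python
# Hypothetical sketch: turn raw attention counts into a narrative sentence.
# Field names are assumptions based on the Altmetric v1 API; verify them.

def attention_story(data: dict) -> str:
    parts = []
    if data.get("cited_by_policies_count"):
        parts.append(f"was referenced in {data['cited_by_policies_count']} policy document(s)")
    if data.get("cited_by_tweeters_count"):
        parts.append(f"was tweeted by {data['cited_by_tweeters_count']} account(s)")
    if data.get("cited_by_fbwalls_count"):
        parts.append(f"was shared on {data['cited_by_fbwalls_count']} Facebook wall(s)")
    summary = f"This output has an Altmetric Attention Score of {data.get('score', 0):.0f}"
    return summary + (", " + ", ".join(parts) if parts else "") + "."

# Illustrative counts only, not real data:
example = {"score": 100, "cited_by_policies_count": 1,
           "cited_by_tweeters_count": 42, "cited_by_fbwalls_count": 3}
print(attention_story(example))
```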

How can you use altmetrics to extend your bibliometrics service?

Here are some ideas:

  • Include altmetrics data in your monthly bibliometric reports to demonstrate societal research engagement – pull out some qualitative highlights
  • Embed altmetrics in your bibliometrics training sessions and welcome emails to new faculty – we have lots of slides you can re-use here
  • Provide advice to researchers on how to promote themselves online and embed altmetrics data in their CV
  • Encourage responsible use of metrics as discussed in the Leiden Manifesto and The Metric Tide
  • Don’t use altmetrics as a predictor of citations! Use them instead to gain a more rounded, coherent insight into the engagement with and dissemination of your research

Altmetrics offer an opportunity for bibliometricians to extend existing services and provide researchers with more granular and informative data about engagement with their research. The first step is to start exploring the data – from there you can determine how it will fit best into your current workflows and activities.


Natalia Madjarevic

@nataliafay