At Loughborough University we have recently been thinking about how we can use bibliometrics responsibly. Not surprisingly, our conversations tended to focus on journal and conference papers, which is where most citation databases concentrate their coverage. However, as part of this process the question inevitably arose as to whether there were also ways we could measure the quality or visibility of non-journal outputs, in particular, monographs. To this end we thought we should explore this question with senior staff in Art, English, Drama, History and Social Sciences.
During the conversation we had, the Associate Dean for Research in the School of Art, English and Drama said he felt that assessing the arts through numbers was like asking engineers to describe their work through dance! Whilst he was not averse to exploring the value of publication indicators, I think this serves to highlight how alien such numbers are perceived to be to those working in creative fields.
So what do we know about monographs? Well, we know that the monograph as a format is not going away, although some publishers are now also offering ‘short form’ monographs (between a book and journal article in size). We also know that monographs are not covered by the commercial citation benchmarking tools which many of us rely on for analyses. They are of course covered by Google Scholar – but there are known concerns with the robustness of Google data, and there is no way of easily benchmarking it. However, the biggest problem with citation analysis in the field of monographs is not the lack of coverage in the benchmarking tools, but what a citation actually means in these fields. In English & Drama for example, citations are often used to refute previous work in an effort to promote new ideas (“So-and-so thinks X but I have new evidence to think Y”). So the question remains: is it possible to measure the quality and impact of monographs in a different way?
Well, as part of our conversation we explored some alternatives which I’ll briefly run through here.
The most obvious choice of indicator is who actually published the work. And we know that the Danish research evaluation system allocates publishers to tiers, and those books published with top-tier publishers are weighted more heavily than books published with lower-tier publishers. Whilst academics at Loughborough were not minded to formalise such a system internally, it was clear that they do make quality judgements based on publishers, with comments such as: “if a university DIDN’T have at least a handful of monographs published by the Big 6, that would be a concern.” So quality is assumed because the process of getting a contract with a top-tier publisher is competitive, and the standard of peer review is very high. A bit like highly cited journals…
Book reviews could serve to indicate quality – not only in terms of the content of those reviews, and how many there are, but where the reviews are published. However, whilst there are book review indices, reviews can take a long time to come out. Also, in the Arts & Humanities, it’s unusual for a book to get a negative review because the discipline areas are small and collegiate. Essentially, if a book gets a review it means something, but if it doesn’t get reviewed, it doesn’t necessarily mean anything. Just like citations…
High book sales could be seen as an indicator of quality and the beauty of sales is that they are numerical indicators which bibliometricians like! However, there is no publicly available source for book sales (that I’m aware of). Also, sales can be affected by market size. Thus books sold in the US will often outnumber those sold in the UK – an effect of population size. Sales are also affected by the print run – i.e., whether the book comes out as a short-print-run hardback aimed at libraries, or a large-print-run paperback aimed at undergrads. The former might be little sold but widely read; the latter might be widely sold, but never read! So sales might be more an indicator of popularity than quality. But the same could be said of citations….
Many alt-metric offerings cover books and provide a wide range of indicators. One of particular relevance is the course syllabi on which books are listed – although this is probably more likely to favour textbooks than research monographs. It is also possible to see the number of book reviews on such tools, as well as other social media and news mentions. However, alt-metric providers have never claimed that they measure quality, but rather attention, visibility and possibly impact. But, at the risk of repeating myself, the same could be said of citations…
The problem for us at Loughborough was that none of these indicators met our criteria for a usable indicator, which we defined as:
- Normalisable – Can we normalise for disciplinary differences (at least)?
- Benchmarkable – Is comprehensive data available at different entity levels (individual, university, discipline, etc.) to compare performance?
- Obtainable – Is it relatively simple for evaluators to get hold of the data, and for individuals to verify it?
So to summarise, whilst there are legitimate objections to the use of non-citation indicators to measure the magnificence of monographs, most of those objections could also apply to citations. The key difference is that we do have normalisable, benchmarkable and obtainable indicators for journal and conference papers: we don’t yet for books. At Loughborough we concluded that measuring the magnificence of monographs can currently only be done reliably through peer review. However, evidence of the sort presented here can be used to tell good stories about the quality and visibility of books at individual author and output level. And these stories can be legitimately told in some of the same places (job applications, funding bids, etc.) where we’d normally see citation stories. Whether colleagues in the Arts, Humanities and Social Sciences will ever feel comfortable doing so is another question.
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.