Measuring the magnificence of monographs

At Loughborough University we have recently been thinking about how we can use bibliometrics responsibly. Not surprisingly, our conversations tended to focus on journal and conference papers, where the major citation databases concentrate their coverage. However, as part of this process the question inevitably arose as to whether there were also ways we could measure the quality or visibility of non-journal outputs, in particular monographs. To this end we thought we should explore the question with senior staff in Art, English, Drama, History and Social Sciences.

During our conversation, the Associate Dean for Research in the School of Art, English and Drama said he felt that assessing the arts through numbers was like asking engineers to describe their work through dance! Whilst he was not averse to exploring the value of publication indicators, I think this serves to highlight how alien such numbers are perceived to be by those working in creative fields.

So what do we know about monographs? Well, we know that the monograph as a format is not going away, although some publishers are now also offering ‘short form’ monographs (between a journal article and a book in length). We also know that monographs are not covered by the commercial citation benchmarking tools which many of us rely on for analyses. They are of course covered by Google Scholar – but there are known concerns about the robustness of Google Scholar data, and there is no easy way of benchmarking it. However, the biggest problem with citation analysis of monographs is not the lack of coverage in the benchmarking tools, but what a citation actually means in these fields. In English & Drama, for example, citations are often used to refute previous work in an effort to promote new ideas (“So-and-so thinks X but I have new evidence to think Y”). So the question remains: is it possible to measure the quality and impact of monographs in a different way?

Well, as part of our conversation we explored some alternatives, which I’ll briefly run through here.

The Publisher

The most obvious choice of indicator is who actually published the work. We know that the Danish research evaluation system allocates publishers to tiers, and books published with top-tier publishers are weighted more heavily than books published with lower-tier publishers. Whilst academics at Loughborough were not minded to formalise such a system internally, it was clear that they do make quality judgements based on publishers, with comments such as: “if a university DIDN’T have at least a handful of monographs published by the Big 6, that would be a concern.” So quality is assumed because the process of getting a contract with a top-tier publisher is competitive, and the standard of peer review is very high. A bit like highly cited journals…

Book Reviews

Book reviews could serve to indicate quality – not only in terms of the content of those reviews and how many there are, but also where the reviews are published. However, whilst there are book review indices, reviews can take a long time to appear. Also, in the Arts & Humanities it’s unusual for a book to get a negative review, because the discipline areas are small and collegiate. Essentially, if a book gets a review it means something, but if it doesn’t get reviewed, that doesn’t necessarily mean anything. Just like citations…

Book sales

High book sales could be seen as an indicator of quality, and the beauty of sales is that they are numerical indicators, which bibliometricians like! However, there is no publicly available source of book sales data (that I’m aware of). Sales can also be affected by market size: books sold in the US will often outnumber those sold in the UK, simply as an effect of population size. Sales are also affected by the print run – i.e., whether the book comes out as a short-print-run hardback aimed at libraries, or a large-print-run paperback aimed at undergraduates. The former might sell little but be widely read; the latter might be widely sold but never read! So sales might be more an indicator of popularity than of quality. But the same could be said of citations…

Alt-metrics

Many altmetric offerings cover books and provide a wide range of indicators. One of particular relevance is the number of course syllabi on which a book is listed – although this is probably more likely to favour textbooks than research monographs. It is also possible to see the number of book reviews in such tools, as well as other social media and news mentions. However, altmetric providers have never claimed to measure quality, but rather attention, visibility and possibly impact. But, at the risk of repeating myself, the same could be said of citations…

The problem for us at Loughborough was that none of these indicators met our criteria for a usable indicator, which we defined as:

  • Normalisable – can we normalise for disciplinary differences (at least)?
  • Benchmarkable – is comprehensive data available at different entity levels (individual, university, discipline, etc.) to compare performance?
  • Obtainable – is it relatively simple for evaluators to get hold of the data, and for individuals to verify it?

So to summarise, whilst there are legitimate objections to the use of non-citation indicators to measure the magnificence of monographs, most of those objections could also apply to citations. The key difference is that we do have normalisable, benchmarkable and accessible indicators for journal and conference papers; we don’t yet for books. At Loughborough we concluded that measuring the magnificence of monographs can currently only be done reliably through peer review. However, evidence of the sort presented here can be used to tell good stories about the quality and visibility of books at individual author and output level. And these stories can legitimately be told in some of the same places (job applications, funding bids, etc.) where we’d normally see citation stories. Whether colleagues in the Arts, Humanities and Social Sciences will ever feel comfortable doing so is another question.



Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University.  She has a background in Libraries and Scholarly Communication research.  She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.

Outputs from the Bibliometrics in Arts, Humanities and Social Sciences conference

Here are the links to presentations given at the recent #AHSSmetrics conference at the University of Westminster, 24 March 2017. Many thanks to all the presenters, and to the participants, for a stimulating day. For those who missed the event, Karen Rowlett has helpfully created a Storify of the tweets at https://storify.com/karenanya/bibliometrics-for-the-arts-and-humanities.

10.00 Welcome – Martin Doherty – Head of Department, Dept of History, Sociology & Criminology, University of Westminster

10.10 Opening Panel: How appropriate is bibliometrics for Arts, Humanities and Social Sciences? (Chaired by Katie Evans, University of Bath) – Peter Darroch (Plum Analytics), Professor Jane Winters (School of Advanced Study and Senate House Library), Stephen Grace (London South Bank University)

10.40 Citation metrics across disciplines – Google Scholar, Scopus and the Web of Science: A cross-disciplinary comparison – Anne-Wil Harzing (Middlesex University)

11.20 Tea & Coffee

11.50 Impacts of reputation metrics and contemporary art practices – Emily Rosamond (Arts University Bournemouth)

12.20 Bibliometrics as a research tool: The international rise of Jürgen Habermas – Christian Morgner (University of Leicester) NB presentation in person only

1.00 Lunch (Kindly sponsored by Plum Analytics)

1.45 Workshop: Practice with PoP: How to use Publish or Perish effectively? (laptop with PoP software installed needed) – Anne-Wil Harzing

2.45 A funder’s perspective: bibliometrics and the arts and humanities – Sumi David (AHRC)

3.15 Bibliometric Competencies – Sabrina Petersohn (University of Wuppertal)

3.45 Tea & Coffee

4.00 Lightning talks:

4.30 Round Up by Stephanie Meece (University of the Arts London)

HEFCE: The road to the Responsible Research Metrics Forum – Guest post by Ben Johnson

On Wednesday 8th February 2017, Imperial College made headlines by announcing that it had signed the San Francisco Declaration on Research Assessment (DORA), meaning that Imperial will no longer consider journal-based metrics, such as journal impact factors, in decisions on the hiring and promotion of academic staff. The decision followed a long campaign by Stephen Curry, a professor of structural biology and long-standing advocate of the responsible use of metrics.

At the end of last year, Loughborough University issued a statement on the responsible use of metrics in research assessment, building on the Leiden Manifesto. This was followed two weeks ago by a statement of principles of research assessment and management from the University of Bath, building on the concept of responsible use of quantitative indicators. And, earlier in 2016, the Stern review of the Research Excellence Framework recognised clearly that “it is not currently feasible to assess research outputs in the REF using quantitative indicators alone”.

What these examples and others show is that the issue of metrics – in particular ‘responsible metrics’ – has risen up the agenda for many universities. As one of those closely involved in the HEFCE review of metrics (The Metric Tide), and secretary to the new UK Forum for Responsible Research Metrics, this of course is great to see.

Of course, the issue of metrics has been bubbling away for much longer than that, as the Metric Tide report set out. University administrators, librarians and academics themselves have taken a leading role in promoting the proper use of metrics, with forums like the ARMA metrics special interest group promising to play a key part in challenging attitudes and changing behaviours.

In addition, as we have seen with university responses to the government’s HE green paper and to the Stern review, the wider community is very alive to the risks of an over-reliance on metrics. This was reflected in the outcomes of both exercises, with peer review given serious endorsement in both the draft legislation and the Stern report as the gold standard for the assessment of research.

These developments are exactly the kinds of things that the new UK Forum for Responsible Research Metrics wants to see happening. This forum has been set up with the specific remit to advance the agenda of responsible metrics in UK research, but it’s clear that this is not something it can deliver alone – it is a substantial collective effort.

So what will the Forum do? Well, as the Metric Tide report states, many of the issues relate to metrics infrastructure, particularly around standards, openness and interoperability. The Forum will have a specific role in helping to address longstanding issues, particularly around the adoption of identifiers – an area of focus echoed by the Science Europe position statement on research information systems published at the end of 2016, which is itself a useful touchstone for thinking about these issues.

To support the Forum, Jisc is working hard on developing an action plan to address the specific recommendations of the Metric Tide report, with a particular focus on building effective links with other groups working in this area, e.g. the RCUK/Jisc-led Research Information Management (RIM) Coordination Group. This will be discussed when the Forum meets again in early May.

However, sorting out the ‘plumbing’ that underpins metrics is no good if people continue to misuse them. To address this, the Forum will take a complementary look at the cultures and behaviours within institutions and elsewhere: firstly, to develop more granular evidence of how metrics are being used; and secondly, to look at making specific interventions to support greater responsibility from academics, administrators, funders, publishers and others involved in research.

With that in mind, Universities UK and the UCL Bibliometrics Group, under the auspices of the Forum, will shortly be jointly issuing a survey of HEIs on the use of problematic metrics in university management and among academic groups. This will help to identify the scale of any (mis)use of measures such as the JIF, and also to help us better understand why initiatives like DORA have not been more widely adopted in the UK.

Of course, metrics have much broader uses than just measuring outputs – they are also used to measure people, groups and institutions. This is a key finding of the Metric Tide report, but one that is often overlooked when focussing very narrowly on output metrics. The Forum will also be focussing on this, seeking to bring people together across all domains.

To make a decisive contribution here, the Forum needs to have clout, and it is for this reason that the five partners (HEFCE, RCUK, Jisc, Wellcome and Universities UK) asked Professor David Price to convene and chair the Forum as a mixed group of metrics experts and people in positions of serious influence in their communities. This was a delicate balance to strike, and one that can only be successful if the Forum engages effectively with the various interested communities.

With that in mind, the Forum is planning to set up a number of ‘town hall’ meetings throughout 2017 to engage with specific communities on particular topics, and would very much welcome hearing from anyone interested in being involved in these or in engaging with the Forum in any other way. We will be announcing further details of these on the Forum’s web pages soon.

If you are interested in joining up with the work of the Forum throughout 2017, please contact me on b.johnson@hefce.ac.uk – I’d be delighted to hear from you.


Ben Johnson is a research policy adviser at the Higher Education Funding Council for England, secretary to the UK Forum for Responsible Research Metrics and a member of the G7 expert group on open science.

He has responsibility for policy on open access, open data, research metrics, technical infrastructure and research sector efficiency within universities in England. In recent years, he co-authored The Metric Tide (a report on research metrics), developed and implemented a policy for open access in the UK Research Excellence Framework (REF), and supported Professor Geoffrey Crossick’s project and report to HEFCE on monographs and open access. He is a member of the UK’s open data forum and co-authored the forthcoming UK Open Research Data Concordat. In addition, he is currently part-seconded to the Department for Business, Energy and Industrial Strategy to work on reforming the research and innovation landscape.

REF consultation: Lis-Bibliometrics response

The four UK higher education funding bodies are consulting on proposals for the next Research Excellence Framework.  Thank you to all Lis-Bibliometrics members who have contributed their thoughts on this.  Here is a draft response the Lis-Bibliometrics Committee intends to submit on behalf of the group.  If you have any last minute comments please contact me or share via the list as soon as possible.  We’ve decided to respond only to consultation question 18:

Q.18 Do you agree with the proposal for using quantitative data to inform the assessment of outputs, where considered appropriate for the discipline? If you agree, have you any suggestions for data that could be provided to the panels at output and aggregate level?

We agree that quantitative data can support the assessment of outputs where considered appropriate by the discipline.  Any use of quantitative data should follow the principles for responsible use of metrics set out in the Metric Tide and the Leiden Manifesto.

  • Disciplinary difference, including citation patterns varying by output type, must be taken into account.
  • Data should only be used if it offers a high standard of coverage, quality and transparency. Providing data from a range of sources (e.g. Scopus, Web of Science, Google Scholar) would allow the panel to benefit from the strengths of each source whilst highlighting the limitations.
  • Known biases reflected by bibliometric indicators (e.g. around interdisciplinary research and gender) should be taken into account.
  • A range of data should be provided to avoid incentivizing undesirable side effects or gaming by focusing attention on a single indicator.
  • Given the skewed distribution of citations, and the ‘lumpiness’ of citations for recent publications in particular, we recommend that measures of uncertainty be provided alongside any citation data (see the illustrative sketch after this list). At the very least, false precision should be avoided.
  • In addition to citation indicators, panels should take into account the number of authors of the output.
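
As an aside from the response text itself, here is a minimal sketch of the kind of uncertainty measure the bullet above has in mind: a percentile bootstrap confidence interval around a mean-based citation indicator. The citation counts are invented purely for illustration.

    # Minimal illustrative sketch: a percentile bootstrap confidence interval
    # around the mean of a (hypothetical, heavily skewed) set of citation counts.
    import random
    import statistics

    citations = [0, 0, 1, 1, 2, 3, 3, 5, 8, 120]  # invented counts for illustration

    def bootstrap_mean_ci(values, n_resamples=10_000, alpha=0.05, seed=42):
        """Percentile bootstrap confidence interval for the mean of `values`."""
        rng = random.Random(seed)
        means = sorted(
            statistics.mean(rng.choices(values, k=len(values)))
            for _ in range(n_resamples)
        )
        lower = means[int((alpha / 2) * n_resamples)]
        upper = means[int((1 - alpha / 2) * n_resamples) - 1]
        return lower, upper

    low, high = bootstrap_mean_ci(citations)
    print(f"mean = {statistics.mean(citations):.1f}, 95% CI ~ ({low:.1f}, {high:.1f})")

The very wide interval produced by a single highly cited outlier is exactly the kind of ‘false precision’ risk the response warns against.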

Panels should receive training on understanding and interpreting the data and be supported by an expert bibliometric advisor.

We do not consider the field-weighted citation impact indicator appropriate for the assessment of individual outputs: as an arithmetic-mean-based indicator it is too heavily skewed by small numbers of ‘unexpected’ citations. Furthermore, its four-year citation window would not capture the full citation impact of outputs from early in the REF period. The use of field-weighted citation percentiles (i.e. the percentile n such that the output is among the top n% most cited outputs worldwide for its subject area and year of publication) or percentile bands (as used in REF2014) is preferable. Percentile-based indicators are more stable and easier to understand, as the “performance” of papers is scaled from 1 to 100, but they can be skewed by large numbers of uncited items.
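
To make the percentile idea concrete, here is a rough, illustrative sketch using one simple rank-based convention (real tools, and the REF2014 approach, use their own definitions and handle ties and field normalisation more carefully; the reference counts below are invented):

    # Illustrative sketch: the percentile of one output's citation count within
    # a reference set of outputs from the same subject area and publication year.
    # In practice the reference set would come from a citation database covering
    # the whole field and year; the counts below are invented.
    def citation_percentile(output_citations, field_citations):
        """Return n such that the output is (roughly) among the top n% most cited."""
        more_cited = sum(1 for c in field_citations if c > output_citations)
        return 100 * (more_cited + 1) / len(field_citations)

    field_counts = [0, 0, 1, 2, 2, 3, 4, 6, 9, 15, 22, 40]  # hypothetical field/year set
    print(f"Top {citation_percentile(15, field_counts):.0f}% most cited")  # -> Top 25%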

Output level citation indicators are less useful for recent outputs.   Consequently, it might be tempting to look at journal indicators.  This temptation should be resisted!  Given the wide distribution of citations to outputs within a journal, and issues of unscrupulous ‘gaming’, journal metrics are a poor proxy for individual output quality.  Furthermore, use of journal metrics would incentivize the pursuit of a few ‘high impact’ journals to the detriment of timely, diverse and sustainable scholarly communications.

Use of aggregate level data raises the question of whether the analysis is performed only on the submitted outputs, or on the entire output from the institution during the census period. The latter would provide a more accurate picture of the institution’s performance within the discipline, but automatically mapping outputs to REF units of assessment is extremely challenging.  Furthermore it would be hard to disaggregate those papers written by staff who are not eligible for submission to REF.

Katie Evans, on behalf of the Lis-Bibliometrics Committee

Note: This replaces an earlier draft REF consultation response posted on 1st March 2017.   

My double life: playing and changing the scholarly communications game. By Lizzie Gadd

I love my job. I work as a “Research Policy Manager (Publications)” at Loughborough University, and I spend my time understanding and advising on how we can improve the quality and visibility of our research. However, the strategies for achieving this aren’t as straightforward as you might think. And increasingly I feel like I’m leading a double life, seeking both to play and to change the scholarly communication game.


‘Communication’ by Jackie Finn-Irwin CC-BY 2.0

 

What do I mean by this? Well, the game we’re in is one where publications mean prizes. If others rate them (e.g. in the REF) or cite them (as measured by the University League Tables), you win. To be a winner, we know you need to produce quality research (of course), collaborate internationally (it improves quality and visibility), and publish in those journals that are indexed by the tools that expose your research to the world and, importantly, also do the citation measuring for the aforementioned REF and University League Tables. And although there is a huge backlash against using journal metrics as an indicator of the quality of the underlying research, there is no doubt that getting a paper into a journal with extremely rigorous quality standards still means something to academics and their peers.

So the current game is inherently tied up with journal publications.  And there are two well-rehearsed reasons why this is not a good thing. The first is that journals are expensive – and getting more expensive. The second reason is that journals are slow at communicating research results.  Publication delays of years are not uncommon. (There are of course other objections to journals, not least the murky waters about how an end-user may re-use journal content, but I won’t go into these here.)

This is why we need to change the game. And the best option we have for changing the game is to keep producing quality research and collaborating internationally, but to also create new means of scholarly communication that are neither expensive, nor slow.

Some might argue that you can have it both ways: publish in a journal which has a liberal green open access policy. This will allow you to provide immediate access to the research through the preprint, and access to the peer-reviewed research through the postprint. And to be honest, this is the compromise we currently go for. But this form of open access is showing no signs of reducing the cost of subscriptions. And not all journals have liberal green open access policies. And not all academics want to release their preprint until the paper has been accepted by a journal, in case it is rejected – which rather defeats the object.

Now there are many alternative models of scholarly communication that ARE inexpensive and speed up publication.  These include preprint archives or ‘diamond’ open access journals that charge neither the author to submit nor the reader to read.  However, the problem is that these are not picked up by the citation benchmarking tools.  This is either because they are not journals at all (preprint archives) or because they are new entries to the market so have yet to prove their significance in the field and be selected for inclusion.

So what does a Research Policy Manager (Publications) do? Well, it seems to me like I have two equally unsatisfactory options. The first is to continue to play the journals game in order to ensure the citedness of our research is captured by the key citation benchmarking tools, but to encourage OA as a means of improving the visibility and discoverability of our work. Whilst this isn’t going to speed up publication or reduce our costs, I think the lure of a high-quality journal may well be a driver of research quality – which is very important.

The second option is to dramatically change our focus on to new forms of scholarly communication that will speed up publication rates and reduce our costs, such as preprint archives and diamond OA journals.  And by so doing, we’d need to hope that the well-documented citation advantage for immediately open research will do its thing. And that when the research is considered by the REF, they really will just focus on the content as they promise, and not the reputation of the vehicle it is published in.  Always bearing in mind that any citations that the research does accrue will only be picked up by open tools such as Google Scholar and not the tools that supply the REF – or the league tables.

So to answer my own question – what does a Research Policy Manager advise in these circumstances? Personally, I try to live with one whilst lobbying for the other, and as far as possible seek to ameliorate any confusion faced by our academics. This is easier said than done – certainly when faced with later-career academics who can remember a time when research was optional and where you published was entirely your business. To now be faced with a barrage of advice around improving the quality, accessibility, visibility and citedness of your work, bearing in mind that the routes to these are often in conflict with each other, is a constant source of agony for both them and me.

I recognise that we have to play the game. Our reputation depends on it.  But we also have to change the game and provide quicker and more affordable access to (re-usable) research results. At the risk of sounding over-dramatic, the future may depend on it.

 

Elizabeth Gadd

Journal Metrics in Context: How Should a Bibliomagician Evaluate a Journal? Guest post by Massimo Giunta

In the world of academia and research, “publish or perish” has become more complicated than ever. It’s not enough merely to publish; one has to publish in a high-impact journal, in the hopes of getting noticed and, perhaps more importantly, getting funded for further research.

‘Journals’ CC BY david_17in

Institutions are urging their researchers to publish in high-impact journals. Library collections are on tight budgets, so librarians want only the best journals for their collections. Emphasis on impact and quality has given rise to a whole new realm of metrics by which to measure a journal. But which metric is best? What’s the magic bullet to definitively name a journal as The Best?

One of the most well-known journal metrics is the Journal Impact Factor (JIF). It seems like the JIF has invaded every aspect of the academic researcher’s world, but did you know it was developed for a very specific use?

JIF is defined as “a ratio of citations to a journal in a given year to the citable items in the prior two years.” It was intended as a simple measure for librarians evaluating the journals in their collections. In fact, the entirety of the Journal Citation Reports (JCR) was developed for this purpose in the 1970s. Over the years, its utility to other markets has emerged – most importantly to publishers and editors of journals. It has also been misused to evaluate researchers, but Clarivate Analytics, formerly the IP & Science business of Thomson Reuters, has always been quite clear that JCR data, and the JIF in particular, should not be used as proxy measures for individual papers or people.
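
To make that definition concrete, here is a tiny illustrative sketch of the arithmetic, with invented numbers (the official figures are, of course, those published in the JCR itself):

    # Sketch of the Journal Impact Factor arithmetic for a hypothetical journal.
    # 2016 JIF = citations received in 2016 to items the journal published in
    # 2014-2015, divided by the number of citable items published in 2014-2015.
    citations_2016_to_2014_2015_items = 1200  # invented
    citable_items_2014_2015 = 400             # invented (articles and reviews)

    jif_2016 = citations_2016_to_2014_2015_items / citable_items_2014_2015
    print(f"2016 JIF = {jif_2016:.1f}")  # -> 3.0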

So is JIF the be-all and end-all of journal evaluation? No. The truth is, there is no one metric that can be used to name the best journals. Why not? “Best” is subjective, and so are metrics.

Sticking with the JIF for now, anyone seeking to evaluate a journal’s place in the research world should not simply look at its JIF; that number, on its own with no context, has limited meaning. Even in context, the JIF is just one number; the JCR contains an entire suite of metrics for journal evaluation, and other parties also offer journal evaluation metrics, such as the SCImago Journal Rank, or the Eigenfactor metrics, which are produced by Clarivate Analytics in partnership with the University of Washington.

Both Eigenfactor and Normalized Eigenfactor scores look at the data in a different way than the JIF does—they look at the total importance of a scientific journal in the context of the entire body of journals in the JCR. While JIF uses two years of data and is limited to the field in which a journal is classified, Eigenfactor scores look at the entire corpus of journals and five years of data. A journal could be ranked lower by its JIF than by its Eigenfactor (or Normalized Eigenfactor).

So which is better: Journal A with a higher JIF or Journal B with a higher Eigenfactor? Looking at just these two metrics will not answer the question. Perhaps Journal B also has a higher Article Influence Score—a score greater than 1 shows that a journal’s articles tend to have an above-average influence. Perhaps Journal A also has a higher Percent Articles in Citable Items, meaning it tends to publish more original research than reviews. Looking outside the JCR, perhaps Journal A has had a higher citation count in the past year, whereas Journal B skews more favorably looking at Altmetrics like page views or social media mentions.

Therefore, any statements about a journal’s impact need to include context. When you evaluate a journal, you should look at all of its metrics for the most complete picture, and this picture will vary by field and year.

Bottom line? While there is no magic bullet to determine the best journals, with the wealth of journal metrics out there, and whatever might come down the pipeline in the future, evaluating journals in context is not as difficult as you might think!

 

Further Reading:

  1. Best Practices in Journal Evaluation
  2. All About the Eigenfactor
  3. JCR Blog Series
  4. JCR product information

 

Massimo Giunta is Account Manager UK & Ireland for Clarivate Analytics

 

What are you doing today?

The Lis-Bibliometrics-commissioned, Elsevier-sponsored bibliometric competencies research project is seeking to develop a community-supported set of bibliometric competencies, particularly for those working in libraries as well as in other related services. You can take part by completing the bibliometrics competencies survey at: https://survey.shef.ac.uk/limesurvey/index.php?sid=27492&lang=en

To get a flavour of the variety of bibliometric work going on, I asked fellow Lis-Bibliometrics Committee members what they’re doing today:

“Today I’m helping a researcher clean up his very muddled and duplicated Scopus Author IDs and link his outputs to his ORCID iD. I’m also thinking about how best to benchmark the output of our Law school against our competitors for undergraduate students.” Karen Rowlett, Research Publications Adviser, University of Reading

“Today I’m discussing the release of our Responsible Metrics Statement (now approved by Senate) with our PVCR; running some analyses on SciVal which look at the impact of Loughborough’s conference publications on our overall citation performance; and presenting at a cross-university meeting aimed at exploring how to improve the visibility of our research.” Elizabeth Gadd, Research Policy Manager (Publications), Loughborough University

“Today I am preparing a presentation on Metrics for one of the teams, and working on analysing the Leiden Ranking data.” Sahar Abuelbashar, Research Metrics Analyst, University of Sussex

Meanwhile, I’m advising researchers on using citation metrics in grant applications.  What are you doing today?

Katie Evans

Research Analytics Librarian, University of Bath