Measuring the magnificence of monographs

At Loughborough University we have recently been thinking about how we can use bibliometrics responsibly.  Not surprisingly, our conversations tended to focus on journal and conference papers, which is where the citation databases concentrate their coverage.  However, as part of this process the question inevitably arose as to whether there were also ways we could measure the quality or visibility of non-journal outputs, in particular, monographs.  To this end we thought we should explore this question with senior staff in Art, English, Drama, History and Social Sciences.

During the conversation we had, the Associate Dean for Research in the School of Art, English and Drama said he felt that assessing the arts through numbers was like asking engineers to describe their work through dance!  Whilst he was not averse to exploring the value of publication indicators, I think this serves to highlight how alien such numbers are perceived to be to those working in creative fields.

So what do we know about monographs?  Well, we know that the monograph as a format is not going away, although some publishers are now also offering ‘short form’ monographs (between a book and journal article in size).  We also know that monographs are not covered by the commercial citation benchmarking tools which many of us rely on for analyses.  They are of course covered by Google Scholar – but there are known concerns with the robustness of Google data, and there is no way of easily benchmarking it.  However, the biggest problem with citation analysis in the field of monographs is not the lack of coverage in the benchmarking tools, but what a citation actually means in these fields.  In English & Drama for example, citations are often used to refute previous work in an effort to promote new ideas (“So-and-so thinks X but I have new evidence to think Y”). So the question remains: is it possible to measure the quality and impact of monographs in a different way?

Well, as part of our conversation we explored some alternatives which I’ll briefly run through here.

The Publisher

The most obvious choice of indicator is who actually published the work.  The Danish research evaluation system, for example, allocates publishers to tiers, and books published with top-tier publishers are weighted more heavily than books published with lower-tier publishers.  Whilst academics at Loughborough were not minded to formalise such a system internally, it was clear that they do make quality judgements based on publishers, with comments such as: “if a university DIDN’T have at least a handful of monographs published by the Big 6, that would be a concern.”  So quality is assumed because the process of getting a contract with a top-tier publisher is competitive, and the standard of peer review is very high. A bit like highly cited journals…

Book Reviews

Book reviews could serve to indicate quality, not only in terms of the content of those reviews and how many there are, but also where the reviews are published.  However, whilst there are book review indices, reviews can take a long time to come out.  Also, in the Arts & Humanities, it’s unusual for a book to get a negative review because the discipline areas are small and collegiate.  Essentially, if a book gets a review it means something, but if it doesn’t get reviewed, it doesn’t necessarily mean anything.  Just like citations…

Book sales

High book sales could be seen as an indicator of quality and the beauty of sales is that they are numerical indicators which bibliometricians like!  However, there is no publicly available source for book sales (that I’m aware of). Also, sales can be affected by market size. Thus books sold in the US will often outnumber those sold in the UK – an effect of population size.  Sales are also affected by the print run – i.e., whether the book comes out as a short-print-run hardback aimed at libraries, or a large-print-run paperback aimed at undergrads.  The former might be little sold but widely read; the latter might be widely sold, but never read!  So sales might be more an indicator of popularity than quality.  But the same could be said of citations….

Altmetrics

Many altmetrics tools cover books and provide a wide range of indicators.  One of particular relevance is the number of course syllabi on which a book is listed – although this is probably more likely to favour textbooks than research monographs.  It is also possible to see the number of book reviews via such tools, as well as other social media and news mentions.  However, altmetrics providers have never claimed that they measure quality, but rather attention, visibility and possibly impact.  But, at the risk of repeating myself, the same could be said of citations…

The problem for us at Loughborough was that none of these indicators met our criteria for a usable indicator, which we defined as:

  • Normalisable – Can we normalise for disciplinary differences (at least)?
  • Benchmarkable – Is comprehensive data available at different entity levels (individual, university, discipline, etc.) to compare performance?
  • Obtainable – Is it relatively simple for both evaluators to get hold of the data, and for individuals to verify it?

So to summarise, whilst there are legitimate objections to the use of non-citation indicators to measure the magnificence of monographs, most of those objections could also apply to citations.  The key difference is that we do have normalisable, benchmarkable and obtainable indicators for journal and conference papers: we don’t yet for books.  At Loughborough we concluded that measuring the magnificence of monographs can currently only be done reliably through peer review.  However, evidence of the sort presented here can be used to tell good stories about the quality and visibility of books at individual author and output level. And these stories can legitimately be told in some of the same places (job applications, funding bids, etc.) where we’d normally see citation stories.  Whether colleagues in the Arts, Humanities and Social Sciences ever feel comfortable doing so is another question.



Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University.  She has a background in Libraries and Scholarly Communication research.  She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.

Job Opportunity: University of Sheffield is looking for a Library Scholarly Communications Manager!

Job Reference Number: UOS015654
Job Title: Library Scholarly Communications Manager

Salary: Grade 8 £39,324-£46,924 per annum, with potential to progress to £52,793 through sustained exceptional contribution

Closing Date: 31st March 2017

Summary:
Copyright advisory and advocacy services form a critical component of the infrastructure necessary for advancing teaching and learning in the digital age. Specialist educative copyright and licensing services also benefit research within the context of more open publishing of scholarly communications.

Reporting to the Associate Director for Academic & Digital Services, you will develop and implement services and programmes that build an understanding of copyright and licensing within the scholarly communications and publishing landscape, across the university community. You will ensure compliance with copyright legislation, university policy and licences, and develop a shared institutional understanding of both the opportunities and challenges associated with this field.

The post-holder will provide the university community (Faculties and Professional Services) with legally compliant, detailed interpretation of, and policy advice on, copyright. You will actively coordinate advisory services, making available current and reliable information on the web and bringing together specialists in the areas of broadcast media, newspapers, music and other formats, particularly where the university has agreed licence schemes. Operationally, you will oversee the necessary information management processes and audit requirements.

You will be the key contact with the University’s Legal Panel Agreement on matters pertaining to the Copyright, Designs and Patents Act 1988 and subsequent statutory instruments, including the 2014 exceptions. You will engage with external bodies including the Copyright Licensing Agency, the UK Government Intellectual Property Office and other licence-issuing bodies.  Professionally, you will establish effective external networks concerned with copyright and intellectual property in universities.

Educated to degree level (or with equivalent work experience), you will be able to think strategically as well as deliver operationally.  You will be a confident communicator, able to identify opportunities to innovate and change within the evolving regulatory framework. You will enjoy working with groups and individuals, including academic staff, researchers and students, as well as networking beyond the University.

Please see the Job Description & Person Specification for further details and apply using the online application form.

HEFCE: The road to the Responsible Research Metrics Forum – Guest post by Ben Johnson

On Wednesday 8th February 2017, Imperial College made headlines by announcing that it had signed the San Francisco Declaration on Research Assessment (DORA), meaning that Imperial will no longer consider journal-based metrics, such as journal impact factors, in decisions on the hiring and promotion of academic staff. Their decision followed a long campaign by Stephen Curry, a professor of structural biology and a long-standing advocate of the responsible use of metrics.

At the end of last year, Loughborough University issued a statement on the responsible use of metrics in research assessment, building on the Leiden Manifesto.  This was followed two weeks ago by a statement on principles of research assessment and management from the University of Bath, building on the concept of responsible use of quantitative indicators. And, earlier in 2016, the Stern review of the Research Excellence Framework recognised clearly that “it is not currently feasible to assess research outputs in the REF using quantitative indicators alone”.

What these examples and others show is that the issue of metrics – in particular ‘responsible metrics’ – has risen up the agenda for many universities. As someone closely involved in the HEFCE review of metrics (The Metric Tide), and secretary to the new UK Forum for Responsible Research Metrics, I am of course delighted to see this.

Of course, the issue of metrics has been bubbling away for much longer than that, as the Metric Tide report set out. University administrators, librarians and academics themselves have taken a leading role in promoting the proper use of metrics, with forums like the ARMA metrics special interest group promising to play a key part in challenging attitudes and changing behaviours.

In addition, as we have seen with university responses to the government’s HE green paper and to the Stern review, the wider community is very alive to the risks of an over-reliance on metrics. This was reflected in the outcomes of both exercises, with peer review given serious endorsement in both the draft legislation and the Stern report as the gold standard for the assessment of research.

These developments are exactly the kinds of things that the new UK Forum for Responsible Research Metrics wants to see happening. This forum has been set up with the specific remit to advance the agenda of responsible metrics in UK research, but it’s clear that this is not something it can deliver alone – it is a substantial collective effort.

So what will the Forum do? Well, as the Metric Tide report states, many of the issues relate to metrics infrastructure, particularly around standards, openness and interoperability. The Forum will have a specific role in helping to address longstanding issues, particularly around the adoption of identifiers – an area of focus echoed by the Science Europe position statement on research information systems published at the end of 2016, which is itself a useful touchstone for thinking about these issues.

To support the Forum, Jisc are working hard on developing an action plan to address the specific recommendations of the Metric Tide report, with a particular focus on building effective links with other groups working in this area, e.g. the RCUK/Jisc-led Research Information Management (RIM) Coordination Group. This will be discussed when the Forum meets again in early May.

However, sorting out the ‘plumbing’ that underpins metrics is no good if people continue to misuse them. To address this, the Forum will take a complementary look at the cultures and behaviours within institutions and elsewhere: firstly, to develop more granular evidence of how metrics are being used, and secondly, to look at making specific interventions to support greater responsibility from academics, administrators, funders, publishers and others involved in research.

With that in mind, Universities UK and the UCL Bibliometrics Group, under the auspices of the Forum, will shortly be jointly issuing a survey of HEIs on the use of problematic metrics in university management and among academic groups, both to help identify the scale of any (mis)uses of measures such as the JIF and to help us better understand why initiatives like DORA have not been more widely adopted in the UK.

Of course, metrics have much broader uses than just measuring outputs – they are also used to measure people, groups and institutions. This is a key finding of the Metric Tide report, but one that is often overlooked when the focus narrows to output metrics. The Forum will also be looking at this, seeking to bring people together across all domains.

To make a decisive contribution here, the Forum needs to have clout, and it is for this reason that the five partners (HEFCE, RCUK, Jisc, Wellcome and Universities UK) asked Professor David Price to convene and chair the Forum as a mixed group of metrics experts and people in positions of serious influence in their communities. This was a delicate balance to strike, and one that can only be successful if the Forum engages effectively with the various interested communities.

With that in mind, the Forum is planning to set up a number of ‘town hall’ meetings throughout 2017 to engage with specific communities on particular topics, and would very much welcome hearing from anyone interested in being involved in these or in engaging with the Forum in any other way. We will be announcing further details of these on the Forum’s web pages soon.

If you are interested in joining up with the work of the Forum throughout 2017, please contact me on b.johnson@hefce.ac.uk – I’d be delighted to hear from you.


Ben Johnson is a research policy adviser at the Higher Education Funding Council for England, secretary to the UK Forum for Responsible Research Metrics and a member of the G7 expert group on open science.

He has responsibility for policy on open access, open data, research metrics, technical infrastructure and research sector efficiency within universities in England. In recent years, he co-authored The Metric Tide (a report on research metrics), developed and implemented a policy for open access in the UK Research Excellence Framework (REF), and supported Professor Geoffrey Crossick’s project and report to HEFCE on monographs and open access. He is a member of the UK’s open data forum and co-authored the forthcoming UK Open Research Data Concordat. In addition to this, he is currently part-seconded to the Department for Business, Energy and Industrial Strategy to work on reforming the research and innovation landscape.

Job Opportunity: University of Greenwich is looking for a Research Outputs Manager!

Greenwich Research and Enterprise

Location:  Greenwich
Salary:  £38,183 to £46,924 plus £4546 London weighting
Contract Type:  Open
Closing Date:  Monday 20 March 2017
Interview Date:  To be confirmed
Reference:  1346

Greenwich Research and Enterprise (GRE) is the University’s central office responsible for developing a supportive research culture and establishing links with industry and enterprise. GRE works across four service areas: research services, business development and enterprise services, commercial and IP services, and business support services.

The university is investing in expanding its research services and recognises that high-quality support is pivotal to its research environment. It is now recruiting a Research Outputs Manager to join the GRE Research Development Services team at Greenwich.

This role will lead the development of library services as they relate to research outputs and research data management in order to meet the needs of the University’s research community, external research funders, and the requirements of the Research Excellence Framework. In particular, this will involve overseeing the ongoing development of the Institutional Repository – GALA (Greenwich Academic Literature Archive) – ensuring its effective use for Open Access requirements, and the development and implementation of a Research Data Management Policy & Framework.

Please see the Job Description & Person Specification for further details and apply using the online application form.

 

Journal Metrics in Context: How Should a Bibliomagician Evaluate a Journal? Guest post by Massimo Giunta

In the world of academia and research, “publish or perish” has become more complicated than ever. It’s not enough merely to publish; one has to publish in a high-impact journal, in the hopes of getting noticed and, perhaps more importantly, getting funded for further research.

[Image: journals, CC BY david_17in]

Institutions are urging their researchers to publish in high-impact journals. Library collections are on tight budgets, so librarians want only the best journals for their collections. Emphasis on impact and quality has given rise to a whole new realm of metrics by which to measure a journal. But which metric is best? What’s the magic bullet to definitively name a journal as The Best?

One of the most well-known journal metrics is the Journal Impact Factor (JIF). It seems like the JIF has invaded every aspect of the academic researcher’s world, but did you know it was developed for a very specific use?

JIF is defined as “a ratio of citations to a journal in a given year to the citable items in the prior two years.” It was intended as a simple measure for librarians evaluating the journals in their collections. In fact, the entirety of the Journal Citation Reports (JCR) was developed for this purpose in the 1970s. Over the years, its utility to other markets has emerged – most importantly to publishers and editors of journals. It has also been misused to evaluate researchers, but Clarivate Analytics, formerly the IP & Science business of Thomson Reuters, has always been quite clear that JCR data, and the JIF in particular, should not be used as proxy measures for individual papers or people.
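
To make that ratio concrete, here is a minimal sketch of the calculation in Python, using invented numbers rather than real journal data:

```python
# Illustrative sketch only: the counts below are invented, not real journal data.
# JIF for year Y = citations received in year Y to items published in years
# Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2.

def journal_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Return a JIF-style ratio from two raw counts."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Example: 450 citations in 2016 to items the journal published in 2014-2015,
# drawn from 150 citable items published across those two years.
print(journal_impact_factor(450, 150))  # -> 3.0
```

Even this toy example shows why context matters: a ratio of 3.0 is high in some fields and unremarkable in others.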

So is JIF the be-all and end-all of journal evaluation? No. The truth is, there is no one metric that can be used to name the best journals. Why not? “Best” is subjective, and so are metrics.

Sticking with the JIF for now, anyone seeking to evaluate a journal’s place in the research world should not simply look at its JIF; that number, on its own with no context, has limited meaning. Even in context, the JIF is just one number; the JCR contains an entire suite of metrics for journal evaluation, and other parties also offer journal evaluation metrics, such as the SCImago Journal Rank, or the Eigenfactor metrics, which are produced by Clarivate Analytics in partnership with the University of Washington.

Both Eigenfactor and Normalized Eigenfactor scores look at the data in a different way than the JIF does—they look at the total importance of a scientific journal in the context of the entire body of journals in the JCR. While JIF uses two years of data and is limited to the field in which a journal is classified, Eigenfactor scores look at the entire corpus of journals and five years of data. A journal could be ranked lower by its JIF than by its Eigenfactor (or Normalized Eigenfactor).
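
As a rough illustration of that difference in approach, the sketch below uses a tiny invented three-journal citation network and simple power iteration to compute an eigenvector-centrality-style score. This captures the intuition behind the Eigenfactor (a journal matters more when it is cited by journals that themselves matter), but it is not the actual algorithm, which among other things excludes journal self-citations and works over five years of JCR data:

```python
# Toy illustration of the eigenvector-centrality idea behind Eigenfactor-style
# scores. The three-journal citation counts are invented; this is not the
# actual Eigenfactor algorithm.
import numpy as np

# citations[i, j] = citations from journal j to journal i
citations = np.array([
    [0.0, 10.0, 2.0],
    [5.0,  0.0, 8.0],
    [1.0,  4.0, 0.0],
])

# Column-normalise so each journal's outgoing citations become weights summing to 1
transition = citations / citations.sum(axis=0)

# Power iteration: repeatedly pass "importance" along the citation links
scores = np.full(3, 1.0 / 3.0)
for _ in range(100):
    scores = transition @ scores

print(scores / scores.sum())  # relative importance of the three journals
```

Because importance flows through the network, a journal’s score depends on who cites it, not just how often it is cited – which is why a journal can be ranked differently by its JIF and its Eigenfactor.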

So which is better: Journal A with a higher JIF or Journal B with a higher Eigenfactor? Looking at just these two metrics will not answer the question. Perhaps Journal B also has a higher Article Influence Score—a score greater than 1 shows that a journal’s articles tend to have an above-average influence. Perhaps Journal A also has a higher Percent Articles in Citable Items, meaning it tends to publish more original research than reviews. Looking outside the JCR, perhaps Journal A has had a higher citation count in the past year, whereas Journal B skews more favorably looking at Altmetrics like page views or social media mentions.

Therefore, any statements about a journal’s impact need to include context. When you evaluate a journal, you should look at all of its metrics for the most complete picture, and this picture will vary by field and year.

Bottom line? While there is no magic bullet to determine the best journals, with the wealth of journal metrics out there, and whatever might come down the pipeline in the future, evaluating journals in context is not as difficult as you might think!

 

Further Reading:

  1. Best Practices in Journal Evaluation
  2. All About the Eigenfactor
  3. JCR Blog Series
  4. JCR product information

 

Massimo Giunta is Account Manager UK & Ireland for Clarivate Analytics