Journal Metrics in Context: How Should a Bibliomagician Evaluate a Journal? Guest post by Massimo Giunta

In the world of academia and research, “publish or perish” has become more complicated than ever. It’s not enough merely to publish; one has to publish in a high-impact journal, in the hope of getting noticed and, perhaps more importantly, of getting funded for further research.

(Image: journals. CC BY david_17in)

Institutions are urging their researchers to publish in high-impact journals. Library budgets are tight, so librarians want only the best journals for their collections. Emphasis on impact and quality has given rise to a whole new realm of metrics by which to measure a journal. But which metric is best? What’s the magic bullet to definitively name a journal as The Best?

One of the most well-known journal metrics is the Journal Impact Factor (JIF). It seems like the JIF has invaded every aspect of the academic researcher’s world, but did you know it was developed for a very specific use?

JIF is defined as “a ratio of citations to a journal in a given year to the citable items in the prior two years.” It was intended as a simple measure for librarians evaluating the journals in their collections. In fact, the entirety of the Journal Citation Reports (JCR) was developed for this purpose in the 1970s. Over the years, its utility to other markets has emerged – most importantly to publishers and editors of journals. It has also been misused to evaluate researchers, but Clarivate Analytics, formerly the IP & Science business of Thomson Reuters, has always been quite clear that JCR data, and the JIF in particular, should not be used as proxy measures for individual papers or people.
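To make that ratio concrete, here is a minimal sketch with invented numbers; the journal, the counts and the year are hypothetical, and the real figures come from curated Web of Science citation data rather than anything you would compute yourself:

```python
# Illustrative only: a JIF-style ratio with invented numbers.
# The real JIF is calculated by Clarivate from Web of Science data;
# this sketch just makes the definition above concrete.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Citations received this year to items from the prior two years,
    divided by the number of citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical journal: 300 citable items published in 2014-2015,
# which picked up 450 citations during 2016.
print(impact_factor(450, 300))  # -> 1.5
```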

So is JIF the be-all and end-all of journal evaluation? No. The truth is, there is no one metric that can be used to name the best journals. Why not? “Best” is subjective, and so are metrics.

Sticking with the JIF for now, anyone seeking to evaluate a journal’s place in the research world should not simply look at its JIF; that number, on its own with no context, has limited meaning. Even in context, the JIF is just one number; the JCR contains an entire suite of metrics for journal evaluation, and other parties also offer journal evaluation metrics, such as the SCImago Journal Rank, or the Eigenfactor metrics, which are produced by Clarivate Analytics in partnership with the University of Washington.

Both the Eigenfactor and Normalized Eigenfactor scores look at the data differently from the JIF: they measure the total importance of a scientific journal in the context of the entire body of journals in the JCR. While the JIF uses two years of data and is limited to the field in which a journal is classified, Eigenfactor scores draw on the entire corpus of journals and five years of data. A journal could therefore be ranked lower by its JIF than by its Eigenfactor (or Normalized Eigenfactor).
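To illustrate the network idea behind these scores, here is a toy sketch: rank journals by repeatedly redistributing influence across a cross-citation matrix, so that citations from influential journals count for more. This is only a simplified illustration of the principle, not the actual Eigenfactor algorithm (which, among other things, excludes journal self-citations, weights by article output and uses five years of JCR data), and the journals and citation counts below are invented:

```python
# Toy illustration of the network principle behind Eigenfactor-style scores:
# a journal is important if it is cited by other important journals.
# NOT the real Eigenfactor algorithm; journals and counts are invented.

import numpy as np

journals = ["Journal A", "Journal B", "Journal C"]

# citations[i][j] = citations from journal j's articles to journal i
citations = np.array([
    [0, 40, 10],
    [30, 0, 20],
    [5, 15, 0],
], dtype=float)

# Normalise each column so it describes where journal j's citations go.
transition = citations / citations.sum(axis=0)

# Power iteration: repeatedly pass "influence" along the citation links.
influence = np.full(len(journals), 1 / len(journals))
for _ in range(100):
    influence = transition @ influence
influence /= influence.sum()

for name, score in zip(journals, influence):
    print(f"{name}: {score:.3f}")
```

A journal that receives fewer citations overall can still come out ahead here if its citations arrive from well-cited neighbours, which is one reason a ranking by JIF and a ranking by Eigenfactor can disagree.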

So which is better: Journal A with a higher JIF or Journal B with a higher Eigenfactor? Looking at just these two metrics will not answer the question. Perhaps Journal B also has a higher Article Influence Score (a score greater than 1 shows that a journal’s articles tend to have an above-average influence). Perhaps Journal A also has a higher Percent Articles in Citable Items, meaning it tends to publish more original research than reviews. Looking outside the JCR, perhaps Journal A has had a higher citation count in the past year, whereas Journal B skews more favorably on altmetrics such as page views or social media mentions.

Therefore, any statements about a journal’s impact need to include context. When you evaluate a journal, you should look at all of its metrics for the most complete picture, and this picture will vary by field and year.

Bottom line? While there is no magic bullet to determine the best journals, with the wealth of journal metrics out there, and whatever might come down the pipeline in the future, evaluating journals in context is not as difficult as you might think!

 

Further Reading:

  1. Best Practices in Journal Evaluation
  2. All About the Eigenfactor
  3. JCR Blog Series
  4. JCR product information

 

Massimo Giunta is Account Manager UK & Ireland for Clarivate Analytics

 

What are you doing today?

The LIS-Bibliometrics-commissioned, Elsevier-sponsored bibliometric competencies research project seeks to develop a community-supported set of bibliometric competencies, particularly for those working in libraries and other related services.  You can take part by completing the bibliometrics competencies survey at: https://survey.shef.ac.uk/limesurvey/index.php?sid=27492&lang=en

To get a flavour of the variety of bibliometric work going on, I asked fellow LIS-Bibliometrics Committee members what they’re doing today:

“Today I’m helping a researcher clean up his very muddled and duplicated Scopus Author IDs and link his outputs to his ORCID iD. I’m also thinking about how best to benchmark the output of our Law school against our competitors for undergraduate students.” Karen Rowlett, Research Publications Adviser, University of Reading

“Today I’m discussing the release of our Responsible Metrics Statement (now approved by Senate) with our PVCR; running some analyses on SciVal which look at the impact of Loughborough’s conference publications on our overall citation performance; and presenting at a cross-university meeting aimed at exploring how to improve the visibility of our research.” Elizabeth Gadd, Research Policy Manager (Publications), Loughborough University

“Today I am preparing a presentation on Metrics for one of the teams, and working on analysing the Leiden Ranking data.” Sahar Abuelbashar, Research Metrics Analyst, University of Sussex

Meanwhile, I’m advising researchers on using citation metrics in grant applications.  What are you doing today?

Katie Evans

Research Analytics Librarian, University of Bath

In search of better filters than journal metrics

When I started working with bibliometrics I was aware of the limitations and criticisms of journal metrics (journal impact factor, SJR, SNIP etc.) so I avoided using them.  A couple of years on, I haven’t changed my mind about any of those limitations and yet I use journal metrics regularly.  Is this justifiable?  Is there a better alternative?  At the moment, I’m thinking in terms of filters.

(Image: high pile of hardcover books. CC-BY Alberto G)

Martin Eve was talking about filters because of their implications for open access: the heavy reliance on the reputation of existing publications and publishers makes the journey towards open access harder than you might expect.  But it set me thinking about journal metrics.  Academics (and other users and assessors of scholarly publications) use journal metrics as one way of deciding which journals to filter in or out.  The scenarios presented in Dr Ludo Waltman’s CWTS Blog post “The importance of taking a clear position in the impact factor debate” illustrate journal impact factors being used as filters.

Demand for assessment of journal quality to fulfil filtering functions drives much of my day-to-day use of journal metrics.  Given the weaknesses of journal metrics, the question is: is there something else that would make a better filter?

Article-level citation metrics cannot completely fulfil this filtering function: in order for a paper to be cited, it must first be read by the person citing it, which means it needs to have got through that person’s filter – so the filter has to work before there are citations to the paper. Similarly for alternative metrics based around social media etc.: a paper needs to get through someone’s filter before they’ll share it on social media. So we end up turning to journal metrics, despite their flaws.

Could new forms of peer review serve as filters?  Journal metrics are used as a proxy for journal quality, and journal quality is largely about the standards a journal sets for whether a paper passes peer review or not.  Some journals (e.g. PLoS One) set the standard at technically sound methodology; others (e.g. PLoS Biology) also require originality and importance to the field.  Could a form of peer review, possibly detached from a particular journal, but openly stating what level of peer-review standard the article meets, be the filtering mechanism of the future?  Are any of the innovations in open peer review already able to fulfil this role?

Comments, recommendations, and predictions welcome!

Katie Evans

(Katie is Research Analytics Librarian at the University of Bath and a member of the LIS-Bibliometrics committee, but writes here in a personal capacity)

Round-up from the Bibliometrics in Practice event

June 2016 will go down in the annals of history for a number of reasons… Britain voted to leave the European Union, Andy Murray won Wimbledon for the second time, it was the hottest June on record… and “Bibliometrics in Practice” took place in Manchester!
Sixty of the best and brightest minds from across the HE sector and beyond assembled in the sleek modern interior of Manchester Metropolitan University’s Business School. The delegate list was packed with analysts, librarians, consultants, research managers, planners, impact managers, researchers and digital managers from across a wide range of universities. We even attracted visitors from Charles Darwin University in Australia!
Ruth Jenkins, Director of Library Services (Manchester Metropolitan), welcomed everyone to Manchester before LIS-BIBLIO’s Lizzie Gadd blew the whistle and got the game underway.
First up was the opening plenary, “What is the place of bibliometrics in universities?” The aim here was to present a variety of perspectives from the individuals within universities who are generally tasked with taking care of all things biblio. Nathalie Cornee gave delegates a “behind the scenes” look at the approach of LSE’s Library Services before handing the baton to Dr. Andrew Walsh (University of Manchester), who provided insights from his role as a research manager. Professor Alan Dix (Birmingham / TALIS) rounded things off with the findings from his analysis of REF2014 data through a bibliometric lens. The session was skilfully moderated by LIS-BIBLIO’s Stephen Pearson.
From there we segued seamlessly into a session from Dr. Ian Rowlands (Leicester), who delved deeper into statistical analysis in “The strange world of bibliometric numbers: implications for professional practice”, in which he managed to link the humble fruit fly to bibliometrics! The audience clearly loved it – “real food for thought” said one, “exceptional” said another…
After a large and hearty lunch, made possible by the generosity of sponsors Thomson Reuters, it was time for networking and catching up with old friends before heading off into the afternoon sessions.
Loughborough University’s Michael Norris joined forces with Lizzie Gadd to present a workshop on bibliometric competencies. This exciting development aims to take an engaged approach to building up a set of community-wide standards around managing bibliometrics… keep an eye on the blog for future details.
In the middle of the afternoon the audience split into two breakout sessions. Tanya Williamson (Lancaster University) gave us chapter and verse on a fascinating ESRC-funded seminar series called “Designing the academic self, what metrics based publication can and can’t tell us”, whilst Professor Alan Dix gave us permission to “Get your hands dirty with the REF” in the adjoining room.
After all of that, there was just about time for a last-minute wrap-up, thank-yous and goodbyes as Katie Evans bade a fond farewell to the LIS-BIBLIO community.
All we were left with was our memories… and the evaluation feedback, which shows that we did some things very well: you loved the speakers, the range of topics, the practical workshops, the networking, the venue and the lunch. It also shows that we have some room for improvement: you wanted slightly more networking time, the chance to experience all the breakouts, even more case studies, and sessions pitched at different levels of expertise.

We are already preparing ideas for our next LIS-BIBLIO event. We are looking to host in February next year. If you are interested in welcoming around 80 biblio-magicians to your university then please get in touch!

 

Why should a bibliometrician engage with altmetrics? Guest Post by Natalia Madjarevic

Last month, Barack Obama published an article in the journal JAMA discussing progress to date with the Affordable Care Act – or Obamacare – and outlining recommendations for future policymakers. Obama’s article was picked up in the press and across social media immediately. We can see in the Altmetric Details Page that it was shared across a broad range of online attention sources such as mainstream media, Twitter, Facebook and Wikipedia, and commented on by several research blogs. We can also see from the stats provided by JAMA that the article, at time of writing, has been viewed over 1 million times and has an Altmetric Attention Score of 7539, but hasn’t yet received a single citation.

Providing instant feedback

Many altmetrics providers track attention to a research output as soon as it’s available online. This means institutions can then use altmetrics data to monitor research engagement right away, without the delay we often see in the citation feedback loop.

If President Obama was checking his Altmetric Details Page (which I hope he was!), he’d have known almost in real time exactly who was saying what about his article. In the same way, academic research from your institution is generating online activity – probably right now – and can provide extra insights to help enhance your bibliometric reporting.


Altmetric, which has tracked mentions and shares of over 5.4m individual research outputs to date, sees 360 mentions per minute – a huge amount of online activity that can be monitored and reported on to help evidence additional signals of institutional research impact. That said, altmetrics are not designed to replace traditional measures such as citations and peer review, and it’s valuable to report on a broad range of indicators. Altmetrics are complementary to, rather than a replacement for, traditional bibliometrics.

Altmetrics reporting: context is key

A single number, such as “This output received 100 citations” or “This output has an Altmetric Attention Score of 100”, doesn’t really say that much. That’s why altmetrics tools often focus on pulling out the qualitative data, i.e. the underlying mentions an output has received. Saying, “This output has an Altmetric Attention Score of 100, was referenced in a policy document, tweeted by a medical practitioner and shared on Facebook by a think tank” is much more meaningful than a single number. It also tells a much more compelling story about the influence and societal reach of your research. So when using altmetrics data, zoom in and take a look at the mentions. That’s where you’ll find the interesting stories about your research attention to include in your reporting.

How can you use altmetrics to extend your bibliometrics service?

Here are some ideas:

  • Include altmetrics data in your monthly bibliometric reports to demonstrate societal research engagement – pull out some qualitative highlights
  • Embed altmetrics in your bibliometrics training sessions and welcome emails to new faculty – we have lots of slides you can re-use here
  • Provide advice to researchers on how to promote themselves online and embed altmetrics data in their CV
  • Encourage responsible use of metrics as discussed in the Leiden Manifesto and The Metric Tide
  • Don’t use altmetrics as a predictor for citations! Use them instead to gain a more well-rounded, coherent insight into engagement and dissemination of your research

Altmetrics offer an opportunity for bibliometricians to extend existing services and provide researchers with more granular and informative data about engagement with their research. The first step is to start exploring the data – from there you can determine how it will fit best into your current workflow and activities.
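If you want a low-friction way to start exploring, one option is to pull the attention data for a single DOI from Altmetric’s public details endpoint. The sketch below assumes the free, rate-limited api.altmetric.com v1 endpoint and the response field names shown in the comments; both are assumptions to verify against the current Altmetric API documentation and terms of use before you build anything on them:

```python
# A minimal sketch for exploring Altmetric attention data for one output.
# Assumes the free, rate-limited public endpoint api.altmetric.com/v1/doi/<doi>
# and the field names below; check the current Altmetric API documentation
# and terms of use before relying on either.

import requests


def altmetric_summary(doi: str) -> None:
    response = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if response.status_code == 404:
        print(f"No attention tracked (yet) for {doi}")
        return
    response.raise_for_status()
    data = response.json()
    print(f"Title:           {data.get('title')}")
    print(f"Attention score: {data.get('score')}")
    print(f"Tweeters:        {data.get('cited_by_tweeters_count', 0)}")
    print(f"News outlets:    {data.get('cited_by_msm_count', 0)}")
    print(f"Details page:    {data.get('details_url')}")


# Replace with the DOI of an output you are interested in,
# e.g. the JAMA article discussed above.
altmetric_summary("10.1234/example-doi")
```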

Further reading

Natalia Madjarevic

@nataliafay

A useful tool for librarians: metrics knowledge in bite-sized pieces, by Jenny Delasalle

Having worked in UK academic libraries for 15 years before becoming freelance, I saw the rise and rise of citation counting (although, as Geoffrey Bilder points out, it should rightly be called reference counting). Such counting, I learnt, was called “bibliometrics”. The very name sounds like something that librarians should be interested in, if not expert at, and so I delved into what these metrics were and how they might help me and the users of academic libraries. It began with the need to select which journals to subscribe to, and it became a filter for readers to select which papers to read. Somewhere along the road, it became a measurement of individual researchers, and a component of university rankings: such metrics were gaining attention.

Then along came altmetrics, offering tantalising glimpses of something more than the numbers: real stories of impact that could be found through online tracking. Context was clearly key with these alternative metrics, and the doors were opened wide as to what could be measured and how.

It is no surprise that after working in subject support roles I became first an innovation officer, then an institutional repository manager and then a research support manager: my knowledge of and interest in such metrics was useful in that context. Yet I was mostly self-taught: I learnt through playing with new tools and technologies, by attending training sessions from product providers and by reading the published literature. I’m still learning. The field of scholarly metrics moves on quickly as new papers are published and new tools are launched onto the market, and as university management and funders become more interested in the field and scholars themselves respond.

It took a lot of time and effort for me to learn this way, which was appropriate for my career path, but it cannot be expected of all librarians. For example, subject or liaison librarians work with scholars directly, and those scholars might also be interested in metrics, especially those available on platforms that the library subscribes to. Yet these same librarians must also quickly become experts in changing open access practices, data management needs and other concerns of their scholarly population, whilst teaching information literacy to undergraduates and maintaining both the library’s book collections in their subject area and their own knowledge of the disciplines that they support. They have a lot of areas of expertise to keep up to date, as well as a lot of work to do. And there are new, trainee librarians who have a lot to learn from our profession. How can we save their time?

I began collaborating with Library Connect because that’s exactly what they seek to do: support busy librarians. Colleen DeLory, the editor of Library Connect, has her ear to the ground regarding librarians’ needs and has some great ideas about what we could use. I started by presenting in a webinar, “Librarians and altmetrics: tools, tips and use cases”, and I went on to do the research behind an infographic, “Librarians and Research Impact”, about the role of a librarian in supporting research impact. Another webinar, “Research impact metrics for librarians: calculation & context”, came along, and then the very latest and, in my opinion, most useful output of my work with Elsevier is our poster on research metrics.

Quick Reference Cards for Research Impact Metrics

This beautifully illustrated set can be printed as a poster, which makes a great starting point for anyone new to such metrics, or indeed anyone seeking to de-tangle the very complex picture of metrics that they have been glimpsing for some years already! You could put it up in your library office or in corridors, and you can also reproduce it on your intranet – just include a link back to Library Connect as your source.

You can also print our set out as cards, which would be really useful in training sessions. You could use them to form discussion groups by giving each participant a card and then asking people to form groups according to which card they have: is their metric one for authors, one for documents or one for journals? Some people will find that they belong to more than one group, of course! The groups could then discuss the metrics that they have between them, sharing their wider knowledge about metrics as well as what is on the cards. Do the groups agree which metrics are suited to which purposes, as listed across the top of the poster? What else do they know or need to know about a metric? Beyond such a guided discussion, the cards could be sorted in order of suitability for a given purpose, perhaps by sticking them onto a wall underneath a proposed purpose as a heading. The groups could even create their own “cards” for additional metrics to stick on the wall(s!), and then visit each other’s listings after discussion… We’d love to hear about how you’re using the cards: do leave a comment for us over at Library Connect.

Of course our set is not comprehensive: there are lots of other metrics, but the ones chosen are perhaps those that librarians will most frequently come across. The aspects of the metrics presented on the poster/cards were also carefully chosen. We’ve suggested the kinds of contexts in which a librarian might turn to each metric. We’ve carefully crafted definitions of metrics, and provided useful links to further information. And we’ve indicated the kind of grouping that each metric applies to, be it a single paper, all of an author’s output, or a serial publication. It was a truly collaborative output: brainstorming of the initial idea, research from me, then over to Colleen DeLory to coordinate the graphics and an internal review by Elsevier metrics expert Lisa Colledge, back to me to check it over, then to Library Connect again for proofreading, and even a preview for a focus group of librarians. It has been a thorough production and I’m very proud to have been involved in something that I believe is truly useful.

@JennyDelasalle

Freelance Librarian, Instructor & Copywriter