When I started working with bibliometrics I was aware of the limitations and criticisms of journal metrics (journal impact factor, SJR, SNIP, etc.), so I avoided using them. A couple of years on, I haven’t changed my mind about any of those limitations, and yet I use journal metrics regularly. Is this justifiable? Is there a better alternative? At the moment, I’m thinking in terms of filters.
Martin Eve was talking about filters because of their implications for open access: the heavy reliance on the reputation of existing publications and publishers makes the journey towards open access harder than you might expect. But it set me thinking about journal metrics. Academics (and other users and assessors of scholarly publications) use journal metrics as one way of deciding which journals to filter in or out. The scenarios presented in Dr Ludo Waltman’s CWTS Blog post “The importance of taking a clear position in the impact factor debate” illustrate journal impact factors being used as filters.
Demand for assessment of journal quality to fulfil filtering functions drives much of my day-to-day use of journal metrics. Given the weaknesses of journal metrics, the question is: is there something else that would make a better filter?
Article-level citation metrics cannot completely fulfil this filtering function: in order for a paper to be cited, it must first be read by the person citing it, which means it needs to have got through that person’s filter – so the filter has to work before there are citations to the paper. The same applies to alternative metrics based around social media: a paper needs to get through someone’s filter before they’ll share it. So we end up turning to journal metrics, despite their flaws.
Could new forms of peer review serve as filters? Journal metrics are used as a proxy for journal quality, and journal quality is largely about the standards a journal sets for whether a paper passes peer review or not. Some journals (e.g. PLoS One) set the standard at technically sound methodology; others (e.g. PLoS Biology) also require originality and importance to the field. Could a form of peer review, possibly detached from a particular journal but openly stating what level of peer-review standard the article meets, be the filtering mechanism of the future? Are any of the innovations in open peer review already able to fulfil this role?
Comments, recommendations, and predictions welcome!
(Katie is Research Analytics Librarian at the University of Bath and a member of the LIS-Bibliometrics committee, but writes here in a personal capacity)