Lizzie Gadd and Ian Rowlands report on the Lis-Bibliometrics survey of end-users around how bibliometric and altmetric suppliers might improve their services.
It’s fun being on the Lis-Bibliometrics Committee. We usually get together for a drink after a Lis-Bibliometrics event and talk about what worked and what didn’t. Eventually conversation will drift onto the issues we’re facing at work and a general putting-the-world-to-rights session. On one such occasion earlier this year, we were bemoaning the fact that our metrics suppliers often seemed disconnected from our real needs as end-users. Were we alone in this? And if not, what could we do to address the problem? It was then that we came up with the idea (borrowed from Clare Grace & Bernie Folan) of surveying end-users as to what messages they’d like to send suppliers if they had the chance.
So back in February/March a simple survey was launched on the Lis-Bibliometrics, ARMA Metrics SIG and RESMETIG lists, asking bibliometric practitioners to answer the question, “What three things would you like your metrics suppliers to know?” In fact they could supply more than three, because a free-text box was also provided. We also asked a few demographic questions so we could gauge how representative the responses were. I have to say it didn’t generate a huge initial response from end-users – despite a massive amount of interest from vendors, some of whom kindly promoted the survey to their own networks. Eventually we had 42 respondents who together shared a rich collection of 149 ‘messages’ they’d like to send to metrics vendors. Not surprisingly, the majority were from the UK (32, or 76%), but seven other countries were represented. And again, the majority were librarians (27, or 65%), with ten research managers (24%) and some academics and other contributors too.
The full report is being published by the UKSG Insights journal and the accepted version is available at https://dspace.lboro.ac.uk/2134/34685. We’ve also made the whole (anonymised) dataset available for reuse at https://doi.org/10.17028/rd.lboro.7022213.v1 so it’s not our purpose to summarise the whole thing here. However, we did want to highlight the main themes in the hope that by repeating these messages, we might start to see some movement from suppliers on some of the areas of greatest concern.
A) Improve and share your data
The largest set of messages was around suppliers’ need to improve and share their data. There was a strong sense that the scholarly record should belong to the scholarly community, and much support for open citation initiatives. Where data was locked behind paywalls, there were calls for download limits to be increased and for standard identifiers to be available for all exports, enabling end-users to mash up metric data with other sources using data visualisation and networking tools.
Issues around the coverage of proprietary citation and alt-metric tools were also common. It was clear that what we really wanted was a perfectly curated open Google Scholar (for free) that indexed all output types in all disciplines! But in the absence of that we’d settle for honesty about coverage limits and an openness to recommendations for expansion.
Of course the biggest issue of all under this heading was a demand for vastly improved data quality and, recognising the prohibitive cost of achieving that, at least some kind of KPIs and reporting around data quality. If we know that a service aims for a 97.5% accuracy rate and can plot the percentage of data corrections submitted by end-users over time, we can at least have a sense of what we’re dealing with and the level of confidence we can place in the data we’re working with.
B) Be more responsible
Although data concerns were at the top of the list, calls for the responsible provision of data were not far behind. Respondents felt strongly that suppliers had a ‘duty of care’ towards the scholars whose work was represented by their data and indicators, and that they should take a proactive approach to both label their products responsibly and educate end-users in their application.
We felt there was a useful analogy with the responsibility of food manufacturers here. At a bare minimum, we want a list of ingredients (sources), but ideally we want a sense of how healthy those ingredients are, i.e., what percentage of our Recommended Daily Intake they make up (how sensible is it to consume these metrics, at what level of granularity, and at what risk?). Just as with food labelling, this could be colour coded (and given error bars) if necessary. And if there are ingredients in there that could do serious harm (like cigarettes), make it blindingly obvious – or, even better, don’t sell them at all (remove the h-index from researcher profiles). Just as producers of potentially harmful products are subject to higher rates of tax (sugar tax, anyone?), so perhaps suppliers should be tasked with investing a certain proportion of their income in educating end-users through the production of guides, training, promotional campaigns, etc. – but this is secondary to, and in addition to, labelling the product correctly in the first place.
C) Improve your tools
There was a range of messages around the ways end-users wanted to see metrics tools improved – many specific to particular products. However, a common theme was the need for suppliers to find the sweet spot between innovation and the basics. There was a sense of frustration at the apparent investment in ‘bells and whistles’ features whilst more basic matters of data quality and the responsible provision and labelling of metrics went overlooked. It was clear that more transparency around the evidence trail for development priorities, and co-design with end-users, were much longed for.
D) Improve your indicators
A final group of messages clustered around the need to improve certain indicators in order to enable better benchmarking of small or niche fields. Most citation tools categorise articles according to the journal in which they appear, which, ironically, is one of the fundamental no-no’s of most principles of responsible metrics. This makes all subject-based analysis challenging, but it’s a particular problem for small or interdisciplinary fields, which are not well represented by large journal categories, and even more of a challenge when you want to compare academics in Niche Field at University X with academics in Niche Field at University Y. These are not easy problems to address, but article-level indexing would go some way towards it.
It should be said that there weren’t many direct messages about alt-metric data, although most of the other comments could easily have related to both bibliometrics and alt-metrics. Where there were comments, they were mainly calls for a joint source of bibliometric and alt-metric data to provide different views on the impact of outputs, and to provide some standards around how alt-metric data are collected and counted in order to allow for fair comparisons.
A summary of the key messages and some of our recommendations are given below. We hope that these might kick-start a conversation with suppliers that moves us towards a better understanding of the art of the possible, and ultimately a more robust and responsible approach to bibliometric and alt-metric evaluation.
Theme A: Improve and share your data
A1: We want greater coverage (preferably for free!), but if we can’t have that, please be clear about coverage limits
- Suppliers should provide easily available, regularly updated lists of current coverage and signal more clearly any significant scope and coverage limitations
- Suppliers should make it easier for customers to suggest new sources to plug gaps in disciplines and output types
- Suppliers should make clearer statements on their plans for coverage expansion
A2: We want better quality data (or at least be honest about its limitations)
- Suppliers should establish and report on KPIs around data quality improvement
A3: We live in a mashup culture – enable end-users to export, use and repurpose data
- Suppliers should relax their system download limits
- Suppliers should ensure that a standard and consistent range of identifiers is available for all data exports on their platforms to facilitate data integration and mashups
A4: Remember to whom the data belongs – a desire to re-assert a sense of community ownership
- Publishers should release cited references to CrossRef under open licensing conditions
Theme B: Be more responsible
B1: Suppliers have a duty of care to their end-users
- Suppliers should develop their own statements on the responsible use of metrics and adhere to them
- Suppliers should keep abreast of developments in the field of responsible metrics and update their tools accordingly
B2: Suppliers should provide better labelling for their products …and better education activities and use cases
- Suppliers should provide clear, up-to-date guidance as to how indicators are calculated, with worked examples
- Suppliers should provide clear warnings alerting end-users to the dangers of using certain metrics at fine levels of granularity, when the publication sets are too small to be statistically robust
- Suppliers should provide confidence levels around their indicators, where appropriate
- Suppliers should set aside a certain percentage of their revenue for educational activities to support responsible metrics
Theme C: Improve your tools
C1: Suppliers need to find the sweet spot between innovation and the basics
- Suppliers should involve end-users more actively in setting development priorities
- Suppliers should provide a clear evidence trail for their development priorities
Theme D: Improve your indicators
D1: The ability to benchmark by small or niche fields would be highly valued
- Suppliers should work to facilitate the sharing of benchmarking groups between members of the community
D2: Article-level subject indexing is needed
- Suppliers should explore ways to develop more effective services (including enhanced benchmarking functionality) through output-level subject indexing
D3: Altmetrics are still nascent but better standards and integration would be welcome
- Suppliers should integrate altmetric and bibliometric data to a greater extent
- Suppliers should seek to standardise altmetric indicators and sources to better enable their interpretation.
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the Lis-Bibliometrics Forum and is the ARMA Research Evaluation Special Interest Group Champion. She also chairs the newly formed INORMS International Research Evaluation Working Group.
Ian Rowlands is a Research Information & Intelligence Specialist at King’s College London and a member of the LIS-Bibliometrics committee.
Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.