Is the best way of incentivising open scholarship to measure it? Lizzie Gadd is not so sure.
There is a lot of talk at the moment about measuring open scholarship as a means of incentivising it. For example, the European Commission’s recently updated recommendation on access to and preservation of scientific information calls for member states to change the academic evaluation system by introducing “additional indicators and metrics that can inform assessment on openness”. The LERU Open Science roadmap is another example, suggesting universities “embed Open Science principles in the institutional research assessment system, shifting away from an excessive reliance on publication-based journal impact factors and citation cultures and recognising Open Science approaches such as OA publishing, data/code/reagent sharing.” I have sympathy with these objectives. We all want openness, and we all believe Campbell’s Law – i.e., the way you measure someone is the way they’ll behave. It’s just that the more I think about it, the more concerns I have that measuring openness might not be the best way of achieving it. So, as a form of blog therapy, I have laid out my fears in this post in the hope that either someone can reassure me that I’m overthinking this, or that we might adjust our collective view about the best way forward here.
1. Openness and quality are not the same thing
Open scholarship is really important. But the reason open scholarship is important is that scholarship is important, not because openness is important in and of itself. However, you wouldn’t think that from some of the messaging we’re getting around openness. Here are three statements I’ve heard recently:
“Open science is just good science”. (Always?)
“Closed science is bad science”. (Really?)
“If it’s not open, is it really research?” (Erm, yes?)
The truth is, you can have bad science that is open and you can have good science that is closed. And closed research is still research. If these statements were true, anyone who was publishing before about 2002 was doing bad science (or no science at all according to the last comment) and predatory open access journals would be brimming with quality material. Quality and openness are two completely separate things, and we do researchers and research a disservice if we confuse the two.
Now I know that what is really meant by these statements is that open research is good practice. It is a call for the scholarly process to be made more open – to share what we’re working on, to publish our data and our findings as quickly and openly as possible, to fail faster, and progress more quickly. Of course we all want that. And the root of these comments is a sense of frustration that it’s all going too slowly. And we understand that too. But whilst making exaggerated statements like these might win a few hallelujahs from the choir, they are extremely off-putting to the unconverted. Indeed, taken the wrong way they imply that openness is more important than, or the route to, quality. And the last thing we want is openness at the expense of quality.
2. Measuring openness and quality leads to double the metrics
Despite the LERU report’s suggestion that we “shift away” from publication-related indicators and towards openness indicators, the reality is that, because openness and quality are two different things, we are going to need both. Indeed, the EC’s recommendation is more realistic on this point, calling for “additional indicators and metrics” rather than alternative ones.
Now the problem with an additional layer of metrics is that academics are already buckling under the ones we’ve got. We’re losing good people (sometimes literally, sadly) through mental health problems exacerbated by the unbearable audit demands of academic life. I’ve spent the last two years peddling messages about the burden of research metrics and the importance of doing bibliometrics responsibly. If, in addition to evaluation criteria that communicate ‘only world-leading research is good enough’, we also introduce new criteria that state, ‘only world-leading research that is fully open from the first lightbulb moment, through to lab notebooks, beautifully curated open data, preprints and all subsequent versions, is good enough’, I fear it might be the straw that breaks the camel’s back. I know that calls for open scholarship are ultimately motivated by a desire to serve the world, but if in the process we drive the originators of that scholarship out of academia through another set of ill-thought-through metrics, that strikes me as counter-productive.
3. Is openness mature enough to be measured?
One of the main reasons I feel concerned about an additional layer of openness metrics is that I’m not convinced that openness is yet at a mature enough state to be measured. I guess the very fact that we feel the need to incentivise openness through metrics speaks to this. The sad truth is, the vast majority of researchers either have not heard of, have not been convinced by, or have no practical means by which to engage with an entirely new open modus operandi. We have to remember there is a huge gap between the scholars at the avant-garde of open practices, who may shout loudly, and the masses in the rearguard. And I would wager that researchers at wealthier institutions – those with state-of-the-art data management services, big open scholarship support teams, and perhaps their own ‘open’ university press – are in a much better position to engage with all things open than researchers in newer, smaller institutions.
There are big disciplinary differences here too. The very fact that much of the ‘Open’ discourse is followed closely by the word ‘Science’ illustrates this perfectly. If you’re counting an institution’s open outputs, institutions with lots of journal-based disciplines are going to have a much bigger number than those dominated by monograph- or artefact-based ones. So if we’re not careful, in the brave new world of open, where the rule book is torn up and the metrics re-designed, STEM will win again.
But even within STEM, as Hilda Bastian recently pointed out, opportunities to do open may not be all they are cracked up to be. She found that despite protestations from OA advocates that most Gold journals are APC-free, the actual selection available to her that met various criteria (e.g., discipline-appropriate, provided a DOI, published in English, indexed in PubMed/Medline) was a fraction of those purportedly available. Now I know that in a truly OA world, the journal as we know it may become a thing of the past – and good riddance I say. But if we’re looking to transition to openness by measuring openness, we need to make sure folks all have equal access to the openness we seek.
Perhaps one of the problems with these large ‘Open Science’ commissioning reports is that new openness indicators are mentioned in the same breath as new openness practices and policies, when actually there is an order to these things. We could argue over whether policy should drive practice, or whether the availability of tools and services should enable policy. But we could probably all agree that metrics should come last: once everyone has the same chance to do open, but some are still dragging their feet.
4. Openness should be its own reward
I suppose one of my biggest disappointments in all of these calls to incentivise open scholarship through metrics is that openness was always supposed to be its own reward. Scholarship could be communicated at a faster rate than ever before – no publication delays! And there were piles of studies demonstrating a citation advantage to open access. Of course that didn’t lead to universal take-up, but in the disciplines where it did work (think Physics and arXiv) the reward was simply that openness became the cultural norm. If you didn’t do open, you missed out. To my mind, we rushed in too quickly with policy demands instead of providing the mechanisms by which academic communities could form their own flavour of openness. At that point openness lost the opportunity to become a cultural norm, because it became a compliance issue instead. Academics did open because they HAD to, for fear of not being REF-submittable or being blacklisted for further funding.
That’s not to say the situation is irredeemable, but I’d say: don’t wag the finger, point the way. We need to make it easier to do open scholarship than closed scholarship. We need to make it so that people turn round and say: it’s just easier to make it all open. It’s just easier to document trials – it saves two funders investing in basically the same research. It’s just easier to make my data available in a repository – it saves coming back to it in five years’ time and finding my hard disk has corrupted. It’s just easier to put my papers on the preprint archive – I get earlier feedback, and it leads to a better publication and greater impact. It would seem to me that the best incentive for open scholarship would be to make it so straightforward that folks would be daft not to engage with it, not (as is currently the case) some tricky business for which they might get a gold star.
Please don’t get me wrong – I’m a huge advocate for open scholarship. I do believe it will serve the world. But I feel like its best chance of success is through greatly simplified, scholar-centric systems and processes, not through a pressure campaign which conflates openness with quality, the success of which is measured by another layer of invasive and irresponsible metrics. What am I missing?
Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the LIS-Bibliometrics Forum and is the ARMA Research Evaluation Special Interest Group Champion. She also chairs the newly formed INORMS International Research Evaluation Working Group.
Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.