Measuring openness: should we be careful what we wish for?

Is the best way of incentivising open scholarship to measure it?  Lizzie Gadd is not so sure.

There is a lot of talk at the moment about measuring open scholarship as a means of incentivising it.  For example, the European Commission’s recently updated recommendation on access to and preservation of scientific information calls for member states to change the academic evaluation system by introducing “additional indicators and metrics that can inform assessment on openness”. The LERU Open Science roadmap is another example, suggesting universities “embed Open Science principles in the institutional research assessment system, shifting away from an excessive reliance on publication-based journal impact factors and citation cultures and recognising Open Science approaches such as OA publishing, data/code/reagent sharing.” I have sympathy with these objectives.  We all want openness, and we all believe Campbell’s Law – i.e., the way you measure someone is the way they’ll behave. It’s just that the more I think about it, the more concerns I have that measuring openness might not be the best way of achieving it.  So as a form of blog therapy, I have laid out my fears in this post in the hope that either someone can reassure me that I’m overthinking this, or that we might adjust our collective view about the best way forward here.

Measuring West by Steve Harris CC-BY 2.0

1. Openness and quality are not the same thing

Open scholarship is really important.  But the reason open scholarship is important is that scholarship is important, not because openness is inherently important in and of itself.  However, you wouldn’t think that from some of the messaging we’re getting around openness. Here are three statements I’ve heard recently:

“Open science is just good science”. (Always?)

“Closed science is bad science”.  (Really?)

“If it’s not open, is it really research?” (Erm, yes?)

The truth is, you can have bad science that is open and you can have good science that is closed.  And closed research is still research.  If these statements were true, anyone who was publishing before about 2002 was doing bad science (or no science at all according to the last comment) and predatory open access journals would be brimming with quality material.  Quality and openness are two completely separate things, and we do researchers and research a disservice if we confuse the two.

The truth is, you can have bad science that is open and you can have good science that is closed.  And closed research is still research.

Now I know that what is really meant by these statements is that open research is good practice.  It is a call for the scholarly process to be made more open – to share what we’re working on, to publish our data and our findings as quickly and openly as possible, to fail faster, and progress more quickly.  Of course we all want that. And the root of these comments is a sense of frustration that it’s all going too slowly.  And we understand that too.  But whilst making exaggerated statements like these might win a few hallelujahs from the choir, they are extremely off-putting to the unconverted. Indeed, taken the wrong way they imply that openness is more important than, or the route to, quality. And the last thing we want is openness at the expense of quality.

2. Measuring openness and quality leads to double the metrics

Despite the LERU report’s suggestion that we “shift away” from publication-related indicators and towards openness indicators, the reality is that, because openness and quality are two different things, we are going to need both.  Indeed, the EC’s recommendation is more realistic on this point in calling for “additional indicators and metrics” rather than alternative ones.

Now the problem with an additional layer of metrics is that academics are already buckling under the ones we’ve got.  We’re losing good people (sometimes literally, sadly) through mental health problems exacerbated by the unbearable audit demands of academic life.  I’ve spent the last two years peddling messages about the burden of research metrics and the importance of doing bibliometrics responsibly.  If, in addition to evaluation criteria that communicate ‘only world-leading research is good enough’, we also introduce new criteria stating ‘only world-leading research that is fully open from the first lightbulb moment, through to lab notebooks, beautifully curated open data, preprints and all subsequent versions, is good enough’, I fear it might be the straw that breaks the camel’s back.  I know that calls for open scholarship are ultimately motivated by a desire to serve the world, but if in the process we are driving the originators of that scholarship out of academia through another set of ill-thought-through metrics, that strikes me as counter-productive.

3. Is openness mature enough to be measured?

One of the main reasons I feel concerned about an additional layer of openness metrics is that I’m not convinced that openness is yet in a mature enough state to be measured.  I guess the very fact that we feel the need to incentivise openness through metrics speaks to this. The sad truth is that the vast majority of researchers either have not heard of, have not been convinced by, or have no practical means by which to engage with an entirely new open modus operandi.  We have to remember there is a huge gap between scholars at the avant-garde of open practices, who may shout loudly, and the masses in the rearguard.  And I would wager that researchers at wealthier institutions – those with state-of-the-art data management services, big open scholarship support teams, and perhaps their own ‘open’ university press – are in a much better position than researchers in newer, smaller institutions to engage with all things open.

There are big disciplinary differences here too. The very fact that much of the ‘Open’ discourse is followed closely by the word ‘Science’ illustrates this perfectly.  If you’re counting an institution’s open outputs, they’re going to have a much bigger number if they have lots of journal-based disciplines than monograph- or artefact-based ones. So if we’re not careful, in the brave new world of open, where the rule book is torn up and the metrics re-designed, STEM will win again.

But even within STEM, as Hilda Bastian recently pointed out, opportunities to do open may not be all they are cracked up to be.  She found that despite protestations from OA advocates that most Gold journals are APC-free, the actual selection available to her that met various criteria (e.g., discipline-appropriate, provided a DOI, published in English, indexed in PubMed/Medline) was a fraction of those purportedly available.  Now I know that in a truly OA world, the journal as we know it may become a thing of the past – and good riddance I say.  But if we’re looking to transition to openness by measuring openness, we need to make sure folks all have equal access to the openness we seek.

So if we’re not careful, in the brave new world of open, where the rule book is torn up and the metrics re-designed, STEM will win again.

Perhaps one of the problems of these large ‘Open Science’ commissioning reports is that new openness indicators tend to be mentioned in the same breath as new openness practices and policies, when actually there is an order to these things.  We could argue over whether policy should drive practice, or whether the availability of tools and services should enable policy. But we could probably all agree that metrics should come last – once everyone has the same chance to do open, but some are still dragging their feet.

4. Openness should be its own reward

I suppose one of my biggest disappointments in all of these calls to incentivise open scholarship through metrics is that openness was always supposed to be its own reward.  Scholarship could be communicated at a faster rate than ever before – no publication delays!  And there were piles of studies demonstrating a citation advantage to open access. Of course that didn’t lead to universal take-up, but in the disciplines where it did work (think physics and arXiv) the reward was simply that openness became the cultural norm.  If you didn’t do open, you missed out.  To my mind, we rushed in too quickly with policy demands instead of providing the mechanisms by which academic communities could form their own flavour of openness. At that point openness lost the opportunity to become a cultural norm, because it became a compliance issue instead.  Academics did open because they HAD to, for fear of not being REF-submittable or being blacklisted for further funding.

I suppose one of my biggest disappointments in all of these calls to incentivise open scholarship through metrics is that openness was always supposed to be its own reward… If you didn’t do open, you missed out.

That’s not to say that the situation is irredeemable, but I’d say don’t wag the finger, point the way. We need to make it easier to do open scholarship than closed scholarship. We need to make it so that people turn round and say: it’s just easier to make it all open.  It’s just easier to document trials – it saves two funders investing in basically the same research. It’s just easier to make my data available in a repository – it saves coming back to it in five years’ time and finding my hard disk is corrupted.  It’s just easier to put my papers on the preprint archive – I get earlier feedback on my paper and it leads to a better publication and greater impact.  It would seem to me that the best incentive for open scholarship would be to make it so straightforward that folks would be daft not to engage with it, not (as is currently the case) some tricky business for which they might get a gold star.

Please don’t get me wrong – I’m a huge advocate for open scholarship.  I do believe it will serve the world.  But I feel like its best chance of success is through greatly simplified, scholar-centric systems and processes, not through a pressure campaign which conflates openness with quality, the success of which is measured by another layer of invasive and irresponsible metrics.  What am I missing?


Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the Lis-Bibliometrics Forum and is the ARMA Research Evaluation Special Interest Group Champion. She also chairs the newly formed INORMS International Research Evaluation Working Group.

 

Unless it states otherwise, the content of The Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.

14 Replies to “Measuring openness: should we be careful what we wish for?”

  1. Thanks, Lizzie. A very insightful blog. I’d like to highlight a couple of your comments.

    First, that Openness is not yet mature enough to be measured. This makes me think of other areas of the measurement culture, which tries to apply the same metric across the whole lifecycle, whereas that metric is likely to be a relevant indicator for only part of the lifecycle. Equally, indicators of activity and progress towards an endpoint are not necessarily (or usually) the same as measurement of the endpoint itself (and vice versa).

    Second, that Openness is one element of good practice (where the nature of the openness may also need to vary depending on specific circumstances). That means that Openness is part of the Integrity agenda. For the latter, there is a danger that good practice comes to mean only the absence of misconduct, rather than the wider ambition of good conduct in research practice. Of course, as well as being an element of Research Integrity, Openness is also part of the Impact agenda, as it may help to enable societal benefit (as long as the research is understandable and translatable – one of the findings of a report commissioned by the UK Open Access Implementation Group some years ago).

    As you note at the end of your piece, policy can (and should) encourage behaviours. The problem comes when policy requires compliance, which then drives minimum behaviours. In some cases, compliance-based policy is appropriate, but that’s usually where the topic has matured, there are accepted standards, or there are significant (e.g. health) risks. Openness is not yet at that point of maturity, and still needs enabling and stretching policies that engage across the disciplinary breadth.


    1. Thanks Ian. You make some really helpful and pertinent points. Your second point about openness being part of the integrity agenda particularly resonates with me. At Loughborough we are revisiting our open research policies in the light of our responsible metrics policy – i.e. how responsible are even responsible bibliometrics if they increase our focus on a small subset of ever more expensive journals? I believe open practices form part of a responsible scholarly communication strategy. I’m just not yet convinced (like you) that metrics would be the way of encouraging engagement with that strategy.


  2. I do agree with most of what you wrote. The community is very hard to move: people will never see that it is easier if they never try, and there is a cost (a learning curve) in changing one’s habits.
    On the other hand, standards have been changing continuously (I would probably not be able to publish my 2007 papers anymore, simply because a larger sample size is now required for the experiments I did): making code, data and text open for all to read and re-use is an emerging standard, and looking for metrics is a weaker lever than making openness a requirement for publication.

    What I see as a very good example of an alternative to metrics (which are always bad, even when necessary) is small requirements. For instance, a requirement for a data management plan makes researchers call the data management helpdesk, which can then start a discussion going further than just the DMP.

    Step by step into better practices.


  3. Hi Lizzie, great post as always. I think I have to comment on those three statements, as I’m pretty sure it was me who made them all at one point or another…

    The historical component here is really important to me. Ten years ago, most researchers hadn’t heard about OA, data sharing repositories were rarer, and sharing was costly and often without reward. So just because the technology was not available for many aspects of open, it does not mean that things were retrospectively wrong – it was just the way of doing things then. By comparison, no one says that printing articles on paper was wrong historically; there’s just not much point now in a digital world. We should be asking the same questions of historical research as we do for all new research, but through the lens that research was simply performed and communicated differently in the past. If we acknowledge that openness, through transparency, data sharing, enhanced reproducibility and verifiability (etc.), generally leads to more rigorous science, then this is a good thing. I still don’t have a clue how to define scientific quality besides its longevity, so I guess it is still too early to tell if ‘openness’ will lead to higher or lower quality science.

    “Open science is just good science”. (Always?) – Not always, but generally it could and should be. Sharing data, making work more reproducible, making it more fair and transparent, to me, underpin the values of good science. This actually highlights an ongoing conflict in my mind: open science, in fact, does not really exist as an independent entity, and in reality is just a multi-dimensional spectrum that overlaps in many respects with just ‘normal’ science. But it is also underpinned by values such as equity, justice, and freedom, as a transformation of what many view as an increasingly closed, corporate, and discriminatory enterprise. This paper by Mick Watson, which I’m sure you have read, but many others have not, emphasises much of this better than I can in a slogan/tweet: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4436110/ (but shows much of my thinking on this matter)

    The other side of this is that if we don’t have data or code, if cherry-picked results are everywhere (which they are) and HARKing is common – all of this being the stuff openness is working towards helping to resolve – then realistically science is indistinguishable from anecdote, and we are reduced to faith, instead of having transparency breed our trust in science. So, I fully agree that openness and quality are not the same thing, not at all. However, I believe that generally more open practices should improve the research and communication process, which may or may not pay off in the long run with higher research quality.

    Am I frustrated with progress with ‘open science’? A little, sometimes, yes. But this is far overshadowed by my sense that things are moving into new states very fast, driven by some brilliant minds and their actions. Is there an open science choir? Absolutely, sometimes. But in many other domains, this is termed a community, and openness is one of the greatest out there in my view. Indeed, if you go to OpenCon, we call ourselves part of a big family, united in kinship and our passion for change. Call it a choir if you want, but at least to me it’s more than that, and it is one of the most permeable, welcoming, and well-intentioned that I know of.

    I do feel part of a huge echo chamber sometimes, and try to expand out of it whenever possible. I do believe that we have an enormous communications issue within ‘open science’, particularly about the history, philosophy, wider implications, social importance, and underlying values and principles. Are there ‘unconverted’? I don’t think so – I feel again that all researchers come into their work wanting to be open by default, as part of their core values as a human and as a researcher. However, it is the hideous complexity of the broader modern system that constrains them (e.g., career progress incentives, costs, information overload), and this is where most of my frustration lies. Indeed, I feel a deep sadness and sympathy for researchers who want to be more ‘open’ but feel they cannot for one reason or another. Thus, my intention in making these statements is more to be inspirational, but also challenging and introspective, as I think we need this.

    That being said, if you or anyone else feel my comments are ever off-putting in the future, please do call me out on it. There’s a lot of carelessness out there on social media these days, and I’m far from perfect, and I let my enthusiasm sometimes come across too strongly – it’s a curse when you are passionate about something. A few people have also told me that I’m getting pretty important in this space, so although I don’t feel that at all, it does mean I need to be more responsible with my communications (as do we all). Hopefully, though, I’ve been able to describe some of the thoughts that led me to making these statements, and they might make a bit more sense now. Or not. So, thanks for this post, and I will be more careful not to come across as divisive in the future. No one wants that – rainbows and unicorns all the way. 🙂


  4. Dear Elizabeth,
    You made very good points on the principle of “openness metrics” and their value in the current state of things. I would like to add two things:

    1/ Even if I’m sure you don’t believe it, your post implies that “pre-existing metrics” were focusing on “quality”, which is not the case. The main ones measure production, citation and impact, and none of these equals “quality”, as at least 40 years of bibliometrics have shown.

    2/ Some parts of the OA & OS movements have fought against existing metrics because they saw them as “conservative”, “misleading”, “reinforcing the current legacy publishing”, or “not taking into account audiences larger than close colleagues.” Hence the push towards “alternative” and then “open” metrics, not as a “new layer” but rather as a replacing one.

    As often with OS, there is a division between those who wish for a new world and more reformist ones who simply call for some change (though both would sign DORA).

    Your point on “good practices” (like some of the comments) can lead to something very different from metrics: procedures, mandates, obligations, labels, as many institutions (from the ERC to single departments) have proposed or enacted. DMPs, green archiving and the like can thus become part of the normal scholarly communication process, just as “publications” did in the last century. So: “no metrics”, “new metrics” or “alternative metrics” for OS?

    Best

    Didier Torny
    Co-pilot of the evaluation group
    of the French Committee for Open Science


    1. Hi Didier!

      Thanks for your helpful comments.

      I take your point that publication metrics can’t all be seen as quality indicators even if they are used as such. I was using the term with a broad brush to contrast them with those used to measure openness.

      You raise an interesting point about altmetrics, which are, of course, yet another layer of metrics! Although these are not currently used in any systematic way for research evaluation.

      I think the question about no, new or alternative metrics for OS probably, ultimately, needs to be answered in a much more nuanced way, thinking in terms of particular OS practices (OS is an umbrella term for lots of things), research lifecycle stage (as Ian suggests), discipline, and level of granularity (university/research group/individual).

      I look forward to seeing the outcomes of your Open Science evaluation group!


  5. Thank you for a thoughtful post.
    I wanted to highlight a project that aims to provide indicators/metrics for OS: https://mniopenresearch.org/articles/2-2/v2. It’s been a challenging experiment, especially as creating a baseline to measure openness against is difficult. I hope that we end up with a toolkit that researchers and institutions can use. Such indicators can help shape policy and ease the fear of changing behaviors.


    1. Hi Ashley,

      Thanks for mentioning this. I was aware of the project and believe it aims to develop openness indicators to help persuade policy makers to establish open policy – is that right? If so, I think this is somewhat different to developing indicators to incentivise individuals to engage with openness – although ultimately one might lead to the other I guess!

      I look forward to learning more about the project when the results are published!

