Wrong question?

Lizzie Gadd argues that good research evaluation starts with good questions.

The 2019 Ig Nobel Prize winners were announced in September. Among my favourites was some research into the pleasurability of scratching an itch. Contrary to what the name suggests, winning an Ig Nobel Prize is not an indictment of your research design or methods. It’s an indictment of your research question. Is the question a good one? Is it an important one? Or are you even starting with a research question at all?

It strikes me that in research evaluation we’re also in danger of this. Often, it’s not our metrics that are at fault, or even our methods; it’s our questions.

Image credit: critical thinking asylum and jppi, CC BY-NC-ND

The INORMS Research Evaluation Working Group recently developed a process called SCOPE by which research evaluation can be done responsibly, the ‘S’ of which stands for ‘START with what you value’. In other words, start with the right questions, the things you want to know, and work forwards. Don’t start with what others value, or with your dataset, and work backwards.

So, a discussion list member recently reported that they’d been asked to come up with a single research indicator for their university’s KPIs. Just the one. ‘Research indicator’.  What does that mean? What do they want to indicate about research? What are they trying to achieve? There are a whole lot of pre-questions that need answering before we can start to answer this one.


Again on another discussion list, a colleague was seeking a bibliographic data source which provided better coverage of arts and humanities content for bibliometric analysis. Lots of helpful folks pitched in with responses. But no-one questioned whether arts & humanities colleagues actually valued citations enough to want to be measured by them, or whether there were better ways of assessing the quality and visibility of their outputs. Wrong question.

But it’s not just asking the wrong question that we’re prone to; it’s not starting with a question at all and retrofitting one once we have our data. In science this has become known as ‘HARKing’ – Hypothesising After the Results are Known. And I’ve seen two cases recently where it feels like this is happening.

So, Elsevier’s newly formed International Centre for the Study of Research recently produced a report demonstrating the increase in fractional authorships resulting from increased collaboration. Fine. Except it was advertised as a study purporting to answer the question “Are authors collaborating more in response to the pressure to publish?”. Now this is a good question, but you can’t answer it with the data they have. If you want the answer to THIS question, you’d have to ask authors what their motivations for collaborating were. The study doesn’t do that. Wrong question.

In the interests of balance, a collaboration between Digital Science and CWTS Leiden recently produced a research landscape mapping tool. It’s a very interesting visualisation of the research publications resulting from the big funding agencies. Great. However, it was promoted as a tool to “support research funders in setting priorities”. Now, knowing the brilliance of CWTS Leiden, I struggle to believe that they started with the question “How can we better support funders to make funding decisions?” and ended up with a tool that showed what publications had resulted from historical funding decisions. And the thought that this data should be used in any significant way to support funding decisions concerns me. What seems more likely to have happened is that a fabulous dataset and a fabulous visualisation tool got together and had a fabulous research mapping tool baby, and then, to justify the accident of its birth, cursed it with a grand title it could never quite live up to.

Asking the right question is important. And we need to articulate our question before we decide how best to answer it. And when we answer it, we need to offer our findings in light of the question we asked, not one we think might give the better headline.

Ultimately we need to look beyond the questions we are asking to the systemic effects of asking them. Let’s take the h-index as an example. My h-index is currently 13. And if my h-index of 13 is the answer, what is the question exactly?


So, technically, I guess the question is, “How many publications do you have with at least that number of citations?” But the implied question is “How prolific are you at producing well-cited publications?”  However, you can’t answer that question without knowing my career stage and discipline and how many career breaks I’ve had and what role I played on those publications and how many co-authors they’ve had. So the h-index alone doesn’t provide an answer.  But even if it did, the systemic effects of asking this question are significant. Do we want to employ or promote individuals who are only prolific at producing well-cited publications? The answer might be yes, because that’s how we as universities are measured. But then there’s a bigger question as to whether rewarding only publication-producing individuals is good for scholarship? Good for our universities? Good for humanity? Is this actually fulfilling our mission as universities? Even if it’s making our universities look good in the eyes of the rankings or some other well-meaning but misguided party.
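For anyone unfamiliar with how that number is arrived at, here is a minimal sketch in Python of the calculation behind the “technical” question above. The citation counts are invented purely for illustration (they are not my actual publication record), and the point is how little the calculation sees.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h publications
    each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: each number is one paper's citation count.
# Note everything the calculation ignores: career stage, discipline,
# career breaks, author role and number of co-authors.
papers = [50, 42, 36, 30, 28, 25, 22, 20, 18, 17, 16, 15, 13, 12, 8, 3, 0]
print(h_index(papers))  # -> 13
```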

Aaron Swartz once said, “What is the most important thing you could be working on right now? And if you’re not working on that, why aren’t you?” It seems to me that the biggest problems in the research evaluation space could be solved not by better methods and metrics, but by better questions. Are they important? Are they honest? Are they mission-driven? Once we have our questions right – once we have put our values back into our evaluations – we will be well on the road to more responsible research assessment.

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She is the chair of the Lis-Bibliometrics Forum and co-Champions the ARMA Research Evaluation Special Interest Group. She also chairs the INORMS International Research Evaluation Working Group.


6 Replies to “Wrong question?”

  1. Lizzie, great post as always. I would also ask: if we “want to employ or promote individuals who are only prolific at producing well-cited publications”, does that mean including negative citations as well?


  2. I am one of the researchers behind the funding landscape tool that you mention in this blog and I would like to clarify some misunderstandings about what this tool is intended to be used for – which, it seems, we didn’t manage to convey.

    First, let me say that I really appreciate the post, and I fully agree that evaluation should start by asking what the organisation or programs aim to achieve, and only later look for relevant indicators or quantitative evidence.

    Research landscape maps aim to visualise which research areas are relatively funded or unfunded by a given agency, or to map research activity in general. They aim to help policy analysts think about whether an agency’s publications are in the expected research topics – and whether some topics are relatively over- or underfunded in relation to the agency’s goals. Some pilot studies have been carried out on how landscape mapping could be used for these purposes; you can find recent publications using this approach on various topics: obesity, avian flu or comparing diseases, among others.

    The goal is NOT to come up with an answer of how much research should be done on a topic, or which topics should be promoted, but to foster deliberation on priority setting. Notice that it is a map, not a road to be followed: it allows one to pose diverse questions and come up with several answers depending on the perspectives taken.
    The NIH has been particularly active in using these research landscapes for various purposes, setting up an Office of Portfolio Analysis (https://dpcpsi.nih.gov/opa).

    The idea is grounded in previous theoretical contributions by many science policy scholars, such as Daniel Sarewitz, regarding the lack of alignment between research priorities and societal demands or, more generally, between stated programme goals and outcomes.

    Hence, the approach responds to potential policy questions, in agreement with the key idea behind the post.


    1. Thanks so much for your engagement! I can certainly understand the use of this sort of data for understanding historical funding decisions and portfolio analysis, as the NIH have done. I think what I’m challenging is the use of this data to support future priority setting. Bibliometric data is just not current enough (papers can take years to get published and indexed) and not reliable enough (there is not always a correlation between funding volume and paper volume), and even if there were, just because a problem has been funded and written about does not mean it has been addressed. Should we turn our attention away from climate change issues because a lot has been written about them, and start investing in other areas? I think there is a difference between offering up data as a solution, and being called upon by those with a problem to apply data to it. I think I’d have more confidence if the tool had been co-designed by funders, or if there was evidence of funders using it for priority setting, but it doesn’t sound as though that is currently the case? Happy to be corrected though!


      1. Thanks Lizzie. The map does not tell you that you have to put more resources where you see more activity (publications). The interpretation can be the opposite: if you see in the map that your organisation does not support much research in climate change, it may help you ask yourself whether you should.

        And yes, funding agencies such as the NIH have been using this type of map for strategic thinking. And yes, we interacted with colleagues at the Wellcome Trust and Digital Science in the context of RoRI to develop this specific tool. It is indeed at an experimental stage and we hope it will improve with further interactions: for example, as you suggest, adding funding data in terms of money spent.

