In this blog post, Dr Muriel E Swijghuisen Reigersberg (University of Sydney) provides an insight into metrics and research assessment in the Australian research landscape, and how it compares to that of the UK.
How are metrics, research assessment, league tables, open access and researcher development interrelated, and how does this vary from one country to the next? My move to an Australian university from the UK higher education sector gave me cause to ponder this question.
Let me start with metrics and research assessment. Australia has a Research Excellence Framework (REF) equivalent: Excellence in Research for Australia (ERA), managed by the Australian Research Council (ARC). ERA is a discipline-specific research evaluation exercise: its unit of evaluation is the discipline at an institutional level, not the performance of individual academics. Officially, ERA rates specific research disciplines at each university against national and international benchmarks. For the 2018 ERA submission a citation methodology, based on metrics, was used to determine levels of research quality for, broadly speaking, science, technology, engineering, mathematics and medical (STEMM) fields. Arts, humanities and social sciences were assessed using peer review.
The use of citation metrics for research assessment purposes in Australia means that universities rely heavily on citation data and journal impact factors to determine where they should aim to publish. Internally, many organisations create tools to help staff develop publication strategies based on journal impact factor, and some libraries specialise in providing citation profiles. In more than one organisation I know of, researchers receive monetary or other rewards for publishing in highly ranked journals to incentivise ‘excellence’, whilst institutional partnerships are planned, in part, on the basis of co-authorship and citation statistics.
This may seem problematic in terms of quality assessment. There is, however, a critical difference: ERA is not used to allocate research income, so whether or not metrics provide a 100% accurate profile of disciplinary excellence is, in the financial sense at least, a moot point. ERA results do not determine the distribution of quality-related income, as the REF does. ERA examines publication activity and other types of data, such as funder-endorsed guidelines, research income and patents. Research Engagement and Impact (E&I) activities, recorded for the first time this year by the ARC, are currently also not tied to additional quality-related income, and the evaluation of E&I is not based on a metric system either. The E&I exercise is separate from ERA, although the research outputs submitted for ERA do inform impact statements, providing a link between the research and its impact. Specified income metrics feed into the engagement explanatory statement as well. Block grants supporting research activity are instead determined via the Higher Education Research Data Collection (HERDC). The upshot is that ERA is not the highly politicised data-gathering exercise that REF has become, although the cost implications of running the Australian exercise are substantial and it is run more frequently than REF.
Australian funding bodies have likewise decoupled ERA outcomes and citation indices from grant assessment and funding allocation in their reviewing and awarding policies. The National Health and Medical Research Council (NHMRC), for example, is a signatory to the San Francisco Declaration on Research Assessment (DORA), stating in its peer review guidelines that journal impact factors must not be used as a surrogate measure for quality, and that all assessment should be made on the basis of expert judgement. The question of whether citation metrics provide an accurate indicator of research excellence is consequently less fraught there in financial terms too.
What still bothers me, though, is the difference in messaging around the appropriateness of metrics for assessing research excellence. The ARC’s ERA submission guidelines equate citation levels with research excellence, despite making regular references to the need for expert input, whereas the NHMRC’s support for DORA indicates that citations and journal impact factors (which are, of course, based on citations) should not be used as a surrogate for excellence.
I have heard some argue that ERA assesses disciplines, whereas funding review assesses individual research (teams) and track records; therefore, they argue, citation metrics are less appropriate in the review of grant proposals. To me this rather misses the point. Citation metrics are not appropriate for measuring research excellence, full stop. It does not matter whether metrics are applied to outputs, disciplines or individuals. The Metric Tide and many publications since have evidenced this. So why would Australian leaders choose to use citation metrics at all?
The answer is, of course, research-related league tables, scale and human bias. Other than peer review, which has its own challenges, we currently have no 100% accurate, equitable, unbiased way of assessing the quality of research on a large scale to inform league tables. League tables attract staff, students and collaborators. In the eyes of many university leaders, league tables are a necessary evil to which the sector does not have an answer. Perhaps it is time we jointly began searching for one.
In the interim, the Australian reliance on metrics may change if the sequence of events I observed in the UK unfolds in Australia in a similar way. In the UK the emphasis on E&I led to an increased enthusiasm for open access (OA) publishing and open data sharing. This in turn shed light on how citation metrics are used by certain journals to support core business, thereby driving the (mis)use of citation metrics.
In Australia, levels of OA-related activity are lower. The ARC and NHMRC have OA policies requiring that publications arising from funded grants be made OA; no strategic funding is allocated post-award for this to happen, however. For ERA, the number of OA publications is monitored, but OA is not mandatory. Now that E&I has been introduced, however, government bodies have begun reviewing their data-sharing legislation and universities are responding: ORCIDs are making an appearance, and organisations such as the Council of Australian University Librarians (CAUL) are scoping data on article processing charges. OA activity is on the increase.
With cOAlition S established and Australian academics publishing in journals affected by cOAlition S, I also anticipate an increase in open access publishing activity. In the longer term this could lead to a revision of the use of metrics in Australian research assessment too as the links between metrics, research assessment, league tables, open access and researcher development become clearer.
Dr Muriel E Swijghuisen Reigersberg is a researcher development manager (strategy) at The University of Sydney, Australia, and previously worked at Goldsmiths, University of London, UK. At Sydney, she oversees the development of a University-wide researcher development training program in collaboration with researchers, faculty staff and professional service units.
In her spare time, Muriel maintains an academic profile in applied and medical ethnomusicology, regularly presenting at academic conferences, penning academic texts, peer reviewing and blogging. She has also offered consultancy support to specialist research institutes in arts and humanities in Slovenia and Japan. Muriel is a keen supporter of the responsible sharing of academic knowledge.
View her publications here: http://orcid.org/0000-0003-2337-7962
Unless it states otherwise, the content of the Bibliomagician is licensed under a Creative Commons Attribution 4.0 International License.