Job Opportunity: University of Sheffield is looking for a Library Scholarly Communications Manager!

Job Reference Number: UOS015654
Job Title: Library Scholarly Communications Manager

Salary: Grade 8, £39,324 to £46,924 per annum, with the potential to progress to £52,793 through sustained exceptional contribution

Closing Date: 31st March 2017

Summary:
Copyright advisory and advocacy services form a critical component of the infrastructure needed to advance teaching and learning in the digital age. Specialist educational copyright and licensing services also benefit research in the context of increasingly open publishing of scholarly communications.

Reporting to the Associate Director for Academic & Digital Services, you will develop and implement services and programmes that build an understanding of copyright and licensing within the scholarly communications and publishing landscape across the university community. You will ensure compliance with copyright legislation, university policy and licences, and develop a shared institutional understanding of both the opportunities and challenges associated with this field.

The post-holder will provide the university community (Faculties and Professional Services) with legally compliant, detailed interpretation of, and policy advice on, copyright. You will actively coordinate advisory services, making current and reliable information available on the web and bringing together specialists in broadcast media, newspapers, music and other formats, particularly where the university has agreed licence schemes. Operationally, you will oversee the necessary information management processes and audit requirements.

You will be the key contact with the University’s Legal Panel Agreement on matters pertaining to the Copyright, Designs and Patents Act 1988 and subsequent statutory instruments, including the 2014 exceptions. You will engage with external bodies including the Copyright Licensing Agency, the UK Government Intellectual Property Office and other licence-issuing bodies. Professionally, you will establish effective external networks concerned with copyright and intellectual property in universities.

Educated to degree level (or with equivalent work experience), you will be able to think strategically as well as deliver operationally. You will be a confident communicator, able to identify opportunities to innovate and change within the evolving regulatory framework. You will enjoy working with groups and individuals, including academic staff, researchers and students, as well as networking beyond the University.

Please see the Job Description & Person Specification for further details and apply using the online application form.

HEFCE: The road to the Responsible Research Metrics Forum – Guest post by Ben Johnson

On Wednesday 8th February 2017, Imperial College made headlines by announcing that it had signed the San Francisco Declaration on Research Assessment (DORA), meaning that Imperial will no longer consider journal-based metrics, such as journal impact factors, in decisions on the hiring and promotion of academic staff. The decision followed a long campaign by Stephen Curry, a professor of structural biology and long-standing advocate of the responsible use of metrics.

At the end of last year, Loughborough University issued a statement on the responsible use of metrics in research assessment, building on the Leiden Manifesto. This was followed two weeks ago by a statement on principles of research assessment and management from the University of Bath, building on the concept of responsible use of quantitative indicators. And earlier, in 2016, the Stern review of the Research Excellence Framework recognised clearly that “it is not currently feasible to assess research outputs in the REF using quantitative indicators alone”.

What these examples and others show is that the issue of metrics – in particular ‘responsible metrics’ – has risen up the agenda for many universities. As someone closely involved in the HEFCE review of metrics (The Metric Tide), and secretary to the new UK Forum for Responsible Research Metrics, I am of course delighted to see this.

Of course, the issue of metrics has been bubbling away for much longer than that, as the Metric Tide report set out. University administrators, librarians and academics themselves have taken a leading role in promoting the proper use of metrics, with forums like the ARMA metrics special interest group promising to play a key part in challenging attitudes and changing behaviours.

In addition, as we have seen with university responses to the government’s HE green paper and to the Stern review, the wider community is very alive to the risks of an over-reliance on metrics. This was reflected in the outcomes of both exercises, with peer review given serious endorsement in both the draft legislation and the Stern report as the gold standard for the assessment of research.

These developments are exactly the kinds of things that the new UK Forum for Responsible Research Metrics wants to see happening. The Forum has been set up with the specific remit to advance the agenda of responsible metrics in UK research, but it is clear that this is not something it can deliver alone – it will require a substantial collective effort.

So what will the Forum do? Well, as the Metric Tide report states, many of the issues relate to metrics infrastructure, particularly around standards, openness and interoperability. The Forum will have a specific role in helping to address longstanding issues, particularly around the adoption of identifiers – an area of focus echoed by the Science Europe position statement on research information systems published at the end of 2016, which is itself a useful touchstone for thinking about these issues.

To support the Forum, Jisc is working hard on developing an action plan to address the specific recommendations of the Metric Tide report, with a particular focus on building effective links with other groups working in this area, e.g. the RCUK/Jisc-led Research Information Management (RIM) Coordination Group. This will be discussed when the Forum meets again in early May.

However, sorting out the ‘plumbing’ that underpins metrics is no good if people continue to misuse them. The Forum will therefore take a complementary look at the cultures and behaviours within institutions and elsewhere: firstly, to develop more granular evidence of how metrics are being used, and secondly, to look at making specific interventions to support greater responsibility from academics, administrators, funders, publishers and others involved in research.

With that in mind, Universities UK and the UCL Bibliometrics Group, under the auspices of the Forum, will shortly issue a joint survey of HEIs on the use of problematic metrics in university management and among academic groups. This will help to identify the scale of any (mis)use of measures such as the JIF, and also help us to understand better why initiatives like DORA have not been more widely adopted in the UK.

Of course, metrics have much broader uses than just measuring outputs – they are also used to measure people, groups and institutions. This is a key finding of the Metric Tide report, but one that is often overlooked when attention narrows to output metrics alone. The Forum will also be focussing on this, seeking to bring people together across all domains.

To make a decisive contribution here, the Forum needs to have clout, and it is for this reason that the five partners (HEFCE, RCUK, Jisc, Wellcome and Universities UK) asked Professor David Price to convene and chair the Forum as a mixed group of metrics experts and people in positions of serious influence in their communities. This was a delicate balance to strike, and one that can only be successful if the Forum engages effectively with the various interested communities.

With that in mind, the Forum is planning to set up a number of ‘town hall’ meetings throughout 2017 to engage with specific communities on particular topics, and would very much welcome hearing from anyone interested in being involved in these or in engaging with the Forum in any other way. We will be announcing further details of these on the Forum’s web pages soon.

If you are interested in joining up with the work of the Forum throughout 2017, please contact me on b.johnson@hefce.ac.uk – I’d be delighted to hear from you.


Ben Johnson is a research policy adviser at the Higher Education Funding Council for England, secretary to the UK Forum for Responsible Research Metrics and a member of the G7 expert group on open science.

He has responsibility for policy on open access, open data, research metrics, technical infrastructure and research sector efficiency within universities in England. In recent years, he co-authored The Metric Tide (a report on research metrics), developed and implemented a policy for open access in the UK Research Excellence Framework (REF), and supported Professor Geoffrey Crossick’s project and report to HEFCE on monographs and open access. He is a member of the UK’s open data forum and co-authored the forthcoming UK Open Research Data Concordat. In addition, he is currently part-seconded to the Department for Business, Energy and Industrial Strategy to work on reforming the research and innovation landscape.

REF consultation: Lis-Bibliometrics response

The four UK higher education funding bodies are consulting on proposals for the next Research Excellence Framework. Thank you to all Lis-Bibliometrics members who have contributed their thoughts on this. Here is a draft response the Lis-Bibliometrics Committee intends to submit on behalf of the group. If you have any last-minute comments, please contact me or share them via the list as soon as possible. We’ve decided to respond only to consultation question 18:

Q.18 Do you agree with the proposal for using quantitative data to inform the assessment of outputs, where considered appropriate for the discipline? If you agree, have you any suggestions for data that could be provided to the panels at output and aggregate level?

We agree that quantitative data can support the assessment of outputs where considered appropriate by the discipline.  Any use of quantitative data should follow the principles for responsible use of metrics set out in the Metric Tide and the Leiden Manifesto.

  • Disciplinary difference, including citation patterns varying by output type, must be taken into account.
  • Data should only be used if it offers a high standard of coverage, quality and transparency. Providing data from a range of sources (e.g. Scopus, Web of Science, Google Scholar) would allow the panel to benefit from the strengths of each source whilst highlighting the limitations.
  • Known biases reflected by bibliometric indicators (e.g. around interdisciplinary research and gender) should be taken into account.
  • A range of data should be provided to avoid incentivizing undesirable side effects or gaming by focusing attention on a single indicator.
  • Given the skewed distribution of citations, and the ‘lumpiness’ of citations for recent publications in particular, we recommend measures of uncertainty be provided alongside any citation data. At the very least, false precision should be avoided.
  • In addition to citation indicators, panels should take into account the number of authors of the output.

Panels should receive training on understanding and interpreting the data and be supported by an expert bibliometric advisor.

We do not consider the field-weighted citation impact indicator appropriate for the assessment of individual outputs: as an arithmetic-mean-based indicator it is too heavily skewed by small numbers of ‘unexpected’ citations. Furthermore, its four-year citation window would not capture the full citation impact of outputs from early in the REF period. The use of field-weighted citation percentiles (i.e. the percentile n such that the output is among the top n% most cited outputs worldwide for its subject area and year of publication) or percentile bands (as used in REF2014) is preferable. Percentile-based indicators are more stable and easier to understand, as the “performance” of papers is scaled from 1 to 100, but they can be skewed by large numbers of uncited items.
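
To illustrate the percentile idea concretely, the short sketch below computes that indicator for a single output against its subject-area and publication-year cohort. It is a minimal, hypothetical example rather than the method used by any particular data provider: the function name, the tie-handling convention (outputs with equal citation counts treated as equally cited) and the citation counts are all assumptions.

```python
from bisect import bisect_left

def citation_percentile(output_citations: int, cohort_citations: list[int]) -> float:
    """Return the percentile n such that the output is among the top n% most
    cited outputs in its subject-area/year cohort (smaller n = more highly cited).
    Illustrative only; real providers differ in how they treat ties."""
    cohort = sorted(cohort_citations)  # ascending citation counts
    # cohort outputs cited at least as often as this output
    at_or_above = len(cohort) - bisect_left(cohort, output_citations)
    return 100 * at_or_above / len(cohort)

# A toy cohort of eight outputs: an output with 12 citations sits in the top 37.5%
print(citation_percentile(12, [0, 0, 1, 3, 5, 12, 20, 40]))  # -> 37.5
```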

Output-level citation indicators are less useful for recent outputs. Consequently, it might be tempting to look at journal indicators. This temptation should be resisted! Given the wide distribution of citations to outputs within a journal, and issues of unscrupulous ‘gaming’, journal metrics are a poor proxy for individual output quality. Furthermore, use of journal metrics would incentivize the pursuit of a few ‘high impact’ journals to the detriment of timely, diverse and sustainable scholarly communications.

Use of aggregate-level data raises the question of whether the analysis is performed only on the submitted outputs, or on the entire output of the institution during the census period. The latter would provide a more accurate picture of the institution’s performance within the discipline, but automatically mapping outputs to REF units of assessment is extremely challenging. Furthermore, it would be hard to disaggregate papers written by staff who are not eligible for submission to the REF.

Katie Evans, on behalf of the Lis-Bibliometrics Committee

Note: This replaces an earlier draft REF consultation response posted on 1st March 2017.   

My double life: playing and changing the scholarly communications game. By Lizzie Gadd

I love my job. I work as a “Research Policy Manager (Publications)” at Loughborough University, and I spend my time understanding and advising on how we can improve the quality and visibility of our research. However, the strategies for achieving this aren’t as straightforward as you might think, and increasingly I feel like I’m leading a double life, seeking both to play and to change the scholarly communication game.


‘Communication’ by Jackie Finn-Irwin CC-BY 2.0

 

What do I mean by this? Well, the game we’re in is one where publications mean prizes. If others rate them (e.g. in the REF) or cite them (as measured by the University League Tables), you win. To be a winner, we know you need to produce quality research (of course), collaborate internationally (it improves quality and visibility), and publish in those journals that are indexed by the tools that expose your research to the world and, importantly, also do the citation measuring for the aforesaid REF and University League Tables. And although there is a huge backlash against using journal metrics as an indicator of the quality of the underlying research, there is no doubt that getting a paper into a journal with extremely rigorous quality standards still means something to academics and their peers.

So the current game is inherently tied up with journal publications. And there are two well-rehearsed reasons why this is not a good thing. The first is that journals are expensive – and getting more expensive. The second is that journals are slow at communicating research results: publication delays of years are not uncommon. (There are of course other objections to journals, not least the murky waters around how an end-user may re-use journal content, but I won’t go into those here.)

This is why we need to change the game. And the best option we have for changing the game is to keep producing quality research and collaborating internationally, but to also create new means of scholarly communication that are neither expensive, nor slow.

Some might argue that you can have it both ways: publish in a journal with a liberal green open access policy. This will allow you to provide immediate access to the research through the preprint, and access to the peer-reviewed research through the postprint. And to be honest, this is the compromise we currently go for. But this form of open access is showing no signs of reducing the cost of subscriptions. And not all journals have liberal green open access policies. And not all academics want to release their preprint until it has been accepted by a journal, in case the paper is rejected – which rather defeats the object.

Now there are many alternative models of scholarly communication that ARE inexpensive and speed up publication.  These include preprint archives or ‘diamond’ open access journals that charge neither the author to submit nor the reader to read.  However, the problem is that these are not picked up by the citation benchmarking tools.  This is either because they are not journals at all (preprint archives) or because they are new entries to the market so have yet to prove their significance in the field and be selected for inclusion.

So what does a Research Policy Manager (Publications) do? Well, it seems to me that I have two equally unsatisfactory options. The first is to continue to play the journals game in order to ensure the citedness of our research is captured by the key citation benchmarking tools, while encouraging OA as a means of improving the visibility and discoverability of our work. Whilst this isn’t going to speed up publication or reduce our costs, I think the lure of a high-quality journal may well be a driver of research quality – which is very important.

The second option is to dramatically shift our focus to new forms of scholarly communication that will speed up publication and reduce our costs, such as preprint archives and diamond OA journals. In doing so, we’d need to hope that the well-documented citation advantage for immediately open research will do its thing, and that when the research is considered by the REF, the panels really will just focus on the content as they promise, and not on the reputation of the vehicle it is published in – always bearing in mind that any citations the research does accrue will only be picked up by open tools such as Google Scholar, and not by the tools that supply the REF – or the league tables.

So to answer my own question: what does a Research Policy Manager advise in these circumstances? Personally, I try to live with one whilst lobbying for the other, and as far as possible I seek to ameliorate any confusion faced by our academics. This is easier said than done – certainly when faced with later-career academics who can remember a time when research was optional and where you published was entirely your business. To now be faced with a barrage of advice around improving the quality, accessibility, visibility and citedness of your work, bearing in mind that the routes to these are often in conflict with each other, is a constant source of agony for both them and me.

I recognise that we have to play the game. Our reputation depends on it.  But we also have to change the game and provide quicker and more affordable access to (re-usable) research results. At the risk of sounding over-dramatic, the future may depend on it.

 

Elizabeth Gadd

Job Opportunity: University of Greenwich is looking for a Research Outputs Manager!

Greenwich Research and Enterprise

Location:  Greenwich
Salary:  £38,183 to £46,924 plus £4546 London weighting
Contract Type:  Open
Closing Date:  Monday 20 March 2017
Interview Date:  To be confirmed
Reference:  1346

Greenwich Research and Enterprise (GRE) is the University’s central office responsible for developing a supportive research culture and establishing links with industry and enterprise. GRE works across four service areas: research services, business development and enterprise services, commercial and IP services, and business support services.

The university is investing in expanding its research services and recognises that high-quality support is pivotal to its research environment. It is now recruiting a Research Outputs Manager to join the GRE Research Development Services team at Greenwich.

This role will lead the development of library services as they relate to research outputs and research data management in order to meet the needs of the University’s research community, external research funders, and the requirements of the Research Excellence Framework. In particular, this will involve overseeing the ongoing development of the Institutional Repository – GALA (Greenwich Academic Literature Archive) – ensuring its effective use for Open Access requirements, and the development and implementation of a Research Data Management Policy & Framework.

Please see the Job Description & Person Specification for further details and apply using the online application form.

 

Journal Metrics in Context: How Should a Bibliomagician Evaluate a Journal? Guest post by Massimo Giunta

In the world of academia and research, “publish or perish” has become more complicated than ever. It’s not enough merely to publish; one has to publish in a high-impact journal, in the hopes of getting noticed and, perhaps more importantly, getting funded for further research.

Journals image: CC BY david_17in

Institutions are urging their researchers to publish in high-impact journals. Library collections are on tight budgets, so librarians want only the best journals for their collections. Emphasis on impact and quality has given rise to a whole new realm of metrics by which to measure a journal. But which metric is best? What’s the magic bullet to definitively name a journal as The Best?

One of the most well-known journal metrics is the Journal Impact Factor (JIF). It seems like the JIF has invaded every aspect of the academic researcher’s world, but did you know it was developed for a very specific use?

JIF is defined as “a ratio of citations to a journal in a given year to the citable items in the prior two years.” It was intended as a simple measure for librarians evaluating the journals in their collections. In fact, the entirety of the Journal Citation Reports (JCR) was developed for this purpose in the 1970s. Over the years, its utility to other markets has emerged – most importantly to publishers and editors of journals. It has also been misused to evaluate researchers, but Clarivate Analytics, formerly the IP & Science business of Thomson Reuters, has always been quite clear that JCR data, and the JIF in particular, should not be used as proxy measures for individual papers or people.
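
To make that ratio concrete, the standard two-year calculation can be written out as follows; the year and the figures in the example below are purely illustrative.

$$
\mathrm{JIF}_{2016} = \frac{\text{citations received in 2016 by items published in 2014 and 2015}}{\text{number of citable items published in 2014 and 2015}}
$$

So a journal whose 2014 and 2015 output of 250 citable items attracted 500 citations in 2016 would have a 2016 JIF of 500 / 250 = 2.0.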

So is JIF the be-all and end-all of journal evaluation? No. The truth is, there is no one metric that can be used to name the best journals. Why not? “Best” is subjective, and so are metrics.

Sticking with the JIF for now, anyone seeking to evaluate a journal’s place in the research world should not simply look at its JIF; that number, on its own with no context, has limited meaning. Even in context, the JIF is just one number; the JCR contains an entire suite of metrics for journal evaluation, and other parties also offer journal evaluation metrics, such as the SCImago Journal Rank, or the Eigenfactor metrics, which are produced by Clarivate Analytics in partnership with the University of Washington.

Both the Eigenfactor and Normalized Eigenfactor scores look at the data in a different way from the JIF: they assess the total importance of a scientific journal in the context of the entire body of journals in the JCR. While the JIF uses two years of data and is limited to the field in which a journal is classified, Eigenfactor scores draw on the entire corpus of journals and five years of data. A journal could therefore be ranked lower by its JIF than by its Eigenfactor (or Normalized Eigenfactor).
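
For readers who like to see the mechanics, here is a toy sketch of the eigenvector-centrality idea that underlies Eigenfactor-style scores. It is a deliberate simplification built on assumptions (three hypothetical journals, invented citation counts, and none of the refinements of the published algorithm, such as excluding journal self-citations or weighting by article volume); it merely shows why a citation from a highly cited journal carries more weight than one from a rarely cited journal.

```python
import numpy as np

# Toy cross-citation matrix over a five-year window for three hypothetical
# journals A, B and C: C[i, j] = citations from journal j to journal i.
# (Invented numbers; the diagonal is left at zero as a crude stand-in for
# ignoring journal self-citation.)
C = np.array([[0.0, 8.0, 2.0],
              [5.0, 0.0, 1.0],
              [3.0, 4.0, 0.0]])

P = C / C.sum(axis=0)          # each citing journal distributes one unit of influence

alpha, n = 0.85, P.shape[0]    # damping factor, number of journals
score = np.full(n, 1.0 / n)    # start with equal influence
for _ in range(100):           # PageRank-style power iteration
    score = alpha * (P @ score) + (1 - alpha) / n
score /= score.sum()

print(dict(zip("ABC", score.round(3))))  # relative 'importance' of each journal
```

The point of the iteration is that a journal's score depends on the scores of the journals citing it, which is exactly the "total importance in the context of the whole corpus" idea described above.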

So which is better: Journal A with a higher JIF or Journal B with a higher Eigenfactor? Looking at just these two metrics will not answer the question. Perhaps Journal B also has a higher Article Influence Score (a score greater than 1 shows that a journal’s articles tend to have an above-average influence). Perhaps Journal A also has a higher Percent Articles in Citable Items, meaning it tends to publish more original research than reviews. Looking outside the JCR, perhaps Journal A has had a higher citation count in the past year, whereas Journal B performs more favourably on altmetrics such as page views or social media mentions.

Therefore, any statements about a journal’s impact need to include context. When you evaluate a journal, you should look at all of its metrics for the most complete picture, and this picture will vary by field and year.

Bottom line? While there is no magic bullet to determine the best journals, with the wealth of journal metrics out there, and whatever might come down the pipeline in the future, evaluating journals in context is not as difficult as you might think!

 

Further Reading:

  1. Best Practices in Journal Evaluation
  2. All About the Eigenfactor
  3. JCR Blog Series
  4. JCR product information

 

Massimo Giunta is Account Manager UK & Ireland for Clarivate Analytics

 

What are you doing today?

The Lis-Bibliometrics-commissioned, Elsevier-sponsored bibliometric competencies research project is seeking to develop a community-supported set of bibliometric competencies, particularly for those working in libraries and other related services. You can take part by completing the bibliometrics competencies survey at: https://survey.shef.ac.uk/limesurvey/index.php?sid=27492&lang=en

To get a flavour of the variety of bibliometric work going on, I asked fellow Lis-Bibliometrics Committee members what they’re doing today:

“Today I’m helping a researcher clean up his very muddled and duplicated Scopus Author IDs and link his outputs to his ORCID iD. I’m also thinking about how best to benchmark the output of our Law school against our competitors for undergraduate students.” Karen Rowlett, Research Publications Adviser, University of Reading

“Today I’m discussing the release of our Responsible Metrics Statement (now approved by Senate) with our PVCR; running some analyses on SciVal which look at the impact of Loughborough’s conference publications on our overall citation performance; and presenting at a cross-university meeting aimed at exploring how to improve the visibility of our research.” Elizabeth Gadd, Research Policy Manager (Publications), Loughborough University

“Today I am preparing a presentation on Metrics for one of the teams, and working on analysing the Leiden Ranking data.” Sahar Abuelbashar, Research Metrics Analyst, University of Sussex

Meanwhile, I’m advising researchers on using citation metrics in grant applications.  What are you doing today?

Katie Evans

Research Analytics Librarian, University of Bath