Federal research funders endorse plan to improve research assessment

Lindsay Borthwick
December 17, 2019

Five major Canadian research funders have signed the San Francisco Declaration on Research Assessment (DORA), a set of recommendations for improving how the output of scientific research is evaluated. DORA was developed in 2012 by a group of journal editors and publishers aiming to move the research community beyond the Journal Impact Factor, which had become a surrogate for research quality and was widely used in funding, hiring and promotion decisions.

“The signatories are committed to the meaningful assessment of excellence in research funding and to ensuring that a wide range of research results and outcomes are valued as part of the review process, as noted in the recent joint statement,” the Natural Sciences and Engineering Research Council of Canada (NSERC) said in a statement to RE$EARCH MONEY.

The other signatories were the Canada Foundation for Innovation (CFI), the Canadian Institutes of Health Research, Genome Canada and the Social Sciences and Humanities Research Council of Canada. (CFI had previously signed DORA in 2018.)

The signing comes at a time of conflict between what academia values and what the system incentivizes and rewards. The research community is increasingly embracing open science, a movement to make scientific research and data widely accessible. However, existing approaches to research assessment are impeding the widespread adoption of open science practices.

Canada's move aligns with recent actions taken by research funding agencies around the world, particularly in Europe.

Just this week, five research funders and public knowledge institutions in the Netherlands, four of which have signed DORA, issued a joint plan to overhaul the existing rewards and recognition system. And in a presentation at the Inaugural Open Science Symposium in Montreal, Alain Schuhl, Deputy CEO for Science at France’s Centre National de la Recherche Scientifique (CNRS), outlined the agency’s new approach to research evaluation as part of its Open Science Roadmap. CNRS will evaluate research results on their own merits rather than on whether they were published in a prestigious journal, and all types of scientific outputs will be considered, he said.

“We have to convince everyone that this is a necessary revolution,” Schuhl concluded.

Impact factor misuse

The Journal Impact Factor (JIF) has become a key part of research assessment at many universities and national peer-review committees, even though it was originally developed as a tool to help librarians decide which journals to purchase, not as an indicator of the quality of research or researchers.

Several studies published this year by Juan Pablo Alperin, co-director of the Scholarly Communications Lab at Simon Fraser University, underscore how entrenched the JIF has become. In an online survey exploring the publishing decisions of faculty at 55 universities in the United States and Canada, 36 percent of respondents said a journal’s impact factor was “very valued” in review, promotion and tenure (RPT) decisions.

Another study collected and analyzed RPT documents from 129 universities in the US and Canada. Alperin and his colleagues found that 40 percent of doctoral, research-intensive institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. The results "raise specific concerns that the JIF is being used to evaluate the quality and significance of research, despite the numerous warnings against such use," the study's authors concluded. "We hope our work will draw attention to this issue, and that increased educational and outreach efforts, like DORA... will help academics make better decisions regarding the use of metrics like the JIF."

None of the universities in the sample had signed DORA, though the declaration has been endorsed by more than 1,500 organizations, including funding agencies, research institutes and publishers, and more than 15,000 individuals. Together with the Leiden Manifesto for Research Metrics and The Metric Tide report, both published in 2015, DORA has become a catalyst for improving research assessment, including in Canada.

Shift in research assessment

Indeed, DORA has influenced approaches to peer review at the Tri-agencies for years. For example, NSERC modified its guidance for peer reviewers in accordance with DORA principles in 2014, "by stating that the assessment of researcher excellence must be based on the quality and impact of all contributions, not only on the number of publications or conference presentations. Impact does not refer to quantitative indicators such as the impact factor of journals or h-index, but to the influence that results have had on other researchers, on the specific field, the discipline as a whole, or on other disciplines."

But as Alperin's research suggests, more work needs to be done to change research assessment procedures and the culture that surrounds them. According to NSERC, the agency is currently exploring ways to further integrate DORA principles.

Stefanie Haustein, assistant professor of library and information studies at the University of Ottawa, is an expert on both traditional and non-traditional indicators of research output and their use. In an interview with RE$EARCH MONEY, she said efforts are underway to improve existing metrics, for example, by accounting for the context of citations within scientific articles, including where they occur and whether they are positive, negative or neutral references. "I think that will be coming up in the next five years or so because now, with more and more open-access publications, we can get the full text of a study and analyze it. That could make these measures more meaningful," she said.
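
To make that idea concrete, here is a minimal, hypothetical sketch of what citation-context analysis might look like once full texts are available. It is a toy illustration, not a description of any existing tool: the cue-word lists, the bracketed-number citation convention and the function names are all assumptions made for this example.

```python
# Toy sketch of citation-context analysis: given the full text of an
# open-access article, find sentences that cite a reference and label
# each as positive, negative or neutral. Cue words and citation style
# are illustrative assumptions, not part of any published system.

import re

POSITIVE_CUES = {"confirm", "support", "consistent with", "build on", "extend"}
NEGATIVE_CUES = {"contradict", "fail to replicate", "dispute", "overestimate"}

def classify_citation_sentence(sentence: str) -> str:
    """Label a citing sentence as 'positive', 'negative' or 'neutral'."""
    text = sentence.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in text for cue in POSITIVE_CUES):
        return "positive"
    return "neutral"

def citation_contexts(full_text: str) -> list[tuple[str, str]]:
    """Return (sentence, label) pairs for sentences containing a
    bracketed numeric citation marker such as [12], a simplifying
    assumption about the citation style."""
    sentences = re.split(r"(?<=[.!?])\s+", full_text)
    cited = [s for s in sentences if re.search(r"\[\d+\]", s)]
    return [(s, classify_citation_sentence(s)) for s in cited]

if __name__ == "__main__":
    sample = ("Our results are consistent with earlier findings [3]. "
              "However, we fail to replicate the effect reported in [7]. "
              "Participants were recruited as described in [9].")
    for sentence, label in citation_contexts(sample):
        print(f"{label:>8}: {sentence}")
```

A production system would presumably replace the cue-word matching with trained language models and handle many citation styles, but the basic pipeline, extracting citing sentences and then classifying their polarity, reflects the kind of full-text analysis Haustein describes.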

Haustein, in collaboration with Alperin, is also leading the Metrics Literacy project, which aims to reduce the misuse of indicators, such as the Journal Impact Factor. She intends to develop and share online resources aimed at educating researchers and research support staff about how to apply and interpret scholarly metrics.

"These indicators are being used, so I think that all of academia, including researchers, but also staff, managers and policymakers, need to be aware of what these metrics can do, but also, what their limitations are."

R$

