Science and Research Assessment
Reform of the approach to the assessment of science and research at UP
In July 2023, Palacký University Olomouc (UP) management signed the CoARA (Coalition for Advancing Research Assessment) agreement, in which the university officially declared its commitment to reform its approach to science and research assessment.
UP has thus joined hundreds of other institutions around the world that are now determined to implement the CoARA commitments. The university should gradually introduce steps leading to a responsible assessment of science and research: one that does not focus only on the number of publications, their impact factor, or researchers' h-indices, but recognises the broadest disciplinary diversity as well as the importance and variety of the roles that academic and scientific staff play in the research process.
Of CoARA's ten commitments, the first four, known as the core commitments, are the most important: 1) Recognise the diversity of contributions and practices in science and research; 2) Base research assessment primarily on qualitative evaluation, for which peer review is central, supported by the responsible use of quantitative indicators; 3) Abandon inappropriate uses of publication-based metrics (especially the Journal Impact Factor, Article Influence Score, and h-index) in research assessment; and 4) Avoid the use of rankings of research organisations in research assessment.
Within one year of signing the agreement, an Action Plan was to be developed outlining how the CoARA commitments would be implemented over the following five years. The Action Plan was drawn up by a newly formed working group comprising representatives of the UP Research Concepts and Support Office, the UP Vice-Deans for R&D, and a representative of the Czech Advanced Technology and Research Institute (CATRIN). The document is fully public and is intended to be read by all UP employees, since the change in the approach to science and research assessment should take place both from the top down and from the bottom up, i.e. starting with each individual.
Do you want to know in more detail how we need to change our approach? Please read on.
First commitment: Diversity
Changes in assessment procedures should allow for the recognition of broad diversity:
- in valuable contributions that researchers make to science and society, including a variety of results beyond journal publications and regardless of the language in which they are communicated;
- in practices that contribute to the robustness, openness, transparency, and inclusiveness of research and the research process, including peer review, teamwork, and collaboration;
- in activities including teaching, tutoring, supervision, training, and mentoring.
It is also important that the assessment facilitates the recognition and appreciation of the various roles and careers in research, including, for example, data managers, software engineers, data scientists, technical staff, public outreach workers, science diplomats, science advisors, and science communicators and popularisers. It is acknowledged that current practice is often too narrow and restrictive, so the aim cannot be to replace the narrow criteria we want to move away from with other, equally narrow criteria. Rather, the aim is to enable organisations to broaden the range of what they value in research, while recognising that this may vary across disciplines and that no individual researcher should be expected to engage in all activities at once.
Second commitment: Quality
This commitment will enable a shift towards research assessment criteria that focus primarily on quality, while recognising that the responsible use of quantitative indicators can support assessment where such use is meaningful and relevant, which depends on the context.
Peer review is the most robust method of quality assessment known, and it has the advantage of being in the hands of the research community. It is important that peer-review processes are designed to meet basic principles: rigour, transparency, impartiality, appropriateness, confidentiality, integrity and ethical considerations, gender equity, equality, and diversity. To address the biases and imperfections to which any method is prone, the research community continually re-assesses and improves peer review. In addition, revised and possibly new criteria, tools, and procedures appropriate for quality assessment could be explored. Moving to evaluation practices that rely more heavily on qualitative methods may require additional effort from researchers. Researchers should be compensated for this effort, and their contributions to peer review should be valued as part of their career progression.
Third commitment: Indexes
This commitment will limit the dominance of a narrow set of quantitative metrics based on journals and publications in research assessment. In particular, it means moving away from the use of metrics such as the Journal Impact Factor (JIF), the Article Influence Score (AIS), and the h-index (the largest number h such that a researcher has h papers each cited at least h times) as proxy indicators of quality and impact.
Their “inappropriate uses” include:
- relying solely on author-based metrics to assess quality and/or impact (e.g. counting articles, patents, citations, or grants);
- evaluating results based on indicators related to the place of publication, format, and/or language;
- relying on any other metrics that do not properly capture quality and/or impact.
Fourth commitment: International rankings
We acknowledge that the international rankings most often referred to by research organisations are not currently “fair and accountable”. This commitment will help prevent metrics used in international rankings, which are inappropriate for assessing researchers, from being carried over into the assessment of research and researchers. It will help the research community and research organisations regain autonomy in shaping their assessment procedures, rather than having to follow criteria and methodologies set by external commercial companies.
Where ranking approaches are considered unavoidable, as may be the case for forms of assessment beyond the scope of this agreement, such as comparative evaluation and performance evaluation of countries and institutions, the methodological limitations of such approaches should be recognised, and institutions should avoid “trickle-down” effects on the assessment of research and researchers.