The third lecture of the Swiss Year of Scientometrics explored the relationships between Open Science and Research Assessment

Gilles Dubochet introducing the lecture (EPFL / Alain Herzog)

On 7 February 2024, the third Swiss Year of Scientometrics (SYoS) lecture took place at EPFL, bringing together over 40 participants. After a welcome address by David Johann (Project Lead of SYoS) and a short introductory talk by Gilles Dubochet (Head of Open Science at EPFL), Elizabeth Gadd gave an inspiring lecture on the relationships between Open Science and Research Assessment.

Gadd began by discussing the impact of Research Assessment practices on open research. In a setting where assessment is strongly based on articles published in prestigious journals, and where journal prestige is tied to high Article Processing Charges [1], researchers are incentivised to favour publishers that prioritise citedness over those promoting openness. Moreover, researchers who wish to engage with forms of open research that are not assessed must work harder to succeed in a highly competitive environment.

In order to “fix” this “incentive system”, alternative forms of assessment have been called for, especially in international statements and recommendations on Research Assessment. Research Assessment reform is hence often rooted in Open Science. Among the most prominent examples are the DORA declaration (2013), the Leiden Manifesto (2015), The Metric Tide (2015), the Hong Kong Principles (2020) and CoARA (2022) [2].

Before considering how to assess Open Science, Gadd discussed what should not be done, critically addressing some propositions that appear in documents on Research Assessment issued by recognised organisations. She began with the idea that, to enable Open Science, open research practices should be rewarded [3]. After an initial reservation about introducing a measure of openness into Research Assessment, since such a measure would not escape Goodhart’s law, Gadd argued that, in an already over-evaluated system, assessing openness might not be the appropriate incentive for it. She also asserted that openness can be a direct replacement neither for citedness nor for quality. Finally, she underlined that institutions wanting to monitor Open Science currently have few choices other than commercial services, and that, while the Research Assessment reform focuses mainly on universities, other stakeholders such as funders and governments should be included in the debate, since they can play a significant role in making open data available.

After having dealt with ways Open Science should not be assessed, Gadd turned to the context in which, and the means by which, research could responsibly be assessed. To this end, she introduced the SCOPE framework for Research Evaluation, which consists of three principles and five stages of design.

Audience listening to Elizabeth Gadd (EPFL / Alain Herzog)

The fact that assessing Open Science to incentivise it is not always the appropriate course of action can be deduced from the first of the three SCOPE principles, which asserts that evaluation should only be carried out where necessary. The next two principles emphasise the need to co-design assessment with the evaluated and to draw on evaluation expertise. After introducing these three principles, Gadd went through each of the five stages of design, each linked to one letter of the framework’s acronym: S stands for “Start with what you value”, C for “Context considerations”, O for “Options for evaluating”, P for “Probe deeply” and E for “Evaluate your evaluation”. Gadd discussed how each of these stages can concretely be applied to topics related to Open Science.

Gadd started with the first stage, identifying values. She highlighted that it is not openness itself that should be valued, but rather what it leads to, such as “improving quality”, “accelerating impact” and “enhancing visibility” of research.

She continued with considerations on the next stages of the framework. For the letter C, “Context considerations”, Gadd underlined on the one hand that analysing universities’ engagement with Open Access is low-risk, and that monitors such as the Open Access dashboard provided by COKI or the Open Science Observatory provided by OpenAIRE, which give insights at national or institutional level, appear to be “appropriate and helpful”. On the other hand, the use of bibliometric data to rank individual researchers without taking context into account, as occurred in a “Transparency Leaderboard” a few years ago, appears “hugely problematic”.

Other examples concerned the letter P, which stands for “Probe deeply”. Gadd showed how the choice of data can lead to geographical discrimination regarding openness, and how unfair evaluation can arise in some disciplines if Open Science evaluation rests on an insufficiently probed procedure. She then spoke about potential unintended consequences, such as openness becoming the next “big competition” among Higher Education Institutions, which she called the “ultimate irony” given the actual ambitions of open research. She also mentioned recent, more encouraging developments for openness advocates: Sorbonne University’s abandonment of proprietary bibliometric products [4], the decision of the CNRS to unsubscribe from a commercial bibliographic database [5], the introduction of an Open Edition of the CWTS Leiden Ranking [6], and the soon-to-be-launched Barcelona Declaration on Open Research Information [7].

In her conclusion, Gadd insisted that openness should be valued for what it brings, not for what it is. Instead of rewarding openness and developing new assessment tools, she encouraged incentivising openness and making it a standard. This could be achieved by establishing long-term expectations of Higher Education Institutions, by implementing interim periods of awareness raising, and by striving to require and enable open research practices.

The lecture was followed by a panel discussion moderated by Gilles Dubochet, during which panellists discussed Gadd’s propositions. Participants had the opportunity to continue the conversation in a convivial atmosphere over an aperitif [8].



[1] See also figure 2 of Morrison, Heather; Borges, Luan; Zhao, Xuan; Kakou, Tanoh Laurent; & Shanbhoug, Amit Nataraj (2022). Change and growth in open access journal publishing and charging trends 2011–2021. Journal of the Association for Information Science and Technology, 73(12), 1793–1805. An OA version of the article is available through the University of Ottawa institutional repository.

[2] For further details and examples, see also figure 1 (p. 58) of Curry, Stephen; Gadd, Elizabeth; Wilsdon, James (2022). Harnessing the Metric Tide: indicators, infrastructures & priorities for UK responsible research assessment. Loughborough University. Report.

[3] Such a recommendation can for instance be found in the UNESCO Recommendation on Open Science (2021) and in a scoping report from the European Commission which underpins CoARA, Towards a reform of the research assessment system (2021).

[4] “Sorbonne University unsubscribes from the Web of Science”. Published on December 8, 2023, updated on December 11, 2023, retrieved on March 6, 2024.

[5] “The CNRS has unsubscribed from the Scopus publications database”. Published on January 11, 2024, retrieved on March 6, 2024.

[6] “Introducing the Leiden Ranking Open Edition”. Published on January 30, 2024, retrieved on March 6, 2024.

[7] “Barcelona Declaration on Open Research Information: Launching March 2024”. No date, retrieved on March 6, 2024.

[8] I gratefully thank Dr. Julian Dederke, Dr. David Johann and Dr. Kathrin Thomas for their helpful comments during the revision of this text.

Simon Willemin

Simon Willemin is a data scientist at the ETH Library. He works in the group Knowledge Management and is part of the "Swiss Year of Scientometrics" project team.
