The wonderful world of Open Science? The case of eLife

The case of eLife, a life science journal, shows that Open Science is no easy task and raises many open questions and uncertainties in the assessment of research. This year, eLife drastically changed its peer review procedure, for example by abolishing editorial decisions to accept or reject a manuscript. Perhaps disciplines that focus on the assessment of individuals, such as psychology or medicine, can offer some perspectives on how to deal with uncertainty in research assessment. A broader set of information is necessary, one that includes peer review ratings and a set of bibliometric indicators beyond the classical citation impact indicators. The quality of this information in terms of reliability, validity, and fairness should be explicitly taken into account.

Open Science

Research is undergoing profound changes as a result of the Open Science movement, which affects both research evaluation and the use of evaluative bibliometrics, in Switzerland as elsewhere. According to the Coalition for Advancing Research Assessment (CoARA, https://coara.eu/), research output should be considered in all its diversity and variety, which implies that qualitative judgments are also required in research assessment. Peer review should therefore play a central role.

The case of eLife

CC-BY by eLife (https://elifesciences.org/)

That these ideas do not meet with a positive response everywhere in the scientific community is illustrated by the current case of the journal eLife, which Alison Abbott (2023) recently reported on in Nature; her article serves as the source here. The journal eLife was set up in 2012 to provide an alternative to Nature and Science in the field of life sciences. This year, its peer-review process was changed drastically (https://elifesciences.org/about/peer-review): all submissions (preprints) that pass an initial editorial triage are peer reviewed, and the reviews are returned to the authors. The authors can then decide whether to revise their paper to address the reviewers' comments. There is no final editorial decision (accept or reject), which should speed up publishing. Instead, the authors decide when their peer-reviewed article is published as a regular journal article (Version of Record). The focus is thus on the article itself and its content rather than on the title and prestige of the journal. The reviews do not get lost but are posted alongside the preprint on the journal's website.
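To make the contrast with the traditional accept-or-reject model explicit, the following minimal sketch models the publishing flow described above as a small state machine in Python. The state names and the advance helper are hypothetical constructs for illustration, not part of eLife's actual system; the point is that the only terminal state is reached by a decision of the authors, not the editors.

```python
from enum import Enum, auto

class State(Enum):
    SUBMITTED = auto()          # preprint submitted to the journal
    UNDER_REVIEW = auto()       # passed editorial triage, sent out for review
    REVIEWED = auto()           # reviews returned and posted alongside the preprint
    VERSION_OF_RECORD = auto()  # authors declare the final journal article

# Allowed transitions in this simplified model; note the absence of any
# editor-issued ACCEPTED or REJECTED terminal state.
TRANSITIONS = {
    State.SUBMITTED: {State.UNDER_REVIEW},
    State.UNDER_REVIEW: {State.REVIEWED},
    # Authors may keep revising (stay in REVIEWED) or publish, at their discretion.
    State.REVIEWED: {State.REVIEWED, State.VERSION_OF_RECORD},
}

def advance(state: State, target: State) -> State:
    """Move to the next state if the transition is allowed in the model."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"transition {state.name} -> {target.name} not allowed")
    return target

# Example path of a single manuscript through the model.
s = State.SUBMITTED
for nxt in (State.UNDER_REVIEW, State.REVIEWED, State.VERSION_OF_RECORD):
    s = advance(s, nxt)
print(s.name)  # VERSION_OF_RECORD
```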

However, the decision to change the procedure did not meet with unanimous approval among the authors and editors of eLife. Some editors complained that, without the option of rejecting a paper, reviewers' suggestions for revisions might be ignored and the overall quality of the journal would suffer. The initial triage by the editors would become disproportionately important, as the editors have to decide which papers are sent out for review in the first place. More generally, this raises the question of what counts as established knowledge at a given point in time.

Open questions of Open Science

The Open Science movement is driven by digitization and undoubtedly provides important impulses for improving research (e.g., replicable research results, open access to data and publications, quality over quantity, fewer incentives for scientific misconduct). But, as the case above shows, it also creates a great deal of uncertainty about the criteria of research evaluation and about the peer review process itself. How should scientific competition and the selection of scientific personnel be organized in the face of increasingly scarce funds and positions? What should young scientists look for in their career planning? Doesn't the focus on the content of publications and proposals further overwhelm an already overburdened and much-criticized peer review system, and slow it down? Can evaluative bibliometrics really be dismissed simply as a misguided development that sets the wrong incentives, or does it not also reflect a research reality that relies heavily on reputation, for example the reputation of journals as measured by the journal impact factor? Do the current challenges (e.g., climate catastrophe, energy crisis, poverty) demand even more effort on the part of science, and more competition among scientists?

A possible way out: Broader information and quality of information

The sciences that deal with individuals and their assessment, such as psychology or medicine, have so far hardly taken part in the discussion about research assessment. In medical or psychological diagnostics, the importance of the case history (symptomatology, qualitative case description) and of diagnostic tests is emphasized in equal measure against the backdrop of diagnostic uncertainty: hypotheses derived from the anamnesis can be further validated by tests. In the same way, research evaluation would need to be based on a broad range of information, taking the varying quality of that information into account. In this way, possible biases could be identified and uncertainties in decision making might be reduced; bibliometrics could serve as a corrective. These efforts, however, require a broader system of bibliometric indicators than the classical citation impact indicators offer, such as altmetrics, indicators for the Sustainable Development Goals, indicators for diversity and interdisciplinarity, or measures of the disruptiveness of research (a minimal sketch of such an indicator profile follows below).
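What such a broader indicator profile could look like is sketched below in Python. All indicator names and values are hypothetical, chosen only for illustration; the point, in line with the measurement perspective of Mutz (2022), is that heterogeneous indicators are standardized and reported side by side as a profile rather than summed into a single composite score.

```python
import statistics

# Hypothetical indicator values for three researchers A, B, C (illustration only).
# Rows mix classical citation impact with the broader indicators named above.
indicators = {
    "citation_impact":  {"A": 1.30, "B": 0.80, "C": 1.00},
    "altmetric_score":  {"A": 12.0, "B": 45.0, "C": 7.00},
    "sdg_paper_share":  {"A": 0.10, "B": 0.35, "C": 0.05},
    "disruption_index": {"A": 0.02, "B": -0.01, "C": 0.08},
}

def z_profile(indicators):
    """Standardize each indicator separately (z-scores) and report a profile,
    instead of summing heterogeneous indicators into one composite score."""
    profile = {}
    for name, values in indicators.items():
        mean = statistics.mean(values.values())
        sd = statistics.stdev(values.values())
        profile[name] = {who: round((v - mean) / sd, 2) for who, v in values.items()}
    return profile

for name, scores in z_profile(indicators).items():
    print(f"{name:17s} {scores}")
```

Reading the profile row by row makes trade-offs visible (in these invented data, researcher B is weak on citation impact but strong on altmetrics and SDG-related output), which a single summed score would hide.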

In this respect, it is important to know more about the quality of these information sources (peer review ratings, bibliometric indicators) in terms of objectivity, reliability, validity, and fairness (Mutz & Daniel, 2022). Together with colleagues, I am currently trying to develop a psychometric perspective on research assessment that explicitly takes these quality criteria into account (e.g., Mutz, 2022).
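To make one of these quality criteria concrete, the following sketch computes Cohen's kappa, a chance-corrected measure of inter-rater agreement, for two hypothetical reviewers rating the same ten proposals. The ratings are invented for illustration; low agreement would indicate that the peer review ratings carry substantial measurement error, i.e., low reliability.

```python
from collections import Counter

# Hypothetical ratings (1 = reject ... 4 = accept) by two reviewers
# on the same ten proposals; illustration only.
rater_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_2 = [4, 3, 2, 2, 4, 2, 2, 3, 3, 2]

def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    # Agreement expected by chance from the raters' marginal distributions.
    expected = sum(freq_a[c] / n * freq_b[c] / n for c in categories)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```

By common rules of thumb, kappa values below about 0.40 are read as weak agreement; for the invented ratings above, kappa is roughly 0.57, i.e., only moderate agreement between the two reviewers.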

What we need in Switzerland: Research and Funding

The case of eLife shows how difficult it is to implement Open Science in research assessment and what great uncertainties are associated with it, for journals as well as for individuals and institutions. Increased efforts in Quantitative Science Studies (QSS) and scientometrics are therefore necessary to accompany and support this process. Corresponding research funding instruments, like those in Germany, are needed in Switzerland as well. The German Federal Ministry of Education and Research (BMBF), for instance, recently launched a large funding program on Quantitative Science Studies, and the Volkswagen Foundation is currently setting up a QSS funding initiative. In Switzerland, researchers search in vain for similar funding opportunities. In the area of Open Access, for example, swissuniversities seems more interested in funding projects with technical applications (e.g., Platinum platforms, OA monitoring) than in social science projects dedicated to central issues of QSS, such as quality assurance and Open Science.

 

References

  • Abbott, A. (2023). Strife at eLife: inside a journal's quest to upend science publishing. Nature, 615, 780–781. https://doi.org/10.1038/d41586-023-00831-6
  • Mutz, R., & Daniel, H.-D. (2022). Scientific analysis of data on proposals and the decision-making procedure of the FWF with particular focus on the programme "Stand-Alone Projects" in the years 2010–2019. https://doi.org/10.5281/zenodo.6596769
  • Mutz, R. (2022). Why simply summing up any bibliometric indicators does not justify a good composite indicator for individual researcher assessment – a measurement perspective. In Proceedings of the 16th International Conference on Science and Technology Indicators (STI 2022), Granada, 7–9 September 2022.

 

Contact details of the author

Dr. Rüdiger Mutz
Center for Higher Education and Science Studies (CHESS)
University of Zurich
E-mail: ruediger.mutz@uzh.ch
Web page: https://www.chess.uzh.ch/de/about/geschaeftsstelle/Projektleitungen/rmutz.html

[Photo: Dr Rüdiger Mutz]

Dr Rüdiger Mutz holds university degrees in psychology (Dipl.-Psych.) and economics (Dipl.-Kfm.). He was a senior researcher at the Chair of Social Psychology and Higher Education Research at ETH Zurich until July 2020. Since August 2020, he has been a senior researcher at the Competence Center for Higher Education and Science Studies (CHESS) at the University of Zurich.
