“More is better” surely also applies to the journal impact factor (IF), wouldn’t you agree? A paper published in a high-IF journal must surely make a significant impact in the research community. Or is that an oversimplification? Taking a closer look, we will see how an impact factor can rise and fall unexpectedly. Reducing impact to a single number may be unreliable, and careers may be on the line …
The Impact Factor (IF) and the Choice of Journal
When looking for a journal to publish their research, many researchers will consider the impact factor of prospective journals. What can better serve as a benchmark of scientific journals than an indicator that features the word “impact” in its name? Sadly enough, the harsh reality is that early career researchers will often be judged using the impact factors of the journals in which they publish, as exemplified by the following tweet:
How could relying on the impact factor alone mislead a budding researcher about the benefit of publishing in one specific journal, say Acta Crystallographica Section A, a journal on crystallography? It has five siblings, B to F, and its 2022 impact factor (with self-citations) is listed as 1.8. This is neither particularly high nor particularly low: for 2022, the median impact factor across all journals is 1.6, and 1.8 sits around the top 40%. To phrase it differently, of the 21,460 journals with an IF, the journal ranks around 8,500.
Over the last ten years, the IF of Acta Crystallographica Section A stayed relatively stable at around 2, but two distinct outliers occurred in 2016 and 2017. Suddenly, the IF was three to four times its usual value. As impact factors go, this is a substantial increase. How did this happen?
It gets even more peculiar
Extending our look back over the last 23 years, we notice two more striking outliers. In 2009 and 2010, the impact factors were 49.9 and 54.3, respectively; in the years before and after, it was business as usual. Curiously, Acta Crystallographica Section A ranked number 2 among the highest-impact journals in both years, trumped only by CA-A Cancer Journal For Clinicians, with 87.9 and 94.3, respectively. For anybody using impact factors as a proxy for journal quality, Acta Crystallographica Section A had suddenly become the second-most-desirable journal to publish in. How did this happen?
Impact factors are an annually released set of numbers, calculated using the formula below. The number of citations that the journal’s articles from the preceding two years received in a given year is divided by the number of articles published during those two years. If every article published were cited once and only once, the impact factor would be exactly 1.
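In symbols, the impact factor of a journal for a year \(y\) can be written as follows (this simply restates the standard definition described in the prose above):

```latex
\mathrm{IF}_y = \frac{\text{citations received in year } y \text{ by articles published in years } y-1 \text{ and } y-2}{\text{number of articles published in years } y-1 \text{ and } y-2}
```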
There are two possible causes for such a drastic increase in IF: either the number of citations rose dramatically, or the number of published articles shrank sharply. Below is a visualization of the two numbers for Acta Crystallographica Section A.
The number of published articles remained roughly constant, so increased citation numbers caused the two spikes in IF in 2009/2010 and 2016/2017. This raises the question: where did all these citations come from? In January 2008, a special issue celebrated “60 years of Acta Crystallographica and the IUCr”. One article in this issue was “A short history of SHELX”, a perspective on software commonly used in crystallography. The abstract suggested: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.” People jumped at the opportunity, and the article raked in around 5700 and 6700 citations in 2009 and 2010, respectively. The SHELX paper explains more than 98% of the journal’s increase in citations for those two years. It remained highly cited afterwards, but those citations fall outside the two-year window of the IF.
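The arithmetic behind the spike can be sketched in a few lines. The article and baseline citation counts below are assumed, illustrative numbers chosen to roughly mimic the journal’s 2009 situation; they are not official JCR figures, and only the ~5700 SHELX citations come from the text above.

```python
# Illustrative sketch: how one highly cited paper can dominate a journal's IF.
# All counts are assumed round numbers, NOT official JCR data.

def impact_factor(citations: int, articles: int) -> float:
    """Citations received in year y by articles from years y-1 and y-2,
    divided by the number of articles published in those two years."""
    return citations / articles

articles_two_years = 120   # assumed articles published in 2007-2008
baseline_citations = 240   # assumed citations to all the other articles
shelx_citations = 5700     # approximate 2009 citations to the SHELX paper

with_shelx = impact_factor(baseline_citations + shelx_citations, articles_two_years)
without_shelx = impact_factor(baseline_citations, articles_two_years)

print(f"IF with the SHELX paper:    {with_shelx:.1f}")
print(f"IF without the SHELX paper: {without_shelx:.1f}")
```

With these assumed numbers, the IF jumps from 2.0 to 49.5, close to the reported 49.9, while nothing about the journal’s other articles has changed.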
What about the peak in 2016 and 2017? Funnily enough, in 2015 the same author published a successor to the SHELX paper. It accounts for most, but not all, of the extra citations in both years.
SHELX was a genuine high-impact paper. Its impact was so significant that, for two years, it catapulted the impact factor of Acta Crystallographica Section A up to rank number 2. However, an impact factor is no more than a ratio of citations to publications. The observed surge was possible because the journal’s citation count was generally low and the number of articles published each year remained small. If, for any reason, one of the two numbers changes substantially, the IF will change accordingly.
We revisited this case because, outside scientometrics itself, the impact factor is almost certainly the most widely known scientometric concept. Impact factors influence lives, career trajectories, and the allocation of research funding. Unfortunately, to this day they are (ab)used as a heuristic for the quality of individual papers, even though that was never their intended purpose. Impact factors are a journal-level metric, and one must bear in mind that the fleetingly high IF in our example did not add any value to the other papers in the same journal. Moreover, the Acta Crystallographica Section A case demonstrates that the impact factor can be highly misleading even as a metric for journals.
As such, impact factors incentivize problematic behavior: people will find new strategies to game journal metrics in their own interest. A look at the broader context can help catch suspicious patterns. We should always question the metrics. How does the impact factor change over time? If there are changes, is there an obvious cause? Reducing the world to a single number is not enough.
This article started with a tweet, and it will end with one that was discovered only in the final stages of writing. I agree with its first part: high-impact publication may mean different things, and one should check carefully which definitions other people use. However, there may or may not be a correlation between the two, and again, one always needs to check carefully.
In part 2, coming soon, we will try to make a connection between the perception and the ranking of impact factors.
 The author of this article did not discover this story. It was written about earlier in other outlets: https://doi.org/10.1038/466179b and https://niscpr.res.in/jinfo/ALIS/ALIS%2058(1)%20(Correspondence).pdf
 https://journals.iucr.org/services/impactfactors.html Journals will typically list impact factors with self-citations because this makes the values slightly larger.
 Data was retrieved for 2022 from Journal Citation Reports (https://jcr.clarivate.com/jcr/browse-journals), version Jun 28 of the dataset.
 Determining the exact citation counts for the 2008 SHELX paper is impossible. E.g., the 2010 citation numbers in Web of Science are larger than the total citations used to calculate the IF in the Journal Citation Reports. A possible explanation is increased Web of Science coverage, which may now include citations from journals that were not indexed back in 2010. Impact factors are not typically corrected or adjusted retrospectively. For more recent impact factors, a snapshot of the citations counted for individual papers is provided, which facilitates retracing history.
 Dr. Norrby attributes the content of his Tweet to Dr. Stuart Cantrill. Dr. Cantrill in turn directed to one of his blog posts under https://stuartcantrill.com/2016/01/23/imperfect-impact/.
 Dr. Oliver Renn, Dr. Gina Cannarozzi and Julia Ecker are gratefully acknowledged for their assistance in revising this text.