Editorial

Impact Factor Wars

Episode VI – Return of the Meaningless Metric

Published Online: https://doi.org/10.1027/1015-5759/a000679

“When a measure becomes a target, it ceases to be a good measure”

Goodhart’s law

The journal impact factor is the most commonly used metric for assessing scientific journals and is widely used as a proxy measure to judge the quality of individual research articles and their authors. It is a simple numerical ratio of article citations relative to articles published and is calculated as IF2020 = A/B, in which A is the number of times the articles published in 2018 and 2019 were cited in 2020 (by articles published in indexed journals), and B is the total number of citable items published in 2018 and 2019. Using this formula, the European Journal of Psychological Assessment (EJPA) maintained a relatively stable impact factor of around 2.0 each year (over the past 5–10 years), but this increased to approximately 3.0 in the 2020 Journal Citation Reports published by Clarivate in June 2021. In this editorial, we aim to provide a brief overview of the journal impact factor, including its viability as a meaningful metric for scientific journals and what this means for EJPA moving forward.
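To make the arithmetic concrete, the calculation described above can be sketched in a few lines of code. This is a minimal sketch only; the numbers used are illustrative placeholders, not EJPA’s actual citation or article counts.

```python
# Minimal sketch of the journal impact factor calculation described above.
# All numbers are illustrative placeholders, not EJPA's actual figures.

def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """IF_year = A / B, where A is the number of citations received in the
    target year to items published in the two preceding years, and B is the
    number of citable items published in those two years."""
    return citations_in_year / citable_items

# Example: 300 citations in 2020 to articles published in 2018-2019,
# spread across 100 citable items from 2018-2019, gives IF2020 = 3.0.
print(impact_factor(citations_in_year=300, citable_items=100))  # 3.0
```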

The journal impact factor was developed over 60 years ago to help librarians make decisions regarding journal subscriptions (Larivière & Sugimoto, 2019) and quickly became a convenient indicator of journal quality. Journals with a high impact factor were seen as publishing the best-quality research, researchers made a concerted effort to publish in high impact factor journals, and journal editors became keen to boost their journals’ impact factor. Unfortunately, this editorial drive to achieve a high impact factor led to a number of questionable editorial practices considered harmful to progress in science. In the late 1990s and early 2000s, a series of articles detailed how the journal impact factor had contributed to a number of problems in science (see, e.g., Seglen, 1997). For example, journal editors might favor the publication of articles predicted to be highly cited (such as review articles in preference to clinical and applied work), or strategically delay the publication of potential “heavy hitters” until the start of the following year so they have more time to accrue citations over the full 2-year period (Seglen, 1997). More serious problems include journal editors “encouraging” authors of manuscripts to cite recent work published in the journal (or publishing editorials that excessively cite the journal’s own articles). Some journals even went so far as to insist, as part of their submission guidelines, that submitted manuscripts cite recent work published in the journal (see Caon, 2017). The problem had become so severe that in 2007 two scientists published an editorial that cited every article published in their journal in the previous 2 years – boosting the journal’s impact factor from 0.7 to 1.4 in the process – to highlight the problem of excessive journal self-citation (Schutte & Švec, 2007). Thomson Reuters (the predecessor of Clarivate) responded by removing the journal from the 2007 Journal Citation Reports, a gesture that an amusing editorial (Brumback, 2009) likened to the evil galactic empire wielding its enormous power over the rebels in the movie “Star Wars.”

By the 2010s, everyone seemed to agree that journal impact factors were all but meaningless and bad for science (Ioannidis & Thombs, 2019; Larivière & Sugimoto, 2019; Zhang et al., 2017). The development of the San Francisco Declaration on Research Assessment (DORA; Cagan, 2013) – along with several major funding organizations opting not to consider journal impact factors in assessments (see Brito & Rodríguez-Navarro, 2019) – appeared to have finally led to the demise of this problematic metric. It might therefore appear rather puzzling that every June (following the release of the Journal Citation Reports), dozens of journal editors take to social media to announce their pride and joy (or disappointment) in their journals’ (often slight) change in impact factor. It appears that no matter how much information emerges on the problems of journal impact factors, everybody remains captivated. A devil’s advocate might argue that if academics want to embrace impact factors, and editors want to boost their journal’s impact factor, then who cares – nobody is getting hurt, right? Unfortunately, a major problem with impact factors is that they are often used to judge the quality of individual research articles and their authors.

Much research has explored whether journal impact factors relate to research quality, and the evidence is not encouraging. As one study noted, evaluating the quality of research by the journal impact factor was no better than coin-flipping (Brito & Rodríguez-Navarro, 2019). Trivial to small correlations have been observed between journal impact factor and article citation count (Fox et al., 2020; Zhang et al., 2017), and the statistical power of studies (a useful indicator of study quality) is unrelated to the impact factor of the journal in which the work is published (Brembs et al., 2013). The reproducibility of research findings is also unrelated to the journal impact factor (Prinz et al., 2011). By one estimate, the top 15% of most cited articles account for 50% of citations, and the top 50% account for 90% of citations (Seglen, 1997). In other words, rarely cited articles receive equal credit for the impact generated by a few highly cited articles. This citation skewness often manifests in rather large year-to-year fluctuations in impact factor. For example, the impact factor of International Review of Sport and Exercise Psychology jumped from 6.9 in 2019 to 14.3 in 2020, whereas that of Health Psychology Review fell from 9.1 in 2019 to 3.7 in 2020. In terms of citations, there appears to be no clear benefit to publishing in a journal with a high impact factor (Milojević et al., 2017), and the journal impact factor is not representative of the quality of individual articles (Larivière & Sugimoto, 2019).
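A small, purely hypothetical sketch illustrates the skewness point: when a handful of heavily cited articles sit alongside many rarely cited ones, the journal-level average says little about any individual article. The citation counts below are invented for illustration and do not describe any real journal.

```python
# Hypothetical citation counts for 15 articles in one journal; the skew is
# invented for illustration and does not reflect any real journal's data.
citations = [120, 45, 10, 6, 4, 3, 2, 1, 1, 1, 0, 0, 0, 0, 0]

mean_citations = sum(citations) / len(citations)
top = sorted(citations, reverse=True)[: round(0.15 * len(citations))]
top_share = sum(top) / sum(citations)

print(f"Journal-level average (impact-factor-style): {mean_citations:.1f}")
print(f"Share of all citations held by the top 15% of articles: {top_share:.0%}")
# A single heavily cited article can move the average by several points,
# which is one reason impact factors can swing sharply from year to year.
```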

Of more concern, academic promotions and employment can be influenced by the impact factors of the journals in which academics publish. Despite the numerous warnings about such use (Brembs et al., 2013; McKiernan et al., 2019; Moustafa, 2015; Seglen, 1997), journal impact factors continue to be used in academic promotion and tenure evaluations. One recent study found that, in a representative sample of US and Canadian universities, 40% of research-intensive institutions mentioned impact factors, of which 87% supported their use in promotion and tenure applications, and 63% associated the metric with quality (McKiernan et al., 2019). This is clearly problematic. However, some have argued that simply trying to overhaul impact factors and pointing out that they do not matter is ineffective and potentially harmful to aspiring researchers, since different institutions play by different rules (Tregoning, 2018). Consider the following example: an early-career researcher is asked about their ability to publish in high-impact journals at a job interview. Is the best course of action to point out that impact factors are meaningless and do not reflect research quality? Possibly. But it very much depends on who is on the interview panel. For early-career researchers, ignoring this metric could cost them in the long run, as some institutions continue to use it for academic promotion and tenure (Tregoning, 2018). In other words, graduate students need to be aware of impact factors, not because they are inherently useful, but because their careers could depend on them. DORA’s strategic plan includes spreading awareness of alternatives to the journal impact factor and providing examples of good evaluation practice. Some useful alternatives have been proposed (see Moher et al., 2018), but it will take something of a culture shift in academia before these new measures become common and are adopted by institutions and individual researchers.

Judge Me by My Impact Factor, Do You?

So what does all this mean for EJPA? In short, it means that the increase in impact factor from 2.0 to 3.0 in 2021 is essentially meaningless. If it encourages researchers to submit high-quality work to EJPA, then great, but the standards at EJPA have not changed. Of course, we continue to try to improve the journal, as evidenced by the adoption of registered reports (Greiff & Allen, 2018), open science practices (Greiff et al., 2020), and support for academic freedom (Iliescu et al., 2021). However, we are not interested in gaming the impact factor, nor will we judge our success by how much the journal’s impact factor increases (or decreases) over the next few years. Multiple factors influence where researchers choose to submit their work for publication. These include the aims of the journal, the readership, the perceived likelihood of publication, the speed of the editorial and review process, the length of time it takes for accepted work to be published, publication costs, journal word limits, the accessibility of the journal, and the quality of the editorial and advisory board (Eston, 2005). We believe we have an excellent editorial and advisory board (in particular, we would like to acknowledge the hard work of the previous editors-in-chief of EJPA), and we will continue working to speed up the editorial and review process and to reduce the delay from acceptance to publication. As academics, we should continue to push back against the impact factor as an indicator of quality (and, of course, against its use in academic promotion, tenure, and the allocation of funding) while also making sure that graduate students are aware of the problems and career implications of journal impact factors. We encourage readers to continue to submit high-quality work to EJPA – not because of the journal impact factor, but because it is an outlet for high-quality scientific research on psychological assessment.

References

  • Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7, Article 291. https://doi.org/10.3389/fnhum.2013.00291

  • Brito, R., & Rodríguez-Navarro, A. (2019). Evaluating research and researchers by the journal impact factor: Is it better than coin flipping? Journal of Informetrics, 13(1), 314–324. https://doi.org/10.1016/j.joi.2019.01.009

  • Brumback, R. A. (2009). Impact factor wars: Episode V – The empire strikes back. Journal of Child Neurology, 24(3), 260–262. https://doi.org/10.1177/0883073808331366

  • Cagan, R. (2013). The San Francisco declaration on research assessment. Disease Models & Mechanisms, 6, 869–870. https://doi.org/10.1242/dmm.012955

  • Caon, M. (2017). Gaming the impact factor: Where who cites what, whom, and when. Australasian Physical & Engineering Sciences in Medicine, 40, 273–276. https://doi.org/10.1007/s13246-017-0547-1

  • Eston, R. (2005). The impact factor: A misleading and flawed measure of research quality. Journal of Sports Sciences, 23(1), 1–3. https://doi.org/10.1080/02640410400014208

  • Fox, G. A., Fox, A. K., & Guertault, L. (2020). A case study on the relevance of the journal impact factor. Transactions of the ASABE, 63(2), 243–249. https://doi.org/10.13031/trans.13756

  • Greiff, S., & Allen, M. S. (2018). EJPA introduces registered reports as new submission format. European Journal of Psychological Assessment, 34(4), 217–219. https://doi.org/10.1027/1015-5759/a000492

  • Greiff, S., van der Westhuizen, L., Mund, M., Rauthmann, J. F., & Wetzel, E. (2020). Introducing new open science practices at EJPA. European Journal of Psychological Assessment, 36(5), 717–720. https://doi.org/10.1027/1015-5759/a000628

  • Iliescu, D., Greiff, S., Proyer, R., Ziegler, M., Allen, M. S., Claes, L., Fokkema, M., Hasking, P., Hiemstra, A., Maes, M., Mund, M., Nye, C., Scherer, R., Wetzel, E., & Zeinoun, P. (2021). Supporting academic freedom and living societal responsibility. European Journal of Psychological Assessment, 37(2), 81–85. https://doi.org/10.1027/1015-5759/a000652

  • Ioannidis, J. P., & Thombs, B. D. (2019). A user’s guide to inflated and manipulated impact factors. European Journal of Clinical Investigation, 49(9), Article e13151. https://doi.org/10.1111/eci.13151

  • Larivière, V., & Sugimoto, C. R. (2019). The journal impact factor: A brief history, critique, and discussion of adverse effects. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Handbook of science and technology indicators (pp. 3–24). Springer.

  • McKiernan, E. C., Schimanski, L. A., Nieves, C. M., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Meta-research: Use of the journal impact factor in academic review, promotion, and tenure evaluations. eLife, 8, Article e47338. https://doi.org/10.7554/eLife.47338

  • Milojević, S., Radicchi, F., & Bar-Ilan, J. (2017). Citation Success Index – An intuitive pair-wise journal comparison metric. Journal of Informetrics, 11(1), 223–231. https://doi.org/10.1016/j.joi.2016.12.006

  • Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16(3), Article e2004089. https://doi.org/10.1371/journal.pbio.2004089

  • Moustafa, K. (2015). The disaster of the impact factor. Science and Engineering Ethics, 21(1), 139–142. https://doi.org/10.1007/s11948-014-9517-0

  • Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10(9), 712. https://doi.org/10.1038/nrd3439-c1

  • Schutte, H. K., & Švec, J. G. (2007). Reaction of Folia Phoniatrica et Logopaedica on the current trend of impact factor measures. Folia Phoniatrica et Logopaedica, 59(6), 281–285. https://doi.org/10.1159/000108334

  • Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498–513. https://doi.org/10.1136/bmj.314.7079.497

  • Tregoning, J. (2018). How will you judge me if not by impact factor? Nature, 558, Article 345. https://doi.org/10.1038/d41586-018-05467-5

  • Zhang, L., Rousseau, R., & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS One, 12(3), Article e0174205. https://doi.org/10.1371/journal.pone.0174205