China to end the reign of the impact factor?

Despite numerous commentators denouncing the metric and many prestigious journals discontinuing its use as a promotional tool, the journal impact factor (JIF) refuses to go away and remains pervasive in scientific publishing.

“The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers” [1].

“…it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking” [2].

“Everybody hates the impact factor… But everyone recognizes that we’re [beholden] to it” [3].

Gallant efforts have been made to loosen the grip of the JIF, including the San Francisco Declaration on Research Assessment (DORA) [4], the development of numerous alternative metrics [5], and the advent of journals with a mission to publish papers that are scientifically sound, irrespective of expected impact, e.g., PLOS One.

But arguably the biggest blow to the JIF came just last month (February 2020), via an order by the Ministry of Education and the Ministry of Science and Technology in China. Under the new policy, cash incentives to publish in high-impact journals are no longer permitted, and Chinese researchers will no longer be evaluated based on publication quantity or venue [6]. The aim is to foster a “fewer but better” approach to researcher output in China [7], a move that could have a domino effect worldwide, finally ending the global reign of the JIF.

On the back of this major development, in this blog we review the biases and problems associated with the JIF, as exposed by various authors over the past ~30 years.

Problems with the JIF

Skewed data

If you look at the distribution of citations of published articles (see Fig. 1), you will see that the data are highly skewed to the right, i.e., most papers are cited relatively infrequently, while a small number are cited very heavily.

Fig. 1 Distribution of citations of Nature and PLOS One articles published in 2013 and 2014. Source: [8].

As pointed out by Diamandis (2017), as undergraduates we are taught to use nonparametric measures when dealing with skewed data [9]. And yet, the JIF uses the mean, a measure that is only representative when the data are roughly symmetric, and that a handful of highly cited papers can drag sharply upwards. The JIF of Nature in 2015 (which is based on citations to papers published in the two previous years, 2013 and 2014) was 38.1. Yet, from Fig. 1, you can see that the bulk of its papers were cited fewer than 38.1 times. Therefore, the JIF is not a statistically sound metric.
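
To see why this matters, here is a minimal Python sketch using synthetic, log-normally distributed citation counts (the parameters are illustrative only, not fitted to real Nature or PLOS One data) that compares the mean, which the JIF effectively reports, with the median:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic citation counts for 1,000 papers, drawn from a log-normal
# distribution to mimic the long right tail in Fig. 1 (illustrative
# parameters only, not fitted to real journal data).
citations = rng.lognormal(mean=2.0, sigma=1.2, size=1_000).astype(int)

print(f"Mean (what a JIF-style average reports): {citations.mean():.1f}")
print(f"Median (a typical paper):                {np.median(citations):.1f}")
print(f"Share of papers cited less than the mean: "
      f"{(citations < citations.mean()).mean():.0%}")
```

With these illustrative parameters the mean comes out at roughly double the median, and about 70% of papers are cited less often than the mean, mirroring the pattern in Fig. 1.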

Article type affects the JIF

The JIF is calculated as follows: JIF 2020 = (citations received in 2020 to items published in 2018 and 2019) / (number of citable items published in 2018 and 2019). The problem is that the numerator counts citations to ALL article types, whereas the denominator counts only “back matter” content, e.g., original research articles and reviews [10]. Many of the larger journals also publish “front matter”, e.g., editorials, letters to the editor, news, and obituaries, which is often cited (and so included in the JIF numerator) but never included in the denominator. Thus, these journals have inflated JIFs.
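
A toy calculation makes the inflation mechanism clear; the item types and citation counts below are invented for illustration:

```python
# Toy JIF calculation showing the numerator/denominator mismatch.
# (item_type, citations received in the census year) for one journal's
# output over the two-year window -- all numbers are hypothetical.
items = [
    ("research_article", 120), ("research_article", 45),
    ("review", 200),
    ("editorial", 30), ("news", 15), ("letter", 10),   # "front matter"
]

CITABLE = {"research_article", "review"}   # only these enter the denominator

numerator = sum(c for _, c in items)                     # citations to ALL items
denominator = sum(1 for t, _ in items if t in CITABLE)   # citable items only

print(f"Reported JIF: {numerator / denominator:.1f}")    # 420 / 3 = 140.0

# If front-matter citations were excluded from the numerator as well:
consistent = sum(c for t, c in items if t in CITABLE) / denominator
print(f"Consistent JIF: {consistent:.1f}")               # 365 / 3 = 121.7
```

In this invented example, the three front-matter items contribute 55 “free” citations to the numerator while adding nothing to the denominator.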

The JIF favours certain research fields over others

As the JIF is based on a narrow two-year citation window, it favours faster-moving research fields (e.g., molecular biology) over slower-moving fields (e.g., mathematics), because faster-moving fields direct a higher proportion of their citations at recent (1–2-year-old) publications [11]. The JIF also fails to account for research whose impact may take longer than two years to be fully realised [12]. The sketch below illustrates how strongly citation timing alone can separate two otherwise identical journals.
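
Here is a minimal sketch, with invented numbers, of two journals whose yearly cohorts of papers earn identical lifetime citations but whose JIFs differ fourfold purely because of when those citations arrive:

```python
# Toy comparison: each yearly cohort of 100 papers eventually earns the
# same 1,000 lifetime citations, but fields differ in how quickly those
# citations arrive. Only citations falling inside the two-year JIF
# window count. The window shares below are invented for illustration.

lifetime_citations = 1000   # per yearly cohort of papers
papers_per_year = 100

share_within_window = {     # fraction of lifetime citations in first 2 years
    "fast field (e.g., molecular biology)": 0.40,
    "slow field (e.g., mathematics)": 0.10,
}

for field, share in share_within_window.items():
    # In steady state, this year's citations to the two preceding cohorts
    # roughly equal one cohort's first-two-years citations.
    jif = (lifetime_citations * share) / (2 * papers_per_year)
    print(f"{field:40s} JIF = {jif:.1f}")
```

Identical long-run impact, yet a fourfold difference in JIF (2.0 vs. 0.5 in this toy example), purely because of citation timing.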

Disincentivises risky research

Highly experimental research is risky, yet has the potential to be ground-breaking. Because researchers are evaluated on their publication records, they are unlikely to pursue high-risk areas of science. Instead, scientists are rewarded for pursuing the more popular areas of science, where they can expect to be cited more frequently [13].

Misuse and abuse by editors

Some editors engage in questionable practices to increase their journal’s impact factor, including:

1. Unjustified self-citation, whereby the editor asks authors to include articles published in the editor’s journal or affiliate journals. This may be OK if there is genuine scientific justification for adding the references; however, sometimes the suggested articles are irrelevant to the author’s paper and/or the requests are excessive.

In an extreme example, one editor who handled 82 papers in 2017 (as either an editor or reviewer) suggested that 622 additional references be added. Most of the suggested articles were from the journal of which he was editor-in-chief. The strategy worked: the JIF of his journal jumped from 3.089 to 8.145 in a single year [14].

Inappropriate self-citation occurs to varying degrees, but a considerable number of authors have reported feeling pressurised into adding references suggested by editors [15]. A phenomenon known as the “citation cartel” has also emerged, whereby groups of editors agree to work together to promote citation of papers published in one another’s journals [16].

2. Increasing the journal’s “front matter”, i.e., article types included in the JIF numerator but not in the denominator. Appropriate “front matter” is often of scientific value, e.g., letters can be used to self-correct science [17]. However, some editors play the “impact factor game” [18] and publish front matter of little scientific value that nonetheless benefits their JIF, e.g., annual “highlights” articles containing a high number of internal references [10].

Misuse and abuse by authors

1. Unjustified self-citation, whereby authors cite their own previous work, irrespective of its relevance. This unjustifiably boosts the JIFs of the journals in which those papers were published. Similarly, when peer reviewing, authors may request the addition of unjustified citations to their own work, further distorting the JIF rankings.

2. The pressure to publish in high-impact journals can result in unethical practices, including data falsification or fabrication; retractions due to such misconduct occur disproportionately in high-impact journals [19].

“The most selective journals demand clean stories and immaculate data, which seldom match the reality of laboratory investigation, where experiments can produce messy results. Hence, some investigators may be tempted to cut corners or manipulate data in an effort to benefit from the disproportionate rewards associated with publishing in prestigious journals.” [12]

3. Gift authorship, whereby individuals who have not contributed in any substantive way to the research are given authorship credit. Such gift authors may belong to “The Golden Club” [12], i.e., they publish frequently in high-impact journals. It is hoped that their inclusion will sway editors and peer reviewers, but such biases only serve to perpetuate the Matthew effect in the sciences, i.e., “the rich get richer and the poor get poorer” [20].

Misuse by funders/evaluators

Potentially the greatest misuse of the JIF is the evaluation of individual researchers based on publication venue rather than the actual content of their papers [12]. This occurs to the extent that some universities will not even consider an applicant who does not have at least one first-author paper in a high-impact journal [21]. A recent survey found that 40% of research-intensive institutions in North America use the JIF when making decisions on promotion and tenure [22].

The list presented in this blog represents only a sample of the biases and problems associated with the JIF. For a more comprehensive list, see [23].

References

  1. Curry S. Sick of impact factors. Reciprocal Space [blog]. 13 Aug 2012. Available from: http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
  2. Schekman R. How journals like Nature, Cell and Science are damaging science. The Guardian. 9 Dec 2013. Available from: https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
  3. Woolston C. TOP Factor rates journals on transparency, openness. Nature Index. 18 Feb 2020. Available from: https://www.natureindex.com/news-blog/top-factor-rates-journals-on-transparency-openness
  4. San Francisco Declaration on Research Assessment. DORA—ASCB [Internet]. ASCB. Available from: http://www.ascb.org/dora/
  5. Oosthuizen JC, Fenton JE. Alternatives to the impact factor. The Surgeon. 1 Oct 2014;12(5):239-43.
  6. Mallapaty S. China bans cash rewards for publishing papers. Nature News. 28 Feb 2020. Available from: https://www.nature.com/articles/d41586-020-00574-8?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews
  7. Tao T. New Chinese policy could reshape global STM publishing. Scholarly Kitchen. 27 Feb 2020. Available from: https://scholarlykitchen.sspnet.org/2020/02/27/new-chinese-policy-could-reshape-global-stm-publishing/
  8. Larivière V, Kiermer V, MacCallum CJ, McNutt M, Patterson M, Pulverer B, Swaminathan S, Taylor S, Curry S. A simple proposal for the publication of journal citation distributions. bioRxiv. 2016:062109. Available from: https://www.biorxiv.org/content/biorxiv/early/2016/07/05/062109.full.pdf
  9. Diamandis EP. The Journal Impact Factor is under attack–use the CAPCI factor instead. BMC Med. 2017;15:9. Available from: https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-016-0773-5
  10. Larivière V, Sugimoto CR. The journal impact factor: A brief history, critique, and discussion of adverse effects. arXiv preprint arXiv:1801.08992. 26 Jan 2018. Available from: https://arxiv.org/ftp/arxiv/papers/1801/1801.08992.pdf
  11. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 15 Feb 1997;314(7079):497. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126010/
  12. Casadevall A, Fang FC. Causes for the persistence of impact factor mania. mBio. 1 May 2014;5(2):e00064-14. Available from: https://mbio.asm.org/content/5/2/e00064-14?cpetoc=
  13. Alberts B. Impact factor distortions. Science. 17 May 2013;340(6134):787. Available from: https://science.sciencemag.org/content/340/6134/787#
  14. Davis P. Citation cartel or editor gone rogue? Scholarly Kitchen. 9 Mar 2017. Available from: https://scholarlykitchen.sspnet.org/2017/03/09/citation-cartel-or-editor-gone-rogue/
  15. Wilhite AW, Fong EA. Coercive citation in academic publishing. Science. 3 Feb 2012;335(6068):542-3. Available from: https://science.sciencemag.org/content/335/6068/542.summary
  16. Davis P. The emergence of a citation cartel. Scholarly Kitchen. 10 Apr 2012. Available from: https://scholarlykitchen.sspnet.org/2012/04/10/emergence-of-a-citation-cartel/
  17. Ioannidis JP, Thombs BD. A user’s guide to inflated and manipulated impact factors. European Journal of Clinical Investigation. Sep 2019;49(9):e13151. Available from: https://onlinelibrary.wiley.com/doi/pdf/10.1111/eci.13151
  18. PLoS Medicine Editors. The impact factor game. PLoS Med. 6 Jun 2006;3(6):e291. Available from: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0030291
  19. Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. PNAS. 16 Oct 2012;109(42):17028-33. Available from: https://www.pnas.org/content/109/42/17028
  20. Larivière V, Gingras Y. The impact factor’s Matthew Effect: A natural experiment in bibliometrics. Journal of the American Society for Information Science and Technology. Feb 2010;61(2):424-7. Available from: https://arxiv.org/ftp/arxiv/papers/0908/0908.3177.pdf
  21. Verma IM. Impact, not impact factor. PNAS. 30 Jun 2015;112(26):7875-6. Available from: https://www.pnas.org/content/112/26/7875?ijkey=f6f13798e7b48947e6339175a3d057a0687062ea&keytype2=tf_ipsecsha
  22. McKiernan EC, Schimanski LA, Nieves CM, Matthias L, Niles MT, Alperin JP. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife. 2019;8. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6668985/
  23. Nestor MS, Fischer D, Arnold D, Berman B, Del Rosso JQ. Rethinking the Journal Impact Factor and Publishing in the Digital Age. The Journal of Clinical and Aesthetic Dermatology. Jan 2020;13(1):12. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7028381/