Duncan MacRae
There may be no more divisive topic in scholarly publishing than the Journal Impact Factor (JIF), a metric first circulated in 1975 with the goal of giving librarians a tool for making informed journal purchases. Now owned by Clarivate Analytics, JIFs are released in late June each year as part of the Journal Citation Reports, and despite the growing number of alternative metrics available and wide-ranging criticism of the JIF itself, release day is likely the most anticipated day of the year for the thousands of journals that will receive an updated JIF.
Journal success is measured using many data points: subscriptions, advertising, profitability. From an editorial perspective, JIF exists as both a measure of journal success and a primary driver of success. Author surveys often reveal that an author’s main priority when selecting a journal is “prestige,” conventionally measured by JIF, and consequently an improving JIF drives submissions. Editors recognize that JIF is not only a measure of how well the journal has performed but also predictive of how the journal will perform in the upcoming year, given the correlation between JIF increases and submission increases.
For this reason, the acquisition of a JIF is considered essential to the long-term survival of any journal — and a growing concern as Clarivate Analytics effectively controls what journals are included in the indices that receive JIFs. Journals that are not in the Science Citation Index Expanded or Social Sciences Citation Index (two of the indices that make up Web of Science) do not receive JIFs, although they are listed in the Journal Citation Reports and do receive other journal metrics. The process of acquiring a JIF has changed considerably in the last four years and, as Clarivate would admit, has become more exclusionary as the number of journals has exploded in the open access era.
What is the JIF?
On its surface, the formula used to derive the JIF appears straightforward: first, Clarivate tallies how many times articles the journal published in the previous two years were cited by researchers during the current JIF year. It then divides that count by the number of source items the journal published in those same two years. The fact that I said "source items," and not "articles," illustrates why there is so much concern over the level of subjectivity and manipulation that the JIF invites.
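The two-year calculation described above can be sketched in a few lines of code. This is purely illustrative; the figures below are hypothetical, and the real calculation depends on Clarivate's classification of what counts as a citable source item.

```python
# Minimal sketch of the two-year JIF calculation described above.
# All numbers here are hypothetical, for illustration only.

def journal_impact_factor(citations_to_prior_two_years, source_items_prior_two_years):
    """JIF = citations in the current year to items the journal published
    in the previous two years, divided by the number of "source items"
    (roughly, original research and reviews) published in those two years."""
    return citations_to_prior_two_years / source_items_prior_two_years

# Hypothetical journal: 1,200 citations in the current JIF year to content
# from the previous two years, during which it published 400 source items.
jif = journal_impact_factor(1200, 400)
print(round(jif, 3))  # → 3.0
```

Note that the numerator counts citations to everything the journal published, while the denominator counts only source items, which is exactly the asymmetry the next section turns on.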
What constitutes a source item? Roughly, “source items” are original research and reviews. (Note: contrary to myth, case reports are considered source items and do count in the denominator of the equation — hence the number of journals that no longer publish case reports and instead launch spin-off case report journals to protect the JIF of the “flagship” journal.) What about an article whose primary content is a link to a surgical video? Or an article that is a brief description of a research paper but does not itself contain any original research? If an article has no abstract, can it be a source item? The answer to most of these queries would be “it depends,” and that ambiguity introduces the opportunity for manipulation.
JIF: Manipulation or editorial strategy?
There exists a very fuzzy line between legitimate editorial strategy and JIF tampering. The most obvious manipulations involve self-citations, and journals are still occasionally called out for egregious examples, such as explicitly requiring an author to cite the journal as a prerequisite for acceptance. "Citation farms," in which journals engage in a coordinated effort to cite each other's work, are high-profile examples of JIF manipulation; to Clarivate's credit, the company has improved its ability to police this sort of behavior.
Most JIF-driven strategies revolve around the denominator, over which the journal can exert direct control. Common strategies include:
Improving article discoverability on index searches by simplifying titles and improving abstracts.
Implementing the use of research reporting guidelines and emphasizing articles that are associated with higher levels of evidence.
Identifying the commonalities of highly cited articles and adjusting the journal’s acceptances accordingly.
Carefully managing journal selectivity to avoid unwanted growth in the number of annually published articles.
Would we consider these strategies unethical, given that they are overtly designed to affect the JIF? It's a dilemma that confronts almost every editor-in-chief at some point. The issue is not that a high JIF is necessarily hard to achieve (the strategies for doing so are obvious); it's whether the journal is willing to make difficult compromises with the express goal of improving its JIF. Suppose a journal introduces a new article type specifically designed to let young researchers get a foot in the door of scientific publishing, and submissions indicate great enthusiasm while usage statistics reflect high interest among readers. Is it consistent with the mission of the society to discontinue that article type solely because it underperforms from a citation standpoint?
Reactions to the JIF
There exist dozens of alternatives to the JIF — too many to mention here. Often, they are designed to specifically address one of the JIF formula’s perceived weaknesses. CiteScore extends the time frame for article citation from two to three years. Eigenfactor Score considers the relative quality of the citing journal. Journal Citation Indicator normalizes its score based on subject area. One indicator of how JIF has come to dominate the discussion around journal metrics is that in my 25 years of journal management, not one editor has ever contacted me to inquire about a journal’s Source Normalized Impact per Paper, h-index or Article Influence Score.
There have also been recent attempts to counter this outsized influence; DORA is one high-profile attempt to disconnect the JIF from funding, appointment and promotion processes. However, despite more than 21,000 signatures to the DORA declaration, I suspect next June 30 will still be the most anticipated day of 2023 for journal editors and owners.
Duncan MacRae is the director of editorial strategy and publishing policy for Wolters Kluwer, one of the world’s foremost publishers of medical, nursing and allied health journals. In this role, Duncan oversees the development and implementation of editorial policies followed by journals in the Lippincott and Medknow imprints. In addition, he works with a portfolio of editorial service providers to assist WK’s society partners in achieving their strategic goals.
Wolters Kluwer is the publishing partner of ACSM.