The following is an article from the latest issue of Interface by co-editor Vijay Ramani.
The precise definition of the “impact” of a research product (e.g., a publication) varies significantly among disciplines, and even among individuals within a given discipline. While some may regard scholarly impact as paramount, others may emphasize economic impact, broad societal impact, or some combination thereof. Given that the timeframe over which said impact is assessed can also vary substantially, it is safe to say that no formula exists that will yield a standardized and reproducible measure. The difficulties inherent in truly assessing research impact appear to be matched only by the convenience of the numerous flawed metrics currently in vogue among those doing the assessing.
Needless to say, many of these metrics are used outside the context for which they were originally developed. In using these measures, we are essentially sacrificing rigor and accuracy in favor of convenience (alas, a tradeoff that far too many in the community are willing to make!).
Perhaps the most widely misused metric is the journal impact factor (JIF). Originally conceived in the 1960s to help select journals for inclusion in the Science Citation Index (SCI), the JIF has since morphed into a default indicator of author and scholarship impact. While there is awareness in the community of the dangers inherent in conflating the JIF with the merits of the published work, this statistically bankrupt metric is (still) widely used to inform critical decisions such as hiring, tenure and promotion, and the award of grants. In some countries, there is even a monetary reward to authors that scales with the JIF! The unfortunate side effect is that an increasing number of scientists, especially those starting their careers, are pressured into performing work that has a higher chance of being published in a so-called “high-impact-factor” journal. In other words, the focus is increasingly shifting toward performing and publishing research that is likely to garner citations rapidly, within the next two years, without much thought devoted to the longer-term implications of the research. Disturbingly, but unsurprisingly, a strong correlation exists between article retraction frequency and journal impact factor.
An additional concern is that the JIF can be readily gamed, either by inflating the numerator or by shrinking the denominator (the number of articles published over the preceding two years) of the JIF calculation. The numerator, of course, is the citation count in a given year, across all indexed journals, to the articles counted toward the denominator – a dubious measure in itself, given that not all citations are equivalent. The fallacies inherent in using the JIF to measure the impact of an individual article or author cannot be overstated. Some methods of gaming the JIF include: a) publishing a large number of review articles, which carry zero new research impact but are widely cited as a matter of convenience; b) declining to publish, or even review, articles that are technically sound and well within the journal’s scope but are deemed insufficiently capable of rapidly gathering citations – an unfortunate but common practice among “high-impact-factor” journals today; c) the considerably less ethical practice of coercive citation (enough said!); and d) encouraging excessive self-citation (some authors are only too happy to oblige). As one example, a journal was able to nearly triple its JIF for the year by the simple expedient of publishing an editorial in each issue that cited every paper the journal had published in the prior two years (the editors, it should be noted, did this deliberately to point out the fallacies inherent in the system).
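For concreteness, the standard two-year calculation can be sketched as follows; the symbols here are illustrative, and the precise rules for what counts as a “citable item” are set by the indexing service:

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(Y-k)\) is the number of citations received in year \(Y\), across all indexed journals, to items the journal published in year \(Y-k\), and \(N_{Y-k}\) is the number of citable items it published in that year. Written this way, the gaming strategies above are easy to see: anything that adds citations to the numerator without adding citable items to the denominator (the editorial example being a particularly stark case) raises the ratio.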
Why do these developments concern us as a Society? For one, ECS publishes Journal of The Electrochemical Society and ECS Journal of Solid State Science and Technology – both outstanding journals – which must compete for article submissions in this environment. Unlike the so-called “high-impact-factor” journals, ECS journals do not filter articles based on their ability to garner citations rapidly. On the contrary, and to their credit, the ECS journals strive to publish every technically sound article that falls within the scope of the Society’s topical interest areas. The Society should continue this practice and resist the dangerous temptation to conclude, as many journals have done with the motive of enhancing their JIF, that advances that are not immediately relevant (i.e., papers not deemed to be citation magnets) are unworthy of publication, or even review. The support and participation of all ECS members in this endeavor are essential.

Secondly, and perhaps more importantly, we as a Society should encourage researchers to think more deeply (i.e., beyond immediate citations) when conceiving and executing a research project. To this end, we must eschew dubious metrics and reclaim traditional (but sound) methods of evaluation when assessing the output of our peers.
The consequence of not doing so will be the slow but sure devaluation of our governing research principles. Fortunately, many agencies and societies have acted to minimize the pernicious effects of improper research assessment (see, for example, the San Francisco Declaration on Research Assessment (DORA), developed under the aegis of the American Society for Cell Biology). ECS was an early signatory to DORA, and should continue to champion efforts to educate the scientific community on the fallacies of using the impact factor of a journal as a measure of the scientific impact of a published article.