In December, Elsevier announced the release of Scopus’ new journal metrics system, CiteScore, which it describes as providing “comprehensive, transparent, current insights into journal impact.” The company calls CiteScore a new standard “that will help you guide your journal more effectively in the future.” CiteScore metrics are “part of the Scopus basket of journal metrics that includes SNIP (Source Normalized Impact per Paper), SJR (SCImago Journal Rank), citation and document counts and percentage cited. The integration of these metrics into Scopus provides insights into the citation impact of more than 22,220 titles.”
Strengths and Weaknesses
Ludo Waltman of Leiden University questions its true value versus other Scopus-based measures. “The novelty of CiteScore relative to IPP [impact per paper] is in the sources (i.e., the journals and conference proceedings) and the document types (i.e., research articles, review articles, letters, editorials, corrections, news items, etc.) that are included in the calculation of the metric,” he says. “CiteScore includes all sources and document types, while IPP excludes certain sources and document types. Because CiteScore includes all sources, it has the advantage of being more transparent than IPP.”
However, Waltman notes that one key weakness is that the CiteScore percentile metric is intended to show how a given journal ranks in relation to other journals in the same field, but there is a lack of clarity in the Scopus field definitions. As a result, Waltman says, “in the field of Law, the journal Scientometrics turns out to have a CiteScore Percentile of 94%. However, anyone familiar with this journal will agree that the journal has nothing to do with law. Another example is the journal Mobilization, which belongs to the field of Transportation in Scopus. Interestingly, the journal has no citation relations at all with other Transportation journals. The journal in fact should have been classified in the field of Sociology and Political Science.”
Carl Bergstrom and Jevin West of the Eigenfactor Project note that CiteScore’s categories and methodology currently affect the rankings of many prominent journals. The Lancet, for instance, ranks fourth in the world under the impact factor, according to Nature, but ranks below 200th in the CiteScore system. “That’s because, while both metrics calculate impact by dividing the number of citations by the total number of articles published, CiteScore includes editorials, letters to the editor, corrections, and news items in its calculation,” Joshua A. Krisch writes in The Scientist. “The Lancet loses points for publishing a good deal of these types of articles, which are seldom cited.” Dealing with the apples and oranges of scientific publishing will be a key issue for Scopus, whose officials note that they are still working to clarify CiteScore’s data structure to better reflect data categories.
Waltman does see a positive side to the metrics. “The impact factor is often criticized because of this ‘inconsistency’ between the numerator and the denominator. CiteScore includes all document types both in the citation count in the numerator and in the publication count in the denominator. Compared with the impact factor, CiteScore therefore has the advantage that the numerator and the denominator are fully consistent.”
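The denominator difference described above can be illustrated with a short, hypothetical calculation. The counts below are invented for illustration only (they are not real journal data), and the sketch sets aside the different citation windows the two metrics use:

```python
# Hypothetical counts for a journal that publishes substantial front matter.
# All numbers are invented for illustration; they are not real journal data.
citations = 30_000          # citations received in the counting window
research_articles = 1_500   # "citable items" counted by the impact factor
front_matter = 4_500        # editorials, letters, corrections, news items

# Impact-factor-style ratio: front matter is excluded from the denominator,
# though citations to it can still count in the numerator.
if_style = citations / research_articles

# CiteScore-style ratio: all document types count in the denominator,
# making numerator and denominator consistent.
citescore_style = citations / (research_articles + front_matter)

print(f"IF-style score:        {if_style:.1f}")         # 20.0
print(f"CiteScore-style score: {citescore_style:.1f}")  # 5.0
```

Under these invented numbers, the same journal scores four times lower when its front matter is counted in the denominator, which is the effect Krisch describes for The Lancet.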
“As there is intense competition among top-tier journals, adoption of CiteScore will push editors to stop publishing non-research documents, or shunt them into a marginal publication or their society website,” publishing consultant Phil Davis tells Nature.
Assessment and ‘Perverse Incentives’
Bergstrom and West conclude their assessment by stressing “the perverse incentives that metrics create. By neglecting to count the front matter in its denominator, Impact Factor creates incentives for publishers to multiply [their] front matter. By counting front matter fully in the denominator, CiteScore does the reverse. Because we value and enjoy the front matter in many of our favorite journals, we see the Impact Factor as the lesser of evils in this regard. Should CiteScore ever reach the level of [prominence] that Impact Factor currently holds, journals will face strong incentives to reduce or eliminate the news and editorials that appeal to many readers. It would be a great shame to see this content shrink or disappear.”
The assessment of scientific publications is an integral part of the overall scientific process today, including in tenure decisions, funding evaluations, and public perceptions of the quality of a researcher, program, or institution. In a 2013 PLOS Biology paper, comparative assessment of various impact strategies resulted in no perfect answer: The “assessor score depends strongly on the journal in which the paper is published, and … assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit.” Adam Eyre-Walker, the primary author of this paper, comments that “all measures of merit are subject to error and there is no evidence that the [impact factor] is any worse. I’m not suggesting that the [impact factor] should be used blindly to assess papers and researchers, but suggesting that it contains little or no information about the merit of a paper seems illogical to me.”
According to Bergstrom and West, “In metrics, as in politics, we hope that those in charge will be objective in their judgement and free from conflicts of interest. The problem with Scopus producing metrics such as CiteScore is of course that their parent company Elsevier publishes a sizeable fraction of the journals that this metric evaluates. Thus when Elsevier makes a decision about how the metric should be designed—e.g. to include front matter in the denominator—people will justifiably wonder whether this decision was made for purely bibliometric reasons or for financial ones as well. … For a journal metric to be fully credible, the organization producing and promoting this metric must be financially independent of the [organization] publishing the journals that it ranks. This is not a new problem, but rather a continuation of our long-running discomfort with the vertical integration of an infometrics company, Scopus, and a big publisher, Elsevier.”
“Taken together,” Davis writes, “it doesn’t appear that the CiteScore indicator can be considered a viable alternative to the Impact Factor.” Moreover, CiteScore is subject to many of the same criticisms as the impact factor. Krisch writes, “Critics have argued that journal metrics contribute to a culture of journal worship, judging researchers based on the publications that accept their work, rather than doing so on the merits of their research.”
Perhaps CiteScore is a work in progress that will, at least modestly, improve with time. Or perhaps it’s just another measure purporting to provide a quick and easy answer to questions that have proven exceptionally complex.
Image source: elsevier.com/editors-update/story/journal-metrics/citescore-a-new-metric-to-help-you-choose-the-right-journal