Ethics

27 Nov 2017
Taken from the November 2017 issue of Physics World

Promoting your career is the name of the game for most researchers, but is excessively citing your own papers a good or bad thing? Dalmeet Singh Chawla investigates

Tooting your own horn can be good at times. For academics, it comes in the form of citing your own prior work in new studies. Citing yourself is not necessarily a bad thing: researchers mostly report incremental advances in their niche area of expertise, so self-citations alert colleagues to relevant work they might otherwise have missed. But because self-citations feed into long-established metrics such as the h-index, a measure of a researcher’s productivity and the impact of their publications, abusing them can artificially inflate one’s own citation record.

For Justin Flatt of the University of Helsinki in Finland, the answer may lie in another metric. In a paper published this August in the journal Publications (10.3390/publications5030020), he and colleagues now based at the Swiss Federal Institute of Technology in Zurich, Switzerland, propose the self-citation index, or s-index. It is similar to the h-index, which was developed in 2005 by Jorge Hirsch, a condensed-matter physicist at the University of California, San Diego, to quantify the impact and productivity of individual scientists’ research output. The measure is simple: a scientist with an h-index of, say, 10 has published 10 articles that have each attracted at least 10 citations (papers with fewer than 10 citations therefore do not count towards it).
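
To make the rule concrete, here is a minimal sketch of the h-index calculation in Python; the function name and the citation counts are invented for illustration and are not taken from Hirsch’s paper.

    def h_index(citations):
        """Largest h such that h papers each have at least h citations."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these hypothetical citation counts give an h-index of 3:
    # three papers have at least 3 citations each, but there are not four
    # papers with at least 4 citations each.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3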

One drawback of the h-index is that you can artificially inflate your score simply by citing your past papers a lot. The s-index would shed light on this practice. To have an s-index of 10, for example, a researcher must have authored 10 papers with at least 10 self-citations each. The higher your s-index, the more you have been citing yourself. Flatt and colleagues argue that an author’s s-index should appear alongside their h-index, as this would offer insight into researchers’ self-citation behaviour. In their paper, they also calculate adjusted h-indices, which exclude self-citations.
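
The s-index and the adjusted h-index follow the same rule as the h-index, the largest k such that k papers each have at least k of the relevant count, applied to self-citations and to citations with self-citations removed. A minimal sketch in Python, again with invented numbers rather than data from the paper:

    def index_from_counts(counts):
        """Largest k such that k of the counts are each at least k."""
        ranked = sorted(counts, reverse=True)
        k = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                k = rank
            else:
                break
        return k

    # Hypothetical per-paper (total citations, self-citations) pairs.
    papers = [(12, 9), (10, 8), (6, 5), (5, 1), (2, 1)]

    totals = [total for total, _ in papers]
    self_cites = [self_cite for _, self_cite in papers]
    adjusted = [total - self_cite for total, self_cite in papers]

    print(index_from_counts(totals))      # h-index          -> 4
    print(index_from_counts(self_cites))  # s-index          -> 3
    print(index_from_counts(adjusted))    # adjusted h-index -> 2

In this toy example, the s-index of 3 signals heavy self-citation, and removing self-citations drops the h-index from 4 to 2, the sort of gap that comparing the adjusted and unadjusted figures would make visible.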

Unfair advantage

Superfluous self-citations can be a “slippery slope to abuse for personal gratification and self-promotion”, Flatt and colleagues warn. What’s more, a 2007 study by James Fowler from the University of California, San Diego, and Dag Aksnes from the Nordic Institute for Studies in Innovation, Research and Education in Oslo found that the more researchers cite themselves, the more other scholars cite them, meaning that those abusing the system may gain an even greater unfair advantage. Excessive self-citation within journals, where a journal’s papers disproportionately cite earlier papers published in the same journal, is also a problem. Last year, for instance, the intellectual-property division of Thomson Reuters (now an independent company, Clarivate Analytics) delisted several journals from its annual journal rankings for excessive self-citation.

Yet another issue emerges in light of a 2016 study that found that men cite themselves on average 56% more than women, a figure that rises to 70% when considering only papers published in the last two decades. The analysis, carried out by Molly King at Stanford University in the US and colleagues, looked at more than 1.5 million papers published between 1779 and 2011. Referring to this study, Flatt and his team point out that self-citations can exacerbate the existing disadvantage female scientists face in terms of visibility and recognition. For her part, King says that the s-index’s simplicity and similarity to the h-index are its assets, although she adds that a potential drawback is that “it will be difficult to use the metric to tell if an author’s self-citations have had a noticeable impact on his/her overall citation profile”.

But not citing your own closely related work can be problematic too, notes Matt Hodgkinson, head of research integrity at open-access publisher Hindawi. This, he adds, is particularly apparent when authors split their work into several smaller publishable units – a practice known as “salami slicing”.

Flatt and colleagues hope the s-index doesn’t come across as a “crude measure of bad behaviour”, as a high s-index can also indicate a consistent and illustrious career. After all, you can only cite yourself a lot if you’ve already published a lot – so to some extent a high s-index can be a sign of high productivity. “If self-citations are scientifically warranted, there is nothing wrong in having a high s-index,” the authors explain, but “the best arbiter to spot whether self-citations are unnecessary or excessive is another expert in the same field”.

No metric to rule them all

As for Hirsch, he likes the idea of an adjusted h-index, which, he says, has been proposed before. However, he finds the s-index to be uninformative at best and misleading at worst. In his view, peer reviewers should be responsible for spotting missing relevant citations and flagging irrelevant self-citations, and, taken in context, the h-index and related metrics remain a useful evaluation tool.

But Sidney Redner, a statistical physicist at the Santa Fe Institute in New Mexico who has previously written about metrics and the h-index (arXiv:1002.0878), notes that a publication may be cited because researchers disagree with its conclusions, or for other hard-to-quantify reasons, which makes citations a potentially misleading indicator of quality. For example, a 2005 paper (Science 307 426) about a protein that allegedly mimicked the effects of the hormone insulin was retracted in 2007 but has nevertheless been cited more than 1100 times, with many of the citations occurring after the paper was pulled. The article tops the “10 most highly cited retracted papers” list published by the research-integrity blog Retraction Watch, where I used to work full-time as a reporter.

Citation indices might seem arbitrary, but some funding agencies and institutions use them extensively to evaluate grant applications and even to decide whom to recruit or promote. One organization that avoids metrics altogether when deciding what research to fund is the Laura and John Arnold Foundation, based in Houston, Texas. One reason for this, explains vice-president of research Stuart Buck, is publication bias, a long-standing problem in scientific publishing whereby positive results are more likely to be published than negative ones. As Buck points out, flooding the literature with positive findings means they are more likely to be cited, leading to a skewed picture of exaggerated and biased claims. Instead, when judging grant applications he tends to examine factors that indicate robustness, such as statistical power and sample sizes.

Hodgkinson also discourages the use of simple metrics to judge research. Adjusting for self-citations, however, won’t necessarily stop the h-index from being manipulated, he says, since scholars may do deals to exchange citations with one another, a problem known as “citation stacking”. At Hindawi, staff actively monitor “citation concentrations” towards particular researchers and journals, Hodgkinson says, and then work with authors to reduce them. Indeed, a case of citation-boosting earlier this year led to resignations at the European Geosciences Union. Hindawi also discourages peer reviewers from suggesting that authors cite the reviewers’ own work, a practice that Hodgkinson notes is also discouraged in the reviewer guidelines of the Committee on Publication Ethics.

Ultimately, “the s-index should prompt people to think differently about how to assess scientific qualities”, Flatt and colleagues say. “Numbers are extremely useful, but we must always take them with a pinch of salt.”
