16 March 2007

Publish AND Perish

By Rusty Rockets

The prestige that a small number of scientific journals have attained over the years has made them into authorities that scientists continually refer to, but what exactly is this scientific kudos founded upon? Currently, the main determinant of a journal's standing is a ranking system called the "impact factor" (IF), which is updated each year and published on the Institute for Scientific Information's (ISI) website. Critics have repeatedly argued that the IF is a very poor way to gauge the scientific quality of a journal. If that is true, it is hard to understand why the IF continues to be used as a measure of scientific merit. Of even greater concern is that the IF may actually undercut the validity of the entire peer-review system.

The IF was developed in 1960 by American scientist and ISI founder Eugene Garfield, after he found that his earlier Science Citation Index (SCI) provided a good way to measure the significance of scientific journals. Put simply, a journal's IF is calculated over a three-year window: it is the number of citations received in a given year by the journal's eligible articles (full papers and reviews) published during the previous two years, divided by the total number of those eligible articles. To give some idea of the scale, a journal's IF usually lies somewhere between 0 and 30, with higher scores signalling greater prestige. But is this a fair measure of a journal's scientific importance? According to Gareth Williams, Dean of the Faculty of Medicine and Dentistry at the University of Bristol, UK, the answer is an emphatic "no".
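As a rough illustration of that arithmetic, here is a minimal Python sketch; the figures are invented for the example and don't describe any real journal.

# Hypothetical 2005 impact factor: citations received in 2005 to items
# published in 2003 and 2004, divided by the number of eligible articles
# published in those two years. All numbers below are made up.
citations_2005_to_2003_papers = 1200
citations_2005_to_2004_papers = 1500
eligible_articles_2003 = 210
eligible_articles_2004 = 190

impact_factor_2005 = (citations_2005_to_2003_papers + citations_2005_to_2004_papers) / (
    eligible_articles_2003 + eligible_articles_2004
)
print(impact_factor_2005)  # 6.75 for these invented figures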

Williams believes that the IF system "has become the global currency for a journal's scientific standing and, by implication, of the papers it publishes." He points out that big-name journals like to parade their high IF rankings, which, he adds, may convince some people that the IF is both "credible and important." But nothing could be further from the truth in Williams' view; he believes that only the most naïve of individuals could take the IF seriously or attach any value to it.

Williams has joined a chorus of critics who provide multiple reasons (including self-citation, bias toward English-language papers, citation of reviews, and the length of the citation period) why the IF is defective and should be scrapped. Since its introduction, there has been something of an obsession with prestige over content, and examples of the IF system being gamed are not hard to find. Reporting in the British Medical Journal (an IF of 9.0 in 2005), journalist Hannah Brown relates the story of Dr Lundberg, who took over the once popular Journal of the American Medical Association (JAMA) in 1982 and needed to restore the prestige of a journal that had been in decline since the 1960s.

"Recognizing that impact factors were derived from citations, Dr Lundberg reasoned that chasing high profile authors and institutions could help boost JAMA's rank and, therefore, its reputation," writes Brown. "He instructed his editorial team to seek out studies that had the potential to become staple references in other papers and try to woo the authors into submitting to JAMA. 'We were looking for prestige,' Dr Lundberg recalls."

Whether it was Dr Lundberg's IF strategy or the enthusiasm that comes with a fresh appointment, JAMA's IF rating nearly trebled in a short space of time.

But if the IF is nothing more than a crude algorithm open to manipulation, why does the academic community continue to support it? According to Williams, none of the answers to this question are at all flattering, as they offer a rare glimpse at the ugly underbelly of academia. The continued existence of the IF "feeds off three attributes that no academic could be proud of - gullibility, intellectual sloppiness, and vanity," says Williams.

Williams' critique is damning, as it implies that scientists are either too credulous, too inept, or too self-indulgent to do anything about the IF situation. But Williams sees yet another reason why nobody wants to tackle - or even notice - the 300 lb gorilla that sits alongside the IF.

Unless you're a scientist in the employ of lobbyists who in turn work on behalf of another interested party, finding funding for your research is a tough gig. So it comes as no surprise, then, to find that some scientists will exploit inherent weaknesses in the IF system to bolster their profile with the aim of attracting more funding. "Nowadays, many applicants for jobs or promotion tag their publications with the journal's impact factor, and there is a risk that impressionable assessors might take this seriously," explains Williams.

But Williams goes further, saying that IF scores can make or break careers. "Of much greater concern is evidence that the impact factor profile of individual academics is used by universities and funding bodies to determine employability and grant support - even though this is scientifically indefensible," he states.

But even if scientists recognize the IF system as rotten, it's unlikely to be fixed anytime soon, thanks to the inertia it has established: "Hey, why shouldn't I do it? Everyone else does, after all."

Williams goes deeper into the innards of scientific publishing and shows how an imperfect peer-review system coupled with the IF can feed back to give a paper more status than it is actually due. He says that "every scientist knows" how peer review can push a "not so good" paper into a "good" journal and vice versa. That this situation can arise again and again rests, bemoans Williams, on the "fatally flawed" assumption that "the stature of an individual paper equates to the impact factor of the journal in which it appears."

Even the journal Nature (an IF of 29.3 in 2005) has acknowledged this, explaining in a 2005 editorial: "we have analyzed the citations of individual papers in Nature and found that 89 percent of last year's figure was generated by just 25 percent of our papers." The editorial went on to say that only 50 out of roughly 1,800 eligible papers published in the two-year period received more than 100 citations, and that "the great majority of our papers received fewer than 20 citations."
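To see why a journal-level average says so little about a typical paper, consider this small Python sketch; the citation counts are invented and only illustrate how a few heavily cited papers can dominate the mean.

# Invented citation counts for 20 papers in one journal: three papers
# attract most of the attention, the rest are rarely cited.
citations = [250, 120, 60] + [5] * 17

mean = sum(citations) / len(citations)           # the kind of average an IF reflects
median = sorted(citations)[len(citations) // 2]  # what a typical paper receives

print(mean)    # 25.75
print(median)  # 5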

Unfortunately, it seems, the peer-review process, the IF and research funding all currently go hand-in-hand, leading Williams to conclude: "The impact factor is a pointless waste of time, energy, and money, and a powerful driver of perverse behaviors in people who should know better. It should be killed off, and the sooner the better."

But while there are many who would like to see an end to IF, there are others who are suggesting that we shouldn't throw the baby out with the bathwater. One of them is Richard Hobbs, head of the Primary Care and General Practice Department, University of Birmingham, UK, who says that while the IF system is flawed, all it really needs is a thorough overhaul.

Like Williams, Hobbs can point to numerous failings of the IF system: only 2.5 percent of journals are monitored; not all disciplines routinely cite others' work; specialist clinical journals are cited less than non-specialist ones; citation is selective; there are country-of-origin citation biases (most notably favoring the US); and imbalances arise between journals that publish weekly and those that publish monthly. But in spite of all of these negatives, Hobbs says there still needs to be a way of assessing the relevance, value and significance of papers and journals.

"It's easy to criticize bibliometrics, but we should attempt to refine them and debate in parallel how we can track academic careers and encourage fewer, but better studies that affect the wider community," says Hobbs. Hobbs makes a number of suggestions, which include extending the citation surveillance period, applying weightings to adjust for the average number of references across journals, and scoring journals on only their most important papers. "After all, says Hobbs, "development of more complex citation scoring was advocated [to avoid citing unreliable studies and deepen historical meaning] by Eugene Garfield, the father of impact factors, in his original 1955 paper."

Eugene Garfield was obviously well intentioned when he devised the IF, but if Williams is correct, then the IF system is being played like a fiddle by many researchers. Perhaps it's finally time to yank the IF security blanket out from under the scientific community, give it a good shake, and see what happens.

Related articles:
The War On Science
A Spoonful Of Science Helps The Climate Change Go Down
Medical Journals Too Influenced By Pharmaceutical Marketing