Saturday, November 30, 2013

Peer Review, Impact Factors, and the Decline of Science

[I've had interesting tabs open for weeks waiting for time to report on them. Sorry for the delay. -dww]

The Economist reported on October 19, 2013 (pp. 21-24) that there is "Trouble at the lab". Indeed. And trouble has been brewing for quite some time, without a single identifiable culprit or an easy way to solve the problem. The trouble involves predatory publishing, the irreproducibility of scientific results, and the use of quantitative metrics as a proxy for judging quality.

University administrations, search and tenure committees, governments, funding agencies, and other bodies need some way of judging people they don't know in order to decide whether to offer them jobs, promotions, or funding. This has often boiled down to counting the number of publications, or the impact factors of the journals in which their articles appear. Coupled with the crisis in publishing, in which journal subscription prices have exploded, this makes for an unhealthy mix.

Predatory publishers promise quick publication in good-sounding "international" journals, using the Open Access "golden road" to extract fees from authors. They promise peer review, but if they review at all, they seem to check only the formatting. Established publishers trying to keep up their profits have incorporated more and more journals into their portfolios without keeping a watchful eye on quality control.

Enter John Bohannon. In October 2013 Bohannon published an article in Science, Who's Afraid of Peer Review? He details a sting operation that he conducted between January and August 2013, submitting 304 papers with extremely obvious deficiencies to journals that he chose both from Lund University's "Directory of Open Access Journals" and from Jeffrey Beall's list of predatory publishers.

Bohannon has put his data online, showing that 82% of the journals chosen from Beall's list accepted the fabricated paper, as did 45% of the journals on the DOAJ list. Predictably, DOAJ is not amused and has accused Bohannon of, among other things, racism, because he chose African-sounding names for the authors (1 - 2).

In August 2013, Nature journalist Richard van Noorden detailed a scheme called "citation stacking", in which a group of publishers collude to cite extensively from each other's journals in order to inflate their impact factors while avoiding the sanctions imposed for excessive self-citation. It is a variation on coercive citation, described in Science in 2012 by Allen W. Wilhite and Eric A. Fong, in which authors are instructed to cite a publisher's own journals in order to increase the so-called impact factor. Van Noorden's article focused on a group of Brazilian journals, so he, too, was accused of racism. This is unfortunate, as it detracts from a very serious problem.

We find ourselves today in a rapidly expanding world, with scientific research being conducted in many different places and much money being invested in producing results. People need publications, and have little time for doing peer review, a job that is generally unpaid and performed as a service to the community. Universities in countries without a tradition of rigorous scientific practice have researchers who need publications, and there are people out to make money any way they can. Researchers competing for scarce jobs in countries that are trying to spend less on science and education than they have in the past are also sometimes tempted to follow the path of least resistance and publish with such journals. And some are simply not aware that they have selected a publication that merely sounds like one that is well respected, as Beall has noted.

I don't have a solution to offer, other than boycotting the use of quantitative data about publications and getting people to be aware of the scams going on. We need to get serious about peer review, embracing such concepts as open access pre- and post-publication peer review in order to get more rigor into the publication process. I realize that people have been complaining about the decline of science since at least Charles Babbage (Reflections on the Decline of Science in England, And on Some of Its Causes, 1830). But we are in grave danger of letting bad science get the upper hand.

And what happens to those who try to point out some of the dicier parts of science? Nature just published another article by van Noorden, together with Ed Yong and Heidi Ledford, Research ethics: 3 ways to blow the whistle.

Update 2013-12-01: fixed a typo

1 comment:

  1. A striking 2010 study by Douglas Arnold and Kristine Fowler shows examples of h-index and impact factor manipulation within the traditional publication sector. These seem to have occurred due to an absence of high-level editorial oversight, and I think they are relatively rare in that sector. Some of the mechanisms used are rather subtle and ingenious, and are no doubt being further improved in the open-access sector!


Please note that I moderate comments. Any comments that I consider unscientific will not be published.