The decision whether a scientific article is published is made through peer review at most journals that have (or aspire to) an international reputation in their discipline (Arms 2002). Manuscripts submitted to an editor are first judged on their basic suitability for the journal and, if suitable, forwarded to at least two experts on the topic. This pre-publication peer review is aimed at enhancing the quality of scientific output. In its classic form, the editor sends the article to two people who are expected to assess it against several criteria: for instance, the way in which the article is written, the validity of the arguments, and the interpretation of the data. Based on these two reviews, the editors then decide whether the article is suitable for publication in that particular journal. Over the past decades, however, this system has come under criticism: peer review is considered an expensive and wasteful procedure, slow, biased, and poor at detecting fraud. Out of this critique grew the open access movement, a related but quite different topic to dwell upon.
Smith (2006) summarized the weaknesses of peer review as: a) slow and expensive: it takes time to find suitable reviewers willing to do the work, to do the reviewing, and to process the results; b) inconsistent: the peer review process is essentially subjective; c) biased: studies with negative results or with authors from less prestigious institutions are more easily rejected (it should be noted that Smith drew this conclusion in a medical context); d) open to abuse: plagiarism is an obvious example. To remedy these defects, several solutions have been put forward: standardizing procedures; opening up the process; blinding reviewers to the identity of authors; reviewing protocols; training reviewers; being more rigorous in selecting and deselecting reviewers; using electronic review; rewarding reviewers; providing detailed feedback to reviewers; using more checklists; or creating professional review agencies (Smith 2006, Hauser & Fehr 2007). Bornmann et al. (2010) suggested that the most important weakness of the peer review process is the different ratings given to the same submitted manuscript by different reviewers, which highlights the inconsistency of reviewers' verdicts. In a meta-analysis of a number of inter-rater reliability (IRR) studies, Bornmann et al. examined factors influencing IRR. They included a variety of disciplines and a number of covariates, e.g. the number of manuscripts, the method used to calculate the IRR, the review system (single-blind or double-blind), and the rating system used by the reviewers (metric or categorical). One of their conclusions was that whether or not the rating results are reported provides information about the quality of a study.
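To make the inter-rater reliability idea concrete: agreement between two reviewers is commonly expressed with a chance-corrected coefficient such as Cohen's kappa (one of several IRR measures covered in meta-analyses like Bornmann et al.'s). The sketch below is a minimal illustration only; the reviewer verdicts in it are invented for the example, not taken from any study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(ratings_a)
    # observed agreement: fraction of manuscripts with identical verdicts
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # expected agreement if the two raters judged independently,
    # based on how often each rater uses each verdict category
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# hypothetical verdicts from two reviewers on six manuscripts
r1 = ["accept", "minor", "major", "reject", "minor", "accept"]
r2 = ["accept", "major", "major", "reject", "minor", "minor"]
kappa = cohens_kappa(r1, r2)  # ≈ 0.56: moderate agreement beyond chance
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance; the point Bornmann et al. make is that values observed in real peer review are often disappointingly low.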
As a reviewer I regularly receive requests from several journals, usually for taxonomic papers related to the Neotropics. I always consider such requests as something potentially interesting and an opportunity to learn (content-wise or people-wise). Although I haven't kept track of all final results, some observations can be made.
First, a review is done voluntarily and always costs time. Sometimes, when the paper is good, it can be done rather quickly, but it still takes at least one hour to critically read the manuscript, look up some data, and write up the review. Sometimes, when the paper is not so good, it needs considerably more time; not only to digest the manuscript but also to carefully phrase the comments. Although personally I don't mind being disclosed as a reviewer, editors usually pass on the reviews anonymously using a blinding procedure. Some authors acknowledge anonymous reviews (polite colleagues); others don't mention the reviews at all and suggest the quality of the paper was their own merit, even after severe criticism and a major revision (haughty people). One gradually learns who is a 'jewel' and who is a 'rotten apple'. Although publication of the review results (e.g., one reviewer suggested a minor revision, one reviewer a major revision) would certainly add to transparency, it still leaves the 'black box' of the editorial decision process. After all, the editor(s) have their own responsibilities and may ignore a review when they dislike, e.g., the negative response of a reviewer.
Second, there is a creeping trend to inflate the number of authors, also in taxonomy and related papers. Given the policies in many countries and at many institutions worldwide, this may be understandable to a certain extent. But you have to admit that it creates perverse incentives. I even fear that these bad habits lead to bad statistics and encourage bad science. Some journals, usually open access ones (e.g., PLoS ONE), have a standard section on 'author contributions' in their papers. This may be a way to more transparency and, even though cheating will always be possible, it may discourage the addition of superfluous co-authors. Although it is more of a custom in experimental studies, I can imagine that in taxonomy 'outline of the paper' could be used instead of 'design of the experiment'. My suggestion is that more journals include this in their standard procedure, as it is informative for both reviewers and readers.
Acknowledgements. I would like to thank Timo Breure for the impetus for this post. Author contributions: outline BB, data BB TB, writing BB TB.
Arms, W.Y. (2002). What are the alternatives to peer review? Quality control in scholarly publishing on the web. The Journal of Electronic Publishing 8. Available at http://bit.ly/Hqq2rG.
Bornmann, L., Mutz, R. & Daniel, H.D. (2010). A reliability-generalization study of journal peer reviews: a multi-level meta-analysis of inter-rater reliability and its determinants. PLoS ONE 5(2): e14331.
Hauser, M. & Fehr, E. (2007). An incentive solution to the peer review problem. PLoS Biology 5(4): e107.
Smith, R. (2006). Peer review, a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 99: 178-182.