Peer review isn't working
I'm grateful to reader Steve for pointing me to this article by Carl Phillips, an epidemiologist, who is looking at the efficacy of peer review. The whole article is worth a look, but here are some choice quotes:
Do the reviewers ever correct errors in the data or data collection? They cannot – they never even see the data or learn what the data collection methods were. Do they correct errors in calculation or choices of statistical analysis? They cannot. They never even know what calculations were done or what statistics were considered. Think about what you read when you see the final published paper. That is all the reviewers and editors ever see too. (Note I have always tried to go the extra mile when submitting papers, to make this system work by posting the data somewhere and offering to show someone the details of any analytic method that is not fully explained. This behavior is rare to the point that I cannot name anyone else, offhand, who does it.)
Does this mean that if you just make up the data, peer review will almost certainly fail to detect the subterfuge? Correct.
Does this mean that if you cherry-pick your statistical analyses to exaggerate your results, peer review will not be able to detect it? Correct.
But it serves just fine for justifying the uprooting of the economy.
Reader Comments (20)
Back when I was doing science, one had to publish what you did and how you did it, with enough detail that somebody else could replicate your work. Then everyone waited to see whether somebody else was able to replicate your results independently.
Seems to me that would still work.
Just wait for the first to cry “He takes money from the tobacco industry”.
TobaccoHarmReduction.org.
http://www.tobaccoharmreduction.org/
FAQ
[8.3] Why should we believe you; we hear that you get money from the tobacco industry?
http://www.tobaccoharmreduction.org/faq/authority.htm#73
How are research scientists supposed to get grant funding for subsequent research if their current project is not producing the results they promised in their application for funding?
Science would be better served if ALL results had to be made available, not just the ones cherry-picked by the researcher.
Strangely, the simple industrial R&D practice of signing lab books etc. seems to provide a better model than the academics have been able to develop - despite all that taxpayer-funded academic freedom.
What Don Pablo de la Sierra says is still current in my discipline. You simply have to provide enough data and experimental detail to the reviewers that they can replicate your results with no further input. Not only that, but some reviewers make a point of doing this, if only to be mischievous. It is clear to me that climate science is, in general, weak in this area, and this weakness should be more widely disseminated.
Across articles I have submitted for peer review, I have noticed that papers more or less coincident with accepted views are accepted more readily than provocative ones that depart somewhat from the ruling orthodoxy. Whenever I present results or interpretations at variance with commonly held views, I get either outright dismissal or exhaustive questioning and argument, sometimes extending over four or five rounds of tiresome reviewing and editing of the article. On several occasions the paper has ultimately been rejected. On one recent occasion, after I had satisfactorily responded to a large number of questions and objections, my paper was rejected by the editor, who explained that he had not noticed before that its subject and approach did not exactly fit the remit of the journal in question.
Application of a good QA system, involving independent verification, would solve the problem of inadequate peer review.
Sadly, the problems of peer/pal review are not new, nor are they confined to climate science as this article from the Observer in 2003 demonstrates:
http://www.guardian.co.uk/society/2003/dec/07/health.businessofresearch
One has to wonder if the Royal Society would be better talking to Dr. Phillips, than chasing round after climate sceptics, who have been trying to make exactly the same point for years.
In my industry the client will either QC the work with their own technical specialists or, if they do not have anyone suitable within their own staff, they will pay an independent industry expert to QC the work. The QC will ask tough questions and make you work to justify your technical decisions and conclusions.
I attended the Battle of Ideas event last year on:
End of the peer-review: has the peer-review process lost credibility?
You might like to check out the audio file here
http://www.battleofideas.org.uk/index.php/2010/session_detail/4088/
I checked the article. While I suspect that the conclusions are mostly accurate, the article is an opinion piece based on the author's experience - which may or may not be broad-based and/or extensive. I prefer the empirical work of John Ioannidis - written up in the Atlantic - http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269
Bernie - yes, I had read the Atlantic article but didn't have the link to hand. The point about the 2003 article is that it shows this is not a new problem, in particular with regard to the close ties between the pharmaceutical industry and the medical science journals.
As you probably know, Judy Curry discussed the issue in November - http://judithcurry.com/2010/12/14/lies-damned-lies-and-science/ . Other links:
Atlantic Magazine: http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/
New Yorker article at: http://crayz.org/science.pdf
Wired Magazine: http://www.wired.com/wiredscience/2010/12/the-mysterious-decline-effect/
golf charley
I agree.
The British Medical Journal introduced a system whereby authors of prospective papers submit to the journal their research protocols (study design, patient selection, etc). If the protocols are approved by the journal, then the subsequently submitted manuscript is ensured publication - regardless of the result (treatment effective or not).
I used to be a medical researcher, and my job now includes managing an ISO 9001 QMS, which gives me a quality perspective.
Peer review has no quality assurance or audit trail on the data - you could make up part of a study, or the study entirely, and as long as you analyse it correctly and the results align with mainstream opinion or that of the reviewer, the reviewer will accept it. Since most studies only add small incremental information, you could even guess at the incremental results, especially if you are part of the current groupthink, and your results may even be reproduced - rightly or wrongly.
Essentially, what is the reviewer assuring? I suggest this:
Assuming the data was collected, recorded and processed according to the explicitly stated and implied methods, which I am not in a position to verify, the claimed methods, the analysis of the presented summarised data and the conclusions drawn are what I personally consider to be adequately correct.
ACADEMIC research is honesty-based. Once commercial interests are injected and public safety is involved, society insists on REGULATED research, as with medicine, aviation and the military, where the data itself is audited. The climate industry concerns the lives of millions or possibly billions - it is insane that it is not regulated, and doubly insane that it should be based on data which has been lost or repeatedly shown to have been manipulated - whether significantly or not.
Michael:
Nicely put. I would add that competitiveness among researchers and simple human failings frequently lead to dishonesty and questionable science.
In a nutshell, isn't 'peer review' basically a system which checks for spelling and grammatical mistakes...?
"isn't 'peer review' basically a system which checks for spelling and grammatical mistakes": not at my hands, matey. What I try to insist on is as clear, unambiguous and complete a statement as possible of what the authors claim. Then other workers can understand it and try to repeat it, or just demolish it for logical or factual errors. What I make absolutely no claim to do is to verify that the work is fundamentally correct. Of course if I spot a howler I'll point it out, but the notion that I have under my command a team of elves with whom I can work all hours in the attempt to falsify key chunks of a paper is a fantasy.
However, during my career it has become increasingly difficult to insist that necessary detail be published; editors cry that they have no room.
On the subject of "spelling and grammatical mistakes" I must say that one obstacle to "clear, unambiguous and complete" statements is the poor standard of scientific English. And it's no use blaming non-native speakers - they are only part of the problem.
Addendum: if you want to know why papers in PNAS are often manifestly better written than many others, consider this -
"Prior to submission, authors who believe their manuscripts would benefit from professional editing are encouraged to use a language-editing service (see list at http://www.pnas.org/site/misc/language-editing.shtml). PNAS does not take responsibility for or endorse these services, and their use has no bearing on acceptance of a manuscript for publication."
Some of Carl's comments below that thread are even more illuminating and seem very pertinent w.r.t. Ryan O'Donnell's posting on Eric Steig.
There's a newer peer-review critique up there now as well:
http://ep-ology.blogspot.com/2011/02/unhealthful-news-38-more-on-limits-of.html