Friday, Oct 2, 2009

Peer review

David Appell has picked up my comments on his comments on peer review. To recap somewhat, David suggested that McIntyre's findings on Yamal should not be taken seriously because they are not peer reviewed. I pointed out that Einstein and Watson and Crick were not peer reviewed either, to which David has now responded:

Steve McIntyre isn't Einstein. Enough said.

In technical terms, this is what is known as a "straw man". The point at issue was whether Steve McIntyre should be taken seriously, not whether he is Einstein.

Given that David has not disputed that Einstein, Watson and Crick were not peer reviewed, I think we can probably now agree that peer review is not a suitable criterion for deciding if an idea should be taken seriously.

David then goes on to say that Einstein, Watson and Crick were published in the best journals of their day. This is a better point, but I think it's hardly persuasive. If the papers passed the review of an editor instead of a pair of peer reviewers, what does that amount to other than another kind of peer review?

Lucia makes some pertinent comments on the need for peer review today too:

...these communications about published papers happen in both formal and informal settings. Historically, no one has said, “Oh. But who cares about Prof. X’s opinion about paper B. He only said it in a conversation at a conference. Until he writes a journal article, I’m not going to pay attention to that opinion."

And besides, if we should ignore McIntyre's comments because they are not published in a journal, hasn't David shot himself in the foot by quoting, in his very next post, the responses of Briffa and the Real Climate team, none of which were (a) peer reviewed or (b) published in a journal?


Reader Comments (24)

Well, Crick was not Einstein, nor was Watson; for a bad argument, enough has been said.

What a foolish thing to write.

Oct 2, 2009 at 4:47 PM | Unregistered CommenterJohn

If you cannot attack the findings, attack the man, or his antecedents.

Claiming that Steve McIntyre's findings should be ignored because they are not 'peer reviewed' is a compliment. It means that the AGW team cannot fault the science.

Incidentally, the problem Steve McIntyre had with the Yamal data was that it was not published. So it could never have been 'peer reviewed'. This point alone is enough to show that peer review is a broken concept, and code for 'not invented by us'.

Oct 2, 2009 at 5:23 PM | Unregistered CommenterDodgy Geezer

NOAA and CRU data are not peer reviewed, are they?

Oct 2, 2009 at 6:50 PM | Unregistered CommenterGene L.

Compared to the fifth-rate physicists who perpetrate Climate Scientology, McIntyre has a whiff of Einstein about him.

Oct 2, 2009 at 6:51 PM | Unregistered Commenterdearieme

Peer review is inherently flawed as a method of assessing the merits of a claim that upsets the status quo. Peer reviewers are likely to lose by allowing the opposing work to be published, *especially* if it is correct, since it will reflect very poorly on them as well as on the field as a whole. Unfortunately, it seems the human factor prevents intellectual integrity from prevailing in such cases much of the time.

Oct 2, 2009 at 7:19 PM | Unregistered CommenterKurt

The official "Peer Review" process was once valuable, but appears to have become laced with too many conflicts of interest. Possibilities include: (a) the publisher and editorial team might find it uncomfortable to come under scrutiny if they allow something to go to press that actually contains fraudulent data or mere stupid errors; (b) the reviewers themselves may not be professional in their conduct because they feel threatened by new data that appear to refute their own pet results or processes; and (c) the potential that researchers -- as either reviewers or authors -- may suffer financially or in terms of "prestige" from a publication suggesting they are not nearly as good or insightful as another author (even setting aside the potential for actual fraud). Add to that the fact that peer reviewers are not compensated and may not have access to the underlying data associated with the paper, and you can see that its value is really limited. In many ways, the posting of analyses such as McIntyre's in public forums provides increased transparency, though it can also be littered with pitfalls (e.g., censoring of adverse comments, ad hominem attacks by either article/research authors or commenters, etc.).

For an interesting view of the way comments on peer reviewed papers can be treated -- with apparent political or self-serving interests at the top of the list of things driving the process -- see this post (it is NOT climate related):
http://www.scribd.com/doc/18773744/How-to-Publish-a-Scientific-Comment-in-1-2-3-Easy-Steps
This case involves a situation in which a well-published author recognized a possible error in the paper by another author and sought to ask about it, which is what the comment process is all about. In the course of things, one of the comment author's graduate students may have been blocked from an employment opportunity due to misperceptions of the validity of her work. By my count, the delays due to waiting amounted to about 14 months, and the entire process at least 18 months. Absurd.

Note that there is an extended version of this comment that was never allowed to be published, owing to what appears to have been editorial misconduct, as well as a description of the issue for the layman.

Oct 2, 2009 at 7:50 PM | Unregistered CommenterGene L.

Appell gets it spectacularly wrong:
http://davidappell.blogspot.com/2009/10/apology-for-yamal-accusation.html

Do you reckon he will apologise?

per

Oct 2, 2009 at 9:20 PM | Unregistered Commenterper

The peer review process in this case failed miserably. When scientific misconduct occurs, does it require the blessing of peer review when the data speaks for itself?

Oct 2, 2009 at 9:31 PM | Unregistered CommenterChris

Name-calling, ad hominems and irrational tangential arguments are all to be expected when the substance of the argument can't be attacked. It's disgraceful behaviour in my opinion.

Oct 2, 2009 at 9:36 PM | Unregistered CommenterKurt

McIntyre isn't Einstein? Well, he's clearly a different person, but is he of equivalent value? Unlikely.
But are any of Briffa, Hansen, Mann and their peers who do the reviewing of equivalent value to Newton?
Newton published (at great length) all his data and all his methods; Briffa et al. have apparently lost their data, and refuse to publish their methods (at least they are selective in what they publish), so clearly they are not. Newton's work stood every test for more than a century, after full publication of both data and method. Briffa, Mann, Hansen et al. have not yet published full information, so we'll see in 150 years or so (or our heirs will), presuming they actually publish in full today.
If Briffa, Mann etc. seriously wanted to be believed they would have published in full ten years ago, so McIntyre's task is not really to match Newton (which is what Einstein did, with access to better data); he merely needs to expose a fraud that wouldn't have fooled anyone in Newton's day, not even for a second. In those days caveat emptor really meant something.

Oct 2, 2009 at 11:51 PM | Unregistered CommenterPat

Peer review is not intended to be a process that confirms that the data, reasoning and conclusions of a paper are correct. Reviewers are asked to assess whether or not the content is worthy of publication, primarily in terms of adding something new to the field, and to comment on the clarity and presentation of the arguments. Clearly, the reviewers are unable to try and replicate what may be months or years of complex research carried out on purpose built apparatus or at specific locations (depending on the subject). This is the work of other researchers after publication.

Peer review has attained an unwarranted status in the eyes of some who, presumably, are not able to assess the merits of a paper themselves and, therefore, use peer review as a guarantor of the correctness of the content of said paper.

When I read a paper in my field, I do not ask if it has been peer reviewed, I critically read it and come to my own conclusions as to its worth. The starting point should be that the paper is wrong and then allow the authors to attempt to persuade you otherwise. This is not disrespectful, nor is it accusing the authors of fraud, it is simply the best approach to reading any paper irrespective of the author. I'd expect readers of any paper or report I've written over the years to adopt a similarly critical approach.

Oct 3, 2009 at 4:12 AM | Unregistered CommenterDocBud

"Briffa, Mann, Hanson et all have not yet published full informaqtion- so we'll see in 150 years or so. (or our heirs will), presuming they actually publish in full today."

Well, we can't afford to wait that long, since if Briffa, Mann, Hansen et al. have their way there won't be a technical/industrial civilisation left to do science.

Oct 3, 2009 at 8:55 AM | Unregistered CommenterDavidNcl

To quote from a biography of Einstein:
"At first Einstein's 1905 papers were ignored by the physics community. This began to change after he received the attention of just one physicist, perhaps the most influential physicist of his generation, Max Planck, the founder of the quantum theory."

Dare I say, Michael Mann and Gavin Schmidt are no Max Plancks.
It took a decade for the physics consensus of the time to recognise Einstein's theories. And they didn't stand to lose their generous research grants or their political influence. Imagine the difficulty this will present to a fringe soft science populated by researchers who can't do basic statistics, maintain measurement equipment or provide data and methods for their papers.

The sad thing is how far they have brought down science.

Oct 3, 2009 at 12:35 PM | Unregistered CommenterHarry Snape

DocBud,

Agreed. Having acted on both sides of the peer review process, I know how it works. The basic function of peer review is to filter out papers of no interest to the journal's readership. Basic checks are done to see that the results are new, interesting, clearly and completely enough presented to replicate, and not obvious nonsense. It is NOT and never has been a check on the result's correctness. The function of journals is not to present settled science, to the extent that science is ever settled. (That's textbooks.) Journals present work in progress - hypotheses and experiments - for others to try to find errors in them, or to extend and improve them. Journals are simply a filtering/sorting mechanism to enable researchers to find work of interest to them quickly. A large proportion (maybe a third) of the papers published turn out to be wrong.

The peer review process has not failed (except in ensuring replicability), because it was never intended to stop this sort of thing happening. Publication in a journal is no guarantee of quality, let alone truth, and non-publication is not evidence to the contrary. The failure is on the part of all the other researchers who failed to challenge it. All those institutional hacks who took unverified results on trust.

If anybody argues that a result has to be published in a journal to be taken seriously, or to take part in the scientific process, then you can conclude that they are not a real scientist (i.e. a person who follows the scientific method), that they don't understand what journals and peer review are for, and their argument is tantamount to the fallacy of Argument from Authority - the very antithesis of science.

Science accepts input from anywhere. Blogs are actually a very good way of doing science, because they make the process of sceptical criticism much faster. (They could probably do with better filtering of the significant criticisms from the dross, though.) The scientific method is based on keeping only those ideas that can survive honest and determined scepticism attempting to disprove them. Ideas that have not been challenged cannot be trusted, which is why blog science happens in the comments. A blog post with no comments, or with selectively censored comments, is not science. But a high-profile blog post that can be relied upon to attract motivated and well-informed attackers can also be relied upon to provide a solid counter-argument if one exists. If none appears in a reasonable time, the idea is worth taking seriously. That's exactly how the scientific method works.

Steve McIntyre's results should be published in the journals. But this is only to make it easier for the professionals to find them, not because there is any sort of official process to doing Science. Even if the general public can't do the technical stuff, they should at least be able to understand what science is, enough to recognise this anti-science when they see it.

Oct 3, 2009 at 12:35 PM | Unregistered CommenterStevo

Word, DocBud! And ditto Stevo.

Oct 3, 2009 at 4:59 PM | Unregistered CommenterJonas N

As I recall, the original Hockey Stick was extensively peer-reviewed, so it's a bit rich for Steve McIntyre's critics to insist upon it!

Oct 3, 2009 at 9:16 PM | Unregistered CommenterJames P

The peer review process is being discredited by this whole climate change business, as is environmentalism as a whole. Which is a great shame.

Oct 4, 2009 at 1:51 AM | Unregistered CommenterBill Sticker

The real problem is the opposite.
Briffa's Yamal paper was weakest where it should have been strongest: in establishing tree ring data as a plausible proxy, which necessitates showing a strong correlation between recent tree ring data and the known temperature record. It is only when the validity of the proxy has been established that the proxy becomes operational for the estimation of temperatures where there is no temperature record (just as a new laboratory weighing scale can only be used once it has been calibrated). There is no excuse for small sample size, as the area is a large forest. (But then, maybe the researchers on the ground could not see the trees for the wood?)
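To make the calibration idea concrete, here is a minimal Python sketch with made-up numbers (a hypothetical illustration only; nothing to do with Briffa's actual series or code): fit the proxy against the instrumental record over part of the overlap period, then check that the fit holds on a withheld verification period.

    import numpy as np

    # Minimal sketch of proxy calibration/verification (invented data).
    rng = np.random.default_rng(0)

    # 100-year overlap of instrumental temperature and ring widths.
    temp = rng.normal(0.0, 0.5, 100)                # instrumental anomalies
    rings = 1.2 * temp + rng.normal(0.0, 0.4, 100)  # proxy = signal + noise

    cal, ver = slice(0, 70), slice(70, 100)         # calibrate on 70 yrs, verify on 30

    # Fit ring width -> temperature on the calibration period only.
    slope, intercept = np.polyfit(rings[cal], temp[cal], 1)

    # The proxy is only usable if the fit also holds on the withheld data.
    pred = slope * rings[ver] + intercept
    r = np.corrcoef(pred, temp[ver])[0, 1]
    print(f"verification correlation: {r:.2f}")     # should be clearly positive

Only once a check like this succeeds does it make sense to run the regression backwards over centuries with no instrumental record.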
In the competitive world of academia, immersed in all the detail, it is easy for individuals to lose sight of this basic premise - that the whole thesis may, principally, rely on a single, rogue, sample point. In Briffa's case, maybe he could not see the wood for the tree rings? That is why we have peer reviews to perform these checks. In an empirical paper, it is not just the hypothesis formulation, the math and the method, but also the data and the statistical analysis thereof that need checking.
The fact that these checks were not carried out shows a fundamental weakness in the peer review process. There is a parallel with the Enron collapse. This showed that the audit by their accountants did anything but establish a "true & fair view". Indeed, Arthur Andersen's management consultants had assisted the Enron executives in presenting a picture that was anything but "true & fair".
What should now be happening (especially after the earlier Mann Hockey Stick was undermined by McIntyre) is for learned journals to try to salvage their reputations by
a) including basic statistical & data checks into the peer review process.
b) calling in some statisticians to re-check past papers. The starting point would be any papers that purport to show a hockey stick, or that rely on hockey-stick papers.

Oct 4, 2009 at 2:57 AM | Unregistered CommenterVincent Shand

I have left a comment on Appell's blog... In the fairly certain expectation that it won't survive moderation, I repeat it here for your "edification". :-)

"I think that it's worth remembering that, in reality, "peer review" is little different from getting school kids to mark their friends' exam papers."

(My opinion is based upon being on both sides of the peer review process a few times).

Oct 4, 2009 at 6:39 PM | Unregistered CommenterPogo

Vincent,

Yes, but the validity of the proxy has to be established by means independent of the contents of the data. Otherwise it's circular reasoning. It's the same sort of reasoning as when you modify your hypotheses in light of the results, and then declare the hypotheses experimentally confirmed.

To be fair to Briffa, the original intention for this data was slightly different. Two Russians, Hantemirov and Shiyatov, selected the trees (or a related set, it's a bit unclear) but processed them in a way that was known to hide long-term trends. Briffa simply wanted to know what it would look like if you processed them the conventional way, which brings out the trends.

The problem was, the selection H&S made is unsuitable for the standard method (too small a sample, and biased), and Briffa should have realised that and commented on it at the time, and nobody should have used it in any of the subsequent studies. Given the other data from the area, the hockey-stick result of applying the standard method was highly suspicious, and should have led to a careful investigation of exactly what had gone wrong. It was probably confirmation bias.
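To see how biased sampling alone can manufacture a hockey stick, here is a toy Python demonstration (pure invented noise; nothing resembling the actual Yamal data or any published method): screen random series for agreement with a recent warming trend, and the average of the survivors acquires a spurious modern uptick.

    import numpy as np

    # Toy demonstration of selection (screening) bias: noise in, hockey stick out.
    rng = np.random.default_rng(1)
    n_series, n_years, recent = 1000, 500, 50

    # 1000 random walks containing no temperature signal whatsoever.
    proxies = rng.normal(0, 1, (n_series, n_years)).cumsum(axis=1)

    # Suppose the instrumental record shows warming over the last 50 years;
    # keep only the series that happen to agree with it.
    target = np.linspace(0, 1, recent)
    keep = [p for p in proxies
            if np.corrcoef(p[-recent:], target)[0, 1] > 0.5]

    composite = np.mean(keep, axis=0)
    rise = composite[-recent:].mean() - composite[-2*recent:-recent].mean()
    print(f"{len(keep)} of {n_series} noise series survive screening")
    print(f"composite rise into the screened window: {rise:+.2f}")

The survivors' composite rises sharply in the screened window even though no individual series contains any signal at all.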

Interestingly, if you read H&S's original paper that the data came from, they report a reconstruction of the treeline (the northernmost edge of the forest), which logically seems more likely to be a possible temperature proxy (although of course all the same caveats apply; other things can cause it to move). According to their reconstruction, the treeline was several kilometres further north around 1200 AD, suggesting it was warmer then.

But that's just an amusing point, not one I would attach any significance to. Even if trees were perfect thermometers, the temperature in any one location goes up and down too noisily to discern any supposed climate change. You have to average temperatures on at least a continental scale to even think of detecting this signal. And you need comprehensive global measurements to be sure the global temperature is doing the same.

How many trees did they measure in the Pacific ocean, or Antarctica, or the Southern Ocean, from 1000 years back, to be able to make that sort of assertion? To a fraction of a degree, no less?

The idea that you can measure the summer temperature at a few hundred spot-locations in various forests and get a global average accurate to a tenth of a degree is simply absurd. The idea that you can measure the temperature that precisely using trees is barking insane. You would have to be howling at the moon to even think of trying it. This is real random chicken-entrails stuff.

There is this peculiar myth, among those with a dangerous smattering of statistics, that you can get unlimited accuracy simply by averaging more and more measurements; and, among those impressed by clever algorithms, a belief that you can fill in the gaps in data almost perfectly using correlations. Thus they fill in the gaps constituting 99.99% of the world, and apply cleverly weighted averages that weight most heavily the answers they know they're supposed to get. Statisticians argue back and forth over the mathematical minutiae, which is fun for statisticians, but ultimately pointless, because it should be obvious to any statistician after ten minutes' study that the data could never provide what is claimed for it. The hypothesis might be either true or false; this data simply does not contain that information.
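A quick Python illustration of the averaging point, with purely invented numbers: averaging beats down independent noise as 1/sqrt(n), but any error component shared across the measurements sets a floor that no amount of averaging removes.

    import numpy as np

    # The std of a mean shrinks as 1/sqrt(n) only for independent errors;
    # a shared error component survives averaging untouched.
    rng = np.random.default_rng(2)
    n_trials = 500

    for n in (10, 100, 1000, 10000):
        shared = rng.normal(0, 0.3, (n_trials, 1))  # common bias in every measurement
        indep = rng.normal(0, 1.0, (n_trials, n))   # independent per-measurement noise
        means = (shared + indep).mean(axis=1)
        print(f"n={n:>5}: std of mean = {means.std():.3f}, "
              f"independent-only prediction = {1.0 / np.sqrt(n):.3f}")

    # The observed std flattens near 0.3 (the shared component) while the
    # 1/sqrt(n) prediction keeps shrinking toward zero.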

Oct 4, 2009 at 11:24 PM | Unregistered CommenterStevo

I wish they would stop that peer review nonsense in relation to internet publications and blogs. Either the results are sound or they aren't.
Just some anecdotes (peer reviewed, of course): I remember vividly how one referee of my paper liked part of the results and said so, while the second one demanded in no uncertain terms that exactly the same part should be abolished. What to do then? I left that part standing and asked the editor to step in.
Or another thing - when searching a DNA sequence database for sequences related to some fungal species, to my great surprise 4 sequences allegedly from macaque monkeys jumped out (among the fungi). I searched for the paper (in a Pakistani peer-reviewed biological journal) and even the total discrepancy between the "macaca" and chimp sequences did not alert the authors AND the referees to the fact that something might be very wrong. I wrote to the journal editor, but my letter seems to have been ignored.
So what to think about the peer-review process? Certainly it isn't the ultimate oracle of correctness...

Oct 5, 2009 at 12:26 AM | Unregistered CommenterEW

Really awesome post!! I like it very much!! Great work. I completely agree with Vincent. Thanks

Oct 6, 2009 at 8:56 AM | Unregistered Commenterernahrung

Are not Steve McIntyre's commentary and findings the very definition of peer review? Are we going to demand peer review for peer reviewers next?

When it works, peer review is about finding errors, nitpicking the details and ensuring that the methodology is sound before publication. When it fails peer review is no better than a group of 15 year old cheerleaders deciding who to invite to their party. McIntyre is being criticized because the cheerleader set already decided not to invite him and he showed up anyway. Hell hath no fury like a clique of offended researchers and overly dramatic cheerleaders.

-AG

Oct 12, 2009 at 4:55 PM | Unregistered CommenterAaron Gee

Where is it that any peer reviewer specifically warrants that what they have reviewed for subsequent publication is "true"?

In fact, peer reviewers can do anything they want as their "review", which should already have been clear to everyone reading publications.

Likewise, it should be fairly obvious by now that the most significant peer review takes place after a paper is published.

In addition, how could peer reviewed publications possibly publish everything worth publishing?

How can anyone say a priori that anything published anywhere without prior peer review is false or not worth considering?

Fortunately, we're all still required to make our own assessments.

And I see that DocBud has already made some of these points much better than I have.

Oct 28, 2009 at 8:26 AM | Unregistered CommenterJ. Peden
