Thursday, Sep 23, 2010

Responses to McShane and Wyner

Readers will remember McShane and Wyner's critique of the way paleoclimatologists handle statistics, which was widely reported some weeks back. The journal in question has invited responses to the paper and these are now online.

There is one from Schmidt, Mann and Rutherford and another from Tingley.


Reader Comments (40)

Don't we have the same problem highlighted by Wegman? These respondents aren't statisticians.

Sep 23, 2010 at 7:55 AM | Unregistered CommenterPhillip Bratby

It's funny that the hockey team is taking the p1ss out of their own data! :)

Mailman

Sep 23, 2010 at 8:46 AM | Unregistered CommenterMailman

I thought the whole point of M&W's work was that they used exactly the same data set that Mann et al used yet drew totally different conclusions from....confused.....

Sep 23, 2010 at 8:47 AM | Unregistered Commenterconfused

The first paper seems to hinge on criticising M&W's understanding of "objective selection criteria", so the objectivity of assessing the data seems to be key there.

The Tingley paper talks about the LASSO technique and claims it is inappropriate for the type of data involved. One of the key arguments seems to be:

A more scientifically sound approach recognizes that the proxies are related to the local climate, which in turn displays both spatial and temporal correlation.

In my layman's opinion, a lot of the criticism boils down to the claim that climate proxy experts with a knowledge of stats beat statistics experts with a knowledge of proxies. But it has always seemed to me that any "expertise" in climate proxies has to rest largely on the soundness of the original statistical analysis, so that claim seems circular.

I expect Jeff ID and Steve McIntyre at Climate Audit will have something to say about this.

Sep 23, 2010 at 8:49 AM | Unregistered CommenterSteve2
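For readers unfamiliar with the Lasso mentioned in the comments above, here is a minimal sketch (Python with scikit-learn, entirely synthetic data invented for demonstration) of its defining behaviour: the L1 penalty shrinks uninformative coefficients exactly to zero, whereas ordinary least squares keeps them all non-zero. It illustrates only the general technique, not anything about the actual proxy dataset.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_obs, n_proxies = 200, 10
X = rng.normal(size=(n_obs, n_proxies))        # 10 synthetic "proxy" series
true_beta = np.array([2.0, -1.5] + [0.0] * 8)  # only the first 2 are informative
y = X @ true_beta + rng.normal(scale=0.5, size=n_obs)  # synthetic "temperature"

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.3).fit(X, y)

# OLS estimates every coefficient as non-zero; the Lasso's L1 penalty
# drives the coefficients of the uninformative proxies exactly to zero.
ols_nonzero = int(np.sum(np.abs(ols.coef_) > 1e-8))
lasso_nonzero = int(np.sum(np.abs(lasso.coef_) > 1e-8))
print("OLS non-zero coefficients:  ", ols_nonzero)
print("Lasso non-zero coefficients:", lasso_nonzero)
```

Whether this kind of hard variable selection is appropriate when, as Tingley argues, the proxies are spatially and temporally correlated is exactly the point in dispute.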

So Gavin doesn't have an opinion on Tiljander, is not that knowledgeable about the field, his focus is all on the modeling and the physics, but now he is the lead author of a response? And in that response, he mentions that Tiljander is contaminated and should be removed?

Sep 23, 2010 at 8:50 AM | Unregistered CommenterMikeN

Quote from SMR

McShane and Wyner (2010) (henceforth “MW”) analyze a dataset of “proxy”
climate records previously used by Mann et al (2008) (henceforth “M08”) to attempt to
assess their utility in reconstructing past temperatures. MW introduce new methods in
their analysis, which is welcome. However, the absence of both proper data quality
control and appropriate “pseudoproxy” tests to assess the performance of their methods
invalidate their main conclusions.

As MW used the M08 data with no modifications, are they saying M08 also had no proper data quality control either? It seems an odd defence. If you read on, they are still fixated on getting rid of the MWP, so at least no change there ;)

Sep 23, 2010 at 8:51 AM | Unregistered CommenterJohnH

Citizen statisticians challenge real statisticians about statistics.

Sep 23, 2010 at 8:54 AM | Unregistered Commentergeronimo

MW is a peer reviewed paper, isn't it?

Sep 23, 2010 at 8:57 AM | Unregistered Commentergeronimo

I couldn't get past...

"the absence of both proper data quality control" ?!

So the rebuttal of M08 has data quality issues, using the same data as M08, but M08 is supposed to be "robust" ?

This is "peer reviewed" trolling!

Sep 23, 2010 at 9:03 AM | Unregistered CommenterPete

Frantic efforts from the usual suspects again to lower the MWP temperatures below present levels. Statistics is welcome in Climate Science but only if it produces the existing group-think outcome.

Sep 23, 2010 at 9:04 AM | Unregistered CommenterAnton

The attack vector of choice against McShane and Wyner has been to treat it as a paper in the climate 'science' field, and to complain that it is not a very good CS paper, or adds nothing new.
Both the above responses seem to fall into this category, just with more of the 'you can't criticize xyz unless you propose something better' echo added.

Sep 23, 2010 at 9:29 AM | Unregistered CommenterChuckles

MW’s inclusion of the additional poor quality proxies has a material effect on the reconstructions, inflating the level of peak apparent Medieval warmth, particularly in their featured “OLS PC10” [K=10 PCs of the proxy data used as predictors of instrumental mean NH land temperature] reconstruction. The further elimination of 4 potentially contaminated “Tiljander” proxies [as tested in M08; M08 also tested the impact of removing tree-ring data, including controversial long “Bristlecone pine” tree-ring records. Recent work, c.f. Salzer et al 2009, however demonstrates those data to contain a reliable long-term temperature signal], which yields a set of 55 proxies, further reduces the level of peak Medieval warmth (Figure 1a, c.f. Fig 14 in MW; See also Supplementary Figures S1-S2 (Schmidt, Mann and Rutherford, 2010a; 2010b)).

Is it just me, or is this treating M&W's work as an attempt to produce an alternative time/temp series, as opposed to what they were actually doing, which was demonstrating that the statistical analysis previously used was flawed and that similar results were produced with a red noise input as with the proxy data?
The MWP (which I was led to believe was scientifically uninteresting) showing up or not in any of their reconstructions was not, as far as I understand it, an attempt to reassert that the MWP happened, nor was it central to the paper. The focus on that in the response seems to me to miss the point: the uncertainty inherent in the dataset is greater than the signal in every instance, so a statistically sound reconstruction with a high enough SNR could not be produced with existing data. That means a 1000-year series was worthless in any real sense, as the level of uncertainty was larger than any anthropogenic temperature change previously claimed to have been isolated from said data.

Or have I missed the point?

Sep 23, 2010 at 9:52 AM | Unregistered CommenterMackemX

SHOCK NEWS: Climate scientists claim that statistical experts are not experts on statistics!

It is quite funny to see the Team criticise their own data.

Sep 23, 2010 at 9:52 AM | Unregistered CommenterMac

Anyone think this could be the team's jumping-the-shark moment?

Surely "proper" (non team) scientists will see this BS for what it is?

Sep 23, 2010 at 9:57 AM | Unregistered CommenterPete

Very amusing to see Mann and Schmidt criticising McShane and Wyner for allegedly using inappropriate tests!

Both papers claim that the 'Lasso' method is inappropriate. But MW have already discussed this issue at length on page 25 of their paper, saying that other methods give the same general results. This is then shown on the following few pages.

It's also quite funny to see the SMR paper saying "Look, if we use this proxy, throw away this one, use this one, and throw away that one, we can still make the MWP disappear!" If data is discarded as they seem to be suggesting, this will just make the error bars even bigger.

I don't think M&W will have any difficulty dealing with these criticisms.

Sep 23, 2010 at 10:03 AM | Unregistered CommenterPaulM

I have been saying the only comment I can think of is "desperate", but I think Pete above says it better -

'This is "peer reviewed" trolling!'

Sep 23, 2010 at 11:24 AM | Unregistered CommenterShevva

Apparently their point on data quality is that M08 tossed out certain series, like bristlecones, while M&W left them in. That's what they mean by poor data control, i.e. including series even Mann let go. So don't be too quick to say M&W used exactly the same data.

Of course Mann left in Tiljander and Yamal ....

Here's a challenge to Tingley - grab the M08 data AND what you say is the right stat function and let's have at it. I bet he's done that but found much the same as M & W so doesn't want to put a spanner in Mann's work. Li is the same.

Funny how these climate stat guys always show up to try and defend Mann but are never there to examine M08 itself.

Jeff Id has taken it apart every which way - you can make anything with the M08 function.

Sep 23, 2010 at 11:25 AM | Unregistered Commenterpete m

Lubos Motl has a post about the Schmidt, Mann and Rutherford response

Schmidt, Mann, Rutherford: just clueless

Sep 23, 2010 at 12:04 PM | Unregistered CommenterSteve2

Lubos Motl executive summary

Schmidt, Mann, and Rutherford simply fail as scientists.

LOL

Well worth a read, thanks Steve2

Sep 23, 2010 at 12:13 PM | Unregistered CommenterJohnH

Post just up at Real Climate - Researchers debunk McShane and Wyner's paper.

Quote - "Although this paper has attracted much attention in the press, climate researchers have, after thorough analysis, shown that the Lasso statistical methodology is the only method that suggests proxies do not predict temperature significantly better than random series generated independently of temperature.

The totally independent, unaffiliated, non-connected, salt-of-the-earth climate researchers further demonstrated that all proxies (especially tree ring) do predict temperature significantly better than random series generated independently of temperature. The predictions are shown to be robust independent of proxy selection and statistical methods used." End Quote

They further went on to say that it is a complete disgrace that these sort of papers be accepted in reputable... etc... and that it incumbent on the scientific community to unite in stopping, by whatever means, papers such as this being...etc...and that the work of climate scientists is far too important to be distracted by such diversions and furthermore...etc...etc

Sep 23, 2010 at 1:24 PM | Unregistered CommenterGrantB

Mann and Schmidt are like the Church trying to deal with Galileo not on theological terms but scientific ones, to maintain the Earth-centred dogma. The Church lost.
Their only hope was to continue to ignore the critics and stick to what they do best: repeating the rosaries of their faith and shaking down corporations and governments for money.
They have actually engaged now, and as this critique demonstrates, are not really able to defend very much.

Sep 23, 2010 at 1:32 PM | Unregistered Commenterhunter

I'm surprised how very weak Gavin and Mann's response is.

Sep 23, 2010 at 2:08 PM | Unregistered CommenterJason

Schmidt, Mann and Rutherford's official comment to the Annals of Applied Statistics is "bizarre".

Sep 23, 2010 at 2:45 PM | Unregistered Commenter"Dr." Karl

Possible reason for the SMR response has been posted at WUWT, old games being played again.


'This looks like a replay of the McIntyre-Mann wars…

As I recall Mann complained on occasion that McIntyre analyzed the wrong data. This was because (I recall) that McIntyre was always “guessing” as to what data was actually used. It seems that the Mann articles were never exactly clear as to what data was or was not included or excluded, and the meta-data that accompanied any data never made this clear either. Quite a neat trick actually. That way you can always claim that your critics got it wrong. See the Climate Audit site comments regarding the data for MBH98 to refresh your memories.

Isn’t there a song about this?

“Here we go again…” '

Sep 23, 2010 at 2:59 PM | Unregistered CommenterJohnH

I agree that this is awfully similar to the 'pea and shell' game that CA often refers to. If there is a problem with the data, then the honest solution is to provide the 'correct' data. They never seem to be able to do this, always keeping the shells moving and claiming that the 'correct' data was not used. Their inability and unwillingness to provide the 'correct' data is a sure sign of failed research.

Sep 23, 2010 at 3:42 PM | Unregistered CommenterRedbone

Interesting that the team took the bait. Not only will this bolster this fairly new journal's street cred, but these responses will more than likely get more statisticians looking at the CC science.

I really thought the team would have kept their heads down on this one, but it appears that their egos know no bounds.

Sep 23, 2010 at 4:03 PM | Unregistered CommenterDeNihilist

Jeff ID at the Air Vent has a post about these papers.

Ostriches

Sep 23, 2010 at 8:17 PM | Unregistered CommenterSteve2

Tingley does at least seem to have made an effort to analyse the power of the method - and I've not seen much in the way of an explanation as to what is wrong with what he says. Either it is an elaborate obfuscation, or his point is valid. Does anyone understand the subject well enough to know?

Sep 23, 2010 at 9:28 PM | Unregistered CommenterSean Houlihane

I've concluded the Hockey Team needs a new ID; the old image is tarnished. I was thinking: The Cherry Pickers -- has a rhythm and rhyme to it, no? We cherry-pick the trees, we cherry-pick the analyses, we cherry-pick the PCs, we cherry-pick the peer reviewees, we're The Cherry Pickers. Sort of catchy.

Sep 23, 2010 at 9:33 PM | Unregistered CommenterDrCrinum

Lord Oxburgh to HoCSTC: "We were fortunate in having a very eminent statistician on our panel. He looked at pretty well all of that material very carefully. His conclusion was that they had not used state of the art methods to do - to solve/attack - what is essentially a statistical problem. You have got these dispersed data and you really have to manage this lot statistically. And he was really quite serious in saying that this was not the best way to do it. Having said that, even though they had adopted their own methods, he said, actually, had they used more sophisticated and state of the art methods, it wouldn't have made a great deal of difference."

I'd like to see the actual comments of their statistician - I suspect, given the confidence interval associated with his testimony, when Lord O says "...state of the art..." he is really avoiding awkward terms like "...correct..." or "...valid..."...

Sep 23, 2010 at 9:46 PM | Unregistered Commenternot banned yet

Laugh!

I've finally found out why we get these strange answers from the hockey team. They simply don't understand the critics!! That's why they screw up so badly; they just don't understand, it's as simple as that. Because if they did, they wouldn't embarrass themselves as they now so obviously do. They are simply statistical morons!

Sep 23, 2010 at 9:54 PM | Unregistered CommenterSlabadang

Lovely bit of verbal obscuration, this in the summary:

Problems in climate research such as statistical climate reconstruction require sophisticated statistical approaches and a thorough understanding of the data used. Moreover, investigations of the underlying spatial patterns of past climate changes, rather than simply hemispheric mean temperature estimates, are most likely to provide insights into climate dynamics (e.g. Mann et al., 2009; Schmidt, 2010).

Translation: We had to be creative with the statistics and futz around with the data to show anything and you are just too stupid to understand because you are not "Climate Scientists" like we are.

I love the "spatial patterns" bit. I wonder how they will do that? Maybe with a Ouija board.

Pathetic. All M&W did was to take their messed up data and analyze it according to generally accepted statistical procedures and standards.

Sep 23, 2010 at 10:06 PM | Unregistered CommenterDon Pablo de la Sierra

@Sean Houlihane

Tingley's is the most technical. His paper could help expand knowledge, but if I may be a layman heretic, see Jeff Id for his opinion on MW2010 and the hockey stick. My two pence is that all historical proxy interpretations are (or should be) dependent on sound statistical techniques.
Like I said above, you can argue over who is the better - the stats guy or the historical climate guy. But if you claim to be a better interpreter of historical climate than the guy who objectively uses raw stats to interpret it, who can win?

Surely you need to rely on the most severe and sound stat techniques?

And from the central committee of climate hegemony, time after time you don't see that.

You get the mantra that if the stats don't match, then our model assumptions will win...

It all can be said by this statement:

"How dare you say red noise could be called a "pseudoproxy" when we all know a proper "pseudoproxy" is created by using a computer algorithm we climate experts know about and created to meet our expectations!"

Sep 24, 2010 at 12:46 AM | Unregistered CommenterSteve2

Whatever the merits of the response, I'm pleased that this controversy has been aired in a top class statistics journal. In my experience, professional mainstream statisticians have up to now taken very little interest in the statistical methods applied to climate change.

Sep 24, 2010 at 1:04 PM | Unregistered CommenterJonathan Bagley

I cannot believe that they actually wrote this:

"A standard benchmark in the field is the use of synthetic proxy data known as “pseudoproxies” derived from long-term climate model simulations where the true climate history is known, and the skill of the particular method can be evaluated"

So a "pseudoproxy" is what exactly? It is the output of a long-term climate model computer simulation where the true climate history is known. How? The long-term climate history is unknown. The rest of the comment paragraph (not shown above) appears to say (although it is very ambiguously worded) that methods that agree with the long-term climate simulation are better...because they agree with the long-term climate simulations.

These are computer simulations. They are not real data. Confirming your statistical analysis by comparison with a computer simulation is a tautology.

Are these guys for real? Do they seriously believe this argument?

Sep 24, 2010 at 2:03 PM | Unregistered CommenterThinkingScientist

Thinking Scientist

In other words: "How do we know the Hockey Stick is valid? Because we have checked it with pseudoproxies. How do we know the pseudoproxies are sound? Because we have checked them against the Hockey Stick."

Where I come from we call that begging the question.

Sep 24, 2010 at 3:06 PM | Unregistered CommenterDreadnought

ThinkingScientist, "Are these guys for real? Do they seriously believe this argument?"

I think they do. They seriously think, for example, that their computer models are good ("have skill" in their ridiculous jargon) because, having tweaked all the parameters in them, they can successfully predict the past! MW10 use a much more sensible 'pseudo-proxy', just random numbers, which SMR then claim is a 'misuse' of the term.

The source of their delusions seems to be a circular self-reinforcing groupthink within their own small clique (look up the 8 symptoms of Groupthink and it fits 'the team' perfectly). Interestingly, criticism encourages this behaviour as the group rallies round to defend itself forgetting its differences (See for example email 1024334440.txt where Briffa and Cook refer to Mann's hockeystick as 'crap'). Steve Mc refers to this as 'circling the wagons' which I had to look up.

Sep 24, 2010 at 3:26 PM | Unregistered CommenterPaulM
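To illustrate the "random numbers" point PaulM makes, here is a toy sketch in Python/NumPy (an AR(1) "red noise" model, invented for illustration and not M&W's actual procedure): series that by construction contain no temperature information can still show sizeable sample correlations with a temperature-like record over a short window, because persistence inflates the spread of sample correlations.

```python
import numpy as np

def ar1(n, phi, rng):
    """AR(1) 'red noise': x[t] = phi * x[t-1] + e[t], with white noise e."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(42)
temps = ar1(150, phi=0.7, rng=rng)  # stand-in "instrumental" record, 150 "years"
proxies = [ar1(150, phi=0.9, rng=rng) for _ in range(1000)]

# None of these series knows anything about `temps`, yet autocorrelation
# makes apparently meaningful sample correlations fairly common.
corrs = np.array([np.corrcoef(temps, p)[0, 1] for p in proxies])
frac_spurious = float(np.mean(np.abs(corrs) > 0.2))
print(f"fraction of pure-noise series with |r| > 0.2: {frac_spurious:.2f}")
```

The phi values and the |r| > 0.2 threshold here are arbitrary choices for the demonstration; the point is only that a noticeable fraction of pure-noise series clears such a bar.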

This is an interesting "post-modern" version of statistics. Back 40 years ago, we used to collect real data and compare it against a mathematical model. To explain: when you do a linear regression, you are first estimating the best-fitting line in the form Y = slope(X) + c. Then you compare the experimental values against the estimated ones (from the fitted line) to see what the cosine of the two vectors is. (This was known as the Pearson product moment in my day.)

Now you build two models and compare them. Where is the science? I guess that is where post-modern comes in. The science is superfluous. All you need is a computer and Harry to program it.

Sep 24, 2010 at 4:23 PM | Unregistered CommenterDon Pablo de la Sierra
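The old-school procedure Don Pablo describes can be sketched in a few lines of Python/NumPy (the data are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=50)  # "real data": line plus noise

# Fit the best line y_hat = slope * x + c by least squares.
slope, c = np.polyfit(x, y, 1)
y_hat = slope * x + c

def pearson(a, b):
    """Pearson product-moment r: the cosine of the angle between the
    mean-centred vectors a and b."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the experimental values against the fitted ones.
r = pearson(y, y_hat)
print(f"slope = {slope:.2f}, intercept = {c:.2f}, r = {r:.3f}")
```

Here the model is confronted with (synthetic) observations, which is the contrast being drawn with comparing one simulation against another.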

ThinkingScientist

Are these guys for real? Do they seriously believe this argument?

You clearly logically dissected the issue neatly TS. However, what you missed is they are merely following the ages old rhetoric rule of:

If you can't dazzle them with brilliance, confound them with bullshit.

Once again I point out, this isn't about facts, but rhetoric.

Sep 24, 2010 at 4:37 PM | Unregistered CommenterDon Pablo de la Sierra

This is from Steve McIntyre at CA

The 93-proxy dataset that they use in their AD1000 reconstruction includes 24 strip bark series and 2 Korttajarvi series (Tiljander) without removing the contaminated segments. So it requires great caution in interpreting their results other than where they are, in effect, only mathematical. While I welcome their interest in the field, I wish that they hadn’t used things like “lasso” that aren’t actually in use in the field.

Oct 4, 2010 at 1:12 PM | Unregistered CommenterJohnH
