Krugman homeopath
Paul Krugman is considering Michael Mann this morning. Amazingly, the great man is trying to resurrect the Hockey Stick.
Mann, as some of you may know, is a hard-working scientist who used indirect evidence from tree rings and ice cores in an attempt to create a long-run climate record. His result was the famous “hockey stick” of sharply rising temperatures in the age of industrialization and fossil fuel consumption. His reward for that hard work was not simply assertions that he was wrong — which he wasn’t — but a concerted effort to destroy his life and career with accusations of professional malpractice, involving the usual suspects on the right but also public officials, like the former Attorney General of Virginia.
He wasn't wrong? Like our friend Anders, Mr Krugman could really do with getting himself a copy of The Hockey Stick Illusion. Like Anders, I don't suppose he will.
Mr Krugman, you really do need to centre your data if you are going to do principal components analysis. Really you do. There is not a reputable statistician who has ever looked at this question and concluded that Mann got it right. I wonder if Mr Krugman is a fan of the Mann view that not centring your data properly is "modern" (and therefore OK) or whether he favours the Gerald North view that you can use a biased method and inappropriate data and still arrive at the right answer.
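For anyone who wants to see the mechanics, here is a minimal synthetic sketch (my own toy numbers and red noise, not Mann's data or code) of what "short-centring" does: subtracting only the mean of a late calibration window leaves every series with a residual full-period mean, and the first principal component of the resulting matrix chases exactly that residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fifty synthetic "proxy" series of AR(1) red noise, 600 steps each.
n_series, n_time, phi = 50, 600, 0.9
X = np.zeros((n_time, n_series))
for t in range(1, n_time):
    X[t] = phi * X[t - 1] + rng.standard_normal(n_series)

def pc1(data, window):
    """First principal component after subtracting the mean of `window` only."""
    centred = data - data[window].mean(axis=0)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    return U[:, 0] * s[0]

pc_centred = pc1(X, slice(None))      # conventional: full-period mean removed
pc_short = pc1(X, slice(-79, None))   # "short-centred": last 79 steps only

# With full centring the residual column means are zero by construction;
# with short centring they are not, so series whose late segment departs
# from their long-run mean get over-weighted in PC1.
resid_full = np.abs((X - X.mean(axis=0)).mean(axis=0)).max()
resid_short = np.abs((X - X[-79:].mean(axis=0)).mean(axis=0)).mean()
print(resid_full, resid_short)
```

The 79-step window and AR(1) parameters here are illustrative choices, not the actual MBH98 configuration.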
Homeopathy has nothing on climate science.
Reader Comments (126)
And Then There's Physics beclowns themselves: "
This is not about whether Mann got it right or wrong. This is about whether or not it's acceptable for a journalist (and others) to make accusations of malpractice and fraud. Getting something wrong in a paper is not a sign of scientific misconduct. Getting something wrong on purpose so as to get the result you want, would be a sign of scientific misconduct. Proving that it was on purpose, however, is what needs to be done, not simply asserting it. / Here's another reason why I won't bother reading it. It's because I don't really care. This is work done 15/16 years ago. Progress in science doesn't occur by every step along the way having no mistakes."
The ancient history whitewash doesn't survive the Marcott 2013 update, widely celebrated by mathematician Mann:
http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg
To obtain the blade, a very simple fraud was involved: data from a Ph.D. thesis that had no blade in its result were re-dated, producing a completely spurious blade via a sudden data drop-off at the end. This was weasel-worded away by calling it not significant, but then a coauthor described the result to NY Times reporter Revkin in a video chat as a "super hockey stick", with a swoosh gesture. That is where the hockey stick team interacted with the media, fraudulently - criminally so, to the extent that this fraud is being used to promote emergency-level funding for their "research."
-=NikFromNYC=-, Ph.D. in carbon chemistry (Columbia/Harvard)
There is a certain similarity between Krugman and Mann in their academic activism. Both appear to feel they should be allowed to dish out partisan, often insulting, polemics, while enjoying protection against counter polemics through their status as academics. Without knowing all the details, this seems a considerable element of Mann's lawsuit(s). Political knockabout, appropriate to his activist persona, becomes slander of his academic persona and should not be allowed, in his view anyway. Krugman is apparently also thin-skinned in this regard, although, to be fair, he has never remotely suggested litigation, as far as I know!
Anders "That's not the only source of information. " True that.
But if you want to understand the criticisms of somebody, it's better to read what the critics have to say and understand their arguments, before criticizing them.
Of course it is easy to dismiss MBH98/99 as a couple of bad papers, noting that bad papers happen in science all the time. But that is completely missing the real problem with Mann's work, which is not just that his work was bad.
Scientists are supposed to be sceptical types. You know, take nobody's word for it, check the workings and the analysis. But nobody did that with Mann's work. Mann's work was like honey to bees - nobody in climate science would dare criticise it, and Mann was put straight into a plum role in the IPCC - to promote his own terrible work.
The problem with Mann's work never was that we had a couple of bad papers. The problem was we had a couple of bad papers that were promoted as the poster child of global warming and none of the career climate scientists dared to criticise it until M&M came along. It shows that climate science was badly broken as "science", and that the result was always more important than the method used to get there.
What is astonishing is that even today, nearly ten years on from McIntyre and McKitrick's initial debunking, so many global warming activists - like ATTP - remain in denial about the actual problems the MBH98/99 fiasco represents. Oh, the irony.
After years spent in school classrooms, I find ATTP's tactics very familiar - that old refrain, na na ne na na, sung while refusing to listen to any evidence contrary to that which he wishes to take note of. Not a terribly adult means of arguing, but hey, trolls are usually not terribly adult.
"To fixate on one paper in two decades, at the expense of the tens of thousands of others published by literally hundreds of journals touching on climate science brings the praxis of the contrarian enterprise into the same regime of intellectual dilution in which homeopaths practice their art, for many of their nostrums would prove fatal if administered at concentrations as high as 400ppm."
OK Russell...have those papers been retracted? Has anyone said that nobody has ever been able to replicate the hockey-stick?
It is worth fixating upon because all other alleged temperature reconstructions are not hockey-stick-shaped!
"To fixate on one paper in two decades, at the expense of the tens of thousands of others published by literally hundreds of journals touching on climate science brings the praxis of the contrarian enterprise into the same regime of intellectual dilution in which homeopaths practice their art, for many of their nostrums would prove fatal if administered at concentrations as high as 400ppm."
Total gobbledegook.
Come back, when you can construct a proper argument - in English.
Say the word: FRAUD.
"I thought I was engaging in a discussion about the topic of this post with some people. Wasn't that obvious? Actually, if I ever mention someone in a post I write, I always allow them to comment."
I'd assume if you're coming here, you're aiming for an exchange of views with people who disagree with you, so that we might learn from one another. That's good - travel broadens the mind!
Your comment policy is better than many places on the 'believer' side I've been. Not everyone agrees that opponents should be allowed to comment.
"This is the blogosphere and so if I'm not welcome somewhere, that's absolutely fine. I can always comment elsewhere, or write my own post if I wish."
You're welcome as far as I'm concerned. But obviously you're going to get people disagreeing with you.
"This is not about whether Mann got it right or wrong. This is about whether or not it's acceptable for a journalist (and others) to make accusations of malpractice and fraud."
Michael Mann has accused others (scientists included) of malpractice and fraud, on a lot less evidence. Do you think that's acceptable?
Yes, it's acceptable. It's a matter of free speech. Hyperbole and exaggeration are allowed in commentary and opinion. Not every statement has to be literally true. And so many people have been saying his work is fraudulent for so many years - with no noticeable effect on his career or his reputation among his peers so far - that it's difficult to see what's different about this one.
And if nobody is allowed to accuse anyone of fraud, how could such cases ever be discovered or investigated?
"Getting something wrong in a paper is not a sign of scientific misconduct. Getting something wrong on purpose so as to get the result you want, would be a sign of scientific misconduct. Proving that it was on purpose, however, is what needs to be done, not simply asserting it."
I agree. And as has already been explained to you, the MBH result was tested to check that the output was correlated with the physical quantity it was supposed to be measuring; it failed the test; Mann knew it had failed before publication, but published it anyway and neglected to mention the adverse results. That counts as scientific malpractice in my book - how about yours?
We know he calculated it because in the one case where the correlation was sort of half decent he mentioned it in his paper. We know he didn't publish the others because they're not there, and he fought for years to avoid having to reveal them. We know the test failed, because when Mann's student Caspar Ammann published his own version, McIntyre's efforts in peer reviewing it forced the adverse statistics to be revealed.
Read Montford's book. Then you'd know all this stuff.
Nullius,
I never think it's acceptable. I know Mann can be rather blunt in what he says. I can't remember an example of where he's explicitly accused someone of fraud/malpractice but he may well have.
Yes, I largely agree with this, with the caveat that people also have the right to defend themselves. I also think there's a massive difference between "In my opinion Joe Bloggs committed fraud" and "Joe Bloggs committed fraud". I see the latter quite often.
Sure, that's indeed true. I'm not arguing that no one should ever make an accusation. I'm suggesting that it might be better if people qualified what they said, and also suggesting that defending oneself against such accusations is fine.
Here's the problem with reading all these different stories. I can find places where people argue strongly about why something was bad and illustrates something about poor behaviour etc. I can find other places that make perfectly plausible arguments as to why what happened had been misinterpreted and doesn't really mean anything significant. I have no great interest in exploring all these different options. Not only is it a topic about which people have very strong feelings but, as I've said before, I don't see it as significant with respect to the science. There are lots of people working in this, and related, area. If people don't trust certain people, then just look at the work of others.
ATTP,
Thanks for your contributions to this blog.
I am sure there have been many occasions when Anders has told his esteemed guests that there is no point in going back to criticise Mc&Mc's reply to Mann's original work. I am really suuuuureeee.
Likewise I remember how nobody ever mentioned anything bad against Tol or Pielke Jr. But my memory may be faulty, I have abandoned that asylum after a couple of days, before the abyss had the chance to stare back.
Omnologos,
There is a point of going back to consider M&M's reply. It's useful to do so when people "claim" that it debunked the Hockey Stick or make statements about M&M that are not consistent with what the actual paper said. Plus, me expressing my opinion about a topic doesn't mean that I expect everyone else to agree.
This isn't a terribly complex concept, but I'll try and explain it again. The issue is more to do with making libelous statements than with saying bad things about other people. It's also the difference between expressing an opinion and making a statement that you regard as fact. Plus, criticising what someone has said or done is significantly different to making claims about their character. I get the impression that this subtlety is lost on you, but I may be mistaken.
Recap: irrelevant is the paper, relevant is the criticism of it. Plus you decide what is libelous and what isn't. Plus if anybody ever said anything you consider libelous against Mann, nobody can say he was wrong.
That pretty well sums it up omnologos.
Did you like his sidestep? He reckons he isn't aware of Mann libelling anyone, which is a cop-out given the airtime Judith Curry has given to just one instance of Mann's odious behaviour when it comes to libelling anyone who dares question the righteousness of his work!
No one is more blind than the man who refuses to see.
Mailman
And then there's comics.
=================
Anders deserves a break. He's saving the planet, the biosphere, humanity and our grandchildren, so little things like being consistent or fair or honest or open minded or informed do not deserve his attention right now. Maybe later.
Mailman,
and people who make what they think are insightful quotes should probably remember that they are intentional generalisations.
Omnologos,
If you think this is true, that may explain your confusion.
Anders gets my vote for nitpicker-in-chief. His mode of argument seems to involve grabbing a short quote from the person he's attacking - chosen for his ability to argue against it - and running with it, while ignoring the substance of that person's arguments. Trying to discuss anything meaningful when somebody resorts to this tactic is a waste, and addressing such people just ends up with a thread hijack.
Nullius in Verba:
We also know he calculated verification R2 because it's in his analysis code that he released in conjunction with the investigation by the Barton committee.
We now know that anything prior to 1820 failed cross-validation with R2. We know that Mann calculated it. And oddly, he reported R2 only in the case where it did not fail cross-validation.
Mann said in his testimony at the Barton Committee:
The criticism of R2 is a legitimate one, but it has to do with whether you've detrended the data prior to validation. If there are large trends in the two segments being compared (even if they have opposite signs), you'll always get decent R2 scores. This is an example of R2 being a weak test: your data can spuriously validate with R2.
This is especially an issue here because of the red noise contained in the proxies. Thus, you'd expect to see an excess of false positives - failures to reject (Type I errors) - rather than false negatives - failures to validate (Type II errors). Since you can spuriously fail to reject, you need to use other tests besides R2 for cross-validation, in the event that R2 cross-validates.
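A toy illustration of the detrending point (my own made-up series, nothing to do with the actual proxies): two independent noisy series that each happen to contain a trend "validate" under R2 until the trend is removed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
t = np.arange(n)

# Two independent series sharing nothing but a linear trend.
a = 0.1 * t + rng.standard_normal(n)
b = 0.1 * t + rng.standard_normal(n)

r2_raw = np.corrcoef(a, b)[0, 1] ** 2      # high despite no real relation

# Detrend both, and the spurious "skill" collapses.
detrend = lambda y: y - np.polyval(np.polyfit(t, y, 1), t)
r2_detrended = np.corrcoef(detrend(a), detrend(b))[0, 1] ** 2

print(r2_raw, r2_detrended)
```

The slope and noise level are arbitrary; the point survives any choice where the trend variance dominates the noise variance.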
Incidentally, my curve was replicated in an SkS link, but without showing MBH98, and excluding Loehle, because [no idea]. And yet Rob H and others from SkS still go on to threads and argue that MBH is not invalidated by more recent reconstructions.
The point is that MBH 98 is not consistent in the statistical sense with newer reconstructions, and this does suggest there were substantive flaws with the paper that prevented it from obtaining a valid reconstruction. I believe there were multiple substantive errors, and errors do tend to compound.
I should add in cases where R2 fails to validate, like here, it can be considered a powerful test. MBH failing to validate is likely an indication that MBH was invalid prior to 1820. This result seems to be confirmed by comparison against newer reconstructions, which do not suffer the many problems of these original papers.
Carrick,
Yes, I agree with this. This is indeed an issue. If, however, you are referring to my responses to Omnologos, then I am not trying to have a meaningful discussion because that would be entirely pointless. I am indeed simply trying to nitpick, because - when it comes to Omnologos - that's the only thing worth doing (other than ignoring completely, but that would be no fun).
Let me add a more substantive response to your comment (I'm going to try to take you seriously for a little while at least, because I don't think you're a fool - you just happen to be rather unpleasant at times). As far as MBH98/99 goes I don't dispute that the criticisms may be valid. I shall maybe look into the R2 issue. You showed me a figure comparing MBH results with some other ensemble. I had two issues with this. Your ensemble didn't appear to be consistent with other data I've seen. As far as I'm concerned, if someone shows me some data, then it's up to them to explain where it comes from. Even if it is consistent with the current best data and does show that the MBH results are statistically inconsistent, I still don't really see the overall relevance. So the MBH result is no longer consistent with more recent analyses. We don't retract or withdraw papers (or accuse authors of malpractice) just because their results end up inconsistent with more up-to-date analyses.
So, again, all I've really seen are people who have illustrated that there are errors with his method, that he didn't report some validation statistic that they think he should have reported, and that maybe his results are inconsistent with more recent analyses. None of that - in my view at least - is indicative of malpractice or fraud. You may, of course, disagree but that doesn't mean I'm suddenly going to change my mind.
In fact, I had a quick look through the retraction watch website and from my quick scan, all of the cases involved fraudulently creating data (which clearly MBH did not do) or plagiarism (again which MBH did not do). Anyway, that's probably all I really want to say on the issue. You're welcome to judge me in any way you like.
Anders doesn't have substantive discussions because he's as empty as a bottle of free beer on a Thursday night. I mean, the guy has just discovered in this thread about the importance of the R2 omission, and without even looking at its meaning he's already declared it irrelevant.
Please can we have some phlogiston type now, at least there would be some kind of ethereal substance to opine about.
I applaud Anders for coming by and hanging in there. He's not been treated completely fairly, but he takes it in good grace.
So tell me, strange visitor from another realm...
Isn't the one word at issue, strictly as a matter of fact and accurate quotation, "fraudulent" -- an adjective referring to the hockey stick graph -- rather than "fraud" -- which is a noun appositive of Dr Mann himself?
Is there a similar distinction worth making between Mann's (and others') use of the word "shilling" -- as a verb to describe the process of attempting to shape a market and public perception by various actions -- and a specific "shill", the noun referring to a paid agent falsely bidding or otherwise participating in the auction or debate? That is, if Mann says Steyn, a professional opinion provider, is "shilling" for the oil companies, is that less actionable as a court-decided legal matter than to give Steyn the accusation direct and say "Steyn is a shill" -- a person whose opinion can be and has been bought?
Are the terms "hyperbole" or "hyperbolic" ambiguous enough to require a court intervention? That is, suppose we have a graph showing a sharp upwards curve correlating any two measurements -- measles and anti-vaxxer interviews on radio programs, perhaps. If one says the graph is "hyperbolic" while in fact the inflection is better described as two linear trends -- and one might argue either that the writer meant to use the term as rhetorical in the fashion of "hyped" or "sexed up", but the offended parties say that their science has been deliberately mischaracterized as mathematical trickery by use of the technical math term "hyperbolic" -- that the author of a comment either KNEW that the term hyperbolic was wrong, but used it recklessly, or that the author maliciously intended a non-mathematically inclined audience to mis-construe the term as mathematical rather than rhetorical ...
ANYHOW, is "hyperbolic" itself a term that might get an author sued?
pouncer, yes Anders is not being given a fair treatment here. I'll see if I can cut back on the rhetorical heat too.
Anders, the issue with R2 giving false positives when you have red noise is pretty easy to understand. It relates to the fact that you get "trend-like" segments of data. In turn, depending on your background, you can either relate this to a large correlation length (in weather terms: "today is more likely to resemble yesterday's weather than the weather of two weeks ago"), or you can write it in the frequency domain as a spectrum like 1/f^nu (nu > 0).
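A short sketch of what "red noise" means here, with an AR(1) coefficient of my own choosing purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.95, 5000

# AR(1) red noise: each value is mostly yesterday's value plus a shock.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Large correlation length: correlation decays slowly with lag.
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
lag14 = np.corrcoef(x[:-14], x[14:])[0, 1]

# Consequence: short segments routinely look like trends even though
# the true trend is exactly zero.
seg_slopes = [np.polyfit(np.arange(50), x[i:i + 50], 1)[0]
              for i in range(0, n - 50, 50)]
print(lag1, lag14, np.mean(np.abs(seg_slopes)))
```

With phi close to 1 the lag-1 correlation stays near phi while the lag-14 correlation has decayed toward phi^14, which is the "today resembles yesterday" behaviour in numbers.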
Regarding the selection of reconstructions, I don't know of any modern reconstructions which give 50-year or better temporal resolution that purport to be global in extent that disagree with the four that I included. If you know of one, perhaps you can point it out.
It's a legitimate question what you get with other reconstructions, but I used what I considered the four good-quality modern temperature series, including Mann's 2008 EIV, that were available at the time I did the little study. Given that even Mann's own 2008 EIV contradicts the main conclusions of the earlier paper, I think it's safe to say I wasn't cherry-picking by only picking critics of Mann. Moberg was critical of the lack of variability of MBH from very early on… this is the same issue I had with the MBH result.
I did this right after Ljungqvist published his paper, and my initial impetus was to see whether Loehle agreed with other reconstructions that I considered to be of good quality. Moberg is an obvious candidate; initially I didn't look at Mann 2008 EIV because of the issues raised about it (Tiljander and Graybill). What I found was a surprising amount of conformity, which is especially interesting given what a total mess earlier reconstructions were…
I considered both Mann 2008 EIV and CPS, the latter of which wasn't included in the final graph. I noted however that Mann 2008 stated "we place greatest confidence in the EIV reconstructions, particularly back to A.D. 700, when a skillful reconstruction as noted earlier is possible without using tree-ring data at all," so I think retaining his EIV reconstruction over CPS was justified.
My interest in looking at Mann 2008 EIV was the degree to which this reconstruction agreed with the other three. What I found, somewhat confounding his critics, was that it agrees quite well with the other recent reconstructions.
I did a later study with MBH after innumerable *cringe-worthy* boasts by Mann's ardent supporters that his original paper was not "wrong". (Krugman is still claiming Mann is not "wrong".) Of course I know how to calculate confidence intervals, so I did that analysis at the same time. I didn't expect a failure by 11 sigma.
So that was an interesting result to me.
"NikFromNYC=-, Ph.D. in carbon chemistry"
Oh, good! Then you must be an EXPERT on "carbon pollution"
/sarc
pouncer,
I'd argue that that is a subtle distinction. I think the technical definition of something fraudulent is that it is something obtained using deceptive means. I guess one could then argue about intent, but that would just go in circles, I think.
Carrick,
I'll take your word about the R2 issue. As far as your comparison goes, I was referring to (for example) Figure 5.8 in the latest IPCC WGI report, which appears to show a less deep LIA than you seem to have in your comparison. Again, I don't really see that as all that relevant.
Let me present this a different way, for context only. I'm not really trying to convince you of anything. I realise that you work in academia, so maybe you've encountered this, but there are very few people (I haven't really encountered any) who dispute mainstream climate science. Many aren't even aware of the controversies that exist in the blogosphere.
Now consider the following hypothetical example. Someone publishes a paper that generates a lot of interest. Let's imagine it's in a field that isn't particularly controversial. Others get interested, collect more data, do more analyses, develop new techniques. After 20 years, there's a much better understanding of the system being studied. Maybe the results are broadly the same as the original work, maybe not. Now consider someone finds an error in the original paper. Maybe it's significant, maybe not. Do you think anyone will really care? If the error has propagated through all the later work, then sure; it would change everything. On the other hand, if it hasn't and people have updated the techniques, developed new ones and run all sorts of tests, then it doesn't really matter. The first paper still generated the interest that led to all the later work. It still played an important role in improving our understanding of the subject.
So, back to the Hockey Stick. Today we have much more data. The methods are different and varied. There are various different proxies. The broad picture, however, seems about the same. We know more about the MWA and LIA than we did before. There's more variability in the newer reconstructions than in the older, but that's not a surprise. So, in my view, any scientist who looked at this without being aware of the other (outside) factors wouldn't really see an issue, even if they were aware of problems with the method in MBH98/99. Even if the early analyses do differ from the later ones.
Now, you can be very upset by the behaviour of various people if you wish. You can be upset by the focus the IPCC put on the Hockey Stick in 2001. Again, that's your right. Me, not so much. I see very little to convince me that there are any fundamental issues with our understanding of past climate. Sure, we can always learn more. Collect more data. Do more analyses. Develop new methods. Try to understand what physical processes are associated with past variability. That's just how research works. I also see very little to convince me of other issues in climate science. The models aren't perfect, but what do we expect? There's still uncertainty, but the uncertainty is presented.
Now, maybe I could go and read more and find out more about what happened. However, as I already said, I can find sources that tell me that things are awful and people behaved badly. And I can find sources that tell me that things have been misinterpreted and misconstrued. As with all of us, we'll have certain sources we trust and others we don't. There doesn't seem much point.
Again, I'm not trying to convince you of anything. I'm just presenting a different context that maybe you can appreciate. In truth we could probably spend all eternity arguing/discussing this without really getting anywhere. We're each entitled to draw our own conclusions.
Anders you're fooling yourself. Your example doesn't apply because MBH 98 and 99 haven't been independently verified or replicated. All HS papers since have been variations on the same theme. But you know zero of those details and proudly refuse to leave your Mannian dreams.
There's also no need to take anybody's word on the R2 controversy, just honest curiosity. Your casually mentioning your total ignorance of the topic is akin to trying to pop into a discussion on Israel without knowing of Balfour or Sykes-Picot... only total abysmal absolute ignorance prevents you from understanding the total shame of it.
Omnologos,
Have I really got to you that badly? Come on, get over it. I'm not really worth all this vitriol. I would respond further to your comment, but I suspect there's not much point (as I may have already said).
"Now consider the following hypothetical example."
OK, let's consider a hypothetical example of a field of science in which none of the published results are actually checked. People publish, and because it's been published everybody accepts it as true. Furthermore, let's imagine in this field of science that even if the basic easily-understood errors in a published result are pointed out, everybody ignores it and carries on using the result as true and reliable. They use it as an input into other reconstructions. They continue to show it on charts and in official documents. They continue to support the result, and denounce anyone who points out the errors as a paid liar. They refuse to archive or release the data needed to check many of their newer results. And every subsequent result in the field is endorsed by the same people and the same system as endorsed and failed to catch the errors in the first paper.
Do you think anyone will really care?
Nullius,
Well, that seems rather implausible. I don't know anyone who accepts something as true simply because it's been published.
Anders - as I said, you've got no argument. Try studying before speaking, away from your sycophants. I suspect you'd have been prime material for establishing pi=3, only to wonder why people considered it a shame.
Nullius,
Sorry, got my blockquotes wrong in the last comment.
Omnologos,
Whatever. At least you're making me feel young again "You're wrong, you're wrong, I'm not talking to you, I'm not talking to you, na nana na na". Like being back at school. Of course, it is rather childish, so I'll leave you to it from now on.
Yes, there's no end to your pettiness, Anders. I've always shown where you're wrong and why, and invariably you've replied with childish aborted attempts at moving attention elsewhere.
Omnologos,
Of course I'm being petty. How else would you expect me to behave towards you? Come on, at least show some self-awareness, or not - doesn't matter.
Your behaviour is not petty just towards me, a matter of infinitesimal interest. What you've been showing is that in HS matters you selectively choose from the recorded history of the scientific literature, don't understand Mc&Mc's points, don't even realise what the R2 issue is, haven't read any HS study in any detail, refuse to learn anything new, live in Mannian land where libel is only one-way, and in general speak of what you're ignorant about.
This ain't no interpretation, just a description of what you have done in two or three threads here. But do continue, 50S 179E cannot be far. So is my understanding of what kind of human would sell ethics to protect the planet.
Anders, I think the biggest difference between the IPCC's approach and mine is that I'm not assuming a uniform scale or the same offset between reconstructions.
One of the problems with regressing against variables that have errors is that you get a scaling bias (the expectation value of the magnitude of the scaling constant divided by the actual one is less than one).
A scaling offset also occurs. That is, when you constrain the reconstruction to pass through the temperature data, there is a vertical offset between the original temperature that you're reconstructing with the proxies and the actual temperature. This doesn't happen if you have uncorrelated white noise, but it does if you have red noise. (Lucia and Nick Stokes have posts up in which this is discussed using Monte Carlo analysis; I can dig them out if you're interested.)
Even if you have a reconstruction method, like Loehle's, which in principle should have no scaling bias, you'll still end up with a scaling bias associated with the different geographic distributions of the proxies… All of the methods suffer from spatial undersampling of the temperature field (significant spatial structure in the temperature trend remains even over 50-year periods… again, Nick Stokes has a nice tool that you can use to explore this).
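The scaling-bias point in the first paragraph is the textbook errors-in-variables attenuation effect, which a few lines of simulation show (my own toy slope and noise levels, not any real proxy calibration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

temp = rng.standard_normal(n)          # "true" temperature signal
proxy = temp + rng.standard_normal(n)  # proxy = signal + unit-variance noise

# Calibrating temperature against the noisy proxy attenuates the slope
# by var(signal) / (var(signal) + var(noise)) = 1 / 2 in this setup.
slope = np.polyfit(proxy, temp, 1)[0]
print(slope)  # near 0.5, not the true scaling of 1.0
```

With equal signal and noise variance the expected regression coefficient is exactly half the true one, which is the "expectation value of the magnitude of the scaling constant divided by the actual one is less than one" statement made concrete.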
So what I do is perform a relative calibration for each series… I chose Ljungqvist, but any valid reconstruction would work as well, and used the linear regression method to transform each original series onto the same pseudo-temperature scale. While this is entirely defensible, I can imagine why they didn't do it for the IPCC AR5 ("cries of data tampering from the peanut gallery").
Some of the differences that tend to fill in the "dip" in the LIA are differences in the vertical axes from the different reconstructions, and Loehle ends up with a very different scale, so if you plot it without rescaling, it looks "really off".
I can dig up the details on the linear regression method I used if interested.
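For what it's worth, the kind of relative calibration described above can be sketched like this (a stand-in reference series and an invented scale and offset, not the actual reconstructions or my exact method):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Stand-in "reference" reconstruction (random walk as a smooth-ish signal).
reference = 0.1 * np.cumsum(rng.standard_normal(n))

# A second series measuring the same signal on a different scale and offset.
other = 2.0 * reference + 0.7 + 0.05 * rng.standard_normal(n)

# Fit other = a * reference + b over the common period, then invert the
# fit to put `other` on the reference's pseudo-temperature scale.
a, b = np.polyfit(reference, other, 1)
rescaled = (other - b) / a

print(a, b, np.max(np.abs(rescaled - reference)))
```

After rescaling, the two series agree to within the measurement noise, which is what lets visually "really off" series (like Loehle on its native scale) be compared on one axis.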
It's clearly a matter of personal taste what we are interested in and what we are not. There are people who still debate Millikan's oil drop experiment, whether he withheld data and so forth. There is an interest in how science works that goes beyond just what science found. It is an interesting pursuit for some in its own right. But it can also inform us of where the process breaks down and how to improve it.
Unlike politically driven debates, which are worthless except to the cardiologists treating the high blood pressure of everybody who engages in them, there is some hope that, over time, the scientific method will be informed by an understanding of past mistakes, and that we will adopt better methods for performing research.
In the US we have something called "responsible conduct in research" training, which is an outgrowth of this. Everybody in my group was required to go through it. At first we thought it would be useless, but afterwards we all admitted it had real value.
I might be the only one this means anything to, but I think the statement "when you constrain the reconstruction to pass through the temperature data, there is a vertical offset between the temperature you're reconstructing from the proxies and the actual temperature" also depends on there being a trend in the temperature data at the start of the training period. If you picked a time where the trend was zero, you might not have an offset bias coming from CPS-like methods.
"If they did indeed do that, it would seem pretty abysmal. Is it relevant, though? It's my understanding that numerous different methods and proxies have been used to produce millenial temperature reconstructions."
Your understanding would be incorrect. And does it matter how many different methods are used if none of them are checked?
"Feel free to prove me wrong, though."
If you don't fancy buying the Bishop's book, then you could try reading through the back archives at Climate Audit. Find the 'Categories' dropdown on the left-hand panel, and scroll down to 'Multi-proxy Studies' and 'Proxies'. You'll see there are a lot more headings under here than under MBH.
Or another fun read is the famous 'Harry read me' file. The sense of humour bursting out of every paragraph can keep the reader entertained for hours, even through the driest bits of code analysis. And you can't help feeling sorry for Harry. One of the highlights is the bit about inventing false ID codes to hide the existence of inconsistencies - search for the phrase "I can make it up. So I have". He explains how bad he feels about doing it, and how he understands what the consequences will be, but you can understand why he does it, too. The database he's talking about is published in the peer-reviewed literature, was cited in the IPCC reports, and there was no public announcement or acknowledgement of any problems with it until this document came out. In fact, there still isn't, officially. The database is still up there on their web site, with no warnings or caveats. You can't check it, because the underlying source data hasn't been published (although the reason given in the FAQ for not doing so is now known to be bogus - long story).
Seriously, spend just half an hour skimming through Harry. This is how climate science is done, behind closed doors.
Carrick,
I doubt that you and I would disagree about the idea that we should all conduct ourselves with integrity and honesty, that research integrity is important, maybe even that making people more aware of the issues would be of value. I certainly don't see this as being about research integrity itself (which is clearly important, and there are clear issues we should probably all be facing up to), but about how to interpret some events related to papers published more than 10 years ago.
Anders, we of course agree on the first part.
But how an invalid reconstruction got adopted by the WMO and the IPCC is, to me, of relevance to RCR.
It is also of importance for understanding and improving how we as a world community make decisions that impact all of us. If there were no potential for impact outside of say the history of science community, I wouldn't personally find discussing MBH relevant, though I might still be interested in understanding where the breakdown in his methodology occurred.
Anders says:
I can speak personally on this as Mann made very damaging accusations of fraud and dishonesty against me when I first entered this field, accusations that, in my opinion, have had a lingering and adverse impact on attitudes towards my work. I became aware of some accusations at the time; others are evidenced in the Climategate dossier. Mann has made similarly defamatory remarks about others. It's pretty amazing that you comment on these matters without being aware of Mann's own conduct.
Mann commenced making such allegations before Climate Audit started. And while many of your commenters seem to think that I have made accusations of "fraud" against Mann at Climate Audit or elsewhere, I have consistently refrained from using such terms in criticizing Mann's work and have editorially snipped or deleted such allegations by commenters at Climate Audit. Mann recognized this discretion in his pleadings in Mann v Steyn where he specifically noted that neither I nor McKitrick had publicly accused Mann of fraud - a point that you and your readers might take heed of.
Obviously, I took issue with many of Mann's false claims and thought that many aspects of his articles were very questionable. Several commenters have raised the issue of Mann's withholding of adverse verification statistics. This arose because Mann placed great emphasis on verification statistics in the original controversy. Mann's claims about the supposed robustness of his reconstruction to the presence/absence of tree ring data are also contradicted by the information in the notorious CENSORED directory. In his capacity as IPCC Lead Author, Mann was involved with "hide the decline". In my opinion, such issues ought to have been addressed by the Penn State academic misconduct inquiry rather than letting the matter drift on.
In respect to our original criticisms of Mann's work, we published in academic journals and I remain firmly convinced that our criticisms were solid ones and that they remain unrebutted both by the various contemporary responses and subsequent commentary. I note that much commentary has relied on caricatures of our articles and that many participants in the debate, including apparently yourself, do not bother reading our responses to the various contemporary comments.
Anders also commented that interest in Mann et al 1998, 1999 ought to have waned long ago in favor of more recent reconstructions.
A couple of comments.
Anders blames "skeptics" and critics for continued interest in the Mann controversies, but Mann himself has continued to stir the pot, even on issues that would otherwise have dissipated, through his book and his frequent public appearances, in which he regularly attempts to color scientific criticism of his work as merely political.
I agree that more recent reconstructions are, at this point, of more scientific interest. It has been a long time since I've commented other than in passing on Mann's older work, notwithstanding frequent allegations otherwise. Unfortunately, the field seems to learn little from the past and many, if not most, of the newer reconstructions continue to rely on questionable data and methods.
For example, the Marcott reconstruction has received much publicity, but the blade of its hockey stick is an artifact of proxy dropout rather than of blades in the individual proxies. Its SH reconstruction is particularly egregious, as its very early "blade" depends on a single oddball series.
Mann's 2008 reconstruction depended on the same bristlecone chronologies that were at issue in Mann et al 1998-99 - proxies that the NAS panel had recommended not be used. The Graybill bristlecone chronologies have been repeatedly used in other reconstructions, including the PAGES2K North American reconstruction. In a well-known controversy, Mann 2008 used contaminated Finnish sediment data, the huge HS of which was due to modern agriculture. After much evasion, Mann conceded in the SI of a different publication that his vaunted no-dendro reconstruction did not survive without the contaminated data, but did not retract or correct the original publication, which continued in widespread use, including by the EPA. Mann's failure to withdraw the contaminated Tiljander data resulted in its subsequent use in Tingley and Huybers, who likewise have not acknowledged a problem. The PAGES2K Arctic reconstruction used a similarly contaminated series.
People have long criticized me for not trying to tweak methods, but I've long observed that, in my opinion, the problem in the field is not a need for a more complicated multivariate method, but development of consistent datasets. A focus on data is far more prevalent in Holocene scale work than in "recent" work which, in my opinion, is overly influenced by exotic multivariate methods at the expense of patient quality control, data set by data set.
Careful, Omnologos, you will have Anders telling you to stop generalising when using some kind of quote because you are too close to the mark!
Mailman
I'm glad Anders has exposed his intransigence.
===========
> It has been a long time since I've commented other than in passing on Mann's older work, notwithstanding frequent allegations otherwise.
See for yourself:
http://climateaudit.org/category/mbh98/
There is a possibility that this category does not faithfully represent "Mann's older work".
Might be a vocabulary thing.
***
> I have consistently refrained from using such terms in criticizing Mann's work and have editorially snipped or deleted such allegations by commenters at Climate Audit.
That editorial practice may have been more relaxed at Tony's:
As too often, nevaudit makes stuff up. You challenged my statement that "It has been a long time since I've commented other than in passing on Mann's older work, notwithstanding frequent allegations otherwise" by pointing to
http://climateaudit.org/category/mbh98/. That link shows that my most recent article on MBH98 is from 2008. What's your point?
Since you did not provide any evidence from Climate Audit, I take it that you agree with Mann that both McKitrick and I have not publicly used the term "fraud" in respect to Mann's work, notwithstanding the contrary allegations at blogs that you frequent.
[Let's see if the first part goes through.]
The Auditor writes:
> That link shows that my most recent article on MBH98 is from 2008.
The most recent article in that category is this one:
http://climateaudit.org/2014/05/09/mann-misrepresents-the-epa-part-1/
This means that in May 2014 there was an article that was classified as "MBH98" on CA.
Wonder why?
[Let's try with the correct second part, with no F-words.]
> What’s your point?
That articles allegedly not "on MBH98" got classified as "MBH98". That "on MBH98" cuts very little ice regarding what AT was discussing. These two points alone show that the accusation that I "make stuff up" has little merit.
And that's notwithstanding that we're in 2014, discussing something that can be categorized as "MBH98" on some contrarian blog.
This exchange appears to indicate that willard tries hard to play the part of the cretin, and succeeds. Congratulations.
I wrote: "It has been a long time since I've commented other than in passing on Mann's older work, notwithstanding frequent allegations otherwise." Nevaudit contests this point, armwaving to www.climateaudit.org/category/MBH98. I pointed out that the most recent post in this category that commented directly on MBH98 was in 2008. Nevaudit challenged this by observing that a 2014 post on Mann's misrepresentation of the EPA findings carried a tag "MBH98", as though that refuted my observation.
However, the post in question http://climateaudit.org/2014/05/09/mann-misrepresents-the-epa-part-1/ proves my point. It was about the EPA report and Mann's misrepresentation in the pleadings. It did not address or discuss MBH98 issues other than show a spaghetti graph with MBH98 in it. Better confirmation of "other than in passing" can hardly be contemplated.
Not that Nevaudit cares about the facts.