
Whether to trust statistics


Betsey Stevenson and Justin Wolfers, writing at Bloomberg's website, consider some rules of thumb to help the lay reader decide whether or not to trust someone's statistical analysis. Here's the first of them:
Focus on how robust a finding is, meaning that different ways of looking at the evidence point to the same conclusion. Do the same patterns repeat in many data sets, in different countries, industries or eras? Are the findings fragile, changing as one makes small changes in how phenomena are measured, and do the results depend on whether particularly influential observations are included? Thanks to Moore’s Law of increasing computing power, it has never been easier or cheaper to assess, test and retest an interesting finding. If the author hasn’t made a convincing case, then don’t be convinced.
It's hard not to recall the case of the Hockey Stick and its reliance on the bristlecones. And all the other paleoclimate studies that are said to support the Mannian stick, and which rely on bristlecones too.
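The "fragility" test in the quoted passage - do the results depend on whether particularly influential observations are included? - can be sketched in a few lines of code. Everything below (the data, the outlier, the trend) is invented purely for illustration; it is just a leave-one-out check on an ordinary least-squares slope, not anyone's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(30, dtype=float)
y = 0.5 * x + rng.normal(0, 2, size=30)
y[-1] += 25.0          # one deliberately influential observation

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

full = slope(x, y)
# Refit with each observation left out in turn.
leave_one_out = np.array([
    slope(np.delete(x, i), np.delete(y, i)) for i in range(len(x))
])

# If dropping a single point moves the estimate a lot, the finding is fragile.
print(f"full-sample slope: {full:.3f}")
print(f"leave-one-out range: {leave_one_out.min():.3f} .. {leave_one_out.max():.3f}")
```

The same idea scales up to dropping whole proxies, countries or eras from a data set, which is exactly the robustness check Stevenson and Wolfers ask readers to look for.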
Reader Comments (51)
Did they suggest that the reader might put more trust in statistics if the writers told fewer lies?
It is more than trusting statistics. The example the Bish gives at the end would concern me well before any statistical manipulation. You're saying you can get accurate temperature readings from tree rings? You would really have to convince me of the science behind that first before I looked at your wiggly graphs.
If you can't plot your raw data in a way that shows your point clearly I'm very doubtful of any 'statistical' result too.
There has been an obvious typo. The Bloomberg text should read: "Thanks to Moore's Law of increasing computing power, it has never been easier or cheaper to produce mountains upon mountains of absolutely rubbish findings masquerading as statistical analyses".
Pick your illusions:
The end of a psuedo[sic]-scientific illusion.
I love this bit:
I am going to steal it. It is mine.
This is perhaps relevant: Margins of Error: a debate co-organized by the Royal Society of Statistics on public understanding and trust in statistics. Also the June event on "gaps between public perceptions and reality on key policy issues" (see last para of quote). I wonder which "key policy issues" they have in mind...
http://www.rssenews.org.uk/2013/04/society-to-hold-debate-on-trust-in-statistics/
D'Arrigo et al did not use Bristlecone pines!
http://www.st-andrews.ac.uk/~rjsw/all%20pdfs/DArrigoetal2006a.pdf
Rob
While I fully support the extract and the Bish's examples, it is rather unfortunate that Moore's law gets quoted. Quite apart from never really being a law in the first place, its prediction about compute power hasn't held true for years, even while it remains trivially true regarding the number of transistors. Silicon long since ran into the laws of physics, which is why everything is multi-core these days (you can't cool it otherwise). Performance benefits now come mainly from the number of cores, not frequency, so if you can't usefully parallelise your task, you don't see the benefit. And even if you can, there are extra overheads.
http://www.networkworld.com/community/node/45635
"D'Arrigo et al did not use Bristlecone pines!"
Indeed it didn't! It used Yamal!
Here is a comparison of DWJ06 with itself WITHOUT Gaspe and Yamal
http://www.st-andrews.ac.uk/~rjsw/ftp/for%20BH.JPG
Rob
forgot to say - time-series are z-scores w.r.t. 1750-1950
R
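For readers unfamiliar with the normalisation Rob mentions, here is a minimal sketch (in Python rather than whatever software he actually used, and with invented ring-width data) of what "z-scores w.r.t. 1750-1950" means: each series is centred and scaled by the mean and standard deviation computed over the base period only, so different series are on a common footing before comparison:

```python
import numpy as np

def zscore_base(series, years, base=(1750, 1950)):
    """Standardise a series using mean/sd computed over a base period only."""
    in_base = (years >= base[0]) & (years <= base[1])
    mu = series[in_base].mean()
    sigma = series[in_base].std(ddof=1)
    return (series - mu) / sigma

years = np.arange(1700, 2001)
rng = np.random.default_rng(1)
fake_ring_widths = rng.normal(1.2, 0.3, size=years.size)  # illustrative data only

z = zscore_base(fake_ring_widths, years)

# By construction the z-scores have mean 0 and sd 1 over the base period.
mask = (years >= 1750) & (years <= 1950)
print(round(z[mask].mean(), 6), round(z[mask].std(ddof=1), 6))
```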
Thanks Rob. That's interesting.
Medieval warm and LIA showing up nicely there. What happens after 1950?
"What happens after 1950?"
It jumps up suddenly in 1923 and then again in 1936. There's a peak around 1943 following which it drops steeply until 1975. Then it jumps up again in 1983 to almost the level of the 1943 peak before dropping again up to the end of the series. Interesting behaviour, to say the least.
The 1936 jump appears to come from a series called 'Wrangells'. I was looking for the source of the 1923 jump, but it's not quite so obvious - several series spike then, but no individual series seem to show any major sustained change. I haven't looked at 1982 yet.
Data is available here: http://www.ncdc.noaa.gov/paleo/pubs/darrigo2006/darrigo2006.html
But back to the main topic. D'Arrigo et al. did use Yamal, which Rob Wilson doesn't disagree with, and Yamal was sensitive to an individual tree. D'Arrigo et al. might or might not be - Rob's graph says it isn't sensitive to Yamal or Gaspe, but I couldn't say any more than that without a closer look. It's a fair point.
However, the graph could be seen as another way of repeating the constant climate science refrain "the errors don't matter". I think they do matter, even when they don't make a difference to the results, because the aim of science is not merely to get the right answer, but to *know* that you've got the right answer - for the confidence to be justified. If errors like Yamal can get through, we don't have that confidence. We don't know what else might have got through that we don't yet know about.
However, all respect to Rob for showing that my pat 'Yamal' shot is at best a glancing blow. I'm not sure if I'm going to pursue the question, to figure out where those jumps do come from. Are they common to lots of series, or is the result sensitive to individual ones? But that's the sort of question people need to ask.
It's worth noting Steve McIntyre thought D'Arrigo et al. was much better than the average paleoclimate paper. I think it was only the others that the Bishop was talking about.
Rob Wilson. No climate scientist has yet explained with any degree of plausibility why some trees at treelines are "treemometers" and others are not.
It all smacks of spurious correlation. You select your "sensitive" trees- a statistically dubious procedure anyway- and assume that they continue to act as "treemometers" before the calibration period.
Pure speculation- it is up to you to PROVE that this is the case.
You can't.
So what if trees sometimes follow temperature and at other times do not?
Well, you will get a hockey-stick-shaped reconstruction.
But isn't that the whole point?
Don and others:
Not yet advanced enough with the work, but one day I will answer your very question using Scotland as a case study. Maybe next year......?
http://www.st-andrews.ac.uk/~rjsw/ScottishPine/
It is the perfect Case Study to "understand" the complexity of tree-growth response - i.e. the influence of ecology, elevation, human impact and other factors. It is a minefield, but I also know that through all the noise and complex issues, there is a really good climate signal in there.
Be patient and watch this space.
Rob
Come on then Rob, let's see it. Any half decent plant physiologist knows you've got bugger-all chance of seeing a temperature signal using the output of any annual crop plant, even using heavily replicated trials because of the interaction of dozens of confounding factors. Claiming you can detect tenths of degrees in tiny numbers of ancient pieces of wood by ANY analysis, never mind trivial vectors of principal component analyses has the credibility of crystal healing or homeopathy to serious applied biologists. It's just noise - you're making it up.
Just as a matter of interest, I am a half-decent plant physiologist.
My last Research Assessment Exercise (2008) rating 3* (Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence)
I can tell anyone, for nothing, that the factors that determine plant growth are complex, multifaceted and interact in non-linear ways.
Best of luck, Rob.
re. let's see it:
well, have a look at the motivation link at: http://www.st-andrews.ac.uk/~rjsw/ScottishPine/
Now - of course, these are trees - they are not thermometers - but they do model temperatures quite well.
As a proxy, they are far from perfect, but for periods with NO instrumental data, they do provide important information that we would otherwise NOT have.
May 4, 2013 at 8:03 AM | Rob Wilson
Interesting stuff. I hope you can unravel - to your personal satisfaction - the chaotic behaviour of a treemometer proxy. You will then need to convincingly relate that to the chaotic behaviour of the climate and convincingly demonstrate that it is us nasty humans wot dunnit.
Do you think that you will be prepared to bet your 'ranch' on your results? And all your neighbours' and friends' ranches?
That's precisely where all your predecessors have enthusiastically gone.
Not to say that you are wrong to carry on with the 'science bit'.
But beware un-sciencey peasants with inflated energy bills, unreliable energy, pitchforks and fire-brands, starting to mass outside the academic ivory towers.
Rob "It is the perfect Case Study to "understand" the complexity of tree-growth response - i.e. influence of ecology, elevation, human impact and other factors."
I trust two of these "other factors" are CO2 fertilisation and fertilisation from atmospheric nitrate deposition. As you are aware both of these have risen sharply in the last 100 or so years.
It will be the Devil's own work to deconvolute increased growth from these factors compared with the piddlingly small stimulation from a few degree-days of extra warmth.
As I said earlier, best of luck.
Rob, the facts to date are that tree ring proxies have been used as a half-plausible source of proxy measurements that satisfy the needs of activists. As for providing information we do not have, how do we begin to know that? Their performance against the recent instrumental record is so worthless that no other branch of science would bother with them - better to admit you know nothing than believe in falsehoods.
The truth is that they're an essential source of disinformation for activists and this abuse has enabled misanthropic nihilists to gravely discredit science and ultimately humanity.
Maybe another reader can help me with this. Clearly the mass media report of a statistical finding will not contain the richness of information needed for the appraisal suggested above, and maybe the paper itself won't either. Are the authors suggesting that the results should not be relied upon unless the tests or considerations they suggest are included in the paper? If they are not included, do other readers here suppose it likely that the information needed to do one's own audit would be publicly available?
I ask these questions because, perhaps wrongly, I find their suggestions a bit silly in light of the tools and methods available to most of us. We are all in great luck if the study attracts the interest of Steve McIntyre, Lucia, Nic Lewis, or a doubter of similar caliber. But if not??
To trot out and possibly pervert a Richard Feynman requirement for reports on scientific studies: each should contain a section discussing what could be wrong with the method and the data, what things were not looked at, whether they should have been, and so forth. I would bet that few reports bother with this.
I am not quite ready to perfunctorily dismiss any statistical study which doesn't include a significant section testing the findings by varying the data in the ways discussed above, but maybe I should.
Where do you folks think these examinations they advocate are going to come from?
I might add that my response to their suggestions is a bit like my response would be to an auto industry critic recommending that I run my newly purchased car on my dynamometer and return it to the dealer if it doesn't meet the torque and horsepower specifications. What dynamometer?
"Are the authors suggesting that the results should not be relied upon unless the tests or considerations they suggest are included in the paper?"
If the tests and data are not there, it's a bad sign. If nobody else has checked that it's OK, it might not be safe. Even if you can't do the checks yourself, you can ask whether the checks have been done, by someone you would expect to be motivated to say if there's a problem.
And yes, the results should not be relied upon until they've been thoroughly tested and validated.
"I ask these questions because, perhaps wrongly, I find their suggestions a bit silly in light of the tools and methods available to most of us."
That depends on how important it is for you to make the right decision. If it's not something that matters that much to you, then you can make your decision by whatever unscientific heuristic you like - that's not a criticism, that's what freedom of belief means - so long as you understand that your belief does not have the endorsement of the scientific method or the confidence it can confer. You can 'trust the experts', follow the herd, keep the faith, read it in the stars and the entrails of chickens, whatever.
But if you want to be able to back your beliefs with the power and reputation of science, there is no alternative to *doing science*.
When the Egyptian king Ptolemy asked Euclid whether there was a shorter/easier way to learn than working his way through the Elements, Euclid replied that "There is no Royal Road to geometry". It's the same question we always ask when we see the achievements of hard-won expertise: how can I do that but without making any effort? How can I be a pop star or top footballer? How can I be a chess grand master or a Judo black belt? Kids ask it: "How can I pass my science exams without having to study?" You can't. But you can make the effort.
In this context, I think sometimes of the story of Sophie Germain, who wanted to do mathematics in an age when women were not allowed to. She hid her books under the bed covers and studied at night by candlelight so her parents wouldn't find out. She taught herself Latin and Greek. She corresponded under the assumed name of Monsieur Le Blanc. In the age of the internet we have no cause for complaint.
So I would say that when you ask "What dynamometer?", the next step is that the expert tells you where to get one, and teaches you how to use it. There are people on the internet who enjoy doing so; even sometimes the people you're criticising, if you're nice to them about it.
And from their point of view, if they're tired of people who don't know what they're talking about criticising them, then they should teach them to be better critics. Turn them into a free resource.
So Rob Wilson could tell you where to download his data, and someone could tell you how to get more data from the ITRDB database, and someone else could tell you how to get them into R, and how to plot them out, and how to look at the correlations between them, what it means, how others interpret it, and equip yourself to ask better questions and to argue. We're all on the same long and winding road, and we never stop learning. Those ahead of you can tell you ways to make the walking easier. Even if you only go a little of the way, it helps.
But it's still not easy, and not certain. What can I say? That's life.
An old CA post on trees as thermometers with (IIRR) a documented statistical analysis:
http://climateaudit.org/2009/08/28/the-lodgepole-pine-a-case-study/
Nullius, thank you for your thoughts. I think they are the right ones.
Of course I knew that the way you describe is the only way, taken either by me or someone I trust. I think I was put off by what seemed to me the casual way the recommendations in the post were tossed off. It isn't a simple thing to qualify or disqualify a complex statistical study.
If a weighty decision hangs on the validity of a study, the owner of the decision should of course insist that its authors have responses to the issues raised.
Thank you again.
Rob -
I don't know if you've read Jim Bouldin's posts on temperature reconstruction from tree rings, beginning with this, where he writes, "It is not an overstatement to say that all long term climatic estimates [from tree rings] are suspect."
Perhaps you could post a rebuttal to his criticisms there. A well-argued discussion about the confidence which can be placed in such temperature reconstructions would be instructive.
jferguson,
You may also like tip number 3 from the original article linked.
It's also a good test because if they can't explain it in terms you can understand, it might be that they don't really understand it either.
Nullius,
I've been pondering what you say above. I agree that this approach ought to be applicable; Feynman recommends it. But doesn't expecting an understandable explanation seem more realistic for an analysis which has its basis in a physical construct, rather than what amounts to signal detection by statistical process of a suspected signal whose excursion characteristics are really unknown in the looked-at data?
In other words, it should be easy to explain what is going on in secondary-school terms if one knew exactly how it worked, but one doesn't. Instead we are confronted by complex analysis of data with partially understood symptoms of the object signal.
I don't think conclusions dependent on statistical analyses of the sort we read about at Lucia's or CA, including use of principal components, centering, date offsetting, etc. can be explained at the secondary school level yet I wouldn't dismiss them for that reason.
It's too easy to dismiss the tree-ring work as nonsense because a discrete temperature signal cannot be isolated by physical understanding.
In my heart, though, I think it must be nonsense unless you can.
Applied statistical analysis of data with only one or two variables can be tricky enough to pursue with high levels of rigour, but when many variables are involved, it becomes almost impossible and a great deal more by way of ad hoc assumptions are required. That is not to say that it cannot be useful – it could be, and anyway it can be hard to resist if you have a large data set and software that can handle it!
At the very least, multivariate analyses (such as principal components) can be useful as a source of hypotheses which can then be pursued, verified, or rejected by further investigations. Using multivariate analysis on its own seems to me to be unlikely ever to be really convincing in general. I know of successful industrial uses in process monitoring, but there the method is used to alert a process operator to a possible problem, to the possibility that something unusual has happened to the process and therefore it ought to be investigated just in case.
One more vivid success I can recall for it concerns African killer bees. They are quite hard to distinguish in appearance as individuals from more docile honey bees (their behaviour in swarms is of course dramatically different!). According to an article in Scientific American in December 1993 (by Rinderer et al.)
The article at Bloomberg does give good advice for the checking of arguments relying heavily on statistical analyses. But the problem of 'the fallibility of human reason' is a far broader one. A great deal has been published on that. One of the most readable books on it that I have come across is by Thomas Gilovich, 'How We Know What Isn't So' (1991, The Free Press). There is this quote at the head of his final chapter:
Statistics as an area of study, as a part of whatever the scientific method is, surely has a lot to contribute to that. The Royal Statistical Society has some materials aimed at helping journalists do a better job in this area: http://www.getstats.org.uk/resources/journalists/. For example: http://www.getstats.org.uk/wp-content/uploads/2012/02/How-to-spot-error.pdf
John Shade,
Thank you for your observations. Gilovich's book is in Kindle so later today ...
I have no doubt of the value of statistical analysis, particularly in identifying that two things are different, or maybe the same, or that a process has changed suddenly, or over time. And I don't doubt that variables which change could indicate changes in other variables not monitored or maybe not recognized.
Statistical analysis of an industrial process is likely effective because the owner of the process controls the variables, or may in the course of a process anomaly discover variables he didn't know were there - I'm thinking of the sudden appearance of trace formaldehyde in canned cat-food when the "fish-parts" were re-sourced from docks of Seattle to fish farms in China - see E.M. Smith for how this was detected - cat as an analytical device.
But if you want to track a single variable by statistical analysis and the proxy of that variable also is responsive to other effects, moisture, sunlight, local CO2 concentrations, soil conditions, and you don't (cannot?) track those variables in structuring the model, then I don't see how you can train your model to isolate the one variable you are interested in.
I realize that there are some very smart people investing their careers in these studies, but it doesn't seem impossible, to me at least, that they may be wasting their time.
What statistical analysis might do is discover a really great (clean) proxy. Sounds like it hasn't happened yet.
jferguson,
That's a key point - tracking a single variable isn't something PCA is capable of. The vectors simply aren't labelled by the analysis process - they merely show that 'something' is happening. Additionally, the magnitude of a vector cannot be graduated into, say, temperature. To pick a fourth vector (i.e. an inherently inconsequential one, since it represents only a tiny proportion of the observed variation) as Mann so infamously did, to label it as temperature and then to go on to pretend that he has thereby revealed historical temperatures to fractions of degrees is gross fraud. To go on to put 'confidence intervals' on the said fraud is yet another level of deception. What we have is noise, pure and simple. Average noise tends to a straight line... seen one of those somewhere?
Wow, I've apparently failed to convince even HaroldW of the legitimacy of the detrending problems I've described in detail. Fat chance then of any practicing tree ring workers like Rob or others taking any actual notice.
I was suspicious from the start that trying to explain these issues on a blog was likely a waste of time. Now I'm fairly well certain it was. Such a clusterfuck these discussions are. We are complete and utter idiots to believe anything is accomplished by them, other than as excuses to reinforce one's preconceptions.
"But doesn't expecting an understandable explanation seem more realistic for an analysis which has its basis in a physical construct rather than what amounts to signal detection by statistical process of a suspected signal whose excursion characteristics are really unknown in the looked-at data? "
Err. Yes. That's the point. Signal detection when you don't know the characteristics of the signals or the noise is impossible. That's the error.
What they're doing is making assumptions about how the signal and noise should behave, quite often circularly based on the hypothesis they're trying to prove, and using those. The best you can say is that *if* their assumptions are correct *then* the signal they get is the most likely value of it.
But for those who try to *detect* the signal by these means, they're using fancy mathematics built on circular and muddled premises which they carefully don't explain. That's probably why you don't understand it.
"I don't think conclusions dependent on statistical analyses of the sort we read about at Lucia's or CA, including use of principal components, centering, date offsetting, etc. can be explained at the secondary school level"
It's a lot easier to do it with some diagrams, but I'll have a go.
When you plot data out in several dimensions, the dots are often grouped in an ellipse or rugby ball shaped blob around some point. To specify the shape precisely, we look for the longest axis of the ball, measure its length and direction, then the next longest, and so on. The axes are all at right angles to one another, and longest axis tells you about the biggest source of error or spread, the next biggest axis about the next most important, and so on. The offset is the sum of all of them, but if you take only the biggest few you usually get quite close.
When it's the spread you're interested in, then the direction of each axis is called a "principal component" and the length of it is called its "eigenvalue". They're just posh names for the dimensions of a rugby ball.
To calculate them, you measure the spread from the centre of the ball. That's "centering". If you instead measure the spread from some different point, you're not measuring the dimensions of the ball, you're mixing in some part of the offset. If it's a big enough offset, then the 1st principal component just sees the variation of the entire ball from your centre, and is in that direction. Then all the other vectors will be in the wrong directions and will have the wrong lengths. That's what Mann did.
With this picture in your mind, you can then ask sensible and intelligent questions like "what happens if the blob isn't elliptical, but banana-shaped?" (the method is invalid, you need to transform the data to straighten the banana) or "why should the factors of interest be at right angles to one another?" (the method looks for statistically independent contributors, which for a big enough sample will be very close to being at right angles; correlated contributors do indeed get mixed up) and even "why the hell would we expect the principal component to be measuring temperature, rather than some other variable or mixture of variables?", to which the answer is: we wouldn't, it's an assumption.
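For anyone who wants to see the rugby ball rather than imagine it, here is a small sketch (invented 2-D data, in Python) of the difference between centring on the mean of the data and centring somewhere else. It is only an illustration of the geometry described above, not a reproduction of any published calculation:

```python
import numpy as np

rng = np.random.default_rng(2)
# An elongated 2-D blob: long axis along (1, 1), offset well away from the origin.
n = 1000
t = rng.normal(0, 3, n)
noise = rng.normal(0, 0.5, (n, 2))
data = np.column_stack([t, t]) / np.sqrt(2) + noise + np.array([10.0, 0.0])

def principal_components(x, centre):
    """Eigen-decomposition of the second-moment matrix about `centre`."""
    d = x - centre
    cov = d.T @ d / (len(d) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]      # longest axis first
    return eigvals[order], eigvecs[:, order]

# Properly centred: the first axis points along the long axis of the ball.
vals_c, vecs_c = principal_components(data, data.mean(axis=0))

# Mis-centred at the origin: the offset dominates, so the "first PC" points
# roughly from the origin towards the blob instead of along its long axis.
vals_m, vecs_m = principal_components(data, np.zeros(2))

print("centred PC1:", np.round(vecs_c[:, 0], 2))
print("mis-centred PC1:", np.round(vecs_m[:, 0], 2))
```

With proper centring the leading eigenvector lies near 45 degrees (the true long axis); with the wrong centre it swings round to point at the offset, and all the shorter axes are distorted with it.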
"It's too easy to dismiss the tree-ring work as nonsense because a discrete temperature signal cannot be isolated by physical understanding."
Let's just take two variables, to make the picture simpler. We put temperature on the x axis, soil moisture on the y axis, and plot tree growth as a function of these two variables along the z axis, so we've got some surface sticking up like a hill from the xy plane.
Now the climate moves along some curve in the xy plane, like a hill-walker following a trail on a map. Because it's hilly, she goes up and down, too. But you can't see the track, all you can see is the changes in height. How much can you work out about the track just from this?
Well, if you happen to know the hill-walker is on a north-facing slope, you can tell how far north or south they are. If you know they're on an east-facing slope, you can tell how far east or west they are. But if they're on a north-east-facing slope, the height can only tell you about a mixture of the two.
Similarly, if a tree is in a temperature-limited part of its range, it will measure temperature quite well. If it's in a moisture-limited part, it will measure precipitation. But you don't always know, and it can silently switch from one behaviour to the other as other variables change. What's often seen in tree rings is that it will show a strong correlation with temperatures for a period of many decades, but then suddenly stop doing so. And it's not obvious that this has happened just from looking at the wiggly lines, which appear much the same.
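The "silent switch" described above is easy to demonstrate with synthetic data. In this sketch (every series is invented for illustration) growth tracks temperature for the first half of the record and moisture for the second, and nothing in the growth series itself announces the regime change:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Two independent, slowly varying "climate" drivers (red noise).
temperature = np.cumsum(rng.normal(0, 0.3, n))
moisture = np.cumsum(rng.normal(0, 0.3, n))

growth = np.empty(n)
growth[:100] = temperature[:100] + rng.normal(0, 0.2, 100)  # temperature-limited
growth[100:] = moisture[100:] + rng.normal(0, 0.2, 100)     # then moisture-limited

def corr(a, b):
    """Pearson correlation coefficient."""
    return np.corrcoef(a, b)[0, 1]

# Strong temperature correlation early; the second-half correlation is only
# whatever spurious overlap two unrelated series happen to share.
early = corr(growth[:100], temperature[:100])
late = corr(growth[100:], temperature[100:])
print(f"correlation with temperature: first half {early:.2f}, second half {late:.2f}")
```

Calibrating on the first half and extrapolating backwards or forwards would silently go wrong here, which is the point: the wiggly lines alone don't tell you which regime the tree was in.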
The science, therefore, is in figuring out *when* a tree ring series is measuring temperature, and when it's measuring something else.
There are some tricks for doing so, but so far as I understand it they're pretty crude at the moment. I think that with a sufficient study of the circumstances of each sampled tree (soil type, slope, drainage, nearby trees, etc.) and how these probably varied in the past, you could do a better job. There is recoverable information buried there. But it's hugely complicated to do, and I'm afraid that in the main current science shows little sign of making the attempt.
Rob could maybe correct that impression, by giving us more detail on how they separate factors, but that will depend on us being nice to him, and paying attention. What's the point, if we're going to ignore anything he says we don't like?
However, it is only with such physical understanding that they can make progress, and we can be convinced. I don't know that it's impossible, I don't think it's obviously so, but I haven't seen anyone plotting out growth response surfaces for us either. It would need some good will on both sides.
Nullius, I cannot thank you enough for these useful explanations. I suppose one thing I could do is the same thing I did when I was learning structural design: do problems with known answers until I got the right ones, and keep moving up in complexity.
thank you again.
john
"Rob could maybe correct that impression, by giving us more detail on how they separate factors, but that will depend on us being nice to him, and paying attention. What's the point, if we're going to ignore anything he says we don't like?"
The point is to cast aspersions on climate science generally. Obviously. Find flaws in some branch of the science and then over-generalize them to the broader field as a whole.
What do you expect, Jim? Crap science is the vehicle used by politics to destroy people's lives and livelihoods. Then you bring a knife to a gunfight! The fact is, dendrothermometry is nowhere near good enough to determine political policy. Over at Climate Audit "climate scientists" are dissecting the minutiae of a pollen spike when they are incapable of taking a sample of mud from a lake bed without stuffing it up.
And you, Jim are getting precious about your "science" when your "science" is driving industry overseas. You get paid by taxes while private sector workers lose their jobs. You, Jim are doing fine with a nice website and a fancy dog, while others are going bust and losing their homes. You, Jim need to get a grip.
Jim Bouldin (3:49 AM) -
You haven't failed to convince me, quite the opposite in fact. However, two factors temper my agreement. First, although I've downloaded your code, I haven't had time to play with it, to understand the interplay of tree response to climatic conditions and the attempted deconvolution of the response. So I have to consider myself as uneducated in this area, and a counterpoint would be valuable to aid my learning. What does Rob (as a presumed defender of current approaches) think are the strong and weak points of your analysis? Second, I work in an area which involves signal detection, although more structured than tree rings, less noisy, and replicable. My intuitive reaction is therefore that the information content of tree rings is adequate only to make weak deductions at best from the measurements, certainly nothing with the confidence attributed to reconstructions. But I am ever mindful of confirmation bias -- it's very easy to nod one's head in agreement with what accords with one's prejudices. Until I have a deeper understanding, that's really all I'm doing.
Hey Hector, nice job of spouting a bunch of nonsense that you know nothing whatsoever about and that has nothing to do with understanding the science. Not that I should expect anything less, of course.
OK thanks Harold, I misinterpreted what you said above.
I think its unlikely that Rob will weigh in on what I've written, although he is more likely to do so than any other practicing dendrochronologist, since at least he engages on websites like this one to some degree. But I doubt that he has read what I've written, and if he has, I doubt that he's willing to level the criticisms against the field that my work leads directly to. But I could be wrong on that.
I agree with your assessment about the information content of tree rings using current analysis methods, but if those methods are altered--in particular the field sampling methods--then that also changes the information content, because it ameliorates the confounding of the effects of tree age/size on ring response. I've explained that in a couple of the pieces of the series.
Jim, you come here and criticise other posters as non-science numbskulls, yet make the elementary error of confusing a "dendrochronologist" - someone who measures times and dates using tree rings, an entirely uncontroversial science - with a "dendroclimatologist" - someone, usually a climate psientist, who loudly proclaims that they can reconstruct past temperatures, to tenths of a degree, by measuring tree rings.
A complete, to use your own description, "clusterfuck".
Jim, fact is that their incompetence/fraudulence entirely changed my 'preconceptions'.
When the facts change, what do YOU do?
"Jim, you come here and criticise other posters as non-science numbskulls"
Uh, no, I didn't, but since SOP among so-called sceptics at sites like this is to play the "oh boo hoo, that scientist was a meanie to me" card, go ahead and believe that in this case as well.
Jim hi, I haven't followed this thread at all carefully but I did skim it earlier and thought that HaroldW was broadly agreeing with you. So I'm glad to see that that at least has been cleared up.
As for the rest, I'm grateful for your attempt at dialog. I personally would prefer that someone like you, using their real name and not being a very frequent contributor here, wasn't confronted by angry criticism from those who feel more at home here but aren't prepared to be identified in the same way. To Don Keiller I would extend more leeway, so you might still struggle! But my approach in any case has not attracted universal acclaim since I first mentioned it. Which has the interesting by-product that I may empathise a little more with how you feel today.
Anyway, thanks again. There is almost always more value in such attempts - not least because of lurkers who listen and reflect but choose not to dive in - than it seems at the time.
Jim Bouldin
Since you apparently agree that tree rings make worthless proxy thermometers, why are you using the abusive 'so-called' tag for people who share your opinion? It seems an odd approach - does your link with 'Real'Climate mean that you have to behave this way?
"The point is to cast aspersions on climate science generally. Obviously. Find flaws in some branch of the science and then over-generalize them to the broader field as a whole."
Boy, is that a weird fantasy theory or what.
For the record, but for its policy-mongering, popularizing and publicity-addicted shenanigans, no one would give a crap about climate science. That is the sad truth, sadder than the pain climate scientists (not all of them, of course, but I guess one has to say it) try to inflict on society in drawing attention to their discipline: "oh, please, care about our findings, our models, our fossil fuel politics, 'cause otherwise civilization will collapse".
Jim, at least you have a sense of irony: "oh boo hoo, that scientist was a meanie to me".
One high-profile climate psientist actually discussed with his colleagues how he could make trouble for me at my workplace because I had the temerity to ask for his data.
In case you are unaware, it is standard scientific practice to make one's data available so that results can be replicated/verified by one's peers.
If you do not believe me, check out the "Climategate" emails.
Someone noted that Jim Bouldin is part of RealClimate. His comments here remind me why I avoid that site – the tone of it under duress is so often like that of a petulant and hot-headed youth with a few barbs up his sleeve which he can scarcely stop from tumbling out at the slightest hint of a relevant opportunity. Thus Jim has given us (I have added italics to help distinguish quotes from other people):
(1) Out of the blue, missing the nature of the comment he is reacting to, but allowing a few barbs:
(2) Another barb:
(3) And another:
(4) Then a moment of embarrassed contrition (and I wonder if he will feel a need to compensate for this with some subsequent aggression):
(5) I don’t think this is the pay-off yet, this non-sequitur is just another triggered barb-release:
Richard Drake displays a better response to all this than I do. He brushes aside the tone, and notes the value of having an exchange of views. I admire that, and I wish I could always manage it myself.
John thanks, but I did also say that I hadn't really been paying attention to this thread. Jim may deserve what he's getting, in the stead of others from RC that wouldn't dream of appearing here. There again, Gandalf once said to Frodo "Many who live deserve death, and some that die deserve life - can you give it to them? Do not be so quick to deal out death and judgement. For even the very wisest cannot see all ends." Ah, he was a great sceptic in my early days.
Jim, you say you think you've been wasting your time. I don't think that's the case. People have read your posts and learnt from them - even though the comments on your blog have been low in quantity and quality. The failure of the tree-ring community to reply or address any of the issues has been noted. You've also gained some "skeptic credibility points", if you care about these, for being prepared to criticise an aspect of the field. In other words, I'm more likely to trust something you say than something that certain other climate scientists say.