The Royal Society on the temperature records
I was thinking about Doug Keenan's WSJ article about statistical significance in the global temperature records - for those unfamiliar with it, the argument is that we don't know whether the recent warming is significant or not, because we don't know what statistical model to adopt to describe the climate's normal behaviour. Doug has published a "director's cut" of the article at his website.
I found myself wondering how the Royal Society had explained the recent warming to the public in their new paper on climate change and, more particularly, how they had addressed the question of statistical significance. Here's the relevant excerpt:
Measurements show that averaged over the globe, the surface has warmed by about 0.8°C (with an uncertainty of about ±0.2°C) since 1850. This warming has not been gradual, but has been largely concentrated in two periods, from around 1910 to around 1940 and from around 1975 to around 2000. The warming periods are found in three independent temperature records over land, over sea and in ocean surface water. Even within these warming periods there has been considerable year-to-year variability. The warming has also not been geographically uniform – some regions, most markedly the high-latitude northern continents, have experienced greater warming; a few regions have experienced little warming, or even a slight cooling.
When these surface temperatures are averaged over periods of a decade, to remove some of the year-to-year variability, each decade since the 1970s has been clearly warmer (given known uncertainties) than the one immediately preceding it. The decade 2000-2009 was, globally, around 0.15°C warmer than the decade 1990-1999.
So, no mention of statistical significance. This is a bit disappointing really - this is our national science academy. I'm not sure that saying that recent decades are warmer than earlier ones is saying anything very much at all.
The other thing that interests me is the reference to known uncertainties. What is the magnitude of the known uncertainties in the temperature records? I'm not sure I've seen these before (or perhaps I've forgotten).
Reader Comments (33)
It's almost as if they are embarrassed to talk about it.
The RS paper essentially relies on computer models to make its case. There is no claim that the instrumental temperature record, on its own, evidences AGW; there is no claim that proxy temperature records, on their own, evidence AGW; etc. That is a huge divergence from the IPCC position.
The authors of the RS paper could hardly have done that accidentally. They seem to know the truth: there is no empirical case for AGW. This is fascinating.
To anyone trained to work with financial market data, the insignificance of late 19th century to early 21st century warming is evident. Stochastic trends more impressive than this crop up all the time: just use a random number generator repeatedly to produce red noise series of 130 or so points and you can see for yourself.
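This is easy to try for yourself. Here is a minimal sketch in plain Python (the AR(1) coefficient of 0.9 and the 130-point length are illustrative choices, not values from any published analysis):

```python
import random

def red_noise(n, phi=0.9, seed=None):
    """AR(1) "red noise": x[t] = phi * x[t-1] + white noise."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        series.append(x)
    return series

def ols_slope(y):
    """Least-squares slope of y against its index 0..n-1."""
    n = len(y)
    tm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((t - tm) * (v - ym) for t, v in enumerate(y))
    den = sum((t - tm) ** 2 for t in range(n))
    return num / den

# Generate a handful of 130-point series with no trend built in, and look
# at the "trends" that appear purely by chance.
for seed in range(5):
    b = ols_slope(red_noise(130, seed=seed))
    print(f"seed {seed}: apparent trend = {b:+.4f} per step")
```

Even though no trend is built into the generator, individual series routinely show apparent trends of either sign; red noise wanders far more than white noise does.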
Douglas Keenan has done a great job in laying bare the flimsy nature of the IPCC argument on this. There is no reason to assume that the global temperature record is simply a first order autoregressive time series, and this assumption is key to their case for significance.
Other advocates of AGW orthodoxy such as Bart Verheggen make the even less supportable assumption that temperature can be modelled as a simple linear trend plus white noise - this was the approach savaged by VS a year or so back.
All this ought to be pretty embarrassing to the warmists, especially since Cohn and Lins drew attention to the crucial relationship between significance and the underlying statistical model in 2005 - http://water.usgs.gov/osw/pubs/Naturally_Trendy-Cohn-Lins_GRL_2005.pdf - and it was peer-reviewed.
If it's aimed at Joe & Joanna Public, then the concept of statistical significance wouldn't be understood by them.
Also, it'd be another case of "diluting the message"?
'We know what we know about the unknowns, but the unknown unknowns could have an unknown effect on these results.
But - hey - we want to show warming, so that's what these results show - alright..??'
I keep hoping that Simon Singh will one day respond to the comments on his blog, one of which was a reference to Doug's article. Being a mathematician, one would hope he would modify his views ... or doesn't it work like that?
Multiple choice facing SS:
(a) Skeptics are still a bunch of numpty deniers
(b) I have looked at all the comments and I am unpersuaded to modify my views.
(c) Thanks for all the comments which I am still digesting: I hope to reformulate my view on AGW in due course ...
(d) I am no longer interested in publishing my views on global warming.
The whole idea of a "global average temperature" is meaningless. Like a "global average phone number" - it's arithmetically possible but mathematically pointless.
I am perpetually puzzled by the error figures that are associated with temperature data.
I don't know what fancy accurate gizmos may be used in modern readings, but I expect that the older ones are alcohol or mercury based thermometers. In a perfect world, such a device would have a precision of somewhere 'round +/- 0.5 degrees C. The true accuracy would probably be +/-2 degrees or possibly much worse. (Feel free to correct me here).
From such data, how do we generate variation figures to 2 or sometimes 3 decimal places? I know that statistical models are applied, but how do the models make inaccurate data accurate?
With some types of data, you may assume that over a large number of readings the errors will be well distributed and so some kind of mean will be more accurate. That does not make sense for the temperature data. There is no reason to assume that the error will be well distributed, it may be systematic in one direction.
I would be grateful if somebody could enlighten me.
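The distinction being drawn here can be illustrated with a short simulation: averaging many readings shrinks independent, zero-mean errors, but does nothing to a shared systematic bias. (The true temperature, error size and bias below are all made-up illustrative numbers.)

```python
import random
import statistics

rng = random.Random(42)
true_temp = 15.0   # hypothetical "true" temperature
n = 10_000         # number of readings averaged

# Case 1: independent, zero-mean reading errors. Averaging helps:
# the standard error of the mean shrinks like 1/sqrt(n).
independent = [true_temp + rng.gauss(0, 0.5) for _ in range(n)]

# Case 2: the same random errors plus a shared systematic bias.
# No amount of averaging can remove the bias.
bias = 0.3
systematic = [true_temp + bias + rng.gauss(0, 0.5) for _ in range(n)]

print(f"independent errors: mean = {statistics.mean(independent):.3f}")
print(f"with 0.3 bias:      mean = {statistics.mean(systematic):.3f}")
```

The first average lands very close to 15.0 despite each reading being good only to about half a degree; the second stays stubbornly 0.3 degrees high no matter how many readings are taken.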
Even without knowing the statistical model, can't we say whether recent warming is unusual or not by comparing it to historical records? In other words, using Keenan's dice: if we have 1,000,000 throws with no trend one way or the other, and then suddenly see a large persistent trend over 100 throws which hasn't been observed in the previous 1,000,000 throws (not saying this is what we see in climate - in fact we see the opposite), doesn't that say at least something? I think this aspect is missing from Keenan's article, which only involves a few throws as I remember it (I haven't read the "director's cut").
The Central Limit Theorem says that the sampling distribution of the sample mean (or sum) will be normal, or nearly normal, assuming that the sample size is large enough.
Whether the sample mean will approximate the mean of the underlying distribution is another matter, and relies on the sample being truly representative. I think we all suspect that the sample being considered is no longer truly representative (consider how many stations have been dropped from the record), but the argument then becomes one of whether the sample still allows one to estimate the underlying trend.
This is presumably where the BEST project comes in.
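For what it's worth, the classical CLT behaviour is easy to demonstrate: sample means drawn from a strongly skewed distribution still cluster symmetrically around the true mean. (The exponential distribution and sample sizes below are arbitrary illustrative choices.)

```python
import random
import statistics

rng = random.Random(0)

# Exponential(1) is strongly right-skewed, with true mean 1.0.
# Take 2000 samples of size 50 and look at the distribution of their means.
true_mean = 1.0
sample_means = [statistics.mean(rng.expovariate(1.0) for _ in range(50))
                for _ in range(2000)]

print(f"mean of sample means:   {statistics.mean(sample_means):.3f}")
print(f"spread of sample means: {min(sample_means):.2f} "
      f"to {max(sample_means):.2f}")
```

The individual draws are wildly skewed, but the means of samples of 50 are tightly and nearly symmetrically bunched around 1.0; that is the CLT at work, and it says nothing about whether the sample itself is representative.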
Pax: In short, no, the essence of Doug's article is that you cannot say whether recent warming is unusual without proposing a model which explains historic temperatures.
So:
(1) Model historic temperatures accurately enough
(2) Calculate whether recent temperatures deviate significantly from the proposed model
This is part of the reason why they needed to get rid of the MWP and the LIA : they weren't able to incorporate this variability into their models.
So: get rid of historic variability.
Then propose a (relatively) simple model that explains unvarying historic temperatures.
Notice that recent temperatures are not explained by this simple model.
Claim that this must be due to human factors.
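The recipe above can be sketched in code. This is a deliberately simplified illustration with synthetic data, not anyone's actual method: fit the simplest possible model (a constant) to "historic" values, then ask how unusual "recent" values look under it.

```python
import random
import statistics

rng = random.Random(0)

# Synthetic stand-ins: "historic" temperatures as white noise about a
# constant, "recent" temperatures with a small linear drift added.
historic = [rng.gauss(0.0, 0.2) for _ in range(150)]
recent = [0.004 * t + rng.gauss(0.0, 0.2) for t in range(50)]

# Fit the simplest possible model (a constant) to the historic data,
# i.e. "get rid of historic variability".
mu = statistics.mean(historic)
sigma = statistics.stdev(historic)

# Recent values then deviate "significantly" from that model...
recent_mean = statistics.mean(recent)
z = (recent_mean - mu) / (sigma / len(recent) ** 0.5)
print(f"z-score of the recent mean under the flat model: {z:.1f}")

# ...but the significance is only as good as the flat-model assumption:
# if the real historic record varied (MWP, LIA), sigma is understated
# and the z-score overstates how unusual the recent data are.
```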
The RS excerpt has nothing to do with Doug Keenan's article. The RS makes the correct statement that the data shows that the world is warmer now than in 1850. The uncertainty of ±0.2°C is due to measurement errors. There can of course be many arguments about the validity of the data, and how big the measurement errors really are, but the RS are simply saying what the data show (at least in the excerpt you have shown). Doug used the same data in his article, so presumably agrees that the value in 2010 is higher than that in 1850. His argument is that the change we can see could have happened by chance (in this context this means without any intervention from man). The IPCC says that the data should follow an AR(1) model, and that therefore the increase we can see is impossible without additional forcing. Doug says this AR(1) assumption is wrong. It is an argument about which model is appropriate for the observed changes, not about whether the observed changes are real.
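The point that the argument is about which null model is appropriate can be made concrete with a small simulation: under an AR(1) null, the range of trends that arise by pure chance is far wider than under a white-noise null, so the same observed trend can be judged significant under one and unremarkable under the other. (The 0.9 coefficient, series length and replication count are illustrative assumptions.)

```python
import random

def ols_slope(y):
    """Least-squares slope of y against its index 0..n-1."""
    n = len(y)
    tm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((t - tm) * (v - ym) for t, v in enumerate(y))
    den = sum((t - tm) ** 2 for t in range(n))
    return num / den

def null_slope_97_5(phi, n=130, reps=400, seed=0):
    """~97.5th percentile of chance trends under an AR(1) null with
    autocorrelation phi (phi=0 is plain white noise)."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(reps):
        x, series = 0.0, []
        for _ in range(n):
            x = phi * x + rng.gauss(0, 1)
            series.append(x)
        slopes.append(ols_slope(series))
    slopes.sort()
    return slopes[int(0.975 * reps)]

wn = null_slope_97_5(phi=0.0)
ar = null_slope_97_5(phi=0.9)
print(f"97.5th percentile chance slope, white noise: {wn:.4f}")
print(f"97.5th percentile chance slope, AR(1) 0.9:   {ar:.4f}")
# An observed slope between these two thresholds would be "significant"
# under the white-noise null and unremarkable under the AR(1) null.
```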
Snowrunner
I agree - my posting raised two issues though - why the RS didn't mention statistical significance and what the magnitude of the error bars is.
The RS continues to be an embarrassment; their dependence on the degree of accuracy of the various thermometer types commonly used during any given historical period is misplaced. Their trust in the network of thermometers is also misplaced, as many have vanished, many have been 'moved', and vast areas of the atmosphere immediately above the world's oceans were only measured haphazardly by passing ships until the advent of the satellite era.
I am not a statistician and thus have great problems believing the notion that a huge amount of dubious temperature measurements somehow cancels out errors.
I agree with Jack Hughes - the concept of 'global average temperature' is devoid of relevance to anything and is therefore meaningless.
I found the RS paper quite fascinating. There are a few places, like in the introduction, where they present the party line quite clearly, without any hint of doubt.
They then go on to present a lot of detail about climate science, and I think there is very little in there that many of us would disagree with at all.
Then a bit of scaremongering towards the end, but without presenting any evidence at all that the scare is reasonable, apart from a sprinkling of precautionary principle.
Very odd. And as DJK has already noted, this paper could almost still stand as it is in a post-CAGW world as a "see, we told you so" statement.
No mention of land use changes except in the introduction, where it could be read as being presented as an alternative explanation to GHG warming.
And Fiona on the panel, apparently unable to inject any hysteria.
Well, the RS do give the uncertainty of ±0.2°C in the change from 1850 until now, though admittedly they don't say what this really means (is it a 95% confidence interval? 90%? Something else entirely?). I don't think adding statistical significance to this would help, it is certainly very highly significant even if the ±0.2°C is only a 90% interval. Hardly anyone in the target audience understands p values anyway.
I don't think there is anything controversial in this part of the paper, or in fact in much of the rest of it. The controversy is mostly in three paragraphs, 37 and 38 in the section "attribution of climate change" and 57 in the conclusion. This is where they say that there is "strong evidence that changes in greenhouse gas concentrations due to human activity are the dominant cause of the global warming that has taken place over the last half century", and that this evidence is because the models don't match the observed trend without including CO2. This is of course where the whole edifice collapses.
The fact that casting CO2 as the villain of the piece happens to be extremely convenient for those (I shan't name any names) who would much prefer that we didn't use carbon as a fuel in any form only makes me the more suspicious.
I thought organisations like the Royal Society were supposed to protect us from snake-oil salesmen.
It would be interesting to know how that document would have read without the involvement of the second and third names on the working group, Fiona Fox and the Director of the Grantham Institute.
Also, why does a "scientific" organisation require policy and strategy advisors to write a scientific report? Is this not an admission of trying to stay on message with PR spin and propaganda? What additional purpose did Ms Fox's presence serve?
This is what you get at the end of a revolution: self serving pap.
This RS report is no less worthless than Trenberth and his inverted null hypothesis.
golf charley,
Here is an analogy that you might find useful.
This is like at the end of the Roman period when suddenly Christians were rewriting things to claim that the only way to be moral was through being Christian. Imagine being a virtuous pagan and finding out that your life had been redefined to make you less than good, while your actions had not changed.
AGW is that new religion.
I still want to know what caused the MWP, the LIA and the Great Depression era warming, as those don't seem to correlate well with CO2 as the dominant mechanism. There are many natural climate cycles, and global warming could just be the peaks of a few of those coinciding, with a sprinkling of CO2 to provide a small radiative feedback. It'd be nice to see a cycle tracker showing known cycles, their confidence levels, waveform and magnitude with the option to overlay.
Atomic:
http://avaxhome.ws/video/cloud_mystery_svensmark_climate_change_rapidshare_climategate.html
matthu
But it must have been rising CO2 levels that made the sun "feeble"!
Re matthu
Yes, that's one of the cycles I'm thinking of, and I'm looking forward to seeing what comes out of the CLOUD experiments. I'm not entirely convinced that on their own they're enough to explain everything, but given we've also had a declining magnetosphere, land use changes, aerosol emissions and CO2 changes, it may fill in some of the blanks in the energy budget. Naturally Svensmark's work gets denounced because it may knock CO2 off its pedestal, and CO2 is so good for advocates pushing policy. But policy isn't science.
Even if the GCR theory "only" explains e.g. 1/3 of the warming, what it does do is expose the lie that "it must be CO2 because otherwise we can't explain it", by demonstrating that scientists simply did not include enough independent variables in their model.
This is the part that has always caught my interest...
Measurements from the surface, research aircraft and satellites, together with laboratory observations and calculations, show that, in addition to clouds, the two gases making the largest contribution to the greenhouse effect are water vapour followed by carbon dioxide (CO2).
So they mention measurements, lab observations and calculations... what measurement do you make that shows that the warming you see is occurring because of the effect of CO2?
Let's assume that I put a thermometer in the back yard. The thermometer only measures "raw" temperature. What instrument should I add to determine what part of that number is due to CO2? This is the part that has always baffled me...
@WillR - I understand the term "calculations" to be a coy term for computer models...
re WillR
A spectrometer to measure the precise amount of radiation reflected back that could be attributed to any increase in CO2. Two pyranometers or pyrgeometers, one looking up, one looking down, to measure solar radiation and reflected radiation. Something else to figure out if it's water vapour or CO2 :)
@ golf charley,
"Here is an analogy that you might find useful.
This is like at the end of the Roman period when suddenly Christians were rewriting things to claim that the only way to be moral was through being Christian. Imagine being a virtuous pagan and finding out that your life had been redefined to make you less than good, while your actions had not changed."
That is an utterly useless and historically inaccurate analogy. The key doctrine of Christianity is that nobody can measure up to God's standards and therefore we all need his grace and forgiveness.
I hope that you do not misrepresent the arguments of those you disagree with on the subject of global warming in the same way that you very confidently and ignorantly misrepresent Christianity.
I've played around with random number generators somewhat, having researched and written my own. It seems to me that an average increase of 0.8 over 161 years would be within a random generation bias toward one end of the value spectrum.
Running sets of random numbers, particularly small runs, creates some amazingly non-uniform randomness. It takes real effort in random number generation to create "uniform" randomness. Which is itself not consistent with natural randomness, as natural randomness is in fact clumpy (and for this statement, I have recorded long processions of 6-sided and 20-sided dice rolls - no longer held by me).
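The "clumpiness" of genuine randomness is easy to demonstrate: fair dice throws routinely produce streaks of the same face. A quick illustrative sketch:

```python
import random

def longest_run(rolls):
    """Length of the longest streak of identical consecutive values."""
    best = cur = 1
    for a, b in zip(rolls, rolls[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(1)
rolls = [rng.randint(1, 6) for _ in range(1000)]
print(f"longest same-face streak in 1000 fair d6 rolls: {longest_run(rolls)}")
# A truly random sequence is expected to contain streaks; a sequence with
# no repeats at all would itself be evidence of non-randomness.
```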
The system got whacked by a large perturbation called the end of the last ice age. I find it entirely unsurprising that all sorts of physical variables in the climate system now exhibit periodic, quasi periodic and aperiodic oscillations so far during this short interglacial period. There's no equilibrium here. You hence don't need to find specific causes like CO2 or cloud cover changes to explain a small temperature rise over a couple of hundred years.
I've been thinking;
The long term temperature variations kinda sorta look like Perlin Noise randomness. That is, compounded randomness at various scales.
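"Compounded randomness at various scales" can be sketched in a few lines: sum several smooth random layers, halving both the wavelength and the amplitude at each octave. This is a toy value-noise construction, not Perlin's actual gradient-noise algorithm, but it produces the same qualitative look of slow wander plus fast jitter:

```python
import math
import random

def value_noise(n, scale, amplitude, rng):
    """Smooth random curve: random anchor points every `scale` steps,
    cosine-interpolated between them."""
    anchors = [rng.uniform(-amplitude, amplitude)
               for _ in range(n // scale + 2)]
    out = []
    for t in range(n):
        i, frac = divmod(t, scale)
        w = (1 - math.cos(math.pi * frac / scale)) / 2  # smooth 0..1 blend
        out.append(anchors[i] * (1 - w) + anchors[i + 1] * w)
    return out

def fractal_noise(n, octaves=4, seed=0):
    """Sum several noise layers, halving scale and amplitude each octave."""
    rng = random.Random(seed)
    total = [0.0] * n
    scale, amp = n // 2, 1.0
    for _ in range(octaves):
        layer = value_noise(n, max(scale, 1), amp, rng)
        total = [a + b for a, b in zip(total, layer)]
        scale //= 2
        amp /= 2
    return total

series = fractal_noise(256)
print(f"range of 256-point fractal noise: "
      f"{min(series):.2f} to {max(series):.2f}")
```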
Never mind the uncertainty: is the 0.15°C figure even correct?
No, it is in fact lower than the actual calculated figures for surface temps. It is even lower than the satellite calculations.
So where does the 0.15°C figure come from?