Friday, Sep 27, 2013

Keenan writes to Slingo

Doug Keenan has just written to Julia Slingo about a problem with the Fifth Assessment Report (see here for context).

Dear Julia,

The IPCC’s AR5 WGI Summary for Policymakers includes the following statement.

The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C, over the period 1880–2012….

(The numbers in brackets indicate 90%-confidence intervals.)  The statement is near the beginning of the first section after the Introduction; as such, it is especially prominent.

The confidence intervals are derived from a statistical model that comprises a straight line with AR(1) noise.  As per your paper “Statistical models and the global temperature record” (May 2013), that statistical model is insupportable, and the confidence intervals should be much wider—perhaps even wide enough to include 0°C.

It would seem to be an important part of the duty of the Chief Scientist of the Met Office to publicly inform UK policymakers that the statement is untenable and the truth is less alarming.  I ask if you will be fulfilling that duty, and if not, why not.

Sincerely, Doug

 

This seems quite important to me.


Reader Comments (73)

Does anyone actually think we shouldn't be able to rule out the possibility of no warming in the last 100+ years?


Sep 28, 2013 at 10:15 AM | Unregistered Commenter Brandon Shollenberger

On the basis of simplified statistical models alone, yes.

Sep 28, 2013 at 11:33 AM | Registered Commenter Martin A

This is a repost from Jo Nova's blog:

Keenan claims, with respect to the 90% confidence limits for warming over the period 1880 to 2012 (0.85 [0.65 to 1.06] °C, or 0.064 [0.049 to 0.080] °C/decade), that:

“the confidence intervals should be much wider—perhaps even wide enough to include 0°C.”

Keenan supplies no reasons whatsoever for this claim or what the level should be.

Statistical significance is usually taken to mean a 95% (2 sigma) confidence interval. This is the level that skeptics wish to use when claiming that the warming trend over the last 17 years (93% confidence for HadCRUT4) is not “statistically significant”. It would be statistically significant at the 90% confidence level.

Now, to avoid any accusations of cherry-picking, I will take the dataset (HadCRUT4) with the lowest trend and the highest 2 sigma level of the global land/sea datasets available here for 1880-2012.

http://www.skepticalscience.com/trend.php

Trend: 0.062 ±0.008 °C/decade, or in the form of the report:
0.06 [0.05 to 0.07] °C/decade

Which for 132 years is

0.82 [0.71 to 0.92] °C

The result is nowhere near including zero.
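
(For anyone who wants to check the conversion, a one-line sketch in R; 132 years is 13.2 decades:)

13.2 * c(0.062, 0.062 - 0.008, 0.062 + 0.008)   # 0.82, 0.71, 0.92 °C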

On Friday Mr Bolt claimed that the report supported a pause in warming. It does not.

It reports a drop in the rate of warming since 1998 to 0.05 [–0.05 to +0.15] °C per decade

Again I must point out that statistical significance cuts both ways.

The data to a 90% confidence level does not support a statistically significant cooling or flat trend (pause), because part of the range (in fact most of it) is in the warming region.

Nor does it for 95% limits (again HadCRUT4):

0.052 ±0.155 °C/decade, or
0.05 ±0.16 [-0.10 to 0.21] °C/decade

Furthermore as the ranges overlap, there is no statistically significant drop in the warming rate compared to the entire 1880-2012 period (see data above) or the period from 1951 to 2012:

0.12 [0.08 to 0.14] °C/decade (90%)

0.11 [0.09 to 0.13] °C/decade (95%) (HadCRUT4)

That is the problem with choosing short data sets of less than a couple of decades. They really tell you very little.


--------------------------------------------------------------------------------

Sep 28, 2013 at 2:12 PM | Unregistered Commenter Philip Shehan

Brandon,

"I posted this over at WUWT:"

And I posted this in reply:

Depends what you mean by “correct”.

My view is that all this talk about whether changes in temperature are “significant” or not is meaningless without a validated statistical model of ‘signal’ and ‘noise’ derived independently of the data, which we don’t have. We don’t know the statistical characteristics of the normal background variability precisely enough, so it is simply impossible to separate any ‘global warming signal’ from it. All these attempts where you make nice neat mathematical assumptions simply get out what you put in, and your conclusion depends on what you assumed. If you assume a trend, you’ll find a trend. If you assume no trend, you’ll find there’s no trend. Doug’s ARIMA(3,1,0) is merely a standard example derived by the textbook method to illustrate that point.

But it’s got no independent validation, either, so it’s no more “correct” than anything else we could do. It’s simply a better fit.

There are no correct error margins because we don’t have an independent, validated model of the errors. We cannot rule out, by purely statistical means, the possibility of no warming in the last 100+ years. And the IPCC’s confidence intervals are just the same sort of significance testing in disguise.

However, I don’t expect the mainstream is ready to accept that one, so I’ll let it pass. That you accept that linear+AR(1) is “lacking” is a good start, and sufficient for the time being.

---

Philip,

"Keenan supplies no reasons whatsoever for this claim or what the level should be."

The reason is that using a more general model broadens the confidence bounds. Since the data fits a trendless ARIMA(3,1,0) model pretty well (for which the trend is zero by definition), the claim is certainly plausible. The reason he didn't say what the level should be is that there's no way of telling. We don't have a validated model of the natural variability - not even of what form it should have. Without any objective constraints, you could use anything, and get any answer. It's like having a maths question that reads simply: "The variable x satisfies an equation. Determine the value of x."
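
A rough sketch of that comparison in R, for anyone who wants to try it (this assumes "temp" is an annual global-mean anomaly series such as HadCRUT4 1880-2012; the name is just a placeholder, not a real dataset object):

t  <- seq_along(temp)
xr <- cbind(trend = t)
fit_line_ar1 <- arima(temp, order = c(1, 0, 0), xreg = xr)  # straight line + AR(1) noise
fit_310      <- arima(temp, order = c(3, 1, 0))             # driftless ARIMA(3,1,0), no trend
c(AIC(fit_line_ar1), AIC(fit_310))                          # Keenan reports the second fits much better

Whether the two likelihoods are strictly comparable is itself part of the wider argument, but it shows how easily the "best" model, and hence the width of the intervals, changes with the assumptions.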

"Trend: 0.062 ±0.008 °C/decade"

And what error model are you assuming, there?

--

There is a problem with computer statistics software - one I first saw identified by Deep Thought in The Hitch Hiker's Guide To The Universe. It's very easy to tell the computer to tell you what the answer is, but it's not much use if you don't really understand the question.

In this case, you've told it to calculate a trendline using Ordinary Least Squares (OLS), which implicitly assumes additive, independently and identically distributed, zero-mean Gaussian errors about a (usually) linear trend. The problem is, that assumption is wrong - the errors are neither independent nor identically distributed. It's even more wrong than what the IPCC did, which did at least drop the independence assumption. This is the point Doug is trying to make. The answer you get depends critically on the statistical model you assumed, and if you pick one that doesn't fit the data, you'll get the wrong answer.
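
To see the effect in R, here is a minimal sketch (again assuming a placeholder annual anomaly series called "temp"): the naive OLS interval versus the same straight line fitted with AR(1) errors.

t   <- seq_along(temp)
ols <- lm(temp ~ t)
confint(ols, level = 0.90)["t", ]                   # OLS 90% interval: assumes i.i.d. errors
xr  <- cbind(trend = t)
gls <- arima(temp, order = c(1, 0, 0), xreg = xr)   # same line, AR(1) errors
se  <- sqrt(gls$var.coef["trend", "trend"])
gls$coef["trend"] + c(-1, 1) * 1.645 * se           # 90% interval: noticeably wider

Move to a more general error model and the interval widens again, which is the whole point of Doug's argument.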

--

It's all the fault of the exercises in school textbooks, I reckon. What they do is to give you a theorem, setting out the caveats, assumptions, and preconditions, and then the conclusion you can draw from it. They give examples and methods based on applying the theorem. And then they give an exercise that - lo and behold - just so happens to satisfy all the preconditions so you can apply the method. This conditions people to assume that the textbook methods always work. They don't bother to check, and therefore do not remember, the preconditions - because they're always fulfilled. It would be an especially sneaky textbook that threw in some exercises where the method you'd just been taught didn't work, and no doubt highly unpopular with students. The same applies double to the exams. However, the real world does not work like a textbook/exam. Nature is sneaky.

Sep 28, 2013 at 3:23 PM | Unregistered Commenter Nullius in Verba

Darn! That should have been "Galaxy", obviously.

Sep 28, 2013 at 3:28 PM | Unregistered Commenter Nullius in Verba

"Can Lord Lawson pony up Doug's question to the Met O?
Sep 28, 2013 at 1:32 AM ATheoK

Is Lord Donahue on the case?

Sep 28, 2013 at 8:47 AM | Registered Commenter Martin A."

Many thanks for the correction Martin! I could claim any number of reasons, e.g. late night, bourbon, senile; but the truth is worse, I screwed up and failed to double check. Thanks for the reminder that I need to always check my memory and more importantly to correct my error.

Lord Donahue, my apologies!

Sep 28, 2013 at 3:35 PM | Unregistered Commenter ATheoK

You can get anything you want at the end of the universe.
===================

Sep 28, 2013 at 3:35 PM | Unregistered Commenter kim

My view is that all this talk about whether changes in temperature are “significant” or not is meaningless without a validated statistical model of ‘signal’ and ‘noise’ derived independently of the data, which we don’t have.

Sep 28, 2013 at 3:23 PM Nullius in Verba

You are modest to state that it is simply your view. To me it seems absolutely glaringly obvious that doing statistical analysis in the absence of a validated model and pretending the result has any sort of meaning is nothing more than bullshit in fancy dress.

Sep 28, 2013 at 3:39 PM | Registered Commenter Martin A

"...This may well be true if those activities include human 're-calibration' of the temperature records :-)"

At last, the truth is out. Recalibration of the record (aka fiddling the figures) is the science behind the IPCC statement.

Sep 28, 2013 at 4:05 PM | Unregistered Commenter David Chappell

Thank you Nullius in Verba.

Could you expand a little on why the errors are not independent or evenly distributed? This may sound naive, but is not the hypothesis that with additional CO2 the temperature will increase on that of the previous measurement period (say, monthly), so that the data are not independent in that sense? Of course the trend from month to month will be buried in the noise but appear over the longer term. Is not a simple linear regression analysis valid? As I note, that certainly appears to be the assumption used by skeptics and adherents of AGW alike in the temperature trend analyses I have seen.

Sep 28, 2013 at 5:02 PM | Unregistered Commenter Philip Shehan

Dear Doug,

It does not matter if it is right or wrong, as it is a political document and not a scientific document. So please don't interfere with inferior arguments about the truth. And Connie says that it does not matter if catastrophic anthropogenic global warming is real or not, because the consequent policies are right. Wind turbines are good. Solar power is good. Green energy is good. Why won't you people just get it?

Sincerely yours, Julia
Greenpeace, no, I mean Met Office.

Sep 28, 2013 at 5:09 PM | Unregistered Commenter Troels Halken

In other words, without having a full understanding of the extent of internal natural variability, no-one can say definitively what is a significant variance.

Sep 28, 2013 at 5:56 PM | Unregistered Commenter Dolphinhead

"Could you expand a little on why the errors are not independent or evenly distributed?"

Certainly.

There are a number of major reasons why the temperature series is not independent. One is the thermal heat capacity of the top few hundred metres of the ocean. It takes a (relatively) long time for it to warm up or cool down. A second reason is that heat is cumulative. Weather tends to directly affect the rate of heating: for example, a long run of sunny days gradually warms the oceans, while a long run of cloudy ones gradually cools them. If you suppose the cloudiness is a random series, the temperature tends to be related to its integral. (Not quite, since there are other feedbacks that keep it within bounds.) And thirdly, there are longer-term quasi-random variations and oscillations. Weather systems are often big enough and coherent enough to last for days, or weeks. There are other features that vary on timescales of months (e.g. Rossby waves), years (e.g. ENSO), decades to centuries (AMO and PDO), centuries to millennia (Bond interstadials, Dansgaard-Oeschger events) and multi-millennia (ice ages). In local records, like the Central England Temperature series (HadCET) that goes back to 1659, we see several medium-term variations extending over decades that are as big as the modern one (e.g. 1680-1730). Weather is chaotic and non-linear, and it has natural variations at every frequency; there's no magic cut-off at 30 years. But our data on most of the longer periods is spotty, and we have only a very poor understanding of their magnitude and inter-relationships.

The easiest and most educational way to make the point, though, is to show the difference graphically. Get a computer program to generate a series of Normally distributed random numbers, and plot them. You will see a fairly uniform fuzzy cloud. Then introduce some autocorrelation - the simple AR(1) process is very useful for this purpose - and see how the cloud first becomes speckled, then "clumpy", then turns into a line wiggling irregularly up and down, and eventually the wiggles expand until you get the appearance of systematic trends up and down. This final picture is very recognisable - it looks a lot like climate data!

(Note, AR(1) is the easiest to play with, but don't let this mislead you into thinking it's the only possibility. More advanced tests demonstrate that temperature is definitely not AR(1).)

So for example if you use R, you can try something like:
plot(arima.sim(n=1000,model=list(ar=0.999)))
and change the 0.999 to numbers in the range 0 to 1. Using 0 gives independent errors. Using numbers close to 1 gives strongly autocorrelated errors illustrating spurious trends. (They're spurious because the mean of the data distribution for every time step is zero. There is no trend.) The big problem is how you distinguish these spurious trends from actual ones.
(If you've got Excel or some other package, there are ways to generate the same sort of example.)
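
For anyone without arima.sim, here is the same experiment written out by hand in a few lines of base R, which also makes the recursion explicit (the 0.999 is the same knob to turn):

set.seed(1)          # arbitrary seed, just so the picture is reproducible
n   <- 1000
phi <- 0.999         # 0 gives independent noise; values near 1 give strong autocorrelation
e   <- rnorm(n)      # i.i.d. zero-mean Gaussian shocks
x   <- numeric(n)
for (i in 2:n) x[i] <- phi * x[i - 1] + e[i]
plot(x, type = "l")  # spurious-looking "trends" appear as phi approaches 1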

The reason that the errors are not evenly distributed is that the sensor network changed over time. Back in 1880 there were large parts of the world where no temperature data was collected - where people did not often go. So the global temperature is not very accurate. As time passed, there were more and more sensors covering more of the world, so the accuracy increased. This means the errors in individual annual averages vary over time. There's a modified method for calculating trends with varying errors (weighted least squares), but the default algorithm assumes all the errors are the same.
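
That weighted fix looks something like this in R (both "temp" and the per-year uncertainty vector "sigma" are placeholder names, for illustration only):

t   <- seq_along(temp)
wls <- lm(temp ~ t, weights = 1 / sigma^2)   # early, sparsely-sampled years get less weight
confint(wls, level = 0.90)["t", ]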

Sep 28, 2013 at 7:38 PM | Unregistered Commenter Nullius in Verba

@Martin (3:39) "bullshit in fancy dress"

Got a smile from me. Such a relief from the overworked "nonsense on stilts". But short shelf life like all its kind. Dispose after use.

Sep 28, 2013 at 8:19 PM | Unregistered Commenter simon abingdon

Thank you Nullius in Verba

Sep 28, 2013 at 9:45 PM | Unregistered Commenter Philip Shehan

The Met Office needs a great deal of help if it is ever to pull out of the dive it has gotten into over climate alarm. Doug Keenan is being very public-spirited in providing encouragement and guidance for them. If a government agency could be equipped with auto-pilot kit, it would surely be blaring 'Pull-up!' 'Pull-up!' They don't have it, and so crash and burn may be their fate, but at least Doug tried to save them.

Sep 28, 2013 at 11:36 PM | Registered Commenter John Shade

Just adding a reference to Professor Slingo's letter that Geoff Shorten posted.

http://www.bdlive.co.za/opinion/letters/2013/09/27/letter-the-world-is-changing

I have requested from the UK Met Office references to the fundamental physics she claims: "The fundamental physics is clear; CO2 traps radiation and warms the planet."

I mean, if there was fundamental physics then presumably the IPCC would not be needed, or they would have to conclude that AGW is 100% due to man's CO2 emissions. Why is there 5% uncertainty if it's fundamental physics?

Sep 28, 2013 at 11:55 PM | Unregistered Commenter climatebeagle

"I mean, if there was fundamental physics then presumably the IPCC would not be needed or they would have to conclude that AGW is 100% due to man's CO2 emissions, why is there 5% uncertainty if it's fundamental physics?"

Because while that part is true, there's other stuff going on.

It's like saying that "the refrigerator door traps heat outside" is fundamental physics, but you're not sure whether the reason all your food is defrosting is because somebody opened the door, or because there was a power cut, or because somebody changed the thermostat setting. The refrigerator door is certainly going to have an effect, but how much?

Anyway, reference to the physics....

Sep 29, 2013 at 9:52 AM | Unregistered Commenter Nullius in Verba

Thanks, Nullius in Verba.

Slingo: "CO² traps radiation and warms the planet"
NiV: "Because while that part is true, there's other stuff going on."

That's my point: there is other stuff going on, so you cannot make the jump from "CO2 has certain absorption properties" to "the planet will warm" and claim it's fundamental physics.

So is Slingo saying:

A) The addition of CO2 will always lead to a warmer planet.
B) The addition of CO2 will warm the planet if that other stuff doesn't change?

It reads as though it's A), which seems to be untrue given the pause. To me, B) is meaningless, as it's a complex chaotic system, so you can't work on the premise that everything else will be unchanged. At least not for setting policy.

How would most people read Slingo's statement? I would guess as A).

Sep 29, 2013 at 11:12 PM | Unregistered Commenter climatebeagle

Nullius in Verba, meant to submit this supplementary question earlier.

Firstly I have held you up as an example to some people at Jo Nova and Andrew Bolt on how to effectively argue a skeptical case:

"I reposted this over at Bishop Hill and received what appeared to be a very knowledgeable and polite counterargument from Nullius in Verba. When I receive such replies I take respectful notice and have asked him for more detail of his point of view which he again politely provided. I have not finished with my questions to him."

I will explain a little about my background. I trained initially in the physical sciences (PhD in NMR spectroscopy) so had some sympathy with Rutherford: “If your experiment needs statistics, you ought to have done a better experiment”. I later found myself in biomedical research using applications of the technique and discovered that when you are dealing with messy, complicated living systems rather than simple molecules statistics are required.

I am aware that many scientists are unhappy with simple Fisherian statistics with somewhat arbitrary cut-off points like 2 sigma (95%). I have some familiarity with Bayesian analyses which take into account the history of the data, so to speak. Do you have any thoughts on whether such analyses would be more suitable to the problems you have outlined?

Sep 30, 2013 at 8:16 AM | Unregistered Commenter Philip Shehan

climatebeagle,

"So is Slingo saying:..."

There is a very general human tendency not to check the logic of arguments so thoroughly when their conclusions match what you expected. It's called confirmation bias - and it applies to everyone. I doubt Julia is making such precise distinctions in her thinking.

The usual mainstream argument is that the average temperature is the result of forcings, feedbacks, and 'noise'. Increasing CO2 increases the forcing enough to cause warming, which is currently small but will accelerate later; the current best estimate of the feedbacks boosts it to a level experts have decided is dangerous; and the noise is poorly quantified but thought to be small compared to the anticipated warming. So while natural variation might mask it now, it couldn't possibly mask the warming expected in the future. That the natural variation seems to be a bit bigger than was thought, and the resulting pause politically extremely inconvenient, doesn't really change their conclusion about the endgame.

You could summarise that as saying: "C) The addition of enough CO2 will always lead to a warmer planet eventually."

Philip,

Thanks! While I do this purely for my own entertainment, and mental exercise, it's always nice to hear from somebody that has found it thought-provoking. My hope in posting arguments and thoughts is for people with different viewpoints to point out where I've missed stuff, or made mistakes. Everyone has their blindspots, but if we talk to people with different blindspots, we can see more. That doesn't mean I give up easily, of course, but since I see it as a mutually advantageous exercise, I'm more inclined than many to take robust disagreement in good humour. I've also found politeness to be far more effective - conflict and insult entrench positions, while remaining polite in the face of rudeness gives you the moral high ground. The technique can be very persuasive - and I encourage people on all sides to take advantage of it.

The argument between the Bayesians and Frequentists does tend to go a bit over the top, sometimes. A lot of it is due more to misunderstandings than to genuine philosophical differences, although there are those, too. For example, people commonly misunderstand Fisher's purpose for the 95% hypothesis tests - the aim was not to come to a conclusion about what was true; it was to provide a filter on what was strong enough evidence to be worth paying attention to.

The point is most easily understood from the Bayesian perspective. Your confidence in a claim can be represented on a sort of linear scale by calculating the log-odds: your confidence in X is represented by the quantity log(P(X)/(1-P(X))). I call it 'linear' because it has the property that the outcome of an (independent) experiment adds or subtracts a fixed amount to your confidence (the log-likelihood ratio of the evidence), the amount depending only on the properties of the experiment.

So you have an initial position on the scale, your prior confidence in the claim, and you do the experiment which jumps you to the left or right by some set amount, depending on the outcome. Where you end up is your confidence in the claim after having done the experiment.
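
As a toy illustration of that update, in R (all the numbers are invented purely for the example):

logit <- function(p) log(p / (1 - p))
prior <- logit(0.2)              # start well to the left of zero: prior P(X) = 0.2
bayes_factor <- 10               # the outcome is ten times likelier if X is true
posterior <- prior + log(bayes_factor)
1 / (1 + exp(-posterior))        # back to a probability: about 0.71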

But that's not a fixed quantity, because it depends where you start! And there's nothing in either framework to tell you where you should start from. Frequentists say it's subjective and not a scientifically meaningful quantity. Bayesians say we only have access to subjective experience, so everything is subjective, and you start from what you know.

Now the point of the 95% significance test is not to say where you end up, but to say how big the jump is. Evidence that passes the test is saying: this is a big jump, one that pushes you along the line a fair way, and so is worth paying attention to. It ought to modify your views by a significant amount. But that does not mean you therefore ought to conclude it is true, because if your starting point was way to the left of zero, then you're probably still to the left, only not so much. That is, you still think it's false, but perhaps a little less strongly.

The 95% limit is simply meant to keep weak results that scarcely shift your confidence out of the journals. It's a filter on importance, for directing one's attention, for accepting for publication to one's peers for further checking - it is not, and never was, a threshold for the general scientific acceptance of a conclusion. At the least, you'd want a lot more than 95% confidence for something to be worthy of the label "Science", meaning a published result really needs to be replicated several times independently before it is 'accepted'. And if it's something surprising, with a low prior confidence, even more than that.

The reason for setting such strict standards is that a lot of science depends on long chains of reasoning. If each step only has an 80% chance of being right, you can only link four steps together before 0.8^4 is less than 50%, meaning the chain is as likely wrong as not. 95% confidence in each step only gets you to 13 steps. 99% confidence lets you chain together 69 steps before that's true. Setting strict standards allows longer and more complicated arguments to work reliably.
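
The arithmetic behind those step counts, for anyone who wants to check it:

0.80^4    # ~0.41 - four 80% steps and the chain is more likely wrong than right
0.95^13   # ~0.51 - thirteen 95% steps, just above even odds
0.99^69   # ~0.50 - sixty-nine 99% steps, right at even odds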

And a lot of scientific arguments involve a heck of a lot more than 69 logical steps.

So yes, 95% is not a magic number that makes it 'Science'.

That said, I think the problem with the IPCC's reasoning is tangential to that. The problem is that in calculating the size of the step that a piece of evidence gives you, you have to know something about what each hypothesis predicts about the outcomes, which means you need an accurate model of the errors. In general, you always have an additional hypothesis - usually impossible to quantify - that your model is wrong. Validation is about setting bounds on this other hypothesis.

The IPCC's problem is that they don't have a good handle on this probability - partly because they have tried to play the issue down for the sake of getting political action. You could call it the "The Science is Settled" hypothesis. It isn't.

And I think that's a big problem because the possibility of dangerous global warming hasn't been disproved. And while I think it's highly unlikely, and certainly not enough to justify major economic dislocation at this point, it's a serious enough issue to be worth looking at seriously. And they're not. Instead of getting some proper engineering discipline into it, we've still got a bunch of absent-minded third-rate academics haphazardly throwing it together, and bluffing the stats. And playing silly games, like not sharing data until they've got some papers out of it, or hiding "dirty laundry", or making up numbers rather than admit that their database is corrupted. And this is the end of the world they're talking about?!

I don't believe global warming is a problem, and even I regard their actions as grossly irresponsible, considering the gravity of the issue. People who do believe it's a problem ought to be hitting the roof over their behaviour! But nobody wants to know. It's not about whether it's the right answer, it's about whether it convinces people, and they'd rather be wrong than "give ammunition to the sceptics".
(And granted, some of the more aggressive sceptics don't help.)

It's human nature, I'm afraid, and too late to do anything about it now. The bridges have all been burnt. Matters will play out as they will.

Sep 30, 2013 at 8:18 PM | Unregistered Commenter Nullius in Verba

NiV

great post

please stick around

science needs you

Sep 30, 2013 at 9:01 PM | Unregistered Commenter Dolphinhead

I am very grateful for the detailed and insightful comments by Nullius in Verba. Perhaps it would be good to take the physics-related remarks linked from the comment on Sep 29 at 9:52 AM and turn them into a guest blog post?

Slingo has not replied to my message. Lord Donoughue has contacted me and asked for assistance in drafting a relevant Parliamentary Question, to get a reply.

Oct 1, 2013 at 1:42 PM | Unregistered Commenter Douglas J. Keenan

Thanks, Doug, and you're very welcome.

Anyone is welcome, too, to copy/use the Climate Etc comment I did, although I'm sure we've discussed it at BH in the past. It's always had a bit of a mixed reception, even from sceptics.

I don't know, but I would guess the Met Office's (and the IPCC's) response to this would be to say that the trend method wasn't intended as any sort of objective estimate of anything; it was just an arbitrary informal low-pass filter selected for presentational purposes, and the confidence intervals likewise. It was merely unfortunate that, in the context in which it was presented, it could be misunderstood as saying something more. Or, to put it more simply: "I didn't mean it like that."

You might be better off getting a generic disagreement with the technique, rather than an explicit criticism of the IPCC. Does the Met Office consider it appropriate to report temperature rise confidence intervals based on a trend+AR(1) model when it is known this model is wrong? Does the Met Office believe that such a 90% confidence interval means that it covers the estimated quantity with 90% probability?

But I wouldn't bet money on them not finding a way to wriggle out of it.

Oct 2, 2013 at 7:03 AM | Unregistered Commenter Nullius in Verba
