Monday, Jan 27, 2014

Walport's reverse thinking

Hidden behind the Times paywall, I gather that Sir Mark Walport is being rude:

Climate sceptics should stop attacking the science of global warming and have a “grown-up” debate, the Government’s most senior scientist has said.

Sir Mark Walport accused climate sceptics of questioning the scientific evidence in order to dodge the more challenging question of what to do about it.

OK, so let me get this right. The world hasn't warmed for 17 years or so. Climate scientists can only hypothesise as to the reasons why. We can't detect any significant changes in the surface temperature record. The evidence about climate sensitivity is that it's much lower than we had been led to believe (but the IPCC obfuscated the issue).

And Sir Mark thinks we are wrong to discuss the science?!

What does this tell you about our chief scientific adviser?


Reader Comments (159)

Martin A

By the way, Tamino chose a 1979 start date "so we can include satellite data for the lower troposphere".

He wanted to use the RSS and UAH data.

Jan 31, 2014 at 7:46 PM | Unregistered CommenterEntropic man

EM, did you get the point I was making? I can understand that you might think it is a silly point, but did you get it? (About warming before the temperature started increasing - I have the impression that you missed my point altogether. Perhaps I did not express it well.)


Two standard deviations and 95% confidence are synonymous.

You really think so? Where did you get that information? Do you believe that would also apply for, let's say, an exponential distribution?

As I said, using confidence intervals involves making assumptions about the distribution (of annual temperature) - such as it having a normal distribution or, at least, the same percentile(s) as a normal distribution. It's hard enough to sort out the effect of assumptions, either explicitly made or implicit, on this sort of analysis without bringing in additional unnecessary ones. Tamino wisely did not mention percentiles and I think it would be wise to follow him and stick to 2 sigma bounds, rather than percentiles.
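Martin A's question about the exponential distribution can be settled with closed-form CDFs. A minimal sketch (standard library only; the choice of distributions to compare is mine, purely for illustration):

```python
import math

# Probability mass within mu +/- 2*sigma for three distributions.

# Normal: coverage is erf(2/sqrt(2)) ~ 0.9545, the familiar "2 sigma ~ 95%".
p_normal = math.erf(2 / math.sqrt(2))

# Exponential with rate lam: mu = sigma = 1/lam, so the interval is
# [-1/lam, 3/lam] and the coverage is P(X <= 3/lam) = 1 - exp(-3) ~ 0.9502.
p_exponential = 1 - math.exp(-3)

# Uniform on [0, 1]: sigma = 1/sqrt(12), so mu +/- 2*sigma spills past both
# endpoints and the coverage is exactly 1.0 -- nothing like 95%.
p_uniform = 1.0

print(p_normal, p_exponential, p_uniform)
```

As it happens, 2 sigma lands close to 95% for the exponential case, but the uniform case shows that the equivalence is a property of the normal distribution, not of standard deviations in general.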

If the annual values are a better fit to the blue line and its confidence limits, warming has paused and the sceptic meme "no warming since 1998" is correct.

If the fit is better to the red line and limits, this sceptic meme is wrong.

That needs to be qualified as being subject to the assumptions that are implicit in what Tamino has (with due credit to him) done. He did not seem to spell out explicitly the assumptions he made. A different set of assumptions - not unreasonable ones in view of what is and what is not known about climate change - can result in different conclusions. If we want to take it further, I suggest we try to identify the assumptions that are implicit in what Tamino did.

I won't repeat what I said before about not having a statistical model but my remark remains pertinent. Did you agree with it?

Jan 31, 2014 at 11:28 PM | Registered CommenterMartin A

Martin A

I think I understand you. The warming trend of the latter 20th century did indeed start before 1979. However, extending the line back to 1969 would not have changed the slope much and would have made it harder to compare the thermometer records with the satellite record, which only began in 1979. Tamino says that he started all the calculations from then to allow proper comparison between different records.

My comparison between 2SD and 95% CL is, if memory serves, valid for linear regressions and for individual sets of samples, in this case one year's data. Whether it holds for exponential curves is above my statistical pay grade, and is not relevant here because one of Tamino's explicit assumptions is an approximately linear increase.
In practice the increase in CO2 looks exponential. The change in heating effect with concentration is logarithmic. Other effects? Finagle knows. A linear rate is a reasonable compromise assumption.
The use of standard deviations also carries with it the assumption that the annual means come from data with a normal distribution.

IMHO Tamino's assumptions are:-

1) That the rate of warming is approximately linear.

2) That each year's average temperature arises from a normal distribution.

3) That 1979-1987 is sufficient to establish a linear regression trend line.

4) That the sceptic's dating of the pause from 1998 is valid.

5) That changes in ENSO, aerosols, insolation, etc can be regarded as noise, rather than materially affecting the long term trend.

Methinks he does have a statistical model, a linear temperature increase derived from annual averages, with standard deviations arising from normally distributed data.

His two outcomes are assumed to be a continuation of the linear trend or a change to constant temperature from 1998. The former would probably be the null hypothesis, as a continuation of the existing state.

Feb 1, 2014 at 6:32 PM | Unregistered CommenterEntropic man

Entropic Man -
I think you're on the right track. Here's an experiment you can do with Excel or your favorite programming language.

Take an ideal (noiseless) plateauing temperature history of the sort mooted by Tamino: set the anomaly to 0 in 1977, increase it by 0.01 each year through 1997 (when it reaches 0.2), and then hold steady at 0.2 thereafter. [I started in 1977 only to make the 1997 temperature come out to a nice even figure.] Compute the trend from 1979 to 1997 -- you should get a 0.01 K/yr slope, and it will lie over the ramping part of the history. The post-1997 values all lie below the extension of this trend line. Just as Tamino writes.

Now set the temperatures in years 1992 through 1994 (inclusive) to 0, as if something exogenous such as a volcano temporarily depressed temperatures. [Note that in this idealization, there is no long-term effect; temperatures bounce right back to the original trajectory.] Temperatures in 1995 through 1997 are 0.18, 0.19, 0.20 respectively, and the post-1997 temperatures remain at 0.2. The plateau is still present. But what is the new trend line, and where are the plateau years relative to this trend line?

I'm not saying this is the whole story. But it suggests there is strong reason to doubt the trend line as a predictor.
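HaroldW's experiment is easy to reproduce. Here is a sketch in Python (standard library only, with a hand-rolled least-squares fit; the numbers follow his description):

```python
def ols(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Ideal history: ramp at 0.01/yr from 1977, plateau at 0.2 after 1997.
temp = {y: min(0.01 * (y - 1977), 0.2) for y in range(1977, 2014)}
fit_years = list(range(1979, 1998))

a1, b1 = ols(fit_years, [temp[y] for y in fit_years])
# a1 is 0.01: every post-1997 value lies below the extended line.

# Now zero 1992-1994, as if a volcano briefly depressed temperatures.
temp2 = dict(temp)
for y in (1992, 1993, 1994):
    temp2[y] = 0.0
a2, b2 = ols(fit_years, [temp2[y] for y in fit_years])
# The dip sits in the later half of the fit window, so the slope flattens
# (to about 0.0058), and the unchanged 0.2 plateau now sits *above* the
# extended trend line until about 2008.
print(a1, a2)
```

Same plateau in both cases, different verdict: where the plateau years fall relative to the extended trend depends heavily on what happened inside the fit window.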

Feb 2, 2014 at 4:30 AM | Registered CommenterHaroldW

EM - At present I'm struggling to understand what implicit assumptions Tamino makes. Or what (if anything) we can say if we make no assumptions at all. (I've noted your opinions on his assumptions.)

On confidence limits etc -

I think there is no way of knowing what the underlying distribution for the residuals is. But I'm not sure that that matters unless we must have confidence limits for some reason.

(I don't think that the fact that the trend has been computed by regression says anything at all about the distribution for the residuals. I can easily construct a simple example where the residuals are nothing like normally distributed. )
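For what it's worth, here is one such construction (my own toy example, not Martin A's): a least-squares fit whose residuals are exactly two spikes at plus and minus 0.5.

```python
# y = 0.1*x plus an alternating +0.5, -0.5, -0.5, +0.5 pattern. The pattern
# has zero mean and is orthogonal to x, so least squares recovers the slope
# exactly -- yet every residual is +/-0.5, nothing like normally distributed.
xs = list(range(20))
pattern = [0.5, -0.5, -0.5, 0.5]
ys = [0.1 * x + pattern[x % 4] for x in xs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
```

The regression goes through without complaint; nothing in the fitting procedure itself requires, or reveals, normal residuals.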

Quoting for example (95% confidence == 2 sigma) implies an assumption they are normally distributed - for which we have no evidence. As I said, best to avoid chucking additional assumptions into the mix.

I think that the **parameters of the line** computed via a regression are probably close to normal (because of the central limit theorem, given some quite easy-going assumptions about the data). I think this may be what you are referring to.

Feb 2, 2014 at 1:38 PM | Registered CommenterMartin A

EM -a couple of questions:

1) That the rate of warming is approximately linear.

Why not *exactly* linear (with an unknown slope)? ('Approximately' then involves discussing what this means and its significance.)

2) That each year's average temperature arises from a normal distribution.


- I'm guessing that the variation in each year's temperature is assumed independent of that of other years. Would you agree with that? [it's clearly not totally valid - presumably the temperature on 1 Jan is correlated with the temp on 31 Dec, so annual averages cannot be completely independent]

- I don't think 'normal' contributes anything - if an unknown distribution is assumed, I don't think it affects his argument.

I think "That each year's average temperature arises from a normal distribution" equates to "each year's average temperature is given by the linear trend, with the addition of an independent random quantity of unknown but fixed standard deviation".

If you agree with that, here is a question: Do you think the randomly variable part of each year's temperature is due to:

[A] The "actual" mean temperature having increased (or perhaps decreased) a random increment from the previous year's average temperature.

Or

[B] The "actual" temperature having followed a true straight line but with measurement error (eg due to local fluctuations in weather and other conditions) resulting in the year-year differences?

I ask that because I think it affects what happens immediately after 'heating' is switched off.
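The distinction between [A] and [B] matters, and a quick Monte Carlo shows why (my own sketch; the trend, noise level, and horizon are arbitrary illustrative numbers). Under [B] the year-30 value scatters around the line with variance sigma squared; under [A] the increments accumulate, the variance grows roughly as (n-1) times sigma squared, and the series has no tendency to return to the old line.

```python
import random
import statistics

random.seed(0)
trend, sigma, n, runs = 0.01, 0.1, 30, 2000

last_a, last_b = [], []
for _ in range(runs):
    # [B] fixed line plus independent noise: each year is drawn fresh.
    last_b.append(trend * (n - 1) + random.gauss(0, sigma))
    # [A] random walk with drift: each year adds an increment to the last.
    x = 0.0
    for _ in range(n - 1):
        x += trend + random.gauss(0, sigma)
    last_a.append(x)

var_b = statistics.pvariance(last_b)  # ~ sigma**2 = 0.01
var_a = statistics.pvariance(last_a)  # ~ (n - 1) * sigma**2 = 0.29
```

So under [B], switching the heating off leaves a flat series around the last trend value, while under [A] the series simply wanders on from wherever it happens to be.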

Feb 2, 2014 at 4:54 PM | Registered CommenterMartin A

Martin A

CAGW regards the underlying warming trend as due to increasing CO2. The two major variables involved are the rate of concentration increase, which may be increasing exponentially, and the warming effect, which is proportional to the natural logarithm of the concentration and so grows ever more slowly as concentration rises.
The two non-linear effects cancel out over the concentration range experienced, giving an approximately linear temperature response.
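The cancellation EM describes is exact for a pure exponential, since ln(C0 * exp(k*t)) = ln(C0) + k*t, which is linear in t. A sketch with made-up numbers (280 ppm start, 0.5%/yr growth, 3 degrees per doubling -- illustrative assumptions, not measurements):

```python
import math

C0 = 280.0             # ppm; illustrative starting concentration
k = 0.005              # per-year exponential growth rate (assumed)
S = 3.0 / math.log(2)  # degrees per unit of ln(concentration), from 3 C/doubling

def response(t):
    conc = C0 * math.exp(k * t)       # exponential concentration growth
    return S * math.log(conc / C0)    # logarithmic warming effect

# Year-on-year increments are constant (S * k): the nonlinearities cancel.
steps = [response(t + 1) - response(t) for t in range(50)]
```

Real concentration growth is only roughly exponential, so the linearity is approximate in practice, which is presumably why Tamino treats it as an assumption rather than a derivation.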

Remember too, that there are other climate drivers. ENSO is a regular player. Think of the temperature boost produced by the 1997/1998 El Nino, which started the whole pause hare running.
There is also the roughly 11 year solar cycle, which pushes temperatures up or down about +/- 0.1C, and a possible 60 year cycle. Weather itself has a semi-random effect.

All of these create short term variations which can distort what would otherwise be a linear trend.

I have been looking again at the graphs and wonder if the 1998-2013 data is itself such a distortion. You may be familiar with the statistical phenomenon of regression to the mean. A one-off event pushes conditions a long way from the mean. The conditions then gradually return to the mean over time. Hysteresis is an example, as is a pushed pendulum.

I was speculating that the 1998 El Nino was the disturbing force, lifting temperatures well above the trend. The pause is the regression to the mean.

I'm stretching the data a bit, but on that basis the post-1998 data resemble three damping cycles of four years each, resuming the trend from 2011. How to test it? Same as always, wait and see. :-)

Feb 2, 2014 at 6:47 PM | Unregistered CommenterEntropic man

"I'm stretching the data a bit, but on that basis the post -1998 data resemble three damping cycles of four years each, resuming the trend from 2011. How to test it?"

Well, looking for patterns in random data invites discovery of meaning where there is none.

Our basic problem is not having a model to use for statistical analysis. If we had a few thousand years of detailed measurements, we could extract a model from that and use it.

I'll concede that Tamino's analysis is thought-provoking on the question of pause/halt.

It, plus your discussion, has got me interested in the question of how you can draw conclusions from a time series where you don't have (or you don't trust) an underlying model for what is generating it. I think it might be possible to say: "we have two plausible or competing models, A and B. What is the probability of the observed time series being generated by A and what is the probability of it being generated by B?".

At least (I think) the answer will tell you whether both models are about equally likely or one model is more likely than the other.
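That two-model comparison can be sketched with a Gaussian likelihood. Everything below is illustrative: the "observed" series is a noiseless synthetic plateau (so the arithmetic is deterministic), sigma is an assumed noise level, and the two candidate models are a continued 0.01/yr trend versus a plateau at 0.2 from year 20.

```python
import math

sigma = 0.05   # assumed observation noise level (illustrative)
years = range(35)

# Synthetic "observations": ramp for 20 years, then flat at 0.2.
obs = [min(0.01 * t, 0.2) for t in years]

pred_a = [0.01 * t for t in years]            # model A: trend continues
pred_b = [min(0.01 * t, 0.2) for t in years]  # model B: plateau from year 20

def log_likelihood(observed, predicted, s):
    """Gaussian log-likelihood of the observations under a model."""
    return sum(-0.5 * ((o - p) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
               for o, p in zip(observed, predicted))

ll_a = log_likelihood(obs, pred_a, sigma)
ll_b = log_likelihood(obs, pred_b, sigma)
# ll_b - ll_a comes out at 20.3 here: a log-likelihood ratio strongly
# favouring B -- as it must, since B generated the data exactly.
```

With real data one would also have to estimate sigma and penalise the plateau model's extra parameter, but the shape of the comparison is as above: one number per model, and the ratio says which model the observed series favours and by how much.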

Trying to make sense of what Tamino's analysis really says is making my brain go nonlinear. I'll post something if I eventually find something worth saying.

Feb 3, 2014 at 10:14 AM | Registered CommenterMartin A

This example from 2011 may also be apposite. The Corps of Engineers had to blow levees and open spillways to stop levees overtopping in Baton Rouge and New Orleans.

http://web.archive.org/web/20110524191314/http://www.csmonitor.com/USA/2011/0508/Memphis-and-Baton-Rouge-brace-for-record-breaking-Mississippi-flood

Feb 3, 2014 at 1:17 PM | Unregistered CommenterEntropic man
