Monday, May 27, 2013

Met insignificance 

This is an ultrasimplified version of Doug Keenan's post this morning.

The Met Office has consistently said that the temperature rise since 1850 is too large to have been the result of natural variability. Questioning from Lord Donoughue elicited the information that they came to this conclusion by modelling temperatures as a straight-line trend (global warming) plus some noise to represent normal short-term variability.

However, would a model in which temperatures went up and down at random on longer timescales, but without any long-term trend at all, be a better match for the real temperature data? Doug Keenan has come up with just such a "temperature line wiggling up and down at random" model and it is indeed a much better match to the data than the "gradual warming plus a bit of random variation" model used by the Met Office. In fact it is up to a thousand times better.

In essence then, the temperature data looks more like a line wiggling up and down at random than one that has an impetus towards higher temperatures.* That being the case, the rises in temperature over the last two centuries and over the last decades of the twentieth century look like nothing untoward. The global warming signal has not been detected in the temperature records.

 

*Here I'm only referring to the two models assessed. This is not to say there isn't another model with an impetus towards higher temperatures that would be a better match than Doug's model; it's just that nobody has put such a third model forward yet. (H/T JK in the comments)


Reader Comments (193)

Theo Goodwin (May 28, 2013 at 4:34 AM), I agree that natural variability is not a cause: it's the manifestation (i.e. what we observe) of a set of underlying and dynamic physical processes, many of which are coupled to varying degrees.

My concern is that Dr Schmidt seems so confident that we not only know *all* of these processes but can also model them with the necessary fidelity to justify that confidence. Moreover, his confidence does not appear to be supported by the IPCC, since they are quite candid about the high level of uncertainty surrounding CAGW.

Just follow this link…
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html
…and simply look at Table 2.11 in Section 2.9.1, which lists all possible sources of radiative forcing. Now compare it with Figure 2.20(A) in Section 2.9.2 and you’ll see that those listed as ‘Very Low’ in Level of Scientific Understanding (LOSU) are not included in any of the models.

Of course, the models may have improved since this report and all these unknowns and uncertainties may now be retired sufficiently to support his statement. However, I've yet to see any evidence to suggest that this is the case... rather the opposite, in fact.

May 28, 2013 at 10:08 AM | Unregistered CommenterDave Salt

Hi again, Nick,
The F&R paper is a step in completely the wrong direction. In fact, not to put too fine a point on it, it is total BS. The methodological problems can be easily dissected from a theoretical perspective - and have been by several blogs, but the easiest and most telling condemnation comes from a series of simple demonstrations by Troy. http://troyca.wordpress.com/2013/02/

In stating that "the significance levels are very high", I think you are missing the point completely. The significance is only a measure of the degree of "unexpectedness" from some assumed model. Doug Keenan demonstrates that an assumed model with 1000 times the relative likelihood shows no significance. The choice of model is the key element here.

May 28, 2013 at 10:11 AM | Unregistered CommenterPaul_K

Paul_K,
"Doug Keenan demonstrates that an assumed model with 1000 times the relative likelihood shows no significance."
It is, as you say, an unphysical model (and with more coefficients). So it's not clear how that helps.

But do you know what is the basis for saying that it shows no significance (of what)? All I can see is a claim that it has higher likelihood - ie is a better fit.

May 28, 2013 at 10:33 AM | Unregistered Commenternick stokes

Whilst all of this is of great interest, it is overlooking one important issue, namely the assumption that the thermometer record is accurate and reliable to tenths of a degree.

At the crux of all of this debate is the realistic assessment of the margins of error in temperature measurements/assessments made from the 1850s to date, and to what extent more recent measurements/assessments may have become polluted by station drop-outs, poor station siting, UHI and the like.

I suspect that when a realistic assessment of the bounds of errors involved is made, we cannot say whether it is warmer today (i.e., 2013) than it was in the early 1880s, or in the 1930s/1940s.

For sure there have been temperature fluctuations (some up and some down) and it is almost certainly the case that there has been some warming trend since the 1850s, but within that I do not accept that we can say with a high degree of certainty that the peaks in and around the early 1880s and the 1930s/1940s are not about equal to the temperatures today.

That I consider to be a more fundamental issue and one which is often overlooked.

May 28, 2013 at 10:53 AM | Unregistered Commenterrichard verney

Surely the best possible replication of the Earth and its atmosphere and of that atmosphere's average temperature, given different levels of atmospheric CO2, and all else being effectively similar (real-time and actual-size) has been provided by the Earth itself and the results set down in the geological record ?

Does the Earth suffer thermal runaway when CO2 levels are much higher than at present? Look for the evidence. The answer is a chilly negative.

May 28, 2013 at 11:10 AM | Unregistered CommenterBob Layson

Further to my comment above at 10:53 AM, quite by chance Lord Monckton has posted an interesting article at WUWT which should be reviewed:

"HadCRUt4: revision or revisionism? Posted on May 28, 2013by Guest Blogger Guest essay by Christopher Monckton of Brenchley"

This article validates the point I made above, namely that, within the bounds of error, we cannot say with any high degree of certainty that it is warmer today (2013) than it was in the early 1880s (or, for that matter, in the 1930s/1940s).

Just stop and consider that point for a moment and its implication. If the warming was statistically significant and/or if climate sensitivity to CO2 is supposedly large, it is quite remarkable that the possibility exists that the 1880s were as warm as, or even warmer than, today.

May 28, 2013 at 11:52 AM | Unregistered Commenterrichard verney

I have found this whole discussion baffling and I've spent some time trying to figure out what part of it I don't get. As far as I can see it doesn't really prove anything or help in any way. Though this might be because I haven't really understood the maths.

Let's start with the implicit assumptions that both sides are making - but have possibly forgotten. Doug assumes there is no warming caused by CO2. He therefore goes looking for some kind of random number generator that can produce a time series that looks like the climate over the last 150 years. This 150 years definitely includes a warming period in the 20th C - let's leave to one side whether it is significant. He ends up picking ARIMA(3,1,0).

The climate scientists believe that CO2 causes warming. They want a random number generator that fits the noise but doesn't fit the warming trend. They can't pick an option that fits the recent warming - they'd be putting themselves out of a job. Apparently AR(1) fits the bill.

This strikes me as two competing circular proofs. Each argument sets out to prove something which they have already assumed. This ain't science, it's theology.

Choosing ARIMA(3,1,0) also looks problematical. Firstly, it implies you would be willing to consider any of ARIMA(0,0,0), ARIMA(0,0,1), ARIMA(0,0,2)... and so on. That's a choice of 64 models, each with a number of arbitrary parameters. With only 150 years of data, one or more of them is going to fit. I think ARIMA(3,1,0) implies something like 5 adjustable parameters. As Johnny von Neumann said, "with 4 parameters I can model an elephant; with 5 I can wiggle its trunk."

As this is bordering on theology we should probably apply Occam's razor. Fewer arbitrary assumptions/parameters is better. And on that basis AR(1) plus assuming global warming is probably better than ARIMA(3,1,0).

For me to be convinced of either argument, I think the approach needs to be different. Firstly you can't use instrumental data for the last 150 years - it contains the data that is the source of the dispute. A better starting point would be to ask which of the various statistical models was the best explanation - before we started emitting vast quantities of CO2. Sadly we probably don't have the data.

And that's the real lesson - we don't have the hard evidence required to prove this point. And probably civil servants should stop lying and claiming that they do.

May 28, 2013 at 12:05 PM | Unregistered CommenterNickM

@Nick Stokes,

"But do you know what is the basis for saying that it shows no significance (of what)? All I can see is a claim that it has higher likelihood - ie is a better fit."

The model is an AR(3) model on the difference series. You can test for the inclusion of a drift term in this model and find that it adds nothing to the information content. The drift term in the difference series is equivalent to a deterministic trend in the level series. The lack of need for the drift term to explain the data suggests that the temperature movement may be fully explained by the stochastic component in the model. In other words, the rise in temperature is "not significant" when measured against this model.

This model is 1000 times more likely than the linear trend model with AR(1) noise in terms of its explanatory power. But it is clear from Doug's writings on the subject that he is not wedded to this model. It merely serves as a (powerful) illustration of the importance of specifying and defending the underlying model assumption in any test of significance. In declaring that the temperature rise from 1880 was statistically significant, the Met Office made a meaningless statement. Doug is highlighting that fact; he is not seeking to defend the ARIMA(3,1,0) model as the "correct" model, which it clearly isn't.

May 28, 2013 at 12:22 PM | Unregistered CommenterPaul_K

Paul_K
"You can test for the inclusion of a drift term into this model and find that it adds nothing to the information content."
You can. But has it been done? The R program Doug included with the WSJ article didn't do it. And I haven't seen any figures quoted. I've seen lots of assertions of insignificance though.

May 28, 2013 at 12:33 PM | Unregistered Commenternick stokes

A few notes about significance. First, the term 'significant change' is a confusion of levels of discourse as change is the difference between measurements and significance comes from a statistical decision procedure as once devised by Fisher. Second, 'significant' is meaningless without the addition of level. It is the same as saying that someone is tall without mentioning how tall. Third, Fisher introduced the concept for making practical decisions about experimental results. In the context of post-hoc evaluations it becomes magic: we have seen something unusual and filtered it from a stream of numerous events we found less interesting. Fourth, each null hypothesis postulating an exact null is trivially false: a null hypothesis will be rejected at any significance level if the sample size is sufficiently large. This is a well known problem in statistics and shows at least that significance depends on sample size. An increase of temperature by one millionth of a degree would be called significant if we had one trillion of measurements. Connect this with my first remark.

May 28, 2013 at 12:45 PM | Unregistered CommenterMindert Eiting

@NickM,
With respect, you are missing a bit of basic statistics. You don't get a free choice in selecting the model. Among the first tests any statistician would run on the time series is a test for a unit root. Once it is clear that the series does test positive for a unit root (and all of the temperature series do), then you are immediately restricted in your structural choice of statistical model.

May 28, 2013 at 12:47 PM | Unregistered CommenterPaul_K

NickM: "I think the approach needs to be different. Firstly you can't use instrumental data for the last 150 years - it contains the data that is the source of the dispute. A better starting point would be to ask which of the various statistical models was the best explanation - before we started emitting vast quantities of CO2. Sadly we probably don't have the data."

Nick, I've never understood why anyone argues about whether this is natural, because as an electronics engineer (and physicist), as soon as I started thinking of the signal as just another electronic signal, it never struck me as anything but obvious that it was noise.

No professional engineer would look at this signal for more than a few moments and not say something like: "there may be something there but if it is, it's hidden by the noise"

But a more formal approach would be fairly simple:

1. Take a period when the signal is not expected to be present. This is roughly the period before 1950. From this, in electronic terms, we work out a "noise model". In physics this is a model of natural variation. This is pretty straightforward because it is something like a 1/f^1.5 spectrum (from memory).

(The only difficult concept I can see is that f is of the order of 10^-9 Hz, which may frighten some people but shouldn't.)

2. Next we compare the period in which we have the signal (1950-2013) with the noise generator or model of variability to see if one is compatible with the other.

Again, it isn't that difficult to create a noise generator with the right frequency of noise and it is painfully obvious that what we see in the 20th century temperature is about as common as muck. NO STATISTICS NEEDED!

In fact, we do not have to do any complicated maths, because the warming from 1910-1940 is precisely the same amount over the same time as from 1970-2000. One was "natural", so the other can be explained as natural.

In other words any electronics engineer looking at this as a problem would within seconds say: "it looks like noise". The only reason I was confused was that most electronic signals are a few ms long, whereas the equivalent time here is a century.

However, if you want a more formal approach, you could take the central England temperature series and produce a model for long term climatic noise for this station. Then you could examine this record to determine with a high degree of certainty whether the recent rise can be explained by the climate variation model for this single station.

But if one desired, I'm sure some statistician would be able to work out a way to take several of the long-series temperature records to create a very accurate model of the global natural variation (noise signal) and we could then come up with a statistic for whether that is compatible with the current global temperature record. But why bother when it is painfully obvious it is consistent with noise?

However, if one were to prove the obvious, the appropriate statistical test would be different from that in common use. I'm more used to Fourier transforms and there is a formula for the significance of a signal within a band.

This situation I imagine would require a Laplace transform and a test to see whether there is a perturbation corresponding to the CO2 curve which is greater than would be anticipated from the noise. The statistical test should be in frequency space (or the equivalent for Laplace) and not in the time domain.

May 28, 2013 at 1:40 PM | Registered CommenterMikeHaseler

ARIMA(3,1,0) uses a differencing method to detrend the data. Keenan finds no trend in the data because his proposed method removes it.

May 28, 2013 at 4:18 PM | Unregistered CommenterEntropic Man

May 28, 2013 at 9:59 AM | MikeHaseler

Very impressive analysis and argument, Mr. Haseler. But you are far too kind to Alarmists. Their assumption is not merely that all can be explained but that all causes are known. In its most dumbed-down version, which is extremely popular among Alarmists, the position is that if one is to reference natural variation as an explanation of some rise in temperature then one must reference the cause that natural variation is. In other words, the Alarmists' position is that all accounting for rising temperatures must be a discussion of causes and that all causes are known.

But natural variation is not a cause but an entirely different sort of thing. It is the range of our historical data. When we sceptics point out that a temperature rise falls within natural variation, our claim is that the rise is within the range of historical data and, therefore, represents something that is to be expected from nature because it has happened in the past (specifically, in this case, the past before manmade CO2).

Sceptics are pointing to the data. Sceptics demand respect for the data. Alarmists are notorious for being uninterested in the data. Alarmists are quite happy to substitute computer model scenarios for data. Alarmists believe that all the causes are found in their models. Such an assumption goes beyond anything that can qualify as scientific practice. The Alarmist error is much more serious than your presentation suggests.

May 28, 2013 at 4:27 PM | Unregistered CommenterTheo Goodwin

'significant' is meaningless without the addition of level.

Mindert Eiting

I do not have access to the Met Office's full analysis. However, inspection of the 95% confidence limits on the graph of combined land and sea temperature at

http://www.metoffice.gov.uk/research/monitoring/climate/surface-temperature

suggests a significant difference between the 1880s and the 2000s of 4-5 sigmas, a probability for the null hypothesis of less than 1%.

May 28, 2013 at 4:40 PM | Unregistered CommenterEntropic Man

Does anyone have a link to a record of the actual Q&A's between Lord D and whoever he questioned from the Met Office? Thanks.

May 28, 2013 at 5:52 PM | Unregistered Commenterlucia

@Paul_K
>With respect, you are missing a bit of basic statistics.

Yeah, well, I did say I might not have understood the argument properly :-) I've just had to go and look up unit root and am not much the wiser. I did a bit of stats decades ago; I feel the language must have changed.

So given they have a unit root (did I get that the right way round?), which implies they are non-stationary, which I can understand, how many models are allowed? And are we talking about a temperature series before the alleged onset of global warming or one that includes the warming that may or may not be caused by CO2?

May 28, 2013 at 6:06 PM | Unregistered CommenterNickM

May 28, 2013 at 1:40 PM | MikeHaseler

More good work, Mr. Haseler. Thank you.

May 28, 2013 at 6:17 PM | Unregistered CommenterTheo Goodwin

@Lucia

Here is an earlier thread
http://www.bishop-hill.net/blog/2013/4/23/advisers-advise-politicians-to-look-in-the-peer-reviewed-lit.html

May 28, 2013 at 6:38 PM | Unregistered Commenterdiogenes

May 28, 2013 at 10:08 AM | Dave Salt

I agree completely. I commented on your earlier post simply because I wanted to emphasize that natural variability is not a cause.

May 28, 2013 at 6:49 PM | Unregistered CommenterTheo Goodwin

@ Lucia

the ultimate source for Lord D's questions is this:

http://www.publications.parliament.uk/pa/ld201213/ldhansrd/ldallfiles/peers/lord_hansard_3000_wad.html

May 28, 2013 at 6:52 PM | Unregistered Commenterdiogenes

For the avoidance of doubt, Baroness Verma does not work for the Met Office - she is the spokesperson in the House of Lords for the relevant Government department.

May 28, 2013 at 6:58 PM | Unregistered Commenterdiogenes

My take is that for trends to occur with a random input, there must be persistence in the system.

If one assumes persistence with a power law of >0.9, one can get 100-year trends without too much difficulty from the variability of local temperature signals. So, in my view, the idea of a 100-year trend with an underlying random signal isn't very unlikely. When I calculated it, it wasn't significant for a 0.7°C trend.

The problem I have with statistical modelling is that when one uses a particular ARMA, one is making a statement about the physical nature of the system - a question usually ignored by statisticians.

If one gets wildly different significances with different models, the problem is not statistical but sorting out the physics of the situation. Until this is done, I would suggest that the simplest explanation should be taken.

May 28, 2013 at 7:10 PM | Unregistered CommenterRC Saumarez

@ RC Saumarez, 7:10 PM

Yes, a statistical model should be plausible on both statistical and physical grounds, and the onus is on the model’s proponents to present those grounds. Nobody has presented statistical or physical grounds for the model used by the Met Office (and IPCC, etc.), as noted in the post.

Regarding the statistical plausibility of the driftless ARIMA model, neither I nor anyone that I know of has been able to find a model that is much better, under relative likelihood. (Note that relative likelihood is not based directly on likelihood, but rather on Akaike information criterion; AIC is similar to likelihood, but takes the number of parameters into account.)

Regarding the physical plausibility of the driftless ARIMA model, I have no opinion; see too my prior comment, on May 27 at 8:36 PM. Until/unless physical plausibility can be justifiably claimed, it would be inappropriate to draw statistical inferences from the driftless model. The post only relies upon the driftless model to show that the model used by the Met Office should be rejected. The nonexistence of a significant increase is inherent in the definition of driftless (this means that when we take first differences of the series, the average of those differences tends to zero).

May 28, 2013 at 8:59 PM | Unregistered CommenterDouglas J. Keenan

May 27, 2013 at 4:40 PM | Jonathan Jones
I hesitate to get involved with an Oxford Don, but:
Isn't the mere existence of a (plausible, whatever you may think of it) model (ARIMA(3,1,0)) excluding the malign effects of carbon dioxide total destruction of the argumentum ad ignorantiam used by the warmistas? (We can't think of anything else, so it must be carbon dioxide.)
What's their case without that argument?

May 28, 2013 at 9:02 PM | Unregistered CommenterEvil Denier

Evil Denier, the argumentum ad ignorantiam is indeed a deeply stupid argument, and if anyone has been foolish enough to rest their case on it then they are in trouble.

Personally I don't use that argument and have never done so. If the Met Office has used it in a moment of madness then more fool them. (I assume they don't actually rely on it, but it would still be stupid to give the impression that they do.)

May 28, 2013 at 9:44 PM | Registered CommenterJonathan Jones

Jonathan
Isn't what Evil Denier says in essence (if not explicitly so) the warmista position?
They are quite happy to attribute the 1910-1940 warming to natural variation — I've heard mention of solar activity and aerosols amongst other things — but 1970-2000 is "different" for some reason; neither solar nor aerosol hacks it apparently and therefore it has to be CO2.
In the first place the logic of that argument escapes me since I'm sure there must be several other things it could be but not being a scientist (in keeping with most of our politicians and media people) I don't know what they might be, a situation which allows the warmistas to tell me any old rubbish.
In the second place, as I have said many times, this whole AGW edifice was always about green politics and only tangentially about anything to do with science. Given the somewhat incestuous relationship between the warmistas in general and the eco-activists it seems that "has to be CO2" is a lot less likely than "we want it to be CO2".

May 28, 2013 at 10:05 PM | Registered CommenterMike Jackson

The level of debate and analysis in this blog stream has been excellent. I congratulate all those who have contributed so thoughtfully and intelligently. At the end of the day it seems to me that we are no further forward. It seems universally accepted that there has been a temperature rise in the last 150 years, but we cannot establish the extent to which this can be attributed to (unknown) natural causes as opposed to man-made causes. A bit of each would be an obvious a priori position.

At the present rate of progress we should reach a doubling of atmospheric CO2 levels relative to the 1850s around the end of this century. If we assume CO2 sensitivity at around 1.5 C, which is where the bulk of estimates now seem to be landing, and assume that maybe half of the warming since the 1850s may be attributable to man's impact on CO2 levels, then, all other things being equal (which they never are), there may be about one degree of anthropogenic warming 'in the pipeline' to year 2100. That sounds a lot less than catastrophic to me.

So my feeling is that the situation needs to be monitored closely and discussed extensively, while we continue business as usual, with best efforts to improve the efficiency and environmental acceptability of existing means of power generation. And at the same time we should apply our human capabilities to developing means of energy production, such as fusion and gen4 fission reactors, which can feasibly, realistically and economically lead to a world where the whole issue of man's impact on global temperatures is no longer a relevant cause for concern.

May 28, 2013 at 10:44 PM | Unregistered CommenterDespairing

Mike Jackson (May 28, 2013 at 10:05 PM) said "...I'm sure there must be several other things it could be but not being a scientist (in keeping with most of our politicians and media people) I don't know what they might be..."

Well, if you take a look at the IPCC figures I mentioned in my previous post (May 28, 2013 at 10:08 AM), you may get some idea of what they might be.

May 28, 2013 at 11:06 PM | Unregistered CommenterDave Salt

(Reposted from the Environmentalism brings you forest clear-cutting thread as being relevant here.)


@May 28, 2013 at 10:23 PM not banned yet

Thanks for the Met Office pamphlet.

One section in it says:

Are computer models reliable?

Yes. Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes.

Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.

But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.

Any ordinary and uncritical reader would understand that to mean that, except for uncertainties about future greenhouse gas emissions, the Met Office models can predict future climate precisely.

"Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future."

For the Met Office's management to claim this is irresponsible. It is on the same level as claiming things are statistically significant when they are not.

I can make a spreadsheet that reproduces past climate but with zero ability to predict future climate. Being able to reproduce the past does not confirm that the physical reality has been correctly represented in the model and that it is able to predict the future. In any case, it is not too hard to devise systems that, even if correctly modelled, are unpredictable because of the runaway growth of errors. Quite possibly, the climate is such a system.

May 28, 2013 at 11:48 PM | Registered CommenterMartin A

Entropic Man, thanks for the figures. 4-5 sigma is what physicists require for their Higgs Boson. Yes, that implies a probability less than one percent. If you win tomorrow the capital prize in a lottery, that event will be significant at 4 sigma at least (the lottery can tell you). So you may conclude that in your case the prize was not determined by a random number generator (natural causes).

May 28, 2013 at 11:50 PM | Unregistered CommenterMindert Eiting

MikeHaseler :

Here is a Fourier plot of the Armagh Maximum Temperature record from January 1844 to December 2004 (160 years).
Tidy, isn't it:

http://futurehistoric.wordpress.com/2011/04/08/the-armagh-record-almost-a-random-walk/

May 29, 2013 at 12:13 AM | Unregistered CommenterJohn Silver

Mindert Eiting

The two cases are not comparable.

For the temperature record the null hypothesis is that the changes are random, i.e. that the 1880s and 2000s data can be regarded as part of the same sample. For a 4-5 sigma difference that null hypothesis is less than 1% probable. The alternate hypothesis, that the two samples are different, is the more likely. Given such numbers one looks for some reason for the difference.

For a truly random process like drawing lottery numbers, all outcomes are equally probable, and my winning or losing cannot be attributed to any specific cause. The odds of a £10 win are 1 in 54, and over a large number of tickets my winnings would tend to that proportion. For the big win the odds are 1 in 14 billion.

There is no danger of me winning a fortune. After calculating the odds I stopped buying tickets.

May 29, 2013 at 12:49 AM | Unregistered CommenterEntropic Man

Martin A

A model is a mathematical representation of the main physical processes determining climate. To test a model you input the conditions for a particular date and run the model forward for a suitable time, perhaps a decade. You then compare the output of the model with the observed record for that decade.

If the model accurately simulates known behaviour, it gives some confidence that it will predict future behaviour.

There are, of course, limitations. If changes in the processes occur, the model will not show their effects. For example, only the most recent models will include the effect of the reduced energy input from the Sun during Cycle 24, which was not predictable at the time of AR4. Older models will therefore overestimate the temperature rise for the last five years.

One of the big gaps in the literature is a successful sceptic model. I have never seen reference in the literature to a model which successfully reproduces even the known behaviour of the climate without including the effect of increased CO2 on warming.

I have played with the models on Dr Spencer's website, but found no configurations which give extra warming without extra CO2. You should try it yourself.

The publication of such a model, which then successfully withstood general scrutiny, would be the real trigger for a paradigm shift. Until then I will continue to regard all this unsubstantiated discussion of uncertainties and unknowns as arm-waving, rather than evidence.

May 29, 2013 at 1:30 AM | Unregistered CommenterEntropic Man

John Silver

You should try living in Ireland. It is universally acknowledged that we do not have climate, just weather!

As an aside, the location on Earth showing the least climate change is the South Pole; the next is Ireland. We are far enough west to see very little continental weather, so we are dominated by the Atlantic Ocean and the Gulf Stream. Most other effects, ENSO etc., seem to damp out before they reach us.
The result is a climate which remains very stable from year to year and has shown less than average warming. We are due to get warmer and (even!) wetter, but there's not much sign of it yet.

May 29, 2013 at 1:48 AM | Unregistered CommenterEntropic Man

Martin A

A model is a mathematical representation of the main physical processes determining climate. To test a model you input the conditions for a particular date and run the model forward for a suitable time, perhaps a decade. You then compare the output of the model with the observed record for that decade.

If the model accurately simulates known behaviour, it gives some confidence that it will predict future behaviour.

(...) Until then I will continue to regard all this unsubstantiated discussion of uncertainties and unknowns as arm-waving, rather than evidence.
May 29, 2013 at 1:30 AM Entropic Man

EM - have you ever programmed and verified simulation models (for continuous dynamic systems, or for discrete event systems, or for whatever)? Your words have the sound of someone who has never had to face the challenge of validating models against the physical system they represent, knowing that users of the simulations will be watching like hawks for discrepancies against reality.

What you describe is more or less what the Met Office says it does. What you say is more or less what the Met Office says in claiming that its models have been validated.

There are several issues:

- If a model could not even reproduce the observed historical behaviour then it would have failed before even reaching the first hurdle. Being able to reproduce known behaviour is necessary in validating a model but is not sufficient to validate it. As I said, my spreadsheet simulator can reproduce the historic record but is useless for predicting the future.

- If your model contains gross simplifications ("parameterisations") of effects that are ill-understood (such as cloud effects in the Met Office models), then reproducing the historical data provides no confirmation that the model is valid under different circumstances. This is because observed data from the period being simulated was also used in performing the parameterisations. This is a well-known error in many branches of science - testing a model using some of the data that was used to construct the model. Parameterisations involve modelling the reality by a gross approximation known to be valid only under the observed conditions, and most unlikely to remain valid under other conditions.

- Reproducing one single trajectory (the observed climate over a decade) does not provide testing under changed conditions, let alone conditions of extreme change. Testing of simulation models in other fields involves much testing of 'corner cases' where the most extreme conditions possible are simulated and verified.

~~~~~~~~~~~~~~~~~~~~~~

"One of the big gaps in the literature is a successful sceptic model..."

It is a common misconception amongst AGW True Believers that, before someone can point out a fallacy in some aspect of climate science, they should produce an alternative model (or theory or observations or whatever). It does not work like that. I do not need to produce a validated climate model (in my view something that is beyond the capability of current knowledge) before I have the right to point out the fallacies in the Met Office's ludicrous and irresponsible claims.

May 29, 2013 at 8:41 AM | Registered CommenterMartin A

Entropic Man (May 29, 2013 at 1:30 AM) said "If the model accurately simulates known behaviour, it gives some confidence that it will predict future behaviour."

But we know that these models are 'tuned' to match the historic data, so why should this give us confidence in their predictive skill? Looking at model future projections and then comparing them with what really happened seems the only practical way to judge their predictive skills, and current evidence tends to suggest that this is rather poor.

Entropic Man also said "One of the big gaps in the literature is a successful sceptic model."

Yes, this would be nice. However, the fact that one does not yet exist cannot be used to justify our confidence in the current set (i.e. absence of evidence is not evidence of absence).

May 29, 2013 at 8:55 AM | Unregistered CommenterDave Salt

Entropic Man (May 29, 2013 at 1:30 AM) said "If the model accurately simulates known behaviour, it gives some confidence that it will predict future behaviour."

The fact that the Met Office models did not predict the global average temperature stasis of the last fifteen years or so says all you need to know about their predictive ability.

May 29, 2013 at 9:38 AM | Unregistered Commentersplitpin

Evil Denier, the argumentum ad ignorantiam is indeed a deeply stupid argument, and if anyone has been foolish enough to rest their case on it then they are in trouble.

Personally I don't use that argument and have never done so. If the Met Office has used it in a moment of madness then more fool them. (I assume they don't actually rely on it, but it would still be stupid to give the impression that they do.)

May 28, 2013 at 9:44 PM Jonathan Jones



JJ - please see the Met Office's My Climate and Me web site, which explains the Met Office's science.

http://www.myclimateandme.com/2013/03/07/ok-the-earth-is-warming-but-how-do-we-know-its-us/#comments

The Met Office text is as follows (barring errors in my transcription):

Unfortunately it's not possible to conduct a laboratory experiment to conclusively test whether climate change is caused by man or not.

However, we do understand the natural factors that affect global temperatures; things such as volcanoes or solar variations or variations in the earth's orbit. These natural factors are not sufficient to cause the 0.8 °C rise in temperatures that we've seen since 1850.

We also know the properties of greenhouse gases such as carbon dioxide. Way back in 1859, British scientist John Tyndall showed that carbon dioxide could trap heat. So the physics is actually quite simple. CO2 is actually incredibly effective at trapping heat compared to the other gases in the air, so if we increase the amount of carbon dioxide in the atmosphere, then we are trapping more heat in it. This has the effect of warming the earth.

Since the industrial revolution began in the late 1700s we've been steadily increasing our output of carbon dioxide. In 2012 we reached a concentration of 394 parts per million of carbon dioxide in our atmosphere.

This represents a 25% increase since 1960 and is the highest level of carbon dioxide in our atmosphere for at least 800,000 years. Over the same period, we have noticed a 0.8 degree rise in global average temperatures.

The special increase in global temperature since 1850 can only be explained if we include the effect of increased greenhouse gas levels in particular carbon dioxide produced by us in burning fossil fuels.

(my emphasis)

My comment on MC&M was as follows:

I think the argument here can be summarised as “we can’t make any measurements that show warming is due to CO2 but the 0.8 degrees rise we have seen must have been caused by CO2, because we don’t know of anything else that could have caused it and it is, after all, a greenhouse gas”.

I understand that there has been no significant warming for 15+ years, although atmospheric CO2 has continued to rise.

Doesn’t it make sense to say now “well, we now see it could not have been that, because the warming has stopped while the CO2 went on rising. So it really must have been caused by something we don’t know about”?

Explanations such as “it is still warming but masked by something else we don’t know about” sound hollow. It is one thing without evidence being explained by a second thing without evidence.

May 29, 2013 at 10:08 AM | Registered CommenterMartin A

Entropic Man , I do not agree and I do not believe the 4-5 sigma story, but that is not the point. My example is comparable. Note the example Keenan used in his introduction. You introduced (false) knowledge about what is going on in lotteries, as you called them truly random. It has happened in lotteries or similar games, in Italy for example, that by cheating prizes were awarded to friends of the organization. The question therefore is of whether, if you get a capital prize, this was the result of human (or supernatural) intervention or not. In the lottery of the universe the earth got the prize of one century with rising temperatures. Was this the result of human CO2 intervention (or cheating with the record) or not? This is the whole issue. In my former comment I tried to say that the concept of significance is irrelevant in the context of post-hoc judgement. Of course, the rise of temperatures is 'significant at level so and so', otherwise we would not have had this discussion and BH would not have existed. If I would get a capital prize in a lottery and you would say that I have friends in the organization, my answer would be that each month someone somewhere gets this exceptional prize and that Italian practices are possible but not very plausible. It's up to you to demonstrate those practices. Human CO2 intervention is an extraordinary hypothesis needing very convincing evidence.

May 29, 2013 at 12:26 PM | Unregistered CommenterMindert Eiting

Martin A, I suspect that the brighter minds at the Met Office are swiftly realising that My Climate and Me was a ghastly mistake. Personally I would advise them to disown it, but I can see that might be politically tricky.

As I said


If the Met Office has used it in a moment of madness then more fool them. (I assume they don't actually rely on it, but it would still be stupid to give the impression that they do.)

I still see no evidence that the Met Office actually relies on such weak arguments; I think they just parrot them when they want to avoid using long words.

May 29, 2013 at 4:38 PM | Registered CommenterJonathan Jones

Mike Haseler
I liked your simple contrasting of climate researchers' and sceptics' assumptions.

Climate researchers:

1. “Everything has to be explained (this is not stated explicitly but is their basic philosophy that everything should be explained)
2. That "it cannot be explained as natural variation".
3. Therefore, even when we know that CO2's effect is far too small, that because something has to have done it, and CO2 is the only suspect they offer, they introduce ideas to scale up this effect and because there is only one main suspect…”

Sceptics:
1. “There is natural variation.
2. That the temperature signal could be produced in its entirety by natural variation.
3. So, natural variation is all that is needed to explain the climate signal”

As you point out, you can predict things without knowing how they work. Newton did that for gravity, and it’s served us well for over 300 years!

And you argue that the sceptics’ assumptions “…(are) really challenging the philosophy of scientists that everything can and should be explained scientifically…”, and this is due to the difference between academia and commercial engineering.

While it’s true that whole swathes of academia and many professions are on “…an intergenerational crusade…”, it’s not, as you say, “…to understand everything…”; it’s to ‘save the Planet’ from nasty old us. They’re the generation of Rachel Carson, Blueprint for Survival, WWF & FOE, and “Captain Planet”.

I’ve tended to ‘explain’ the way science is portrayed as the megalomaniac hubris of the senior scientific priesthood, especially the climate priesthood. But it’s not just this that’s driving the craziness, although the climate priesthood are a rampantly arrogant, hubristic lot. What we’re dealing with is a modern caricature of what science is. A sort of neurotic, Tomorrow’s World pastiche. A silly, inaccurate, school-booky, caricatured exactness. A know-all’s phantasy land, perhaps reinforced by the digital game world. It’s a product of our ‘times’, and the shed-loads of money that’s been conned out of government, based on the phantasy. Climate ‘science’ is by far the worst con, but it’s one of many.

I thought your insight (in your second Comment), as an electronic engineer/physicist, that the Earth’s temperature over time looks like mainly (only?) noise and no signal, was spot on. And, as you say, fairly easy to analyse (if you understand how to do the stats – I don’t!).

But I can’t agree with your idea that “…Scientists should leave decisions to professional decision makers”. Especially as I’ve no idea who you are thinking of. They might be in the Civil Service (CS), but they don’t seem capable of independent critical thought. Most policy wonks are CO2-is-evil believers to a man and a woman, and universities not only believe, but can’t believe their eyes as the UK state and EU slosh billions of Euros their way. They’d be fools not to believe.

I wish there were “professional decision makers” involved, but I don’t know where you’d find them. Where would you find them?

May 29, 2013 at 5:46 PM | Unregistered CommenterMark Piney

Martin A, I suspect that the brighter minds at the Met Office are swiftly realising that My Climate and Me was a ghastly mistake. Personally I would advise them to disown it, but I can see that might be politically tricky.

As I said
If the Met Office has used it in a moment of madness then more fool them. (I assume they don't actually rely on it, but it would still be stupid to give the impression that they do.)

I still see no evidence that the Met Office actually relies on such weak arguments; I think they just parrot them when they want to avoid using long words.
May 29, 2013 at 4:38 PM Jonathan Jones

JJ - Well, there is no doubt that the Met Office's management is dysfunctional - reminiscent of the BBC in some ways perhaps. My Climate and Me was obviously set up and went live without first getting the buy-in of the scientists involved. (My observations and speculations at Trouble At T'Jewel in the Crown).

I find the My Climate and Me argument from ignorance at least as convincing as the "our models confirm it's due to CO2" argument which seems to be the only other argument put forward by the Met Office.

In fact the latter can be regarded as a complicated and expensive version of the argument from ignorance although less obvious to people who find "it came from our supercomputer programmed on physical principles (whisper - - - except for the bits we don't understand, where we use fiddle-factors)" very convincing.

If there is any other argument that CO2 causes global warming used by the Met Office, then I have missed it and I'm keen to be put right.

May 29, 2013 at 6:09 PM | Registered CommenterMartin A

Martin A

Sceptics nitpick at the models, but if you want to convince anyone except your own choir you need to provide a better explanation than CAGW for the behaviour of the system. "Natural variation" gets waved about, but no candidates are put forward. The Sun produces an 11-year oscillation of 0.1C; the AMO a 60-year cycle of amplitude 0.3C. Go back to the Camp Century cycles and you see longer cycles, but all of similar small magnitude. The only observed cycles capable of generating the temperature variations we have seen since 1880 are Milankovitch's.

There is a statistically significant rise in temperature since 1880 and a 75% correlation between temperature and CO2, both in recent and paleo data. If you cannot provide a scientifically sound alternative explanation for both, you may as well be slaying sky dragons.

May 29, 2013 at 7:05 PM | Unregistered CommenterEntropic Man

Remember that natural variation is not magic. It is a change in the quantity of energy in the system or its distribution. If you want to falsify CAGW using the natural variation argument AND HAVE IT GENERALLY ACCEPTED BY THE SCIENTISTS, you need to have an alternative explanation showing how and why the energy changes we have seen occurred; one which fits the evidence better than CAGW. It also needs to be at least as successful as a predictor, whether forecasting or aftercasting.

Think of scientists as like mountaineers. A mountaineer is reluctant to let go of an existing hold until he is sure that the next one will bear his weight. Most scientists will not shift paradigms until they are confident that the new one is at least as good as its predecessor.

Consider geology. The first hints that continents might be mobile were published before WW1, but it took until the 1960s for plate tectonics to become generally accepted.

If you want to replace CAGW with something you regard as better, as classical geology was subsumed into plate tectonics, you need a lot more than negative nitpicking. You need the climate equivalent of positive evidence for sea floor spreading and subduction. You need a new paradigm which will be generally perceived in the profession as better than the existing one.

May 29, 2013 at 7:57 PM | Unregistered CommenterEntropic Man

EM

- As I've said before, if the evidence for a hypothesis appears weak to nonexistent, I have every right to point that out without needing to propose an alternative hypothesis complete with experimental evidence to confirm it.

- If 97% of "climate scientists" believe that a hypothesis is true without evidence, their religious faith is their problem, not mine.

- BH poster rhoda has repeatedly asked for experimental evidence of CO2 caused global warming. Her request remains unanswered.

- Out of interest, is there any firm evidence for what caused the warming from the little ice age until say 1950?

- I'm going to read up Murry Salby's recent stuff presented in seminars - have you come across it?

May 29, 2013 at 8:51 PM | Registered CommenterMartin A

Entropic Man (May 29, 2013 at 7:57 PM) said "If you want to falsify CAGW using the natural variation argument AND HAVE IT GENERALLY ACCEPTED BY THE SCIENTISTS you need to have an alternative explaination showing how and why the energy changes we have seen occurred..."

Ah, the inverted 'null hypothesis'... this is not how science works! Falsification means testing a prediction made by the theory against real-world data, not a competing theory.

EM then said "...one which fits the evidence better than CAGW. It also needs to be at least as successful as a predictor, whether forecasting or aftercasting."

Based upon CAGW theory's predictions concerning future global temperatures, the tropospheric 'hot spot' and the warming of Antarctica, the bar for measuring 'success' has been set rather low.

May 29, 2013 at 9:08 PM | Unregistered CommenterDave Salt

Is this the same Phil Jones in this paper? If so, this may interest you:
http://onlinelibrary.wiley.com/doi/10.1002/grl.50425/abstract

Now there’s an interesting line or two from the abstract -

We have ignored all air temperature observations and instead inferred them from observations of barometric pressure, sea surface temperature, and sea-ice concentration using a physically-based data assimilation system called the 20th Century Reanalysis. This independent dataset reproduces both annual variations and centennial trends in the temperature datasets, demonstrating the robustness of previous conclusions regarding global warming.

If this is our Met Office man, he has created a new physical model.

May 29, 2013 at 9:24 PM | Unregistered Commentertckev

EM- I forgot to ask. Why do you believe in AGW?

[If the answer is "because of the evidence", please be specific.]

May 29, 2013 at 9:33 PM | Unregistered CommenterMartin A

"Sceptics nitpick at the models"

ROFLMAO!!

http://pielkeclimatesci.wordpress.com/2011/10/28/a-literature-debate-on-the-quality-of-global-clmiate-model-multi-decadal-predictions/

If you want more nitpicks, use "skill" as a search term at Roger Pielke Senior's blog.

May 29, 2013 at 10:10 PM | Unregistered Commenternot banned yet
