Tuesday, Feb 23, 2016

Two worlds collide

GWPF have released a very interesting report about stochastic modelling by Terence Mills, professor of applied statistics and econometrics at Loughborough University. This is a bit of a new venture for Benny and the team because it's written with a technical audience in mind and there is a lot of maths to wade through. But even from the introduction, you can see that Mills is making a very interesting point:

The analysis and interpretation of temperature data is clearly of central importance to debates about anthropogenic global warming (AGW). Climatologists currently rely on large-scale general circulation models to project temperature trends over the coming years and decades. Economists used to rely on large-scale macroeconomic models for forecasting, but in the 1970s an increasing divergence between models and reality led practitioners to move away from such macro modelling in favour of relatively simple statistical time-series forecasting tools, which were proving to be more accurate.
In a possible parallel, recent years have seen growing interest in the application of statistical and econometric methods to climatology. This report provides an explanation of the fundamental building blocks of so-called ‘ARIMA’ models, which are widely used for forecasting economic and financial time series. It then shows how they, and various extensions, can be applied to climatological data. An emphasis throughout is that many different forms of a model might be fitted to the same data set, with each one implying different forecasts or uncertainty levels, so readers should understand the intuition behind the modelling methods. Model selection by the researcher needs to be based on objective grounds.
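
For readers wanting a feel for the mechanics, here is a minimal sketch (in Python, using the statsmodels library) of the kind of ARIMA fit-and-forecast exercise the report describes. The file name, column name and model order below are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of an ARIMA fit-and-forecast exercise of the kind the report
# describes. The file name, column name and model order are illustrative
# assumptions, not taken from the paper itself.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual temperature anomaly series (one value per year).
temps = pd.read_csv("hadcrut4_annual.csv", index_col="year")["anomaly"]

# Fit an ARIMA(p, d, q) model; the order chosen here is illustrative only.
result = ARIMA(temps, order=(0, 1, 3)).fit()

# Forecast the next 10 years with a 95% prediction interval.
forecast = result.get_forecast(steps=10)
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))
```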

There is an article (£) in the Times about the paper.

I think it's fair to say that the climatological community is not going to take kindly to these ideas. Even the normally mild-mannered Richard Betts seems to have got a bit hot under the collar.

Reader Comments (109)

As Gavin Schmidt points out, the forecast provided by the report is already wrong.

https://twitter.com/ClimateOfGavin/status/702061603362045952

Feb 23, 2016 at 10:23 AM | Unregistered CommenterDoug McNeall

Forensic analysis of the report by Richard Betts - that'll show 'em

Feb 23, 2016 at 10:34 AM | Unregistered CommenterPatrick

Doug: which forecast is that? This one?

He [Mills] found the average winter temperature in central England, which has the world’s longest temperature records going back to 1659, had increased by about 1C over 350 years. Based on that change, he forecast an additional increase of about 0.25C by 2100. He said the average temperature would continue to be “buffeted about by big shocks” caused by natural events, such as the El Nino weather phenomenon.
I don't know about you, but I haven't reached 2100 yet so I can't say the forecast is wrong. Do you really mean it is at odds with what the MO forecast?

Feb 23, 2016 at 10:36 AM | Unregistered CommenterHarry Passfield

Oh dear, it seems to be a time for ruffled feathers - Cameron and clan yesterday. Now we have another clan shooting from the hip because somebody dares, dares to voice a different view. One clan I can excuse, they are politicians. But the latter are scien.... ? Forget it!

Feb 23, 2016 at 10:53 AM | Registered CommenterGreen Sand

The Times may well be desperate for subscribers, but their printed version still sells more than twice as many as The Guardian of Global Warming.

Feb 23, 2016 at 10:56 AM | Unregistered Commentermichael hart

Harry Passfield

which forecast is that?

The one in which the GWPF hilariously "forecasts" last year's temperatures to be lower than what they actually were....

(see here)

I see Ed Hawkins has asked the Bish whether he agrees with the report's assumption that climate sensitivity is zero. Will be interesting to see the response.

Feb 23, 2016 at 10:59 AM | Registered CommenterRichard Betts

Hmmm... Met Office vs GWPF: it's tough to choose between two different approaches that are both likely to be wrong 5 years from now. Billion-pound model vs free one? Ooh, decisions, decisions.

Feb 23, 2016 at 11:11 AM | Unregistered CommenterTinyCO2

@Feb 23, 2016 at 10:59 AM | Richard Betts
==============================================

Whereas the Met Office's forecasts are always spot on, eh, Richard?

"The programme went on to target a particularly scary prediction, first announced by the Met Office in 2007, that the world's temperature was set to rise from 2004 to 2014 by 0.3C.

The difference being that we get to fund the Met's hopeless forecasting. Hmmm.

Feb 23, 2016 at 11:12 AM | Unregistered CommenterJeremy Poynton

Or these hopelessly inaccurate forecasts - which we, again, had the honour of funding:

http://www.telegraph.co.uk/news/weather/11202650/Millions-for-the-Met-Office-to-carry-on-getting-it-wrong.html


In 2007, its computer predicted that this would be the “warmest year ever”, just before global temperatures temporarily plummeted by 0.7C, equal to their entire net rise in the 20th century. That summer in the UK, it told us, would be “drier than average”, just before some of the worst floods in living memory.
From 2008 to 2010 the models consistently predicted “warmer than average” winters and “hotter and drier summers”: three years when much of the northern hemisphere endured record winter cold and snow; while in the UK, as in that promised “barbecue summer” of 2009, we had summers wetter and cooler than usual. A particular triumph, in October 2010, was the prediction that our winter would be up to “2C warmer than average”, just before the coldest December since records began in 1659.

In November 2011, the computer forecast global temperatures rising over the next five years by up to 0.5C from their 1971-2000 average, a prediction so embarrassingly off-beam that, a year later, it was quietly removed from the Met Office website, replaced with one showing the flat-lining temperature trend as “likely to continue”. In 2012, it told us that spring would, yet again, be “drier than average”, just before the wettest April on record. Last November, the computer predicted that the winter months would be “drier than usual” – then came the wettest three winter months on record. And today, we can measure the success of that 2004 forecast that, by 2014, the world would have warmed by 0.3C – when temperatures have now not risen for 18 years, and not one has got near 1998’s record as the “hottest ever”.

Glasshouses & bricks, Richard?

Feb 23, 2016 at 11:17 AM | Unregistered CommenterJeremy Poynton

Doesn't The Thunderer realise that all Betts are off?

Feb 23, 2016 at 11:19 AM | Unregistered CommenterNCC 1701E

"The one in which the GWPF hilariously "forecasts" last year's temperatures to be lower than what they actually were.."

As opposed to all those Comedy Consensus Central forecasts made by our dear old Met Office, Richard, eh?

Oh how we all laughed during those barbecue summers, the drought floods and that comedy classic with its punch-line that told us about snow being a thing of the past.

Irrespective of whether the GWPF got it wrong or not, at least they didn't get a £97 million bung for a supercomputer from a grateful nation to get us rolling about in the aisles!

Feb 23, 2016 at 11:19 AM | Unregistered CommenterRoyFOMR

Betts goes snarky, McNeall gives us a link to asinine tweets. Interesting!

As for the paper, it is surely a very welcome addition to the literature. I like the link Mills makes to the junky (or if you prefer, Richard, 'trashy') economic models of the 1970s - some of us do recall the hoo-hah generated by 'Limits to Growth' and the odious 'Club of Rome'. I suspect the latter were so encouraged by how supine the mass media/political nexus was when presented with authoritatively presented and highly-scary 'the computer says this' stuff, that they made the most of the subsequent opportunities to try the same PR trick with GCMs. It worked beyond their wildest dreams, but perhaps now we are seeing calmer minds getting stuck into the field, and that may well lead to the GCMs being sidelined just as the monstrous economic models have been.

This paper also provides independent support for the forecasting position taken by Armstrong, who noted the appalling lack of good forecasting practice exhibited by the IPCC (see Green and Armstrong, 2007) and made a public wager that a forecast of 'no change' was methodologically justified for the construct 'global mean temperature' and worth betting on.

Feb 23, 2016 at 11:19 AM | Registered CommenterJohn Shade

If you have just put a 100-million-quid bill into the government for a superer-duperer computer (a computer needed for modelling the interactions of ever smaller and smaller units of sand), and someone says here's a big wave, and this is what it does to the beach, then you might get somewhat defensive.

Feb 23, 2016 at 11:23 AM | Unregistered CommenterMedia Hoar

The science is settled so who needs climate scientists anyway?

Feb 23, 2016 at 11:28 AM | Unregistered CommenterSchrodinger's Cat

The paper nicely shows the limits of time-series forecasting if you ignore useful information about the system from other sources. As a result, the real-world data are already well outside the envelope of prediction uncertainty (and we have good reason to believe that they'll stay there for 2016). This is shown in Gavin's tweet, linked above.

In one of Mills's previous papers, he correctly identifies that a number of time-series models will fit the temperature series. Unfortunately, this prediction shows what happens when you then just use the one that suits your agenda, without adequate justification.

Feb 23, 2016 at 11:30 AM | Unregistered CommenterDoug McNeall

Jeremy Poynton 11.17

Ouch!

Feb 23, 2016 at 11:37 AM | Unregistered CommenterBarbara

Jeremy Poynton, I think Dr Betts level of cheek has reached the level of - people with glass stones shouldn't throw houses.

Feb 23, 2016 at 11:38 AM | Unregistered CommenterTinyCO2

Richard Betts: I'm afraid I'm not a twatter as I feel it demeans your argument to rely on that medium to argue against Mills's article. However, having looked at the tweets (twatter really does make juveniles of you all) I see that GS managed to include a copy of the Times's article - and the fact that the Times included the snark that Mills was paid £3,000 by GWPF for the article. I wonder, how much are you paid (on my tax pound) to twat around and 'publish' 'rebuttals'? How much is Ward paid? (I bet it's a lot more than you).

Anyway, from the twatter stream I have difficulty in seeing what the 'forecast' was - except to say, the chart does not seem to be accurately calibrated.

Feb 23, 2016 at 11:47 AM | Unregistered CommenterHarry Passfield

Had another look at the 'chart' and also Mills's explanation. He says that the average temp will be 1.25 C higher by 2100 - but that it will fluctuate up and down from that average because of things like El Ninos/La Ninas etc. And wasn't 2015/16 a large El Nino year? Would you extrapolate an average T based on that? (Well, warmists might...)

Feb 23, 2016 at 11:54 AM | Unregistered CommenterHarry Passfield

Back in the day I had many a discussion with Richard Betts around the MO's 'Decadal Forecasts' and there was some very good and enlightening discourse.

This was in the days when forecasts were decadal; nowadays it's a 5-year 'Decadal forecast' (lack of computer power/resource, I believe). I recall on at least one occasion the discussion was about where you would expect the greatest accuracy to be: at the start of the decade, say the first year, with the initial starting data well known, or in the last year of the ten?

The discussion arose because the forecast had dropped out of the 'range' during the first year. I was assured this was due to the natural variations which must be expected, and that it was the increase over the whole length of the forecast that was important. Sound familiar?

I gave up on the 5-year 'Decadal Forecast' as it is updated every year, so it was virtually impossible, well it was to me, to ascertain if the forecasts were actually improving - the main reason for my interest.

Being able to forecast our climate is a massive undertaking, and I welcome the fact that it is being researched. I do not welcome the 'certainty' that many in the clan (and it is a clan) put on the present-day ability. They are now impacting on everyday people's lives; for one example, look up Fairbourne.

Feb 23, 2016 at 12:05 PM | Registered CommenterGreen Sand

I sent this email to the BBC's Countryfile in late August, after a really dismal wet week, despite what the Met had promised:

'Is there any chance of repeating last Sunday's weather forecast, and giving us all a good laugh? We arable farmers are all mid-harvest, and the utter ocean-going weapons-grade wrongness of Tomasz Schafernaker's 'weather for the week ahead' has had us all chuckling, as we struggled to clear sopping wet crops from inside bunged up combine harvesters.
In fact, there's a move in the farming community to name the third week of August 'Schafernaker Week' in his honour.'

Quite impressed that the Met were 'dropped' as official forecast suppliers a few days later.

Feb 23, 2016 at 12:20 PM | Unregistered CommenterCharlie Flindt

Please all stop referring to this as a GWPF paper, after all

Views expressed in the publications of the Global Warming Policy Foundation are those of the authors, not those of the GWPF, its Academic Advisory Council members or its directors.

Feb 23, 2016 at 12:25 PM | Unregistered CommenterPhil Clarke

A major reason for my being sceptical of global warming was that the forecast models struck me as having similar characteristics to the economic forecast models of the 1980s. Whether theory-based or time-series, economic forecasting needs some empirical relationships to remain unchanged. They are therefore quite good at forecasting over a few months when an economy is fairly steady, but pretty useless at predicting a large-scale downturn such as the credit crunch. That is, the accuracy is inversely related to the utility. At least with the economic forecasting models there is competition to get the most accurate. When I subscribed to the Economist they used to look at the various forecasts and compare them for accuracy. As a result, forecasters constantly try to bring the models into line with the real world.

Along come the climate models, which are used to forecast well beyond any known data range. The catastrophism of global warming is based on tipping points. In forecasting language, climatologists are predicting discontinuous functions, where empirical generalisations fail. Lack of competition - in fact the active elimination of any potential for competition by many in the climate community - means there is no incentive to recognize the inevitable divergence between models and reality.

Feb 23, 2016 at 12:32 PM | Unregistered CommenterKevin Marshall

Struggling in the undertow, thrashing further out to sea.
===================

Feb 23, 2016 at 12:43 PM | Unregistered Commenterkim

Ben Webster is incorrect to extrapolate Mills' argument to 2100 ("Based on that change, he [Mills] forecast an additional increase of about 0.25C by 2100.") Mills' graphs extend only to 2020, and I see no claims in his paper about 2100. It's not appropriate to project the fitted segmented model further as it is based on a short time period.

I also agree with McNeall and Betts: the approach of a fully stochastic model without any forcing terms presumes insensitivity to effects from greenhouse gases, etc., which is contrary to our knowledge of the physical systems.

Feb 23, 2016 at 12:52 PM | Registered CommenterHaroldW

Kevin Marshall:

Macroeconomic forecasting is not about being accurate, but about producing forecasts that are consistent with being politically acceptable (when produced by the IMF, OECD, or Treasury), or are attempts at producing forecasts that turn out to be close to the consensus opinion. Outlier forecasts (e.g. those made by Nouriel Roubini, "Dr Doom", in the run-up to the financial crisis) are usually largely suppressed and ignored. Large sums are wagered on the basis of the consensus, and the wall of money usually wins. Or, as Keynes allegedly ruefully put it, "The market can remain irrational longer than you can remain solvent".

Feb 23, 2016 at 12:54 PM | Unregistered CommenterIt doesn't add up...

It's worth bearing in mind that even if more statistically-based models are better than GCMs, the world of economic forecasting hasn't exactly covered itself with glory over the last few years. Or ever.

Feb 23, 2016 at 12:57 PM | Unregistered CommenterJonathan Abbott

I don't know about you, but I haven't reached 2100 yet so I can't say the forecast is wrong. Do you really mean it is at odds with what the MO forecast?

The forecast has upper and lower limits plotted; observed temperatures are already outside the limits.

For consistency, there should now be opprobrium heaped high on the forecast. Christopher Monckton should do one of his tricked up 'forecast clocks'. [Though, to be truly consistent, Monckton should plot some other prediction altogether]

Feb 23, 2016 at 1:10 PM | Unregistered CommenterPhil Clarke

Skimming through the paper, it seems to me that the joke is on Schmidt and Betts. The author compared ARIMA and segmented trend models for each data set. Schmidt shows Fig 6 (segmented trend model for HADCRUT4) in which actual temps do indeed break through the upper bound in Q3 2015 (note that actual data up to April 2015 was available when the paper was written). Schmidt conveniently cuts off Fig 5, the corresponding ARIMA model for HADCRUT4, in which, funnily enough, current temps would (just) be within the upper bound. The author states very clearly in relation to Figs 5 and 6 that '...these examples effectively illustrate how alternative models can produce different forecasts having different levels of precision.' The same message is repeated in the summary.

As others have pointed out, Dr Betts, there is much hilarity to be had from studying the outcome of Met Office forecasts. Perhaps you should spend more time on improving them. Twitter can make anyone look foolish.

Feb 23, 2016 at 1:12 PM | Unregistered CommenterDaveS

Schmidt conveniently cuts off Fig 5, the corresponding ARIMA model for HADCRUT4, in which, funnily enough, current temps would (just) be within the upper bound.

The joke is on you. Click on Gavin's graphic to expand - he shows both Figures and HADCRUT4 for December 2015 was >1.0C, comfortably outside the 95% upper bound of both. The forecast has already failed.

Feb 23, 2016 at 1:48 PM | Unregistered CommenterPhil Clarke

Well, it took Gavin a lot more time to answer PaulK's questions on RC, actually evading his questions....

Feb 23, 2016 at 1:52 PM | Unregistered CommenterHoi Polloi

Professor Mills's analysis is not at all surprising. This is what happens when someone who actually knows statistics in detail looks at climate data.

Similar findings came from Professor Wegman, who sharply criticised the statistical methods behind Michael Mann's infamous "Hockey stick" paper.

And look how many times Steve McIntyre has impaled the statistically unsound practices of much climate "science".

Of course the usual suspects and troughers have started squealing.

Feb 23, 2016 at 1:52 PM | Unregistered CommenterBitter&Twisted

If you look at the forecasts of the Met Office, made with a computer that cost as much as it would cost to feed a mid-sized town, then you realise that Dr Betts could do with some introspection before he indulged himself in schadenfreude.

Feb 23, 2016 at 1:58 PM | Unregistered CommenterHoi Polloi

Phil Clarke appears not to understand that a 95% confidence interval implies that 1 in 20 observations are expected to lie outside it. Is he forecasting that El Niño will keep temperatures permanently above the model confidence limit until 2020?

Feb 23, 2016 at 2:00 PM | Unregistered CommenterIt doesn't add up...

The joke is on you. Click on Gavin's graphic to expand - he shows both Figures and HADCRUT4 for December 2015 was >1.0C, comfortably outside the 95% upper bound of both. The forecast has already failed.
I must have missed that forecast: where did Mills forecast the temp for 2015?

Based on your understanding of things, Phil Clarke, I bet you think that money invested in shares can only ever go up - never, ever, down.

Feb 23, 2016 at 2:05 PM | Registered CommenterHarry Passfield

DaveS - I agree, in the rush to falsify a "prediction" which he didn't make, Mills' point was rather lost.

The problem I have with figure 5, though, is that the model to which it refers contains an integration term (which makes it unphysical, in my opinion), and the uncertainty derived from that model is so large as to make it unhelpful in the extreme. By 2020, the uncertainty is already +/-0.6 deg C, which encompasses pretty much all reasonable possibilities including a massive volcanic explosion.

Feb 23, 2016 at 2:11 PM | Registered CommenterHaroldW

I must have missed that forecast: where did Mills forecast the temp for 2015?

Strewth, is it so hard to read the paper? Figs 5 & 6.

Feb 23, 2016 at 2:12 PM | Unregistered CommenterPhil Clarke

Phil Clarke, your same comment could so easily be passed back to you. You say "Strewth, is it so hard to read the paper?". Clearly not, but then it must take special effort to deliberately misrepresent the content in the way you have. Either that, or you suffer from a severe English comprehension problem.

But let's allow Mills to say it properly. The starting paragraph of the Section 8 Discussion is:

The central aim of this report is to emphasise that, while statistical forecasting appears highly applicable to climate data, the choice of which stochastic model to fit to an observed time series largely determines the properties of forecasts of future observations and of measures of the associated forecast uncertainty, particularly as the forecast horizon increases. The importance of this result is emphasised when, as in the examples presented above, alternative well-specified models appear to fit the observed data equally well – the ‘skinning the cat’ phenomenon of modelling temperature time series.

Read very carefully the sentence I have emphasised in bold. This is about pointing out, by a series of suitably chosen examples, that the results depend very much on the model choice and that these choices are equally admissible statistically.

The penultimate paragraph in the Discussion is also very important to comprehend:

What the analysis also demonstrates is that fitting a linear trend, say, to a preselected portion of a temperature record, a familiar ploy in the literature, cannot ever be justified. At best such trends can only be descriptive exercises, but if the series is generated by a stochastic process then they are likely to be highly misleading, will have incorrect measures of uncertainty attached to them and will be completely useless for forecasting. There is simply no substitute for analysing the entire temperature record using a variety of well-specified models.

Again my bold.

I am also very pleased to see Mills explicitly discussing the stationarity question, a subject of which I know a little. Readers interested in the technical aspects of the models shown by Mills might also like to look up Douglas Keenan's neat summaries of the problems of fitting models and assigning uncertainty in his note on "Is a Line Trending Upward" which can be found at:

http://www.informath.org/Trends.pdf

Feb 23, 2016 at 2:35 PM | Registered Commenterthinkingscientist
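
As an aside, here is a minimal sketch of the 'skinning the cat' point quoted above: two ARIMA specifications that both fit the same series adequately can imply very different forecast uncertainty. The simulated series and the particular orders below are illustrative assumptions, not Mills's choices.

```python
# A rough illustration (not from the paper) of the 'skinning the cat' point:
# two ARIMA specifications fitted to the same data can imply very different
# forecast uncertainty. The series here is simulated for brevity.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Simulated stand-in for an annual temperature anomaly record.
series = pd.Series(np.cumsum(rng.normal(0.002, 0.1, 160)))

# A stationary AR(1) model versus a difference-stationary IMA(1,1) model.
for order in [(1, 0, 0), (0, 1, 1)]:
    res = ARIMA(series, order=order).fit()
    ci = res.get_forecast(steps=20).conf_int(alpha=0.05)
    width = float(ci.iloc[-1, 1] - ci.iloc[-1, 0])
    print(f"ARIMA{order}: 95% interval width 20 steps ahead = {width:.2f}")
```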

It would be misleading to give the impression that central banks etc. had given up 1970s-style multi-equation macroeconomic models for time-series models. The widely adopted DSGE models have replaced the hundreds/thousands of microeconomic equations with fewer, aggregated, equations with dynamic uncertainty built into the model using Bayesian methods (the DS part). The lack of a sophisticated financial sector was implicated as one reason for their poor performance post-2006. It might be amusing, therefore, to quote Prof John Taylor (Stanford) on how the "hindcasting" (in climate speak) is going:


"One surprising finding,—......—is that when you add such financial factors to the mainline macro models used at central banks, they do not help that much in explaining the financial crisis. To paraphrase simply, they can change the financial crisis from something like a 6-sigma event in the models to a 3-sigma event—an improvement but not ready to help much help in the next crisis."

Models, heh!

Feb 23, 2016 at 2:39 PM | Unregistered Commenterbasicstats

McNeall: "As Gavin Schmidt points out, the forecast provided by the report is already wrong."

Clearly, neither Schmidt nor McNeall have even read the simple article in The Times, where Mills said: "the average temperature would continue to be buffeted about by big shocks caused by natural events, such as the El Nino"

Feb 23, 2016 at 2:50 PM | Registered CommenterAlbert Stienstra

Phil Clarke is now going to claim accuracy for Mann's Hockey Stick and Hansen's Scenarios A, B and C, two of the best predictive 'tools' in Climate Science.

Feb 23, 2016 at 2:52 PM | Unregistered Commentergolf charlie

"Strewth, is it so hard to read the paper?"
Says the man who won't (as in, I'm a coward) read HSI.

IAC, there is not a form of words that says "the forecast is x for 2015". What you have is a chart, AFAICS.

Feb 23, 2016 at 3:02 PM | Registered CommenterHarry Passfield

I also agree with McNeall and Betts: the approach of a fully stochastic model without any forcing terms presumes insensitivity to effects from greenhouse gases, etc., which is contrary to our knowledge of the physical systems.

Feb 23, 2016 at 12:52 PM | Harold

Presuming insensitivity is one thing, quite another thing is assuming that sensitivity is very high, with no proof whatsoever. Wouldn't you say?

Feb 23, 2016 at 3:04 PM | Unregistered CommenterJeremy Poynton

Love it. First Bob 'fast fingers' Ward. Now Richard 'hilarious forecasts' Betts.

Feb 23, 2016 at 3:08 PM | Registered Commenterflaxdoctor

To summarise the paper: we've seen all this before, with people attempting to use complex models and failing. Instead, based on what works in other areas, the best model is this: "it's natural variation" (described using one of several models for natural variation).

Feb 23, 2016 at 3:18 PM | Unregistered Commentermike Haseler

a 95% confidence interval implies that 1 in 20 observations are expected to lie outside it.

Thanks, Sherlock. It was Mills who chose to plot the 95% interval, in line with the de facto statistical standard. As McKitrick wrote in the foreword:

In this insightful essay, Terence Mills explains how statistical time-series forecasting methods can be applied to climatic processes. The question has direct bearing on policy issues since it provides an independent check on the climate-model projections that underpin calculations of the long-term social costs of greenhouse gas emissions. In this regard, his conclusion that statistical forecasting methods do not corroborate the upward trends seen in climate model projections is highly important and needs to be taken into consideration

Presumably the result that the statistical model is now not corroborated by observations is 'highly important and needs to be taken into consideration'?

The central aim of this report is to emphasise that, while statistical forecasting appears highly applicable to climate data, the choice of which stochastic model to fit to an observed time series largely determines the properties of forecasts of future observations and of measures of the associated forecast uncertainty, particularly as the forecast horizon increases. The importance of this result is emphasised when, as in the examples presented above, alternative well-specified models appear to fit the observed data equally well – the ‘skinning the cat’ phenomenon of modelling temperature time series

And when the model ceases to fit the observations, at the 95% level? Strong evidence that you've chosen the 'wrong' model to fit. Try adding in some Physics.

Feb 23, 2016 at 3:23 PM | Unregistered CommenterPhil Clarke

Sorry Phil Clarke, just remind us how you think you can, in any meaningful way, fit a model to the output of "a coupled non-linear chaotic system" as the IPCC likes to describe the climate. Note also that they go on to say "and therefore the long-term prediction of future climate states is not possible".

But just keep sending us money anyway. Oh - and try adding in some Physics. You can always take up this argument with Rob Brown at Duke if you like.

And just as a final food-for-thought from Mills' report:

If the process is a random walk, the optimal forecast of any future value is the last observed value of the series.

On a time scale of years, decades or even several hundred years, is climate a random walk, Phil? And whether you answer Yes or No, how would you prove it?

Feb 23, 2016 at 3:40 PM | Registered Commenterthinkingscientist
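
As an aside on the random-walk quote above, a minimal sketch: for a pure random walk the best point forecast at every horizon is simply the last observation, while the 95% band widens with the square root of the horizon. The simulated series and noise level below are assumptions for illustration only.

```python
# Sketch: the optimal forecast of a random walk is the last observed value,
# and the 95% band widens with the square root of the forecast horizon.
# The series here is simulated; it stands in for any real temperature record.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                   # assumed innovation standard deviation
walk = np.cumsum(rng.normal(0, sigma, 350))   # simulated 350-year random walk

last = walk[-1]
horizons = np.arange(1, 11)
half_width = 1.96 * sigma * np.sqrt(horizons)

for h, w in zip(horizons, half_width):
    print(f"{h:2d} years ahead: forecast {last:.2f}, 95% interval +/- {w:.2f}")
```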

Highly selective quotation. Only a matter of time.

The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.  Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

Which is what the IPCC does. Rather well.

Feb 23, 2016 at 3:56 PM | Unregistered CommenterPhil Clarke

Yet again Betts shows himself to be another climate clown who doesn't do science any more. Sack the lot of them at the Met Office I say and save us taxpayers a fortune. They are afraid of a CSIRO effect.

Feb 23, 2016 at 4:36 PM | Registered CommenterPhillip Bratby
