Discussion > The Sceptical General Circulation Model

They partition the available historical data into 'training' and 'verification' periods. The training data is used to tune the input parameters until the model matches the trend in the training data.

Then the model is run against the verification data to make sure that the model still performs outside the training range, with no further tweaks made to the parameters. After that, it moves into hindcasting and forecasting. Hindcasting can be used as further verification.

Unfortunately, with such a short historical record in decadal detail, training periods tend to be long, and verification periods shorter than you would want.
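A minimal sketch of that workflow, purely illustrative: the "model" here is just a fitted trend line on synthetic data, not a GCM, but the tune-on-training, check-on-verification sequence is the same.

```python
import numpy as np

# Toy illustration of the training/verification split: the "model" is just a
# linear trend with tunable coefficients; a real GCM tunes many physical
# parameterisations instead.
rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
temps = 0.007 * (years - 1900) + 0.1 * np.sin(0.3 * years) + rng.normal(0, 0.1, years.size)

train = years < 1980          # long training period
verify = ~train               # shorter verification period

coeffs = np.polyfit(years[train], temps[train], 1)   # "tune" on the training data only
model = np.poly1d(coeffs)

# No further tweaks: see how the tuned model does on the verification period.
rmse_train = np.sqrt(np.mean((model(years[train]) - temps[train]) ** 2))
rmse_verify = np.sqrt(np.mean((model(years[verify]) - temps[verify]) ** 2))
print(f"RMSE, training period:     {rmse_train:.3f}")
print(f"RMSE, verification period: {rmse_verify:.3f}")
```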

It's no surprise to me that most models are starting to wander.
Jan 14, 2014 at 7:11 PM TheBigYinJames

All the Met Office does is use their models to produce forecasts with a huge error band, then match them to the "actual" within another large error band and note that they match most of the time. This they call validation. The percentage match is directly related to the size of the error bands.
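To put made-up numbers on that point: even a forecast with no skill at all, uncorrelated with the "actual", will "validate" most of the time once the error band is wide enough.

```python
import numpy as np

# Entirely made-up numbers: a "forecast" that is uncorrelated with the "actual"
# still "matches" most of the time when the error band is wide enough.
rng = np.random.default_rng(1)
actual = rng.normal(0.0, 0.2, 1000)       # pretend observed anomalies
forecast = rng.normal(0.0, 0.2, 1000)     # forecast with no skill whatsoever

for band in (0.1, 0.3, 0.5):
    matched = np.mean(np.abs(actual - forecast) < band)
    print(f"error band of {band}: {100 * matched:.0f}% of forecasts 'validated'")
```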
Jan 14, 2014 at 7:24 PM NW

Because of the short history of measurements available, 'testing on the training data' is pretty well inevitable with climate models. The training data is used for parameterisation of poorly understood physical effects as well as for final tuning of the model. The Met Office has proudly stated that the ability of the models to reproduce past climate provides validation for them.

But 'testing on the training data' is a fallacy that has been recognised at least since the early 1970s as giving hopelessly optimistic estimates of the accuracy of the predictions of a model. It tests how well the model has been tuned to reproduce the training data, not how well it represents the physical reality. So far as I can make out, it is a fallacy that has simply not been recognised as an issue in climate modelling.
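A toy demonstration of the fallacy, with synthetic data standing in for the climate record: a heavily parameterised model tuned on the data looks far more accurate when tested on that same data than when tested on genuinely independent data from the same process.

```python
import numpy as np

# Synthetic stand-in for a climate record: a smooth signal plus noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

# "Tune" a heavily parameterised model on all of the data, then test it on
# that same data -- i.e. testing on the training data.
fit = np.poly1d(np.polyfit(x, y, 9))
err_training = np.sqrt(np.mean((fit(x) - y) ** 2))

# Genuinely independent data from the same underlying process tells another story.
y_independent = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
err_independent = np.sqrt(np.mean((fit(x) - y_independent) ** 2))

print(f"apparent error (tested on training data): {err_training:.2f}")   # optimistic
print(f"error on independent data:                {err_independent:.2f}")
```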


-------------------------------------------------------------------------
Added: And even if the training and testing periods don't overlap, correlation due to slow-acting physical effects means the fallacy still applies, because the testing and training data are not independent.

Anyway, a bucket of tile cement awaits...

Jan 14, 2014 at 9:32 PM | Registered CommenterMartin A

Big, I don't think that is quite the same thing, although it is a subtle difference. In a GCM the resulting temperature projections are an emergent property, not a design goal. It is true that if the hindcast doesn't match historic data then the model is not working very well, but the objective is to model the physical properties of the climate system.

Your description of training and verification phases might well apply to any model for all I know. Are the methods public? If the algorithms and data are kept secret, do the modeling teams all publish their verification methods? Can you point me to them? And also to documentation of GCMs' failures of "numerous sanity checks, from the way that distributed systems are represented numerically to the software development methodology" as mentioned by Martin.

So how are you going to improve the handling of the unpredictable (discussed above) in your model? Will you not need to do that if you hope to improve on current GCMs?

I still have not seen any concrete evidence that the models are as bad as people here claim. With the degree of emphasis that I have seen put on denigrating models and modelers, I would have thought it would be easy to present convincing evidence. Where is it?

Jan 15, 2014 at 12:40 AM | Unregistered CommenterChandra

Chandra, why discuss the existing models, unless you want to learn from them? How would you learn anything if they are already everything a model can be?

The temperature field at the climatic timescale can emerge as a predictable property only if the models successfully emulate all elements that interact to produce it in the real world. This is not possible.

Instead, why not pursue creating models that produce what looks like the earth's temperature with no consideration for trying to emulate land, ocean, sunlight, atmosphere etc, i.e., with no consideration of the basic components of the climate system? Clearly, we are not capable of bringing together these elements and making them interact meaningfully in a predictive fashion.

Jan 15, 2014 at 1:50 AM | Registered Commentershub

Chandra

do the modeling teams all publish their verification methods? Can you point me to them?

I'm too lazy to look them up when you can do it yourself.

Jan 15, 2014 at 9:22 AM | Unregistered CommenterTheBigYinJames

Shub, nobody has ever said that current GCMs are already "everything a model can be". You seem to like making statements that cannot be justified.

Instead, why not pursue creating models that produce what looks like the earth's temperature with no consideration for trying to emulate land, ocean, sunlight, atmosphere etc, i.e., with no consideration of the basic components of the climate system?
Was that a joke?

Big, I have the impression that the grand statements about model failure, inadequate testing methodology or failing basic tests are no more than bluster. There seems to be no concrete evidence. I think you might have seen discussion of one or two models and have extrapolated that to fit the whole gamut of models. Similarly, Martin's supposed basic failures are most likely the result of some publicity for one model and he has tarred all models with the same brush. In short, I don't believe any of these claims unless you produce some evidence that extends to the majority of models.

It is much the same as the argument that reducing our use of fossil fuels and switching to renewables will cause "economic ruin". I have seen these claims made here and elsewhere but there is never any evidence produced.

In both arguments, people here seem to be all hat and no cattle.

Jan 15, 2014 at 4:58 PM | Unregistered CommenterChandra

Not at all, Chandra. I see that they are all running warmer than reality; that is their failure mode. I may speculate on the reasons, but to be honest I don't care.

Jan 15, 2014 at 5:05 PM | Unregistered CommenterTheBigYinJames

A few days ago you seemed to care enough to consider writing your own model, which has to involve many man-years of effort. Now you don't care. Oh, the capricious nature of that which delights your mind ;-)

On that "running warmer" thing, we discussed above that it might well be due to the unpredictable forcings (ENSO, solar, volcanic) of climate. You still didn't say how you are planning to adress this in your model. If I were to be unkind, I might say that you have lost interest in the project because you now see that there is no good solution to that connundrum. Thankfully I am not so ungenerous of spirit :-)

Jan 15, 2014 at 11:02 PM | Unregistered CommenterChandra

Chandra, apparently you've successfully talked yourself into complete irrelevance again.

TBYJ, in my view a GCM cannot be skilful where it needs to be, no matter who develops it, because modelled theory and actuality will always be distinct. In order to beat the GCMers at their own game we would need to have something they don't have. And it's something nobody can have. And that's the point.

GCMs are everything they ever can be, relative to what they need to be. The purpose of the investment in GCMs is to inform policy on a useful level; to predict, with accuracy, what the future holds. They needed to be able to peer through the chaotic uncertainty and pluck out what WILL happen - different from what MIGHT happen. They were never going to be able to do that.

Jan 16, 2014 at 2:19 AM | Registered CommenterSimon Hopkinson

The presently-accepted 'cause-effect' framework in climate model development should be abandoned. Many of the cause and effect entities are guessed on a retrospective basis, and are therefore useless, predictively speaking.

Jan 16, 2014 at 2:29 AM | Registered Commentershub

Chandra, I said I don't care about the specifics of their failures. I wasn't going to use their techniques, so it doesn't matter to me why they are failing.

Simon,

I agree: the system is stochastic in detail, so using a numerical method was never going to work. But I believe there may be some deterministic overall metrics that emerge from the chaos, the trends of which can be modelled. I still believe temperature is one of them.

I wasn't going to use numerical methods, though; I was thinking more along the lines of pattern recognition (Hopfield nets, etc.).
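For anyone unfamiliar with them, here is a minimal Hopfield-net sketch (Hebbian storage plus asynchronous recall). The stored patterns are arbitrary ±1 vectors, not climate data, so this only illustrates the mechanism, not the proposed application.

```python
import numpy as np

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(3, 64))      # three arbitrary stored patterns

# Hebbian weight matrix with zero diagonal
W = (patterns.T @ patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    """Asynchronously update units until (with luck) a stored pattern is recovered."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(state.size):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt one stored pattern and see whether the net recovers it.
noisy = patterns[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1
print("bits recovered:", int(np.sum(recall(noisy) == patterns[0])), "/ 64")
```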

Jan 16, 2014 at 9:25 AM | Unregistered CommenterTheBigYinJames

Yes, I'm with shub there.

Judith Curry comments on a Nobel Prize lecture by an economist: 'The Pretence of Knowledge' by Friedrich August von Hayek. It spells out the difficulties of trying to analyse problems that are really too complex, with too many unknowns, to be handled as physics problems.

It lists some of the effects that result from attempting to handle such problems as if they were physics problems:

- Detailed study of variables that can be measured, while disregarding variables of greater importance that cannot be measured.

- Acceptance of bogus theory because it appears 'scientific', with rejection of valid theory because of lack of quantitative data.

- Excessive expectation and overconfidence in the ability of the methods of physics to analyse systems of great complexity.

____________________

Here are a couple of suggestions that could perhaps lead to something....

1. Use linear system identification methods to identify the transfer functions relating various observed physical climate quantities.

(Yes, I know the system is nonlinear. But so are all physical systems, and there may be some aspects that can realistically be represented as linear time-invariant systems, and so have definable and, in principle, identifiable transfer functions. A sketch of what this might look like appears below, after suggestion 2.)

2. Derive simple nonlinear dynamic models with feedback (just two or three state variables, a few at most) that can reproduce the statistics of historic temperature variations. A second sketch below illustrates this.
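A sketch of what suggestion 1 could look like in the simplest possible case, assuming a first-order ARX structure and synthetic stand-in series; real observed climate quantities, and a richer model structure, would take their place.

```python
import numpy as np

# Fit T[k] = a*T[k-1] + b*F[k] + e by least squares, relating a "forcing" series F
# to a "temperature" series T, then read off the implied transfer function.
# Both series are synthetic; the true system uses a = 0.8, b = 0.5.
rng = np.random.default_rng(4)
n = 500
F = rng.normal(size=n)
T = np.zeros(n)
for k in range(1, n):
    T[k] = 0.8 * T[k - 1] + 0.5 * F[k] + rng.normal(0, 0.05)

# Identification from the observed records only
X = np.column_stack([T[:-1], F[1:]])
a_hat, b_hat = np.linalg.lstsq(X, T[1:], rcond=None)[0]
print(f"identified a = {a_hat:.2f}, b = {b_hat:.2f}")
print(f"implied transfer function: H(z) = {b_hat:.2f} / (1 - {a_hat:.2f} z^-1)")
```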
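And a sketch of suggestion 2: a two-state stochastic model with a slow feedback and a nonlinear damping term. The parameters are purely illustrative; the test would be whether the output statistics match those of the historic temperature record.

```python
import numpy as np

# Two state variables: x is a fast "surface" state, y a slow "ocean" state.
# The cubic term makes the damping nonlinear; y feeds back into x.
rng = np.random.default_rng(5)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    x[k] = x[k - 1] + 0.1 * (-x[k - 1] + 0.5 * y[k - 1] - 0.2 * x[k - 1] ** 3) + 0.1 * rng.normal()
    y[k] = y[k - 1] + 0.01 * (x[k - 1] - y[k - 1])

# Compare these statistics with those of the historic record.
print("variance:", round(float(np.var(x)), 3))
print("lag-1 autocorrelation:", round(float(np.corrcoef(x[:-1], x[1:])[0, 1]), 3))
```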

Jan 16, 2014 at 10:19 AM | Registered CommenterMartin A