What the public needs to know about GCMs
Anthony has posted a completely brilliant comment from Robert Brown about the ensemble of climate models, and about a truth concerning them that is never explained to the public:
...until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the ‘confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
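To see concretely what Brown is objecting to, here is a minimal sketch in Python (invented numbers, and not AR5's actual code) of the i.i.d. treatment he describes: average N model outputs and quote a confidence interval that shrinks like 1/sqrt(N). That shrinkage is only legitimate if each model really were an independent, unbiased draw; a shared structural bias survives any amount of averaging.

```python
# A minimal sketch (not AR5's actual procedure) of the statistical
# treatment Brown criticises: treat N model outputs as independent,
# identically distributed draws and quote a confidence interval that
# shrinks like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical global-mean temperature anomalies (degC) from N models.
# Real GCMs share code, forcing datasets and tuning targets, so they
# are anything but independent samples.
n_models = 30
model_anomalies = rng.normal(loc=0.8, scale=0.4, size=n_models)

ensemble_mean = model_anomalies.mean()
std_error = model_anomalies.std(ddof=1) / np.sqrt(n_models)  # the i.i.d. step

print(f"multi-model mean: {ensemble_mean:.2f} degC")
print(f"95% CI under the i.i.d. assumption: "
      f"[{ensemble_mean - 1.96 * std_error:.2f}, "
      f"{ensemble_mean + 1.96 * std_error:.2f}] degC")

# If every model shared a common bias b, each anomaly above would be
# shifted by b, and no amount of averaging would remove it: the interval
# would be confidently centred in the wrong place.
```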
Reader Comments (52)
Ha, Martin. I think Robert Brown has just driven a fair-sized truck through the Met Office's advocacy of the reliability of GCMs, don't you? And given the GCMs' fundamental role in the arguments of the IPCC, and the resulting effusions by policy makers, including our own disastrous monkeying around with energy markets, that's no small beer.
For anyone interested, I have copied and pasted a response from Richard Betts to a question about computer models.
(3) How reliable are computer models, particularly your own?
Depends what you want to use them for. General circulation models have demonstrated a high level of success for weather forecasting on the timescale of a few days. OK, they don't always get it right, but they're doing pretty well these days, especially when ensembles (several models) are used for a few days ahead and beyond, and especially with some expert human interpretation informed by knowledge of model biases and limitations (i.e. don't just treat them as a black box!).
On climate timescales the models are generally looking reasonable for large-scale and multi-year averages, e.g. reproducing decade-by-decade changes in response to external forcing (both anthropogenic and natural). Regional climate changes are much more uncertain, and models often disagree on things like regional rainfall changes, although in some regions there is higher agreement between different models. Part of the problem is that there has not yet been a large enough systematic climate change, in comparison with natural variability, to be able to test the models. Forecasting of internal variability (the natural fluctuations in climate that emerge just through the oceans and atmosphere affecting each other, and not through external forcing) is much more challenging, but this is what we really need to achieve, because the most important requirement is to be able to forecast regional variability at timescales of a season to a few years ahead (a high aspiration!).
Since you ask, the Met Office Hadley Centre models are generally regarded as among the better models.
There is an enormous amount of quantification of climate model performance compared to observations, too much to do it justice here in a short blog post. Sorry if it's a bit of a cop-out, but I really would recommend reading the relevant AR4 chapter, which gives lots of figures and tables showing the strengths and weaknesses of the models used at the time of that report (analysis of the latest models for AR5 has not been published yet).
Interestingly, I think palaeoclimate reconstructions and climate models have opposite problems. The former use point data and the uncertainty is in reconstructing the large-scale signal, whereas the latter are more skilful at large-scale changes and less reliable at individual points.
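Betts's remark above about "higher agreement between different models" in some regions can be made concrete. Here is a minimal sketch (Python, all numbers invented, a hypothetical 20-model ensemble) of the usual sign-agreement count; note that it measures agreement between models, not demonstrated skill against observations.

```python
# A minimal sketch, with invented numbers, of the sign-agreement measure
# commonly used for regional projections: how many models agree on the
# sign of the projected change?
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical projected rainfall changes (%) for one region,
# from a 20-model ensemble.
changes = rng.normal(loc=3.0, scale=8.0, size=20)

mean_change = changes.mean()
agreement = np.mean(np.sign(changes) == np.sign(mean_change))

print(f"multi-model mean change: {mean_change:+.1f}%")
print(f"models agreeing on the sign of the mean: {agreement:.0%}")

# A region where 90% of models agree on the sign gets reported with more
# confidence than one where they split 50/50 -- but this is agreement
# between models, not demonstrated skill against observations.
```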
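His last point, that models are "more skilful at large-scale changes and less reliable at individual points", has a simple statistical component. A minimal sketch, assuming idealised uncorrelated point errors (real model errors are spatially correlated, which weakens the effect): averaging over many grid points cancels much of the point-by-point error.

```python
# A minimal sketch, assuming idealised *uncorrelated* point errors, of why
# area averages can be more reliable than individual grid points: averaging
# cancels part of the point-by-point error. Real model errors are spatially
# correlated, which weakens (but does not eliminate) the effect.
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_points = 2000, 500
# Hypothetical model errors (degC) at each grid point, for many trials.
point_errors = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_points))

rms_single_point = np.sqrt((point_errors[:, 0] ** 2).mean())
rms_area_mean = np.sqrt((point_errors.mean(axis=1) ** 2).mean())

print(f"RMS error at a single grid point: {rms_single_point:.2f} degC")
print(f"RMS error of the area mean:       {rms_area_mean:.2f} degC")
# ~ 1/sqrt(n_points) of the single-point error under these assumptions.
```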