GCMs and public policy
Aug 24, 2014
Bishop Hill in Climate: Models, Climate: Parliament

In the thread beneath the posting about the Chen and Tung paper, Richard Betts left a comment that I thought was interesting and worthy of further thought.

Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.

Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
Everyone* agrees that CO2 rise is anthropogenic.
Everyone** agrees that we can't predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don't know. The old-style energy balance models got us this far. We can't be certain of large changes in future, but can't rule them out either.

So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future - decarbonising or not decarbonising.

A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, years and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.

*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence.

So, let me try to explain why I think GCMs are so important to the policy debate.

Let us start by considering climate sensitivity. As readers here know, the official IPCC position on climate sensitivity is largely based on the GCMs. This time round we have had some minor concessions to observational estimates, but a significant proportion of the probability density of the observational studies remains outwith the IPCC's likely range of 1.5-4.5°C. Proponents of GCMs might counter that the upper end of the GCM range is ignored too, but I would suggest that an ECS of 5-6°C is hard to credit in the light of the temperature history.
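For context, the observational estimates referred to here generally rest on a simple global energy-budget relation rather than on a GCM (this is the approach taken in studies such as Otto et al. 2013):

    ECS ≈ F_2x × ΔT / (ΔF − ΔQ)

where ΔT is the observed surface warming over the industrial period, ΔF the change in radiative forcing, ΔQ the change in the rate of heat uptake by the climate system (mostly the oceans), and F_2x ≈ 3.7 W/m² the forcing from a doubling of CO2. The details vary from study to study, but the estimates are anchored in observations, not in model behaviour.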

Estimates of climate sensitivity - and therefore in practice GCM estimates of climate sensitivity - directly inform estimates of the social cost of carbon. So when people like Chris Hope argue for a carbon tax of $100/tCO2, that figure is ultimately a function of the GCMs. I recall, I hope correctly, that Chris suggested a figure of $18/tCO2 if one used an ECS of 1.6, in line with observational estimates. This matters, of course, because the policy response, if any, to an $18 problem is significantly different to that for a $100 problem.
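To see how directly the sensitivity number drives the answer, here is a deliberately crude sketch (mine, not Chris Hope's PAGE model) in which equilibrium warming follows the standard logarithmic forcing approximation and damages are assumed quadratic in warming. The damage coefficient is arbitrary, picked only so that the low-sensitivity case lands near the $18 figure:

    import math

    def equilibrium_warming(ecs, co2_ratio=2.0):
        """Equilibrium warming (deg C): ECS degrees per doubling of CO2."""
        return ecs * math.log2(co2_ratio)

    def toy_scc(ecs, damage_coeff=7.0):
        """Hypothetical $/tCO2: damages assumed quadratic in warming.
        damage_coeff is an arbitrary illustrative constant."""
        return damage_coeff * equilibrium_warming(ecs) ** 2

    for ecs in (1.6, 3.0, 4.5):
        print(f"ECS {ecs:.1f}C -> toy SCC ~${toy_scc(ecs):.0f}/tCO2")

The absolute numbers mean nothing, but the shape of the dependence is the point: with damages growing faster than linearly in warming, the assumed ECS - and hence, in practice, one's faith in the GCMs - dominates the result ($18 at an ECS of 1.6°C rising to around $140 at 4.5°C in this toy).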

Wherever we look in the interactions between scientists and politicians on climate questions, we see an emphasis on catastrophe. We see no confessions of ignorance, but only occasional reference to uncertainties. Here are some notes of Tim Palmer addressing the All-Party Climate Change Group:

With the amount of carbon dioxide already in the atmosphere, future emissions will need to be reduced to half of historical emissions to limit the global average temperature rise to 2°C. However, if emissions are not curbed (under the business-as-usual scenario), the amount of carbon dioxide in the atmosphere will be three times historical emissions and temperatures might rise by up to 4°C.

And on the other hand they might not. This idea does not, however, seem to have been put forward for consideration.

Readers might also wonder what explanations were given to our political masters on the credibility of the GCMs. Here's what Palmer said:

Climate models are only flawed if the basic principles of physics are, but they can be improved. Many components of the climate system could be better quantified and therefore allow for greater parameterisation in the models to make them more accurate. Additionally, increasing the resolution of models would allow them to represent processes at a finer scale, again increasing the accuracy of the results. However, advances in computing technology would be needed to perform all the necessary calculations. So although the accuracy of predictions could be improved, the underlying processes of the models are accurate.

Apart from the transport of heat to the deep ocean, if Friday's paper from Chen and Tung is to be believed.
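Since "parameterisation" is doing a lot of work in Palmer's account, it is worth spelling out what the word means in practice: processes too small for the model grid are represented by simple empirical formulae with tunable constants. Here is a generic sketch (loosely in the spirit of Sundqvist-type cloud schemes, and not code from any actual GCM):

    def cloud_fraction(grid_mean_rh, rh_crit=0.8):
        """Diagnose sub-grid cloud fraction from grid-box mean relative
        humidity. rh_crit is a tunable constant - exactly the kind of
        knob that gets adjusted when a model is 'tuned'."""
        if grid_mean_rh <= rh_crit:
            return 0.0
        if grid_mean_rh >= 1.0:
            return 1.0
        return 1.0 - ((1.0 - grid_mean_rh) / (1.0 - rh_crit)) ** 0.5

    for rh in (0.7, 0.85, 0.95):
        print(f"grid-mean RH {rh:.2f} -> cloud fraction {cloud_fraction(rh):.2f}")

The value of rh_crit is not derived from "the basic principles of physics"; it is chosen to make the model behave. That is the gap between Palmer's reassurance and what the models actually do.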

You can see that policymakers are getting a thoroughly biased picture of what GCMs can do and whether they are reliable or not. They are also getting a thoroughly biased picture of the cost of climate change based on the output of those GCMs. They are simply not being asked to consider the possibility that warming might be negligible or non-existent or that the models could be complete and utter junk. They are not told about the aerosol fudging or the GCMs' ongoing failures.

And this is just scratching the surface.

[BTW: Could commenters who like to amuse themselves by baiting Richard please refrain from so doing!]
