It's better than we thought
A new paper by Gillett et al. finds that the transient climate response (i.e. short-term climate sensitivity) is lower than previously thought. From the abstract:
Projections of 21st century warming may be derived by using regression-based methods to scale a model's projected warming up or down according to whether it under- or over-predicts the response to anthropogenic forcings over the historical period. Here we apply such a method using near surface air temperature observations over the 1851–2010 period, historical simulations of the response to changing greenhouse gases, aerosols and natural forcings, and simulations of future climate change under the Representative Concentration Pathways from the second generation Canadian Earth System Model (CanESM2). Consistent with previous studies, we detect the influence of greenhouse gases, aerosols and natural forcings in the observed temperature record. Our estimate of greenhouse-gas-attributable warming is lower than that derived using only 1900–1999 observations. Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways. Repeating our attribution analysis with a second model (CNRM-CM5) gives consistent results, albeit with somewhat larger uncertainties.
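For readers who want to see the mechanics, here is a minimal, self-contained sketch of the regression-scaling idea the abstract describes: regress the observations on the model's simulated responses to each forcing, then multiply the model's raw TCR and projection by the greenhouse-gas scaling factor. This is a simplified illustration, not the authors' code; all data and numbers below are invented for the example.

```python
# Minimal sketch of the regression-based scaling idea described in the
# abstract (a simplified "detection and attribution" regression).
# All series and numbers here are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_years = 160  # e.g. 1851-2010

# Hypothetical model-simulated responses to each forcing (deg C anomalies)
ghg = np.linspace(0.0, 1.2, n_years)              # greenhouse-gas response
aer = np.linspace(0.0, -0.5, n_years)             # aerosol cooling response
nat = 0.1 * np.sin(np.linspace(0, 20, n_years))   # natural (solar/volcanic)

# Hypothetical observations: the model over-predicts the GHG response
# (true scaling 0.8), plus noise standing in for internal variability.
obs = 0.8 * ghg + 1.0 * aer + 1.0 * nat + rng.normal(0, 0.1, n_years)

# Ordinary least squares for the scaling factors beta:
#   obs ~ beta_ghg*ghg + beta_aer*aer + beta_nat*nat
X = np.column_stack([ghg, aer, nat])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
beta_ghg = beta[0]

# Scale the model's transient climate response and its 21st-century
# projection by the GHG scaling factor.
model_tcr = 2.4          # illustrative model TCR, deg C
model_proj_2100 = 3.0    # illustrative raw 2100 projection, deg C

print(f"GHG scaling factor: {beta_ghg:.2f}")
print(f"Observationally constrained TCR: {beta_ghg * model_tcr:.2f} C")
print(f"Scaled 2100 projection: {beta_ghg * model_proj_2100:.2f} C")
```

The point is simply that the model's raw numbers get multiplied by how well it reproduced the observed greenhouse-gas response over the historical period; a scaling factor below one pulls both the TCR estimate and the 21st-century projection down.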
It's still a model though, isn't it?
Reader Comments (7)
My usual thought of 'why oh why does anyone take these climate models so seriously?' came to mind, and then I checked. All the authors are from a climate modelling centre in Canada. They have to take them seriously; their jobs depend on it.
It's still a model though, isn't it?
It depends on your definition of "model". Sort of like calling self-stimulation "sex". Fun, but doesn't produce much.
John Shade asks 'why oh why does anyone take these climate models so seriously?'
I think it does not matter in this case whether you take the models seriously or not. You are debating a question that arises within the research programme based on the use of such models; in that context, the paper shows that the same principles used (e.g. by the IPCC) to project catastrophic warming over the 21st century, combined with a better calibration of model results against the historical record, give a much lower degree of warming. For those who customarily use models for climate projection, this is a powerful argument: they believe that the historical record is untainted by station siting, UHI or debatable ex post adjustments, they are using the Concentration Pathways adopted by the IPCC for AR5, and they also believe that the models in question are useful instruments for climate projection. Thus they should consider this argument. By contrast, a more radical argument against the use of models, or a lengthy and inconclusive debate on the quality of station data and UHI, is unlikely to have much immediate effect.
For those reasons I think the argumentative line of these papers is correct, irrespective of how strongly authors or readers believe in the validity of the models or data used.
Sort of puts the perverse selection of the Bayesian priors for AR4 sensitivity in perspective. And that was deliberate.
==============
This breaks the 1st law of Climatology
Suppose everyone in the world agreed on the value of CO2 sensitivity. It wouldn't help much, would it? Let's say we all agreed the sensitivity was 1.5 degrees. Then if we doubled CO2 from pre-industrial levels, we would expect the global temperature to be 1.5 degrees above... above what? Above the temp of the MWP? Above the temp of the LIA? We don't know why the global temp bobbles around in the absence of any CO2 forcing. So even if we knew the effect of the forcing, we'd be none the wiser as to what the future climate would be like.
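To make the arithmetic in that comment explicit, here is a toy calculation assuming a 1.5 degree transient sensitivity and the standard logarithmic CO2 relation; the baseline offsets for the MWP and LIA are invented purely for illustration.

```python
# Toy illustration of the commenter's point: even with an agreed
# sensitivity, the projected temperature depends on an unknown baseline.
# The sensitivity, CO2 values and baseline offsets are all illustrative.
import math

tcr = 1.5           # assumed sensitivity, deg C per doubling of CO2
co2_pre = 280.0     # pre-industrial CO2, ppm
co2_future = 560.0  # a doubling

forced_warming = tcr * math.log2(co2_future / co2_pre)  # = 1.5 C

# 1.5 C above what? The unforced baseline is itself uncertain.
for label, baseline_anomaly in [("MWP-like baseline", +0.3),
                                ("LIA-like baseline", -0.4)]:
    print(f"{label}: projected anomaly ~ {baseline_anomaly + forced_warming:.1f} C")
```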
My fecking computer ate my previous comments, so apologies if this is a quadruple post.