I had an interesting exchange with Richard Betts and Lord Lucas earlier today on the subject of climate model tuning. Lord L had put forward the idea that climate models are tuned to the twentieth century temperature history, something that was disputed by Richard, who said that they were only tuned to the current temperature state.
I think to some extent we were talking at cross purposes, because there is tuning and there is "tuning". Our exchange prompted me to revisit Mauritsen et al, a 2012 paper that goes into a great deal of detail on how one particular climate model was tuned. To some extent it supports what Richard said:
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850-1880 observed global mean temperature of about 13.7°C [Brohan et al., 2006]...
We tune the radiation balance with the main target to control the pre-industrial global mean temperature by balancing the [top of the atmosphere] TOA net longwave flux via the greenhouse effect and the TOA net shortwave flux via the albedo effect.
OK, they are targeting the start of the period rather than the end, but I think that still leaves Richard's point largely intact. However, Mauritzen et al also say this:
One of the few tests we can expose climate models to, is whether they are able to represent the observed temperature record from the dawn of industrialization until present. Models are surprisingly skillful in this respect [Raisanen, 2007], considering the large range in climate sensitivities among models - an ensemble behavior that has been attributed to a compensation with 20th century anthropogenic forcing [Kiehl, 2007]: Models that have a high climate sensitivity tend to have a weak total anthropogenic forcing, and vice-versa. A large part of the variability in inter-model spread in 20th century forcing was further found to originate in different aerosol forcings.
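The compensation Kiehl describes is easy to see with a toy calculation. The sketch below is not from the paper; it simply assumes the standard linear energy-balance relation ΔT ≈ S·F/F₂ₓ (warming proportional to equilibrium sensitivity S times total forcing F, scaled by the forcing for doubled CO₂), and uses made-up sensitivity/forcing pairs chosen to be anti-correlated:

```python
# Toy illustration (hypothetical numbers, not from Mauritsen et al.):
# under a linear energy-balance response dT = S * F / F_2x, a model
# pairing high sensitivity with weak total 20th-century forcing can
# warm by almost exactly the same amount as one pairing low
# sensitivity with strong forcing - so matching the observed record
# does not constrain the two quantities separately.

F_2X = 3.7  # W/m^2, canonical forcing for doubled CO2

models = [
    # (label, climate sensitivity S in K, total forcing F in W/m^2)
    ("high-S / weak-F",   4.5, 1.2),
    ("mid-S / mid-F",     3.0, 1.8),
    ("low-S / strong-F",  2.0, 2.7),
]

for name, S, F in models:
    dT = S * F / F_2X  # implied 20th-century warming
    print(f"{name}: {dT:.2f} K")
```

All three hypothetical models produce about 1.46 K of warming, despite sensitivities that differ by more than a factor of two, which is the point of the anti-correlation: the temperature record alone cannot tell them apart.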
And, as they go on to explain, it is quite possible that a kind of pseudo-tuning - I will call it "tuning" - is going on through the choice of aerosol forcing history used (my emphasis):
It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance. Rational explanations are that 1) either modelers somehow changed their climate sensitivities, 2) deliberately chose suitable forcings, or 3) that there exists an intrinsic compensation such that models with strong aerosol forcing also have a high climate sensitivity. Support for the latter is found in studies showing that parametric model tuning can influence the aerosol forcing [Lohmann and Ferrachat, 2010; Golaz et al., 2011]. Understanding this complex is well beyond our scope, but it seems appropriate to linger for a moment at the question of whether we deliberately changed our model to better agree with the 20th century temperature record.
They conclude that they did not, but effectively note that the models that find their way into the public domain are only those that, by luck, design or "tuning", match the 20th century temperature record.
In conclusion, then, Lord Lucas's original point was in essence correct, so long as you include both tuning and "tuning". Richard's point was correct if you only include tuning.