The empty set
Readers will recall my posts on two recent papers which looked at how climate models simulated various aspects of the climate system, using these to draw inferences about our future. The Sherwood et al paper picked the models that best simulated clouds and showed that these predicted that the future would be hot. "Planet likely to warm by 4C by 2100", wailed the Guardian. Meanwhile, the Cai et al paper picked the models that best simulated extreme rainfall and showed that these predicted more frequent extreme El Nino events. "Unchecked global warming 'will double extreme El Niño weather events'", the Guardian lamented.
Reader Patagon wondered, not unreasonably, which models fell at the intersection of "best climate model simulation of clouds" and "best climate model simulation of extreme rainfall", and his question prompted the following response from Nic Lewis:
I was also wondering that. So I've cross-referred between the new Cai et al. ENSO/extreme rainfall paper, and the recent Sherwood et al. paper tracing the spread in climate sensitivity to atmospheric convective mixing and implying therefrom that climate sensitivity is over 3°C.
The Cai paper analyses 40 CMIP3 (last generation - AR4) and CMIP5 (latest generation - AR5) models. Out of those 40, it selects 20 that are able to produce the high rainfall skewness and high rainfall over the Nino3 region (Supplementary Tables 1 and 2). It finds that those 20 models generate twice as many extreme ENSO events in the 100 years after 1990 as in the 100 years before 1990.
The Sherwood paper shows 7 CMIP3 and CMIP5 models that have a lower-tropospheric mixing index, their chosen measure, falling within their observational uncertainty range (Figure 5(c)). It takes a little effort to work out which models they are, as some of the colour codes used differ little. For the record, I make them to be: ACCESS1-3, CSIRO-Mk3-6-0, FGOALS-s2, HadGEM1, IPSL-CM5A-LR, MIROC3-2-HIRES and MIROC-ESM.
Two of the seven models that Sherwood's analysis favours are not included in the Cai paper. Of the other five, by chance you might typically expect two or three to be in the 50% (20 out of 40) of models that Cai's analysis favours. But in fact not one of those five models is.
So the answer is that there are NO MODELS at the intersection of "best lower-tropospheric mixing" and "best simulation of extreme rainfall etc".

So, if the Sherwood and Cai analyses are valid, it looks as if with CMIP3 and CMIP5 models you have a choice. You can find selected models that have realistic lower-tropospheric mixing, strong positive low cloud feedback and high climate sensitivity. Or you can choose models that produce realistically high rainfall skewness and rainfall over the Nino3 region and generate a large increase in extreme ENSO events with global warming. But you can't have both at once.
Of course, the real world climate system may differ so much from that simulated by any CMIP3 or CMIP5 model that the Sherwood and Cai results have little relevance.
FWIW, if one assumes a binomial distribution with each of the five models favoured by Sherwood's analysis having a 50% chance of being favoured by Cai's analysis (no better or worse than average), then I calculate there would be only a 3% probability of none of the five models being so favoured.
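For anyone who wants to check that figure, here is a minimal sketch of the calculation, assuming (as Nic does) that each of the five models independently has a 50% chance of landing in Cai's favoured half:

```python
# Chance that none of the five Sherwood-favoured models also falls in
# Cai's favoured half of the 40 models, assuming each model independently
# has a 50% chance of doing so (the binomial case with k = 0 successes).
p_single = 0.5
n_models = 5
p_no_overlap = (1 - p_single) ** n_models
print(f"P(empty intersection) = {p_no_overlap:.1%}")  # 3.1%, i.e. odds of roughly 31:1 against
```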
Pure climate comedy gold.
Reader Comments (34)
http://en.wikipedia.org/wiki/Blind_men_and_an_elephant
With the models (and their developers) as the blind men and the planetary atmospheric conditions as the elephant.
Tough call. Just imagine you have several competing models of the solar system. One model is quite good for Saturn - but lousy for the other planets. A different model is OKish at hindcasting Mercury and Venus for most of the 18th century - but drifts off for recent times and just ignores the outer planets.
What to do? Would an ensemble of the models be better than any individual model? I mean, I watched a video about team-building and we all know that a team can do better than an individual at the Sonora Desert Exercise. Surely models are like that as well? They kind of work-as-a-team?
Ah yes, "model-based evidence", now becoming an old chestnut. That is, not evidence of anything except the models' inability to model. Beyond tedious. This has gone way beyond being anything to do with science; it's a clash of cultures, with groupthink and authoritarianism on one side and free-range humanity on the other.
Well said Jeremy Poynton.
So 97% of the time, such a coincidental non-overlap of models couldn’t arise by chance. And wherever 97% are gathered together...
...which rather suggests that JEM’s parable of the blind men is not appropriate. I’ll give odds of 31:1 that those blind men knew exactly what they were looking for. One was after the ivory, another going for the juicy bits...
Many thanks to Nic Lewis and to your Grace for that comic masterpiece. I can understand the humour, with my simple maths A-level, and your thousands of better-educated readers will understand it better than me. Can the Guardian’s environmental staff, and the BBC’s science correspondents, and the Oxbridge PPEs with which the government and civil service are stuffed? Will their scientific advisors be explaining it to them? Or will it go down the memory hole along with the rest of the insights aired here?
What’s the point of being right day after day, year after year, if no-one outside our tiny world knows about it? Can’t you arrange some financing from a corrupt Oil Oligarch? It’s the only way anyone will take any notice.
Is it just me that thinks calculations of 'climate sensitivity' are nothing more than another climate science will-o'-the-wisp?
Clearly more research into climate models is needed, where do I send the cheque?
Jan 21, 2014 at 8:18 PM | JEM
I take it the JEM stands for "Joke Entropic Man".
I'll get me coat ....
@ geoffchambers
".....those blind men knew exactly what they were looking for."
I'll get my coat ................
This is evidence of the fundamental problem with the model-based climate studies that have existed since the first CMIP archive. Climate model enthusiasts pick and choose the models that fit the agenda of a specific paper. For the next study of something slightly different, those same enthusiasts will select another group of models that fit the new agenda. If it wasn't so sad, it would be laughable.
Regards
Caveat: I haven't seen the Cai et al. paper yet, so this is based on the abstract, supplementary information, press releases, and various second-hand sources.
Supplementary Table S2 for Cai et al. shows that the 20 selected models exhibit a wide variation in their precipitation characteristics. For example, in the control period 1891-1990, the average Niño3 rainfall produced varied (model-to-model) from 0.64 to 4.78 mm/day. The observed average, from 1979 to 2013, was 2.05 mm/day. The number of extreme El Niño events (defined as a year with average Niño3 rainfall above 5 mm/day) ran from 0 to 16; from 1979-2013 there were 2 such events, which would have to be scaled by a factor of 4 (25 years to 100 years) to produce 8 events per century.
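As a rough sanity check on that scaling, here is a sketch using the figures quoted above; note that the per-century rate depends on which observation window one assumes:

```python
# Back-of-envelope conversion of an observed count of extreme El Niño
# events into an events-per-century rate. The inputs follow the figures
# quoted above; the answer is sensitive to the window length assumed.
def events_per_century(n_events, window_years):
    return n_events * 100.0 / window_years

print(events_per_century(2, 25))  # 8.0 per century, the 4x scaling above
print(events_per_century(2, 35))  # ~5.7 per century, if the full 1979-2013 window is used
```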
Clearly, a bunch of these models, while generating a wide year-to-year variation in rainfall, don't come close to replicating actual rainfall values. I guess I don't understand why a model's extrapolation of global warming effects should be considered reliable when its baseline is not correct. There's a certain amount of plausibility to the idea that a temperature baseline needn't be precisely right if we're mainly interested in the rise in temperatures. I don't see why that should apply to precipitation distributions, though. Perhaps someone who has seen the text of the paper can enlighten me in this regard.
I wonder what the point of a journal article is. To share in new scientific discovery, or to publish hunches and do general brainstorming? If it's the latter - because you can publish a modelling paper that doesn't even need to fit existing observational data - then what authority is there in citing a journal article these days?
If, rather than trying to produce all-seeing models, we broke the problem down into subsystems, and documented and discussed the inputs, outputs, and relationships of the specific parts of the specific models that seem to get certain things 'more right' than others, we might get somewhere in modeling.
But there's not a lot of glamor in that, and it would be a lot of very long-term work with no scare headlines, and when you start tying the 'more right' functions together and realizing that there's (quelle horreur!) some negative feedbacks in there somewhere, all the grant money would evaporate.
So I don't have much faith that something like that will happen unless a whole lot of folks suddenly discovered their funding was dependent on it.
O/T Beeber onboard the Ship-of-Fools starts to spill the beans
@geoffchambers - well, I think more like "each of the blind men thought they already knew the answer" and so their model found the answer in the place they wanted to find it.
The real question is, if a model gets one thing right (or at least arguably 'more right' or more verifiable against empirical data) then what's effin' wrong with the REST of the model?
If one were conspiratorially-minded, one would concern oneself not with what a model got right, but with everything else it got wrong.
I wrote last year on how climate models get top marks.
Pretty much the same holds true for picking particular aspects of a few models and then implying that the models are capable.
Re: Jack Hughes
There is a problem with your analogy. The planets' influence on each other is small compared to the sun's influence on the planets, so combining separate models might be valid.
On the other hand, clouds (Sherwood et al) and precipitation (Cai et al) will have a strong relationship, so your analogy should be a model of the Earth's motion and a model of the Moon's motion. Any model that gets close to correctly hindcasting the past motion of either the Earth or the Moon without correctly hindcasting the other is unlikely to have any predictive power.
'wailed the Guardian'
that could actually be used as the by-line for virtually every Guardian story, especially the environmental ones.
How long are we to be forced to listen to these people trying to square a circle that cannot be squared? There have been years of no global warming, yet all climate models forecast warming, sometimes catastrophic warming. CO2 emissions have clearly increased, yet there has been a steady mean global temperature. What other branch of science could survive such a difference between theory and observation?
'wailed the Guardian'--"that could actually be used as the by-line for virtually every Guardian story, especially the environmental ones." --KNR
'groaned the Guardian' is more alliterative.
Jan 21, 2014 at 8:19 PM @ Jack Hughes
Oh, yeah, that old standby, the Sonora Desert Exercise. Went through it in a team-building scenario many years ago. The team did score higher than I did individually. But, the team also opted to try hiking to the nearest town in 52C temperatures. They all died.
Geoff:
...the Anointed Consensus is in the midst. Except in this case Nic Lewis shows by simple probability that the 97% belongs to us. The Deniers are the Consensus. All Cretans are liars, as one of their own prophets said. Where will it all end?
For a true liberal this is not about being right or wrong, but more about being neutral? So the worst non-working model is the one to pick?
Most or all models are based on the politically decided UNFCCC.
So the question is less whether the models are right or wrong. It's more whether the UNFCCC is scientifically right or wrong?
"For a true liberal this is not about being right or wrong, but more about being neutral? So the badest non working model is the one?"
A true liberal is not supposed to discriminate against anything and must stay neutral. Maybe that's the reason the IPCC uses the average of more than 40 non-working models?
Another question is why they then discriminate against UNFCCC sceptics and label them climate deniers. Isn't that an act of discrimination?
Good work by Nic Lewis. One has to question why these papers were put forward for publication in the first place.
The authors must have been aware of the deficiencies of the models but chose to cherry pick the aspects that suited their agendas.
Just in case there are any policy makers lurking, here's a graphical summary which raises a few questions about the accuracy and value of the climate models:
Models versus observations
Source: http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Nuccitelli discussed the Sherwood paper in some detail at
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/jan/09/global-warming-humans-not-sun
There are 760 comments, and embedded in the article is a neat and very professionally made video interview with Sherwood, in which he predicts 3°C+ warming by the middle of the century (that’s nearly 1°C per decade, presumably starting soon, coming to a thermometer near you).
He ends by saying:
Does a principle analogous to Heisenberg's Uncertainty Principle apply to climate models?
The more accurately you can predict clouds, the less accurately you can predict extreme rainfall.
And there was I, naively thinking that rainfall had something to do with clouds! Well, you learn something every day.
This is what I like to call 'Toothpaste advert science'. 97% of climate scientists, when asked about climate models, said they preferred this one. Except when they don't...
Jorge
"grauned the Guardian", perhaps.
Jack Hughes
"shortcomings in the logistics department"
Understatement of the year..?
It's so strange to try multiple models with presumably fixed inputs rather than pick the best and try varying the inputs. It's really strange to conclude anything from results of a model known to be fatally flawed, rather than just that it is inadequate for the task.
Do you actually believe the BS that you write???
Bernd, that is the funniest and most accurate analogy I've read in a long time, well done.