Tuesday, Jan 21, 2014

The empty set

Readers will recall my posts on two recent papers which looked at how climate models simulated various aspects of the climate system, using these to draw inferences about our future. The Sherwood et al. paper picked the models that best simulated clouds and showed that these predicted that the future would be hot. "Planet likely to warm by 4C by 2100", wailed the Guardian. Meanwhile, the Cai et al. paper picked the models that best simulated extreme rainfall and showed that these predicted more frequent extreme El Niño events. "Unchecked global warming 'will double extreme El Niño weather events'", the Guardian lamented.

Reader Patagon wondered, not unreasonably, which models fell at the intersection of "best climate model simulation of clouds" and "best climate model simulation of extreme rainfall", and his question prompted the following response from Nic Lewis:

I was also wondering that. So I've cross-referred between the new Cai et al. ENSO/extreme rainfall paper, and the recent Sherwood et al. paper tracing the spread in climate sensitivity to atmospheric convective mixing and implying therefrom that climate sensitivity is over 3°C.

The Cai paper analyses 40 CMIP3 (last generation, AR4) and CMIP5 (latest generation, AR5) models. Out of those 40, it selects 20 that are able to produce the high rainfall skewness and high rainfall over the Nino3 region (Supplementary Tables 1 and 2). It finds that those 20 models generate twice as many extreme ENSO events in the 100 years after 1990 as in the 100 years before 1990.

The Sherwood paper shows 7 CMIP3 and CMIP5 models that have a lower-tropospheric mixing index, their chosen measure, falling within their observational uncertainty range (Figure 5(c)). It takes a little effort to work out which models they are, as some of the colour codes used differ little. For the record, I make them to be: ACCESS1-3, CSIRO-Mk3-6-0, FGOALS-s2, HadGEM1, IPSL-CM5A-LR, MIROC3-2-HIRES and MIROC-ESM.

Two of the seven models that Sherwood's analysis favours are not included in the Cai paper. Of the other five, by chance you might typically expect two or three to be in the 50% (20 out of 40) of models that Cai's analysis favours. But in fact not one of those five models is.

So the answer is that there are NO MODELS at the intersection of "best lower-tropospheric mixing" and "best simulation of extreme rainfall etc".
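
The cross-referencing here amounts to a simple set intersection. A minimal sketch in Python: the Sherwood list is as given above, but the 20 Cai-favoured models are not reproduced here, so that set is left as a placeholder to be filled in from Cai et al.'s Supplementary Tables 1 and 2.

    # Models whose lower-tropospheric mixing index falls within Sherwood
    # et al.'s observational uncertainty range (listed above).
    sherwood_favoured = {
        "ACCESS1-3", "CSIRO-Mk3-6-0", "FGOALS-s2", "HadGEM1",
        "IPSL-CM5A-LR", "MIROC3-2-HIRES", "MIROC-ESM",
    }

    # Placeholder: the 20 models Cai et al. select for realistic Nino3
    # rainfall (fill in from their Supplementary Tables 1 and 2).
    cai_favoured = set()

    # Per the analysis above, the intersection comes out empty.
    print(sherwood_favoured & cai_favoured)  # set() -- the empty set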

So, if the Sherwood and Cai analyses are valid, it looks as if with CMIP3 and CMIP5 models you have a choice. You can find selected models that have realistic lower-tropospheric mixing, strong positive low cloud feedback and high climate sensitivity. Or you can choose models that produce realistically high rainfall skewness and rainfall over the Nino3 region and generate a large increase in extreme ENSO events with global warming. But you can't have both at once.

Of course, the real world climate system may differ so much from that simulated by any CMIP3 or CMIP5 model that the Sherwood and Cai results have little relevance.

FWIW, if one assumes a binomial distribution with each of the five models favoured by Sherwood's analysis having a 50% chance of being favoured by Cai's analysis (no better or worse than average), then I calculate there would be only a 3% probability of none of the five models being so favoured.
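
That 3% is easy to check. A minimal sketch in Python; the hypergeometric variant (a random draw of 20 models from 40, without replacement) is an added refinement rather than part of the binomial calculation above.

    from math import comb

    # Binomial check: each of the 5 Sherwood-favoured models analysed by
    # Cai has an independent 50% (20 of 40) chance of being favoured too.
    p_none_binomial = 0.5 ** 5
    print(round(p_none_binomial, 4))  # 0.0312, i.e. about 3%

    # Hypergeometric variant: probability that a random 20-of-40 selection
    # misses all 5 specific models.
    p_none_hypergeom = comb(35, 20) / comb(40, 20)
    print(round(p_none_hypergeom, 4))  # about 0.0236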

Pure climate comedy gold.


Reader Comments (34)

http://en.wikipedia.org/wiki/Blind_men_and_an_elephant

With the models (and their developers) as the blind men and the planetary atmospheric conditions as the elephant.

Jan 21, 2014 at 8:18 PM | Unregistered CommenterJEM

Tough call. Just imagine you have several competing models of the solar system. One model is quite good for Saturn - but lousy for the other planets. A different model is OKish at hindcasting Mercury and Venus for most of the 18th century - but drifts off for recent times and just ignores the outer planets.

What to do? Would an ensemble of the models be better than any individual model? I mean, I watched a video about team-building and we all know that a team can do better than an individual at the Sonora Desert Exercise. Surely models are like that as well? They kind of work as a team?

Jan 21, 2014 at 8:19 PM | Unregistered CommenterJack Hughes

Ah yes, "model-based evidence", now becoming an old chestnut. Id est: not evidence of anything except the models' inability to model. Beyond tedious. This has gone way beyond having anything to do with science; it's a clash of cultures, with groupthink and authoritarianism on one side, and free-range humanity on the other.

Jan 21, 2014 at 8:28 PM | Unregistered CommenterJeremy Poynton

Well said Jeremy Poynton.
So 97% of the time, such a coincidental non-overlap of models couldn’t arise by chance. And wherever 97% are gathered together...
...which rather suggests that JEM's parable of the blind men is not appropriate. I'll give odds of 31:1 that those blind men knew exactly what they were looking for. One was after the ivory, another going for the juicy bits...
Many thanks to Nic Lewis and to your Grace for that comic masterpiece. I can understand the humour, with my simple maths A-level, and your thousands of better-educated readers will understand it better than I do. Can the Guardian's environmental staff, and the BBC's science correspondents, and the Oxbridge PPEs with which the government and civil service are stuffed? Will their scientific advisors be explaining it to them? Or will it go down the memory hole along with the rest of the insights aired here?
What's the point of being right day after day, year after year, if no one outside our tiny world knows about it? Can't you arrange some financing from a corrupt Oil Oligarch? It's the only way anyone will take any notice.

Jan 21, 2014 at 8:33 PM | Registered Commentergeoffchambers

Is it just me that thinks calculations of 'climate sensitivity' are nothing more than another climate science will-o'-the-wisp?

Jan 21, 2014 at 8:50 PM | Registered CommenterMartin A

Clearly more research into climate models is needed, where do I send the cheque?

Jan 21, 2014 at 9:06 PM | Unregistered Commenterjaffa

Jan 21, 2014 at 8:18 PM | JEM

I take it the JEM stands for "Joke Entropic Man".

I'll get me coat ....

Jan 21, 2014 at 9:28 PM | Unregistered CommenterThinking Heretic

@ geoffchambers

".....those blind men knew exactly what they were looking for."

I'll get my coat ................

Jan 21, 2014 at 9:31 PM | Unregistered CommenterJoe Public

This is evidence of the fundamental problem with the model-based climate studies that have existed since the first CMIP archive. Climate model enthusiasts pick and choose the models that fit the agenda of a specific paper. For the next study of something slightly different, those same enthusiasts will select another group of models that fit the new agenda. If it wasn't so sad, it would be laughable.

Regards

Jan 21, 2014 at 9:41 PM | Unregistered CommenterBob Tisdale

Caveat: I haven't seen the Cai et al. paper yet, so this is based on the abstract, supplementary information, press releases, and various second-hand sources.

Supplementary Table S2 for Cai et al. shows that the 20 selected models exhibit a wide variation in their precipitation characteristics. For example, in the control period 1891-1990, the average Niño3 rainfall produced varied (model to model) from 0.64 to 4.78 mm/day. The observed average, from 1979 to 2013, was 2.05 mm/day. The number of extreme El Niño events (defined as a year with average Niño3 rainfall above 5 mm/day) ran from 0 to 16; from 1979-2013 there were 2 such events, which would have to be scaled by a factor of 4 (100 years to 25 years) to produce 8 events per century.
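
In code, that per-century scaling is just the following (a sketch of the arithmetic as quoted, not of Cai et al.'s actual method):

    # Scale an observed count of extreme El Nino years (average Nino3
    # rainfall above 5 mm/day) to a per-century rate. Numbers as quoted
    # above; the 4x factor treats the observation window as 25 years.
    observed_events = 2
    window_years = 25
    events_per_century = observed_events * 100 / window_years
    print(events_per_century)  # 8.0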

Clearly, a bunch of these models, while generating a wide year-to-year variation in rainfall, don't come close to replicating actual rainfall values. I guess I don't understand why a model's extrapolation of global warming effects should be considered reliable when its baseline is not correct. There's a certain amount of plausibility to the idea that a temperature baseline needn't be precisely right if we're mainly interested in the rise in temperatures. I don't see why that should apply to precipitation distributions, though. Perhaps someone who has seen the text of the paper can enlighten me in this regard.

Jan 21, 2014 at 9:49 PM | Registered CommenterHaroldW

I wonder what the point of a journal article is. To share new scientific discoveries, or to publish hunches and do general brainstorming? If it's the second - because you can publish a modelling paper that doesn't even need to fit existing observational data - then what authority is there in citing a journal article these days?

Jan 21, 2014 at 10:24 PM | Unregistered CommenterWill Nitschke

If, rather than trying to produce all-seeing models, we broke the problem down into subsystems, and documented and discussed the inputs, outputs, and relationships of the specific areas of specific models that seem to get certain things 'more right' than others, we might get somewhere in modeling.

But there's not a lot of glamor in that, and it would be a lot of very long-term work with no scare headlines, and when you start tying the 'more right' functions together and realizing that there are (quelle horreur!) some negative feedbacks in there somewhere, all the grant money would evaporate.

So I don't have much faith that something like that will happen unless a whole lot of folks suddenly discovered their funding was dependent on it.

Jan 21, 2014 at 10:26 PM | Unregistered CommenterJEM

O/T Beeber onboard the Ship-of-Fools starts to spill the beans

Whatever the truth, some of the paying passengers on the Australasian Antarctic Expedition 2013 spoke unfavourably about the manner in which the situation at the islands was handled. Everyone I spoke to asked to be quoted anonymously, mindful of the considerable media interest that may await in Tasmania.

"The teacher in me cringes at the logistics," said one of the paying members of the expedition.

Another said the expedition was run like a "boys own adventure" and expressed concern over what she believed was a lack of thorough briefing on safety procedures throughout the Antarctic leg of the expedition.

Others I spoke to agreed that the expedition had its shortcomings in the logistics department.

Jan 21, 2014 at 10:28 PM | Unregistered CommenterJack Hughes

@geoffchambers - well, I think more like "each of the blind men thought they already knew the answer" and so their model found the answer in the place they wanted to find it.

The real question is, if a model gets one thing right (or at least arguably 'more right' or more verifiable against empirical data) then what's effin' wrong with the REST of the model?

If one were conspiratorially minded, one would concern oneself not with what a model got right, but with everything else it got wrong.

Jan 21, 2014 at 10:34 PM | Unregistered CommenterJEM

I wrote last year on how climate models get top marks.


An ensemble of climate models is like a class full of imbeciles who, when tested, show that no individual scored better than 20%. Nevertheless, the teacher finds that every single question has been answered correctly by at least one imbecile so awards an A+ to the whole class.

Pretty much the same holds true for picking particular aspects of a few models and then implying that the models are capable.

Jan 21, 2014 at 10:58 PM | Unregistered CommenterBernd Felsche

Re: Jack Hughes

Just imagine you have several competing models of the solar system. One model is quite good for Saturn - but lousy for the other planets. A different model is OKish at hindcasting Mercury and Venus for most of the 18th century - but drifts off for recent times and just ignores the outer planets.

There is a problem with your analogy. The planets' influence on each other is small compared to the sun's influence on the planets, so combining separate models might be valid.

On the other hand, clouds (Sherwood et al.) and precipitation (Cai et al.) will have a strong relationship, so your analogy should be a model of the Earth's motion and a model of the Moon's motion. Any model that gets close to correctly hindcasting the past motion of either the Earth or the Moon without correctly hindcasting the other is unlikely to have any predictive power.

Jan 21, 2014 at 11:09 PM | Unregistered CommenterTerryS

'wailed the Guardian'

that could actually be used as the byline for virtually every Guardian story, especially the environmental ones.

Jan 21, 2014 at 11:35 PM | Unregistered CommenterKNR

How long are we to be forced to listen to these people trying to square a circle that cannot be squared? There have been years of no global warming, yet all climate models forecast warming, sometimes catastrophic warming. CO2 emissions have clearly increased, yet mean global temperature has held steady. What other branch of science could survive such a difference between theory and observation?

Jan 22, 2014 at 12:21 AM | Unregistered CommenterPeter Stroud

'wailed the Guardian'--"that could actually be used as the byline for virtually every Guardian story, especially the environmental ones." --KNR

'groaned the Guardian' is more alliterative.

Jan 22, 2014 at 12:22 AM | Unregistered Commenterjorgekafkazar

Jan 21, 2014 at 8:19 PM @ Jack Hughes

Oh, yeah, that old standby, the Sonora Desert Exercise. Went through it in a team-building scenario many years ago. The team did score higher than I did individually. But the team also opted to try hiking to the nearest town in 52°C temperatures. They all died.

Jan 22, 2014 at 1:04 AM | Unregistered CommenterBart

Geoff:

So 97% of the time, such a coincidental non-overlap of models couldn’t arise by chance. And wherever 97% are gathered together...

...the Anointed Consensus is in the midst. Except in this case Nic Lewis shows by simple probability that the 97% belongs to us. The Deniers are the Consensus. All Cretans are liars, as one of their own prophets said. Where will it all end?

Jan 22, 2014 at 1:32 AM | Registered CommenterRichard Drake

For a true liberal this is not about being right or wrong, but more about being neutral? So the worst non-working model is the one?

Jan 22, 2014 at 3:18 AM | Unregistered CommenterJon

Most or all models are based on the politically decided UNFCCC.
So the question is less whether the models are right or wrong, and more whether the UNFCCC is scientifically right or wrong.

Jan 22, 2014 at 3:23 AM | Unregistered CommenterJon

"For a true liberal this is not about being right or wrong, but more about being neutral? So the badest non working model is the one?"

A true liberal is not supposed to discriminate anything and stay neural. Maybe that's the reason IPCC uses the average of more than 40 non working models?

Another question is why they then discriminate UNFCCC sceptics and label them climate deniers? Isn't that act to discriminate?

Jan 22, 2014 at 5:37 AM | Unregistered CommenterJon

Good work by Nic Lewis. One has to question why these papers were put forward for publication in the first place.

The authors must have been aware of the deficiencies of the models but chose to cherry pick the aspects that suited their agendas.

Jan 22, 2014 at 8:39 AM | Unregistered CommenterSchrodinger's Cat

Just in case there are any policy makers lurking, here's a graphical summary which raises a few questions about the accuracy and value of the climate models:

Models versus observations

Source: http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/

Jan 22, 2014 at 8:49 AM | Registered Commenterlapogus

Nuccitelli discussed the Sherwood paper in some detail at
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/jan/09/global-warming-humans-not-sun
There are 760 comments, and embedded in the article is a neat and very professionally made video interview with Sherwood, in which he predicts 3°C+ warming by the middle of the century (that’s nearly 1°C per decade, presumably starting soon, coming to a thermometer near you).
He ends by saying:

Climate sceptics like to criticise the models and point out how they don’t do this right or that right, and of course they’re not perfect. But what we found is actually that the mistakes are being made by the models that were predicting less climate warming.

Jan 22, 2014 at 9:17 AM | Registered Commentergeoffchambers

Does a principle analogous to Heisenberg's Uncertainty Principle apply to climate models?

The more accurately you can predict clouds, the less accurately you can predict extreme rainfall.

And there was I, naively thinking that rainfall had something to do with clouds! Well, you learn something every day.

Jan 22, 2014 at 9:45 AM | Unregistered CommenterRoy

This is what I like to call 'toothpaste advert science'. 97% of climate scientists, when asked about climate models, said they preferred this one. Except when they don't...

Jan 22, 2014 at 11:20 AM | Unregistered Commenterconfused

Jorge

"grauned the Guardian", perhaps.

Jan 22, 2014 at 12:17 PM | Registered Commenterjamesp

Jack Hughes

"shortcomings in the logistics department"

Understatement of the year..?

Jan 22, 2014 at 12:18 PM | Registered Commenterjamesp

It's so strange to try multiple models with presumably fixed inputs rather than pick the best and try varying the inputs. It's even stranger to conclude anything from the results of a model known to be fatally flawed, other than that it is inadequate for the task.

Jan 22, 2014 at 12:22 PM | Unregistered CommenterJamesG

Do you actually believe the BS that you write???

Jan 23, 2014 at 1:49 AM | Unregistered CommenterPeterK

Bernd, that is the funniest and most accurate analogy I've read in a long time, well done.

Jan 23, 2014 at 8:14 PM | Unregistered CommenterJohn Brady
