
Mike Hulme on climate models

There is video available here of a lecture given by Professor Mike Hulme entitled "How do Climate Models Gain and Exercise Authority?". Hulme asks whether deference towards climate models is justified and whether we should have confidence in them. I think the answer is "We don't know".


Reader Comments (36)

I think the answer is "We don't know".

If you don't know if you should have confidence in a thing, you do not have confidence in it.

So the answer is "No".

Oct 22, 2010 at 9:13 AM | Unregistered CommenterMartin A

"We don't know" is right.
The frightening thing is that our government, of whichever hue, has decided they CAN trust them, and as Lord Lawson said, "the cost of meeting our current carbon reduction commitments in this country is somewhere between £800 billion and £1 trillion." At a time when we are so much in debt, that is simply too horrific to contemplate.

Oct 22, 2010 at 9:29 AM | Unregistered CommenterChrisM

All climate research is done in such a way as to 'validate' climate models. This is the anti-science approach, a matter of belief, an absolute test of adherence - the models are always right, the data must be wrong and so it must be corrected, picked or ignored.

If you take a closer look you realise that no research is being conducted, or allowed to be conducted, or even allowed to be published, that tests the validity of these models, the extent of their limitations, or their internal consistency. This abuse of the scientific method defies the logic that only one model can be correct - or, more profoundly, that they can all be wrong. That is how science is actually meant to work.

It is only in climate science that the model is king, exalted and untouchable.

In all other areas of science the model is relentlessly kicked from pillar to post to test its fragility; if it breaks, it is thrown into the ever-growing scientific dustbin along with all the other failed paradigms, hypotheses and theories.

Models are there to be broken, it is a scientist's job to break them.

Climate scientists are simply not doing their job. The sceptics have made them aware of that, but now they are hiding behind the uncertainty argument, whereby the models are still deemed correct and only the uncertainty in the range of scenarios fed into them is of concern.

Oct 22, 2010 at 10:10 AM | Unregistered CommenterMac

As there are approximately 25 million households in Britain, £800 billion to £1 trillion is a total scam cost of £32,000 to £40,000 per household.
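The per-household arithmetic here is easy to verify; a quick sketch (the £800 billion to £1 trillion range is Lord Lawson's figure as quoted above, and 25 million households is the commenter's round number, not an official statistic):

```python
# Per-household share of the quoted cost of carbon commitments.
# Figures as quoted in the comments above, not official statistics.
households = 25_000_000          # approx. number of UK households

for total in (800e9, 1e12):      # £800 billion and £1 trillion
    print(f"£{total:,.0f} total -> £{total / households:,.0f} per household")
```

which prints £32,000 and £40,000 respectively, matching the comment.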

Oct 22, 2010 at 10:11 AM | Unregistered CommenterPharos

People in general are wrongly inclined to believe results produced by computers. The climate models are a propaganda tool.

Oct 22, 2010 at 10:31 AM | Unregistered Commentercosmic

What a very poor, lifeless and unpersuasive speaker! Fluent enough, but didn't seem to know how to make his points (if any) strike home. He could have been dictating the phone book with an equal lack of intensity.

Would it not have been easier just to send round the transcript on e-mail, and save everybody's carbon footprint travelling to hear him read it out?

Or is that the normal standard of discourse and presentation in academe? If so, then it is very disappointing. They should take lessons from Kinnock, Blair, Hague, Scargill, Monckton etc if they really want their argument to be accepted.

Oct 22, 2010 at 10:37 AM | Unregistered CommenterLatimer Alder

There is an interesting and ongoing (two-part) discussion of a potential error in the GCMs (Makarieva et al) at Tav. Confidence in GCMs has to be zero.

Oct 22, 2010 at 11:41 AM | Unregistered CommenterPhillip Bratby

Computers do not yet have intelligence of their own; they just have the capability to process data faster than a human. The problem with this faster processing is that the humans in charge cannot check the result, because that would take too long - humans cannot match the computing power.

So Faster Garbage In, Faster Garbage Out - but garbage is garbage, just more of it.

Oct 22, 2010 at 11:44 AM | Unregistered CommenterJohnH

Judith Curry had a recent thread on The Culture of Building Confidence in Climate Models.

It was posted on 10/10/10. There is no pressure to read her post. :)

Oct 22, 2010 at 11:45 AM | Unregistered CommenterDon B

I have listened to Prof Hulme's talk a couple of times now, and may put some notes up here later. In the meantime, I will just note that I found his talk interesting and informative. Informative not merely for the ostensible content (which essentially serves to provide a plausible framework for reviewing climate models without actually conducting such a review) but also for what was missing. Contrary to some other posters here, I liked his style of presentation. It was very well structured, very clearly delivered, and avoided histrionics of any kind. In fact, he displayed what Shakespeare called a 'modest stillness and humility'. I, on the other hand, feel 'the blast of war' blow in my ears with regard to what has been done with climate models, and so I want to review my notes carefully before sharing them with anyone else!

“In peace there's nothing so becomes a man
As modest stillness and humility;
But when the blast of war blows in our ears,
Then imitate the action of the tiger:
Stiffen the sinews, summon up the blood,
Then lend the eye a terrible aspect;
Now set the teeth, and stretch the nostril wide,
Hold hard the breath, and bend up every spirit
To his full height!”

Henry V

Oct 22, 2010 at 12:12 PM | Unregistered CommenterJohn Shade

Third Assessment Report:

“In climate research and modeling, we should recognize that we are dealing with a coupled nonlinear chaotic system, and therefore that long-term prediction of future climate states is not possible.”

Did they forget about this in AR4 (and now AR5)?

Just because you have bigger and faster computers doesn't mean that this no longer holds.....

Oct 22, 2010 at 12:46 PM | Unregistered CommenterBarry Woods

CLIMATE MODELS ARE THE SOLE BASIS OF ATTRIBUTION OF CLIMATE CHANGE TO ANTHROPOGENIC CAUSES. The Royal Society statement, in the section "Attribution of climate change", states:

"37 The size and sustained nature of the observed global-average surface warming on decadal and longer timescales greatly exceeds the internal climate variability simulated by the complex climate models. Unless this variability has been grossly underestimated, the observed climate change must result from natural and/or human-induced climate

They use terms such as "greatly exceeds" and "grossly underestimated". To put this in perspective, the observed global warming over the last century stated by the IPCC was 0.7C. About half of that, the warming in the early part of the century, was due to natural causes. Considering the uncertainties with climate models mentioned elsewhere in the report, and the inaccuracies of surface measurements, how can a fraction of a degree C over 100 years be said to "greatly exceed" anything, or how can variability have been "grossly underestimated"? Note that Mike Hulme referred to Hansen's 1988 model, which predicted a temperature rise of 0.26C per decade where the actual figure turned out to be 0.18C - roughly a 44% overestimate relative to the observed trend.
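The size of that model error can be checked in one line; a sketch using the trend figures as quoted in the comment above (0.26C/decade predicted, 0.18C/decade observed):

```python
# Hansen 1988 trend figures as quoted in the comment above (degrees C per decade).
predicted, observed = 0.26, 0.18

# Overestimate relative to what was actually observed (roughly 44%).
print(f"{(predicted - observed) / observed:.0%}")
```

(Measured against the prediction instead, the shortfall is about 31%; which baseline you choose changes the headline percentage.)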

In the next paragraph the Royal Society goes on to state:

“38 When only natural climate forcings are put into climate models, the models are incapable of reproducing the size of the observed increase in global-average surface temperatures over the past 50 years. However, when the models include estimates of forcings resulting from human activity, they can reproduce the increase.”

Without models, there is no basis for attribution of climate change to anthropogenic causes.

Oct 22, 2010 at 1:11 PM | Unregistered CommenterB. Kindseth

Any model must be tested to demonstrate its accuracy (skill in current parlance). The testing must always involve the future; looking at the past tells us little about how accurate the model is. Consider attempts to model the solar system or tides over past centuries. Models can be shown accurate only by future behaviour of the system under study. Climate models are no different. Any other approach is equivalent to soothsaying.
Morley Sutter

Oct 22, 2010 at 1:23 PM | Unregistered CommenterMorley Sutter

Essentially, the good prof seemed to be saying that climate models have to be just right in several different ways for them to be trusted at all - and maybe not even then.
I should like to have heard more about the success (or, as I suspect, lack of success) various models have had in later verification of their previous projections. One example of (one of!) Hansen's surface temperature predictions being deemed to have performed slightly better than an educated guess was not sufficient to fill me with confidence.
The Met Office's "long term" predictions from their models have been so bad that they have stopped making them public.
In any event, it would seem it is too late now for some academic to be tentatively exploring the reliability or otherwise of the models. Their projections have been seized upon as gospel, and the money is being spent, the taxes are being raised, the windfarms are being built and our economies are being picked up and shaken.
If I do not get my global warming...because even after all the above drastic changes have been made the amount of CO2 man is putting into the atmosphere will go on rising at the present rate or faster...I will be turning on these scientists in ire. They in turn are going to shrug their shoulders and say...we told you not to trust the models. To be fair to Mike Hulme, he has been one of the few insiders who has for some time rather nervously been saying that the models may not be all they have been cracked up to be.
Some have shouted "Fire!" when in fact all that happened is they thought they might have seen a wisp of smoke, and they are now sheepishly going to row back from it. They will all be drawing their public sector pensions while I am struggling to pay my electricity bill.
I will not be happy.

Oct 22, 2010 at 1:33 PM | Unregistered CommenterJack Savage

Backcasting is just as relevant as forecasting in 'testing' the validity of climate models.

How do the climate models do on backcasting over the past 100 years, the so called era of man-made global warming?

They all do poorly.

That is why climate models are recalibrated/reset continually to provide a projection starting point.

Oct 22, 2010 at 1:45 PM | Unregistered CommenterMac

I missed out this link on backcasting.

Oct 22, 2010 at 1:50 PM | Unregistered CommenterMac

Here is a more up to date sceptical position on climate models, as well as the climate debate.

Oct 22, 2010 at 1:56 PM | Unregistered CommenterMac

When it comes to modeling anything with a computer program the only thing that matters is its forward predictive power. You can give me any data you want and I can fit a model to it with something like a non-linear regression, but what does that tell you about tomorrow? Nothing.
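The fitting-versus-predicting point is easy to demonstrate. A minimal sketch with made-up numbers (nothing to do with real climate data): a high-degree polynomial can pass through noisy observations essentially exactly, yet its forward extrapolation is worthless.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six noisy "observations" with no underlying trend at all.
x_train = np.arange(6.0)
y_train = rng.normal(0.0, 1.0, size=6)

# A degree-5 polynomial fits all six points essentially exactly...
coeffs = np.polyfit(x_train, y_train, deg=5)
in_sample_error = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))
print("in-sample error:", in_sample_error)        # effectively zero

# ...but says nothing useful about "tomorrow": extrapolation blows up.
print("extrapolated to x=9:", np.polyval(coeffs, 9.0))
```

A perfect hindcast, in other words, is no evidence of forward predictive power.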

In the case of weather forecasting, about the best predictive power I see on the tele weather shows is about one week maximum, with a much lower likelihood of accuracy in the last couple of days of that week than in the first couple. When it comes to climate predictions, I have seen one abject failure after another. Last winter was supposed to be the warmest on record, and instead it was the coldest in 30 years.

As for the question of "whether we should have confidence in them" --- ABSOLUTELY NOT!

Oct 22, 2010 at 2:23 PM | Unregistered CommenterDon Pablo de la Sierra

@ B Kindseth
Without models, there is no basis for attribution of climate change to anthropogenic causes.

The whole idea of models seems to be to undermine one of the areas of common ground on climate, which is that the climate is a chaotic system.

If one maintains that one can discern in it - and predict - the effect of CO2, this amounts to saying that the rest of the system self-cancels, and that the CO2 is the only vector of variation.

I don't understand how that is compatible with the belief that the system is chaotic, unless every model is a Monte Carlo model - but I don't think they are.
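The chaos point is easy to illustrate with the standard textbook toy, the logistic map (a toy chaotic system, not a climate model): two trajectories whose starting points differ by one part in a million diverge until they bear no relation to each other, which is why a chaotic system defeats single-trajectory prediction.

```python
# Sensitivity to initial conditions in the logistic map (r = 4, chaotic regime).
# Purely illustrative: the simplest chaotic system, not a GCM.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001    # starting values differing by 1e-6
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial difference has been amplified to order one.
print("largest gap over 50 steps:", max_gap)
```

This divergence is the standard argument for running ensembles of perturbed initial conditions rather than trusting any single model trajectory.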

@ Jack Savage:
If I do not get my global warming...I will be turning on these scientists in ire.

You'll get your global warming, Jack. They'll make the past colder.

Oct 22, 2010 at 2:37 PM | Unregistered CommenterJustice4Rinka

Trust in computer models of climate is rather like accepting an argument from authority.

To put it rhetorically, "These models are developed by some of the foremost computer experts using the most advanced computers and working with the top climate scientists. Is anyone seriously suggesting that these people don't know what they are doing to the extent of making a fundamental blunder or that they are all colluding in a gigantic and deliberate deception?".

Oct 22, 2010 at 4:00 PM | Unregistered Commentercosmic

Physical science tells us that simply increasing a greenhouse gas should lead to more warming.

Problem No.1: The theoretical warming from releasing more CO2 into the atmosphere, AGW, is lower than calculated. Natural climate cycles, known and unknown negative feedbacks probably account for this discrepancy. It could well be the case that the earth's climate is so complex and random that an increase in CO2 makes no difference.

Problem No.2: The amplified warming from releasing more CO2 into the atmosphere, CAGW, is based solely on climate model projections that assume large positive feedbacks. These climate models are not validated, and in some sense can never be. Furthermore, climate model projections and backcasts are in serious error, and have been for the last 20 years.

Conclusion: When a doctrine falls to criticism it can only be saved if it is accepted as dogma. To believe in CAGW you have to profess faith in climate models, and find ways to bolster that faith through ritual. That is the essence of the modern climate religion.

Oct 22, 2010 at 4:03 PM | Unregistered CommenterMac

Here's a nice survey of models, with comparisons to what has been happening.

Oct 22, 2010 at 4:14 PM | Unregistered CommenterZT


These models are developed by some of the foremost computer experts using the most advanced computers and working with the top climate scientists.

<sarc>Yeap, absolutely true, just look at the Harry_Read_Me file for proof of all that. </sarc>


Spot on.

Oct 22, 2010 at 4:15 PM | Unregistered CommenterDon Pablo de la Sierra

Don Pablo,

This propaganda isn't aimed at people who are capable of reading the Harry_Read_Me file and understanding its implications. It isn't aimed at people who've worked in the computer business and on occasion seen "the foremost computer experts using the most advanced computers" fall flat on their faces and waste millions on the way.

I don't think the persuasive effects of climate models are so easily dismissed as simply referring to the Harry_Read_Me file. You have to start to explain what computer models can and cannot do and what validating the model means. There's not much doubt that they have had a persuasive effect grossly disproportionate to their 'skill'.

Oct 22, 2010 at 4:51 PM | Unregistered Commentercosmic

"These models are developed by some of the foremost computer experts using the most advanced computers and working with the top climate scientists. Is anyone seriously suggesting that these people don't know what they are doing to the extent of making a fundamental blunder or that they are all colluding in a gigantic and deliberate deception?".

Having written my first program in 1976 and having been in the computer industry ever since (on the technical side, not the management) (but now retired), I think I could safely be described as a computer expert. In answer to the question "Is anyone seriously suggesting that these people don't know what they are doing to the extent of making a fundamental blunder...?", the answer is yes. Not to the first part (don't know what they are doing), but to the second. It still amazes me how often fundamental blunders are made, both by me and by pretty much everybody I know working in computers. Subtle blunders are even more likely.

Oct 22, 2010 at 5:06 PM | Unregistered CommenterTerryS


I quite agree -- for a long time I have been saying that this is a fight about rhetoric, not facts.

Oct 22, 2010 at 5:22 PM | Unregistered CommenterDon Pablo de la Sierra

I don't think we should be in too much of a rush to say that we do not "know" whether the IPCC climate models are likely to be correct or not.

We know, for example, that while atmospheric CO2 rises at a smooth, monotonic rate, almost all versions of the climate record show clear evidence of cycles.

These include the Milankovitch cycles, which we know correlate beautifully with the glaciation episodes of the last million years, the sunspot cycles, and the c. 60-year cycle which, whatever its cause, is emblazoned on every climate record for recent centuries I am aware of (apart, notably, from the tree-ring-based portion of Mann's hockey stick).

Authors like N. Scafetta (2010, in the Journal of Atmospheric and Solar-Terrestrial Physics) look at all this, and Scafetta states baldly that the failure of all the IPCC 2007 models to reproduce these climate oscillations indicates that they are "incomplete and possibly flawed". He also states that "by failing to simulate the observed quasi-60 year temperature cycle" they "have significantly over-estimated climate sensitivity to anthropogenic GHG emissions by likely a factor of three".

This all looks to me like pretty powerful evidence that the IPCC climate models have got it wrong.

Oct 22, 2010 at 5:30 PM | Unregistered Commenterdiggerjock

On the subject of computer models, I thought I’d just give my experience of working with complex computer models in the nuclear industry, comparing it with what I know of climate models, since the problems are not dissimilar. This is written without having listened to what Hulme has to say.

In both instances, the models are solving complex thermal-hydraulics problems. In the nuclear industry there were a variety of computer models (I call them codes), ranging from codes solving the physics from first principles (known as hands-free codes because all the user had to do was input the initial and boundary conditions and the code would do the rest) to those containing many correlations and specific sub-models based on experimental data.

Having developed the codes, validation would consist of running the codes against existing data obtained from experimental facilities at various scales. Code documentation would be developed, including user guidelines. Final validation was performed as part of what was known as the International Code Analysis and Assessment Program (ICAP). This consisted of International Standard problems (ISPs), consisting of a program of blind and double-blind calculations of experiments, prior to the experimental results being revealed. Double-blind calculations were of experiments in new facilities for which no prior experiments had been performed. After the calculations had been submitted to ICAP and the results released, every participant would redo the calculations to see where they had gone wrong and meetings were held where everybody could openly share the lessons learned.

In the industry these calculations were used to further develop user guidelines and demonstrate code validation. Universities and government bodies took part as well as industry. My experience was that the university people were just interested in publishing and getting on to the next ISP, similarly government organisations which would also be pushing for more funding to sort out the code deficiencies. The spread of results, even from participants using the same code, was enormous. Needless to say, the hands-free codes proved inadequate and sub-models had to be incorporated to cater for specific phenomena that were too difficult to calculate from first principles. In addition, within the industry extensive verification of the coding was performed to minimise coding errors. And everything was independently verified, fully documented and archived.

My take on the climate models in comparison to the nuclear industry codes is that, regardless of whether it is possible to solve the physics with a satisfactory 3-d grid, there is no way that acceptable validation can be obtained (except by calculating the future climate and waiting hundreds of years for the “experimental results” to be available). Back-casting is inadequate and experiments cannot be performed. There are just too many unknowns and uncertainties in the climate and the problem is too complex. I am also pretty confident that there will be inadequate verification and documentation of the climate models.

If we got the code wrong in the nuclear industry, the consequences could be serious (but the safety margins were enormous). The “climate industry” doesn’t care if the models are wrong, since we will only be spending trillions as a result.

Oct 22, 2010 at 5:39 PM | Unregistered CommenterPhillip Bratby

I foolishly talked earlier of having made some notes to share. Here they are. Sorry for the length, but they are easily skipped over. Lots of good points have been made by other posters.

I must confess to having enjoyed the talk by Professor Hulme. He spoke clearly and in a structured way, presented lots of ideas, and generally came across to me as intelligent and thoughtful. I just wish I were bright enough or informed enough to follow all of them. As it is, I still got lots to think about from it, and the notes which follow are in part intended just to share my puzzlements and prejudices.

In an age when computer models led the British government into patently foolish and expensive policy decisions on, for example, BSE, Swine Flu, and Climate, and into many an extravagant IT venture of its own, the scope for taking the piss about governments and computer models seems very large indeed. However, this was not the aim of the conference for which Prof Hulme gave this keynote address.

To my shame, I have a picture of him as a trendy CofE vicar trying to impress the WRI with his breadth of analysis and willingness to tackle controversy, say on the use of computer-generated parables for spreading the faith. The talk will take place in the church hall, at 5.30pm next Tuesday. All are welcome. He will mention some ways in which that practice could be assessed, and give the ladies a bit of a frisson by appearing to be open to at least the possibility that all is not well with it. However, he will avoid reference to the trenchant analysis of those skeptical of such products, and stick to the safer ground of quoting only from the faithful. At no point will he give the slightest hint that the whole silly business is a shameless artifice to win people over to his side, a side which is otherwise rather short of arguments in its favour. So at the end of the talk, covering ways in which the parables and their generators might be assessed, and sprinkled with hints that they are to be taken seriously nevertheless and might foretell severe calamity, he notes that care is required when using the parable-generators in a variety of cultures. Who could argue with that? The ladies will go off into the night, content that their Vicar is a bright chap who thinks deeply about many things. A bit like I did after watching the actual talk, but with some snarkiness building up inside due to my having bits of prior knowledge and opinions of my own.

He began by noting that humans are inclined to defer to authority, be it church or state or science, and find it hard to give up this deference. His talk would itself provide an illustration of this effect, epitomised by the respect he gives to the IPCC and to the UK Met Office, for example.

He mentions the considerable confidence asserted by the IPCC in climate models. <not always consistently with the deliberations within the IPCC however, e.g. see >

< I will use text in angular brackets where it seems necessary to make it clear that these are my remarks, or asides, and not a report from the talk.>

‘Are climate models worthy of our deference?’ - MH. He doesn’t provide a conclusive response to this, so the merit of his talk is in providing a plausible framework within which the question can be examined.

He notes that as climate models become more complex, the uncertainties grow. Trenberth: ‘the spread in modelling results will be larger in the next IPCC assessment’.

<There is just more that can go wrong. A tacit admission of the relative crudity of the models, and of the misleadingly high confidence touted by the IPCC? I suppose this may mean a downplaying of loose talk of confidence in the next IPCC reports. They will need to find something else to keep the PR folk happy. Yet he overlooks the fact that if this is the case, the modelers are making things worse and worse for themselves by trying to include more and more. Does it not occur to him that something is fundamentally wrong here? Perhaps that such modeling - given our modest grasp of the science and the interactions, the stochastic messiness of it all, our modest programming skills, and our modest hardware - is intrinsically impossible? Weather forecasting computer models make sense because they deal mainly with things that already exist, such as a warm front or a hurricane, and they provide extrapolations of their development aided by hourly or three-hourly observations of the real situation. Climate models, or longer-range forecasts, on the other hand, have to deal with things that do not yet exist, and the options for them are myriad in terms of when, where and how they will form and proceed, and what they will lead to next.>

He, Hulme, has developed a model, originally due to someone in the Netherlands called Arthur Peterson, with four 'reliability' dimensions to help pose relevant questions around whether, or to what extent, we should trust climate models. He talked through these, labelled R1 to R4, raising lots of interesting points in each. His own heart is clearly in the last of these, which is to do with the acceptance of models within cultures/societies:

R1 Coding precision. Is the computer code stable? By which he seems to mean: is it packed full of bugs or not, though that is not clear. Is the code portable, moveable? Assessed internally by coding centres. Public demands may be made to open up the model codes. Mentions CRU being asked to release the computer code for their data analysis work. <no mention of the shambles revealed by the Harry Read Me file> Mentions the Climate Code Foundation, who want to help restore public trust in climate science by getting code published automatically and routinely. <I suspect the climate establishment is not quite ready for that.>

R2 Statistical accuracy. To do with verification or validation. Do the results resemble observed climates? Recalled his own work on model validation from 1988 to 1998. <did not mention prediction, which is more important, since models can be tweaked to fit any historical sequence> Asserts that models of open and complex systems cannot be verified (Oreskes), and that ‘well-rehearsed difficulties’ include the quality of observations and the extent to which models are independent of the data sets used to validate them. He notes there are many ways of judging model outputs. How is a model to be judged adequate? Refers to one assessment by Julia (he refers to her as ‘Jules’) Hargreaves, of a forecast by James Hansen at GISS. <Was it my fevered imagination, or did his delivery become a bit more hesitant, with more hums and ehs and little coughs, as he moved on to this topic?> <Now she seems to have had the novel wheeze of comparing Hansen’s predicted rising trend with a temperature held constant at the 1988 value, to see if there was skill in Hansen’s forecast. This is a bit like seeing a bus approaching in the distance and guessing when it will arrive, then claiming skill-superiority over a hypothetical someone who asserted that it would stay where it was. Not impressive. No mention was made of corollaries to Hansen’s temperature forecasts, such as his claim that New York would be flooded by an increase in mean sea level, and that the high temperatures would cause civil unrest in the remaining dry streets (see: ). For various other forecasts or extrapolations or projected scenarios associated with computer models, the partial list to be found here is instructive about ways in which these models have failed and can be expected to fail: (scroll down several screens to the ‘Computer Climate Models’ section)>
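For what a 'skill' comparison of this sort actually computes: the usual definition is a mean-squared-error skill score against a reference forecast, positive when the forecast beats the reference. A minimal sketch with invented numbers (not Hansen's or Hargreaves' actual figures):

```python
import numpy as np

def skill_score(forecast, reference, observed):
    """1 - MSE(forecast)/MSE(reference); positive means the forecast beat the reference."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

# Invented temperature anomalies (degrees C), purely for illustration.
observed = np.array([0.00, 0.05, 0.12, 0.18])
forecast = np.array([0.00, 0.09, 0.17, 0.26])  # a rising-trend forecast
persistence = np.zeros(4)                      # "stays at its start-year value"

print(round(skill_score(forecast, persistence, observed), 2))  # -> 0.79
```

Note the weakness the commenter identifies: against a persistence baseline, any forecast in roughly the right direction scores well, so a positive skill score against "no change" is a low bar.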

R3 Methodological quality. Focuses on the quality of model inputs, structure, expertise captured in the model, and the standing of the modellers. Refers to criticisms of inappropriate levels of expertise in UK Met Office modeling made in the House of Lords some years ago <what’s this?>. Raises the question of how to interpret differences between models. The IPCC uses ‘multi-model ensembles’ and uses the spread to assess uncertainty as if the models were all independent. MH notes that they are not.

R4 Social acceptability. Requires scrutiny of ‘climate models as social objects in social worlds’. Which ‘political actors are offering their endorsements’, or even ‘bearing witness’, for or about these models? Refers to his PhD student Martin Mahoney’s work on PRECIS. This is a Met Office model, distributed to over 100 countries in the last 10 years or so. PRECIS is seen as effective for ‘raising awareness’. It ‘can make climate change real’. It can ‘attract public endorsements’. It was said by some player that it had been ‘useful to have the UNDP seal of approval on it’. Clearly PRECIS has been found by activists to be useful for ‘convincing policy makers that they should take a stand’. Hulme also claimed that the ‘epistemic authority of the model’ is safeguarded by its continued links to the UK Met Office. <Now a Met Office governed by the man who turned the WWF into a political organisation zealously campaigning on AGW, and which backed away from seasonal forecasting after making a fool of itself with confident talk of snowless winters and BBQ summers, would not appeal to me as a source of ‘epistemic authority’ if I were marketing climate models to anyone other than the most gullible of governments. I suspect the apparent success of PRECIS will one day make a useful study for anthropologists trying to make sense of the late 20th and early 21st century adoration of computer models and the ways in which they seemed to overwhelm policy makers.>

A statement towards the end of his talk: ‘Models offer the single most powerful way for scientists to organise their knowledge’. <I am inclined to the view that they can get in the way of scientists’ knowledge. I remember that when I studied theoretical physics, as a not very bright student - probably the least able in my class - I still had disdain for those who resorted to computers to get results, often in very crude, inelegant ways, rather than grappling with theory and experiment and new ideas. Computer modeling easily became an end in itself, with its own language and challenges, and it seemed like something which contributed nothing of intellectual value, albeit being most welcome for doing arithmetical chores. In this way, I suspect the apparent rise to prominence of computer models has damaged progress in climate science, diverting funds and fame away from work of more modest import but more lasting value. As it is, it sometimes seems to me, as a rank outsider, that climate science has degenerated into the study of climate models.>

Another interesting remark towards the end: ‘Understanding the social authority of climate models becomes critically important’. <This I think is what particularly interests Prof Hulme, and which may have led to the otherwise misleading title of his book, ‘Why We Disagree about Climate Change’, a book which concentrates on social and political topics, by and large taking the IPCC line on climate as a given.>

Final slide showed that UK Met Office climate projections are derived entirely from climate models. The Netherlands, on the other hand, were much more eclectic.

‘UK 2009 – scenarios extracted from one model hierarchy from one modeling centre funded by the British Government. Offered sophisticated probabilities of outcomes at fine temporal (hours) and spatial (5 km) scales.

Netherlands 2006 – scenarios which used a synthesis of insights and methodologies: climate modeling, historical evidence, local meteorological reasoning. Expert judgement to blend different information sources.’

<Now I haven’t seen either of them. I suspect the UK one is fine-grained baloney, replete with snowless winters and long hot summers, and that the one from the Netherlands is altogether more grounded in reality. But that’s mere prejudice, from someone whose first formal instruction in FORTRAN came from someone who advised that computers should not be used for anything where you do not know what the right answer is. Now I see that that can include the writing of software to provide predetermined answers of highly desirable ‘rightness’, such as ‘exposing’ a dominating role for CO2 in the atmosphere.>

Final quote, referring to the IPCC: ‘Its authoritative deployment of climate models and their simulated knowledge of putative future climate becomes dangerous if, I would suggest, in doing so it erases the differences that exist between cultures and societies in the different forms of social authority that are granted to climate models.’

<I still don’t know what this means. He read it out very deliberately from his notes, and so it is not just a remark thrown in at haste to conclude his talk. He seems a bright chap, much brighter than me, so I think there will be a lot of interesting ideas in and around it. I just need more help to see them. >

Overall, an interesting talk, one in which Prof Hulme has provided a useful framework for assessing computer models, but a talk in which he has shied away from any ruthless application of the framework. It is as if he is reasonably content with the way in which things have been going with the IPCC and its impact on world affairs, and it is only his intellectual curiosity that draws him to talk of model impacts in society. But for me, and many others, our main concern is that this impact, this use of computer models, has been excessive and harmful, and could yet cause a great deal more harm before the dust they deserve has settled upon them.

Oct 22, 2010 at 6:50 PM | Unregistered CommenterJohn Shade

When people talk of the models, I link directly to the IPCC

If you do not have clouds, you do not have jack. An albedo decrease of only 1%, bringing the Earth’s albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, which is the same amount a doubling of CO2 will give without the unproven feedbacks.

And…

"... the amplitude and even the sign of cloud feedbacks was noted in the TAR as highly uncertain, and this uncertainty was cited as one of the key factors explaining the spread in model simulations of future climate for a given emission scenario. This cannot be regarded as a surprise: that the sensitivity of the Earth’s climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth’s radiation budget since the early 1970s (Vonder Haar and Suomi, 1971). Clouds, which cover about 60% of the Earth’s surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth’s albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. Simultaneously, clouds make an important contribution to the planetary greenhouse effect. ..."
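The arithmetic in that quoted passage is easy to check against the standard zero-dimensional black-body energy balance, T = [S(1 − α) / 4σ]^¼. A minimal sketch, assuming a solar constant of roughly 1361 W/m² and the CODATA Stefan–Boltzmann constant (neither figure is stated in the quote):

```python
# Black-body radiative equilibrium temperature as a function of planetary albedo.
# Assumptions (not given in the quoted IPCC text): solar constant S ~ 1361 W/m2,
# Stefan-Boltzmann constant sigma = 5.670374e-8 W m^-2 K^-4, no greenhouse effect.

SIGMA = 5.670374e-8   # W m^-2 K^-4
S = 1361.0            # W m^-2, approximate solar constant

def equilibrium_temp(albedo):
    """T such that absorbed shortwave S*(1 - albedo)/4 balances emitted SIGMA*T^4."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t30 = equilibrium_temp(0.30)
t29 = equilibrium_temp(0.29)
print(f"T(albedo=0.30) = {t30:.1f} K")
print(f"T(albedo=0.29) = {t29:.1f} K")
print(f"Warming from a 1% albedo drop: {t29 - t30:.2f} K")
```

Dropping the albedo from 0.30 to 0.29 raises the equilibrium temperature by roughly 0.9 K, which is indeed the ‘about 1°C’ claimed in the quote.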

Oct 22, 2010 at 7:21 PM | Unregistered CommenterEd Forbes

Thank you John S for that summary. Hulme often talks in riddles that make no sense to a normal person. I think it's deliberate obfuscation.

I think the message has to continuously be put out that climate models are not as good at predicting (or projecting) the future as is astrology.

Oct 22, 2010 at 7:26 PM | Unregistered CommenterPhillip Bratby

As I recall, they get their best backcasting results from mixing an ensemble of 22 runs with different starting conditions and different models. Color me unimpressed. I also disagree with mixing proxies into mush. Pick your best proxy and let's examine it by itself. If it's garbage (has significant divergence problems) then you don't increase the intelligence of the data by mixing it with other noisy data.

It works great if you're trying to make hockey stick handles to append to your hockey stick blade.

Oct 22, 2010 at 8:05 PM | Unregistered CommenterKen Coffman

A useful talk, if only to see which way the climate community are going to spin next.

From the talk I got the impression that they were hawking one of the models round to different countries and then getting meteorologists to illustrate it with local trends to make it more realistic. I think in other fields it’s called artistic license.

He also seemed more concerned with how to get people to believe the models were reliable than actually prove that they were.

Oct 22, 2010 at 9:30 PM | Unregistered CommenterTinyCO2

Great comments, people. The Ronco Wank-O-Matic Climate Model is starting to make funny noises. Did anyone check the oil lately?

Oct 22, 2010 at 11:13 PM | Unregistered Commenterjorgekafkazar

Hulme presents a procedure for the validation of a climatological model that is fundamentally illogical. According to Hulme, one compares the predicted values of the dependent variable of the model to the measured values. If the predicted values "resemble" the measured values, the model is validated. Otherwise, it is invalidated.

However, perceived resemblance lies in the eye of the beholder, with the result that person A may perceive resemblance while person B perceives non-resemblance. In this way, Hulme's idea violates the law of non-contradiction, the cardinal principle of logic.
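One way to take ‘resemblance’ out of the eye of the beholder is to replace it with a numerical skill score agreed before the comparison is made. A minimal sketch using root-mean-square error (the choice of metric and the anomaly figures below are illustrative assumptions, not anything Hulme proposed):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error: an objective stand-in for eyeballed 'resemblance'."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

# Hypothetical annual temperature anomalies (degrees C): one model run vs. measurements
model_run = [0.10, 0.25, 0.30, 0.45]
measured  = [0.12, 0.20, 0.35, 0.40]
print(f"RMSE = {rmse(model_run, measured):.3f} degrees C")
```

Two people can disagree about whether two curves ‘look alike’; they cannot disagree about whether the RMSE falls below a threshold fixed in advance.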

Oct 23, 2010 at 2:43 AM | Unregistered CommenterTerry Oldberg


In validating a model or a method, say studying fingerprints taken at the scene of a crime, you do not just look for points of resemblance or similarity to determine a match; you also look for mismatches. Just one mismatch can mean the difference between innocence and guilt, or between a hypothesis being correct and completely wrong.

The models, all the models, can neither forecast the future climate nor even be used in backcasts.

Oct 23, 2010 at 10:01 AM | Unregistered CommenterMac
