Friday, Sep 6, 2013

Garden shed tinkerers

There is a fascinating layman's introduction to climate models over at Ars Technica. Author Scott Johnson starts out with the standard potshot at global warming dissenters, takes a look at how a GCM is put together, and talks to lots of climate modellers about their work and all the testing they do. It has something of the air of a puff piece about it, but that's not to say that it's not interesting.

Here's how it opens:

Talk to someone who rejects the conclusions of climate science and you’ll likely hear some variation of the following: “That’s all based on models, and you can make a model say anything you want.” Often, they'll suggest the models don't even have a solid foundation of data to work with—garbage in, garbage out, as the old programming adage goes. But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?

Climate models are used to generate projections showing the consequences of various courses of action, so they are relevant to discussions about public policy. Of course, being relevant to public policy also makes a thing vulnerable to the indiscriminate cannons on the foul battlefield of politics.

Skepticism is certainly not an unreasonable response when first exposed to the concept of a climate model. But skepticism means examining the evidence before making up one’s mind. If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools.

"Useful scientific tools"? Well yes, I think I would agree with that. The article describes how a divergence of model and real-world behaviour can help uncover gaps in our knowledge. This is great - this is what a GCM should be for. What it isn't is a prediction of the future - something we can pin policy measures on. But while the article is entitled "Why trust climate models?", in fact to its credit, the article doesn't push this more expansive claim about the usefulness of climate models very much. The case seems to be that because modellers do a lot of testing against historic data we should trust the models. Not convincing at all, in my opinion.

The flimsiness of the case also becomes clear when Steve Easterbrook makes his entrance:

Easterbrook has argued against the idea that an independent verification and validation protocol could usefully be applied to climate models. One problem he sees is that climate models are living scientific tools that are constantly evolving rather than pieces of software built to achieve a certain goal. There is, for the most part, no final product to ship out the door. There's no absolute standard to compare it against either.

To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well. Whether you add it or not then depends on what you're trying to achieve. Is the primary test of the model to match certain observations or to provide the most realistic possible representation of the processes that drive the climate system? And which observations are the most important to match? Patterns of cloud cover? Sea surface temperature?

Here, Easterbrook seems to be making a pretty strong case that climate models have no part to play in the policy process. How can you have a model that can't be built and tested to engineering standards informing policy? How can the public trust the moving feast that he describes? And if models really are being built without a specific goal in mind then the funding councils surely have some fairly pointed questions to answer.

The public is being asked to fork out lots of money on the basis of climate model output. Climate modellers have to decide if they are going to be garden-shed tinkerers or engineers whose findings are robust enough to inform the policy process.

 


Reader Comments (105)

GIGO is a standard problem for all models, not just climate: the assumptions you make are as key to what you see as any maths you use. And it's those 'assumptions' where human factors come in - remember, many working in this area 'believe' in what they do, and that is often the route to poor work. All the increase in computer power has meant is that they can be wrong faster, not that they get it right more often.

Sep 6, 2013 at 9:03 AM | Unregistered Commenterknr

"How much evidence? Shed loads"

Sep 6, 2013 at 9:10 AM | Unregistered Commentercreosote

One factor they don't factor in is their motive.

Sep 6, 2013 at 9:11 AM | Unregistered Commenterjamspid

Hi Andrew,

Thanks for highlighting this article, and for not dismissing it out of hand.

I think it is incumbent upon climate modellers to explain what might be reasonable expectations of their models, particularly in terms of prediction. I think that expecting engineering-level verification and validation is optimistic for Earth-system class models. I don't think that climate scientists generally claim that level of V&V - although weather modellers probably can.

Basically, the adequacy of the models is different for different things. You shouldn't throw out all of the information that you gain from the models, because one small part of the model is inadequate or experimental. It would be foolish to ignore robust information gained from these things - and no matter which way you slice it, lots of the information from climate models is robust.

Climate science isn't the only place that you see these big models being used. There is lots of literature on galaxy formation, nuclear physics etc. You should check it out!

Doug

Sep 6, 2013 at 9:14 AM | Unregistered CommenterDoug McNeall

"But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?"

Everybody who understands that an unvalidated model is useless* really knows enough to judge what comes out of a climate model.

(And who understands that approximately reproducing a small sample of historical data in no way constitutes "validation". It constitutes what can be termed a 'sanity check' but that is a long way from being a validation.)

_______________________________________________________________________
* I should have written "worse than useless".

Sep 6, 2013 at 9:17 AM | Registered CommenterMartin A

The thought occurred to me that perhaps it would be interesting to apply some of the techniques of artificial intelligence (AI) to historic data and then see if they could predict future climate changes better than existing models do. A quick Google search turned up the article below. Unfortunately neural networks are like black boxes. Even if they were to produce better results than climate models they would not necessarily improve our understanding of the processes controlling the climate.

The article is behind a pay-wall but there is a lengthy abstract which I have copied below.

Application of artificial neural networks in global climate change and ecological research: An overview
ZeLin Liu, ChangHui Peng, WenHua Xiang, DaLun Tian, XiangWen Deng, MeiFang Zhao
Chinese Science Bulletin December 2010, Volume 55, Issue 34, pp 3853-3863
http://link.springer.com/article/10.1007%2Fs11434-010-4183-3

Abstract

Fields that employ artificial neural networks (ANNs) have developed and expanded continuously in recent years with the ongoing development of computer technology and artificial intelligence. ANN has been adopted widely and put into practice by researchers in light of increasing concerns over ecological issues such as global warming, frequent El Niño-Southern Oscillation (ENSO) events, and atmospheric circulation anomalies. Limitations exist and there is a potential risk for misuse in that ANN model parameters require typically higher overall sensitivity, and the chosen network structure is generally more dependent upon individual experience. ANNs, however, are relatively accurate when used for short-term predictions; despite global climate change research favoring the effects of interactions as the basis of study and the preference for long-term experimental research. ANNs remain a better choice than many traditional methods when dealing with nonlinear problems, and possesses great potential for the study of global climate change and ecological issues. ANNs can resolve problems that other methods cannot. This is especially true for situations in which measurements are difficult to conduct or when only incomplete data are available. It is anticipated that ANNs will be widely adopted and then further developed for global climate change and ecological research.
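
(A minimal sketch of the idea Roy describes, for the curious: train a small neural network on lagged values of a historic series, then score it on a held-out "future" segment. The sine-plus-noise series and the scikit-learn setup are invented stand-ins for illustration, not real climate data or any published method.)

    # Toy ANN forecast test: fit on the "past", score on the held-out "future".
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.arange(1200)
    # Invented stand-in series: a cycle plus a slow trend plus noise.
    series = 0.5 * np.sin(2 * np.pi * t / 120) + 0.001 * t + 0.1 * rng.standard_normal(t.size)

    lags = 24
    X = np.column_stack([series[i:i - lags] for i in range(lags)])  # 24 lagged inputs
    y = series[lags:]                                               # next value to predict

    split = 1000
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print("train R^2:", model.score(X[:split], y[:split]))
    print("test  R^2:", model.score(X[split:], y[split:]))  # the out-of-sample check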

Sep 6, 2013 at 9:19 AM | Unregistered CommenterRoy

I work in software, and the fact that software evolves is no reason not to validate it. That's why you have controlled releases and versioning. Each release of the model should have proposed goals which can be measured against reality. The fact that version N + 1 is being developed while this is going on has no relevance except that bugs uncovered in the previous version can then be corrected in the next version.

It's not only sensible, but it's standard practice in every other field of software engineering.

They are trying to argue climate models are a special case of software that should be excluded from the normal industry standards of verification and validation. Too magic and complicated to be ordinary software, in other words. More self-puff.
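
(A minimal sketch of the controlled-release idea, assuming made-up hindcast and observation arrays and an invented 0.2 degC skill threshold; this is not any modelling centre's actual procedure.)

    # Per-release validation gate: a tagged model version must meet a stated
    # skill target against observations before it ships. Numbers are invented.
    import numpy as np

    def rmse(predicted, observed):
        return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)))

    def release_gate(version, hindcast, observations, max_rmse=0.2):
        score = rmse(hindcast, observations)
        passed = score <= max_rmse
        print(f"model {version}: RMSE = {score:.3f} degC -> {'PASS' if passed else 'FAIL'}")
        return passed

    obs = np.array([0.12, 0.18, 0.25, 0.21, 0.30])   # observed anomalies (made up)
    v1 = np.array([0.10, 0.20, 0.22, 0.28, 0.27])    # version N hindcast (made up)
    release_gate("v1.0", v1, obs)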

Sep 6, 2013 at 9:23 AM | Unregistered CommenterTheBigYinJames

As a good cook says, "The proof of the pudding is in the eating".

The track-record of historic GCMs is abysmal, so wait until they've consistently proved themselves before basing policy on their predictions.

Sep 6, 2013 at 9:26 AM | Unregistered CommenterJoe Public

Doug

Can we think of other models on the scale of a GCM with policy relevance?

Sep 6, 2013 at 9:29 AM | Registered CommenterBishop Hill

This is a very good reason not to trust the models.

http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/

Sep 6, 2013 at 9:30 AM | Unregistered CommenterDon Keiller

@TheBigYinJames Software engineering practices like version control have been standard in climate modelling for many years. Model releases are indeed measured against reality in a planned and controlled way.

Doug

Sep 6, 2013 at 9:30 AM | Unregistered CommenterDoug McNeall

What a great question at 9:29 AM.

Sep 6, 2013 at 9:32 AM | Registered CommenterRichard Drake

No one is saying that models are always bad.

What sceptics are saying is that the models used by the Global Warming community were not sophisticated enough to properly model the real world. And yet they were held up as proofs that the science was understood. Then, when they started to go wrong, the models were not amended in the light of our new experience, but politicised by groups who believed that CO2 concentrations alone held the key to climate changes. So important features of the climate were ignored and repressed in favour of the AGW hypothesis.

If the models were to include thunderstorms and accurate cloud cover, for example, incorporating Svensmark's work, then they would probably act as good predictors of future climate. Unfortunately, we would then probably find that the climate will not actually change very much at all, and that there is no point in doing anything about it...

Sep 6, 2013 at 9:32 AM | Unregistered CommenterDodgy Geezer

Doug,

that's good to know. So why was the article claiming that this was somehow impossible for climate models due to their constantly evolving nature? He says he talked to loads of climate modellers; did he just make that part up?

Sep 6, 2013 at 9:36 AM | Unregistered CommenterTheBigYinJames

More alarmist prognostication: post-normal justification stirred in with speculation about what realists actually think - how they try to second-guess, and then to strike down, real science with a blarney of words and irrationality.

The game goes on, and among the many refrains from the green lobby perhaps the greatest is "the climate models predict", when nothing could be further from the truth: climate models postulate and suggest - that is all. Climate-modelled statistical projections are a bolster to the mind-game casuistry that the environmentalist campaign groups specialise in.

Politicians love the 'what if' scenario; the IPCC uses it, and GCMs are the WMD in its fake armoury. No matter how many algorithms, no matter how many mega-giga-yottabytes, modelling dynamic and chaotic climate systems when the processes are as yet dimly understood is so far beyond our capabilities as to make GCMs about as useful as reading the runes and soothsaying.

"Garden shed tinkering" - is about right.

Sep 6, 2013 at 9:38 AM | Unregistered CommenterAthelstan.

"But skepticism means examining the evidence before making up one’s mind."

However, being a climate modeler means making up your mind first, fitting the model to your assumptions, then declaring that your circular reasoning - getting out exactly what you put in - is somehow proof of the assumptions made.

Then they proclaim that a hindcast is sufficient for validation when real modelers know it is very easy to accurately hindcast anything even with rank bad assumptions; the real test is prediction.
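
(JamesG's hindcast point is easy to demonstrate with a toy: a deliberately over-parameterised fit reproduces an invented "historic" record almost perfectly, then falls apart as soon as it is asked to predict. Nothing below is anyone's actual model.)

    # Over-fitted hindcast vs. failed forecast, on an invented noisy trend.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(40.0)
    past = 0.01 * t + 0.2 * rng.standard_normal(t.size)   # "historic" record (made up)

    coeffs = np.polyfit(t, past, 15)                      # 16 free parameters: rank bad assumptions
    hindcast = np.polyval(coeffs, t)
    forecast = np.polyval(coeffs, np.arange(40.0, 50.0))  # extrapolate 10 steps ahead

    print("mean hindcast error:", float(np.abs(hindcast - past).mean()))  # tiny
    print("first forecast values:", forecast[:3])         # blow up off the fitted range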

And when the predictions fail the climate modeler first tries to protect his assumptions for 10 years at least before finally being forced to accept (kicking and screaming) that the assumptions might possibly be wrong. He is however absolutely sure that the putative extra warming he expected must be hidden by an invented cooling even if by an entirely unphysical mechanism or by contradictory assumptions. At no time will he accept the manmade warming is not there even though nature is obviously trying to tell him this.

And faced with the fact that the model can predict nothing with any useful accuracy his duty is always to pretend that this garbage output is still good enough for policy with the facile and erroneous assumption that "it's all we have".

Well, I use models every day, creating and running them. My models are all a darn sight more accurate than any climate model, but I still don't trust them. In fact most folk in other disciplines do not trust models to the extent that climate modelers do, because we live in the real world where errors have big consequences.

Climate models are not just imperfect, they are utterly inadequate for policy. Without the models there is no need for alarm and no capability to separate out any contribution from humankind at all. So model adequacy is not a point to simply dismiss; it is of vital importance. Nobody should just be taking a climate modeler's word that his model is good when his results clearly show otherwise.

Sep 6, 2013 at 9:40 AM | Unregistered CommenterJamesG

Ars on climate is a bit like the Guardian but more so. Name one article reporting any study with doubts about CAGW. Thought not. And the comments sections are unreadable rants.

Sep 6, 2013 at 9:47 AM | Unregistered Commentermichel

Doug McNeall

If, as you claim, models are constantly being tested against reality before their output is released for public policy use, then it would help your cause if you could explain just exactly why they have been so utterly bad at forecasting.

Sep 6, 2013 at 9:52 AM | Unregistered CommenterJohn B

"One of the biggest sources of confidence in the models is that they give results that are broadly consistent with one another (despite some very different scientific choices in different models), and they give results that are consistent with the available data and current theory," Easterbrook said.

No, this is one of the biggest reasons to be skeptical about them.

Sep 6, 2013 at 9:54 AM | Unregistered Commentermichel

Doug McNeall

" I think that expecting engineering-level verification and validation is optimistic for Earth-system class models. I don't think that climate scientists generally claim that level of V&V "

Agreed, Doug, but the establishment is claiming a far higher level of V&V - one which leaves them with no option other than to enact legislation that WILL negate the predicted scenarios.

Doug, I have one question, a simple one: are model predictions/projections of future rates of warming improving? The UKMO have been producing "Decadal Forecasts" annually since 2005. How are they performing? It is simple and easy to demonstrate: plot the month-by-month model mean against HadCRUT3 - not 4, because it was not in existence when the predictions/scenarios were made.

There are 8 scenarios being played out: monthly model mean plotted against actual observed data. Just update all 8 every month and publish 8 charts with two lines on each. The UKMO has a very nice web site and is always looking to post interesting aspects of "Climate Science". What is not to like?

If done, and it shows an ongoing improvement, you never know: it might just help towards placating this particular "sceptic", and I suspect quite a few others. In my experience they tend to be rather keen on actual data.
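
(The two-line chart Green Sand describes is trivial to produce once the two monthly series are in hand. A sketch with placeholder arrays standing in for the model mean and for HadCRUT3; the numbers are invented.)

    # One chart per scenario: monthly model mean vs. observations.
    import numpy as np
    import matplotlib.pyplot as plt

    months = np.arange(96)                    # eight years of monthly points
    model_mean = 0.4 + 0.003 * months         # placeholder forecast mean
    observed = (0.4 + 0.05 * np.sin(months / 6)
                + 0.02 * np.random.default_rng(2).standard_normal(months.size))  # placeholder obs

    plt.plot(months, model_mean, label="model mean")
    plt.plot(months, observed, label="HadCRUT3 (placeholder)")
    plt.xlabel("months since forecast start")
    plt.ylabel("temperature anomaly (degC)")
    plt.legend()
    plt.savefig("decadal_forecast_check.png")  # repeat for each of the 8 scenarios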

Sep 6, 2013 at 9:55 AM | Registered CommenterGreen Sand

@Bishop Hill

That's a great question. I'm not aware of any that share *all* the same features as the "climate modelling challenge", but some share individual features.

*These guys (for example) run very large simulations, with complex physics models, that have a strong bearing on the future wellbeing of humans/the planet.

Weather models are large, complex, expensive, and need human interpretation. They've been very successful at informing decision makers.

Ironically, the oil and gas industry uses lots of the same techniques for uncertainty analysis that we're using in climate models - mainly for working out what is underground, given limited data.

I'm sure there are more, it'd be interesting to list them all.

* https://newsline.llnl.gov/employee/articles/2002/03-08-02-asci.htm (just in case the html doesn't work)

Sep 6, 2013 at 9:56 AM | Unregistered CommenterDoug McNeall

Doug McNeall states

" I think that expecting engineering-level verification and validation is optimistic for Earth-system class models. I don't think that climate scientists generally claim that level of V&V - although weather modellers probably can."

As a person who has spent his working life developing complex models, I think this is a load of nonsense. If you cannot validate a model it has no use other than playing games. I get depressed when I hear these kinds of comments from people who should know better.

Sep 6, 2013 at 9:57 AM | Unregistered CommenterConfusedPhoton

" I think that expecting engineering-level verification and validation is optimistic for Earth-system class models. I don't think that climate scientists generally claim that level of V&V - although weather modellers probably can."

"although weather modellers probably can"

Wow, that is some statement.

Sep 6, 2013 at 10:03 AM | Unregistered CommenterAthelstan.

"lots of the information from climate models is robust."

Could you elaborate please? I'd like to know which models' output has withstood the test of time.

"Climate science isn't the only place that you see these big models being used. There is lots of literature on galaxy formation, nuclear physics etc. You should check it out!"

Models used in galaxy formation are not presented as fact, treated as experimentation, or used as a replacement for observation.

It is the same in nuclear physics. Their models predicted the existence of the Higgs boson, but it wasn't until experiment showed with 99.9999426697% certainty that it existed that the Higgs boson was finally accepted.

Sep 6, 2013 at 10:03 AM | Unregistered CommenterTerryS

Doug
You'll see from the comments above that people here are not exactly totally convinced by your defence!
Let me agree that models have their uses, in climate as in other areas. The difference is that in those other areas reality acts as a restraint. In climate circles we have seen the most unlikely ideas postulated apparently in a form of mad auction to see who can make the output as scary as possible.
We have x models, all supposedly based on the laws of physics, that come up with x different results and there is no way in this universe of establishing which, if any, of these make the best sense simply because the reality that they are supposedly trying to model will not manifest itself for anything between 10 years and 100.
Meanwhile as Mother Nature decides not to co-operate the climate modellers continue to argue — not that models have their place (agreed) but that they continue to show that the hypothesis is correct and that the models are still within the error boundaries while they push the falsification date ever further into the future.
That is where the problem lies.

Sep 6, 2013 at 10:04 AM | Registered CommenterMike Jackson

It is a mistake to judge model output solely by its ability to match the fake figure of global average temperature. It could match and be wrong, or it could fail but match in terms of regional weather trends. I would rather judge a model by its intermediate results. Out here we don't see much of that reported. Like how does the water vapour change. Does any model match observations there? If not, knowing their assumptions about feedback, they can't be right. Here's where I mention my 'model one square meter' challenge, in vain I suppose.

Why don't they chuck out the consistently worst models from the ensemble? Politics, that's why. Why don't they talk about their failures? Funding, that's why.

Anyhow, the real problem with models is the cart before horse problem. Work on your modelling techniques, throw out the bad, reinforce the good, get to the point where a model, no matter whose, gets somewhere close to matching observations in detail. Then we can talk about using it to inform policy.

Sep 6, 2013 at 10:08 AM | Unregistered Commenterrhoda

Other models for policy: econometrics. That's why Haavelmo got a Nobel for his work in the 40s.

Time series analysis is a different approach to modelling, focusing on real-world data gathering.

This has been said loads of times over the last 10 years.

Sep 6, 2013 at 10:20 AM | Unregistered CommenterPrw

Doug "Ironically, the oil and gas industry uses lots of the same techniques for uncertainty analysis that we're using in climate models - mainly for working out what is underground, given limited data."

Well firstly, although the oil industry uses large-scale models for uncertainty analysis (I do this for a living), that is not at all the same as what happens in climate modelling. Oil industry reservoir models (which is what we are talking about) are static models in the first case, which are then subsequently subject to fluid flow using reservoir simulation. These are linear systems. We generally have good knowledge of the structural container (because of the large quantity of 3D seismic data) and knowledge of the sedimentary/depositional environment (from wells), but not so much their length scales or continuity. The fluid flow equations are pretty well known - Darcy's law. We also know other things, for example the boundary between oil and water, and we have other checks - if a reservoir model is initialised and run, then fluids should not move about until we start simulating production. We can also (with time-lapse/4D seismic) monitor large-scale pressure and saturation changes in some reservoirs. It's not real-time monitoring, as it is at snapshot times usually several years apart, but it does cover the whole reservoir response.

Climate models are quite different. They are models of chaotic physical processes based on a dual coupled Navier-Stokes equation problem. There are unknown or poorly specified initial conditions with woefully sparse data; many relevant parts of the initial state are not measured, instead being driven by interpolations from other models. And they attempt to model physical processes in nature where many of the mechanisms may not even be known, or processes may not even be known to exist. Climate models are riddled with unknown unknowns, much more so than reservoir models. And climate models don't have nice initial conditions that follow simple buoyancy laws; they are hyper-sensitive to even slight changes in the (unknown) initial conditions.

Secondly, and the most important part of the Bish's question: oil industry reservoir models are not used to inform public policy; they are used privately by oil companies to assess risk and uncertainty and make risk-based decisions based on expected monetary value. The other big difference is that in the oil industry we have a very clear understanding of how fickle Mother Nature can be in making our predictions invalid. The biggest difference in the oil industry is that we have to put our money where our mouth is and actually test our predictions against hard data: drilling wells and producing oil and gas. That means that experienced people in the oil industry have some sense of humility rather than hubris, and recognise it's only a model and that reality is way more complicated - even for the comparatively simple linear systems that we model in the oil industry.
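
(For readers who don't know it, the Darcy's law mentioned above fits in a few lines. The rock and fluid numbers below are invented, typical-order values, not from any real reservoir.)

    # Darcy's law for single-phase linear flow: Q = -(k * A / mu) * dP/dx
    k = 1e-13      # permeability, m^2 (about 100 millidarcy)
    A = 1.0        # cross-sectional area, m^2
    mu = 1e-3      # fluid viscosity, Pa.s (water)
    dPdx = -1e4    # pressure gradient, Pa/m (pressure falling along x)

    Q = -(k * A / mu) * dPdx
    print(f"volumetric flow rate: {Q:.1e} m^3/s")  # 1.0e-06 m^3/s for these values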

Sep 6, 2013 at 10:25 AM | Unregistered CommenterThinkingScientist

Let's see: 73 models are built by people who a priori believe that the average height of a human being in 2100 will be 100mm because of CO2 in the atmosphere. They then run their programmes and, lord help us, we have 73 different answers ranging from 120mm down to 0.3mm. To check their results they take the rise in CO2 out of their models and, lo and behold, the average height in 2100 turns out to be 2.5m. From this they can assume with 95% certainty (or at least 97% of them can) that CO2 causes a reduction in average human height.

Legions of other scientists then write papers giving figures for the numbers of humans likely to be victims of cats, birds of prey and falling down drains, and there is a widespread rolling of eyes and rending of clothes at the upcoming doom.

Meanwhile, the "come off it" brigade, with a peerless prediction rate of 100%, are called holocaust deniers and likened to Nazis.

In 2020 the average height of a human being has risen by 100mm from the prediction date. The modelers insist that the models are right, maybe just a little wrong, while Kevin Trenberth and his growing band of followers claim that the smaller people are hiding in caves in Afghanistan, and moreover that the measured height increase is wrong.

Isn't that the state of height science at the moment?

Sep 6, 2013 at 10:36 AM | Unregistered Commentergeronimo

Too late... it's all well underway (Environmental Modelling & Software):

http://www.sciencedirect.com/science/article/pii/S1364815212002435

I largely skimmed this piece... saw some stuff about Standards and Black Box Modelling. I don't think it gets into the deep grass around IVV&T; it probably remains at various flavours of evaluation. Test against 5 other models and hand-tweak it down the middle, I suspect. Stakeholders need big wallets and little brainpower... computer says Yes!

Sep 6, 2013 at 10:38 AM | Unregistered Commenterex - Expat Colin

"Ironically, the oil and gas industry uses lots of the same techniques for uncertainty analysis that we're using in climate models - mainly for working out what is underground, given limited data."

Seriously?

If the accuracy of climate model predictions had been the basis for private money investment you would all be in prison for fraud.

Which, incidentally, is where I think the CAGW hysteria will finally end up. At some point in the next ten years some brave soul is going to sue a government. In court they'll pass the buck to the Met Office or NOAA, who'll pass it to the IPCC, who'll pass it to the university climate departments, who'll drop it on the individual scientists, who'll pass it to the activists/journos.

And everyone will get sued. Can't wait.

Sep 6, 2013 at 10:40 AM | Unregistered CommenterStuck-Record

As for examples of other models which inform policy, economic models come to mind. However I have no idea how they compare for complexity etc and, as we know, economists have successfully predicted 15 of the last 6 recessions (with apologies to whoever coined that phrase).

Sep 6, 2013 at 10:43 AM | Registered Commentermikeh

Just curious to know what models were used in the 70's to predict the coming ice age scare popular at the time.

Sep 6, 2013 at 10:47 AM | Unregistered CommenterStu

I've said it before and no doubt I'll say it again. As someone who worked in the nuclear industry with thermal-hydraulics computer models, 90% of the effort was spent on the expensive task of V&V. A lot of money was also spent on experimental facilities to provide the development and validation data. In comparison, very little time and money was actually spent on using the models to inform the design and safety aspects (the predictions).

Sep 6, 2013 at 10:47 AM | Registered CommenterPhillip Bratby

"Can we think of other models on the scale of a GCM with policy relevance?"

An excellent question. Perhaps an example is the modelling of a nuclear detonation, where models are used in the design of new weapons. The models became necessary when the nuclear test ban treaties came into force, and there were no methods to test new designs. I wonder how well they work? Another example could be the modelling of fusion reactors. I wonder if models accurately predicted the eventual "break even" at JET a few years ago?

To claim that, although past GCM methodology has been falsified by nature, the new GCMs are fit for purpose, even though they can not be validated for decades, is incredible.

AFAIK GCMs are numerical time step integrations that simulate a poorly defined, chaotic system. As such they have exponential error accumulation and are forced to apply low pass filters to the propagation of scalar and vector fields in order to maintain "stability". In my opinion what they are trying to do is mathematically impossible.
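
(Roger Longstaff's exponential error accumulation can be seen in miniature with the classic Lorenz-63 system: two runs whose initial conditions differ in the twelfth decimal place end up an attractor-width apart. A sketch assuming SciPy is available; the parameters are the textbook values.)

    # Two Lorenz-63 runs differing by 1e-12 in one initial coordinate.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_span = (0.0, 40.0)
    t_eval = np.linspace(*t_span, 4001)
    a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
    b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-12], t_eval=t_eval, rtol=1e-10, atol=1e-12)

    separation = np.linalg.norm(a.y - b.y, axis=0)
    print("separation at t=5 :", separation[500])   # still tiny
    print("separation at t=40:", separation[-1])    # grown to the size of the attractor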

Sep 6, 2013 at 11:00 AM | Unregistered CommenterRoger Longstaff

"But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?"

All of us. At least, anyone able to compare the output of the models with actual data and recognise whether the results are compatible. This requires no special skill beyond a general education.

The conceit that those unfamiliar with the inner workings of a model are unqualified to criticise it is a pitiful fallacy. After all, very few of us could create a computer model of an aeroplane, but no one will be confused as to whether the resulting actual plane can fly or not.

Might one suggest the current climate models are failing the flying test?

Sep 6, 2013 at 11:05 AM | Unregistered CommenterPeter Wilson

rhoda said:

"Out here we don't see much of that reported. Like how does the water vapour change. Does any model match observations there? If not, knowing their assumptions about feedback, they can't be right."

Like the tropospheric hot spot prediction? A problem is tying modelers down to making a prediction, rather than the vague climate projections they make at the moment, which are themselves based on economic projections.

They aren't modeling the climate response to what we are doing.

Sep 6, 2013 at 11:12 AM | Unregistered CommenterGareth

"To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well. Whether you add it or not then depends on what you're trying to achieve."

I found this paragraph particularly scary. I can't think of any reason other than to confirm a bias that one would deliberately leave out "more realistic physics".

If the "more realistic" code produces a model that matches reality less well, then the solution is to include this code, and work out why you now have a worse match. Even suggesting that the code may be excluded makes the entire motivation of the modelling effort suspect to me.

Sep 6, 2013 at 11:27 AM | Registered Commentersteve ta

"To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well..." (followed by an utterly fatuous justification).

No, Mr Easterbrook. What it means is that another part of your model is wrong and that the errors are compensating for each other, and as we all know (or should!), "two wrongs don't make a right". It's symptomatic of the shoddy work that's passed off as "science" by the computer-jockeys. I remember some years ago being horrified by the dog's dinner that called itself NASA GCM ModelE; I can only hope that the later versions are no longer comprised of reams of uncommented, unstructured, spaghetti FORTRAN.

As to "other complex models" - until the government sees fit to raise taxes based upon, for example, the structure of spiral galaxies, I'm not overly bothered about their accuracy or otherwise.

Sep 6, 2013 at 11:28 AM | Unregistered CommenterPogo

"climate models are living scientific tools that are constantly evolving rather than pieces of software built to achieve a certain goal." There have been so many thousands of model runs since this all started that they are unauditable and earlier errors are transposed into the next run, ad infinitum.

I commented on this on August 3rd, 12.55pm, referring to the "Proceedings of the ECLAT-2 Helsinki Workshop, 14-16 April 1999: A Concerted Action Towards the Improved Understanding and Application of Results from Climate Model Experiments in European Climate Change Impacts Research - Representing Uncertainty in Climate Change Scenarios and Impact Studies"

http://bishophill.squarespace.com/blog/2013/8/3/the-validity-of-climate-models-a-bibliography.html

Sep 6, 2013 at 11:30 AM | Registered Commenterdennisa

@ Doug McNeall Sep 6, 2013 at 9:56 AM

"Ironically, the oil and gas industry uses lots of the same techniques for uncertainty analysis that we're using in climate models - mainly for working out what is underground, given limited data."

The oil & gas industries invest their own money on the outcome of their models. Not so with climate related modellers who themselve depend upon generating more grants for more research.

Sep 6, 2013 at 11:45 AM | Unregistered CommenterJoe Public

"If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools."

This made me think of Tamsin Edwards' oft-used quote, 'all models are wrong, but some can be useful', as seen @flimsin.
The thought was reinforced when you wrote 'The flimsiness of the case also becomes clear when Steve Easterbrook makes his entrance'. Where's Freud when you need him?

Sep 6, 2013 at 11:49 AM | Unregistered CommenterBloke down the pub

But the problem is that government policy IS based on the models.

Sep 6, 2013 at 11:57 AM | Unregistered CommenterJohn Marshall

Pretty much everything has been said by those commenting above, however Doug has not responded to the points they have made. Can anything be realistically assumed from this lack of response?

Sep 6, 2013 at 12:12 PM | Registered CommenterDung

@Dung How about that Doug was doing some work? ;) Some interesting comments here, I'll try and respond to some later.

Sep 6, 2013 at 12:25 PM | Unregistered CommenterDoug McNeall

Climatologists showed their models reproducing every wiggle in the 'global temperature' of the 20th century, even with Mickey Mouse models from decades ago, yet still resort to handwaving and speculation when trying to explain recent phenomena such as the 'pause', despite claimed improvements in modelling.

You want to know where the 'missing heat' went? Just look in the model. What? You can't? So what are you modelling? Obviously not the climate. I see you cashed your paycheck, though. You're really good at that.

In what other field is an average of multiple runs of differently parameterised models presented as a model of a single 'run' of a real system?

In what other field are model runs subject to ad hoc filtering by an Anointed Practitioner prior to inclusion in results?

In what other field are noisy claims made about accuracy in modelling the past, while future predictions are little more than guesses?

Etc.

Three 'fields' spring to mind, in descending order of respectability: astrology, homeopathy, climatology ;)

Sep 6, 2013 at 12:33 PM | Unregistered CommenterJake Haye

It's vastly worse than garbage in garbage out.

Any knowledge of exponential error consigns most of these "models" to the bin.

With a 1% initial data error on a perfect model (LOL) you get only 2.5% "signal" (97.5% error) after just 365 days.

It's worse than that on time scale, initial data quality and model quality. Makes homoeopathy look scientific.
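
(AC1's figure appears to assume the 1% error compounds daily; the implied arithmetic, for anyone who wants to check it:)

    # 99% fidelity per day compounds to about 2.5% of the signal after a year.
    daily_fidelity = 0.99
    remaining = daily_fidelity ** 365
    print(f"signal remaining after 365 days: {remaining:.3%}")  # ~2.551%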

Sep 6, 2013 at 12:36 PM | Unregistered CommenterAC1

@Dung How about that Doug was doing some work? ;)

It's not allowed Doug. Once you make a contribution here you are legally bound to concentrate on Bishop Hill 24/7 for the rest of your days. Sometimes you Met Office modellers are so slow :)

Sep 6, 2013 at 12:36 PM | Registered CommenterRichard Drake

steveta

I can't think of any reason other than to confirm a bias that one would deliberately leave out "more realistic physics".

Quite so. And what about "Whether you add it or not then depends on what you're trying to achieve"?
If you're not trying to achieve the truth (or something akin to it) then why are you doing what you call science? And why are you trying to convince people of the accuracy and reliability of your research when you are prepared to use cod physics and dodgy data to come up with an answer that may have no meaning in the real world?

Sep 6, 2013 at 12:59 PM | Registered CommenterMike Jackson

I would echo ThinkingScientist's comments. I have been involved in modelling cardiac activation and arrhythmias. This is an area where one can collect tons of data, and there is a good deal of knowledge about the behaviour of isolated components of the system. However, any model is highly dependent on initial conditions, the internal structure and, most importantly, the non-linear dynamics of the cardiac cell and its distribution within the heart.

I grant that one can model some of the broad features of cardiac arrhythmias, and the mathematical brigade regard this as a great achievement. However, the use of models in determining things that actually matter - the way in which arrhythmias arise and how to predict them in an individual patient - involves questions that to date cannot be modelled, as we simply don't know enough about the basic science underlying arrhythmias.

Sep 6, 2013 at 1:16 PM | Unregistered CommenterRC Saumarez
