Saturday, Dec 21, 2013

Unqualified evidence

Following my post on the Royal Meteorological Society's evidence to the AR5 inquiry, Doug McNeall and I had a long and interesting exchange on Twitter. Although he arrived at his point somewhat elliptically, Doug appeared to want to suggest that although in Ed Hawkins' graph the observations are on the cusp of falling outside the envelope described by 90% of model runs, this did not actually represent falsification. In his view, the test was too harsh.

The precise determination of when the observations should be seen as inconsistent with the models is one for the statisticians, and I know that Lucia, for one, disagrees with Doug's view (and I feel pretty sure that Doug Keenan will say that they are both wrong). However, this is not actually germane to my original point, which is that the poor performance of the models to date - as represented by Ed's graph - needs to be communicated to policymakers. We are without doubt less confident than we were that the model ensemble captures the true behaviour of the Earth, even if we are not (in Doug M's view at least) absolutely certain that it does not.
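
To make concrete what is being argued over, here is a minimal sketch - using made-up numbers, not the CMIP5 runs or the observations behind Ed's graph - of the kind of check involved: generate a toy ensemble of model runs, take the 5th and 95th percentiles (the envelope containing 90% of runs), and ask whether the observed series sits inside it. Every figure and the 90% threshold itself are illustrative assumptions only; the point is simply what "on the cusp of falling outside the envelope" means in practice.

import numpy as np

# Toy example only: a synthetic ensemble of "model runs" and a synthetic
# "observed" series, checked against the 5th-95th percentile envelope.
rng = np.random.default_rng(0)

n_runs, n_years = 40, 30                      # hypothetical ensemble size and period
years = np.arange(1984, 1984 + n_years)
runs = 0.02 * (years - years[0]) + rng.normal(0.0, 0.10, size=(n_runs, n_years))
obs = 0.01 * (years - years[0]) + rng.normal(0.0, 0.05, size=n_years)

lower = np.percentile(runs, 5, axis=0)        # bottom of the 90% envelope, per year
upper = np.percentile(runs, 95, axis=0)       # top of the 90% envelope, per year

inside = (obs >= lower) & (obs <= upper)
print(f"Years inside the 90% envelope: {inside.sum()} of {n_years}")
print("Most recent year inside the envelope:", bool(inside[-1]))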

So returning to the Royal Meteorological Society's evidence, I asked Doug why they made the following unqualified statement of confidence in the models (my emphasis):

The Report devotes Chapter 9 to a comprehensive, balanced and realistic evaluation of climate models which is based on the published literature and draws extensively on the results of the Coupled Model Intercomparison Project Phase 5 (CMIP5). As stated in the report (Chapter 9, final draft) climate models are based on physical principles, and they reproduce many important aspects of observed climate. We agree with the report when it states that both these aspects contribute to a “confidence in the models’ suitability for their application in detection and attribution studies and for quantitative future predictions and projections”, and when it notes that “whereas weather and seasonal climate predictions can be regularly verified, climate projections spanning a century or more cannot. This is particularly the case as anthropogenic forcing is driving the climate system toward conditions not previously observed in the instrumental record, and it will always be a limitation.”

I was rather taken aback by Doug's response:

You call that unqualified!? Ha!

To me, the part that I have highlighted seems absolutely to represent a statement of unqualified support for the models. The remainder merely says that scientists can't tell whether they are any good in the very long term. I can only assume that Doug's response is based on a narrow reading of the text - in other words, that they are only saying that the basis in physics and the hindcasting ability contribute to a "suitability for prediction", not that such suitability has been achieved. If so, I would say that the Royal Meteorological Society has grossly misled the inquiry.

But even if this is the case, we know that the models run too hot over the medium term and the short term. We know that they incorporate the wrong value for aerosol forcing, so we would expect them to run too hot anyway. We know that scientists are now theorising that heat is currently being transported into the deep ocean by a process as yet undetermined and entirely unrepresented in the models.

So even if Doug's position is "the models are not yet falsified", we have to ask: where is the communication of the known problems with the models? Why has the Royal Meteorological Society not explained the situation to politicians?


Reader Comments (61)

Models are not falsified; they are either validated or not! These models have not been validated, nor are they seen to be showing any kind of realistic behaviour.

As a person involved in complex modelling for many years, I find this continual waffle about whether they are falsified or not, nauseating and totally unscientific.

Signed a real scientist

Dec 21, 2013 at 9:44 AM | Unregistered CommenterCharmingQuark

Another emergent property of the models is that they are suitable "in detection and attribution studies and for quantitative future predictions and projections" no matter what.

Dec 21, 2013 at 9:50 AM | Registered Commenteromnologos

Are the models designed to be used to bolster the case for more research money and also the case for more green taxes? If so, they are eminently suitable.

Dec 21, 2013 at 9:54 AM | Unregistered CommenterRoy

May I repeat the three questions I'd like the other side to answer?

Why don't the models match observation?

How can you propose unmodelled processes to explain the hiatus and still support the reliability of the models?

Why aren't the worst-performing models chucked out?

And may I observe that the desperation of those who continue to defend the ensemble is unseemly. When did clutching at straws become part of the scientific method?

Dec 21, 2013 at 10:00 AM | Unregistered Commenterrhoda

The Royal Meteorological Society have not explained the poor performance of the models because

1. the models are more closely aligned with their ideology than the observations.
2. many jobs in the field only exist because of alarmism driven by the models running hot
3. they're pompous people who won't admit to mistakes
4. they get money and in some cases fame and power from being at the centre of a crisis
5. everyone around them has the same mindset so they can't be completely wrong - can they?

I imagine many 'scientists' involved in climate alarmism are actually quite incompetent (like Jones) and have only succeeded because they stumbled upon this goldmine and were left to run amok, while others are bald bearded psychopaths.

Dec 21, 2013 at 10:08 AM | Unregistered Commenterjaffa

@ Rhoda

"When did clutching at straws become part of the scientific method?"

As Roy (above) states, it's when research funds approach exhaustion.

Dec 21, 2013 at 10:10 AM | Unregistered CommenterJoe Public

I don't believe that pedantic definitions or statistics matter. A layman can easily see the divergence between the models and observation.

Dec 21, 2013 at 10:18 AM | Unregistered CommenterSchrodinger's Cat

The precise determination of when the observations should be seen as inconsistent with the models is one for the statisticians, and I know that Lucia, for one, disagrees with Doug's view (and I feel pretty sure that Doug Keenan will say that they are both wrong).

I am in agreement with Doug McNeall on this. As for Lucia, I have tried in the past to discuss statistical issues with her, and was unable to even establish real communication.

I am also with Doug Mc in disputing the claim that the RMS statement of confidence was “unqualified”.

The main point, however, is clearly correct: that the RMS, and many others, are failing to inform policymakers about potentially-severe problems with the climate models—and policymakers are thereby being materially misled.

Dec 21, 2013 at 10:31 AM | Unregistered CommenterDouglas J. Keenan

Joe Public
My phrase was going to be "are running low" rather than "approach exhaustion". Great minds and all that ...

Rhoda
Re your Q3: our friend Brown from Duke University has argued cogently and trenchantly at WUWT that none of the models is right and on that basis an ensemble is equally wrong. Being nearly right is like being a little bit pregnant. For political decision-making purposes they are all wrong and not fit for purpose. Any rightness they may have today they didn't have yesterday and probably won't have tomorrow. They have all the predictive power and reliability of horse-racing systems.
Fling the lot of them out and start again with real scientists getting dirt under their fingernails and wet patches on the knees of their jeans: taking measurements, looking at what is happening outside the window, and reading reports of what happened over the last 1000 years and more from what evidence there is in old writings as well as rock strata, stalactites, ice cores, even tree rings. Above all, get the science right and stop making assumptions about greenhouse gases, black bodies, and hiding heat. They can stop listening to eco-warriors with an axe to grind and stop telling politicians what they think they think they want to hear (sic).
Then they can start building some models as long as they don't pretend they actually mean anything more than they've programmed them to mean.
And with any luck they can learn a bit of humility along the way.
And stop pretending any of this is one-tenth as important as they want us to think it is.

Dec 21, 2013 at 10:33 AM | Registered CommenterMike Jackson

Schrodinger's Cat: I don't believe that pedantic definitions or statistics matter. A layman can easily see the divergence between the models and observation.

As a layman, I can confirm that! I'd need pretty good odds before I would even consider betting on the models being right. Anyone willing to offer me 66/1?

Dec 21, 2013 at 10:35 AM | Unregistered CommenterTC

Apparently, the front fell off the models. The RMS needs to be towed outside the environment.

Dec 21, 2013 at 10:37 AM | Unregistered CommenterManniac

Models are not falsified; they are either validated or not! These models have not been validated (...)
Dec 21, 2013 at 9:44 AM CharmingQuark


Absolutely. As I have said before, an unvalidated model, if you are foolish enough to believe and act on its results, is worse than having no model and simply saying "sorry, we just don't know".

The unanimous agreement between the models says more about the deluded groupthink of their programmers than it does about their correctness, although in 'climate science' agreement with previous models is taken as validation of a newly programmed model.

If, as somebody said, "A model is an illustration of a hypothesis", then I suppose you could argue that the hypothesis they illustrate has been pretty conclusively falsified.

Dec 21, 2013 at 10:43 AM | Registered CommenterMartin A

The climate is a bit like the stock market. Past performance is not an indicator of future performance.

If the models in the graph were tuned to hindcast the past, then it is very clear that the hindcasting ability is no indicator of forecasting ability. The graph validates that conclusion.

What is the matter with science? The scientists produce data that shows their models are wrong, but then carry on insisting that the models are not wrong. Then the rest of academia, learned bodies and government advisors accuse sceptics of being deniers and claim that the models are right.

I'm glad I practised science in a different era. I don't think I could cope with it today.

Dec 21, 2013 at 10:46 AM | Unregistered CommenterSchrodinger's Cat

The real problem is that state sector bureaucracies have no real responsibility mechanisms.

Imagine a large company preparing for a stock market flotation. The chief executive and his prospectus team are signing their names to a legally binding honest opinion of the company's financial prospects. Meanwhile the marketing director has seen a graph showing that their competitors are overtaking them and the sales graph has begun a pronounced downward trend. He decides that the obvious trend is not enough to "falsify" his optimistic feelings about the firm's prospects - so he doesn't tell the CEO's flotation team and the prospectus doesn't mention the problem.

In the real world, a shareholder class action and criminal fraud investigation would ensue - resulting in long-term unemployment and, possibly, a few years' jail for all concerned.

Dec 21, 2013 at 10:47 AM | Registered CommenterFoxgoose

The stats guys may get their jollies from arguing over precise definitions as to falsification. I don't care; that is not a question which bothers me. The better question is 'Are these models suitable as an input to policy decisions?' They failed that years ago. What ought to be happening in the world of models is a serious effort to find out why all the models fail the same way. To a layperson, it would seem useful to ask whether they model all the processes in play and get them all correct. And the answer just must be no. It's possible that this is being done in private. Not good enough. It ought to be played out in the public arena. Or at least in the scientific one. But there is too much blind position-defending taking place. They are doubling down on their errors.

Dec 21, 2013 at 10:49 AM | Unregistered Commenterrhoda

I agree with charming quark that:

Models are not falsified; they are either validated or not! These models have not been validated, nor are they seen to be showing any kind of realistic behaviour.
As a person involved in complex modelling for many years, I find this continual waffle about whether they are falsified or not, nauseating and totally unscientific.

I could have written that myself, as my experience is the same as his.

Dec 21, 2013 at 10:51 AM | Registered CommenterPhillip Bratby

The policymakers are being told what they want to hear and the quangos, government departments, charities and green pressure groups are all complicit. Their aim is the same, to pillage the public finances for their own gain.

The policymakers say "Tell me that black is white and I'll give you a million quid (and take a million for myself)."

Guess what colour the Royal Society of Colour Identifiers tells them black is?

Dec 21, 2013 at 10:59 AM | Unregistered CommenterBuck

You mention in passing what statisticians might be interested in. In my view statisticians would wish to see the models converging over time both in near term and longer term projections.

But for a very long time we see the same order of magnitude in the differences in projections, indicating wide divergence in the projections and presumably in the assumptions.

This is not the way science works. One of the main goals of science is to develop theoretical models that describe the real world. These models are usually consistent with some fundamental metric, say Avogadro's number. So when Einstein wrote his 1905 paper, he was able to show that he could approximate Avogadro's number with a solid diffused in a liquid. He knew as all physicists know that had his estimate diverged by much, the methodology would be rejected.

But that does not happen with the GCMs. The value for climate sensitivity to CO2 is all over the place and has been from the start with little indication that the community of scientists working on the problem are anywhere near a consensus.

In my opinion, the continuing wide divergence of the models from a central estimate is the key statistic that all scientists and statisticians should focus upon. I suggest that lack of convergence over time may itself be considered refutation of at least the most alarmist theories.

Dec 21, 2013 at 11:00 AM | Unregistered CommenterFred Colbourne

“whereas weather and seasonal climate predictions can be regularly verified, climate projections spanning a century or more cannot. This is particularly the case as anthropogenic forcing is driving the climate system toward conditions not previously observed in the instrumental record, and it will always be a limitation.”

What this is saying is that climate is mathematically chaotic, and mathematical models cannot predict long-term chaotic behaviour.

I would add that anyone who professes to be able to predict actual long-term chaotic behaviour of the climate on a regional basis is lying. Even phrases such as 'the climate in X will be better or worse than now, in 100 years' time' would be the words of a charlatan.

Dec 21, 2013 at 11:18 AM | Unregistered Commenterson of mulder

Rhoda said 'What ought to be happening in the world of models is a serious effort to find out why all the models fail the same way'.

Is that so difficult? I would say that it must be a characteristic all these models have in common, which seems to be a positive climate sensitivity. Set that value at zero and try it again. Or is setting the value at zero a capital sin?

Dec 21, 2013 at 11:33 AM | Unregistered CommenterMindert Eiting

Son of Mulder,

The climate will be worse everywhere. It was perfect, everywhere, circa 1960.

A. Charlatan

Dec 21, 2013 at 11:38 AM | Unregistered CommenterNeil McEvoy

Mindert, if CS is a set value of the models, they have already failed. (Leaving aside that CS is an invalid concept, of course). The models are supposed to produce temperature figures, predictions, projections, whatever. You get the CS afterwards from the graph. It may well be that the failing premise is that of positive WV feedback, or clouds, or something else. One would seek a common factor, of course. But that common factor is, must be, the underpinning crux of the entire AGW hypothesis. It cannot be questioned.

Dec 21, 2013 at 11:52 AM | Unregistered Commenterrhoda

CharmingQuark is quite correct on "model validation" nomenclature. We should all be talking that way. Validate against historic data, and then as time marches forward continue to demonstrate validity of forecast (something the climate models are not able to do).

Dec 21, 2013 at 12:00 PM | Unregistered CommenterRob Schneider

"The precise determination of when the observations should be seen as inconsistent with the models is one for the statisticians"

You don't need statisticians when a thing is obvious at the first glance.

Dec 21, 2013 at 12:18 PM | Unregistered Commentersplitpin

Seems to me that the argument that climate models are not fit for purpose is the same argument used by those claiming that they are not yet falsifiable. In neither circumstance is there an argument for their use in policy prescription.

Dec 21, 2013 at 12:32 PM | Registered CommenterSimon Hopkinson

@ Foxgoose Dec 21, 2013 at 10:47 AM

"The real problem is that state sector bureaucracies have no real responsibility mechanisms."

Except self-preservation.

Dec 21, 2013 at 12:35 PM | Unregistered CommenterJoe Public

I've said this before:

A model is only a small imitation of the real thing.

Dec 21, 2013 at 1:24 PM | Unregistered CommenterLevelGaze

For what it is worth, I agree with Doug M and Doug K that the statement is a very weak endorsement of models. I also agree with Charmed Quark - validation is the litmus test.

Dec 21, 2013 at 1:32 PM | Unregistered Commenterbernie1815

Rhoda

Why don't the models match observation?
I'm sure there are many reasons for mismatches. The models do match pretty well with historical observations where the forcings are known. For projections (post-2005) the models must guess the forcings based upon historical averages: aerosols, the solar cycle, El Niño, La Niña etc. So, for example, the solar cycle just gone would have been modelled as a normal one, although it turned out abnormal. The degree to which those guesses match reality determines how well the models match observations in the period between 2005 and now.

How can you propose unmodelled processes to explain the hiatus and still support the reliability of the models?
I tend to doubt the 'hiatus'. I'm also not very familiar with the suggested processes or who is proposing them.

Why aren't the worst-performing models chucked out?
Is that what you would want or expect? I would want to understand what it is that causes "bad performance" and make the models better. MartinA seems to disagree with your diagnosis, as he says there is "...unanimous agreement between the models...". I find that hard to understand unless he means they are unanimous in projecting rising temperatures resulting from rising CO2 levels. Is that really in doubt - even in your community?

Dec 21, 2013 at 1:40 PM | Unregistered CommenterChandra

Chandra is one of our more sophisticated trolls in that his levels of obfuscation are craftier than most.

I challenge him to tackle this simple truth restated so elegantly by Dave Hoffer just the other day on WUWT:
"...Fear Not! For the CO2 is logarithmic and the T varies with the 4th root of P and that is the Physics."

Dec 21, 2013 at 1:52 PM | Unregistered CommenterLevelGaze

Models are qualified for use in the "real world" by engineers, not statisticians. Not only must the numbers match, but the physics and chemistry of the model must be correct if you want confidence in the model predictions.

As some have said, the initial sorting of models is not that difficult. Simply plot the data against the model for history and for predictions and if the model falls outside the range of the scatter in the measured data then the conclusion is that the model has the physics wrong or incomplete. What other conclusion can there possibly be?

So the message to policy makers is that the climate models have the physics wrong or incomplete and that the IPCC process has not been able to improve the models over their five iterations. There are no climate models that would survive an engineering review. I say this not having reviewed all climate models, but by inferring from the IPCC summary and the Ed Hawkins graph that if a completely meaningless "range of climate models" must be used to present a convincing picture then the individual models would certainly fail the simple test I described.

Dec 21, 2013 at 1:57 PM | Registered Commentervermonter

Well. When reality falls outside of 90% of the model-spread, then they can simply widen the spread.

Please, anyone, show me where model-spreads are narrowing, and how (and by whom) those models/model-runs were selected for incorporation into the spread.

Even if the basis for the use of the model ensembles were established, what sensible person would not be suspicious of results where the 'big answer' is so easily bolstered by adding WORSE-performing models to an ensemble? They should be getting better at predicting (sorry, "projecting"), not worse.

Dec 21, 2013 at 2:18 PM | Unregistered Commentermichael hart

" they are unanimous in projecting rising temperatures resulting from rising CO2 levels. Is that really in doubt - even in your community?"

As you know very well, the question is how much. They are unanimous in having a higher idea of how much than observations appear to show. That alone gives cause to question their basic assumptions. I don't see that happening, at least openly. I do see attempts to justify the models as they are. That is clutching at straws. Covering up any failures, true or alleged, is a political act.

Dec 21, 2013 at 2:46 PM | Unregistered Commenterrhoda

I think if they changed the water vapour feedback to negative it would be about right.

Dec 21, 2013 at 2:50 PM | Unregistered CommenterSchrodinger's Cat

Well, sure, water vapour feedback is probably positive; it's a greenhouse gas, after all. The question should examine the feedback of water in all its phases, because it is ever more obvious that the miscalculation is from ignorance of cloud behaviour, causes and effects.
==============

Dec 21, 2013 at 3:12 PM | Unregistered Commenterkim

Chandra - a more helpful post. Low abuse factor, attempt to answer questions raised.

Comments:
1. As you say, there may be many reasons for mismatches - such as people guessing wrong. But if mismatches are caused by wrong guesses, how will we know (in advance) when the guesses are right? Is it reasonable to spend billions on guesses? A model that fits the past may be intellectually satisfying, but pretty useless if it doesn't fit the future. That is the test.

2. In the Rorschach inkblot of the surface temperature record, you do not see a hiatus. This is not helpful to those who can.

3. It would be great to understand why the models aren't working, agreed. But the fact that we don't understand this doesn't remove the perception we have (which you may not share) that they aren't. We cannot understand why failed models should retain credit (even - or particularly - if we don't know why they failed). If you have a theory that will improve them, by all means test it - but by making a prediction, please.

Dec 21, 2013 at 3:20 PM | Unregistered Commenterosseo

"seasonal climate predictions can be regularly verified"

But have they been verified?

The UK Met Office season predictions seem to be either wrong or so general (30% chance below and 30% chance above average etc.) that it would seem impossible to verify.

Dec 21, 2013 at 3:33 PM | Unregistered Commenterclimatebeagle

It is clear that demonstrably intelligent people have totally different interpretations of the following sentence:

We agree with the report when it states that both these aspects contribute to a “confidence in the models’ suitability for their application in detection and attribution studies and for quantitative future predictions and projections”

The question is whether the use of the term "contribute" without defining the extent of the "contribution" is accidentally or intentionally ambiguous.

Perhaps the Society could provide an opinion as to where this "contribution" lies on the spectrum between the models' showing early promise for perhaps eventually becoming useful for forecasting versus the current certainty of their predictive powers as a definitive information source for the immediate implementation of major policies.

Dec 21, 2013 at 4:01 PM | Unregistered CommenterPav Penna

I saw the discussion on Twitter. Two things were evident (1) an insistence on not seeing the others' point, and (2) an insistence on talking about something different.

The first was useful in not answering Bish's simple question. The second is useful in discrediting your discussant.

Dec 21, 2013 at 4:52 PM | Registered Commentershub

The CAM5 model has been "scientifically validated", whatever that means:
http://www.cesm.ucar.edu/models/cesm1.0/notable_improvements.html

Dec 21, 2013 at 5:26 PM | Unregistered CommenterCurious George

Being somewhat bemused in general and on this subject specifically: aren't the models simply updated and hindcast, so that no matter how rubbish they are, to the untrained eye they will always look as if they're a perfect fit, more or less?

Dec 21, 2013 at 6:07 PM | Unregistered CommenterRobinson

Rhoda, I've seen at least one model run that shows your 'pause', for what it is worth, so I'd question the unanimity.

Osseo, if the models are run with historic forcings only up to 2005, then they have to guess the occurrence of el Nino/la Nina, the solar cycle, volcanic activity, CO2 emissions etc. Unless we can somehow predict these natural/anthro forcings, guesswork is what is left. If the model is run many times using these random guesses, a range of projections will result. If some of these fit the actual outcome then the model has some skill (assuming they matched in these runs because the guesses were correct).

On the 'pause', you see what you want to see. The red line in Figure 1 in Rahmstorf et al 2011 shows another view - spot the 'pause'.

Dec 21, 2013 at 6:13 PM | Unregistered CommenterChandra

@Robinson: no truly professional scientist or engineer can accept the mistaken heat transfer and IR absorption claims made in the IPCC 'Energy Budget'. However, the former comes from meteorology, which teaches incorrect radiation physics; it spread to climate science and is taught in schools. The latter, a misinterpretation of the Tyndall experiment, was warned about in 1993, but is also still taught. Two other mistakes, including the use of exaggerated cloud albedo in hindcasting, are used to correct exaggerated warming.

Dec 21, 2013 at 6:28 PM | Unregistered CommenterMydogsgotnonose

One run shows the pause. Well that's all right then.

And how will I know whether all the runs are reported or whether culling of runs which are going the wrong way is part of the process? Because I can trust the scientists? DMMFL.

Dec 21, 2013 at 6:50 PM | Unregistered Commenterrhoda

Rhoda, spot on - I'd check that 'run' very closely for signs of Tipp-ex and felt-tipped pen... and does this finally nail the claim that 'ensemble' means have any validity - i.e. the average of all lies is the truth?

Dec 21, 2013 at 7:05 PM | Registered Commenterflaxdoctor

"Rhoda, I've seen at least one model run that shows your 'pause', for what it is worth, so I'd question the unanimity."

So 1 out of 73 removes the uncertainty for you. Would you care to enlighten us as to which model you've seen, or didn't SkS say?

Bish,

I've had a few exchanges with Doug M. It's quite difficult, as he makes statements saying that statistical significance isn't important in the temperature records between 1880 and 2000, but that they do have scientific significance. Naturally, I asked him if the rise in temperature between 1880 and 1940 had scientific significance, not having the slightest idea what he meant by "scientific significance". His response was to tell me that that was an interesting question and to close the conversation.

Dec 21, 2013 at 7:11 PM | Unregistered Commentergeronimo

Bish wrote: "We know that [climate models] incorporate the wrong value for aerosol forcing, so we would expect them to run too hot anyway."

The situation with aerosols can be confusing: Observational estimates of the cooling due to aerosol forcing weakened between AR4 and AR5, but the effect of aerosols in climate models is weaker than the observational estimate in either report. See Figure 7.19. So the aerosol effect in AR5 models is now CLOSER to what we would expect from observations. The optical depth (and therefore the cooling effect of aerosols) has increased since 1950, but it was unchanged during the pause.

Nic Lewis recognized that the weaker estimate of aerosol forcing in AR5 than AR4 required reducing the estimate of climate sensitivity because less of the GHG-mediated warming in the past has been negated by aerosols.

However, climate models exhibit less aerosol cooling during the 1950-1990 period than would be expected from observations, causing them to run a little warmer during this period than they should, but there isn't a big discrepancy between observations and hindcasts over that span. Models can run too hot from underestimating aerosol cooling, but this only happens when aerosols are increasing or when you are looking at absolute temperatures rather than the usual temperature anomalies.

http://www.climatechange2013.org/images/uploads/WGIAR5_WGI-12Doc2b_FinalDraft_Chapter07.pdf

Dec 21, 2013 at 8:11 PM | Unregistered CommenterFrank

A model is just an abstraction from reality. They are ubiquitous (think of one's perception of reality). Where they are deliberately constructed it is usual that the model maker has a purpose in mind, and with that comes a view about its utility. This in turn will typically be a function of the relationship between information the model produces and information that can be independently derived from reality.

Checking (or validating) one's model is pretty typically something model makers do. They might look from a different angle, or ask someone else to have a look to see if they agree it's a good rendition. They might check some points they didn't use when making the model to see if the model and reality agree. I should note that validating one's model is usually something that is done prior to using it for its purpose, so "validation" tends to have a more limited meaning.

I also should note in passing that the phrase "the model was true" has use in the English language in precisely this context. We also talk about how well the model "fits" reality.

Checking will no doubt concentrate on those aspects that are important to the model maker's purpose. If a 2D facsimile is required they won't be worried about depth.

Now statistics is all about measurement. So it obviously gets used when models have a strong quantitative basis. In this case what it helps you do is make assumptions about reality and measurement and then quantify how likely a result in the model will be. If it proves to be unlikely then it gives pause to consider those assumptions. Note it might just be that the model maker assumed variables were normally distributed, but they weren't. Equally it could be something about the nature of the relationships.

The point is, if the model doesn't fit within the normal (conventional) bounds of certainty, there is a conventional expectation that this is a matter for checking. If the model is being used for a critical application (e.g. buildings might fall over) it stops being used and other (likely less sophisticated) models are used.

Now in the case of climate prediction, the purpose of modelling is to allow policy makers to make judgements about actions they should take today to control human behaviour in order to avoid adverse consequences in the future.

Policy makers need to know how well the models perform on the factors of interest and how able the models are to forecast the future. A couple of basic issues are how well they reproduce reality on these dimensions and whether they outperform other models. Reporting this information is integral to reporting any results from the models.

So at the end of this meander, "falsification" or "validation" isn't an absolute test. The question is what the risk is for policy makers in using the forecasts from these models (presumably high, if in the short term we are seeing long-term behaviour emerging that is not explained by the models) and whether there are better approaches to use (i.e. less sophisticated models performing as well, if not better).

Unfortunately IMHO AR5 is silent on these matters.

Dec 21, 2013 at 9:22 PM | Unregistered CommenterHAS

@HAS

+1

Dec 21, 2013 at 10:11 PM | Unregistered Commenterdiogenes

I much support the comment by HAS.

Scientists should not decide how much policymakers rely on the models. Rather, scientists should fairly describe the problems with the models, and then let the policymakers decide.

Dec 21, 2013 at 10:12 PM | Unregistered CommenterDouglas J. Keenan
