Uncertain uncertainty
Richard Rood's article about uncertainty in climate projections is a few weeks old now, but I came across it only today after someone tweeted a link to it. Rood is trying to make the case that:
the uncertainty in climate projections associated with the physical climate model is smaller than the uncertainty associated with the models of emission scenarios that are used to project carbon dioxide emissions.
His argument seems to rest partly on the fact that climate models include well-understood physical laws at their heart, while economic models are much more empirical. This argument seems to me to be somewhat spurious. The fact that an aeroplane includes a number of transistors, whose behaviour is well-understood, does not make it necessarily more likely to fly than one that doesn't.
He argues that the spread in the models would be much less if it were not for the different economic scenarios that feed them. This seems flawed to me. Rood argues that the spread in the models represents a "simple estimate of uncertainty". I'm not sure this is right. To the extent that the models make the same erroneous assumptions and have the same unknown unknowns, surely the climate model uncertainty is much larger?
Reader Comments (58)
Either you are correct or the IPCC, in setting up the scenarios, is incorrect. However, I do think you are missing his point that there is spread due to the emissions models he has in his post. Further, it would be relatively easy, and should be done as well: take the parameters and physics packages of the models as they were run for that graph, substitute actual data rather than scenario data, and rerun. Take the SDs and responses and extrapolate, accounting for the number of data points, and see what you get. If there is a large difference, and there will be for many of the models, his claim is incorrect. The recent warming relative to expected warming is substantial for the data to date. Perhaps this post is just to take attention away from the models and excuse them without actually running them and determining the results. Perhaps not.
It seems a feeble point to me.
If I have a thermometer in my garden which is accurate to the nearest ten degrees, then I can read it three times a day and find that the temperature was 10 degrees, 10 degrees and 20 degrees. The "average" is therefore 13.333r degrees.
I have a high degree of certainty that my maths are correct, but this shouldn't be allowed to impart a wholly spurious impression of robustness or accuracy to my "finding". The fact is that I don't know to within ten degrees what the temperature was at any point. The ability to add three numbers and divide by three doesn't alter this, and it doesn't mean my forecast of 13.3333 degrees for tomorrow is any good either.
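The point is easy to sketch in a few lines. This is a hypothetical simulation, not anything from Rood's article: the `read_thermometer` helper and the temperatures are invented purely for illustration.

```python
def read_thermometer(true_temp, resolution=10):
    """Quantize a reading to the nearest `resolution` degrees."""
    return round(true_temp / resolution) * resolution

# Suppose the real temperature hovered around 14-15 degrees all day.
true_temps = [14.0, 14.5, 15.5]
readings = [read_thermometer(t) for t in true_temps]

average = sum(readings) / len(readings)
print(readings)            # [10, 10, 20]
print(f"{average:.4f}")    # 13.3333 -- four decimals of false precision
```

The average carries four decimal places, but no arithmetic on quantized readings can recover information the instrument never captured: all three true temperatures were within 1.5 degrees of each other.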
The assumptions that go into climate models themselves depend similarly on wholly speculative economic, technological and demographic assumptions. Nobody knows what the price, source, and geographic distribution of fuel use will be in 75 years' time. Nobody in human history has ever made an accurate prediction of any of these 75 years out. 75 days would be hard (as would be accuracy 75 minutes out, for energy cost).
Any model of climate thus depends on assumptions about emissions, that themselves depend on untestable assumptions about imponderables. It is thus a vain and essentially frivolous activity. The conclusion is not that we should develop better crystal ball technology, but rather that people should stop doing this at all, unless they want to waste their own money on it.
It seems to me that it is very easy to get lost in convoluted arguments about scenarios and how good the economic models are at assessing the rate of CO2 increase into the future.
Being a simple man: it doesn't matter how good or bad the economic models are, because the climate models assume between them a wide range of scenarios for CO2, into which reality over the last 30 years has been towards the top end. The climate models have then produced temperature forecasts which are way above the real temperature outcome.
So, not so much a garbage in / garbage out situation but a complete failure of the theory that the models are based on, in terms of the sensitivity to CO2.
Of course it gets worse when you start using doubtful model data to feed more doubtful models. Chuck Spinney points out the cul-de-sac that such model linking is likely to get us into in this article.
http://www.counterpunch.org/2012/02/09/climate-science-goes-megalomaniacal/
I particularly enjoyed his description of the "self-licking ice cream cone" and he mentions and recommends some book called "The Hockey Stick Illusion".
Even Heisenberg would be proud. The only certain thing about Climate Science is that we can be uncertain about the certainty of the uncertainty, or conversely certain about the certainty of the uncertainty, or something....
Garbage in, Garbage out isn't Science.
The way uncertainty is defined for these model problems is incomplete. They are basically assuming the physical premise of the model re climate physics is correct, ie a deterministic model. The error terms are then the residuals around the specified model. This is incomplete because it does not take into account the possibility that the models are wrong/falsified.
Bizarrely, Donald Rumsfeld very eloquently stated the problem with the nature of uncertainty when describing the war in Iraq, when he talked about:
1. There are known knowns
2. There are known unknowns
3. And there are unknown unknowns
http://en.wikipedia.org/wiki/There_are_known_knowns
(1) is the data we measure, things we know. Our problem here is that there is much uncertainty on the known inputs - emissions, temperatures, boundary conditions etc
(2) Is the uncertainty the modellers claim to be quantifying. These are uncertainties they know about and think they can model, eg as residuals around a trend in a regression.
(3) The unknown unknowns are outside the scope of the model or thinking. They could be something as fundamental as wrong physics in the climate model itself. Ignoring these gives a false sense of security about the model and leads the modeller to make a statistical Type 1 error, which is where they make a wildly inaccurate prediction with great confidence. A good example from yesterday's blog is DEFRA and their scientific advisors confidently predicting in 2001 that the UK's problem due to climate change would be floods. Ten years later they have an emergency debate about drought....
I think the climate models are a classic example of a Type 1 prediction error - inaccurate predictions made with great confidence.
Rumsfeld got pilloried for his comments and received an ignominious award for the worst use of English by a politician (I think it was about 2003). The organisation that awarded him that is pig ignorant, because Rumsfeld is not using his own words but is in fact effectively quoting Plato from the dialogue Meno - and Plato is himself quoting Socrates. Not even Wiki seems to realise that.
"the uncertainty in climate projections associated with the physical climate model is smaller than the uncertainty associated with the models of emission scenarios that are used to project carbon dioxide emissions. "
Well, yes, of course. In fact I would rather hope so. The physical climate models are at least nominally trying to zero in on the effects of something (any given level of emissions) happening.
The economic models are trying to explain the output (ie, emissions) from a number of different inputs. You really would rather hope that a deliberate variation in inputs will give greater uncertainty of output than an attempt to calculate the effects of a known input.
No?
The only way it could be otherwise would be if climate change modelling were even more inaccurate than the economic kind. In which case astrology starts to look pretty accurate.
To make the same point again: the economic models have parameters that range from 7 billion to 16 billion people in 2100, and a global GDP of anything from $250 trillion to $550 trillion. They attempt to capture the difference between the widespread adoption of low-carbon technologies (but no deliberate attempts to force people to use them, nor tax policies etc) and just burning all the coal; whether we continue to globalise or localise the economy; and finally, whether we go for real capitalism or some touchy-feely social democracy.
Now, given all those variations in inputs into the economic models then I'd damn well hope that the physical models are producing less variance in their results when they try to calculate the effects of, say, a 100 ppm rise in CO2.
And of course, that's why we have those very economic models: because there are so many possible variations in inputs.
How does one go about testing assumptions concerning uncertainty, be it in models or in empirical studies?
Or, how does one model models' uncertainties be they climatic or economic?
I view Rood's statement as being not so much about economic model uncertainty per se, as about the uncertainty due to policy adoption. That is, depending on the extent to which countries adopt policies similar to Britain's CCA or Australia's carbon tax, emissions will vary accordingly from the "business as usual" trajectory. True, there is also significant uncertainty in making such projections even if we knew what legislation was to be enacted, but the questions of how many (and which!) countries and the stringency of their controls, produce a greater range of variation.
One point about the economic models that all the resulting projections are based on: the high-emission scenario projects North Korea to have twice the per capita GDP of the US by 2100.
Castles and Henderson (and nearly every economist in the world) also say the IPCC erred by using MER (market exchange rate) data rather than PPP (purchasing power parity) data. The IPCC is the only body in the UN to use MER. All others use PPP.
So it's models all the way down, and the bottom one is obviously rubbish.....
His main visual exhibit to illustrate his argument that physical uncertainty is small seems to be the infamous hockey stick.
Somewhat off topic. The link below points to an EXCELLENT, BALANCED edition of the Canadian Broadcasting Corporation program "IDEAS" - 'Demon Coal.'
It starts off slowly, but later includes testimony to the Canadian Senate by Ross McKitrick and others and an excellent lengthy interview with Judith Curry.
This is very unusual behaviour for the CBC which has for decades proudly and prominently featured David Suzuki as their chief alarmist and seldom strays off the CAGW party line.
http://www.cbc.ca/video/#/Radio/Ideas/1453660136/ID=2208383550
"climate models include well-understood physical laws at their heart": I dare say. When I made a living modelling physico-chemical systems, I was always careful to include well-understood physical laws at their heart. But that did not, of itself, justify my claiming that the science was settled, or the confidence intervals narrow. There's too much juvenile braggartry in Climate Science, too much hubris in the face of the complexity of things.
justice4rinka has neatly put his/her finger on the issue -- you cannot "average in" accuracy:
Given the accuracy is 10 degrees we only know that the temperature was ± 5 degrees -- and in reality, all three readings could easily have actually been 15 degrees and all we actually saw was instrument error.
Yet this is exactly what all these predictions -- ah, projections -- ah, wildass speculations are -- garbage. I might also add that the "Bayesian" statistics so dearly loved by the Climate Scientists™ suffer from the same logical weakness: you cannot improve their accuracy simply by running them over and over again.
Climate shows what I call "fractal noise". In other words, there is no scale at which you can view the noise, without seeing .... noise.
This is quite intuitive, because if we plot 100,000 years, we expect to see variation. But this isn't the assumption on which the climate predictions are based. Their idea (which is not at all borne out by reality) is that if you wait long enough, the noise disappears.
In other words, in the long run, the only thing affecting the climate is deterministic forcings.
The reality is that, in the long run, there are random forcings which make it just as hard (if not harder) to predict the climate in the long run as in the short run.
YOU CANNOT AVERAGE OUT CLIMATE NOISE. There is no amount of averaging that will ever separate the "normal" from the "abnormal", because the whole nature of the climate is that it has multiple drivers acting over multiple timescales.
So, it is entirely dishonest to say that you can ever be certain about the climate. There will always be uncertainty. To suggest otherwise is tantamount to fraud, particularly by a scientist who should know better.
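Whether or not one accepts this commenter's claim about the climate, the statistical distinction it rests on is easy to demonstrate: averaging tames independent ("white") noise but not integrated ("random walk") noise. A toy sketch, with made-up Gaussian noise standing in for whatever the drivers are:

```python
import random
import statistics

def mean_of_white_noise(n, rng):
    """Sample mean of n independent noise terms: shrinks like 1/sqrt(n)."""
    return sum(rng.gauss(0, 1) for _ in range(n)) / n

def mean_of_random_walk(n, rng):
    """Sample mean of integrated noise: shocks accumulate, not cancel."""
    x, total = 0.0, 0.0
    for _ in range(n):
        x += rng.gauss(0, 1)
        total += x
    return total / n

rng = random.Random(0)
trials = 300
white = [mean_of_white_noise(400, rng) for _ in range(trials)]
walk = [mean_of_random_walk(400, rng) for _ in range(trials)]

# Averaging over 400 points pins down the white-noise mean tightly,
# but the random walk's mean still wanders over a wide range.
print(round(statistics.pstdev(white), 2))   # close to 1/sqrt(400) = 0.05
print(round(statistics.pstdev(walk), 2))    # roughly two orders larger
```

If the climate contains drivers of the second kind, longer averaging windows do not converge on a stable "normal" in the way they would for the first.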
I am mystified by the "ensemble" model results. I suppose this may be a way to include the effects of more variables in a "single" result, but it seemed to me specious to suppose that finding the mean of twelve or thirteen concocted speculative models had any greater significance than any one of them.
Economists have a much more healthy (and skeptical) view of their predictive prowess. That they allow that to be communicated in any predictive output or statements isn't a sign of weakness.
Ignoring it or (yes, I'm talking about you, climate modellers) understating it is not a sign of strength of your model.
Re: unknown unknowns...
I am reminded of the (allegedly true) story of the head of the Patent Office, who in 1896 (or thereabouts), asked for his office to be disbanded, 'Because everything which could be invented, has been invented'.
Or the quote by an expert in the field in 1946 that: 'Worldwide, there is probably a market for five or six computers...'
Right, he used the word representation, & therefore he's open game. As with all these puter models it's the language they use that's the giveaway. I recently posted on WUWT on the "novel simulation" in another puter model. Pocket OED, 1925: Representation/Represent, call up by description or portrayal, or imagination, place likeness before the mind, allege that, make out to be, describe or depict, fill place of, work of art portraying something, best substitute. Add to that Simulation: feign, pretend to have, wear the guise of, act the part, counterfeit, having the appearance of, shadowy likeness of, mere pretence, unreal thing. Add in Novel: of new kind, strange, hitherto unknown, fictitious prose. When they choose such words, & the word real never enters the text, you have to wonder, don't you? They certainly don't fill me, as an engineer, with much confidence!
Wow. This Rood is a laughing stock right?
All recursive simulations suffer from exponential error accumulation, in effect becoming expensive random number generators.
It's not garbage in, garbage out; it's much worse. It's near-perfect in => recurse => junk out.
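The textbook illustration of this is the chaotic logistic map. It is not a climate model, just the simplest possible recursion with the same sensitivity to initial conditions: a 10^-10 difference in the input is amplified to order one within a hundred steps.

```python
def logistic(x, r=3.9):
    """One step of the logistic map, chaotic for r = 3.9."""
    return r * x * (1 - x)

# Two runs whose starting states differ by one part in ten billion.
a, b = 0.4, 0.4 + 1e-10
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny input difference has been amplified by the recursion
# until the two trajectories bear no resemblance to each other.
print(round(max_gap, 3))
```

The caveat the commenter skips is that weather-style trajectory divergence does not by itself settle whether long-run statistical averages of such a system are predictable; that is the point actually in dispute.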
Or the quote by an expert in the field in 1946 that: 'Worldwide, there is probably a market for five or six computers...'
Mar 13, 2012 at 2:56 PM | Unregistered CommenterDavid
==========================================================
Well, he thought he was an expert. That was Thomas J Watson, founder of IBM.
Reading over the various comments, including Mike Haseler's interesting idea about "fractal noise", it suddenly dawned on me that Rood is falling into the same trap that gives rise to the silly statement that:
What he is saying is:
Is his argument that the wider the uncertainty limits of a model become over time, the sooner it becomes important by their being outside of limits obtained from other sources of variation? I think there must be those out there who even now are planning to sell him Tower Bridge.
The uncertainty levels associated with climate models are truly slippery. As presented by the BBC and the rest of the MSM they are a sort of measure of overall confidence (by scientific authority) in the "projections" (predictions to you or me and the rest of the population). As such they are utterly misleading and intellectually bankrupt. The internal processes within models give a range of outcomes, and there is also uncertainty in the starting conditions. BUT the crucial question is what confidence should we have in the models themselves? There is no track record of successful predictions from this young (juvenile?) academic discipline. Recent climate metrics would be regarded as falsifying the IPCC predictions if rationality hadn't been abandoned. I'm absolutely sick of the sleight of hand that implies that the high degree of confidence in knowing the physical characteristics of CO2 means we should have the same confidence in climate models! The nearest analogy for climate modelling is econometrics and financial modelling. With a bitter laugh I recall being shown the predictions for my pension funds in 2007. When reality hit it was off the chart! It's not as though the amplification of warming caused by increasing CO2 is very plausible as a theory to anyone with even a passing awareness of historic and prehistoric climate variability.
On modelling, from the inimitable EM Smith.
http://chiefio.wordpress.com/2012/03/13/model-science-predicts-more-model-scientists/
The IPCC science is as broken as it ever could be - 4 basic physics mistakes, 2 elementary, 2 subtle. Also, to offset 3-5 times exaggerated warming assuming most is CO2-(A)GW (and that is badly wrong), the models claim twice the real optical depth for low-level clouds and imaginary AIE cooling.
The old joke around my way is that they invented economic models in order to make the climate models look good :)
Tim Worstall seems to hit the nail on the head here - we would expect the uncertainty due to climate response to be small compared to the uncertainty of the (human) driving forces themselves.
There is a large (and growing) literature that looks at estimating* uncertainty in the state of the future climate, using climate models. Rood is right in saying that the 'model spread' is a very simple method of estimating that uncertainty - but it has lots of advantages. It is very simple, easy to understand, and everybody gets to join in. There is some literature which points to its disadvantages (possible shared biases etc.), and some which suggests that it is conservative in its estimate of uncertainty.
The 'model spread' is found by looking at the CMIP (climate model intercomparison project) archive. A bunch of countries submit their best climate models, and run them with a shared protocol. They upload the data to a publicly available archive, and analysts get to compare the data with the historical record, and look at the future projections. This is a huge task - I've seen an estimate that the AR5 archive will take 30 years to download at 1Mb/s, once all the data is in.
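For anyone who hasn't met it, the "model spread" calculation really is that simple. Here is a toy version; the six model names and warming figures below are invented for illustration, not real CMIP output:

```python
import statistics

# Hypothetical end-of-century warming (deg C) from six models run
# under one shared emissions scenario -- invented numbers.
projections = {
    "model_a": 2.1, "model_b": 2.8, "model_c": 3.4,
    "model_d": 2.5, "model_e": 3.9, "model_f": 3.0,
}

values = sorted(projections.values())
spread = (values[0], values[-1])     # the simplest uncertainty estimate
mean = statistics.mean(values)

print(f"spread: {spread[0]}-{spread[1]} C, ensemble mean: {mean:.2f} C")
```

Rood's point, on this reading, is just that a min-max range like the one above widens much further once each model is also run under several different emissions scenarios.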
Of course, the models might all be wrong. I've used the Rumsfeld quote myself in presentations in the past. The point is, that we have a process for summarising the current state of knowledge, and explaining the consequences of that knowledge. We have a process for updating that knowledge, if new theories arrive that better explain the data, or if new data arrives that show the theories to be wrong. And we are striving, all of the time, to reduce the (unknown) length of the list of 'unknown unknowns'.
* and it is always an estimate, the best that we can make at the time.
Well, Doug, your spectacles are surely rose-tinted. From here, it looks more like the models have been hijacked as vehicles for the promotion of climate scares, and far from being a repository of our knowledge about climate, they have become a showcase of much that is wrong with it. It is odd, is it not, that in such a messy, some say impossible, area as climate modelling there seems to be an all but universal pre-occupation with CO2 and the granting of it an appreciable, even dominating, effect? But the main 'wrong' I think is in the failure of owners and operators of these models to behave like responsible adults, and do all they can to counter the scaremongering and insist on the realistic approach to model limitations which I see reference to from time to time, even within IPCC publications, of their manifest unfitness for prediction. From your point of view, that means climate prediction is not possible based on our 'current state of knowledge'. Yet we are confidently told, from Met Offices, that snowfall in the UK was to be a thing of the past, and drought in Australia was to be the permanent future. Mother Nature kindly stepped in to refute these particular pieces of sloppy science, as she has done on many others over the last 30 years.
It may be that the climate models will be thrown under the bus by the political class as their limitations become more widely known, thereby going the way of the greenhouse analogy, the hockey-stick, the polar bears, the Himalayan glaciers, and so on, and on. They all served their purpose for a while, but the scaremongering can go on without them, and you can go on with more and more model runs, more and more scenarios, but perhaps with reduced prospects of more staff and bigger and better computers every few years.
ThinkingScientist on Mar 13, 2012 at 1:12 PM
"Bizarely Donald Rumsfeld very eloquently stated the problem ...."
I am concerned about:
4) the unknown knowns, like the raw temperature data!
And the e-mails not subject to FOI legislation, even if the contents are unknown.
"I am concerned about:
4) the unknown knowns"
Like - who could possibly have faked the Gleick strategy document?
I find it difficult to understand the fact that grown men with excellent qualifications from well-regarded institutions can waste so much time in pretending that speculation about nonsense, and wildly inaccurate nonsense at that, is actually a proper way to spend their time, time which is paid for by long-suffering taxpayers who must earn their own money doing stuff which has far less social cachet than that enjoyed by those speculating about nonsense.
As our American cousins would say " Go figger!"
"I find it difficult to understand the fact that grown men with excellent qualifications from well-regarded institutions can waste so much time in pretending that speculation about nonsense, and wildly inaccurate nonsense at that, is actually a proper way to spend their time, time which is paid for by long-suffering taxpayers who must earn their own money doing stuff which has far less social cachet than that enjoyed by those speculating about nonsense."
Look on the bright side. The models are in serious trouble. They know it. We know it. So lots of excuses are being got in early. And lots of "engagement" is taking place with sceptics to try and soften the landing.
It's just a matter of time. As the song has it: "Ti-i-i-ime is on my side, yes it i-is."
Bishop Hill
This is an excellent point, which is nicely illustrated by comparing the figure from the Third Assessment Report (TAR) as used by Rood with the equivalent figure from the Fourth Assessment Report (AR4). You can see that the ranges of projected warming in AR4 (grey bars on the right) are larger than those in the TAR (coloured lines on the right).
For example, let's look at the A1B emissions scenario (just because it's a very widely used scenario, not because I think it is any more or less likely than any others). In the TAR, under the A1B scenario, the models project global warming of between 2.1 and 3.8 degrees C relative to 1990. In AR4, the "likely range" of projected warming was 1.7 to 4.4 degrees C. There are some nuances that are not particularly important here, but the main difference between the TAR and AR4 projections was that TAR did not consider uncertainties in translating a given CO2 emissions scenario into atmospheric concentrations, whereas AR4 did consider this uncertainty.
Since the natural processes of CO2 uptake and release by the oceans and land biosphere are themselves dependent on climate and CO2 concentration, the strength of these natural sources and sinks may change in the future as CO2 rises and the climate changes. There are large uncertainties in how these sources and sinks may change, and this is reflected in the wider range of projected warming in AR4 (with a smaller lower estimate of warming but also a higher upper estimate).
There are other systematic issues with the AR4 projections, some of which are addressed in the AR5, but again these have their own limitations.... :-)
A further difficulty is that these multi-model studies are not systematic estimates of uncertainty. The multi-model ensembles are sometimes called "ensembles of opportunity", which basically means that they used the information they already had to hand rather than deliberately setting out to systematically cover all eventualities. The ensembles of climate models used in IPCC are the set of models available from all climate modelling institutions, which happen to differ from each other because they have been developed separately (well, mostly) - but the differences between the models do not reflect the full range of possibilities.
The "perturbed physics ensemble" we (Met Office) used for the UKCP09 projections attempted to make a more systematic exploration of uncertainties, and that's the kind of thing that Doug McNeall has been working on. However this is of course also limited in the extent to which we can explore the uncertainties, because there are simply so many factors involved. It is therefore important to regard this studies as merely illustrating or exploring uncertainties, rather than giving a full quantification.
So, yes, BH, you are right to question the validity of the statement that "the uncertainty in climate projections associated with the physical climate model is smaller than the uncertainty associated with the models of emission scenarios". Neither is a true reflection of the actual uncertainty.
OT, but ArchNincompoop Jonathon Porritt weighs in against nuclear:-
http://www.guardian.co.uk/environment/2012/mar/13/uk-energy-future-france
Some of the comments are quite apposite!
Indeed, neither reflects the uncertainty fully. However, in the end I'd say that Rood's conclusion is correct, as our economic knowledge and our ability to make reliable economic projections are less well founded than the knowledge in the climate models, and contain at least as many unknown unknowns.
AFAIKS modelling of complex and chaotic systems isn't particularly successful unless substantial relationships between the variables are identified and quantified (erm... let's say by observation).
Modelling has been dramatically successful in many areas, and it strikes me that the climate community are looking to feed on that acknowledged success and bathe in the reflected glory - see! we're using models too! (h/t to Bish's transistor / airplane)
If your model doesn't work and doesn't hindcast or forecast with any confidence - then there's something you don't know about. Blindly fiddling with the arithmetic innards and tweaking stuff to try and get it to behave - sigh ... and ... deliberately fiddling with the innards to get the result you've been paid to produce?
I'm happy with the models I use because most of the time the loop has been reasonably closed - but there are still surprises large and small, some explained some still the subject of conjecture.
The way models are presented by the climate community seems willful in the extreme - sometimes it seems that models trump observation - and that ... can't be right at all.
Richard Betts: with respect, your support of the status quo is eloquent. However, because the climate models are based on four basic mistakes in the physics, two of which no professional should have made, it won't wash with us grey heads taught by the greats [in my case a past student of Planck].
For your information, the correct solution of the aerosol optical physics of clouds involves an extra optical effect triggered by a bimodal droplet size distribution. There is an established physical principle by which it operates but until yesterday it was still an educated guess.
However, in a recent Tallbloke post, reference is made to an unusual optical effect in clouds which is the subject of a recent Scientific American article. First observed in the late 18th Century, there had been no explanation [the 'Glory']. There is now.
Net AIE forcing is positive not negative. The IR physics is also wrong. The models need rebuilding.
Sorry, but this is science.
Mar 13, 2012 at 10:22 PM | TomO
Nobody pays us, or even asks us, to produce a particular result.
Cheers
Richard
ThinkingScientist,
"Rumsfeld got pilloried for his comments and received an ignominious award for the worst use of English by a politician ( I think it was about 2003). The organisation that awarded him that are pig ignorant because Rumsfeld is not using his own words but is in fact effectively quoting Plato from the play Meno - and Plato is himself quoting Socrates. Not even Wiki seems to realise that."
That's because Philosophy is frowned upon nowadays. Obscure branches of Philosophy, such as Epistemology, are ignored even more; which is odd, because one would assume that Epistemology (the Theory of Knowledge) would be of interest to anyone in search of knowledge. When I took the course I had no idea what it was about. By the end of it, I didn't understand why it was not a compulsory course for all undergraduates.
When I heard Rumsfeld say those words on TV, I instantly thought of my good professor, who was the only - and very lonely - epistemologist at the university. He would have been thrilled by the unexpected, massive international public exposure for his discipline. That Rumsfeld quote will live on in his book and all other Epistemology textbooks for eternity; it is the E=mc2 of the discipline.
The reason Rumsfeld was ridiculed was not only ignorance of the theory of knowledge on the part of the audience, but also it is unusual for a politician to go on such frolics during press conferences, let alone an international statesman and war-manager like Rumsfeld. It showed the intellectual aspect of his character; neither the Left nor the Right could have that. So he was pilloried.
Unless I have overlooked something or misunderstood, all Rood seems to be saying is that if you remove the sources of uncertainty in climate modelling your models become accurate. Well, yes. That is bloomin obvious. Doesn't make them correct though.
The giveaway for this position is here:
As Richard Betts explains above the climate models are not like that. They have their own internal uncertainties. In Rood's description of modeling that internal uncertainty is a statistical uncertainty akin to the economic uncertainty rather than a known physical process accurately replicated in the software.
In my view, now being restructured and quantified with respect to experiment, Hansen has made some serious mistakes.
I could well be wrong though which is why independent peer review is essential.
CSIRO has a new 'climate snapshot' out today which includes this statement:
"There is greater than 90% certainty that increases in greenhouse gas emissions have caused most of the global warming since the mid-20th century."
I really wonder what that means, scientifically. Does it mean "our conclusions are almost, but not quite, accurate at the 95% confidence level"? Because surely if these were at the 95% confidence level they would say so.
So, to paraphrase: these are pretty crappy 1-sigma observations, which mean nothing about one time out of three.
Interesting discussion. The comment by Rood however fails to consider the time dependence of the uncertainties - their sizes, or even relative sizes, are not constant when looking at different time horizons.
There is a freely available published article here which may help start a discussion. [Disclaimer: I helped write it.]
cheers,
Ed.
Models are models; ie, I can make any model of anything, initialise it, and parametrise it as I want. It has NOTHING to do with the real world; it is simply a silly model.
I am flabbergasted by the observation that so many people give any model so much credence.
One can make any model for any situation; correlation is NOT causation. This is ridiculous.
mydogs...
where does the iron Sun hypothesis fit into your thinking? You are almost as prevalent as Dr Oliver Manuel....;)
Thought it worth just throwing in this Q/A from the recent Judith Curry interview re verification and validation of climate models
OP: I saw an interesting comment on another site regarding climate science that I thought I'd get your opinion on, as it raises some very interesting arguments:
Climate science has claimed for 30 years that it affects the safety of hundreds of millions of people, or perhaps the whole planet. If it gets it wrong, equally, millions may suffer from high energy costs, hunger due to biofuels, and lost opportunity from misdirected funds, notwithstanding the projected benefits from as yet impractical renewable energy.
Yet, we have allowed it to dictate global policy and form a trillion dollar green industrial complex - all without applying a single quality system, without a single performance standard for climate models, without a single test laboratory result and without a single national independent auditor or regulator. It all lives only in the well known inbred, fad-driven world of peer review.
JC: I agree that there is lack of accountability in the whole climate enterprise, and it does not meet the standards that you would find in engineering or regulatory science. I have argued that this needs to change, by implementing data quality and model verification and validation standards.
http://oilprice.com/Interviews/The-IPCC-May-Have-Outlived-its-Usefulness-An-Interview-with-Judith-Curry.html
Mar 13, 2012 at 10:32 PM | mydogsgotnonose
I think glories are, and have been for some time, quite well understood:
http://atoptics.co.uk/droplets/glodrps.htm
http://www.philiplaven.com/Publications/AO-44-27-p5675.pdf
I do not at all understand the confidence ascribed to climate models. In most sciences, models may be used to test ideas or scenarios, but not as a test of science. For most things, people don't assume they can predict much.
In particular, while climate models may include 'well understood physics' or whatever, you have a complex system with things like clouds and topography which you have to model at fine granularity. Kilometres won't do. This assumes you can model clouds, of course. Another thing which strikes me as odd is the idea that the biosphere is somehow a constant which doesn't respond to temperature, CO2, or variations in humidity or precipitation. Unless you can model the biosphere (and you probably can't), you can't model the climate.
@Richard Betts
some unfortunate phrasing there on my part. I am too quick sometimes with the publish button (the Captcha usually presents me with a chance to review but sometimes the comment sails through - moving finger writes and all that)
I am not accusing anybody particularly of cynically manufacturing policy based evidence here.
What I am saying is that if the model doesn't provide a reasonable facsimile of observation - i.e. a "known physical process accurately replicated in the software" which can be relatively safely moved between scenarios - then clearly there is perhaps more than one physical process that is unknown, and effort (and funds) should be directed at identifying the culprit(s).
The "paid" part of my comment is really directed at what I perceive as wasted effort trying to kludge software to synthesize a variable which is controlled by an unknown (and in some cases not accurately/sensibly knowable) process - the real world regularly defeats explanation - we're getting better at it but we're not there yet.
I suppose I'd also have to snipe at what I believe to be confirmation bias (or worse) in some of the interpretations of the models when presented to the policy makers who provide the funding... otherwise I wouldn't be here.
A scientific approach has to be enlightened by observation, and the pressure to provide projections which are close relatives of speculation when confidence levels are very low must be resisted - from experience, that's a path to ruination.
What on earth is wrong with saying "we don't know, but we're looking at all sorts of possible explanations"? - and sacking the entire corporate communications department.
TomO
If only the world worked that way.
I made the mistake of trying to interpret Dr Rood's statement that "the uncertainty in climate projections associated with the physical climate model is smaller than the uncertainty associated with the models of emission scenarios".
This seemed to me a perfectly reasonable and plausible proposition,(1) and still does. I interpret it as saying that the specific model runs included are more constrained than our guesses at future social and economic changes. As best I can work it out from the figure he shows, which I take to be the basis of his statement, his assertion stands, but it's tighter than I would have thought.
The figure shown in the climateknowledge page is of poor quality, so I tracked down the original from grida.no: http://www.grida.no/climate/ipcc_tar/vol4/english/images/fig9-1b.jpg
This shows what I presume to be model ensemble results for 2100 temperature for each of the scenarios as vertical lines on the right.
As best I can eyeball them using a graphics package for magnification and grid, these can be represented as central points of temperature increase and what Dr Rood characterises as the uncertainty(2), which I shall call the +/-, because I don't think the word uncertainty is appropriate here. The figure says that the bars represent the "range in year 2100 produced by several models".
A1B 2.95 +/- 0.85
A1T 2.55 +/- 0.75
A1F1 4.45 +/- 1.15
A2 3.65 +/- 0.95
B1 2 +/- 0.6
B2 2.65 +/- 0.75
IS92a 2.25 +/- 1.25
The average +/- for the models is 0.9.
The mean of the central temperature estimates is just over 2.9 degrees. All but one of the central temperature estimates are inside that 0.9 +/- we averaged from the models, but A1F1, the "we all get rich" scenario, goes to 4.45, which means that on that one point alone, Dr Rood's statement holds for these data.
(1) noting that no reality at all is involved: in this discussion, we are just comparing the spread of two computer-generated sets of numbers, not whether any of them relates to temperatures, playing cards or marbles.
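The back-of-envelope averaging above is easy to reproduce. A sketch, using the eyeballed (centre, +/-) values listed in the comment - these are the commenter's readings from the figure, not values from the underlying data:

```python
# Eyeballed (centre, +/-) pairs per SRES scenario, as read off the figure
scenarios = {
    "A1B":   (2.95, 0.85),
    "A1T":   (2.55, 0.75),
    "A1F1":  (4.45, 1.15),
    "A2":    (3.65, 0.95),
    "B1":    (2.00, 0.60),
    "B2":    (2.65, 0.75),
    "IS92a": (2.25, 1.25),
}

centres = [c for c, _ in scenarios.values()]
spreads = [s for _, s in scenarios.values()]

mean_spread = sum(spreads) / len(spreads)  # average model-ensemble +/- per scenario
mean_centre = sum(centres) / len(centres)  # mean of the central estimates

print(f"mean model +/-: {mean_spread:.2f}")  # 0.90
print(f"mean centre:    {mean_centre:.2f}")  # 2.93

# How far each scenario's central estimate sits from the mean,
# compared with the averaged model spread
for name, (centre, _) in scenarios.items():
    dist = abs(centre - mean_centre)
    flag = "outside" if dist > mean_spread else "inside"
    print(f"{name:5s} centre {centre:.2f} is {dist:.2f} from the mean ({flag})")
```

One wrinkle: using the exact mean (2.93) rather than the rounded "just over 2.9", B1 at 2.0 lands fractionally outside the averaged band, so "all but one" is right on the edge; A1F1, at 1.52 from the mean, is well outside it, which is the point being made.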