Brown out
Jun 21, 2013
Bishop Hill in Climate: Models, Climate: Surface

Robert G. Brown (rgbatduke) has posted another devastating comment at WUWT, which I am again taking the liberty of reproducing in full here. For the counter-view, see Matt Briggs here.

Sorry, I missed the reposting of my comment. First of all, let me apologize for the typos and so on. Second, to address Nick Stokes in particular (again) and put it on the record in this discussion as well, the AR4 Summary for Policy Makers does exactly what I discuss above. Figure 1.4 in the unpublished AR5 appears poised to do exactly the same thing once again: turning an average of ensemble results, and the standard deviation about that ensemble average, into explicit predictions for policy makers regarding probable ranges of warming under various emission scenarios.

This is not a matter of debating whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate. That procedure is, actually, wrong, and yes, it is wrong for the same reasons I discuss above: there is no reason to think that the central limit theorem, and by inheritance the error function or other normal-derived estimates of probability, will have the slightest relevance to any of the climate models, let alone to all of them together. One can at best take any given GCM run and compare it to the actual data, or take an ensemble of Monte Carlo inputs, develop many runs, look at the spread of results, and compare THAT to the actual data.
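
To make the distinction concrete, here is a minimal sketch of that second, legitimate procedure: perturb the uncertain inputs of one model many times and compare the resulting spread of trajectories to the observations directly. The toy model, the priors on its inputs, and the “observations” below are all invented for illustration and have nothing to do with any real GCM or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(sensitivity, forcing, noise_scale, years=30):
    """Stand-in for a single climate model run: a trend plus accumulated internal noise.
    Purely illustrative; no real GCM physics here."""
    t = np.arange(years)
    internal = rng.normal(0.0, noise_scale, years).cumsum() * 0.05
    return sensitivity * forcing * t + internal

# Monte Carlo over the uncertain inputs of ONE model, i.e. a prior on its inputs.
runs = np.array([
    toy_model(sensitivity=rng.normal(0.03, 0.01),   # hypothetical prior on sensitivity
              forcing=rng.uniform(0.8, 1.2),        # hypothetical prior on forcing
              noise_scale=0.1)
    for _ in range(1000)
])

# Made-up "observations", used only to show the comparison step.
obs = 0.01 * np.arange(30) + rng.normal(0.0, 0.05, 30)

# Empirical spread of THIS model's runs (no normality assumed), compared to the data.
lo, hi = np.percentile(runs, [5, 95], axis=0)
frac_inside = np.mean((obs >= lo) & (obs <= hi))
print(f"Fraction of observed points inside this model's 5-95% envelope: {frac_inside:.2f}")
```

The point is that the envelope belongs to a single model's input uncertainty, which is at least a well-defined (if prior-laden) object; it is not an average over structurally different models.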

In the latter case one is already stuck making a Bayesian analysis of the model results compared to the observational data (PER model, not collectively), because when one determines e.g. the permitted range of random variation of any given input one is basically inserting a Bayesian prior (the probability distribution of the variations) on TOP of the rest of the statistical analysis. Indeed, there are many Bayesian priors underlying the physics, the implementation, the approximations in the physics, the initial conditions, the values of the input parameters. Without wishing to address whether or not this sort of Bayesian analysis is the rule rather than the exception in climate science, one can derive a simple inequality that suggests that the uncertainty in each Bayesian prior on average increases the uncertainty in the predictions of the underlying model. I don’t want to say “proves”, because the climate is nonlinear and chaotic, and chaotic systems can be surprising, but the intuitive order of things is that if the inputs are less certain and the outputs depend nontrivially on the inputs, then the outputs are less certain as well.
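
For what it is worth, one version of that simple inequality is just the law of total variance. Writing $Y$ for a model prediction and $\theta$ for an uncertain input described by a prior (a sketch of the intuition, not a claim about any particular GCM),

$$
\mathrm{Var}(Y) \;=\; \mathbb{E}_{\theta}\!\left[\mathrm{Var}(Y \mid \theta)\right] \;+\; \mathrm{Var}_{\theta}\!\left(\mathbb{E}[Y \mid \theta]\right) \;\ge\; \mathbb{E}_{\theta}\!\left[\mathrm{Var}(Y \mid \theta)\right],
$$

so on average the spread contributed by the prior on $\theta$ can only add to the spread of the prediction, with the caveat already noted that a nonlinear chaotic system is under no obligation to behave intuitively.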

I will also note that one of the beauties of Bayes’ theorem is that one can actually start from an arbitrary (and incorrect) prior and by using incoming data correct the prior to improve the quality of the predictions of any given model with the actual data. A classic example of this is Polya’s Urn, estimating the unknown probability of drawing a red ball from an urn containing red and green balls (with replacement and shuffling of the urn between trials). Initially, we might use maximum entropy and use a prior of 50-50 — equal probability of drawing red or green balls. Or we might think to ourselves that the preparer of the urn is sneaky and likely to have filled the urn only with green balls and start with a prior estimate of zero. After one draws a single ball from the urn, however, we now have additional information — the prior plus the knowledge that we’ve drawn a (say) red ball. This instantly increases our estimate of the probability of getting red balls from a prior of 0, and actually very slightly increases the probability of getting a red ball from 0.5 as well. The more trials you make (with replacement), the better your successive approximations of the probability become, regardless of where you begin with your priors. Certain priors will, of course, do a lot better than others!
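
A minimal numerical sketch of this updating, with conjugate Beta-Bernoulli updating standing in for the urn. (A prior of exactly zero would never update, so the “sneaky preparer” case is represented here by a prior heavily weighted toward green; the true red fraction and the number of draws are invented purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
true_fraction_red = 0.7            # unknown to the observer; used only to simulate draws

# Two different priors on the fraction of red balls, expressed as Beta(a, b):
# a flat 50-50-ish prior, and a prior that says "almost certainly all green".
priors = {"uniform (mean 0.50)": (1.0, 1.0),
          "mostly green (mean ~0.02)": (0.1, 5.0)}

draws = rng.random(200) < true_fraction_red     # 200 draws with replacement

for name, (a, b) in priors.items():
    for i, red in enumerate(draws, start=1):
        a, b = (a + 1, b) if red else (a, b + 1)    # Bayes update of the Beta prior
        if i in (1, 10, 200):
            print(f"{name:26s} after {i:3d} draws: posterior mean P(red) = {a/(a+b):.3f}")
```

Both priors converge toward the same answer as the draws accumulate; the better prior simply gets there faster.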

I therefore repeat to Nick the question I asked on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000, to avoid the issue of 13, 15, or 17 years of “no significant warming” given the 1997/1999 El Nino/La Nina one-two punch, and since we have no real idea what “significant” means given observed natural variability in the global climate record that is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? Is it even weak evidence for it? Or is it in fact evidence that ought to at least some extent decrease our degree of belief in aggressive warming over the rest of the century, just as drawing red balls from the urn ought to cause us to alter our prior beliefs about the probable fraction of red balls in Polya’s urn, completely independent of the priors used as the basis of the belief?
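
One way to make the question concrete is a toy Bayes-factor calculation: ask how well a “strong warming” hypothesis and a “mild warming” hypothesis each predict a near-flat observed decadal trend, and update the prior odds accordingly. Every number below (the observed trend, its uncertainty, the two hypothesized trends, the prior odds) is invented purely for illustration and is not taken from any dataset or model.

```python
from math import exp, pi, sqrt

def gaussian_likelihood(observed, predicted, sigma):
    """Likelihood of the observed trend under a hypothesis predicting `predicted`,
    with (made-up) combined observational/internal-variability uncertainty `sigma`."""
    return exp(-0.5 * ((observed - predicted) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

obs_trend = 0.03        # hypothetical observed trend, C per decade, since 2000
sigma = 0.08            # hypothetical uncertainty on that trend

like_strong = gaussian_likelihood(obs_trend, predicted=0.25, sigma=sigma)  # "strong warming"
like_mild = gaussian_likelihood(obs_trend, predicted=0.08, sigma=sigma)    # "mild warming"

prior_odds = 2.0        # hypothetical prior odds favouring "strong warming"
bayes_factor = like_strong / like_mild
print(f"Bayes factor (strong vs mild): {bayes_factor:.3f}")
print(f"Posterior odds: {prior_odds * bayes_factor:.3f} (down from {prior_odds}: "
      f"under these toy numbers the flat data counts against, whatever the prior)")
```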

In the end, though, the reason I posted the original comment on Monckton’s list is that everybody commits this statistical sin when working with the GCMs. They have to. The only way to convince anyone that the GCMs might be correct in their egregious predictions of catastrophic warming is by establishing that the current flat spell is somehow within their permitted/expected range of variation. So no matter how the spaghetti of GCM predictions is computed and presented (and in figure 11.33b, not 11.33a, they are presented as an opaque range, BTW), presenting their collective variance in any way whatsoever is an obvious visual sham, one intended to show that the lower edge of that variance barely contains the actual observational data.

Personally, I would consider that evidence that, collectively or singly, the models are not terribly good and should not be taken seriously because I think that reality is probably following the most likely dynamical evolution, not the least likely, and so I judge the models on the basis of reality and not the other way around. But whether or not one wishes to accept that argument, two very simple conclusions one has little choice but to accept are that using statistics correctly is better than using it incorrectly, and that the only correct way to statistically analyze and compare the predictions of the GCMs one at a time to nature is to use Bayesian analysis, because we lack an ensemble of identical worlds.

I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of the probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay. GCM results are not iid samples drawn from a fixed distribution; they thereby fail to satisfy the elementary axioms of statistics, and that failure renders both the mean behavior and the standard deviation of the mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world. Literally meaningless. Without meaning.
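
To see concretely why the arithmetic is empty, here is a deliberately silly sketch: five structurally different deterministic “models” of the same quantity (all invented for illustration), for which an “ensemble mean plus or minus two standard errors” can be computed mechanically even though there is no population being sampled and hence no probability attached to the resulting band.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)

# Five structurally different deterministic "models" of the same quantity.
# They are not random draws from any common distribution.
models = np.array([
    2.0 * t,
    1.5 * t + 0.3 * np.sin(6.0 * t),
    t ** 2 + 0.5 * t,
    3.0 * t - t ** 2,
    1.0 * t + 0.2,
])

mean = models.mean(axis=0)
sem = models.std(axis=0, ddof=1) / np.sqrt(len(models))   # "standard error of the mean"

# The arithmetic goes through, but the "95% band" is not a probability statement
# about anything: these five curves do not sample any distribution.
lo, hi = mean - 2.0 * sem, mean + 2.0 * sem
print(f"'Ensemble mean' at t=1: {mean[-1]:.2f}, nominal '95% band': [{lo[-1]:.2f}, {hi[-1]:.2f}]")
```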

The probability ranges published in AR4’s summary for policy makers are utterly indefensible by means of the correct application of statistics to the output from the GCMs collectively or singly. When one assigns a probability such as “67%” to some outcome, in science one had better be able to defend that assignment from the correct application of axiomatic statistics right down to the number itself. Otherwise, one is indeed making a Ouija board prediction, which, as Greg pointed out on the original thread, is an example deliberately chosen because we all know how Ouija boards work! They spell out whatever the sneakiest, strongest person playing the game wants them to spell.

If any of the individuals who helped to actually write this summary would like to come forward and explain in detail how they derived the probability ranges that make it so easy for the policy makers to understand how likely, or how certain, it is that we are en route to catastrophe, they should feel free to do so. And if they did in fact form the mean of many GCM predictions as if GCMs were some sort of random variate, form the standard deviation of the GCM predictions around the mean, and then determine the probability ranges on the basis of the central limit theorem and the standard error function of the normal distribution (as it is almost certain they did, from the figure caption and following text), then they should be ashamed of themselves and indeed should go back to school and perhaps even take a course or two in statistics before writing a summary for policy makers that presents information influencing the spending of hundreds of billions of dollars based on statistical nonsense.

And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like “67% likely” or “95% certain” in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and the mere common wisdom in the field of predictive modeling (a field where I am moderately expert), where if anybody ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything, I will indeed bitch-slap them the minute they reach for my wallet as a consequence.

Predictive modeling is difficult. Using the normal distribution in predictive modeling of a complex multivariate system is (as Taleb points out at great length in The Black Swan) easy but dumb. Using it in predictive modeling of the most complex system of nominally deterministic equations — a double set of coupled Navier-Stokes equations with imperfectly known parameters, on a rotating inhomogeneous ball in an erratic orbit around a variable star, with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output — is beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb. The phase-space-filling exponential growth of probable error to the physically permitted boundaries dumb.
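
The “exponential growth of probable error” is the familiar sensitive dependence on initial conditions of chaotic dynamics. A minimal illustration, using the standard Lorenz-63 toy system rather than anything climate-related (the parameters, step size, and size of the initial perturbation are just the textbook/illustrative choices):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])      # a nearly identical initial condition

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.3e}")

# The separation grows roughly exponentially until it saturates at the size of the
# attractor; after that, a Gaussian error bar on the initial state tells you nothing.
```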

In my opinion — as admittedly at best a well-educated climate hobbyist, not as a climate professional, so weight that opinion as you will — we do not know how to construct a predictive climate model, and will never succeed in doing so as long as we focus on trying to explain “anomalies” instead of the gross nonlinear behavior of the climate on geological timescales. An example I recently gave for this is understanding the tides. Tidal “forces” can easily be understood and derived as the pseudoforces that arise in an accelerating frame of reference relative to Newton’s Law of Gravitation. Given the latter, one can very simply compute the actual gravitational force on an object at an actual distance from (say) the moon, compare it to the actual mass times the acceleration of the object as it remains at rest relative to the center of mass of the Earth (which is itself accelerating relative to the moon), and compute the change in e.g. the normal force that makes up the difference, and hence the change in apparent weight. The result is a pseudoforce that varies like (R_e/R_lo)^3 (compared to the force of gravity, which varies like 1/R_lo^2, where R_e is the radius of the Earth and R_lo the radius of the lunar orbit). This is a good enough explanation that first-year college physics students can, with the binomial expansion, both compute the lunar tidal force and compute the nonlinear tidal force stressing e.g. a solid bar falling into a neutron star.
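
For reference, the binomial-expansion step being alluded to is the standard first-year calculation (writing M_m for the mass of the moon and M_e for the mass of the Earth, symbols not used above):

$$
a_{\mathrm{tide}} \;=\; \frac{G M_m}{(R_{lo}-R_e)^2} - \frac{G M_m}{R_{lo}^2}
\;=\; \frac{G M_m}{R_{lo}^2}\left[\left(1-\frac{R_e}{R_{lo}}\right)^{-2} - 1\right]
\;\approx\; \frac{2 G M_m R_e}{R_{lo}^3},
$$

which, compared with the Earth's own surface gravity $g = G M_e/R_e^2$, gives $a_{\mathrm{tide}}/g \approx 2\,(M_m/M_e)\,(R_e/R_{lo})^3$, the origin of the $(R_e/R_{lo})^3$ scaling.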

It is not possible to come up with a meaningful heuristic for the tides lacking a knowledge of both Newton’s Law of Gravitation and Newton’s Second Law. One can make tide tables, sure, but one cannot tell how the tables would CHANGE if the moon were closer, and one couldn’t begin to compute e.g. Roche’s Limit or tidal forces outside of the narrow Taylor series expansion regime where e.g. R_e/R_lo << 1. And then there are the sun and solar tides, making even the construction of a heuristic tide table an art form.

The reason we cannot make sense of it is that the actual interaction and acceleration are nonlinear functions of multiple coordinates. Note well: simple, and yet nonlinear. And we are still a long way from solving anything like an actual equation of motion for the sloshing of the oceans or the atmosphere due to tidal pseudoforces, even though the pseudoforces themselves are comparatively simple in the expansion regime. This is still way simpler than any climate problem.

Trying to explain the nonlinear climate by linearizing around some set of imagined “natural values” of input parameters and then attempting to predict an anomaly is just like trying to compute the tides without being able to compute the actual orbit due to gravitation first. It is building a Ptolemaic theory of tidal epicycles instead of observing the sky first, determining Kepler’s Laws from the data second, and discovering the laws of motion and gravitation that explain the data third, finding that they explain more observations than the original data (e.g. cometary orbits) fourth, and then deriving the correct theory of the tidal pseudoforces as a direct consequence of the working theory and observing agreement there fifth.

In this process we are still at the stage of Tycho Brahe and Johannes Kepler, patiently accumulating reliable, precise observational data and trying to organize it into crude rules. We are only decades into it: we have accurate knowledge of the ocean (70% of the Earth’s surface) that is at most decades long, and the reliable satellite record is scarcely longer. Before that we have a handful of decades of spotty observation (before World War II there was little appreciation of global weather at all, and little means of observing it) and at most a century or so of thermometric data at all, of indifferent quality and precision, sampling an ever smaller fraction of the Earth’s surface the further back one goes. Before that, everything is known at best by proxies — which isn’t to say that there is no knowledge there, but the error bars jump profoundly, as the proxies don’t do very well at predicting the current temperature outside of any narrow fit range, because most of the proxies are multivariate and hence easily confounded or merely blurred out by the passage of time. They are pre-Ptolemaic data: enough to see that the planets are wandering with respect to the fixed stars, and perhaps even enough to discern epicyclic patterns, but not enough to build a proper predictive model and certainly not enough to discern the underlying true dynamics.

I assert — as a modest proposal indeed — that we do not know enough to build a good, working climate model. We will not know enough until we can build a working climate model that predicts the past — explains in some detail the last 2000 years of proxy derived data, including the Little Ice Age and Dalton Minimum, the Roman and Medieval warm periods, and all of the other significant decadal and century scale variations in the climate clearly visible in the proxies. Such a theory would constitute the moral equivalent of Newton’s Law of Gravitation — sufficient to predict gross motion and even secondary gross phenomena like the tides, although difficult to use to compute a tide table from first principles. Once we can predict and understand the gross motion of the climate, perhaps we can discern and measure the actual “warming signal”, if any, from CO_2. In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.

Let me make this perfectly clear. The WHO has been publishing absurdities such as the “number of people killed every year by global warming” (a figure subject to a dizzying tower of Bayesian priors that I will not attempt to deconstruct, but that render the number utterly meaningless). We can easily add to this the number of people per year who have died but whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization.

Does anyone doubt that the ratio of the latter to the former — even granting the accuracy of the former — is at least a thousand to one? Think of what a billion dollars would do in the hands of Unicef, or Care. Think of the schools, the power plants, the businesses another billion dollars would pay for in India, in central Africa. Go ahead, think about spending another 498 billion dollars to improve the lives of the world’s poorest people, to build up its weakest economies. Think of the difference that not spending money on building inefficient energy resources in Europe would have made to the European economy: more than enough to have completely prevented the fiscal crisis that almost brought down the Euro and might yet do so.

That is why presenting numbers like “67% likely”, on the basis of Gaussian estimates of the variance of averaged GCM numbers, as if they had some defensible predictive force, to those who are utterly incapable of knowing better, is not just dumb. The nicest interpretation of it is incompetence. The harshest is criminal malfeasance: deliberately misleading the entire world in such a way that millions have died unnecessarily, whole economies have been driven to the wall, and worldwide suffering is vastly greater than it might have been if we had spent the last twenty years building global civilization instead of trying to tear it down!

Even if the predictions of catastrophe in 2100 are true — and so far there is little reason, based on observation as opposed to extrapolation of models that rather appear to be failing, to think that they are — it is still not clear that we shouldn’t have opted for civilization building first as the lesser of the two evils.

I will conclude with my last standard “challenge” for the warmists, those who continue to firmly believe in an oncoming disaster in spite of no particular discernible warming (at anything like a “catastrophic” rate for somewhere between 13 and 17 years), in spite of an utterly insignificant rate of sea level rise, in spite of the growing divergence between the models and reality. If you truly wish to save civilization, and truly believe that carbon burning might bring it down, then campaign for nuclear power instead of solar or wind power. Nuclear power would replace carbon burning now, and do so in such a way that the all-important electrical supply is secure and reliable. Campaign for research, at levels not seen since the development of the nuclear bomb, into thorium-burning fission plants: the US has a thorium supply in North Carolina alone that would supply its total energy needs for a period longer than the Holocene, and so do India and China — collectively a huge chunk of the world’s population right there (and thorium is mined along with the rare earth metals needed in batteries, high-efficiency electrical motors, and more, reducing the prices of all of these key metals in the world marketplace). Stop advocating the subsidy of alternative energy sources where those sources cannot pay for themselves. Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.

Otherwise, while “deniers” might have the blood of future innocents on their hands if your beliefs about the future turn out to be correct, you’ll continue to have the blood of many avoidable deaths in the present on your own hands.

[Updated to add link to Briggs.]

 
