Uniform priors and the IPCC
Last week, I posted about a comment Nic Lewis had written at RealClimate. In that comment, Lewis had spent some time discussing a study by Aldrin et al, and noted that its findings were distorted by the use of a uniform (or "flat") prior. Although Gavin Schmidt did not respond directly to this point, one commenter pushed the question of the validity of the uniform prior approach a little further.
Graeme:
I thought James Annan had demonstrated that using a uniform prior was bad practice. That would tend to spread the tails of the distribution such that the mean is higher than the other measures of central tendency. So is it justified in this paper?
This elicited a response from a statistician called Steve Jewson (a glance at whose website suggests he is just the man you'd want to give you advice in this area):
Following on from the comments by Nic Lewis and Graeme,
Yes, using a flat prior for climate sensitivity doesn’t make sense at all. Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.
Nic (or anyone else)…would you be able to list all the studies that have used flat priors to estimate climate sensitivity, so that people know to avoid them?
RC regular Ray Ladbury then chimed in with this:
Steve Jewson,
The problem is that the studies that do not use a flat prior wind up biasing the result via the choice of prior. This is a real problem given that some of the actors in the debate are not “honest brokers”. It has seemed to me that at some level an Empirical Bayes approach might be the best one here–either that or simply use the likelihood and the statistics thereof.
To which Steve Jewson replied:
Ray,
I agree that no-one should be able to bias the results by their choice of prior: there needs to be a sensible convention for how people choose the prior, and everyone should follow it to put all studies on the same footing and to make them comparable.
And there is already a very good option for such a convention…it’s Jeffreys’ Prior (JP).
JP is not 100% accepted by everybody in statistics, and it doesn’t have perfect statistical properties (there is no framework that has perfect statistical properties anywhere in statistics) but it’s by far the most widely accepted option for a conventional prior, it has various nice properties, and basically it’s the only chance we have for resolving this issue (the alternative is that we spend the next 30 years bickering about priors instead of discussing the real issues). Wrt the nice properties, in particular the results are independent of the choice of coordinates (e.g. you can use climate sensitivity, or inverse climate sensitivity, and it makes no difference).
Using a flat prior is not the same as using Jeffreys’ prior, and the results are not independent of the choice of coordinates (e.g. a flat prior on climate sensitivity does not give the same results as a flat prior on inverse climate sensitivity).
Using likelihood alone isn’t a good idea because again the results are dependent on the parameterisation chosen…you could bias your results just by making a coordinate transformation. Plus you don’t get a probabilistic prediction.
When Nic Lewis referred to objective Bayesian statistics in post 66 above, I’d guess he meant the Jeffreys’ prior.
Steve
ps: I’m talking about the *second* version of JP, the 1946 version not the 1939 version, which resolves the famous issue that the 1939 version had related to the mean and variance of the normal distribution.
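For readers meeting it for the first time: the Jeffreys prior is built from the Fisher information of the likelihood. The following is standard textbook material rather than anything from the studies under discussion, but it shows where the coordinate-independence Jewson mentions comes from:

$$\pi(\theta) \;\propto\; \sqrt{I(\theta)}, \qquad I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2 \log p(x \mid \theta)}{\partial \theta^2}\right].$$

Under a reparameterisation $\phi = g(\theta)$ the Fisher information transforms as $I(\phi) = I(\theta)\,(d\theta/d\phi)^2$, so

$$\sqrt{I(\phi)} \;=\; \sqrt{I(\theta)}\,\left|\frac{d\theta}{d\phi}\right|,$$

which is exactly the Jacobian rule for transforming a probability density. The Jeffreys prior therefore encodes the same state of information whether one works in $S$ or in $1/S$, and the posterior is unchanged. A flat prior has no such property: flat in $S$ and flat in $1/S$ are simply different distributions.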
Nic Lewis was happy to concur and to provide a list of flat-prior studies.
Steve, Ray
First, when I refer to an objective Bayesian method with a noninformative prior, that means using what would be the original Jeffreys’ prior for inferring a joint posterior distribution for all parameters, appropriately modified if necessary to give as accurate inference (marginal posteriors) for individual parameters as possible. In general, that would mean using Bernardo and Berger “reference priors”, one targeted at each parameter of interest. In the case of independent scale and location parameters, doing so would equate to the second version of the Jeffreys’ prior that Steve refers to. In practice, when estimating S and Kv, marginal parameter inference may be little different between using the original Jeffreys’ prior and targeted reference priors.
Secondly, here is a list of climate sensitivity studies that, for their main results, either used a uniform prior when estimating climate sensitivity on its own, or, when estimating climate sensitivity S jointly with effective ocean vertical diffusivity Kv (or any other parameter like those two in which observations are strongly nonlinear), used uniform priors for S and/or Kv.
Forest et al (2002)
Knutti et al (2002)
Frame et al (2005)
Forest et al (2006)
Forster and Gregory (2006) – results as presented in IPCC AR4 WG1 report (the study itself used 1/S prior, which is the Jeffreys’ prior in this case, where S is the only parameter being estimated)
Hegerl et al (2006)
Forest et al (2008)
Sanso, Forest and Zantedeschi (2008)
Libardoni and Forest (2011) [uniform for Kv, expert for S]
Olson et al (2012)
Aldrin et al (2012)

This includes a large majority of the Bayesian climate studies that I could find.
Some of these papers also used other priors for climate sensitivity as alternatives, typically either informative “expert” priors, priors uniform in the climate feedback parameter (1/S) or in one case a uniform in TCR prior. Some also used as alternative nonuniform priors for Kv or other parameters being estimated.
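Lewis's aside about Forster and Gregory (2006), that a 1/S prior is the Jeffreys prior when S is the only parameter being estimated, is easy to illustrate numerically. Here is a toy sketch (my own construction, not any paper's actual calculation) which assumes, purely for illustration, that the data give a Gaussian likelihood in the feedback parameter lambda = 1/S, centred on S = 3:

```python
# Toy illustration of coordinate dependence with flat priors. All numbers
# are made up; this is not any study's actual likelihood.
import numpy as np

s = np.linspace(0.1, 20.0, 100_000)        # grid over climate sensitivity S
lam = 1.0 / s                              # feedback parameter
lik = np.exp(-0.5 * ((lam - 1.0 / 3.0) / 0.1) ** 2)  # same likelihood either way

def median(weights):
    """Median on the S grid from unnormalised posterior weights."""
    cdf = np.cumsum(weights) / np.sum(weights)
    return s[np.searchsorted(cdf, 0.5)]

flat_in_s = lik                            # prior uniform in S
flat_in_lam = lik / s**2                   # uniform in lambda -> density 1/S^2 in S

print("median S, prior flat in S:  ", round(float(median(flat_in_s)), 2))
print("median S, prior flat in 1/S:", round(float(median(flat_in_lam)), 2))
```

On this toy setup the flat-in-S median lands well above the flat-in-lambda one: the shift toward high sensitivity that Lewis and Jewson describe, produced purely by the choice of coordinates.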
Steve Jewson again:
Sorry to go on about it, but this prior thing is an important issue. So here are my 7 reasons why climate scientists should *never* use uniform priors for climate sensitivity, and why the IPCC report shouldn’t cite studies that use them.
It pains me a little to be so critical, especially as I know some of the authors listed in Nic Lewis’s post, but better to say this now, and give the IPCC authors some opportunity to think about it, than after the IPCC report is published.
1) *The results from uniform priors are arbitrary and hence non-scientific*
If the authors that Nic Lewis lists above had chosen different coordinate systems, they would have got different results. For instance, if they had used 1/S, or log S, as their coordinates, instead of S, the climate sensitivity distributions would change. Scientific results should not depend on the choice of coordinate system.
2) *If you use a uniform prior for S, someone might accuse you of choosing the prior to give high rates of climate change*
It just so happens that using S gives higher values for climate sensitivity than using 1/S or log S.
3) *The results may well be nonsense mathematically*
When you apply a statistical method to a complex model, you’d want to first check that the method gives sensible results on simple models. But flat priors often give nonsense when applied to simple models. A good example is if you try and fit a normal distribution to 10 data values using a flat prior for the variance…the final variance estimate you get is higher than anything that any of the standard methods will give you, and is really just nonsense: it’s extremely biased, and the resulting predictions of the normal are much too wide. If flat priors fail on such a simple example, we can’t trust them on more complex examples. [A numerical sketch of this example appears below the quoted comment.]
4) *You risk criticism from more or less the entire statistics community*
The problems with flat priors have been well understood by statisticians for decades. I don’t think there is a single statistician in the world who would argue that flat priors are a good way to represent lack of knowledge, or who would say that they should be used as a convention (except for location parameters…but climate sensitivity isn’t a location parameter).
5) *You risk criticism from scientists in many other disciplines too*
In many other scientific disciplines these issues are well understood, and in many disciplines it would be impossible to publish a paper using a flat prior. (Even worse, pensioners from the UK and mathematicians from the insurance industry may criticize you too :)).
6) *If your paper is cited in the IPCC report, IPCC may end up losing credibility*
These are much worse problems than getting the date of melting glaciers wrong. Uniform priors are a fundamentally unjustifiable methodology that gives invalid quantitative results. If these papers are cited in the IPCC, the risk is that critics will (quite rightly) heap criticism on the IPCC for relying on such stuff, and the credibility of IPCC and climate science will suffer as a result.
7) *There is a perfectly good alternative, that solves all these problems*
Harold Jeffreys grappled with the problem of uniform priors in the 1930s, came up with the Jeffreys’ prior (well, I guess he didn’t call it that), and wrote a book about it. It fixes all the above problems: it gives results which are coordinate independent and so not arbitrary in that sense, it gives sensible results that agree with other methods when applied to simple models, and it’s used in statistics and many other fields.
In Nic Lewis’s email (number 89 above), Nic describes a further refinement of the Jeffreys’ Prior, known as reference priors. Whether the 1946 version of Jeffreys’ Prior, or a reference prior, is the better choice, is a good topic for debate (although it’s a pretty technical question). But that debate does muddy the waters of this current discussion a little: the main point is that both of them are vastly preferable to uniform priors (and they are very similar anyway). If reference priors are too confusing, just use Jeffreys’ 1946 Prior. If you want to use the fanciest statistical technology, use reference priors.
ps: if you go to your local statistics department, 50% of the statisticians will agree with what I’ve written above. The other 50% will agree that uniform priors are rubbish, but will say that JP is rubbish too, and that you should give up trying to use any kind of noninformative prior. This second 50% are the subjective Bayesians, who say that probability is just a measure of personal beliefs. They will tell you to make up your own prior according to your prior beliefs. To my mind this is a non-starter in climate research, and maybe in science in general, since it removes all objectivity. That’s another debate that climate scientists need to get ready to be having over the next few years.
Steve
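Jewson's example in point 3 is straightforward to reproduce. A minimal numerical sketch (mine, with simulated data; the true variance is 1) fits the variance v of a zero-mean normal to 10 points under a flat prior and under the Jeffreys prior for this problem (proportional to 1/v), and compares both posterior means with the maximum-likelihood estimate:

```python
# Flat vs Jeffreys prior for the variance of a normal, n = 10, mean known.
# Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10)          # 10 draws, true variance = 1
ss = float(np.sum(x**2))

v = np.linspace(1e-3, 50.0, 200_000)       # grid over the variance
loglik = -0.5 * len(x) * np.log(v) - ss / (2.0 * v)
lik = np.exp(loglik - loglik.max())        # rescale to avoid underflow

def post_mean(prior):
    w = lik * prior
    return float(np.sum(v * w) / np.sum(w))

print("MLE (ss/n):           ", ss / len(x))
print("flat prior on v:      ", post_mean(np.ones_like(v)))  # about ss/6
print("Jeffreys prior (1/v): ", post_mean(1.0 / v))          # about ss/8
```

Analytically (with the mean treated as known) the flat-prior posterior mean is ss/(n-4) = ss/6 against the MLE's ss/10, so the flat prior inflates the variance estimate by roughly two-thirds and the resulting predictive distribution is much too wide, just as Jewson says.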
I wonder how many of the flat prior studies will make it to the final draft of AR5? All of them?
Reader Comments
Your Grace: on a point of English you should surely have written "commentator" not "commenter."
[I've thought about this before. "Commenter" may be a neologism, but I think it conveys a subtly different meaning to "commentator" so I'm inclined to stick with it.]
I wonder how many of the flat prior studies will make it to the final draft of AR5?
We know very well all of them will. They've not discarded "correct" answers on the basis of them being poorly done in the past.
What a brilliant debate. Thanks Bish. It's clear, precise, polite and from 2 people one can respect.
How long will the comments remain open? Will the promised "On sensitivity -- Part III" (http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/comment-page-2/#comment-313424) ever appear?
To avoid (my initial) confusion, note that the Nic Lewis and Steve Jewson comments on RC are in the thread On Sensitivity Part I (and not Part II).
[censored]
The distinction between a non-informative or Jeffreys prior and a uniform prior is a key part of an introductory course in Bayesian statistics.
Steve Jewson says:
"4) *You risk criticism from more or less the entire statistics community*
5) *You risk criticism from scientists in many other disciplines too*"
But here's the problem with the modern politicised science world: Where are those critics?
Climate scientists have been using these techniques for years, with activists and journalists bigging them up and politicians changing our world on the strength of them.
If statisticians are horrified by these techniques where are their howls of protest; their letters to the papers?
My brain hurts!
But I don't need to know squat about statistics (which is just as well!) to understand that there is a world of opportunity out there for climate scientists — or indeed anyone else, I suppose — to use whatever statistical method they think is going to give them the answer they want.
Whether Jewson is right about Jeffreys' Prior I obviously couldn't say but it seems blindingly obvious that unless your statistics are based on one agreed statistical method, and preferably one approved by statisticians as appropriate for whatever field you are working in, the output is going to be at best unreliable and at worst meaningless.
But I fear Mooloo is right. The IPCC has too much invested in the cult to start being honest and objective at this late stage!
I wonder how many of the denizens of Climate Science ™ will have any idea what this discussion is really about?
I suspect that many, if not most, will be like me - taught some statistical methods in a science course, but never really understood what it all meant!
@Stuck Record
In academia, it is not done simply to criticise others. Instead, one critiques and offers a better alternative. That is a lot more work: It requires gathering and cleaning the data, redoing the analysis, writing the paper, and spending a lot of time on a polite yet clear critique of previous work.
However, there are no credits for an academic statistician who publishes in a climate journal.
And there is of course a good chance the journal would reject the paper, or release but not publish it.
The latter happened to me last year:
Here is what the leaked AR5 SOD says on this, Ch 12 page 65
I am not as pessimistic as some. Science is (supposed to be) self-correcting. Scientists in one field cannot ignore the real experts in other fields forever. IPCC may not take note - in fact it is probably too late for AR5 - but I don't think any realist has particularly high hopes for AR5. But this is now out there and the RC crew will not be able to ignore it forever. Whatever you may think of Gavin, this will have made its mark. Good work Nic and Steve.
That link did not go through
http://jpr.sagepub.com/content/suppl/2012/03/07/49.1.177.DC2/JPR_49_1_Response_from_Tol_to_Gartzke.pdf
[Snip - venting]
The Jeffreys Prior: fine, but one must be careful not to follow Sir Harold in all his science.
From Wikipedia:
Jeffreys was a strong opponent of continental drift. For him, continental drift was "out of the question" because no force even remotely strong enough to move the continents across the Earth's surface was evident.
The initial use of uniform priors in climate science (on the naive assumption that they are uninformative) is perfectly understandable: it's the sort of thing you expect when scientists first start using a technique which they only half understand (it's obvious how to use a uniform prior, but it is far less obvious to a newcomer what the Jeffreys prior even is). One would then normally expect more sophisticated methods to be taken up as experts begin to comment on what you are doing right and what you are doing wrong.
This process is clearly happening at the moment: if you look at the list of papers at http://www.stephenjewson.com/ you will find quite a few familiar names. What is surprising, however, is that this improved understanding is still partial and localised. What is depressing is that the IPCC is still taking seriously early papers which used uniform priors, when it should be obvious to them by now that those results are biased high.
People might find the paper by Jewson et al at http://arxiv.org/pdf/0908.4207.pdf a reasonable place to start: it is mathematical but it does at least try to explain what the issues are.
Careful you don't throw the baby out with the bathwater. A legitimate (but ugly) use of a uniform prior might be to show that results are insensitive to prior assumptions (i.e. that you have enough data that they don't matter).
Also, this isn't really news. Here is James Annan in 2006.
http://julesandjames.blogspot.co.uk/2006/03/climate-sensitivity-is-3c.html
Cheers,
Doug
[BH adds: Thanks for your thoughts Doug. Why then are we still seeing all those papers based on uniform priors in the draft AR5?]
The IPCC will not start publishing good science or practicing good data management now or ever.
That is not their goal.
And that so many AGW promotional efforts use bad stats practice is of no surprise to anyone who reads our host's books, Donna Laframboise, or the climategate leaks, the IPCC leaks, or the many excellent analytical efforts at Climate Audit.
@ Richard Tol,
Can you please show us where the process of science requires that when flaws are found in a scientific theory a better alternative must be provided for the finder of the flaws to be considered correct?
And all that took place on Real Climate? Perhaps there is hope, eh?
Mailman
Thanks Richard.
I understand. But it's a bit of a shame when there are so many alarmist scientists who are very happy to rubbish papers that don't back their position. Often before reading them.
One can invert Doug's comment to get something like this quote from Chapter 5 of The BUGS book
@Mailman
The relevant RC thread has been displaced by more recent issues: to find it one must now look under "Older Entries".
Yes, this is all very well, but can these guys plot a trend line in MS Excel? ;-)
I am very glad to see this post. There is one point that I will dispute.
It would be nice if that were true. In fact, there are large portions of global warming science that are statistically invalid. The biggest such portion is the claim that the increase in global surface temperatures has been shown to be statistically significant. Statisticians do not speak up about those things, although they will often acknowledge those things privately.
The use of statistically-invalid methods is a common problem in science, not just global warming. Part of the reason statisticians do not speak up is that they have found that scientists typically do not want to listen. Indeed, when scientists (in any field) are told that much of the work they have been doing during their careers is invalid, they tend to react negatively—this is only human. Additionally, the scientists sometimes lack the aptitude required to do the statistical analyses properly.
As an example unrelated to climate science, I found that the statistical procedure used to calculate radiocarbon ages from 14C measurements is flawed. Thus, most radiocarbon ages published, during the past half century, need to be reassessed. I submitted a paper on this to a journal. Altogether, about 40 radiocarbon scientists were asked to review the paper. Most declined. The six who reviewed the paper recommended rejection; for each of those reviews, I wrote a rebuttal. After over three years of this, a statistician was brought in to do a review; I was told that the statistician was “eminent” and was not involved in radiocarbon. The statistician said, in effect, “this is all obvious”. And then my paper was accepted.
Before submitting my paper, though, I sent it to several statisticians who had been involved in radiocarbon. None would give me comments on the paper.
Steve Jewson is presently at the "Shock" stage of the climate debate awareness sequence.
When he discovers that the consensus scientists would rather include these errors because the error goes the 'right' way, then he'll finally be at the Acceptance stage. That is, acceptance that the debate has been hijacked by activists who couldn't care less about correctness.
@lurker
You'll have to take my word for it: 20 years of experience; 6 different universities, 9 different departments across the natural and social sciences; 200 papers published; journal editor for 10 years.
Richard Tol, I think he mistook what you said is required in the real world to get acceptance in a journal for what the platonic 'scientific method' requires. Science only requires you to find a flaw in someone's work to refute it... but getting a paper published about it requires you to build on that error and present an alternative. Just the way it is. There's no 'paper' in just saying 'this is wrong'.
Jonathan, "The initial use of uniform priors in climate science is perfectly understandable: it's the sort of thing you expect when scientists first start using a technique which they only half understand"
I don't agree with that. As someone whose understanding of the subject is definitely less than a half (I have to stop and think before writing down Bayes Theorem) it's immediately obvious to me that a uniform prior is daft for a continuous unbounded variable. For example the prior of Hegerl et al 2006 is that the pdf is const at 0.1 for all sensitivities up to 10 then suddenly drops to zero. And they don't even seem to have tested the dependence on the cutoff point.
Given my virtually zero knowledge of the subject I would start off with something smooth like xe^(-x).
Table 2 in Forest et al 2008 seems to show how the choice of a uniform prior inflates the mean.
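Paul Matthews's cutoff point can be checked with the same toy Gaussian-in-1/S likelihood as the sketch further up (so, again, purely illustrative numbers): the posterior mean under a truncated uniform prior keeps climbing as the cutoff is moved out, whereas a smooth prior such as S*exp(-S) has no cutoff to tune.

```python
# Sensitivity of the posterior mean to the cutoff of a truncated uniform
# prior, versus a smooth prior. Toy likelihood; illustrative only.
import numpy as np

s = np.linspace(0.01, 40.0, 400_000)
lik = np.exp(-0.5 * ((1.0 / s - 1.0 / 3.0) / 0.1) ** 2)

def post_mean(prior):
    w = lik * prior
    return float(np.sum(s * w) / np.sum(w))

for cutoff in (10.0, 20.0, 40.0):
    print(f"uniform prior, cutoff at {cutoff:4.0f}:", round(post_mean(s <= cutoff), 2))
print("smooth prior S*exp(-S):       ", round(post_mean(s * np.exp(-s)), 2))
```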
DaveA, Stephen Jewson works for Risk Management Solutions. I think you'll find he can be very calm about issues of this kind.
Paul Matthews, you're a mathematician, not a physicist, never mind a geographer. What is immediately obvious to you is not obvious to everyone. The instinct that a uniform prior is uninformative, and that improper priors can just be truncated "far enough away", is deeply ingrained. If I was starting in this field I would probably start exactly the way they started, but I hope that I would have upped my game by now.
It's hardly the first time climate 'scientists' have been caught pants-down over their use of statistics, is it?
The proper selection and use of statistics is seen as a standard requirement for any physical science undergraduate, but it seems not to be a requirement for the 'professionals' in the area. Are we really saying that these self-proclaimed leaders in their field cannot match the academic standards expected of their own students?
But as has been said already, it is clear that as long as the results are 'right', in that they support 'the cause', how you actually get there is not an issue.
It's about this point in proceedings that I start wishing I had put more effort at school into studying statistics. Is there an idiots guide that the more enlightened out there could point to, that would help us slow coaches to catch up?
@Jonathan Jones.
Thanks for the reference to Jewson. I've always been baffled by priors, but this is a really good explanation.
Jan 25, 2013 at 9:24 AM | Paul Matthews
Hmmm ...Chapter 12 of AR5 ... must be WG1's Chapter 12, whose Lead Author is Andrew Weaver "one of the world's leading climate experts" because ... well, because someone said so, I suppose.
And, of course, it matters not in the slightest that he peddles Greenpeace polemics or that his passion for politics has led him to run as a candidate for the BC Green Party, because he's "stepping up to make a difference".
But speaking of making a difference ... Weaver has also decided that computer model simulations are "experiments".
That leaves the question of how the papers managed to make it through peer review.
Doug Keenan--
On p. 346 of your paper, you state:
"By Bayes’ Theorem, and assuming
a uniform prior distribution on T (i.e. the calendar
years are a priori equally probable),"
Discuss.
I suppose this is analogous to what happens in the financial risk management community. If you trade, say, wheat, your company will have a department that estimates the risks in your positions using a methodology called, amazingly, “value at risk”.
There are various approaches to how this is done and many players apply more than one variant, but simply put, one way of doing it – “historical V@R” – compares your positions and their prices today with what they were yesterday, and the day before, and the day before that. The values are exponentially weighted, making the most recent the most heavily weighted, to model the assumption that the price of wheat is slightly likelier to return today to yesterday’s price than to the price of the day before, which in turn is slightly likelier than the price of the day before that, and so on.
It is obvious, then, that if your historical V@R looks at only one day of history, very few positions will ever look risky. Whereas if you look at a week’s worth of price history, they will appear slightly more so. As you look further back, these differences eventually become less severe: the difference in apparent risk of a reversion to the price of one year ago versus five years ago is actually quite small, because the exponential weighting discounts most of it out anyway.
The point is that it has been well understood for at least ten years now that you can fiddle the apparent risk of a trading book by altering the historical basis you’re using for comparison. For example, if you had bought a lot of oil at $140 in mid-2008 and all you compared it to was the prices of that week, it would look like a low-risk bet. If you went back six months it would look very risky indeed, because the price six months before was only about $90 and you’d be valuing your risk against the possibility of having to sell it at $90.
Consequently, it’s been considered the minimum of good practice to establish the prior history you’ll use at the outset and then stick to it. Fiddling the baseline and continuing to do so can produce or massage away any level of risk you’d like.
It sounds to me like, at best, the cli-sci mob have picked a baseline for some of their analysis that handily supports alarmism, but don’t know enough about statistics to notice this. At worst, they do know, but don’t care.
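For what it's worth, the exponentially weighted historical V@R described above takes only a few lines to sketch. Everything here (decay factor, data, position size) is made up for illustration; real desks layer many refinements on top. The idea: weight each historical daily return by decay**age, revalue today's position under each return, and read a quantile off the weighted loss distribution.

```python
# Generic sketch of exponentially weighted historical value-at-risk.
import numpy as np

def historical_var(position_value, daily_returns, decay=0.94, level=0.99):
    """daily_returns ordered most recent first; returns the level-quantile loss."""
    losses = -position_value * np.asarray(daily_returns)  # loss if that day's move repeats
    weights = decay ** np.arange(len(losses))             # recent history counts most
    order = np.argsort(losses)                            # sort losses ascending
    cum = np.cumsum(weights[order]) / weights.sum()       # weighted empirical CDF
    return losses[order][np.searchsorted(cum, level)]

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.02, size=250)   # one year of fake daily returns
print(round(historical_var(1_000_000, returns)))  # a rough 1-in-100-day loss estimate
```

Shortening the history window, or shrinking the decay factor, is exactly the baseline-fiddling the commenter warns about: it lets you dial the reported risk up or down.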
@Richard,
Thank you for clearing that up.
I was indeed referring to the idea that the thesis/theory itself was the point.
Einstein's famous remark (paraphrased) that a theory must be supported each and every time, while a disproof need only be correct once, seems to be long lost in the post-modern world.
It would seem that the grand edifice of science is even farther from its alleged ideals than many of us realized. The implications this has for the usefulness of this sort of science as a method for finding truth are not good.
@ Lance Wallace
Well put! In radiocarbon dating, it is assumed that the possible calendar years (which are finite in number) are a priori equally probable. That is assumed/agreed by all parties. My paper just accepts that, then does the calculation.
@lurker
There's an easy rule of thumb: We are career academics first, scientists later.
@dearieme
"Jeffreys was a strong opponent of continental drift. For him, continental drift was "out of the question" because no force even remotely strong enough to move the continents across the Earth's surface was evident."
Please be careful here. In 1948, "continental drift" was a beautiful theory for which there was no known physical basis. Plate tectonics was nearly 20 years in the future. Subduction and sea floor spreading were not known.
(FWIW, for all the smartarse critics of UEA, Professor Fred Vine [of Vine & Matthews 1963 fame] was my geophysics lecturer at UEA. He was the man who interpreted magnetic stripes from the E90 ridge as sea floor spreading)
The talk of ‘why don’t more statisticians speak up’ reminds me of the epic struggle undertaken by VS (Visiting Statistician) at
http://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comments
This thread begins with Bart Verheggen stating that, on the basis of a linear trend in GISS, HadCRUT3 and NCDC:
“There is no sign that the warming trend of the past 35 years has recently stopped or reversed.”
VS takes exception to this in the second post:
“Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply non-sensical, and has nothing to do with statistics.”
After being cudgeled by the likes of Eli, Scott Mandia and Bart himself, VS says at March 8, 2010 22:44
“What’s so strange about the whole debate however, is that these tenets (which I’m elaborating on here) of modern statistical testing are not at all so ‘arcane’. Cointegration and unit root testing is widely taught, and should be a standard part of the toolkit of anybody wading into the analysis of time series. Clearly evident is the fact that this entire field is completely ignored in the debate.”
Then the brick wall sets in:
“Bart dammit! (excuse the agitation :)
The post you link to on realclimate is taking Tamino’s blog (we discussed above) as a serious reference. How on Earth can I then take it seriously? Did you compare what Tamino wrote with what is written on unit roots (the definition is given in the wiki link above)?
His claims are simply wrong. ‘Long term memory’? No, it’s in fact ‘perfect memory’ on our subsample, as the series contains a unit root.
I feel I am now writing this down for the 10th time: Calculating a deterministic trend on a process containing a unit root is misspecification. Hence it is meaningless. That discussion at realclimate is simply flawed in its postulates.”
And why bother?
“Keep in mind that while Bart feels his discipline is under attack, my discipline is, in my eyes, being completely abused here by various individuals.”
And on, and on, and on and on and on, it goes, for 2,187 comments – surely a climate blog record.
The last substantive comment by VS is
“I would like a clear reply to those two questions, while referring to the IPCC chart you linked to, since I spent over a month, and God knows how many ’000 of words, making my point.”
So, the reason why? Because the brick wall eventually wins.
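VS's central complaint, that fitting a deterministic trend to a unit-root process is misspecification, is easy to demonstrate. A minimal sketch (simulated data, nothing to do with the actual temperature series): regress a pure random walk, which contains no trend at all, on time and look at the naive t-statistic.

```python
# Spurious trend: OLS applied to a random walk (unit root, no trend).
import numpy as np

rng = np.random.default_rng(42)
n = 130                                    # e.g. 130 "years" of data
y = np.cumsum(rng.normal(size=n))          # random walk: unit root, no trend
t = np.arange(n, dtype=float)

slope, intercept = np.polyfit(t, y, 1)     # OLS fit y = slope*t + intercept
resid = y - (slope * t + intercept)
se = np.sqrt((resid @ resid) / (n - 2) / np.sum((t - t.mean()) ** 2))
print(f"slope = {slope:.3f}, naive t-statistic = {slope / se:.1f}")
# |t| comes out well above 2 on most seeds, despite there being no trend
```

The OLS machinery assumes independent errors around a deterministic trend; a unit-root series violates that assumption, so the apparent 'significance' is an artefact, which is VS's point about cointegration and unit-root testing.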
Re: AR5 preference for "expert elicitation" over statisticians' advice (with which at least Annan agrees), see Paul Matthews' comment at 9:24 above.
I lost respect for this method as a scientific mode of inquiry with Kriegler et al PNAS 2008, which discusses possible tipping points in climate. They interviewed various scientists and came up with collective assessments of the likelihood of occurrence of various climate "tipping points". One such question concerned the disappearance of the Greenland Ice Sheet (GIS).
The authors combined responses and determined the likelihood of such an event by 2200 under low, medium, and high warming scenarios, arriving at 10-40%, 30-70%, and 70-90% respectively. [Individual responses indicate up to 90% for the scenario with less than 2 degrees of average warming.] "Error bars" and all, it appears very objective and alarming.
Then one looks at the responses, and finds that 2 of the 15 interviewees did not provide probability estimates. One said it would take at least 600 years, and another said that such an event would be too far in the future for elicitation of probabilities to be appropriate. [Those dissents agree with various other opinions I have seen expressed which said that GIS disappearance would take millennia. For example AR4 WG1 talks of millennia.]
So despite the fact that some of their experts believe that it's not physically possible to melt GIS over that time span, the authors discard those opinions, and average the remaining ones. I stopped reading the paper at that point.
Well - I know a tiny bit more about 'priors' than I did half-an-hour ago. However, my question to those of you well-versed in statistics and their application to climate - is basic, and in two parts:
Are you saying that the IPCC used/uses statistics which include a uniform prior..?
If so, are you then saying that such statistics are, at best, distorted; probably misleading; or at worst, useless..?
It's about this point in proceedings that I start wishing I had put more effort at school into studying statistics. Is there an idiots guide that the more enlightened out there could point to, that would help us slow coaches to catch up?
Jan 25, 2013 at 11:32 AM | Bloke down the pub
===================================================================
Mr. Pub,
Maybe this?
http://www.amazon.co.uk/Statistics-Dummies-Lifestyles-Paperback/dp/0470911085
The "Dummies" guides are in my experience pretty good.
I did Statistics as what was then called 'AO' level back in the 60s. This was an exam level in between O and A levels. Sadly, I can't remember any of it, so may avail myself of this title.
Somewhat off topic ... A few months back, I stumbled across the Maths text book we used for Oxford & Cambridge Maths O level, back in 1966. I intend to use it to re-educate myself in a subject I used to adore. Anyway, we have friends with a very bright lad who is studying Physics at Durham (and already, bless him, very sceptical about AGW). He did Maths A level. He noted that what I learnt for O level in the 60s, he learnt in his second year A level maths. So we are talking a slippage of two to three years in what kids are expected to be able to take in.
Add to that grade inflation, and no wonder we are tumbling down the OECD league tables. What I also recall is that anyone way back then who got three As in their A levels really was super bright. Now it's the norm. Who does that help? Not the kids, that's for sure.
This has been a public service announcement.
One can see the effect of priors in figure 4 of Forest et al. (Science 2002) [free registration required], which comments, "The main impact of the expert prior is to eliminate the 'fat tail' of high sensitivity values that are not excluded on the basis of recent observations alone."
IPCC AR4 WG1 figure 9.20 (and Box 10.2 figure 1) shows 'fat tails' on all the distributions. As Nic noted in his RC comment in the original post.
The statistically challenged could do worse than starting here:
http://wmbriggs.com/blog/?p=3145
@DaleC (Jan 25 2:13PM)
Glad you remember VS's epic struggle in which IIRC he completely destroyed tamino and Eli (among others). The entire thread is well worth revisiting IMHO. (Jonathan Jones, you might be astonished).
@DaleC (Jan 25 2:13PM)
Btw, I don't remember it being apparent at the time what VS's initials stood for. So Dale, please forgive the impertinence in my asking, but do you happen to know VS personally? And if so, do you know him particularly well by any chance? :)