Thursday, Sep 26, 2013

Steve Jewson on Bayesian statistics

Steve Jewson is the statistician whose trenchant comments on climatologists' use and misuse of Bayesian statistics were discussed here some months ago.

This presentation, which he gave to a conference at the University of Reading, is in a similar vein, but goes beyond Bayes' theorem to areas such as the UKCP09 climate predictions.

UKCP

  • Unashamedly uses subjective methods
  • Includes subjective beliefs that go beyond the models and the data

Nic Lewis has analysed the impact of these additional subjective factors:

  • And it seems that they push the rate of climate change higher than that suggested by the evidence
  • If true, then UKCP predictions of future temperatures would be higher than their own models and data would suggest
  • And should not be expected to be ‘reliable’

Oh dear.

His conclusion - that the Met Office could easily strip the subjectivity out of their predictions - seems to me to be of critical importance. Time, I would say, for the empanelling of the review that Nigel Lawson called for.


Reader Comments (17)

Are computer models reliable?

Yes. Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes.

Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.

Met Office Publication

Sep 26, 2013 at 5:49 PM | Registered CommenterMartin A

(duplicate posting deleted)

Sep 26, 2013 at 5:51 PM | Registered CommenterMartin A

You can fit most 2-d functions with a 4th order polynomial. The climate models are worse in terms of the heat transfer.

The Kiehl-Trenberth 'energy budget' incorrectly assumes real surface IR flux at black body level. 333 W/m^2 'back radiation' isn't real, so they assume Kirchhoff's Law of Radiation applies at ToA. This reduces IR energy input to 333-238.5 = 157.5.

In reality it is 157.5/23 = 6.85x too high. In turn this overheats the hypothetical sunlit ocean and increases evaporation (exponential kinetics) much more than the cooler cloudy ocean. The temperature is then corrected by using double the real low-level cloud optical depth in hindcasting.

The 'positive feedback' is an artefact, as is the 'missing heat'. At best this is amateurish.

Sep 26, 2013 at 6:03 PM | Unregistered CommenterAlecM

"I wonder how many of the flat prior studies will make it to the final draft of AR5? All of them?"

The answer should be coming to a climate blog near you quite soon.

Sep 26, 2013 at 6:48 PM | Unregistered Commenterpesadia

It's a sign of their arrogance, and of their knowledge of the weakness of their own data, that climate scientists will not get statistical experts involved in their studies. They would rather come up with 'novel' approaches and bugger them up than take advice from 'the experts'.

Funny that for the 'world's most important issue ever', and think of the children, or so we are told, they would rather get it wrong than make use of the best people for the job. Almost makes you think it's not the 'world's most important issue ever' after all.

Sep 26, 2013 at 7:05 PM | Unregistered CommenterKNR

If Steve Jewson wants to discuss this with the Met Office, he'd better bloomin' well do it through the peer-reviewed literature!?!

Sep 26, 2013 at 7:06 PM | Unregistered CommenterRichieRich

^^

Maybe Steve Jewson should call the Met Office's bluff and follow the lead of McShane and Wyner, and publish in the Annals of Applied Statistics or some other such Stats Journal.

Sep 26, 2013 at 7:23 PM | Unregistered CommenterKnockJohn

Just you all hold your horses there. This presentation appears to have been given to meteorologists. So there seems to be an attempt by someone to teach statistics to the mumpties. That is the good news. The fact that it wasn't done 30 years ago is the bad news. The storm is gathering strength.

Sep 26, 2013 at 7:43 PM | Unregistered CommenterDolphinhead

I'd venture to say that 1/3 of 'climate scientists' are activists in the business only because they're able to be unabashedly subjective in their work, and another 1/3 are financially in thrall to those who expect results that back up their less-than-objective work.

Of course, that's a conclusion arrived at subjectively...

Sep 26, 2013 at 7:47 PM | Unregistered CommenterJEM

The Lawson review followed by a public enquiry followed by some jail time for the main offenders? Too much to hope for.

Sep 26, 2013 at 7:58 PM | Unregistered CommenterDolphinhead

"Maybe Steve Jewson should call the Met Office's bluff and follow the lead of McShane and Wyner, and publish in the Annals of Applied Statistics or some other such Stats Journal."

Yes, a Lewis and Jewson paper in a stats journal sounds like a plan. A stats journal is the correct place to publish and has the added advantage of lessening the chances of obstructionist reviewers being in the majority.

Sep 26, 2013 at 8:11 PM | Unregistered CommenterRichieRich

I guess I'm fussy in that I can only imagine Bayesian priors based on extensive testing, long experience of predictive skill or observed hard evidence and not from uniform, subjective, objective or any other type of guesswork from a standpoint of ignorance, belief, ideology or even a half-understood theory. If you don't have good solid priors then the entire exercise is a priori futile. In this case, if you don't really know where natural variation actually comes from then it's all just biased guesswork.

There was a Bayesian expert trying to elicit hurricane predictive skill a few years ago from learned people in the field. Pielke Jr correctly criticized the exercise as little more than guesswork. The statistical procedures were likely flawless, but the trouble is that we don't need 100 guessers; we just need to talk to the one or two people who have a long history of being right. In the case of hurricanes, that was precisely nobody.

When I did a Bayesian expert system for a petrochem plant, I just needed to talk to the one guy that everyone called when something went wrong. If I had conducted a poll among the folk who didn't really know what dial to turn, or which way to turn it, or by how much, but who were nevertheless willing to have a best guess, then I would have had plant explosions every week.

Sep 26, 2013 at 8:29 PM | Unregistered CommenterJamesG

O/T. BBC R4 Inside Science today have a very balanced segment on fracking. http://www.bbc.co.uk/programmes/b03bfszg

Sep 26, 2013 at 8:35 PM | Unregistered Commenterauralay

Forgive my ignorance, but doesn't this miss the point of Bayesian stats?

The idea is that you acknowledge your biases and consider them. Then you promote those that fit your total knowledge and rule out as "outliers" those that make no sense.

It may seem like rubbish because you can't know yourself fully. But the idea is that you present and acknowledge your bias so that everyone can see your sources of error. Then those with alternative views can offer their own bias and thus create their own worldview to present probabilities within.

So Bayesian statistics allows quantitative discussions of partial observations.
(Perhaps at best)

Sep 26, 2013 at 8:49 PM | Unregistered CommenterM Courtney

M Courtney,

"But the idea is that you present and acknowledge your bias so that everyone can see your sources of error."

It works beautifully in a world where you and everyone else know all the information about reality. You can buy yourself a Las Vegas style casino and discover all your bad habits in betting. Unless the dice are loaded, so to speak. Then you cannot discover anything.

In other words, your perfect bias detection system assumes that there is nothing to learn from experience apart from your biases.

Sep 27, 2013 at 4:13 AM | Unregistered CommenterTheo Goodwin

JEM says:

"Of course, that's a conclusion arrived at subjectively..."

Ergo, a Bayesian prior.

Sep 27, 2013 at 9:39 AM | Unregistered Commentertty

I'm not sure if anyone is reading this thread any more, but let me make some comments on the comments.

@Martin A:
Note that in this presentation the word 'reliable' has a specific technical meaning, which is that a prediction that says that some event has a probability of 10% will validate in the sense that that event really will occur 10% of the time.
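To make that calibration sense of 'reliable' concrete, here is a minimal Python sketch (illustrative numbers only, not from the presentation): simulate events that truly occur 10% of the time, and check that the forecast probability matches the observed frequency.

```python
import random

random.seed(42)

# A forecaster repeatedly issues "10% chance of the event" predictions.
# The forecast is 'reliable' (well calibrated) in the sense above if the
# event actually occurs close to 10% of the time.
p_forecast = 0.10
n = 100_000
occurrences = sum(random.random() < p_forecast for _ in range(n))
observed_freq = occurrences / n

# For a reliable forecast, observed_freq should be close to 0.10.
print(round(observed_freq, 3))
```

A miscalibrated forecaster (say, one whose "10%" events occur 25% of the time) would fail this check, however sophisticated the underlying model.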

Did the Met Office really say "computer models are the only reliable way to predict changes in climate"? That would be a silly thing to say. In many cases simple statistical models are just as good, or even better (depending on the location, the variable, the timescale and the length-scale). Many people in the Met Office know that, and the Met Office themselves have often made statistical predictions in the past. The main advantage of computer models is that you can run scenarios, not particularly that they make reliable predictions.

@KNR:
Well, the Met Office did involve some statistics experts in UKCP. But the particular experts that they talked to have a strong preference for defining probabilities in terms of personal beliefs, and incorporating subjective information. If they'd talked to different experts (or even me...although I wouldn't claim that I'm a statistics expert) they might have done it using objective methods instead. Hopefully in the future they will, at the very least, do both, so we can see the impact on the results of the somewhat arbitrary decision about which framework you use (objective or subjective).

@RichieRich:
Unfortunately I don't have time to write peer-reviewed papers. I have in the past, but it's too time-consuming, and I have a day job. Unless I win the lottery.

@JamesG:
Objective priors are very often based on nothing more complex than the idea that, in perfect model tests (a sanity check in which you see whether a computer model could predict itself), the results should be reasonably 'reliable' (per the technical definition given above i.e. that 10% really means 10%). That requirement is often enough to fix the prior (which will end up being very vague) and it means the results can be interpreted as confidence intervals, which is neat. There's a massive literature on this, although it's pretty hard to understand.
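As an illustrative sketch of that frequency-matching idea (my example with made-up numbers, not Jewson's): in the textbook case of a normal distribution with known spread, a flat prior on the mean makes the Bayesian 90% credible interval coincide with the classical 90% confidence interval, so it covers the true value 90% of the time.

```python
import random
from statistics import NormalDist, mean

random.seed(1)

# Flat (objective) prior on the mean of a normal with known sigma:
# the 90% credible interval is numerically the classical 90% confidence
# interval, so it should cover the true mean ~90% of the time,
# i.e. the prediction is 'reliable' in the technical sense above.
true_mu, sigma, n_obs, trials = 5.0, 2.0, 10, 5000
z = NormalDist().inv_cdf(0.95)           # two-sided 90% interval
half_width = z * sigma / n_obs ** 0.5

covered = 0
for _ in range(trials):
    data = [random.gauss(true_mu, sigma) for _ in range(n_obs)]
    xbar = mean(data)                    # posterior mean under a flat prior
    if xbar - half_width <= true_mu <= xbar + half_width:
        covered += 1

coverage = covered / trials              # should be close to 0.90
print(round(coverage, 2))
```

In this simple case the flat prior is the objective choice; in more complicated models the frequency-matching requirement picks out other (usually very vague) priors.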

One of the ideas of objective Bayesian statistics is that it removes the whole need to ask experts for their opinions, which is always going to be contentious in an area as important and controversial as climate change, but still allows you to incorporate parameter uncertainty. As I say above, I think it would make sense for the Met Office to present both objective and subjective results, so we can see the impact of the subjective opinions being fed in.

@M Courtney
You are talking about subjective Bayesian statistics.
The point of objective Bayesian statistics is to propagate parameter uncertainty into your predictions, without having to also incorporate personal opinion.
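A small illustration of that propagation (assumed numbers, not from the post): with a flat prior on the mean and known sigma, the posterior predictive distribution for the next observation is N(xbar, sigma^2 * (1 + 1/n)), which is wider than the naive plug-in interval that ignores the uncertainty in the estimated mean.

```python
from statistics import NormalDist

# Propagating parameter uncertainty into a prediction: under a flat
# prior on mu with known sigma, the posterior predictive for the next
# observation has variance sigma^2 * (1 + 1/n), not just sigma^2.
sigma, n, xbar = 2.0, 10, 5.0
z = NormalDist().inv_cdf(0.95)                          # 90% interval

plug_in_width = 2 * z * sigma                           # ignores uncertainty in mu
predictive_width = 2 * z * sigma * (1 + 1 / n) ** 0.5   # propagates it

print(round(plug_in_width, 2), round(predictive_width, 2))
```

The predictive interval is wider by a factor of sqrt(1 + 1/n); no personal opinion enters anywhere, which is the point.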

Some general comments:

When they set up UKCP, I'd be pretty sure the folk at the Met Office simply didn't know about objective Bayesian methods. Very few people in climate research do. I don't think we should beat them up about that. But now that it's been pointed out, I would certainly hope they'd look into it. They are smart and curious people, whose job is to understand the climate as well as possible, so I'm pretty sure they will look into it at some point.

Btw, if you *do* want to beat up the Met Office, I'd suggest you beat them up about the following:

a) the fact that they don't make UK weather and climate observations freely available, making it really difficult for anyone except them, a few academics and a few wealthy companies to study weather, climate and climate change in the UK. That is really holding back the UK's ability to adapt to climate change, plus it's just wrong, since the data was paid for with taxpayers' money. I'd really like to see them do something for the planet and make that data freely available. If they really are concerned about climate change and its impact (and I'm sure they are), then they should show their true colours by doing that.

b) the fact that they don't make their climate models freely available. Why not? Why hide them?

By comparison, in the US, both weather data and climate models have been freely available for years: you can just download them from the internet.

Steve

Sep 28, 2013 at 7:51 PM | Unregistered CommenterSteve Jewson
