Monday, Mar 4, 2013

IPCC statistics ruled illegal

Bayesian statistics, the approach favoured by the IPCC in its assessments of the world's climate, has been ruled illegal by the Appeal Court in London. As the judge explained in a case revolving around possible causes of a fire:

Sometimes the "balance of probability" standard is expressed mathematically as "50 + % probability", but this can carry with it a danger of pseudo-mathematics, as the argument in this case demonstrated. When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.

David Spiegelhalter notes that "[to] assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability". That being the case, one wonders whether this opens up the possibility of legal challenges to the IPCC assessment reports.
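A minimal sketch in Python of that Bayesian view, with invented numbers: the fire has already happened, and we update a prior belief about its cause as evidence arrives.

```python
# A toy Bayesian update for an event that has already occurred.
# All numbers are invented for illustration only.

prior_arson = 0.2          # belief the fire was arson, before evidence
prior_accident = 0.8       # belief it was accidental

# Likelihoods: probability of finding accelerant traces under each cause
p_evidence_given_arson = 0.9
p_evidence_given_accident = 0.1

# Bayes' theorem: P(arson | evidence)
numerator = p_evidence_given_arson * prior_arson
evidence = numerator + p_evidence_given_accident * prior_accident
posterior_arson = numerator / evidence

print(f"P(arson | evidence) = {posterior_arson:.3f}")  # 0.692
```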

For once, however, I find myself on the IPCC's side. I imagine a higher court will set the ruling aside.


Reader Comments (68)

Mar 5, 2013 at 12:09 AM | Morley Sutter

Richard Betts,
Thank you for your forthright responses.
I guess I am not of the same belief as you: the ability of humans to alter temperature is minuscule, the financial and social costs are immense, and the models are not good enough to be predictive of future temperatures. Models that predict too rapid a rise in temperature, fail to predict the present slowing of that rise, and predict a hotspot in the atmosphere that has not been found do not produce confidence in our ability to prevent a rise in temperature, in the models themselves, or in the prediction of CAGW.

Indeed, there is an interesting discussion at Climate Audit at present regarding Mann and his "Scenarios". It cannot be said that the models have come anywhere near observations for the right reasons: Mann's Scenario C is the closest to the observations, and its acceptance on the basis of RB's statement would clearly invalidate the premise that CO2 is responsible for the gentle warming, and with it the catastrophic consequences predicted.

Mar 5, 2013 at 4:30 AM | Unregistered CommenterStreetcred

Copner at 11:47 pm. You are asking a crucial question. My answer is 'undecided'. Statistical propositions derived from a theory are tested against other statistical propositions derived from the data. One event does not make a statistical proposition. The meteorologists should first collect an ensemble of similar cases. Note that the probabilities in your example may take other values depending on which conditions are considered. If you measure the length of a rod, you will never get different values depending on a subjective choice of conditions. Ergo, probabilities do not apply to single events.

Mar 5, 2013 at 6:37 AM | Unregistered CommenterMindert Eiting

Those full of opinions about Bayes should read "The Theory That Would Not Die".

http://www.amazon.com/dp/0300188226

After reading and digesting, then comment.

Mar 5, 2013 at 7:03 AM | Unregistered CommenterHarry

"I often get told here that climate scientists should not use models..."

I agree that it is wrong to say climate scientists shouldn't use models. It's unavoidable.

I think the distinction being made is between models used for developing understanding and models being used for prediction, and the rule they're referring to ought to be that you shouldn't use unvalidated models for prediction.
(...)

I hope that helps.
Mar 4, 2013 at 8:30 PM Nullius in Verba

How very well put.

I'd just add that checking that a model roughly reproduces historical observations does not count as "validation", although the Met Office has often said it does. Validation includes verifying that the model is a correct representation of the physical processes involved; a simple lookup table, after all, can replay past observations without involving any representation of the physical processes.
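A minimal sketch, in Python with invented numbers, of the lookup-table point: a "model" that hindcasts perfectly while encoding no physics at all.

```python
# A "model" that replays past observations perfectly but contains no physics.
# Hypothetical annual temperature anomalies, for illustration only.
history = {2000: 0.28, 2001: 0.32, 2002: 0.35, 2003: 0.36}

def lookup_model(year):
    """Perfect hindcast, useless forecast."""
    return history[year]  # fails for any year it has not already seen

print(lookup_model(2002))   # matches the observation exactly
# print(lookup_model(2040)) # raises KeyError: the table knows no physics
```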

Mar 5, 2013 at 8:45 AM | Registered CommenterMartin A

@Nullius in Verba, Mar 4, 2013 at 7:55 PM, last paragraph:

“The IPCC make this same distinction too, in the definitions for the terms "likelihood" and "confidence". "Likelihood" means the probability of an outcome assuming the model is correct. "Confidence" means the probability that the model is correct. So when they say "it is very likely that..." it doesn't actually tell you what the probability is.”

The IPCC should be compelled to put a prominent warning to this effect at the front of their reports. They quote probabilities such as "very likely, 90% probable", but as Nullius explains, these hold only on the assumption that their models are correct. The problem is that politicians take these probabilities as irrefutable facts. Even the IPCC acknowledge that their models are incomplete, e.g. in modelling clouds, the very area crucial to their alarmist positive-feedback theories. We also know that the IPCC take worst-case assumptions which make their predictions more alarmist, e.g. on climate sensitivity, CO2 retention times, etc.
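To see how much the distinction matters, here is a minimal Python sketch with invented numbers: a "very likely" 90% outcome conditional on the model, combined with less-than-certain confidence in the model itself.

```python
# "Likelihood" (per the quoted definition): P(outcome | model correct).
# "Confidence": P(model correct). The headline number alone gives you neither.
# All values invented for illustration.

p_outcome_given_model = 0.90      # "very likely", assuming the model is right
p_model_correct = 0.60            # confidence that the model itself is right
p_outcome_given_not_model = 0.50  # assumed: no information if the model is wrong

p_outcome = (p_outcome_given_model * p_model_correct
             + p_outcome_given_not_model * (1 - p_model_correct))
print(f"Unconditional P(outcome) = {p_outcome:.2f}")  # 0.74, not 0.90
```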

Mar 5, 2013 at 9:17 AM | Unregistered Commenterdougscot

Richard Betts (Mar 4, 2013 at 10:54 PM) said:
"The models projected warming of global mean temperature, with faster warming over land and in the Arctic, and all these have happened in reality since the 70s."

Does the data show a real and unambiguous 'signature', or does it merely hint at a signal buried within the noise? My understanding is that it's the latter, but I'd be grateful for any convincing evidence that it's the former.

Mar 5, 2013 at 10:09 AM | Unregistered CommenterDave Salt

Richard

Just because the models match does not mean they match for the right reason.

Look at the recent decadal forecast debacle, where everything since 2005 was labelled 'previous predictions' (now relabelled 'retrospective forecasts', which is still laughable; try saying that with climate swapped for economics to see why), magically 'predicting' a future volcanic eruption, whereas the genuinely previous prediction of course could not 'predict' its effects on temperature and diverged.

My question: at what point should it start warming at the rate required to hit the 1.3-1.7°C above average by 2040 given in the 2011 Foresight report? This range is said to cover all emission scenarios, up to the point where they diverge.

This is only 27 years away, and to reach even the low end we need to see about 0.3°C per decade of warming (not seen in the thermometer temperature record?).
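A rough arithmetic check of that 0.3°C/decade figure, sketched in Python; the 0.5°C current anomaly relative to the Foresight baseline is an assumption, for illustration only.

```python
# Rough check of the ~0.3 C/decade figure. The current anomaly of 0.5 C
# against the Foresight baseline is an assumed value, for illustration.
target_low = 1.3       # low end of the 2040 range, degrees C above average
current_anomaly = 0.5  # assumed warming already realised against that baseline
years_remaining = 2040 - 2013

rate_per_decade = (target_low - current_anomaly) / years_remaining * 10
print(f"Required warming: {rate_per_decade:.2f} C/decade")  # ~0.30
```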

So how long does the current decadal rate of warming need to continue before we say that those models are wrong and those projections are running warm, i.e. a test?

Mar 5, 2013 at 10:32 AM | Unregistered CommenterBarry Woods

...I often get told here that climate scientists should not use models, and use observations instead. But as you say, although observations can tell you what is happening, they cannot tell you why - unless you are observing a controlled experiment, which is not possible with the global climate as there is only one Earth! So, we use observations and experiments and existing theory (eg: fluid dynamics) to produce models, which are then used to try to understand and explain the observed behaviour of the atmosphere and forecast its behaviour in the future.

Mar 4, 2013 at 8:02 PM Richard Betts


I don't remember RB being often told here that climate scientists should not use models.

Although I can remember instances where some climate scientists' tendency to state that, because the observations disagree with their models, the observations must be wrong has been ridiculed.

I think some commenters have said they are skeptical of the possibility of using general circulation models to predict future climate, because of the factors that are unknown or incompletely understood and the likelihood of rapid accumulation of error, especially in view of the nonexistence of a method for validating such models.

But, so far as I can see, physical science essentially consists of formulating and testing models that describe what goes on. So if climate scientists are to advance their understanding, they need to go ahead and produce models. I can't believe that many BH commenters would disagree with this.

What climate scientists do need to avoid, and which results in immense skepticism, is their tendency to mistake their models for reality.

Mar 5, 2013 at 12:07 PM | Registered CommenterMartin A

Well put, Martin A.

The creation and discussion of conceptual models, including computer models, can provide a very productive environment for the creation of testable ideas.

I have seen that countless times in industry when, for example, a process model made from Post-Its and lines on a flipchart can lead to discussions that lead to breakthroughs as well as more mundane improvements.

But that 'testable' part is important. All ideas generated in and around a model should ideally be tested before being used to guide practical decisions. We used the PDSA sequence to organise this: Plan the change as a test, Do the change, Study the results, Act on what has been learned. Only if this test is satisfactory should the change be implemented as part of the new routine.

Zealots at the model stage would very often insist on 'just doing it', with no need for a test. For them, the model was sufficient. But the model world is a simple one - that's why we like it and can progress more readily in it - and we should not forget that simplicity. Since the real world is far more complex, testing is an appropriate discipline.

We cannot do such deliberate tests in general in the climate system, but we can look for observations to test or refute what a model has encouraged us to think may be the case as a precaution before using other outputs of the model for practical guidance.

For example, suppose a climate model says that the upper troposphere should get relatively warmer than elsewhere because of CO2. That's testable by observations. Or that hurricanes should become more/less intense/frequent. That's testable. Or that both the north and the south pole should be getting warmer. Or that ice should be diminishing at both poles. Or that the rate of sea level rise should be increasing. Or that snow is becoming a thing of the past in the UK. Or that global mean temperature should rise monotonically with CO2. Or that Australia is entering permanent drought.

Mar 5, 2013 at 2:11 PM | Unregistered CommenterJohn Shade

"I think some of the claimers also are under the impression that climate by its very nature cannot be predicted which appears to be the IPCC's position."

Weather can't be predicted. It might be possible to predict climate (the statistical distribution of weather), although so far I don't think it has been.

"Validation includes verifying that the model is a correct representation of the physical processes involved. However, a simple lookup table can replay past observations but without involving any representation of the physical processes."

No model is a correct representation of the physical processes. Nobody uses quantum mechanics or general relativity to model routine mechanical engineering. All models are wrong, but some are useful.

A model does not have to have any realism at all - all it has to do is be accurate. A polynomial fit to the real curve will do. It certainly helps if the model is related to the physics, and it is in general hard to construct very accurate models that are not, but all one really has to do is demonstrate the accuracy of the predictions under the relevant circumstances. How the black box does it internally doesn't matter.
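A sketch of that point in Python, with synthetic data: a pure curve fit with no physics in it is accurate within the circumstances it was fitted to, and fails badly outside them.

```python
import numpy as np

# Synthetic "observations": an underlying curve the polynomial knows nothing about.
x = np.linspace(0, 3, 30)
y = np.sin(x)

coeffs = np.polyfit(x, y, deg=5)  # pure curve fit, no physics

# Accurate inside the fitted range...
print(np.polyval(coeffs, 1.5), np.sin(1.5))    # close agreement
# ...and badly wrong outside it.
print(np.polyval(coeffs, 10.0), np.sin(10.0))  # wild disagreement
```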

(Of course, if you're using the models for understanding rather than prediction, then being an explanatory model with some relationship to the physics is necessary.)

Replaying past weather doesn't tell you the climate. You can't predict with it. You can't say how likely some particular outcome was. Quantum mechanics (which is a model) predicts that radioactivity will decay with a particular exponential distribution. It won't tell you the precise times of the Geiger counter clicks, and a replay of past clicks won't tell you anything about the future. But the predictions that you *can* make have been measured in a wide range of physically extreme circumstances and confirmed to be accurate, both broadly and in detail. There are no known instances where it has been observed to get things wrong. When climate models can say the same thing, we'll pay more attention.
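The Geiger-counter example as a minimal Python sketch (the half-life is an arbitrary illustrative value): the individual click times are unpredictable, but their statistical distribution is a sharp, testable prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
half_life = 10.0                # arbitrary units, for illustration
rate = np.log(2) / half_life

# Individual "click" intervals: unpredictable, different on every run.
intervals = rng.exponential(1 / rate, size=100_000)

# The distribution, by contrast, is a firm prediction: mean interval = 1/rate.
print(intervals.mean(), 1 / rate)  # ~14.43 vs 14.43
```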

"The IPCC should be compelled to put a prominent warning to this effect at the front of their reports."

The IPCC can tell people or not as they choose. *We* will tell people, and they'll have to make their own minds up what that means for the honesty of the IPCC.

Mar 5, 2013 at 7:54 PM | Unregistered CommenterNullius in Verba

"No model is a correct representation of the physical processes. Nobody uses quantum mechanics or general relativity to model routine mechanical engineering. All models are wrong, but some are useful."

I think this is splitting hairs over the meaning of "correct". If a model enables things to be calculated to the needed precision, it is correct in my book.

"All models are wrong" - has anyone ever found any deficiency in quantum electrodynamics as a model for non-nuclear, non-gravitational physical effects?

Mar 5, 2013 at 8:15 PM | Registered CommenterMartin A

Nullius In Verba writes:

"(Of course, if you're using the models for understanding rather than prediction, then being an explanatory model with some relationship to the physics is necessary.)"

It is my understanding that climate modelers claim that their models give them understanding of climate. Where am I wrong on this?

(How many times have I heard climate modelers claim that "If you accept the model then you accept the physics that it incorporates." Physics always provides understanding of what it can be used to predict.)

Mar 5, 2013 at 8:26 PM | Unregistered CommenterTheo Goodwin

...I have seen that countless times in industry when, for example, a process model made from Post-Its and lines on a flipchart can lead to discussions that lead to breakthroughs as well as more mundane improvements.
(...)
Mar 5, 2013 at 2:11 PM John Shade

I too have been involved in business process modelling, sometimes with a process that involved several different parts of an organisation, geographically distributed and each with their own priorities and objectives.

One of the most striking things about getting such groups to work together to create an agreed model of the process was how different their individual conceptions of the process were. In many cases, the groups concluded that simply having reached an agreement on what the process was and how it worked was a major breakthrough. Without the joint construction of a model that would never have occurred.

Mar 5, 2013 at 9:07 PM | Registered CommenterMartin A

"I think this is splitting hairs over the meaning of "correct". If a model enables things to be calculated to the needed precision, it is correct in my book."

I'd have said it was hair-splitting over "representation of the physical processes". If you consider a curve fit to a segment of an empirically measured function to be a representation of the physical processes, then fair enough.

"has anyone ever found any deficiency in quantum electrodynamics as a model for non-nuclear, non-gravitational physical effects?"

Yes. That it only models non-nuclear, non-gravitational physical effects! :-)

But seriously, QED models the electron as a point particle with no internal structure, which it can't actually be. The electric field (for example) goes as 1/r^2 where r goes all the way down to zero. QED dodges the resulting infinities by renormalising (a dubious mathematical trick that even its inventors felt was unsatisfactory), but that only really works as the "long"-range approximation to the as-yet-unknown ultra-short-range physics.

Mar 5, 2013 at 10:39 PM | Unregistered CommenterNullius in Verba

Nullius

Roger Longstaff has posted this on a number of occasions

"In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” IPCC From the 3rd IPCC report, Section 14.2.2.2 “The Climate System”, page 774"

Mar 5, 2013 at 11:18 PM | Unregistered CommenterDolphinhead

Hi Green Sand

I'm not really sure what you are referring to - please could you provide a link or reference? Thanks!

Mar 6, 2013 at 12:43 AM | Registered CommenterRichard Betts

"In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

They mean weather states.

A chaotic system converges on a particular complicated subset of all the possible states called an 'attractor'. Where it lies within the attractor is random and unpredictable, but the attractor itself is a property of the equations of the chaotic system, and its *general* shape and behaviour is often remarkably robust. The current state and trajectory represent weather, but the attractor itself represents the climate. It's certainly *possible* to know exactly what the attractor is for a chaotic system. That doesn't of course mean that it *is* known, or that it is easy to find out.
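A toy illustration of that distinction, sketched in Python with the Lorenz equations (the parameters are the standard textbook values): two nearby starting points diverge completely, yet their long-run statistics agree.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One Euler step of the Lorenz system."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

def trajectory(state, n=50_000):
    out = np.empty((n, 3))
    for i in range(n):
        state = lorenz_step(state)
        out[i] = state
    return out

a = trajectory(np.array([1.0, 1.0, 1.0]))
b = trajectory(np.array([1.0, 1.0, 1.000001]))  # tiny perturbation

print(np.abs(a[-1] - b[-1]))           # endpoints wildly different: "weather"
print(a[:, 2].mean(), b[:, 2].mean())  # long-run statistics agree: "climate"
```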

There are some nice videos explaining the concept at a very basic level here: http://www.chaos-math.org/en They're a bit slow going to start off with if you already know the subject well, but it's still worth it just for the visual experience.

Mar 6, 2013 at 7:58 PM | Unregistered CommenterNullius in Verba

Nullius

thanks for the post above

I hope you continue to frequent BH. I have always appreciated your contributions on this and other blogs.

Mar 7, 2013 at 9:11 AM | Unregistered CommenterDolphinhead
