Sunday, May 20, 2012

Myles Allen on Berlin's two concepts of liberty

Simon Anthony sends this report of Myles Allen's recent lecture at Oxford.

Myles (I think he'd prefer I call him "Myles" rather than Prof Allen, as that is how most people in the audience referred to him) is professor of geosystem science in the School of Geography and the Environment and heads the climate dynamics group in the physics department, both at Oxford. His main interest has been the attribution of aspects of climate, particularly "extreme events", to human activities. Recently he has been working on how to use scientific evidence to "inform" climate policy.

The lecture's title comes from Isaiah Berlin's contrast between "negative" and "positive" liberty. These can be (slightly) caricatured as, respectively (and perhaps contrarily), freedom from constraints (eg tyranny) and freedom to do particular things (eg vote for the tyrant). Amongst other things, Berlin was concerned about the possible abuse of positive liberty, in which the state prescribes what is permitted rather than ensuring the conditions in which individuals are free to make their own choices.

Myles contrasted two extreme views of how to address climate change: either continue as at present, so that 0.001% of the world's population choose to benefit from emissions of CO2 while the poorest 20% involuntarily suffer the consequences, or halt emissions and so demolish the capitalist, liberal, market system. In conversation afterwards he accepted this was a rhetorical flourish rather than a genuine choice. 0.001% of the world's population is ~700,000. He said this number was those who profited directly from extraction and burning of fossil fuels. But it omits shareholders or citizens who benefit from taxes paid by oil companies etc. And it omits those who, for example, drive or keep warm or light their houses. If these people were included, the number of beneficiaries would likely be rather more than the number suffering. So it seems more than a little disingenuous to characterise the "sides" in these terms. In any case, rather than have states impose strict controls, Myles wanted to investigate means by which emissions could be voluntarily curtailed and suffering compensated through negative liberty.

So, he says, assume that IPCC's predictions are correct but it'll be 30 years before confirmation. What measures can be taken to reduce CO2 emissions? Offsetting doesn't work because what counts is cumulative emissions, not the rate. Centrally imposed limits would potentially mean big opportunity costs as beneficial activities might not be undertaken. Is there instead some means by which the impacts can be traced back to CO2 emissions and the originators made to pay (cf Deepwater Horizon)?

An essential component of any such scheme is that harm caused by climate changes should be correctly attributed to fossil fuel CO2 emissions. If that were possible then, on a pro rata basis of some kind, those companies responsible for the emissions and which had directly benefitted from extraction and burning of fossil fuels (oil, coal, gas, electricity, car manufacturers, airlines...) could be penalised and the proceeds distributed to those who were judged to have suffered.

Now Myles (I think somewhat inconsistently) seemed to accept that climate predictions for 30 years into the future were unverifiable, unproven and unreliable (perhaps not surprising given that, as Richard Betts confirmed in another thread, even when the Met Office has the opportunity to assess its 30+-year temperature anomaly predictions, for example forecasts made in 1985, it chooses not to do the tests. One can only speculate as to why that might be.) He also accepted that the public might justifiably not believe the assurances of climate experts, particularly given the patchy record of mighty intellects in predicting the future (examples he gave were Einstein post-war seeing imminent disaster unless a world government was immediately set up; a Sovietologist who in the mid-1980s confidently predicted the continuing and growing success of the Soviet Union; 30-year predictions of US energy use which turned out to be huge overestimates; and Alan Greenspan's view that derivatives had made the financial world much more secure. I'd have been tempted to add Gordon Brown's (or George Osborne's) economic predictions but time was limited.) There was very little reason to expect people to believe in the extended and unfeasible causal chain leading to predictions of temperatures in 30 years' time.

Instead Myles proposed that the frequency and pattern of "extreme" events was now well enough understood that the effect of CO2 emissions could be reliably separated from natural variations. He gave various examples of how models had been validated: the extent of human influence on the European heatwave of 2003 has been "quantified"; the Russian heatwave of 2010 was within the range of natural variation; model predictions of annual rainfall in the Congo basin matched uncannily well the "observations" (Myles himself initially doubted the extraordinarily good match, although he now accepts it's genuine. However, the "observations" weren't all one might expect because conditions for meteorologists in the Congo are understandably difficult, so there aren't any actual measurements. Instead an "in-fill" procedure was used to extend readings from safer locations to the Congo basin. I asked whether this agreement between a model and, um, another model was really a good test of either. Myles assured me that both models were reliable and show good agreement with measured data in, for example, western Europe. Still, an odd way to illustrate reliability of model predictions.).

So although it wasn't possible reliably to predict climate to 2050, current near-term regional forecasts may be good enough to show that the probability of extreme events was changed by CO2. In any case, the people who believe they've been adversely affected by climate change are free to take legal action against the companies they believe are responsible. Myles foresaw such litigation growing as the effects of climate change became more apparent.

An obvious question arises, rather like the "dog that didn't bark": if the evidence for the effect of AGW on extreme events is as strong as Myles and others claim, why haven't class actions already been brought, particularly in the US? "Ambulance chasing" lawyers aren't renowned for their reticence, but so far there has been no action of great significance. I don't think it's wild speculation to suggest that lawyers have examined possible cases but haven't yet thought the evidence strong enough to make it worthwhile proceeding. Of course at some stage such cases will come to court, and then Myles may find that his hope that they'll change the "climate" of debate will cut both ways. If a major class action against, say, oil companies, claiming compensation on the grounds that the 2003 European heatwave was due in part to CO2 emissions, were brought and failed, it would be a major setback to hopes for international laws to limit further emissions. While litigation won't advance science, it could be very politically significant, as well as entertaining, to have the arguments for AGW tried in court.

Finally, having been to three of the Wolfson lectures on climate change, I'd like to add a couple of observations. First, although all the speakers talked about the evidence for AGW, not one of them mentioned hockey-sticks. Stocker came closest when he said that current temperatures were the warmest for 500 years but didn't venture an opinion on the medieval warm period. I wonder whether it's too much to hope that the more scrupulous climate scientists are distancing themselves from the petulant antics and inept science of hard-core "Team" members. And second, two of the three speakers (Wunsch and Allen) said that there was little reason for people to believe that 30-year climate predictions were reliable. So perhaps the better climate scientists will stop relying on magical trees and statistics to make up the past and dubious models for scary futures. Instead they might try to do what Myles advocates and concentrate on shorter term understanding of the climate which might at least be testable.


Reader Comments (204)

Hi guys

I'm out most of today but will try to respond further later. In the meantime, a small self-correction - when I checked the IPCC First Assessment Report this morning, I found that they did actually cite a couple of papers on transient runs, not just doubled-CO2 as I told Green Sand. This doesn't affect the rest of what I said, but for the sake of accuracy I just wanted to correct this!

I'll be back later....!

Richard

May 22, 2012 at 9:16 AM | Registered CommenterRichard Betts

May 21, 2012 at 7:54 PM | Richard Betts


1: Unfortunately the first link you give has been disconnected.

The second, to the IPCC data distribution centre, links to what seems to be a huge amount of information. No doubt if I had time I could familiarise myself with this information and find what I was looking for. Or I could if it was there.

I'm just trying to find published data showing the effectiveness of genuine predictions against measured data. Surely if there were such data, given their importance, that should be easy? I'm beginning to doubt whether such a thing exists because otherwise surely you'd have told me exactly where to find it rather than showing me a haystack which might, or might not, have a needle somewhere inside.

2: I've tried to look at the Sawyer paper but it's on Nature's website; Nature still wants income from 40-year-old papers and I don't have a subscription. I've read some commentaries on the paper and it sounds interesting. Is it available from some other source?

May 22, 2012 at 9:20 AM | Unregistered CommenterSimon Anthony

May 22, 2012 at 8:37 AM | Roger Longstaff

"it is axiomatic that it is impossible to model a non-linear, chaotic and multivariate system, defined by a large number of variables and in which the dependency of some of the variables is not perfectly understood. No useful information can be generated by such a model. "

I think "axiomatic" and "no useful information" are a bit strong. However, in practice for the foreseeable future you're almost certainly right.

You previously asked (at least I think it was you) whether Andrew might write another book, this time on climate models. While that's an excellent suggestion, in that current climate models seem to be no more reliable about future temperatures than tree rings are about the past, there's a major problem: no equivalent to Steve McIntyre to criticise the modellers and their toys to within an inch of their lives.

In a sense, Steve M's work was "easy" in that he could do just what the hockey-team said they did (at least when they let on what they'd done and let him have the data) using a PC. Now whoever tried to do the same with climate models might be able to model the "essentials" of global climate on a PC. They'd nonetheless certainly find that if their results differed significantly from those groups whose work figures in IPCC reports, they'd be told that their models were pitifully inadequate to capture the full complexity of the world's climate. That, they'd be told, could only be done by supercomputers running models so complicated that no one understands them.

So unless an independent group has several 10s or 100s of millions of dollars available to set up and run independent models, I don't think more modelling is going to help. The best hope to test the adequacy of the models is to get them to make definite predictions but, as Richard Betts illustrates with his appeal to the great "confidence" in predictions which actually have great uncertainty, that will be difficult to do.

May 22, 2012 at 9:59 AM | Unregistered CommenterSimon Anthony

Thanks Simon,

Perhaps axiomatic is the wrong word to use (although it works for me). How about calling it a hypothesis that needs to be falsified by the scientific method?

I disagree about the analogy with Steve M's work. He (and Andrew and others) did a sterling job in debunking the hockey stick. What I am talking about is pure logic. If I am wrong then somebody should be able to explain why.

Regards, Roger

May 22, 2012 at 10:08 AM | Unregistered CommenterRoger Longstaff

May 21, 2012 at 11:52 PM | Simon Anthony

Couple of things:

1. The Met Office are in the business of making short and medium term predictions. It's entirely within their remit to attempt the extremely tricky goal of making decadal-scale weather/climate predictions; pretty much all of the simulations we're discussing on this thread relate to these.

So it's not that helpful to castigate Dr. Betts for explaining in rather a lot of detail the nature, purpose and outcomes of the Met Office decadal hindcasts/forecasts! They are of quite a different order to the multidecadal simulations of future climate response to enhanced greenhouse forcing, and their uncertainties relate to the strong influence of internal variation on short timescales. The fact that these uncertainties may be large is not unexpected considering the (presumed!) stochastic variability that manifests strongly on short (e.g. interannual to decadal) timescales but which broadly averages towards zero on the multidecadal timescales of interest to long-term projections under various emission scenarios. This is surely obvious.
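A toy numerical sketch of that "averaging towards zero" assumption (made-up numbers, nothing from a real climate model) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (invented numbers): 100 years of independent interannual "noise"
# with a standard deviation of 0.15 C, superimposed on whatever forced trend.
noise = rng.normal(0.0, 0.15, size=100)

# Individual years deviate by ~0.15 C...
print("std of annual deviations:      ", noise.std())

# ...but decadal means deviate by roughly 0.15 / sqrt(10) C, i.e. the
# short-timescale variability largely averages out on longer timescales.
decadal_means = noise.reshape(10, 10).mean(axis=1)
print("std of decadal-mean deviations:", decadal_means.std())
```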

So any “consequent changes in society” that we pursue as a result of our understanding of the greenhouse effect and the consequences of greatly augmenting it will happen as a result of a consideration of all of the evidence that informs our understanding. It doesn't arise from an inspection of efforts by the Met Office to assess the prospects for decadal forecasts! If you're interested in an assessment of climate models en masse and evidence for their utility, there are plenty of places to look (try IPCC AR4 Chapter 8, “Climate Models and Their Evaluation”, for example).

2. Your comments on “temperature”. There is a very good reason why the temperature anomaly is used. As Richard says, global temperature is pretty much without meaning. Of course, if we wish to know what an average temperature might be in a local area as a result of climate change from a model, then we can reconstruct this from the local anomaly and the local baseline temperature.

May 22, 2012 at 12:14 PM | Unregistered Commenterchris

May 22, 2012 at 8:37 AM | Roger Longstaff

"it is axiomatic that it is impossible to model...."

Roger, the whole point of modern science from around the 18th century is that we don’t use Aristotelian axiomatic argumentation to address scientific issues! Your axiom contains a false premise that rather locks in the conclusion that you desire…

In reality the essential elements of the climate system that result from enhanced radiative forcing at the top of the atmosphere can be modeled on the broad scale – the fundamental questions relate to the rate at which thermal energy is driven into the climate system and the equilibrium accumulation of energy. This is straightforward to model even if there are associated uncertainties. Second order questions of how the excess energy is distributed through the climate system can be addressed at the large scale, with less confidence as we progress to more local scales.

The rather significant success of models to date rather disproves your “axiom”. The early modeling of Sawyer and Broecker from the 1970s, which projected significant surface warming by the end of the 20th century (not to mention the modeling of Arrhenius!), demonstrates that the accumulation of thermal energy under greenhouse radiative forcing is pretty much a no-brainer. Early modeling predicted a large number of consequences, some of which are specific to enhanced greenhouse forcing, that have been found to accord with empirical measurement: the focusing of warming in the high Northern latitudes, delayed Antarctic warming, cooling of the stratosphere and raised altitude of the tropopause, atmospheric warming; raised tropospheric water vapour concentrations; raised sea levels; melting of polar and mountain ice; effects on the hydrological cycle with specific latitudinal trends in drying/moistening and so on…

“Pure logic” doesn’t work in science unless one’s premises are properly grounded in evidence! :-)

May 22, 2012 at 12:23 PM | Unregistered Commenterchris

Thanks Chris,

My axiom / theorem / hypothesis has an analogy in an even earlier century than the one you mention - Fermat's Last Theorem also dealt with mathematical impossibility. Let us hope that it does not take over 300 years to settle this one!

As for your statement "The rather significant success of models to date rather disproves your axiom", I would have thought that the rather significant failure of climate models to date shows that the "axiom" could indeed be correct. But I guess that we will have to differ on this.

May 22, 2012 at 12:57 PM | Unregistered CommenterRoger Longstaff

May 22, 2012 at 10:08 AM | Roger Longstaff

"What I am talking about is pure logic."

Ah, logic, if only. Next you'll be asking for critical reasoning and rational examination of the evidence.

The people who run large-scale GCMs hope that the problems to which you refer (essentially deterministic chaos in non-linear systems) don't occur in their models. They can't prove this but, although such behaviour is commonplace in weather systems, they assume that it doesn't apply to climate predictions.

The rationale, such as it is, is that the latter refer to long-term averages and that they are somehow immune to chaotic behaviour. There's no fundamental reason at all why this should be so.

You know all this but so do the people who run the models. In their more honest private moments I suspect they'd go along with Richard Betts' remark that "my confidence is not in any particular model or projection, but in the fact that we have expressed the range of possible future changes as well as is currently possible." They're just doing the best they can but unfortunately for all of us, their competence, as perceived by many people who don't understand the limitations of models, has been wildly exaggerated.

Nonetheless, if Andrew (or whoever) wrote a book which accurately characterised the models as, in the well-worn cliche of the past few years, "not fit for purpose", it would be met by a defence claiming the author's limited understanding and appreciation of the exquisite subtleties of the gossamer threads that comprise the latest climate models made his views irrelevant.

May 22, 2012 at 12:59 PM | Unregistered CommenterSimon Anthony

May 22, 2012 at 12:14 PM | chris

"1. The Met Office are in the business of making short and medium term predictions. It’s entirely within their remit to attempt the extremely tricky goal of making decadal scale weather/climate predictions; pretty much all of the simulations were discussing on this thread relate to these."

Agreed. My points are:

a) The predicted errors in MO weather forecasts for 5 days are about twice those for 1 day.
b) Forecasts for, say, 10 days are essentially useless.
c) The MO's medium term weather predictions have famously been spectacularly wrong (I hope I don't need to rehearse just how wrong)
d) The MO uses the same model for short, medium and decadal weather/climate prediction.
e) a + b + c + d => very little grounds for confidence in decadal predictions and so
f) The MO's decadal central prediction of the rate of warming, made in 2005 (and according to RB the first "genuine" prediction), was for 0.25 degrees in the following decade. The observed change by 2011 was a fall, of about 0.1 degree.
g) f) is entirely consistent with e) and gives no grounds for believing that the models have any useful predictive ability.

"So it's not that helpful to castigate Dr. Betts for explaining in rather a lot of detail the nature, purpose and outcomes of the Met Office decadal hindcasts/forecasts!"

I've not castigated RB for his explanations; quite the opposite, I've praised and thanked him. But that doesn't mean I have to agree with what he says and, if what he says appears inconsistent, I'll ask him about it. If I'm wrong, I'll be happy to be corrected. I hope that he (and you) react likewise.

"They are of quite a different order to the multidecadal simulations of future climate response to enhanced greenhouse forcing, and their uncertainties relate to the strong influence of internal variation on short timescales. The fact that these uncertainties may be large is not unexpected considering the (presumed!) stochastic variability that manifest strongly on short time (e.g. interannual to decadal) time scales but which broadly average towards zero on multidecadal timescales of interest to long term projections according to various emission scenarios. This is surely obvious."

The key word in that para is "presumed". That's a useful thing to do only if the presumption is rigorously tested. As I can make up simple mathematical models (modifications of the Lorenz equations, if you're interested) in which the "presumed" stochastic assumption fails, I'm very confident (in the sense of being sure, rather than very uncertain) that the far more complex climate models will have many sub-systems which exhibit just that kind of behaviour. Their complexity is such that it's impossible to investigate their behaviour thoroughly, so the only reason one might have confidence in the predictions is if they've been shown to work. And it has become increasingly clear during this thread that they haven't been shown to work.
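For readers who haven't played with the Lorenz equations Simon mentions, a minimal sketch of the standard (unmodified) Lorenz-63 system illustrates the sensitivity to initial conditions at issue; this is the textbook toy, not Simon's modified version and nothing resembling a GCM:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the standard Lorenz-63 equations."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb one coordinate by 1e-8

for _ in range(5000):                # integrate 50 time units
    a = lorenz_step(a)
    b = lorenz_step(b)

print("separation of the two trajectories:", np.linalg.norm(a - b))
# The 1e-8 perturbation grows to roughly the size of the attractor itself:
# individual trajectories are unpredictable even though the equations are
# fully deterministic.
```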

In this regard you suggested I read IPCC AR4 Chapter 8, “Climate Models and Their Evaluation”. Here's a quote from that chapter, in answer to the question "How Reliable Are the Models Used to Make Projections of Future Climate Change?"...

"There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the
foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes"

Note "their ability to reproduce observed features of current climate and past climate changes". The output of models tuned on particular data sets match those data sets. If that kind of validation satisfies you, I have a model which accurately reproduces the FTSE over the past 30 years. Would you like me to invest your money?

2: "global temperature is pretty much without meaning"

Please explain. It's a bit puzzling to me as thermometers (for example the ones that take the measurements with which the models are compared) measure temperatures, not changes in temperature. And from these measurements an average can be calculated with which the global temp from a model can be compared. That seems quite well-defined. In what sense is it "pretty much without meaning"?

May 22, 2012 at 1:41 PM | Unregistered CommenterSimon Anthony

Simon, Chris, Andrew, and others,

There has been some confusion about what I posted at 8.37 AM. Let me try to explain:

The axiom / theorem / hypothesis (call it what you will)...........

"It is axiomatic that it is impossible to realistically model a non-linear, chaotic and multivariate system, defined by a large number of variables and in which the dependency of some of the variables is not perfectly understood. No useful information can be generated by such a model."

.....is nothing to do with physics - it is a question of pure mathematics. The mathematics department of any good university should be able to furnish an absolute proof. The wording could possibly be improved, but I am sure that everybody now understands what I mean.

The examples (A & B) that I gave are arguments of mathematical physics, independent of the general mathematical plausibility of climate models, that show how GCMs as currently constituted contain fatal errors that render their output completely useless.

I hope that clears the matter up.

May 22, 2012 at 1:48 PM | Unregistered CommenterRoger Longstaff

May 22, 2012 at 12:59 PM | Simon Anthony

Nonetheless, if Andrew (or whoever) wrote a book which accurately characterised the models as, in the well-worn cliche of the past few years, "not fit for purpose", it would be met by a defence claiming the author's limited understanding and appreciation of the exquisite subtleties of the gossamer threads that comprise the latest climate models made his views irrelevant.

Hi Simon

I think first it would be necessary to accurately define the purpose of the models, before assessing whether they are fit for it or not.

What do you (and others) think the purpose of climate models is?

Cheers

Richard

May 22, 2012 at 5:41 PM | Registered CommenterRichard Betts

And here it comes. Richard will tell us they are not used for political advocacy but for scientific investigation. Maybe that they are not really accurate in any sense, but indicative within a range of uncertainty. That if this year's model is just so good, give us a few more quid and a bigger computer and we will get better. All good stuff. But people of a particular opinion are using model results in order to advocate serious changes in policy. Now Richard, are the models good enough for that, or would you like a few more quid and a bigger computer?

May 22, 2012 at 6:15 PM | Unregistered CommenterRhoda

And one for Roger. I've always had a reservation about that chaos unpredictable thing. And it is this: some things are predictable even if the detail is not. Take my example of a rubber ball bouncing down a flight of steps. The best mathematicians we have would not be able to tell you where the second bounce will be, or any subsequent bounce. But any daft idiot can tell you that the ball will end up on the floor at the bottom of the flight. Climate modellers cannot possibly tell us what the weather will be like this time in ten years (sunny periods and showers, in the UK, is my forecast) but they claim they can tell us that we will be so much warmer, if the emissions scenario is correctly put in.

Now, is it true that the carbon and the forcing in the Met model are in fact 'black boxes' of code which enshrine assumptions, or does everything work according to physics in each run?

May 22, 2012 at 6:22 PM | Unregistered CommenterRhoda

May 22, 2012 at 1:41 PM | Simon Anthony

I'll just answer your question about temperatures/temperature anomalies for now since I'm a little busy....maybe come back to your other points later/tomorrow.

Thermometers give rather accurate temperature readings and so are very useful indeed for measuring temperature at a particular locale. Can we simply average all the temperature readings to get a global temperature? Not really.

After all the average temperature falls as one rises altitude-wise. If you live at the top of Ben Dorain the ambient temperature on any particular day is likely to be around 6 oC cooler than someone living in adjacent Tyndrum...of course no one lives on the top of Ben Dorain but there could be a weather station there (there is one on the top of the Cairngorm for example). The temperature in Denver is pretty much always colder than in Death Valley and if you live in the Alps you might well reside in Termignon at 1300 metres and walk up to the high pastures at 2500 metres, and there might well be a temperature station at 3500 metres. These will record vastly different temperatures at any given time. So how can there possibly be an average global temperature when the temperature is so dependent on how high you are above seal level (since seals generally live in the sea!)? It's pretty much a meaningless concept.

On the other hand, in a world warming under the influence of enhanced greenhouse forcing, a one degree temperature rise at the top of Ben Dorain is likely to be mirrored by a 1 C temperature rise in adjacent Tyndrum and so on. Changes in temperature (anomalies) show strong correlation over rather large distances even if the absolute temperatures are very different. The use of temperature anomalies allows the whole massive matrix of 1000's of temperature stations to be incorporated into a single measure (the anomaly, which is the difference in temperature at that locale from a base period). It allows the use of records that are interrupted or that start and stop, so long as these overlap with other records.

In fact whoever recognised the value of the temperature anomaly as a metric for integrating temperature measures from vastly different locales had a rather brilliant insight!
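A toy version of the Ben Dorain / Tyndrum point, with invented numbers, shows why anomalies rather than absolute temperatures are compared:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2011)

# An invented regional signal shared by both sites: a slow warming plus
# year-to-year weather noise.
regional = 0.02 * (years - 1980) + rng.normal(0.0, 0.3, size=years.size)

# Two hypothetical stations with very different absolute climates:
tyndrum    = 8.0 + regional + rng.normal(0.0, 0.1, size=years.size)  # valley, ~8 C
ben_dorain = 2.0 + regional + rng.normal(0.0, 0.1, size=years.size)  # summit, ~2 C

# Absolute temperatures differ by about 6 C, so averaging them directly mixes
# apples and oranges, but the anomalies (departures from each station's own
# 1980-1990 mean) track each other closely.
anom_t = tyndrum - tyndrum[:11].mean()
anom_b = ben_dorain - ben_dorain[:11].mean()
print("difference in mean absolute temperature:", tyndrum.mean() - ben_dorain.mean())
print("correlation of the two anomaly series:  ", np.corrcoef(anom_t, anom_b)[0, 1])
```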

May 22, 2012 at 6:51 PM | Unregistered Commenterchris

May 22, 2012 at 12:59 PM | Simon Anthony

"The people who run large-scale GCMs hope that the problems to which you refer (essentially deterministic chaos in non-linear systems) don't occur in their models."

The assertion that something can't be modeled because the system is "chaotic" or "deterministically chaotic" or "non-linear" or some combination of all of these ("deterministically chaotic non-linear system"; Yikes!) is a little whimsical, and seems to be made to pretend that what can be done with various degrees of success, actually can't be done, and so we might as well act ignorant and just do nothing (rather like little frogs sitting in a pan of water on a heated stove!).

But you might as well say that one can't possibly model what happens when one drops a drip of ink into a bucket or heats a pan of water on a stove (poor frog!). After all, those water molecules jigger around in a completely chaotic (or "deterministically chaotic") manner, and we can't possibly model their behaviour. Of course we can.

All of these systems are bounded and the collective chaotic properties of the elements of the system (water molecules or ink molecules) contribute to macroscopic behaviour that is eminently modelable. We can calculate the behaviour of ink in a bucket from a knowledge of the diffusion properties of molecules in a fluid of such and such a viscosity at this or that temperature.

The climate system is rather similar in many respects. If you add thermal energy to the climate system it heats up and things generally intensify. This may be a consequence of the collective behaviour of gazillions of molecules of the system acting entirely chaotically (or "deterministically chaotically") but they still manifest as calculable macroscopic phenomena.

I've described a whole slew of things that early climate models have predicted rather successfully (see May 22, 2012 at 12:23 PM | chris). But that seems to be met here with a response rather reminiscent of the "What have the Romans ever done for us" episode in Monty Python's Life of Brian.

The bottom line is that climate models have been rather successful in the essential predictions that arise from a pretty basic physical understanding (and will presumably continue to be so). If climate models successfully predict rising temperatures, a raised tropopause, enhanced tropospheric water vapour, a cooling stratosphere, intensification of the water cycle, delayed Antarctic warming, enhanced Arctic warming...and so on, I don't see how one can assert that climate models can't predict anything because the climate is a "non-linear chaotic system"!

May 22, 2012 at 7:30 PM | Unregistered Commenterchris

May 22, 2012 at 6:51 PM | chris

Thanks for the response but you seem to have adopted a very strange position.

When you say global temperature measurement is "a meaningless concept", what you seem to mean is that because temperature is generally sparsely sampled it's likely the averaged value might have a large error.

Yet I'm sure you're aware that in-filling methods are quite popular among climate scientists, at least when it suits them, for example in areas of the world where there aren't currently measuring stations (there's even an example in the post which began this thread). I can give you numerous examples if you need them. So it seems that climate scientists generally have no problem in inferring data where there are no measurements. What you've described isn't a fundamental problem and it won't be long before global temperatures can be established from surface measurements with reasonable precision.

In the meantime satellites measure temperatures (via radiance) globally and have done for decades.

Despite these, are you going to stay on the burning deck or do you now accept that average global temperature is a "pretty meaningful concept" after all?

If so, we can go on to discuss just how well models do at estimating that average global temperature.

May 22, 2012 at 7:49 PM | Unregistered CommenterSimon Anthony

May 22, 2012 at 7:30 PM | chris

"The assertion that something can't be modeled because the system is "chaotic" or "deterministically chaotic" or "non-linear" or some combination of all of these ("deterministically chaotic non-linear system"; Yikes!) is a little whimsical"

Could well be whimsical but I didn't assert that climate can't be modelled, merely that models are subject to the problems of modelling complex non-linear systems.

"After all those water molecules jigger around in a completely chaotic (or "deterministically chaotic") manner, and we can't possibly model their behaviour. Of course we can."

Whether accidentally or otherwise, you're mixing up statistical and dynamical properties. I'm fairly sure you know the difference so this is either careless or misleading.

As for your list of "predictions", you've fallen prey to "The Texan Sharp Shooter" fallacy. You've selected predictions which have either been made post hoc or contemporary with other, contradictory predictions. You've ignored the unsuccessful predictions in the light of later measurements.

As I'm sure you know, the Texas Sharp Shooter peppers the whole wall with random shots then, having finished shooting, puts his target where most of his random shots ended up.

You choose to interpret the success of some predictions (of course some successful predictions have been made), alongside the failure of other predictions, as a vote-of-confidence in climate science. Would you feel the same about my staggering abilities to analyse non-linear dynamical systems if I got right fully 50% of my predictions of whether a tossed coin lands heads or tails?

May 22, 2012 at 8:06 PM | Unregistered CommenterSimon Anthony

May 22, 2012 at 7:49 PM | Simon Anthony

"When you say global temperature measurement is "a meaningless concept", what you seem to mean is that because temperature is generally sparsely sampled it's likely the averaged value might have a large error."

Please...you've completely misunderstood again Simon. Not sure there's much point in responding if you can't be bothered to read what others write, or to comprehend trivial and rather well established concepts like a temperature anomaly.

Try reading my post again. If I describe the essential problem re "global temperature" by illustrating the altitude-dependence of Earth temperature, and you interpret this to mean that the problem arises from the sparseness of temperature sampling, then there is something seriously wrong with your reading comprehension.

It's worth pointing out that the satellite measures you refer to were hopelessly in error due to some rather catastrophic misanalyses right up until these were independently corrected by competent research groups through 2005. The imperative to address the problem with the satellite temperature data arose from the clear mismatch between the apparent empirical measures of temperatures (from satellites) and those predicted from models (and basic physics).

It turned out that the models were largely correct and the analyses of empirical tropospheric temperatures were entirely incorrect.

In a nutshell that episode encapsulates one of the immense values of models, and if I were to answer Richard Betts' very sensible question (see May 22, 2012 at 5:41 PM | Richard Betts), that would be number 2 (or maybe # 3) on my list of "the purpose of climate models".

May 22, 2012 at 8:30 PM | Unregistered Commenterchris

May 22, 2012 at 8:06 PM | Simon Anthony

You've selected predictions which have either been made post hoc or contemporary with other, contradictory predictions.

Nope. These predictions came from the work of Arrhenius (late 19th/early 20th century; predictions of the consequences of enhancing the greenhouse effect);
Sawyer, Broecker (1970s; temperature rise under enhanced greenhouse forcing);
Manabe and others (1970s/1980s; modeled effects of enhanced greenhouse forcing, including rapid Arctic warming and delayed Antarctic warming);
Hansen (1980s; modeled temperature rise in response to greenhouse warming);
Brasseur and Hickman / Roble and Dickenson (1980s; effect of greenhouse forcing on tropopause height and stratospheric cooling);
Cusbach et al. (late 1990s/early 2000s; modelling of the global redistribution of precipitation trends under greenhouse forcing)

and so on...

in other words not "post hoc" at all. And it seems dumb to assert that things that are clearly not "post hoc" are "post hoc" without actually investigating the "post-hoc"-ness of those things! ;-)

May 22, 2012 at 8:48 PM | Unregistered Commenterchris

May 22, 2012 at 8:30 PM | chris

"Please...you've completely misunderstood again Simon. Not sure there's much point in responding if you can't be bothered to read what others write, or to comprehend trivial and rather well established concepts like a temperature anomaly.

Try reading my post again. If I describe the essential problem re "global temperature" by illustrating the altitude-dependence of Earth temperature, and you interpret this to mean that the problem arises from the sparseness of temperature sampling, then there is something seriously wrong with your reading comprehension."

Chris, you seem upset about something.

If you read my post, you'll find that I refer to "sparse sampling". I didn't say 2-dimensional sparse sampling. The problem you've described is just that there are sparse samples in the 3D atmosphere.

You didn't respond to my point about in-filling of data which is a standard technique in climate science and addresses your problem with "surface" measurements.

As for the earlier problems with the satellite measurements, again you're letting detail get in the way of a fuller understanding. As I said, satellites provided global temperature measurements. Their accuracy might have needed improvement but they were most certainly not "a meaningless concept"; they just needed to be done better.

And really, do please moderate your tone; when you let yourself get upset it detracts from whatever points you're trying to make.

May 22, 2012 at 8:52 PM | Unregistered CommenterSimon Anthony

May 22, 2012 at 8:48 PM | chris

"And it seems dumb to assert that things that are clearly not "post hoc" are "post hoc" without actually investigating the "post-hoc"-ness of those things! ;-)"

I'm afraid you still haven't grasped the Texas Sharp-Shooter fallacy. It's common for climate scientists to fall for it so perhaps you've been spending too long in their company.

I didn't deny that there'd been "correct" predictions (I explicitly said so). I pointed out that you'd selected the "correct" predictions post-hoc and missed out the much greater number of incorrect predictions. It's a common failing, everyone is prone to remembering when their predictions were right rather than when they were wrong, so not really anything to be especially ashamed of. But it's important at least to try to get beyond these weaknesses. If you spend a few days listing all the wrong predictions made by climate scientists, I think you'll benefit enormously.

May 22, 2012 at 8:59 PM | Unregistered CommenterSimon Anthony

May 22, 2012 at 8:52 PM | Simon Anthony

Nope, you've misunderstood again Simon. The value of the temperature anomaly, and the essential meaninglessness of "global average temperature", has got rather little to do with sparse sampling. In fact the surface temperature is, if anything, rather over-sampled, as is indicated by a number of studies in which sub-sets of regional or global temperature data give rather similar temporal progression of temperature anomalies to the full set.

It's very surprising, if I may say so, that you don't understand why temperature anomalies are used. If the average 1000 metre altitude temperature is around 6 oC warmer than the Earth temperature at sea level, then at what height do we specify the Earth's globally averaged temperature? We can specify the 1.5 metre temperature if we wished, but then we're restricted to the tiny sub-set of temperature data collected at sea level. The temperatures collected at stations at 50 metres, or 100 metres or 300 metres or 1000 metres or 1300 metres (Termignon) or 1600 metres (Denver) or 2200 metres (Mexico City) would be useless.

The temperature anomaly allows all of the data to be used, including broken temperature records (or records that covered some periods but not others), due to the essential fact that the temperature anomaly is highly correlated across distance, and so a change in temperature (say over 20 years due to enhanced greenhouse forcing) of 1 oC at the top of Ben Dorain is likely to be closely matched by a similar temperature change in nearby Tyndrum, even though the absolute average temperatures of these locales are wildly different.

Simon, it was you that asked why temperature anomalies rather than absolute temperatures were used (we've shown that the absolute temperature is easily reconstructed from the local temperature anomaly if desired). The explanation is very simple and well understood. You asked...we gave you the answer.

If you still don't understand this there are a number of other ways to frame it. I suggest that if you want to address climate science with any degree of seriousness you do need to get to grips with the rather excellent concept of the temperature anomaly! ;-)

May 22, 2012 at 9:25 PM | Unregistered Commenterchris

whoops, the average 1000 metre temperature is around 6 oC cooler than the temperature at seal level (the height on the Earth surface where seals reside!)....

May 22, 2012 at 9:27 PM | Unregistered Commenterchris

May 22, 2012 at 8:52 PM | Simon Anthony

Simon, since you brought up satellite tropospheric temperature measures it's worth pointing out that these are generally presented as temperature anomalies too. Again similar (but not completely the same) reasons apply. The tropospheric temperature above the seas varies from around 1-15 oC at the surface (depending on latitude) to around -45 to -75 oC at the top of the troposphere. So what is the "global temperature" of the troposphere? It's a pretty meaningless concept since it is entirely dependent on altitude.

And if we consider that the earth comprises the oceans, land surface and troposphere then how can we properly assign a "global temperature"? Pretty meaningless wouldn't you say?

However if one considers the effect of an increased forcing at the top of the atmosphere (say from enhanced greenhouse forcing), we might calculate (and measure in the real world using thermometers and satellite brightness measures) that the temperature anomaly is of the order of 1 oC at the surface and around 1 oC in the troposphere (we might expect a tad more). In other words the temperature anomaly allows a rather coherent assessment of the response of the important elements of the Earth surface to a forcing (of whatever sort), and since the temperature anomaly is quite highly correlated spatially, we can get a rather good assessment of global changes (whether at the surface or in the troposphere) with a rather limited number of measurements.

May 22, 2012 at 10:49 PM | Unregistered Commenterchris

May 22, 2012 at 9:25 PM | chris
May 22, 2012 at 9:27 PM | chris
May 22, 2012 at 10:49 PM | chris

Sorry for the delay, children to put to bed and dinner to cook.

Chris, as I read your posts, I couldn't help but think of a man with a shovel digging ever deeper...

You seem to be fairly bright so I can only think that you're deliberately missing the point. It would really be more dignified and helpful for the discussion if you had the grace to accept your error and move on. Perhaps I should just accept that you're incorrigible but I'll give it one more try...

You said that global temperature measurement is "a meaningless concept". I've pointed out to you that it isn't, that it's a well-defined concept which is approximated to some accuracy by ground and satellite measurements.

I don't disagree with your points about the use of anomaly measurements, but it was a pragmatic approach, a stop-gap measure for when it wasn't possible to do the measurements needed to get the global average temperature. It's been superseded by both satellite measurements and in-filling of surface measurements which, whether you like the idea or not, allow calculation of global average temperature.

Now of course satellite data are presented as anomalies. I'm sure you understand that's for comparison with the surface anomaly data and it's either disingenuous or desperate of you not to recognise that. In any case that doesn't affect the fact that satellite measurements calculate global average temperatures. I honestly can't see why you have a problem with this.

I strongly suggest you take a break, sleep on it. You're doing your case nothing but harm by this dogmatic insistence in direct contradiction of the facts. And really: you probably won't lose much by relinquishing this rather difficult position into which you've manoeuvred. It's perfectly obvious that global average temp is in principle a well-defined measurable quantity and is measured in practice. Let it go.

May 22, 2012 at 11:36 PM | Unregistered CommenterSimon Anthony

As Chris and Simon continue to slug it out, I have been searching for a mathematical proof of the axiom that I posed yesterday (that numerical climate models can have no predictive capability). I think (but am by no means certain) that such a proof may exist within mutual information theory in multivariate analysis, relating to Shannon entropy.

Are there any mathematicians out there who could give an informed comment?

May 23, 2012 at 10:32 AM | Unregistered CommenterRoger Longstaff

May 23, 2012 at 10:32 AM | Roger Longstaff

"Slug it out"? It's like being savaged by a dead sheep, with extremely woolly arguments and (mixing metaphors) trying not to slip on too many red herrings.

On your question, I don't think it can be true that "numerical climate models can have no predictive capability". If based on weather forecasting models (which do have predictive ability) they must have some ability. Also, in perhaps trivial ways, the values of some quantities will be conserved and variables will be restricted to within certain bounds.

Now people who claim that long term climate predictions work while accepting that, say, 10-day forecasts don't, generally argue that this is because climate models predict slowly varying "average" values, which aren't subject to the rapid changes of weather. It might in principle be true that mean values are fairly stable and predictable while detailed dynamics are unpredictable. And if that's the case then the climate models might work well enough while nonetheless satisfying your conjecture.
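That distinction, unpredictable detail but (possibly) predictable averages, can be illustrated with a toy chaotic system such as the logistic map. This says nothing about whether the real climate behaves this way; it only shows the distinction is coherent:

```python
import numpy as np

def logistic_series(x0, r=3.9, n=200000):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

a = logistic_series(0.300000)
b = logistic_series(0.300001)   # a tiny change in the initial condition

# The detailed trajectories diverge completely within a few dozen steps...
print("difference at step 50:       ", abs(a[50] - b[50]))
# ...yet the long-run averages of the two runs agree closely.
print("difference in long-run means:", abs(a.mean() - b.mean()))
```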

May 23, 2012 at 11:13 AM | Unregistered CommenterSimon Anthony

Thanks Simon,

You are doing a sterling job, but I must admit that my concentration wandered.....

I was talking about adding axioms to the use of accepted information theory. Something like:

1. Errors (deviation from reality) equate to loss of entropy

2. Loss of entropy occurs in integration if the dependency of any variable is imperfectly defined

3. Loss of entropy occurs in integration if bandpass filtering procedures are employed

4. Loss of entropy occurs in integration if reversion to boundary conditions is implemented

5. Errors are cumulative in a numerical integration

I am sure that this could be expressed better, but if correct it proves that numerical models can NEVER accurately predict the climate, and that their accuracy diminishes progressively as the integration proceeds. We know from practice that their current usefulness diminishes to almost zero after just a few days, and can therefore infer that they have no predictive capability at all over longer time-scales.

As I said, any mathematicians out there?

May 23, 2012 at 11:41 AM | Unregistered CommenterRoger Longstaff

"As I said, any mathematicians out there?" Umm, yes. Interesting discussion (Roger chris Simon Rhoda).

First, Roger, you're not going to get any proof that numerical climate models can't predict anything.

Yes it's right that there are always numerical errors and the weather is chaotic. But what climate scientists seem to believe is that although the short-term dynamics is very complicated and unpredictable beyond a few days, in the longer term, things are simpler and the climate will 'settle down' to a response to some imposed 'forcing', when averaged over a medium time scale. In its simplest form this belief is represented in the idea of a 'climate sensitivity' where the 'response' is just a linear function of the 'forcing'. This is like Rhoda's analogy of a bouncing ball coming to rest. Or think of taking a dog for a walk - the dog scampers around all over the place but when averaged out goes the same way as you. To put it a bit more mathematically, there are short-term perturbations but these all decay exponentially on to some 'preferred' behaviour of the system.

As the numerous quotes imply, I don't buy this. Why should the short-term dynamics be complex, chaotic but the long-term dynamics very simple? I think the long-term dynamics is probably chaotic too, involving complicated nonlinear interactions of slowly changing things like ocean currents and ice sheets, and therefore equally unpredictable.

A problem with the former approach is that they have to dream up some kind of 'forcing' to 'explain' every little wiggle in the earth's temperature history. What 'caused' the medieval warm period or the little ice age? (Look this up on the web and you'll find some hilarious attempted explanations!) With my viewpoint, there is nothing to explain - irregular oscillations are to be expected in the long-term climate just as they are in the short-term weather. It's not a view that comes up very often, and I certainly can't prove it. Of course I could very easily come up with a model to support it!

May 23, 2012 at 1:49 PM | Registered CommenterPaul Matthews

I think Roger Pielke sr is sympathetic to my viewpoint. He was quite scathing of a claim in the last IPCC report:


Their claim that

"Projecting changes in climate due to changes in greenhouse gases 50 years from now is a very different and much more easily solved problem than forecasting weather patterns just weeks from now.”

is such an absurd, scientifically unsupported claim, that the media and any scientists who swallow this conclusion are either blind to the scientific understanding of the climate system, or have other motives to promote the IPCC viewpoint. The absurdity of the IPCC claim should be obvious to anyone with common sense.


PS This interesting thread has gone way off topic. Should it go to the discussion section?

PPS Roger's 5 is not necessarily true. If you solve dy/dt = noisy errors - y, the introduced noisy errors and any numerical errors get damped out.
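Paul's PPS is easy to check numerically; here is a crude forward-Euler sketch of dy/dt = -y + noise, run twice with the same noise but different starting errors:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, nsteps = 0.01, 20000   # integrate 200 time units

y = 0.0          # reference run
y_err = 1.0      # run starting with a large initial "error"
for _ in range(nsteps):
    noise = rng.normal(0.0, 0.1)
    # Forward-Euler step of dy/dt = -y + noise, same noise for both runs:
    y     += dt * (-y + noise)
    y_err += dt * (-y_err + noise)

print("remaining difference between the runs:", abs(y - y_err))
# The -y term damps the initial error (and any numerical error) exponentially,
# so errors are not cumulative in this particular system.
```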

May 23, 2012 at 2:08 PM | Registered CommenterPaul Matthews

Thanks Paul,

I agree with everything you say, with the possible exception of "you're not going to get any proof that numerical climate models can't predict anything".

Please could you say if you think that I am on the wrong track with my axioms involving information theory and entropy? (It is a long time since I studied maths at university, and even then it was as part of a physics degree).

May 23, 2012 at 2:14 PM | Unregistered CommenterRoger Longstaff

Paul,

"Roger's 5 is not necessarily true. If you solve dy/dt = noisy errors - y, the introduced noisy errors and any numerical errors get damped out."

Would " given the above, a loss of entropy is cumulative in a numerical integration" be true, and a satisfactory replacement?

May 23, 2012 at 3:01 PM | Unregistered CommenterRoger Longstaff

Longstaff's Last Theorem (particularly if the Met Office get their hands on him):

"The loss of entropy in predictive numerical climate models is exponential with respect to time."

PS I have discovered a truly marvelous proof of this, which this comment box is too small to contain

May 23, 2012 at 4:51 PM | Unregistered CommenterRoger Longstaff

Based on Roger's look at the met's model logic, would I be suspicious in wondering whether the black box carbon model and radiation model actually program in the answer that it is going to get warmer, and that no input variable, parameter or algorithm is going to overcome that effect? That the model may guess what it likes about the ball bouncing, but the black boxes predict the end come what may? Am I also being cynical in wondering whether anybody within the met is taking up the cause of holding the model to a rigorous standard by criticism or are they all expected to support it. And yes, I've worked in many an organisation where it would be unwise to speak up.

May 23, 2012 at 4:59 PM | Unregistered CommenterRhoda

Rhoda, well said.

May 23, 2012 at 5:43 PM | Unregistered CommenterRoger Longstaff

Roger, sorry, a bit of a cop-out but I don't really know anything about information theory and Shannon entropy. So I'm not able to prove or disprove your theorem.

May 23, 2012 at 5:59 PM | Registered CommenterPaul Matthews

To get back to the subject of Myles Allen, there is a video here where he makes a stunning misrepresentation of the climategate issue, about 3 mins in. He claims that the only effect of the whole of climategate is one tiny correction to temp around 1870.

May 23, 2012 at 6:11 PM | Registered CommenterPaul Matthews

One area of research I'm involved in is molecular dynamics (MD) simulations. This is computer modelling, aimed at understanding things like the structure of proteins, or the way in which molecules exchange energy with the surrounding solvent prior to or after chemical reaction.

There are loads of issues with the accuracy of such simulations. Here too, you have a chaotic system, and you have parameters that are known not to be correct. Here too, you can ask fundamental mathematical questions about what it means to do an MD simulation, given the very high sensitivity to initial conditions, etc. Here too you can argue that certain properties of the simulated ensemble - especially averages over many time steps - are less sensitive than others to the initial conditions. What the logical/mathematical approach can't do, Roger, is decide whether MD is useful or not. Partly because 'useful' is not a logical category - usefulness is to an extent in the eye of the beholder, and depends what you want to use the model for. For example, you could argue that the output of a given MD simulation is 'useful' because once printed out, you could use it as a doorstopper. To decide if it is useful, you can look to see whether it is fruitful in terms of helping people to design new experiments, understand existing ones, and so on. In that respect, MD works fairly well in my view, though some of the concerns about validation that arise for climate models are an issue also. A big difference that keeps people doing MD simulations honest is that the timescales are such that lots of predictions are made prior to experimental data being known - thus avoiding the subtle temptations of hindcasting.

Another difference is that the amount of experimental data you can use to validate MD simulations is hugely larger than the amount you can realistically use to validate climate simulations. Think of it in terms of Shannon entropy if you wish: Richard Betts (though it is unfair to single him out in this respect) thinks it is a powerful prediction to say temperature will rise by 0.5 degrees in 30 years, if in fact it rises by 0.6. Well, suppose that prior to making the prediction a very rough guess is that temperature could do anything between rising by 1 degree and dropping by 1 degree over that period, and you accept an accuracy of +/- 0.1 degrees. Then, of ca. 21 possible outcomes, you'd decide you are spot on if any of three of them comes up. Your prediction has an entropy content of the order of ln(21/3) = ln 7 = ca. 2. If instead your problem involves predicting some structural property of four proteins, each of which has 5 possible values, then you now have 5 x 5 x 5 x 5 = 625 possible outcomes. Let's say your MD simulation is accurate enough to narrow it down to one of two possibilities for each protein; then your prediction has an entropy content of ln(625/2^4) = 3.7, or almost twice as much. In fact, you can do considerably better than this in the field of MD simulation - there are loads of things to calculate, so loads of things you can test your predictions on. The wider the dataset on which you can keep up a good success rate, the more skilful your model is. A single successful prediction doesn't tell you much about skill. It is worth noting that the issue of modelling global temperature at two points in time, vs. modelling the change in temperature between two points in time (i.e. getting the anomaly at a later point in time), also involves a lower information entropy requirement in the latter case. From what I understand, climate models do not do a great job of getting mean global temperature right - they might be off by much more than a degree. chris's arguments notwithstanding, that has to be a worry.
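For anyone who wants to check the arithmetic in that comparison, here it is spelled out with the same numbers:

```python
import math

# Temperature example: ~21 distinguishable outcomes (-1.0 C to +1.0 C in
# 0.1 C steps), narrowed by the prediction to 3 of them.
entropy_temperature = math.log(21 / 3)    # = ln 7, about 1.95

# Protein example: 4 properties x 5 possible values = 625 outcomes, narrowed
# by the simulation to 2 possibilities per property, i.e. 2**4 = 16 outcomes.
entropy_proteins = math.log(625 / 2**4)   # about 3.66

print(entropy_temperature, entropy_proteins)  # the second is almost twice the first
```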

And all of this is leaving aside Simon's excellent argument that at the time the 'correct' prediction was made 30 years ago, there were lots of other predictions being made that have now been conveniently forgotten.

May 23, 2012 at 9:29 PM | Registered CommenterJeremy Harvey

Thanks very much Jeremy. As you can tell, I am struggling with the concept of entropy, and trying to find out whether it is a valid measure of climate predictions.

Many years ago I was involved in the study of protein crystal growth in microgravity (where a lack of convection enables larger and more homogeneous crystals that are suitable for X-ray diffraction analysis). My understanding is that even large and complex molecules have structures that are defined by well understood chemical bonds, so the solution space, although extremely large for a large molecule, is nevertheless finite and bounded. In other words, any solution (structure) must always be theoretically possible.

My concern about climate models (apart from them being orders of magnitude more complex) is that they must be unbounded if essential information is not to be lost during the integration, as the end state is unknown. Therefore, the number of possible states (of all of the variables) tends to infinity very quickly. I realised that such models could not possibly function in an unbounded manner and therefore sought and found references to filtering and "re-setting". This did not surprise me as I knew something like this had to be there in order to stop the models becoming unstable. This clearly involves a loss of information, to the extent that (in my opinion) the model loses any predictive power.

I think that your comparison of the entropy of MD outcomes and temperature outcomes is not valid, as temperature is just one dependent variable in what quickly becomes an almost infinite number of possible states.

I am trying (inexpertly) to explain this in a mathematical way, whereas I think that Rhoda gave a much better explanation in her post. I simply want to understand if mathematics can explain something that I, and others, intuitively feel to be right. Having said all of that, do you think that the concept of entropy is a valid, and sensible, way to measure the performance of climate models?

Cheers, Roger

May 23, 2012 at 10:48 PM | Unregistered CommenterRoger Longstaff

Hi Richard, sorry bit late picking up on this:-

“Sawyer (1972) used such a method to estimate a rate of warming of 0.2C per decade for the next 30 years. The actual rate of warming in the HadCRUT4 dataset was about 0.17C over roughly that period (so Sawyer's estimate was not too bad IMHO!)”

Not too bad? For Sawyer to have made his predictions against a metric to be devised some 40 years into the future! IMHO quite amazing!

So is HadCRUT4 now "the" metric? Yet has no data past 2010?

May 23, 2012 at 11:07 PM | Registered CommenterGreen Sand

Jeremy, you describe applications of MD to protein folding, but this is surely a more difficult problem than simulating the broad elements of the climate system. If you're interested in MD simulation of the folding of even the smallest "protein" domain, using an all-atom representation in an explicit solvent, then you need to run your simulation for a significant number of microseconds and you need pretty massive computational resources. And since protein folding is essentially a phase change with a rather tiny folded-state free energy [ΔG(F-U), a few kJ mol-1] relative to the hundreds or thousands of kJ mol-1 of the folded and unfolded states themselves, simulating protein folding is still on the cusp of being doable. In fact pretty much all computational predictions of folded-state structures of proteins are made using some combination of homology modelling, threading or suchlike, as I'm sure you are aware.

Simulating climate and the effects of enhanced greenhouse forcing involves a much cruder assessment: the effect of the top-of-the-atmosphere radiative imbalance on the accumulation of energy in the climate system. If the greenhouse effect is enhanced, thermal energy is driven into the climate system. The questions then relate to the rate and equilibrium extent of energy accumulation and its distribution through the climate system. At this level climate simulations are much easier than simulating protein folding. We could then start quibbling over how successful climate simulations are or might be in simulating the finer-grained elements of the system and its response to enhanced radiative forcing (I expect my conclusions on this might not be so different from yours).
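For illustration only, a toy zero-dimensional energy balance of the sort implied above might be sketched as follows; the forcing, feedback parameter and heat capacity are invented round numbers, not values from any actual climate model:

```python
# Minimal zero-dimensional energy balance: C dT/dt = F - lam * T,
# where F is the radiative forcing (W m-2), lam the net feedback
# parameter (W m-2 K-1) and C an effective heat capacity (J m-2 K-1).
# All numbers below are illustrative assumptions, not model output.
F   = 3.7      # forcing for roughly doubled CO2 (W m-2)
lam = 1.2      # net feedback parameter (W m-2 K-1)
C   = 8.0e8    # effective ocean mixed-layer heat capacity (J m-2 K-1)

dt    = 86400.0 * 30   # 30-day time step (s)
years = 200
T = 0.0
for _ in range(int(years * 365.25 * 86400 / dt)):
    imbalance = F - lam * T   # top-of-atmosphere imbalance (W m-2)
    T += imbalance * dt / C   # accumulated energy warms the system

print(f"warming after {years} years: {T:.2f} K")
print(f"equilibrium warming F/lam:   {F / lam:.2f} K")
```

The point of the sketch is just that the rate of accumulation is set by the imbalance and the heat capacity, and the equilibrium extent by F/lam; everything interesting (and disputed) is buried in how well lam and the distribution of energy are represented.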

Richard Betts made a very sensible suggestion earlier in the thread that we might describe what we consider to be the value of models, and despite his relentless helpfulness on this thread, you and I are the only ones who have bothered to address this (you somewhat indirectly in relation to protein dynamics, and I more directly). I couldn't agree with you more that simulations help to provide a context for doing experiments that one might not otherwise have thought of.

But climate simulations are no different in this regard. The crudest climate models predicted a raised tropospheric temperature in response to enhanced radiative forcing, and were proved correct in the face of 15 years of misanalysis of satellite tropospheric temperature measures that indicated little tropospheric warming. Models have long predicted moistening of the troposphere as raised temperatures promote enhanced absolute humidity. Like tropospheric warming that might be somewhat of a "no-brainer", but at least one distinguished atmospheric scientist was asserting through the 1990s that the upper troposphere would dry in response to greenhouse-induced warming and thus constitute a negative feedback. The models have been proved correct again. Currently there is an apparent anomaly over the tropics, where the upper troposphere should (in models) warm more than empirical measures suggest it has. It's an open question whether the models will be proved correct here too; however, the fact that we have models with predictive power means that an empirical focus on this particular issue is likely to be of great value to our understanding.

So climate models are useful, just like MD simulations of protein folding and dynamics. I listed a whole load of early predictions from models that have come true, and sketched citations of the early work in an earlier post (May 22, 2012 at 8:48 PM | chris). These have been met with accusations of "post-hoc"-ness when they clearly aren't. There weren't many people doing climate simulations in the 1960s through 1980s, so there aren't many papers to inspect to assess the variability of predictions. All climate simulations of the effect of enhanced greenhouse forcing predicted accumulation of energy in the climate system, raised surface and tropospheric temperatures, enhanced water vapour concentration and, where this was specifically assessed, high-latitude Northern hemisphere amplification of warming and delayed Antarctic warming. Early climate modelling predicted around 0.6 °C of warming by 2000, and was broadly correct, because these crude elements of the climate system are eminently modelable; it wasn't a result of some sort of statistical chance.

Some things are expected to be accessible to simulation (e.g. the crude properties of protein dynamics – unfolded proteins easily display hydrophobic “collapse” in MD simulations and it’s easy to simulate thermal protein unfolding), some things are not easy (full atom MD simulation of the folding of ubiquitin). There’s no point in having a discussion of these things unless one recognises what is and isn’t accessible!

May 24, 2012 at 12:04 AM | Unregistered Commenterchris

Roger,

I was using entropy in a slightly different way, basically concerning the amount of information encoded in a particular prediction. I'm not sure I follow your discussion of entropy in climate modelling, though I did note your question about filtering: in my mind at least, this is not an issue. If you assume, as climate modellers do, and as Paul M described very nicely, that climate involves some rapid but essentially stationary 'noise' and an underlying, slow-moving signal, then to identify the latter in a simulation you may want to filter out the high-frequency components of your calculated time series. I'm sure that there are more and less sensible ways to do this averaging.
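As a rough sketch of the sort of filtering I mean (the trend, noise level and window length are all invented for illustration, and a running mean is only the simplest possible low-pass filter):

```python
import numpy as np

rng = np.random.default_rng(0)

years  = np.arange(150)
signal = 0.01 * years                       # slow underlying trend (illustrative)
noise  = rng.normal(0.0, 0.15, years.size)  # rapid, roughly stationary 'noise'
series = signal + noise

# Simple running mean as the low-pass filter; the window length is a choice,
# and different (defensible) choices give somewhat different pictures.
window   = 11
kernel   = np.ones(window) / window
filtered = np.convolve(series, kernel, mode="same")

# The filtered series should track the slow signal far better than the raw one.
print("raw rms error:     ", np.sqrt(np.mean((series   - signal) ** 2)).round(3))
print("smoothed rms error:", np.sqrt(np.mean((filtered - signal) ** 2)).round(3))
```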

chris:

If you look at my post, you'll see I was careful not to use the word 'folding' in the context of proteins. I talk about predicting elements of structure - e.g. the binding mode of a ligand. I'm aware that folding is very tough, and in fact I linked to the CASP project (note the very apposite URL) in a similar context here on BH some time ago. The point is that such predictions are, in a quantitative sense, more 'risky' than many climate predictions. This applies to MD simulation predictions (including things that are not full protein-folding predictions) also. The reason for the greater riskiness is that the extent to which the space of possible outcomes is narrowed down needs to be much greater than it does for, say, predicting the global mean temperature anomaly. By the way, did you concede to Simon that you'd been making a bit of a meal of the absolute T vs. anomaly thing? The latter is defined based on the former, so if you argue that the former is meaningless, you have a problem with the status of the latter.

Also, you write:

"The fact that early climate modelling predicted around 0.6 oC of warming by 2000 was broadly correct because these crude elements of the climate system are eminently modelable."

That may be true - but it may not be. If it is so easy to model the crude features of climate, then why do predictions of future temperature changes vary by so much (even within a given emissions scenario, so don't try to bring in the projection/prediction argument!).

May 24, 2012 at 8:08 AM | Registered CommenterJeremy Harvey

O.K., fair enough Jeremy; we probably agree that (i) the broad elements of the climate system relating to energy balance under radiative forcing are more amenable to computational modeling than simulating protein folding (real-world observations tell us so); and (ii) the underlying usefulness of both computational MD models and climate models is that they are devices for encapsulating, testing and improving our knowledge, especially in their ability to focus experiment and empirical observation on interesting areas where real-world measures and models either agree or differ, and to make predictions about real-world phenomena, whether these involve calculating the dissociation constant of drug binding to an enzyme (bloody difficult) or determining the likely latitudinal dependence of changes in precipitation patterns in a warming world (broadly speaking, very much easier). If we agree, we've probably answered Richard Betts' question quite well!

Simon’s assertions re “global temperature”. Not sure what to concede here; Richard Betts and I are right (I gave some rather detailed descriptions of the problem in posts above) and Simon is wrong. But since Simon is addressing this point by bullying rather than explanation, it’s not fruitful to engage with him on this.

However, if you and Simon are adamant that global temperature has an important meaning, why don't you tell us what it is (e.g. is it 14.2 °C, 14.7 °C, 15 °C, 15.3 °C or 15.7 °C)? Does it involve the troposphere (in which case it's much cooler)? Please tell us what the global temperature is, and why!

Anyway, I thought this was a rather well-understood issue, but maybe not. Although the global temperature anomaly is based on absolute temperature measurements, it certainly isn't based on first calculating a "global temperature" from which the anomaly is subsequently derived. The global temperature anomaly is built from a large set of discrete anomalies from site-specific absolute temperature measurements, and the local, regional or global anomaly is determined by some average of these, involving elements of area-weighting and so on. There are lots of very good reasons why this is done, some of which I showed in a couple of posts above.
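A toy sketch of that procedure, with invented station values and a crude cosine-of-latitude weighting standing in for proper gridding, might look like this:

```python
import numpy as np

# Each station's temperature is converted to an anomaly against its own
# baseline climatology, and the "global" anomaly is an area-weighted average
# of those anomalies. Station values below are invented for illustration.
stations = {
    # name: (latitude, baseline mean for this month, observed this month)
    "tropical_site": ( 5.0, 27.3, 27.8),
    "midlat_site":   (48.0, 11.1, 11.9),
    "arctic_site":   (75.0, -8.4, -6.9),
}

weights, anomalies = [], []
for lat, baseline, observed in stations.values():
    anomalies.append(observed - baseline)     # site-specific anomaly
    weights.append(np.cos(np.radians(lat)))   # crude area weighting

global_anomaly = np.average(anomalies, weights=weights)
print(f"area-weighted global anomaly: {global_anomaly:.2f} degC")
# Note: at no point is a single "global absolute temperature" required.
```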

As far as models go, the aim presumably is to account properly for the energy balance and for the spatial distribution of thermal energy, such that modeled local or regional temperatures map properly onto real-world measures. Is there a problem with this, model-wise, in your understanding?

Of course Richard and I could be wrong, and "global temperature" or "global average temperature" might have some fundamental importance. If so, I for one would like to have this conveyed by explanation and evidence rather than assertion and bullying!
;-)

Re your last point: of course there is still uncertainty in the Earth's response to enhanced greenhouse forcing (climate sensitivity is still poorly bounded, between 2 and 4.5 °C), and part of this uncertainty is contained in the temporal evolution of the accumulation of thermal energy. So the crude predictions of future warming from the 1970s were no doubt blessed with a certain amount of good fortune! But the accumulation of energy, and consequent effects on surface temperatures under enhanced radiative forcing is a "no-brainer", and so it's not very surprising that they were broadly correct (and of course it is why we consider that warming will continue, uncertainties notwithstanding).

May 24, 2012 at 9:39 AM | Unregistered Commenterchris

May 24, 2012 at 8:08 AM | Jeremy Harvey

If it is so easy to model the crude features of climate, then why do predictions of future temperature changes vary by so much (even within a given emissions scenario, so don't try to bring in the projection/prediction argument!).

Hi Jeremy,

Because of uncertainties in the feedbacks, which have a greater impact further into the future because they have longer to propagate.

Comparing a case of strong positive feedbacks with one of weak positive feedbacks, after 30 years they'll be a bit different but after 100 years they'll be much more different.

This is illustrated quite nicely in this figure from IPCC AR4 which shows the "plume" of uncertainties expanding the further into the future you go, even within an individual emissions scenario.
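Purely as an illustrative sketch (not Met Office code; the forcing, feedback parameters and heat capacity below are invented round numbers), a toy zero-dimensional energy balance shows how the spread between weak and strong feedbacks grows with the time horizon:

```python
def warming(lam, years, F=3.7, C=8.0e8, dt=86400.0 * 30):
    """Euler integration of C dT/dt = F - lam*T; warming (K) after 'years'."""
    T = 0.0
    for _ in range(int(years * 365.25 * 86400 / dt)):
        T += (F - lam * T) * dt / C
    return T

# Two illustrative net feedback parameters (W m-2 K-1): a smaller lam means
# stronger positive feedbacks and hence greater eventual warming.
weak_fb, strong_fb = 1.6, 0.9

for horizon in (30, 100):
    lo = warming(weak_fb, horizon)
    hi = warming(strong_fb, horizon)
    print(f"after {horizon:3d} years: {lo:.2f} K vs {hi:.2f} K "
          f"(spread {hi - lo:.2f} K)")
```

With these invented numbers the two cases differ by well under a degree at 30 years but by well over a degree at 100 years, which is the qualitative point about the expanding plume.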

Cheers

Richard

May 24, 2012 at 10:28 AM | Unregistered CommenterRichard Betts

May 22, 2012 at 8:59 PM | Simon Anthony

I didn't deny that there'd been "correct" predictions (I explicitly said so). I pointed out that you'd selected the "correct" predictions post-hoc and missed out the much greater number of incorrect predictions. It's a common failing, everyone is prone to remembering when their predictions were right rather than when they were wrong, so not really anything to be especially ashamed of. But it's important at least to try to get beyond these weaknesses. If you spend a few days listing all the wrong predictions made by climate scientists, I think you'll benefit enormously.

Please can you provide links to other scientific papers from the 1970s that made quantitative estimates of a change in global mean temperature over the coming 30 years or so?

It sounds like you think people were randomly giving out predictions of all sorts of change, happy in the knowledge that one of them would be right purely by chance. This is not the case. Sawyer made an estimate based on physical understanding, and it turned out to be a reasonable estimate, which lends support to his understanding.

If someone else made a "rival" prediction of, say, cooling over the same period, they were clearly wrong, and whatever understanding they had was incorrect.

However I'd be interested to see the other published papers on other predicted rates of warming, please can you provide them?

Thanks!

Richard

May 24, 2012 at 10:41 AM | Registered CommenterRichard Betts

chris,

We certainly agree more than you might have expected when you first started coming here. Certainly we agree on the heuristic value of modeling. We still disagree quite a bit on the confidence we should have in genuine predictions made by models, and on the degree to which policy should be predicated on the reliability of such predictions.

"Richard Betts and I are right and Simon is wrong." As you wish. What was that about arguing by assertion? Some aspects of the value and meaning of a temperature average and a temperature anomaly are obvious, some are not. It would show willing if you tried to understand what Simon's point was - I can assure you it is not (at least, not trivially) wrong.

"But the accumulation of energy, and consequent effects on surface temperatures under enhanced radiative forcing is a "no-brainer"" - Again, as you wish...

May 24, 2012 at 10:46 AM | Registered CommenterJeremy Harvey

May 22, 2012 at 6:15 PM | Rhoda

And here it comes. Richard will tell us they are not used for political advocacy but for scientific investigation. Maybe that they are not really accurate in any sense, but indicative within a range of uncertainty. That if this year's model is just so good, give us a few more quid and a bigger computer and we will get better. All good stuff. But people of a particular opinion are using model results in order to advocate serious changes in policy. Now Richard, are the models good enough for that, or would you like a few more quid and a bigger computer?

Hi Rhoda

Actually I was going to say that there are three purposes of climate modelling.

The first purpose of climate models is to inform mitigation policy, by estimating the future levels of global warming several decades into the future under different emissions scenarios.

So far it looks as if they are fit for this purpose, because projections of change on timescales longer than a decade seem to be holding up.

The second purpose is to inform adaptation policy, by making forecasts of regional climate change and natural variability in the nearer-term (the next few years to a couple of decades). Currently the models are rather less useful for this, as uncertainties in regional changes (especially rainfall) are huge. The models are not yet fit for the purpose of informing any decision which requires accurate information on regional climate variability, so we specifically do not promote their use for such decisions. However, this is what we are working towards, with the help of "a few more quid and a bigger computer" :-)

The third purpose, as you say, is to help improve understanding of the climate system, by providing a way in which theory (as expressed in the models) can be tested against observations. As well as being important in its own right, this also helps to contribute to the first two purposes.

Cheers

Richard

May 24, 2012 at 11:00 AM | Registered CommenterRichard Betts

Thanks Jeremy, I think the difference is that I was trying to equate a loss of entropy with a loss of valid information (in other words, deviation from reality), rather than with the total number of states required to describe a system.

I agree that filtering is a valid technique if you are sure that you are filtering out random noise. But with such a complex and ill-defined system, how can you be sure? Who defines the bandwidth? Also, is it valid to re-start a model that has (for example) violated conservation of mass?

These questions may not have answers, but thanks for your comments.

May 24, 2012 at 11:11 AM | Unregistered CommenterRoger Longstaff

"Richard Betts and I are right and Simon is wrong." As you wish. What was that about arguing by assertion?"

Well yes. The difference of course is that I gave some careful explanation of the issue that underlies my statement (see my posts of May 22, 2012 at 6:51 PM | chris and May 22, 2012 at 9:25 PM | chris, for example). In any case this is a rather well-established point. At some point we should recognise what are and aren't things worth arguing over!

"But the accumulation of energy, and consequent effects on surface temperatures under enhanced radiative forcing is a "no-brainer"" - Again, as you wish...

It's not so much what one "wishes"...it's what the evidence and our understanding give us confidence in expecting. Of course we might have some particularly large contingent events (a series of massive volcanoes; very weird solar phenomena) that intervene to oppose the enhanced greenhouse effect, but I find it very difficult to comprehend a situation in which a large accumulation of thermal energy in a system doesn't make it get warmer. The million-dollar questions relate to how much and how fast...

May 24, 2012 at 11:12 AM | Unregistered Commenterchris

Richard,

You say that model output over timescales up to a decade, and for regional variations, is not yet fit for purpose, but that "projections of change on timescales longer than a decade seem to be holding up". You then use this premise to imply that models running over longer timescales are fit for the purpose of "inform(ing) mitigation policy, by estimating the future levels of global warming several decades into the future under different emissions scenarios."

This means that the only models supposedly fit for purpose are the ones constructed decades ago, without supercomputers. Is this correct?

Also, is it not just as likely (as GHG effects) that the 0.7 degree warming that we have seen over the last seven decades or so is simply part of a natural cycle of variation between LIA and MWP conditions?

May 24, 2012 at 11:25 AM | Unregistered CommenterRoger Longstaff
