Sunday, May 20, 2012

Myles Allen on Berlin's two concepts of liberty

Simon Anthony sends this report of Myles Allen's recent lecture at Oxford.

Myles (I think he'd prefer I call him "Myles" rather than Prof Allen, as most people in the audience seemed to refer to him thus) is Professor of Geosystem Science in the School of Geography and the Environment and heads the Climate Dynamics group in the Physics Department, both at Oxford. His main interest has been the attribution of aspects of climate, particularly "extreme events", to human activities. Recently he's been working on how to use scientific evidence to "inform" climate policy.

The lecture's title comes from Isaiah Berlin's contrast between "negative" and "positive" liberty. These can be (slightly) caricatured as, respectively (and perhaps contrarily), freedom from constraints (eg tyranny) and freedom to do particular things (eg vote for the tyrant). Amongst other things, Berlin was concerned about the possible abuse of positive liberty, in which the state prescribes what is permitted rather than ensuring the conditions in which individuals are free to make their own choices.

Myles contrasted two extreme views of how to address climate change: either continue as currently, so that 0.001% of the world's population choose to benefit from emissions of CO2 and the poorest 20% involuntarily suffer the consequences, or halt emissions and so demolish the capitalist, liberal, market system. In conversation afterwards he accepted this was a rhetorical flourish rather than a genuine choice. 0.001% of the world's population is ~70,000. He said this number was those who profited directly from extraction and burning of fossil fuels. But it omits shareholders or citizens who benefit from taxes paid by oil companies etc. And it omits those who, for example, drive or keep warm or light their houses. If these people were included, the number of beneficiaries would likely be rather more than the number suffering. So it seems more than a little disingenuous to characterise the "sides" in these terms. In any case, rather than have states impose strict controls, Myles wanted to investigate means by which emissions could be voluntarily curtailed and suffering compensated through negative liberty.

So, he says, assume that IPCC's predictions are correct but that it'll be 30 years before confirmation. What measures can be taken to reduce CO2 emissions? Offsetting doesn't work because what counts is cumulative emissions, not the rate. Centrally imposed limits would potentially mean big opportunity costs as beneficial activities might not be undertaken. Is there instead some means by which the impacts can be traced back to CO2 emissions and the originators made to pay (cf Deepwater Horizon)?

An essential component of any such scheme is that harm caused by climate changes should be correctly attributed to fossil fuel CO2 emissions. If that were possible then, on a pro rata basis of some kind, those companies responsible for the emissions and which had directly benefitted from extraction and burning of fossil fuels (oil, coal, gas, electricity, car manufacturers, airlines...) could be penalised and the proceeds distributed to those who were judged to have suffered.

Now Myles (I think somewhat inconsistently) seemed to accept that climate predictions for 30 years into the future were unverifiable, unproven and unreliable (perhaps not surprising when, as Richard Betts confirmed in another thread, even when the Met Office has the opportunity to assess its 30+-year temperature anomaly predictions, for example in forecasts made in 1985, it chooses not to do the tests. One can only speculate as to why that might be.) He also accepted that the public might justifiably not believe the assurances of climate experts, particularly given the patchy record of mighty intellects in predicting the future. The examples he gave were Einstein post-war seeing imminent disaster unless a world government was immediately set up; a Sovietologist who in the mid-1980s confidently predicted the continuing and growing success of the Soviet Union; 30-year predictions of US energy use which turned out to be huge overestimates; and Alan Greenspan's view that derivatives had made the financial world much more secure. (I'd have been tempted to add Gordon Brown's (or George Osborne's) economic predictions but time was limited.) There was very little reason, he suggested, to expect people to believe in the extended and unfeasible causal chain leading to predictions of temperatures in 30 years' time.

Instead Myles proposed that the frequency and pattern of "extreme" events was now well enough understood that the effect of CO2 emissions could be reliably separated from natural variations. He gave various examples of how models had been validated: the extent of human influence on the European heatwave of 2003 has been "quantified"; the Russian heatwave of 2010 was within the range of natural variation; model predictions of annual rainfall in the Congo basin matched uncannily well the "observations" (Myles himself initially doubted the extraordinarily good match, although he now accepts it's genuine. However, the "observations" weren't all one might expect because conditions for meteorologists in the Congo are understandably difficult, so there aren't any actual measurements. Instead an "in-fill" procedure was used to extend readings from safer locations to the Congo basin. I asked whether this agreement between a model and, um, another model was really a good test of either. Myles assured me that both models were reliable and show good agreement with measured data in, for example, western Europe. Still, an odd way to illustrate reliability of model predictions.).

So although it wasn't possible reliably to predict climate to 2050, current near-term regional forecasts may be good enough to show that the probability of extreme events was changed by CO2. In any case, the people who believe they've been adversely affected by climate change are free to take legal action against the companies they believe are responsible. Myles foresaw such litigation growing as the effects of climate change became more apparent.

An obvious question arises, rather like the "dog that didn't bark": if the evidence for the effect of AGW on extreme events is as strong as Myles and others claim, why haven't class actions already been brought, particularly in the US? "Ambulance-chasing" lawyers aren't renowned for their reticence, but so far there has been no action of great significance. I don't think it's wild speculation to suggest that lawyers have examined possible cases but haven't yet thought the evidence strong enough to make it worthwhile proceeding. Of course, at some stage such cases will come to court, and then Myles may find that his hope that they'll change the "climate" of debate will cut both ways. If a major class action against, say, oil companies, claiming compensation on the grounds that the 2003 European heatwave was due in part to CO2 emissions, were brought and failed, it would be a major setback to hopes for international laws to limit further emissions. While litigation won't advance science, it could be very politically significant, as well as entertaining, to have the arguments for AGW tried in court.

Finally, having been to three of the Wolfson lectures on climate change, I'd like to add a couple of observations. First, although all the speakers talked about the evidence for AGW, not one of them mentioned hockey-sticks. Stocker came closest when he said that current temperatures were the warmest for 500 years but didn't venture an opinion on the medieval warm period. I wonder whether it's too much to hope that the more scrupulous climate scientists are distancing themselves from the petulant antics and inept science of hard-core "Team" members. And second, two of the three speakers (Wunsch and Allen) said that there was little reason for people to believe that 30-year climate predictions were reliable. So perhaps the better climate scientists will stop relying on magical trees and statistics to make up the past and dubious models for scary futures. Instead they might try to do what Myles advocates and concentrate on shorter term understanding of the climate which might at least be testable.


Reader Comments (204)

May 24, 2012 at 9:39 AM | chris

Sorry for the delayed response - busy day. I'm in something of a hurry just now but, since you've accused me of bullying, I'll find time to respond.

"Simon’s assertions re “global temperature”. Not sure what to concede here; Richard Betts and I are right (I gave some rather detailed descriptions of the problem in posts above) and Simon is wrong. But since Simon is addressing this point by bullying rather than explanation, it’s not fruitful to engage with him on this."

You don't give examples so I shall have to try to imagine what you might mean. For example, perhaps you're upset that I said you were being disingenuous and desperate in your increasingly tortured attempts to insist that global temperature was a "meaningless concept" (although I notice promising signs that you're moderating your views in your most recent posts...). I said that only because your arguments, in the face of directly contradictory evidence and argument, were disingenuous and desperate. If you're upset by such a description, either turn down your sensitivity or else improve your arguments.

In an earlier post, you wrote...

"Please...you've completely misunderstood again Simon. Not sure there's much point in responding if you can't be bothered to read what others write, or to comprehend trivial and rather well established concepts like a temperature anomaly.

Try reading my post again. If I describe the essential problem re "global temperature" by illustrating the altitude-dependence of Earth temperature, and you interpret this to mean that the problem arises from the sparseness of temperature sampling, then there is something seriously wrong with your reading comprehension."

Now if I were over-sensitive, I might interpret those as deliberately insulting, even bullying, remarks; certainly gratuitously and unnecessarily offensive. But I don't. It seems to be just a fairly standard rhetorical device adopted by someone who's losing an argument. As such I see it as part of the general to-and-fro and, inasmuch as it shows you've not got much to argue with, it rather encourages me that my point of view is correct (or at any rate that you're unable to find anything significant to dispute).

So you accuse me of bullying for merely factual statements while yourself attempting to insult and belittle me. I'm never quite sure of psychological terms (they're nearly as hard to pin down as climate scientists' arguments) but I believe there's a concept called "projection" which might be relevant.

I'll revert on the substance of your comments later.

May 24, 2012 at 1:46 PM | Unregistered CommenterSimon Anthony

May 24, 2012 at 11:25 AM | Roger Longstaff

This means that the only models supposedly fit for purpose are the ones constructed decades ago, without supercomputers. Is this correct?

Hi Roger,

Ha! Very good, I like your style, follow the logic of the argument and see where it goes...

But no, it's not correct, because the central principles in modern models run on supercomputers are still firmly grounded in the basics established in the early models. For example, Manabe and Wetherald (1967), which Sawyer (1972) used for his year-2000 warming estimate, is still a seminal paper.

Also, is it not just as likely (as GHG effects) that the 0.7 degree warming that we have seen over the last seven decades or so is simply part of a natural cycle of variation between LIA and MWP conditions?

No. We know that the greenhouse effect exists and that the concentration of greenhouse gases in the atmosphere has increased over the past century or more due to human action, so it is logical that the greenhouse effect has increased. Satellite measurements show that the Earth's radiation budget is being perturbed in a way consistent with this.

Also, although the large-scale temperature anomaly at the MWP is highly uncertain, the current evidence seems to suggest that the northern hemisphere at least is unlikely to have been as warm as now. This doesn't rule out the possibility that it was as warm as now, but it doesn't support your suggestion that natural variability is "just as likely" to be the cause of recent warming.

Cheers

Richard

May 24, 2012 at 4:44 PM | Registered CommenterRichard Betts

A further thought Jeremy (and Paul M). You say: "If you assume, as climate modellers do, and as very nicely described by Paul M, that climate involves some rapid but essentially stationary 'noise', and an underlying, slow-moving signal, then to identify the latter in a simulation, you may want to filter out high-frequency components of your calculated time-series"

But where does the noise in the model come from? The modeller starts the run with a dataset and a set of algorithms of his choosing, so how can a numerical model generate high-frequency components of random noise? Moreover, as the Nyquist sampling theorem dictates that we must sample at twice the rate of the highest frequency component in order not to lose information, any high-frequency filtering of the model's own output MUST result in a loss of information, or an inevitable deviation from "reality" (in my terms, a loss of entropy).
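The filtering point can be sketched with a toy example (the signal, window length and moving-average filter here are entirely my own invention, not anything taken from an actual climate model):

```python
import math

# Toy "model output": a slow trend plus a fast oscillation (period 4 samples).
N = 200
slow = [0.01 * t for t in range(N)]                   # underlying slow signal
fast = [math.sin(math.pi * t / 2) for t in range(N)]  # high-frequency component
series = [s + f for s, f in zip(slow, fast)]

# Low-pass filter: a simple moving average over a window of 8 samples.
w = 8
filtered = [sum(series[i:i + w]) / w for i in range(N - w + 1)]

# Compare the filtered series with the slow signal at the window centres:
# the fast component has been removed and cannot be recovered afterwards.
residual = [filtered[i] - slow[i + w // 2] for i in range(len(filtered))]
print(max(abs(r) for r in residual))  # small (~0.005): only the trend survives
```

Whether that loss matters depends, of course, on whether the discarded high-frequency part carried information you cared about, which is exactly the question at issue.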

Do you agree?

May 24, 2012 at 4:51 PM | Unregistered CommenterRoger Longstaff

Thanks Richard,

I can see the logic of your first point - fair enough.

However I cannot accept anything further in your statement. Can you give a reference for your statement "Satellite measurements show that the Earth's radiation budget is being perturbed in a way consistent with this"?

Furthermore, I distrust ice core methodology concerning CO2 levels, as it completely disagrees with historical chemical measurements (as reported by Beck). I also distrust the "hockey stick" analysis of temperature records (that eliminated the LIA and the MWP).

To me:

LIA = ice fairs on the Thames
MWP = vineyards in Yorkshire

As the LIA seems to be a repeat of the "dark ages" and the MWP seems to be a repeat of the Roman warm period, I would rather trust contemporary historical records than pine cones, or whatever.

But I continue to enjoy the discussion!

Cheers, Roger

May 24, 2012 at 5:09 PM | Unregistered CommenterRoger Longstaff

Chris: I should perhaps have replied at greater length, but you come across to me as rather peremptory. Hence my "as you wish".

I agree with you that a global temp anomaly is more robust than a global average temperature with respect to how the averaging is carried out. I'm also happy to concede that my wording suggested I thought that to compute an anomaly you must first compute an average temp. I don't think that.

But you are claiming that an average - carried out over some defined domain - has less physical meaning than an average anomaly. That can't be true: neither is directly measurable, and both are derived from a set of temperature measurements.

And the key point is this: the climate models have temperatures in them. It is not an unreasonable question to ask how those temperatures, suitably averaged regionally or globally, compare to the average of temps derived from experiment. By using anomalies only, you could have models predicting mean temps at sea level in the UK of 40 degrees Celsius in 1970, and 40.6 now, being described as accurate. As I understand it, Simon was asking about this, and it seems a fair question to me!
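My 40-degree example can be made concrete with a couple of invented series (the numbers below are mine, purely illustrative):

```python
# Hypothetical temperature series (degrees C): the "model" is 26 degrees too
# hot in absolute terms, yet its anomalies match the observations exactly.
observed = [14.0, 14.2, 14.4, 14.6]   # plausible sea-level temps
model    = [40.0, 40.2, 40.4, 40.6]   # absurd absolute temps, same trend

def anomalies(series):
    """Anomalies relative to the first value, rounded to suppress float noise."""
    base = series[0]
    return [round(t - base, 2) for t in series]

print(anomalies(observed))  # [0.0, 0.2, 0.4, 0.6]
print(anomalies(model))     # [0.0, 0.2, 0.4, 0.6] - identical
# An anomaly-only comparison would score the 40-degree model as "accurate".
```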

May 24, 2012 at 6:00 PM | Unregistered CommenterJeremy Harvey

May 24, 2012 at 9:39 AM | chris

Again, I'm a bit short of time so apologies for brevity...

"Simon’s assertions re “global temperature”. Not sure what to concede here; Richard Betts and I are right (I gave some rather detailed descriptions of the problem in posts above) and Simon is wrong. But since Simon is addressing this point by bullying rather than explanation, it’s not fruitful to engage with him on this.

However if you and Simon are adamant that global temperature has an important meaning, why don’t you tell us what it is (e.g. is it 14.2 oC, or 14.7 oC, or 15 oC, or 15.3 oC, or 15.7 oC?). Does it involve the troposphere (in which case it’s much cooler). Please tell us what the global temperature is and why!"

Rather than a detailed rehearsal of the obvious points (principally that average temp anomaly is just a trivial rearrangement of an appropriate combination of average temps and that any interpolations for the latter must also be done in the former), since you asked for global temps, here are some estimated figures:

Land surface mean temp (1901-2000): 8.5 degrees C (sorry, don't know how to do those little circles)
Sea surface mean temp (1901-2000): 16.1 degrees C
Combined mean (1901-2000): 13.9 degrees C

They're from NOAA. They also have the average monthly figures if you're specially interested.
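The "trivial rearrangement" is easy to check numerically: averaging the station anomalies gives the same number as subtracting the average baseline from the average temperature. (The four station values below are invented for illustration.)

```python
# Four hypothetical stations: current temps and their 1901-2000 baselines (C).
temps     = [8.5, 16.1, 13.9, 10.2]
baselines = [8.0, 15.8, 13.5, 10.0]

def mean(xs):
    return sum(xs) / len(xs)

# Average of the per-station anomalies...
avg_anomaly = mean([t - b for t, b in zip(temps, baselines)])
# ...equals average temperature minus average baseline.
rearranged = mean(temps) - mean(baselines)
print(abs(avg_anomaly - rearranged) < 1e-9)  # True: the same quantity
```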

Now, if you'll just concede this bleedin' obvious point (please forgive my exasperation) that you simply made a mistake and that global average temp is a measurable - and measured - quantity, then perhaps we can go on to something more interesting, specifically the relevance of global average temps to testing models.

If you still think global average temp is "meaningless", please contact NOAA and let them know.

May 24, 2012 at 6:32 PM | Unregistered CommenterSimon Anthony

What do the models (or any individual model) say about the radiation budget at the surface? What do comparative measurements say? How does that ole downwelling IR vary according to the CO2 level, which varies sufficiently for a change in the IR to be measured, doesn't it? Is it not true that the change in outcome when there IS more downwelling back radiation will only have an effect at the low end of an asymptote, and therefore the 4am temp will be a little higher but it will all be gone by tomorrow? It's little things like that which I don't see being discussed. If shown they would provide far more of a proof than some unarchived ice core or some committed programmer's output on a supercomputer. Why is that area not the front line of climate debate, not the models all the way down, proxies all the way back nonsense and the dead Swede? Oh, albedo.

May 24, 2012 at 6:38 PM | Unregistered CommenterRhoda

Yes, that all seems pretty sensible to me Jeremy. As I said in a post yesterday:

"As far as models go the aim, presumably is to properly account for the energy balance and the distribution of thermal energy spatially such that modeled local or regional temperatures properly map onto real world measures."

But a global average temperature is not a very useful metric, because of (i) the difficulty of actually measuring this in the real world (I described my consideration of this difficulty in posts yesterday) (ii) the susceptibility of the global average to the particular sampling employed (a problem that is greatly attenuated by using the temperature anomaly), and (iii) the more particular problem in comparing modeled to real world measures that a single metric which can't easily be defined in the real world (global averaged temperature of some description) doesn't provide a particularly good test of the reliability of the model.

For example just as in your example of an apparently accurate change in temperature anomaly that might arise from a completely mis-modeled 40 oC sea level temp around the UK, so an apparent match of a modeled "global average temperature" of 15 oC (say) might arise from a model that has 30 oC temperatures in the Antarctic and an ice sheet covering the UK.

So I would expect that a good model would be one in which the energy distribution is such that modeled local or regional temperatures properly map onto the equivalent local or regional temperatures that are robust in real world measurements.

May 24, 2012 at 6:46 PM | Unregistered Commenterchris

Richard

You've mentioned the Sawyer paper from 1972 several times. I guess you missed the post in which I asked whether it was available from a source other than Nature as I don't have a subscription (and I confess rather resent having to pay to read a 40-year-old paper). I'd like to read it so let me know if it's otherwise available.

I've read some commentary about the paper and, as far as I can gather, Sawyer predicted a temp rise of 0.6 degrees by 2000. At first sight that looks interesting (although I'll have some further comments on that later). But, again from what I've read at second hand, Sawyer assumed that CO2 concentration would rise by 25% whereas it actually rose by only 13%.

So (roughly speaking) his estimated CO2 sensitivity seems to have been only half of the actual value. I've seen this apparently quite inaccurate result described as a "remarkable" prediction. I suspect that gullible media might even report the underestimate of sensitivity as things being twice as bad as we thought: from the actual increase in CO2, Sawyer would have predicted a rise of only 0.3 degrees. I, in my sceptical way, see it at face value as a reason to doubt the validity of the prediction method.

But as I said, I only have reported info to go on so I'd be interested in your comments (and a link to the paper if possible).

May 24, 2012 at 6:53 PM | Unregistered CommenterSimon Anthony

I guess we’ve pretty much flogged this subject into submission Simon! You did ask why the Met Office (like pretty much everyone else) presents anomalies rather than absolute temperatures. I’ve given you some of the reasons why “global temperature” or “global average temperature” isn’t a particularly meaningful concept, especially for assessing the reliability of a model. It’s a poorly specified metric, whereas the temperature anomaly is much better specified (as well as being far less susceptible to errors arising from poor sampling or breaks in records). If Richard Betts chooses to step in and tell me that I’ve got this hopelessly wrong then that will be great!

The NOAA web site your numbers come from (http://www.ncdc.noaa.gov/cmb-faq/anomalies.php) also addresses this problem. The global land surface, global sea surface and global surface temperatures you cite in your post aren't global temperatures, but are averages of sets (e.g. land or sea surface) of gridded temperatures further averaged over a 100-year period (1901-2000). They are used to specify a baseline from which evolution of temperatures (anomalies) can be measured. The anomalies are rather well specified and thus are a useful metric (at many scales: global, regional and local) for comparing with temperature anomalies from models.

If you read FAQ #7 on the NOAA site (url in para above) you'll see an explicit description of why anomalies are preferred over absolute temperature averages, and the reasons are rather similar to the ones I described in my posts a day or two ago.

May 25, 2012 at 8:19 AM | Unregistered Commenterchris

Hi Simon

Sorry, yes, I missed your earlier post. You can find a non-paywalled copy of the Sawyer paper here

(Sadly, Nature turned down my request to make this important paper open access, but Google Scholar is a wonderful thing!)

Yes Sawyer overestimated the CO2 rise, but:

1. He also overestimated the warming a bit - he estimated 0.6C, actual warming was 0.5C

2. He didn't account for other GHGs, which have also increased in concentration since the 1970s.

To be fair, he also didn't account for aerosols, which counteract some of the GHG effect. So yes, I agree with you that it's important not to think that he had perfect predictive skill, but nevertheless it is still useful to see that understanding of the first-order processes gave an estimate in the right ballpark.

Cheers

Richard

May 25, 2012 at 8:54 AM | Registered CommenterRichard Betts

...tumbleweed blew around the deserted thread...

At the risk of talking to myself, I'd like to add a few more thoughts on the discussions on models in this thread.

First, on the average temperature vs average anomaly debate. I raised the point because, some time ago, I briefly discussed the matter with Lucia Liljegren, who runs the "Blackboard" blog and knows rather more about the technicalities than I do. She said that, as far as she was aware, climate models which roughly matched historic temp anomalies were off by (this from memory, so I'm not sure) ~5 degrees in their calculations of average global temp. She also said that, when parameters were adjusted so as to get the temperature to agree with measurement, the predicted anomaly was way off. She didn't have details and I should say I've not been able to verify that elsewhere.

Chris's assertion that global temp average is "meaningless" is also found on the websites of various aggressive defenders of the validity of climate models, based on very similar (and equally odd) arguments. I'm sure it's just my sceptical nature coming through again but when people argue in misleading and disingenuous ways against something which is intuitively and rationally clear, I wonder why they go to such trouble...and whether ~5 degrees of error might have something to do with it.

Chris was concerned about the highly variable sampling of surface temperatures and whether that could be used to calculate a global average temperature. So here's a suggestion which addresses that concern and would also test the ability of models simultaneously to predict temperature value and anomaly.

The temp anomaly estimated by eg BEST used interpolation from weather station measurements and averaged those interpolated values to produce a global average anomaly. Climate models produce values for points on a regular grid, which can then be averaged. Chris (and others) think it may be a tricky process to average over the highly irregular ground station network to get a "meaningful" global average temperature.

So I suggest the modellers reverse the process: interpolate from the regular grid of their models onto the locations of the surface stations. Then you have direct point-by-point comparison of temp measurements and predictions and you can compare averages without having to worry about whether this is a "global average temperature" or not. Rather it's a straight test of whether the models can simultaneously predict the temperature and anomaly of the weather station network.
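A minimal sketch of that reversed comparison, assuming bilinear interpolation from the model grid to the station coordinates (the grid, stations and all numbers here are invented for illustration):

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a value at (x, y) on a unit-spaced regular grid."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return (grid[i][j]         * (1 - fx) * (1 - fy)
          + grid[i + 1][j]     * fx       * (1 - fy)
          + grid[i][j + 1]     * (1 - fx) * fy
          + grid[i + 1][j + 1] * fx       * fy)

# A 3x3 "model" temperature field (degrees C) on a regular grid:
model_grid = [[10.0, 11.0, 12.0],
              [11.0, 12.0, 13.0],
              [12.0, 13.0, 14.0]]

# Irregular station locations (in grid coordinates) with their measurements:
stations = [((0.5, 0.5), 11.3), ((1.2, 0.7), 12.1), ((0.3, 1.8), 12.0)]

# Point-by-point comparison: the model interpolated to where the stations are.
for (x, y), measured in stations:
    predicted = bilinear(model_grid, x, y)
    print(f"station at ({x}, {y}): model {predicted:.2f} C, measured {measured:.2f} C")
```

With model and observation sampled at the same points, the averages (of values or of anomalies) are then directly comparable, which is the whole point of the suggestion.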

Has anyone done that?

May 25, 2012 at 8:58 AM | Unregistered CommenterSimon Anthony

Jeremy

Yes, your example of anomalies seeming OK while modelled temps are way off is just what I was getting at. The tortured arguments that apologists for the models deploy in order not to have to say anything about the actual predicted temp (rather than anomaly) worry me.

I'll give an example of a parallel situation. A proton and a neutron don't weigh quite the same: a neutron is slightly heavier. In the 1950s, 60s and 70s people tried to calculate that mass difference using the models they then had of protons and neutrons. A lot of very clever people did a lot of work in ingenious ways and some came up with estimates which really weren't far off (although there's a paper from Feynman in which his hope is to get the sign right, let alone the magnitude).

These calculations were possible without needing to be able to calculate the actual proton and neutron masses - only the "anomaly" was attempted because there was at that time no known way of describing and calculating the strong interactions which determine those masses.

So, at risk of labouring the point, anomalies were modelled and more or less agreed with measured values but the model had no realistic way to calculate the absolute values.

Then, in the 1970s, a new theory of the strong interactions, QCD, was discovered. Although in principle QCD allowed calculation of proton and neutron masses, the sums seemed impossibly hard. But then, in the early 1980s, people realised that they could do the calculations in an approximate and statistical way using computers. Nowadays the calculations of masses and mass differences agree well with measured values.

The point of that excursion was that the early calculations weren't correct in that they were based on very poor models. They could be made to agree with the measured values of mass difference only by judicious twiddling of parameters. Only when there was a reliable theory which allowed masses to be calculated accurately was there an equally reliable theory for mass differences.

Along the same lines I'd suggest that a basic test of a climate model is that it should match measured values of temperatures, and not just anomalies.

May 25, 2012 at 9:32 AM | Unregistered CommenterSimon Anthony

Yup, I have no problem with an analysis of the sort you've just posted Simon. It's more or less the same as the one I've suggested several times on this thread, namely that one would expect that a good model broadly accounts for the energy balance and the distribution of thermal energy spatially such that modeled local or regional temperatures map reasonably well onto real world measures. However the reasons for preferring anomalies and not attempting to reproduce some sort of notional "global temperature" are not "tortured". I'm rather gratified that the descriptions of the value of anomalies I posted here (which seems obvious to me, but then "common sense" is often rather more personal than common! ) is pretty much exactly mirrored on the NOAA site you referred to.

May 25, 2012 at 9:48 AM | Unregistered Commenterchris

May 25, 2012 at 9:48 AM | chris

"I'm rather gratified that the descriptions of the value of anomalies I posted here (which seems obvious to me, but then "common sense" is often rather more personal than common! ) is pretty much exactly mirrored on the NOAA site you referred to."

Well, good for you but I don't think I or anyone else debated this point. What we were asking about was whether models are any good, or rather, how can we test their fitness. In this regard "global average temp" was simply an example (which is why I became exasperated in debating the red herring of its being "meaningless"). I contend that the models should pass the basic test of getting temp and anomaly simultaneously right. I've been told they can't do that.

Do you know whether that's the case, or at any rate how hard people have tested them on that simple measure?

May 25, 2012 at 10:08 AM | Unregistered CommenterSimon Anthony

Simon, you did ask a very explicit question:

May 21, 2012 at 5:39 PM | Simon Anthony

4: That brings up another question. Why not just predict global temp in degrees K rather than the anomaly, which could then be derived from the temp?

And now we are all very clear about the answer, yes?

Not sure how well our suggestions re assessing models by mapping modeled temperatures onto robust local or regional real world temperatures do. I expect we could find out with a little research.

May 25, 2012 at 10:28 AM | Unregistered Commenterchris

I had a quick look at Sawyer's paper. Typical of the style of the day, it's almost impossible to follow Sawyer's calculation. He might be using the value of 319 ppm for 1969 he quotes (which isn't actually correct) and a value of 400 ppm for year 2000 (although he does suggest a value of 375 in the paper too, which is close to reality). 319-400 ppm seems like the only way to get close to a 25% increase in CO2.

However using the climate sensitivity value of 2.4 oC (per doubling) from Manabe and Wetherald should then give 0.77 oC (equilibrium temperature rise; since Sawyer isn't considering ice albedo feedbacks, he's presumably considering the 2.4 oC per doubling to be something like a transient sensitivity, but that isn't totally clear either). If one uses the real CO2 values (329 in 1972 and 371 in 2000), the equilibrium temperature change should be around 0.4 oC. The only way to get near 0.6 oC is if we use two of the values of CO2 Sawyer cites (319 ppm for 1969 / 375 ppm projected for 2000). That gives an equilibrium temperature rise of 0.56 oC. But then 319-375 isn't a 25% increase. So it's entirely unclear…
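For what it's worth, the figures in the paragraph above follow from the standard logarithmic relation dT = S * log2(C2/C1), with S the sensitivity per doubling; a quick check (CO2 values as quoted above, small differences down to rounding):

```python
import math

def warming(c1_ppm, c2_ppm, sensitivity=2.4):
    """Equilibrium warming (C) for a CO2 change c1 -> c2, at the given
    sensitivity in degrees C per doubling of CO2."""
    return sensitivity * math.log2(c2_ppm / c1_ppm)

print(round(warming(319, 400), 2))  # ~0.78: 319 -> 400 ppm (a ~25% rise)
print(round(warming(329, 371), 2))  # ~0.42: the actual 1972-2000 CO2 values
print(round(warming(319, 375), 2))  # ~0.56: 319 ppm -> projected 375 ppm
```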

I said earlier in the thread that the crude predictions from the 1970s were blessed with a certain amount of good fortune, and without being clearer about how Sawyer actually calculated his temperature rise, maybe his study has to be considered in the light of a broad-brush expectation of a significant temperature response to rising CO2. However, Sawyer’s paper is interesting to me for how much was already known 40 years ago, and for the fact that the set of considerations for establishing the likely climate response to enhanced greenhouse forcing isn’t so different from today’s. The accurate monitoring of atmospheric CO2 was already established (not sure why Sawyer seems to have got this a little wrong!), as was the proportion of emissions retained in the atmosphere. The radiative properties of CO2 and their effect on atmospheric warming, the water vapour feedback and so on are pretty much as we consider them today…

May 25, 2012 at 10:35 AM | Unregistered Commenterchris

Thanks, Simon - I didn't know that story, and it is a good one. In computational chemistry, we call that sort of thing "getting the right answer for the wrong reason". It happens all the time! Knowing that it happens all the time teaches you to distrust the sense of confidence that can be instilled when a calculation gives the right answer, and to be on the lookout for hints that it was only right by accident. You can also call such things "Pauling Points", in honour of Linus Pauling, the famous theoretical chemist, who defended the judicious use of inaccurate theories. I notice, by the way, that the blog post I've linked to mentions QCD in this context.

May 25, 2012 at 10:41 AM | Registered CommenterJeremy Harvey

May 25, 2012 at 10:28 AM | chris

You really don't do your side of the debate any favours...

"Simon, you did ask a very explicit question:

"May 21, 2012 at 5:39 PM | Simon Anthony

4: That brings up another question. Why not just predict global temp in degrees K rather than the anomaly, which could then be derived from the temp?

And now we are all very clear about the answer, yes?"

Putting aside the snide insults that you meted out to me (while accusing me of bullying you), you said that global average temps were meaningless and demanded that, if I thought there was such a thing, I tell you what it was; I'd guess because you thought I wouldn't be able to. And then I showed you data where NOAA has estimated just that thing which you thought meaningless and non-existent.

And still you don't have the grace just to admit you were wrong. Your techniques are more usually used by politicians rather than scientists and I think that's why they annoy me so.

Nonetheless, I'll keep debating you, in the hope that reason and evidence will prevail over point-scoring rhetoric and bias, at least among those who are able to see the difference.

I hope to respond to your substantive points later today; first, fences to mend (literally, rather than with fellow-debaters). It's been interesting to debate these issues with you; I look forward to hearing better arguments from you in future.

May 25, 2012 at 10:50 AM | Unregistered CommenterSimon Anthony

So do those thermometer thingies which I understand are used at weather stations, what do they measure, temperature or anomaly? Anomaly against what, is the other question. You have to pick a base, thus implying that the base represents some kind of standard, or right temperature against which anomalies are measured. And far be it from me to even entertain the thought that the more manipulation of data the more likelihood that it will comply with one's hypothesis.

May 25, 2012 at 2:25 PM | Unregistered CommenterRhoda

Rhoda, yes. As someone here recently pointed out, the term 'anomaly' is a nice example of climate science spin, since it implies that something anomalous or unusual is going on.

Regarding climate models, this paper recently discussed at climate etc was quite an eye-opener as to how bad the models are at reproducing past temperatures over short times. Even when started off from observed data some of the models cool quickly by a degree or so. Because of this some models don't care about the actual temperature and just work with 'anomalies' (see my Q and Ed Hawkins' answer on that thread). This may relate to what chris is trying to say. Overall, the models overestimate the warming (surprise surprise).

May 25, 2012 at 3:20 PM | Registered CommenterPaul Matthews

Richard

If you're still reading...

This is more about how people interpret the findings of climate models rather than the models themselves

Obviously the 90% confidence interval for any model is wider than, say, the 50% CI, so any measurement is (equally obviously) more likely to lie within the former than the latter. So given two models, if a sequence of measurements lies within the 50% CI of one model, that model is doing better than another for which the measurements lie only within the (wider) 90% CI.

Of course, you know all this (when I've finished writing this, I shall teach my grandmother to suck eggs) but I was interested in how people in general interpreted CIs. So, in an entirely unscientific manner, when I took my daughters to school this morning, I asked 10 parents which of two models was better, one for which measurements were within the 90% CI or another for which those same measurements were within the 50% CI.

All of these parents have degrees of one kind or another, mostly arts, but one economist (wannabe science?). All but the economist thought that the model for which the measurements were within the 90% confidence interval was the better of the two.

Now I think this difference in the statistical meaning of "confidence" from the everyday meaning is well-known to people who use statistics routinely. However, journalists and politicians in general don't come from backgrounds which involve much use of statistics and in this they are similar to the parents I spoke to this morning. So I wonder if a climate scientist talking to a politician/journalist were to say that (I exaggerate for effect) temp anomaly measurements were within the 99.999...% confidence intervals of a model, said politician/journalist would be terrifically impressed whereas he/she ought to be rather underwhelmed.
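To put rough numbers on the comparison, here's a sketch of my own, using only Python's standard library and assuming a model whose prediction errors are normally distributed:

```python
from statistics import NormalDist

# Half-widths of the central 50% and 90% intervals for a model whose
# prediction errors are N(0, sigma), in units of sigma.
nd = NormalDist(mu=0.0, sigma=1.0)
half_width_50 = nd.inv_cdf(0.75)  # ~0.674 sigma
half_width_90 = nd.inv_cdf(0.95)  # ~1.645 sigma

# A measurement 1 sigma from the central prediction falls outside the
# 50% interval but inside the 90% one: "within the 90% CI" is the
# looser fit, even though 90 sounds more impressive than 50.
obs = 1.0
print(obs <= half_width_50)  # False
print(obs <= half_width_90)  # True
```

So a model whose measurements fall only within the 90% band is, if anything, fitting worse than one whose measurements sit inside the 50% band, which is exactly the opposite of the intuition my school-gate sample reported.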

As a result, unless special measures are taken to inform politicians of how CIs work, I wonder whether some might have misinterpreted the accuracy of model predictions. With hilarious consequences...

Do you have any first hand experience of journalists' or politicians' grasp of this important subtlety? (I call it a subtlety but it seems a deliberately perverse and confusing use of language. I wonder how it originated.)

May 25, 2012 at 7:16 PM | Unregistered CommenterSimon Anthony

May 25, 2012 at 10:35 AM | chris

Hi Chris

Sawyer cites two estimates of CO2 concentration for the year 2000: first he mentions 400 ppm as the original estimate, then 375 ppm as one that came out of a recent conference. The 400 ppm figure is the 25% increase he talks about.

Turns out the "conference" estimate of 375ppm was closer to what actually happened. But as I say, this ignored other GHGs and aerosols as well.

Cheers

Richard

May 26, 2012 at 12:01 PM | Registered CommenterRichard Betts

May 25, 2012 at 2:25 PM | Rhoda

So do those thermometer thingies which I understand are used at weather stations, what do they measure, temperature or anomaly? Anomaly against what, is the other question. You have to pick a base, thus implying that the base represents some kind of standard, or right temperature against which anomalies are measured. And far be it from me to even entertain the thought that the more manipulation of data the more likelihood that it will comply with one's hypothesis.

Hi Rhoda

Global mean temperature at sea level is estimated to be about 14.0C +- 0.5C.

Yes, of course, thermometers at weather stations measure temperature, not an anomaly. But turning those readings into an estimate of global mean temperature is subject to high uncertainty because of things like the stations being at different altitudes, which need to be corrected for on the basis of an estimate if you want to define "mean temperature at sea level", for example. And if you want "mean temperature across the surface of the Earth", that can't be obtained without measurements at more sites picking up more detail.

However, since the altitudes of weather stations are not changing (much), we can be confident that the differences over time are useful in estimating changes in the global mean.

Basically there is a systematic error in the estimate of global mean temperature, which does not matter if you are just interested in how it is changing over time.

Cheers

Richard

May 26, 2012 at 2:03 PM | Registered CommenterRichard Betts

May 26, 2012 at 12:01 PM | Richard Betts

Thanks Richard; that's what I surmised as well!

May 25, 2012 at 10:50 AM | Simon Anthony

You're still not getting it Simon, and your relentless presumption of bad faith isn't helping!

It's largely impossible to determine a "global average temperature" meaningfully. However we do need a metric to assess what happens to the climate system in relation to natural variability, in response to enhanced forcings, in assessing models and so on. Scientists have considered this at enormous length and determined that temperature anomalies are the most accurate, robust and useful means of dealing with earth temperature measures. No doubt you will find helpful explanations on all of the relevant sites (NOAA, NASA Giss, Hadcrut etc.).

Of course in using anomalies, one does need to have some sort of a reference point. In the NOAA analysis you linked to this is taken to be (somewhat, but not completely, arbitrarily!) some kind of average of monthly gridded temperatures, yearly averaged and then further averaged over the entire 20th century. Other groups determine their base temperature against which anomalies are determined somewhat differently making different choices about interpolation methods, base year averages and so on.

So this global temperature metric doesn't have much meaning as an absolute metric - it's poorly specified. Hadcrut, for example, considers that their global surface temperature may be within 0.5 oC of the "real value" (see e.g. Jones et al (1999) Ann. Rev. Geophys 37, 173), and the "average temperature" over regions of land can be uncertain to the extent of 0.5 - 1.0 oC, for some of the reasons I specified in my posts of a couple of days ago.

However all of this doesn't matter terribly much, since the anomalies can be established independently of the method of data management, and are far more robust (Hadcrut's anomalies are assessed to be accurate to +/- 0.05 oC at 2 standard error, whereas they don't consider it fruitful to attempt an assessment of the standard error in global surface temperature at least up to their 1999 summary paper).

So you’d be unlikely to find papers that compare global average temperatures with modelled temperatures (there might be some reasons for being interested in doing this, so long as one recognised the problems). More likely one might see a comparison using a sub-global temperature average over regions of the globe where averages may be more reliably assessed; however even here one needs to consider that the real-world average temperature can’t be determined very precisely… whereas we're more confident that the temperature anomaly can be.

May 26, 2012 at 2:07 PM | Unregistered Commenterchris

O.K. Richard has just addressed this far more concisely than I have!

May 26, 2012 at 2:09 PM | Unregistered Commenterchris

May 26, 2012 at 2:07 PM | chris

"You're still not getting it Simon, and your relentless presumption of bad faith isn't helping!"

It's rather difficult not to see you acting in bad faith when you repeatedly do so.

See if you can answer these questions yes or no - without any red herrings, insults or attempts to soften what you said but without admitting you were plain wrong to say that global temp average was "meaningless" (my answers in brackets):

1: Is it meaningful to theoretically define a global average temperature? (trivially true)

2: Is it meaningful to calculate the average temperature in a climate model? (obviously - I've even made a very simple computer model and done so myself)

3: Is it meaningful to measure values of temperature at different locations? (obviously)

4: Is it meaningful to average those values of temperature? (obviously)

5: Is it meaningful to compare model and measured average values? (obviously)

Now, if you still think the concept of a global average temperature is "meaningless" then I'm afraid you're beyond my help. We can of course argue about whether the sample density is a good approximation to the model's grid and the errors on the averages but those are numerical details. In fact I'll add

6: Have I ever said that global average temp can be measured more accurately than anomalies? (no)

Unless you're willing to concede that you were wrong to say that global average temperature is meaningless it's really rather hard to take you seriously. I notice in your latest post, rather than "meaningless", you now say global average temperature doesn't have "much meaning" and is merely "poorly specified" but still without admitting your mistake.

Come on, you can do it... then we can argue about whether these models have been properly tested.

May 26, 2012 at 3:31 PM | Unregistered CommenterSimon Anthony

May 26, 2012 at 2:03 PM | Richard Betts

Your comments apply to surface stations. Do you think that satellite measurements of temperatures around the globe can be averaged to produce an estimate of global average temperature?

May 26, 2012 at 3:35 PM | Unregistered CommenterSimon Anthony

May 26, 2012 at 2:03 PM | Richard Betts

Sorry, meant to add some other points/questions (and in any case need a break from the sun):

You gave an estimate of global average temp at sea level.

Are there published model estimates of global average sea level temps to which it can be compared?

And a slightly more abstract question: the accuracy on the measured global average temp is +/- 0.5 degrees. With what do you think that should be compared to estimate the relative size of the error? 273K? 14C?

And, related, with what should the estimated temp anomaly be compared to show its relative size?

May 26, 2012 at 4:04 PM | Unregistered CommenterSimon Anthony

May 26, 2012 at 4:04 PM | Simon Anthony

Last line should have read:

With what should the error on the estimated temp anomaly be compared to show its relative size?

May 26, 2012 at 4:05 PM | Unregistered CommenterSimon Anthony

Let's get this straight. Average global temperature is pretty meaningless, but no less meaningless than averaging temps according to some algorithm with grids and interpolation and then producing an 'anomaly' figure and claiming that to be somehow different from a temperature average. It's just another way of saying the same thing, but with the added dimension that your method claims to be able to overcome differences in altitude, station selection, data quality, what have you. Once you average your data or smoosh it together in some other way you have lost buckets of salient data which you could have used to see trends. It's gone, subsumed in an average, which we all agree is meaningless. There cannot BE an absolute temperature; temperature is an intensive property. There are better ways to handle that data if you want to find out what is going on. There are no better ways if you want to prove an agenda of warming as part of an alarmist advocacy position.

May 26, 2012 at 4:42 PM | Unregistered CommenterRhoda

May 26, 2012 at 2:09 PM | chris

I've just noticed a post in which you say:

"O.K. Richard has just addressed this far more concisely than I have!"

Apparently you think Richard's post is supporting yours in which you say that global average temp "doesn't have much meaning" and is "poorly specified".

The first line of Richard's post is:

"Global mean temperature at sea level is estimated to be about 14.0C +- 0.5C."

So Richard gives an estimate of global mean temp to within half-a-degree and you see this as supporting your views that global average temp is "meaningless" and "poorly specified"? Are you 'avin' a larf?

Also, I was just reading back through some of your posts and noticed that you seemed concerned that NOAA's calculation of global average temperatures for the last century was averaged over 100 years, as though that was a compelling argument in your favour (how else would you calculate an average over a century?).

Well, here are NOAA's (meaningless?) figures for the global average temperature for April 2012...http://www.ncdc.noaa.gov/sotc/global/. Their estimate is 14.35 +/- 0.08 degrees C. Not bad for a "meaningless" figure (and incidentally more than 6x more accurate than the numbers given by Richard Betts).

When reading back I saw you mentioned a scene from "The Life of Brian". Your approach to argument reminds me rather of the Black Knight scene from "The Holy Grail". He's had his arms and legs chopped off, he's a mere torso with a mouth, but he still wants to fight on, threatening to bite King Arthur's legs.

Give it a rest Chris; choose your battles more wisely.

May 26, 2012 at 5:25 PM | Unregistered CommenterSimon Anthony

I don't know whether anyone is still reading (it would be very easy to find better things to do) but, just for the record...

Chris (and I think Richard) have expressed doubts about the accuracy of average global temperature measurements, Chris somehow giving the impression that the direct calculation of an average temperature anomaly was far more accurate than one deduced from average temps. I'd assumed they were probably right, but I thought I'd check...

NOAA seem to quote (April) temperature and anomaly to +/- 0.08 degrees. HADCRU temp anomaly (as far as I can tell) is quoted to ~+/- 0.05 degrees.

If I've got that right, direct calculation of anomaly (HADCRU) seems to be marginally more accurate than anomaly deduced from temperature (NOAA), but not significantly so.

May 27, 2012 at 8:06 AM | Unregistered CommenterSimon Anthony

O.K. that’s a clever find Simon... good work! Let’s consider your NOAA combined average temperature in a moment.

As Richard and I have shown you, Hadcrut/Met don’t assign an estimate of error to a notional global average temperature but estimate that it (the global average) is within 0.5 oC of 14 oC.

Clearly, therefore, comparing a model with real-world measures using a global average temperature is not particularly useful. After all, if we are quite relaxed about the likelihood that our measured estimate spans a full 1 oC range (±0.5 oC), this doesn’t constrain the model terribly well. Our estimated uncertainty is larger than the entire surface warming of the 20th century!

In a nutshell that answers your question of a few days ago: “4: That brings up another question. Why not just predict global temp in degrees K rather than the anomaly, which could then be derived from the temp?” That’s obvious, isn’t it, Simon? There’s little point in using a metric that we can only estimate to within around a degree C. On the other hand we can define the temperature anomaly with ten times or more the accuracy and precision.

NASA Giss come to a similar conclusion ( see: http://data.giss.nasa.gov/gistemp/abs_temp.html):

“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.” (note that “models” here refers not to climate models but models for interpolating through areas with poorly defined temps).

If you have a read of the NASA Giss page the FAQ are also rather informative about the rather notional nature of the concept of “global average temperature”.

So what are NOAA doing quoting a value to an apparent precision of 0.08 oC when Hadcrut/Met and NASA Giss consider we’re fortunate if we get within 0.5 oC of what the global average temperature might be?

One possibility is that NOAA are being rather naughty and are adding a precise temperature anomaly to a very imprecise global temperature estimate and retaining a spurious level of precision! I suspect, however, that NOAA’s “combined average temperature” isn’t a global average temperature at all, but is an average of the local temperatures that can be measured accurately (or something like that; it’s not totally clear and might be worth emailing them to find out).

So the whole concept of global average temperature is somewhat dubious in the context of real-world measurement; it rather depends on how one chooses to define it. After all, on the first NOAA page you referred us to, the 20th century average “global average temperature” was 13.9 oC, whereas on the second page it’s 13.7 oC. It can’t be both at the same time. Thankfully the temperature “anomaly” eliminates the whole slew of complications and confusions associated with attempting to define the rather hazy concept of “global average temperature”.

May 27, 2012 at 9:17 AM | Unregistered Commenterchris

May 27, 2012 at 9:17 AM | chris

All too predictably (and much in the spirit of Myles Allen on another thread) whether by accident or design you miss the point. You started this debate by claiming that global temp was "meaningless". I showed you that it was trivially calculable and that NOAA provided an accurate estimate.

Richard then showed you another accurate estimate (please don't waste anyone's time by pointing out that the anomaly estimate is more accurate; that was never at issue, except when you tried to change the subject) which you bizarrely claimed supported your wrong-headed views. You've now agreed that the estimate Richard gave is valid.

So I'm faced with arguing with you when you say that global average temp is meaningless, despite having seen two published estimates by reputable organisations, at least one of which you say you accept (I'm not quite sure why you think NOAA is being "naughty", "imprecise", and is apparently publishing misleading information. I suggest you let them know that you're onto them; that you "Chris" know better than they do).

You seem able simultaneously to think two contradictory things. I can't do that. My five-year-old daughter, at least when she's overtired, hungry and fractious, can do it, you can do it, but I can't. If you want to continue the discussion, I'll see what her thoughts are, but I warn you, she's not usually as illogical as you.

May 27, 2012 at 9:15 PM | Unregistered CommenterSimon Anthony

The anomaly is measured using thermometers. The average global temp is measured using the same thermometers, indeed the same records. How is one more accurate than the other? Whatever the base of the anomaly, any given anomaly is relative to the base temperature and thus equates to a temperature figure. They are in fact just ways of presenting the same thing, with the difference of implication that the anomaly brings.

You will not learn from anomalies or global average temps. You can learn a lot from the data before it is messed about with. But nobody seems keen to learn, only to push agendas.

May 27, 2012 at 11:15 PM | Unregistered CommenterRhoda

Steady Simon... I didn't say anything negative about NOAA. It would really help if you were to read posts carefully before sounding off.

The estimate of global average temperature that Richard and I showed you isn't accurate at all. How can some estimate of "global average temperature" be "accurate" when its value isn't known? The estimate Richard and I showed you (it's estimated to be between 13.5 and 14.5 oC) doesn't have an assessment of a standard error. It's a bit of an educated guesstimate and not a meaningful metric against which to assess a model. The uncertainty in its value is greater than the entire 20th century temperature change.

I'm genuinely curious what the NOAA value refers to. It seems rather unlikely that while Hadcrut/Met and NASA Giss consider a value for global average temperature to be beyond useful calculation outside of an estimated error of +/- 0.5 oC, NOAA consider it can be definable to below a tenth of a degree. So I suspect that NOAA's consideration of "global average temperature" is different from Hadcrut/Met's and NASA Giss's. If I have some time I might try to find out.

Of course I could be wrong which would be amusing. It wouldn't surprise me if much of the debate (or is it an argument?) we're having here has a semantic basis relating to the meaning of words. Whether or not that's true, I suspect we are both very clear about the answer to your question of a couple of days ago, and for the rest perhaps this might be a good time to agree to disagree.

May 27, 2012 at 11:50 PM | Unregistered Commenterchris

May 27, 2012 at 11:15 PM | Rhoda

I agree with much of what you say, Rhoda. In fact the reason that scientists use anomalies is because they are rather keen to determine the most accurate Earth response to greenhouse forcing or other influences! But your point about thermometer measurements is exactly right. Both absolute average temperatures and temperature anomalies are measured with the same thermometers. However the estimation of a global average temperature involves far more "messing around" with the data than the anomaly does. In order to estimate a global average temperature one needs to make rather large-scale interpolations between temperature stations: estimating temperatures up and down mountainous terrain, where temperature rises and falls rather dramatically with altitude, across vast empty desert regions, the polar ice sheets and so on. So the global average temperature is rather a nominal concept.

On the other hand the global anomaly (the global average temperature change) is rather strongly tied to the directly measured thermometer data. If one considers, for example, that the global temperature might be determined from an average of local/regional average temperatures (themselves determined by some sort of interpolation), you can see conceptually that by subtracting the local or regional temperature averages from the local or regional baseline, the interpolation is also subtracted, and so the anomaly is rather more strongly pinned to the true temperature readings than is some sort of average temperature. This works because temperature change is quite strongly correlated spatially, even if the absolute temperatures measured at different sites in the same locale/region may be quite different.

May 28, 2012 at 12:33 AM | Unregistered Commenterchris

May 22, 2012 at 12:14 PM | chris

"global temperature is pretty much without meaning"

May 27, 2012 at 9:17 AM | chris

"Hadcrut/Met...estimate that it (the global average) is within 0.5 oC of 14 oC."


May 27, 2012 at 9:17 AM | chris

"NOAA are being rather naughty and are adding a precise temperature anomaly to a very imprecise global temperature estimate and retaining a spurious level of precision! I suspect, however, that NOAA’s “combined average temperature” isn’t a global average temperature at all"

May 27, 2012 at 11:50 PM | chris

"I didn't say anything negative about NOAA."


Such clear examples of "doublethink" are rare. I'll keep these for future reference.

May 28, 2012 at 5:55 AM | Unregistered CommenterSimon Anthony

The anomaly is merely a way of presenting results. Whenever you say to yourself 'how shall I present the results?' you are in effect saying 'what do I want my readers to think?'. Of course globally averaging temps also produces a meaningless number. Both numbers give ample opportunity for fiddling. The essence of my objection is the amount of useful data which is lost when you smoosh everything together, and the way you can spin results by cherry-picking thermometers or introducing unjustified adjustments. The interpolation thing, who checks that? Who tests the method by interpolating grids where the temp is actually known? How does that work? In the polar regions too? But I guess all those smart climate scientists know more than an Oxfordshire housewife.

May 28, 2012 at 8:09 AM | Unregistered CommenterRhoda

Simon at 5:55 AM, I think you'll find that the arguing style chris used here has quite a few precedents in the doublethink stakes. Nick Stokes at Climateaudit has a fine line in this. The basic rule is: never concede any point, however minor, if doing so would represent a criticism of consensus climate science. Don't even concede that you won't concede anything. And logical consistency is desirable but not mandatory.

To do both of them justice: they both know a lot about the topic, and are, by the standards of blog discourse, extremely and unfailingly polite.

May 28, 2012 at 1:24 PM | Registered CommenterJeremy Harvey

Hi Rhoda,

We're actually thinking on pretty similar wavelengths. In fact regional and even global temperature anomalies can be defined pretty accurately without doing any "smooshing" at all, because temperature differences are well correlated spatially whereas absolute temperatures aren't. So even though we don't really know what a global average temperature (or even a regional average temperature) is, we can be fairly confident that our value for the temperature anomaly reflects the true temperature change.

So we can take our set of completely unadulterated thermometer readings in Year 1 (perhaps a year that we define as our "baseline"), subtract each direct Year 1 reading from the corresponding Year 10 reading, and the average value would be a pretty good measure of the temperature change between Years 1 and 10 in the region that our set covers. We don't have to do any interpolation whatsoever (we might do an area-weighting to account for the fact that some regions may have a high density of temperature stations and some a low density).
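A minimal sketch of that procedure (the station names, readings and area weights are invented purely for illustration):

```python
stations = {
    # name: (Year 1 "baseline" reading, Year 10 reading, area weight)
    "valley":   (15.2, 15.5, 0.6),
    "mountain": ( 3.1,  3.5, 0.3),
    "coast":    (11.8, 12.1, 0.1),
}

# Each station is differenced against its own baseline first, so no
# interpolation between stations is ever needed; the very different
# absolute temperatures never enter the result.
weighted = sum(w * (y10 - y1) for (y1, y10, w) in stations.values())
total_w = sum(w for (_, _, w) in stations.values())
anomaly = weighted / total_w
print(round(anomaly, 2))  # 0.33
```

Note the mountain station is more than 10 oC colder than the valley station in absolute terms, yet that difference is irrelevant to the anomaly, which depends only on each station's own change.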

The only problem with this is that there are some areas of the world where we don't have good coverage (Antarctica, the Arctic) and that may be changing at a different rate to the rest of the world. NASA Giss address this by reintroducing an interpolation through these vast areas, and so theirs certainly seems to be a more "smooshed" temperature series. However if you're not happy with that procedure (there are actually some good reasons for attempting to assess Arctic and Antarctic contributions to overall temperature change), then you can choose not to use that data set, and in fact most published papers don't.

So the question about who checks the interpolation methods is partly redundant when temperature anomalies are used. Since there are at least four independent groups that compile these data and they all give very similar anomalies, we can be fairly confident that they are a decent reflection of real world temperature change.

The discussion that Simon and I have had on this thread illustrates rather graphically the problems with the "global average temperature"! It's difficult to know what this metric actually means (we have to be very careful to define what we actually mean when using it - I think NOAA are using it differently from Hadcrut/Met and NASA Giss) and so it's catnip for the generation of confusion. Using anomalies cuts through all the befuddlement! That’s how I see it anyhow…

May 28, 2012 at 2:24 PM | Unregistered Commenterchris

Well, I'm not ecstatic about averaging temperature at all, even in the merest sense. Not high/low over 24 hours, not weekly, monthly or annually, not spatially. As an intensive property, temperature cannot be meaningfully averaged. As a principle, changes in an average, or an anomaly derived from an average (which they all are), can leave a lot unknown. If I really wanted to know what was happening with temperatures, I'd take a number of stations and record continuously. I'd look to see whether minima and maxima occurred at the same times or differed. I'd compare each station against its own record over time. I'd interpolate nothing and average nothing. I would find, for sure, stations which were flat, stations with bathtub curves, or humpbacks, or hockey sticks. Then I'd drill down according to what I had found. That's the way to find out what is going on.

I wouldn't go round trying to scare people on the basis of a change in anomaly which is smaller than the difference between yesterday and today.
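The per-station approach described above could be sketched roughly as follows (a toy classifier with invented station names and readings, not real data): each station is judged only against its own record, with no interpolation and no averaging across stations.

```python
# Rough sketch of the per-station approach: compare each station only
# against its own record, no interpolation, no cross-station averaging.
# Station names and readings are invented.

def classify(readings):
    """Crudely label the shape of one station's record by comparing
    the mean of each third of the series."""
    n = len(readings)
    a = sum(readings[: n // 3]) / (n // 3)
    b = sum(readings[n // 3 : 2 * n // 3]) / (2 * n // 3 - n // 3)
    c = sum(readings[2 * n // 3 :]) / (n - 2 * n // 3)
    if abs(a - c) < 0.1 and abs(a - b) < 0.1:
        return "flat"
    if b < a and b < c:
        return "bathtub"
    if b > a and b > c:
        return "humpback"
    return "hockey stick" if c > a else "declining"

stations = {
    "Station A": [10.0, 10.0, 10.1, 10.0, 10.0, 10.1],
    "Station B": [10.5, 10.1, 9.8, 9.9, 10.2, 10.6],
    "Station C": [10.0, 10.0, 10.1, 10.2, 10.9, 11.3],
}
for name, record in stations.items():
    print(name, classify(record))  # flat, bathtub, hockey stick
```

The shape labels then tell you where to drill down, station by station, without ever constructing a global number.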

May 28, 2012 at 3:07 PM | Unregistered CommenterRhoda

May 28, 2012 at 1:24 PM | Jeremy Harvey

Yes it's odd. Neither Nick nor Chris seems interested in understanding; only in winning by whatever means. Such inability to accept that one is wrong on even trivial points generally shows deep insecurities about one's beliefs - as though the whole edifice is balanced so finely that a tiny breeze might bring it all down.

However, on your final points, I'm not so sure. I'd honestly expected better arguments from Chris. Like you, I had the impression that s/he (one of those names that could go either way - perhaps Chris could let me know) knows quite a lot about the subject. I'd previously been reluctant to challenge other dubious assertions that I'd noticed s/he had made because I thought I'd have to spend a vast amount of time reading papers that were often not written to be understood.

On this average temperature issue, Chris's objections were so unsound that I thought I really ought to follow up. I was very surprised that it took only a few minutes of googling to find that average global temp had not only been measured (which was all I claimed) but was available from NOAA to the remarkable claimed accuracy of ~0.04% (if I've got my sums right and the error should be related to degrees absolute). Now NOAA isn't exactly a low-profile organisation in this field and yet it seems that Chris was unaware that they had the very data s/he insisted were meaningless.

I've looked back at other posts from Chris and noticed other assertions and claims which also look dodgy. Others haven't followed up, I'd guess for similar reasons to those which inhibited me. So I think there may be other glaring inconsistencies (or anomalies?) in what superficially seems a plausible belief system.

I suspect that Chris's unawareness of counter arguments is related to his/her inability to admit when wrong - just as s/he is unwilling to concede even when caught in basic logical errors, s/he finds it very hard to question his/her current opinions. The result is sadly but inevitably that his/her belief system is so one-sided that it's dangerously unstable. Having built such a flimsy structure without properly challenging his/her views, he/she just doesn't know which bits are reliable and which are rotten and so can't let any of them go.

As I said in another thread a few weeks ago, the global warming debate may ultimately be interesting not so much for the flawed scientific arguments (after all, flawed science comes up all the time) but for what it shows about the psychology and belief systems of those involved.

May 28, 2012 at 3:30 PM | Unregistered CommenterSimon Anthony

Rhoda

I wouldn't go round trying to scare people on the basis of a change in anomaly which is smaller than the difference between yesterday and today.
Or between the front of the house and the back garden.
It would help if we could persuade the warmists to abandon the word 'anomaly' — which is intended to be a scary concept — and replace it with the word 'variation', which is equally accurate but less emotive.
We can then ask: what norm is the current figure a variation from? It has to be a variation from a standard, which is normally taken to be the mean temperature over 1961-1990.
But the important thing is that it must be a variation from a temperature however that temperature is calculated. Why? Because thermometers measure temperatures, not anomalies.
And if the global mean temperature itself is meaningless (assuming it is possible to make a proper calculation anyway), as chris claims, then the global mean temperature averaged over 30 years is just as meaningless. Which, you could argue, makes temperature variations from that norm meaningless as well.
What you can do is record maximum and minimum temperatures for a given site every day for 30 years and then starting from day one of the 31st year start publishing variations from the average derived from those 21,900 readings. They will tell you nothing about any other site anywhere in the world and they will only be meaningful for that site if (a) readings have been taken every day for 30 years, and (b) there has been no change either in the environment of the site or the equipment used.
Which raises the question, how many sites world-wide fit those two criteria? I would suggest the answer is pretty close to none.
What this suggests to my simple mind is that we are being led by the nose into believing that it is possible to calculate a global variation from the norm without at some stage calculating a global norm and all that current "anomalies" do is pretend to be reliable indicators of how the temperature is changing as compared with some arbitrary 30-year period.
I am not convinced that any of this has any relevance in the real world.
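The single-site calculation described above (a 30-year baseline of daily max/min readings, then published variations from it) is simple enough to write down. A minimal sketch with invented readings and a scaled-down "baseline period":

```python
# Sketch of the single-site calculation: build a baseline norm from
# 30 years of daily max/min readings (30 * 365 * 2 = 21,900 values),
# then report later readings as variations from that norm.
# All readings here are invented, and the "baseline" is scaled down
# to two toy years of three days each.

def baseline_mean(daily_max, daily_min):
    """Mean of all max and min readings over the baseline period."""
    readings = daily_max + daily_min
    return sum(readings) / len(readings)

base_max = [15.0, 16.0, 14.0, 15.5, 16.5, 14.5]
base_min = [5.0, 6.0, 4.0, 5.5, 6.5, 4.5]
norm = baseline_mean(base_max, base_min)  # 10.25

# From "day one of year 31", publish variations from that norm.
todays_mean = (15.8 + 6.2) / 2
print(round(todays_mean - norm, 2))  # 0.75
```

The sketch makes the dependency explicit: the published variation is only as good as the norm, and the norm is only as good as the completeness and stability of that one site's record.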

May 28, 2012 at 4:04 PM | Registered CommenterMike Jackson

May 28, 2012 at 3:30 PM | Simon Anthony

"...the global warming debate may ultimately be interesting not so much for the flawed scientific arguments (after all, flawed science comes up all the time) but for what it shows about the psychology and belief systems of those involved."

I'm quite interested in the climate science story as a scientist involved in computer modelling - and as a citizen concerned about the state of the world. But for me, the biggest reason for interest is as an amateur philosopher of science. We're living through one of the biggest "Science and Society" stories ever and it is fascinating.

Chris comes across to me as someone with very broad knowledge of lots of science (he talked very knowledgeably in this thread about climate modelling and its history and about temperature records, but also about protein structures and MD modelling). But his knowledge does not seem enormously deep in any one area. Maybe he is a science communication professional rather than a professional researcher? He may have limited experience of discovering that much-cherished theories are not correct. That happens on a daily basis when you do research.

May 28, 2012 at 4:56 PM | Registered CommenterJeremy Harvey

May 28, 2012 at 4:56 PM | Jeremy Harvey

FWIW, on climate change I have no very strong opinion; or rather, I don't think the evidence and arguments are anywhere near strong enough to support the conclusions, either scientifically or economically. My real concern is with the subversion (corruption?) of science.

I don't think it's mere correlation that unprecedented growth of democratic government and improvements in living standards have come at the same time as science developed. The reason is that what's scientifically "true" and false doesn't depend on anyone's opinions - neither popes nor tyrants can stop the earth going round the sun rather than vice versa. Nature isn't beholden to committees. Science has been essential for freeing people from tyranny and so I'm fearful when I read that "consensus" is now important in deciding what's scientifically "true".

My fears aren't limited to the climate change debate: rather, if powerful and influential groups of people can decide what's "true" in science, preventing appeal to rational consideration of the evidence, then so they can decide what truths are allowed throughout all society.

So, for me, the climate change debate isn't solely or even mostly about whether temps rise by a degree or two. Rather it's about whether we are able to stop the perversion of science into propaganda so it becomes just another weapon for would-be undemocratic rulers.

May 28, 2012 at 5:43 PM | Unregistered CommenterSimon Anthony

May 28, 2012 at 1:24 PM | Jeremy Harvey

When discussing with chris (and I think with Nick Stokes, although I've only watched his antics rather than engaged) something that's striking (and dumbfounding and exasperating) which you indicate, is the lack of a sense of proportion.

Nick's recent contretemps about "death threats" to climate scientists where he appeared in several places with arguments that made him seem increasingly ridiculous was a clear example. He made himself look foolish, like a drunk in a bar who wanted to fight anyone in the room, over something which mattered not in the slightest. He could have just shrugged and quite reasonably asked what difference do imagined threats to over-excited climate scientists make to climate science?

Similarly, on this thread chris claimed that global temp was meaningless then found himself (I'm following your lead on this) in an increasingly embarrassed position when all he had to do was ask just how accurately it can be measured. Then a useful discussion might have ensued.

Of course when people like chris and Nick show such poor judgement, such a distorted sense of proportion, it leads one inevitably to wonder just how badly wrong they may have gone on other aspects of climate science.

May 28, 2012 at 7:02 PM | Unregistered CommenterSimon Anthony

Interesting, I've just noticed Simon's false précis of my comments re NOAA. In relation to the surprising precision with which NOAA cite their metric of "combined average temperature", I said the following (May 27, 2012 at 9:17 AM | chris); note the exclamation mark at the end of the first sentence:

"One possibility is that NOAA are being rather naughty and are adding a precise temperature anomaly to a very imprecise global temperature estimate and retaining a spurious level of precision! I suspect, however, that NOAA’s “combined average temperature” isn’t a global average temperature at all, but is an average of the local temperatures that can be measured accurately (or something like that; it’s not totally clear and might be worth emailing them to find out)."

In order to pretend that my comment is intended to be negative about NOAA, Simon removes the first four words from my first sentence and the last half of my second sentence, and voilà! he changes the meaning to suggest a dogmatic assertion of misbehaviour:

Simon's version:

"NOAA are being rather naughty and are adding a precise temperature anomaly to a very imprecise global temperature estimate and retaining a spurious level of precision! I suspect, however, that NOAA’s “combined average temperature” isn’t a global average temperature at all"

Yuk...

There is an interesting question of what the NOAA metric "combined average temperature" is. It's clearly not the same as the "global average temperature" discussed on both the NASA Giss site and Hadcrut/Met site. But perhaps finding out stuff about the science isn't actually of much interest to some of the posters here...

May 28, 2012 at 7:26 PM | Unregistered Commenterchris

May 28, 2012 at 7:26 PM | chris

If I understand you correctly, your latest claim is that putting the words "One possibility is that..." in front of a negative statement (about NOAA in that case) stops it being a negative statement.

Try your logic on this...

Chris is intellectually dishonest, preferring nit-picking sophistry to substantive argument, wriggles rather than faces up to errors; when caught out, rather than change his mind he tries to change the subject, all the while so lacking in self-awareness he is entirely unable to see how ridiculous he appears and how much damage he does to whatever case he has...

Oh sorry, that sounds a bit negative; I meant to put "One possibility is that..." in front of it.

You know, I'm not sure that really works.

May 28, 2012 at 9:06 PM | Unregistered CommenterSimon Anthony
