Monday, Jul 4, 2011

Testing two degrees

One of the questions I would have liked to ask at the Cambridge conference the other week related to a graph shown by John Mitchell, the former chief scientist at the Met Office. Although Mitchell did not make a great deal of it, I thought it was interesting and perhaps significant.

Mitchell was discussing model verification and showed his graph as evidence that the models were performing well. This is it:

As you see, the graph is actually derived from the work of Myles Allen at Oxford and examines how predictions he made in 2000 compare with the outturn.

The match between prediction and outturn is striking, and indeed Mitchell was rather apologetic about just how good it is, but this is not what bothered me.  What I found strange was that the prediction (recalibrated - but that's not the issue either) was compared to decadal averages in order to assess the models. As someone who is used to Lucia Liljegren's approach to model verification, I found the Allen et al method rather surprising.

The two assessments are obviously very different - Lucia is saying that the models are doing rather badly, while Allen (Mitchell) et al are saying that they are doing fine. It seems to me that they cannot both be correct, but as a non-statistician I am not really in a position to say much about who is right. I have had some email correspondence with Myles Allen, who is quite certain that looking at sub-decadal intervals is meaningless. However, I have also read Matt Briggs' imprecations against smoothing time series, and his fulminations against smoothing them before calculating forecast skill.

We really ought to be able to agree on issues like this. So who is right?


Reader Comments (127)

SFT

"reality rather than ideas"

I find it hard to see 'global temperature' as anything but an idea. A construct, if you prefer, but not something that anyone actually experiences on a globe where it is both summer and winter and day and night, all at the same time.

Jul 5, 2011 at 1:04 PM | Unregistered CommenterJames P

Good piece today from our friend Andrew Orlowski at El Reg, which includes this delicious quote from Judith Curry, referring to an article suggesting that coal emissions might have a cooling effect:

"The political consequence of this article seems to be that the simplest solution to global warming is for the Chinese to burn more coal, which they intend to do anyway"

Link

Jul 5, 2011 at 1:19 PM | Unregistered CommenterJames P

Not sure what happened to the link. Try again...

Link

Jul 5, 2011 at 1:20 PM | Unregistered CommenterJames P

HOLY EXPLODING WATERMELONS.....It's the Wizard of Oz!..

http://fenbeagleblog.wordpress.com/2011/07/05/ivery-towers/

Jul 5, 2011 at 1:23 PM | Unregistered CommenterFenbeagle

@Jul 5, 2011 at 10:52 AM | ScientistForTruth

"But above all, the climate modellers themselves have to be seduced into believing they are something that they are not, otherwise they wouldn't produce them. They have to believe they are doing something related to reality rather than producing a tool for a social narrative."

If I might be permitted to lower the tone, the modellers thus prove Brumby's First Law of Bullshit.

This suggests that those who are particularly adept at turning out bullshit always end up by believing it themselves.

Jul 5, 2011 at 1:24 PM | Unregistered CommenterMartin Brumby

What this thread badly needs is Zed to come along and tell us what we ought to be posting about, and to shut up and post about that instead.

Jul 5, 2011 at 1:43 PM | Unregistered CommenterJustice4Rinka

I notice that the model used for comparison warms at about 2C per century, with the low end of the range below 1C per century.

But the models they actually talk about in press releases give us 3, 4, 5, 6 or more degrees per century. The latest from CSIRO in Australia, being quoted in the media, is "up to" 5.5 degrees C by 2070. I'd like to see THAT model compared to the real world data.

Jul 5, 2011 at 1:45 PM | Unregistered CommenterBraddles

Am I to believe that the concept of global temperature has some sort of scientific meaning? And that we spend fortunes on models that try to predict something with no scientific meaning? Lordy lordy I am so glad I live in the ocean.

Jul 5, 2011 at 2:06 PM | Unregistered CommenterDolphinhead

@Dolphinhead, it's not awfully clear what scientific meaning it has.

The "global temperature" is calculated as a kind of average of near-surface air temperatures. It doesn't explicitly take account of high altitude air temperatures, or temperatures at or below the surface of either the land or the sea, so I'm not sure to what extent it can really be used to represent the average temperature or total energy in the earth as a whole.

Certainly it can be useful as a diagnostic tool, just as an under-armpit temperature can give an indication of illness, but I'm not convinced that it has any particular physical significance.

I also don't understand why it is calculated so infrequently.

It appears that global temperature is calculated once per year, by taking a weighted average of the average daily temperatures at a fairly small number of locations (a few thousand).

I don't understand why, given modern computer and communications technology, the average global temperature is not calculated every day, if not every hour.

The amount of data and processing involved would be quite small and easily within the capacity of any modern PC – about 175 MB/year for hourly readings between -327.68º C and +327.67º C (should be sufficient?) from 10,000 locations. In these days when the smallest PC hard disk is 500 GB it is not a burden to use 17.5 GB to store 100 years of global data at hourly intervals. Hell – it's not expensive to build a PC that can fit that into RAM, let alone disk.

If the calculation used to average the various temperatures has any physical significance then the hourly global temperature should be quite smooth and lacking in the daily and seasonal cycles present in the individual stations' readings. If this turns out not to be the case then it's a strong indication that the global averaging calculation is wrong.

Jul 5, 2011 at 2:34 PM | Unregistered CommenterBruce Hoult
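Bruce Hoult's storage arithmetic checks out; a minimal sketch, assuming one 16-bit integer per reading (hundredths of a degree, which is where the ±327.68 range comes from) and 10,000 stations reporting hourly:

```python
stations = 10_000
bytes_per_reading = 2              # int16, hundredths of a degree Celsius
readings_per_year = 24 * 365       # hourly

per_year = stations * bytes_per_reading * readings_per_year
print(f"{per_year / 1e6:.0f} MB per year")            # ~175 MB
print(f"{100 * per_year / 1e9:.1f} GB per century")   # ~17.5 GB
```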

James P "I find it hard to see 'global temperature' as anything but an idea."
Dolphinhead "Am I to believe that the concept of global temperature has some sort of scientific meaning? And that we spend fortunes on models that try to predict something with no scientific meaning?"

It has a meaning insofar as something can be defined. But the definition is only a semantic idea. It is not a 'thing', nor is it an attribute or quality of any 'thing' any more than, say, global salary or global lifespan. It does not describe any reality, but only an idea.

“When I use a word,” Humpty Dumpty said, in a rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master - that’s all.”

Jul 5, 2011 at 2:48 PM | Unregistered CommenterScientistForTruth

Thanks, SFT. I must say that for such an intangible concept, global temperature certainly gets a lot of air-play. As I see it, AGW predicates global catastrophe on the speculative properties of a trace gas and their even more speculative amplification, and the effects of both on a nebulous statistical construct that is only calculated when there’s a blue moon.

Whole careers can thus be financed in the search for solid supporting data that will always remain tantalisingly out of reach, which would be fine if it were just an academic exercise. Unfortunately, the greens have hijacked the operation and got us all wearing hair shirts, while they lounge about in hand-made silk ones...

Jul 5, 2011 at 3:16 PM | Unregistered CommenterJames P

The latest from CSIRO in Australia, being quoted in the media, is "up to" 5.5 degrees C by 2070. I'd like to see THAT model compared to the real world data.
Jul 5, 2011 at 1:45 PM | Braddles

Here you have it compared to the other model, the HadCM3, I'll keep searching for the "real world data"
http://tinyurl.com/3lwbo7f

Jul 5, 2011 at 3:17 PM | Unregistered CommenterPatagon

Myles Allen, or Hadley, has made what is essentially a prediction about the average temperature for the decade. That prediction is apparently spot on.

Lucia has claimed that the observed trend over the decade is inconsistent with model predictions. In doing that, Lucia had to make some statistical assumption for the trend. She tried various assumptions, including AR(1). The AR(1) assumption is insupportable; a non-technical explanation of that is given in an article that I published in The Wall Street Journal. The other assumptions that Lucia considers are similarly insupportable. Thus Lucia’s conclusions are unfounded.

What assumption should be used in analyzing trends? The only assumption that has been given reasonable justification is what is variously called “stochastic self-similar scaling” (SSS), “fractional Gaussian noise”, “Hurst–Kolmogorov”. For a brief overview, see
http://www.bishop-hill.net/blog/2011/6/6/koutsoyiannis-2011.html

I have not done the calculation with SSS, but SSS tends to give very wide confidence/likelihood intervals. Such intervals would presumably include the observation.

To conclude, Allen/Hadley is right and Lucia is wrong.

Jul 5, 2011 at 3:38 PM | Unregistered CommenterDouglas J. Keenan
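To see why the choice of noise model matters so much in this dispute, here is a minimal Monte Carlo sketch (not Keenan's, Lucia's, or Allen's actual calculation, and with purely illustrative parameter values) comparing the spread of ten-year least-squares trends under white noise and under AR(1) noise of the same variance; long-memory (SSS / Hurst-Kolmogorov) noise would widen the spread further still:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sims = 10, 5000
sigma, phi = 0.1, 0.6          # illustrative noise s.d. (K) and AR(1) coefficient, not fitted to data
t = np.arange(n_years)

def trend(y):
    """Least-squares slope of y against time."""
    return np.polyfit(t, y, 1)[0]

# White-noise series
white = rng.normal(0.0, sigma, (n_sims, n_years))

# AR(1) series with the same marginal variance
ar1 = np.empty((n_sims, n_years))
ar1[:, 0] = rng.normal(0.0, sigma, n_sims)
innov_sd = sigma * np.sqrt(1 - phi**2)
for i in range(1, n_years):
    ar1[:, i] = phi * ar1[:, i - 1] + rng.normal(0.0, innov_sd, n_sims)

print("trend s.d., white noise:", round(np.std([trend(y) for y in white]), 3), "K/yr")
print("trend s.d., AR(1):      ", round(np.std([trend(y) for y in ar1]), 3), "K/yr  (wider)")
```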

Apparently China by burning coal is saving the World.

"Global warming lull down to China's coal growth" - Richard Black BBC News
http://www.bbc.co.uk/news/science-environment-14002264

"The absence of a temperature rise over that decade is often used by "climate sceptics" as grounds for denying the existence of man-made global warming.

But the new study, in Proceedings of the National Academy of Sciences, concludes that smog from the extra coal acted to mask greenhouse warming."

"The researchers conclude that declining solar activity over the period and an overall change from El Nino to La Nina conditions in the Pacific Ocean also contributed to the temperature plateau.

Lead researcher Robert Kaufmann from Boston University, whose research interests span climate change and world oil markets, said the study was inspired by "sceptical" questioning."

But of course.

Question: I guess Global Warming is ok but why aren't the temperatures rising?

Oh that... well we'll just spend a few more dollars and tell you why: it's because China is releasing more CO2... er I mean SO2 along with the CO2

Jul 5, 2011 at 3:49 PM | Unregistered CommenterRichard

Doug: I understand your reasoning for the statement "Lucia is wrong." But if Allen/Hadley aren't assuming SSS how can they be right?

Jul 5, 2011 at 3:52 PM | Unregistered CommenterRichard Drake

Sorry Doug,

Me as well. If Allen/Hadley are right, what are they right about? Their model?

Jul 5, 2011 at 3:58 PM | Unregistered CommenterGSW

I think the correct way of assessing the consistency between short-term global temperature trends in observations and climate model projections is here - I doubt Lucia or Myles would agree, but they are wrong :-)

http://julesandjames.blogspot.com/2010/05/assessing-consistency-between-short.html

....Clearly, over this time interval, the observed trends lie towards the lower end of the modelled range. No-one disputes that. But at no point do they go outside it, and the lowest value for any of the surface obs is only just outside the cumulative 5% level. (Note this would only correspond to a 10% level on a two-sided test). So it would be hard to argue directly for a rejection of the null hypothesis. On the other hand, it is probably not a good idea to be too blase about it. If the models were wrong, this is exactly what we'd expect to see in the years before the evidence became indisputable....

This was up to 2009, perhaps slightly better when you include 2010.

Jul 5, 2011 at 3:58 PM | Unregistered CommenterPeteB

@ Douglas J. Keenan "Myles Allen, or Hadley, has made what is essentially a prediction about the average temperature for the decade. That prediction is apparently spot on."

The prediction he made was "We expect global mean temperatures in the decade 2036–46 to be 1–2.5 K warmer than in pre-industrial times under a `business as usual' emission scenario."

1. How can that prediction be spot on when 2036-46 is still a while away? (To be safe, predict 40 years hence and collect the money in the meantime.)
2. Business has not been as usual: CO2 emissions have increased. The prediction should be in a higher range still.
3. Lucia is talking about temperature trends and comparing them with IPCC projections. To reach the higher range, since the trends are falling well below it, they have to do quite a bit of catching up.
It's like a race: if a runner is falling behind he may well still win, but the more he falls behind the less likely it becomes.

Jul 5, 2011 at 4:06 PM | Unregistered CommenterRichard

Just realised that Lucia is named as co-author on the paper I linked to so maybe she doesn't disagree with it ! But it is very different to the MMM analyses that she does on her blog

Jul 5, 2011 at 4:29 PM | Unregistered CommenterPeteB

Richard

"China is releasing more CO2... er I mean SO2"

As usual, whatever happens is due to GW and/or CO2. Hurricanes, tsunamis, droughts, floods, any change in ice or sea level, any inundation or depletion, there is nothing that can falsify the One True Way - although a few black-outs might test the faithful...

Jul 5, 2011 at 5:06 PM | Unregistered CommenterJames P

I know this is a rather large oversimplification but, if I assume that the CO2 concentration in 2040 is approximately double pre-industrial level** and that this modelled temperature projection assumes no major reduction of CO2 emissions, doesn’t the plot suggest that doubling CO2 only results in a temperature increase of between 1C and 2C?

If this is indeed a reasonable interpretation, doesn’t it also demonstrate that CAGW is being rather over-cooked?

**Assuming ~600ppm vs ~300ppm from IPCC WG1 Ch.10, Fig 10.36 a), p.828

Jul 5, 2011 at 5:27 PM | Unregistered CommenterDave Salt
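Dave Salt's back-of-envelope reading can be made explicit with the standard simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al. 1998); the concentrations below are the commenter's assumed values, and the implied response is a transient one, not an equilibrium sensitivity:

```python
import math

c0, c_2040 = 300.0, 600.0                # ppm: assumed pre-industrial and ~2040 concentrations (from the comment)
delta_f = 5.35 * math.log(c_2040 / c0)   # simplified CO2-only forcing, W/m^2
print(f"forcing for this doubling: {delta_f:.2f} W/m^2")   # about 3.7 W/m^2

for dt in (1.0, 2.0):                    # the 1 C to 2 C warming range read off the plot
    print(f"{dt:.0f} C of warming implies lambda ~ {dt / delta_f:.2f} C per (W/m^2)")
```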

Fenbeagle

I like 'carbonomics' - it not only sounds right, but it has the necessary implication of dodginess. I shall use it at every opportunity!

Jul 5, 2011 at 5:27 PM | Unregistered CommenterJames P

@ Richard Drake (3:52 PM), GSW (3:58 PM)

I meant that Allen/Hadley are right about their prediction for the average temperature of 2001–2010. In order to check that prediction, we would usually need to know the standard deviation (or, strictly, probability distribution) of the prediction. Allen/Hadley indicate the s.d. in the graph; those s.d. are not necessarily trustworthy though. In this case, however, that does not matter, because whatever the s.d. is, the observation would appear to be well within 1 s.d. of the prediction. Moreover, if the s.d. were calculated properly, it would presumably widen.

Jul 5, 2011 at 6:14 PM | Unregistered CommenterDouglas J. Keenan

Appreciate the response Doug,

A few points.

As far as I am aware no "prediction" for 2000 to 2009 was explicitly stated.

The forecast was made in 2000. The majority of the warming from the 1990s point to the 2000s point had already occurred by this time (there has been little if any warming over the last 10 years).

So obviously, at the time of the "prediction", a forecast of no warming at all would give the same result within 1 s.d.

Before people get too excited about the wonders of Allen/Hadley's predictive ability, it is probably worth pointing out again that for their forecast to be "wrong" (with the uncertainties shown) there would have needed to have been an unprecedented ~0.3K average decadal cooling.

Doug?

So the statement "they are right", I think is a little weak.

Jul 5, 2011 at 8:15 PM | Unregistered CommenterGSW

Can anyone comment on how realistic the emissions scenario used by Allen et al. is? Is it possible to test the assumptions they made about future emissions against current observations? Otherwise, it seems to me that getting the ‘right’ temperature but with the ‘wrong’ levels of CO2 and sulphate aerosols would simply be a matter of chance.

My attempts to do this follow, but I've found it difficult to find sulphate aerosol data.

Allen et al. say in the Methods that they use the IS92a emissions scenario

“forced with observed greenhouse gas and parametrized direct sulphate forcing to 1990 followed by 1% yr-1 compound increase in CO2 (close to the IS92a scenario in terms of radiative forcing) and IS92a projected sulphate loadings.”

Apparently IS92a assumes a 1% per year increase in CO2-equivalents (http://www.globalchange.gov/publications/reports/scientific-assessments/first-national-assessment/608). The IPCC TAR’s description of the IS92a scenario agrees quite well with observed CO2 levels (2000 to 2010: 372 to 393 ppm (IS92a) vs 369 to 390 ppm (obs)).

But IS92a also apparently assumes a 23 % increase in sulphate loading over the decade (0.57 to 0.64 TgS) which does not seem to agree with the flat lines linked from Judith Curry’s recent post. But I’ve not been able to find observations in the same units as IS92a. This page also seems to suggest that IS92a overestimates sulphate which would lead to a greater cooling effect.

So, could they have reached the ‘right’ average decadal temperature while overestimating the sulphate in the atmosphere, by also overestimating the sensitivity to CO2?

Jul 5, 2011 at 8:31 PM | Unregistered CommenterDR
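A quick arithmetic check on the concentration figures quoted above (a sketch only, using the ppm values given in the comment):

```python
# Compound annual growth rates implied by the quoted 2000 and 2010 CO2 concentrations.
series = {"IS92a": (372.0, 393.0), "observed": (369.0, 390.0)}   # ppm
for name, (start, end) in series.items():
    rate = (end / start) ** (1 / 10) - 1
    print(f"{name}: {100 * rate:.2f} % per year")
# Both come out near 0.55 %/yr for CO2 alone; as the comment notes, the 1 %/yr
# figure in the scenario refers to CO2-equivalents, not CO2 by itself.
```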

ScientistforTruth,

Loved your post. Do you think that all intellectuals can be employed in English Departments of universities? It seems that is where we are headed. The climate modelers might be there at this time.

Jul 5, 2011 at 8:32 PM | Unregistered CommenterTheo Goodwin

The Briggs thread referred to by RichieRich contains this remark from another blogger: "Deliberately spreading misinformation is immoral"

I'll let you find it, but I've only just lifted my jaw off the desk...

Jul 5, 2011 at 11:23 PM | Unregistered CommenterJames P

Patagon (7:42 AM) + others

Sorry for the long delay in replying, only just got back online.

I completely agree that the most important use of GCMs will be (and possibly is already starting to be) for forecasting climate on nearer-term timescales in order to allow plans to be made to deal with variability (whether this is purely natural internal variability or skewed by some external forcing). Sorry if I inadvertently gave the impression of claiming that we could forecast ENSO a year ahead; this is not the case - I just meant that the decadal forecast by Smith et al looks reasonably promising in that they correctly forecast that global mean temperature would continue to roughly flatline for a few years. I do, however, agree with Bruce Hoult and ScientistForTruth that global mean temperature is not particularly useful for anything, at least not on near-term timescales - nobody experiences it, they only care about local weather. So Patagon is right that the real need is to be able to forecast ENSO, monsoons etc a year in advance.

We do appear to have some skill in seasonal precipitation forecasts in West Africa and NE Brazil - two regions where the teleconnections to Atlantic SSTs are strong enough to allow the slowly-varying SSTs to be useful in making forecasts. In the UK we also have some skill (well, just about - 60%, which is better than 50/50!) in seasonal temperature forecasts due to the NAO, but we all know how difficult it is to communicate that kind of probabilistic forecast effectively!!! The newer generation of models is looking promising, but there is a way to go yet.

I also agree with both Patagon and Arthur Dent that we definitely need to steer clear of the models being treated as an Oracle. Unfortunately this does seem to happen - not by anyone who has actually worked with a GCM, I hasten to add (somebody once said "the further you are from the Met Office the more you believe the projections"!). The uncertainties are indeed huge and this needs to be made very clear when these things are used in decision-making. My main concern here is over-reliance on current models for adaptation planning - building expensive infrastructure on the assumption that worst-case scenarios are the most likely.

However the models are the only tools we've got, and uncertainty works both ways - it may well not be as bad as is presented by some people, but equally it may be "worse than we thought" to coin a popular phrase on this blog. The hard bit is then what do the decision-makers do with this? If you can't rule out a low-probability, high-impact scenario, what do you do? Do you hedge your bets? How "unlikely" does a very severe scenario need to be for you to avoid it? Even if there's only a 10% chance of something life threatening happening, do you take the chance? Of course this has to be weighed up against the consequences of taking action to avoid it - and that's why I'm a scientist not a politician....!

So my answer to Martin Brumby is that all I can do is communicate my science to the decision makers in as open and honest way as possible, with all the uncertainties laid bare, and then hope they can weigh this up effectively against all the other uncertainties they need to consider. Personally I guess I subscribe to the precautionary principle to some extent, but that's my personal view and others may disagree, and I don't feel strongly enough either way to overstate the case in order to influence a biased decision by those who have to decide what to do.

RichieRich: indeed there is no real way of testing the models through repeated forecasts for the kind of long-term projection that we are relying on for climate mitigation policy. All we can do is:

1. check the long-term forecasts of the older models (still using the same basic principles)
2. check the nearer-term forecasts (from daily through seasonal to interannual timescales)
3. check the individual components of the models for physical realism
4. run the models as hindcasts (and not cheat!)
5. see how the models compare against large changes in the palaeo record - as my former PhD supervisor Paul Valdes does. However of course you are limited by the availability and quality of suitable proxy data....

So no there is no real way to do what Judith Curry says is necessary. It's just down to whether the evidence gained from 1-5 above gives you sufficient confidence that some serious level/rate of warming is of sufficiently high probability to warrant action. Risk-averse people may still consider a relatively low probability to be "sufficiently high probability", but others may not.

Latimer Alder: yes, routinely updated forecasts are what is needed. We do issue forecasts of global mean temperature for the coming year every December, but you are rightly asking for more than that. Keep an eye on the literature for papers by my colleagues Doug Smith and Adam Scaife who are leading on this kind of work. Yes SimonW it would have been good to have been doing this for the last 30-40 years but we only developed the initialised forecast techniques needed to make this possible in the last few years.

Jul 5, 2011 at 11:55 PM | Unregistered CommenterRichard Betts

Bruce Hoult 3:25 AM

On your point about "5 points being used to predict a 6th point" - it's not really the points that are being used, it's the model (which is process-based, not statistical). The points were just there in the original paper to show the model agreed with observations in the hindcast.

On your question, I don't think ENSO significantly alters the Earth's energy balance, it's mostly just heat shifting around within the system. CO2 rise does seem to accelerate a bit in El Nino years, due to things like the forest fires in Indonesia in 1997, and increased plant and soil respiration / reduced photosynthesis in other droughted regions, but this is not enough to make a real difference. Cloud cover might change a bit I guess, but again I doubt if it is significant as an impact on the radiation budget.

However if anyone knows better I'd be interested to hear!

Jul 6, 2011 at 12:02 AM | Unregistered CommenterRichard Betts

"Cloud cover might change a bit I guess, but again I doubt if it is significant as an impact on the radiation budget."

What leads you to doubt? Hard to quantify I realise, but do you consider that it's unlikely to have a significant impact or is it an agnostic doubt?

Jul 6, 2011 at 12:36 AM | Unregistered Commentermrsean2k

Richard Betts writes:

"So no there is no real way to do what Judith Curry says is necessary. It's just down to whether the evidence gained from 1-5 above gives you sufficient confidence that some serious level/rate of warming is of sufficiently high probability to warrant action. Risk-averse people may still consider a relatively low probability to be "sufficiently high probability", but others may not."

This is an admirably clear statement that what you are doing is not science but policy. Would you be willing to broadcast that to the world? If so, you would put an end to all debate about climate science. Of course, I guess there would remain the Schmidts who would argue that they are doing science while demonstrating that they have no clue as to what the difference is. If one is engaged in using the tools of game theory for making decisions under uncertainty, which certainly seems to be what you are doing, then I cannot see it as science. The tools of game theory include Bayes' Theorem, which is wonderful for learning about one's weaknesses as a gambler, but it has no role to play in science. The product of science is reasonably well confirmed physical hypotheses and the tools needed to articulate them, usually advances in mathematics. At this time, the people who work on GCMs have produced no physical hypotheses that go beyond Arrhenius' work. Yet these people want to claim that there are dangerous scenarios that are probable when no such can be inferred from Arrhenius' physical hypotheses. Let me be clear about what I mean when referring to Arrhenius' hypotheses. Arrhenius gave us physical hypotheses which connect radiation theory to observable effects in the atmosphere. By contrast, the GCM people seem to treat Earth as if it had no physical properties other than heat exchange caused by radiation. Above, you state that, for your practical purposes, ENSO is an epi-phenomenon. Until you have physical hypotheses which describe natural processes on Earth, you will not advance beyond Arrhenius. But I do not see that you have so much as a plan to take up the matter down the road.

Jul 6, 2011 at 12:52 AM | Unregistered CommenterTheo Goodwin

Richard, thanks for your informative responses.

"The points were just there in the original paper to show the model agreed with observations in the hindcast."

I am assuming the model was tweaked until it agreed with observations in the past, rather than arising from purely theoretical considerations, in which case the people who say you could predict one observation into the future with a French curve and no knowledge of the climate (or even that the graph represented climate) have a definite point.

(French curve? Geez .. showing my age there: http://en.wikipedia.org/wiki/French_curve)

Jul 6, 2011 at 12:56 AM | Unregistered CommenterBruce Hoult

Theo

I see your point, but I'm not actually making the decisions, just providing what evidence I can to help inform them. My contribution to the evidence is the set of possible futures we think is plausible based on current understanding - other people's contributions include their advice on the economic consequences of decisions to take various courses of action.

And the bottom line, for me at any rate, is that we need to understand the climate system in order to both live with its changes/variations and help decide whether we need to minimise our influence on these.

Jul 6, 2011 at 1:30 AM | Unregistered CommenterRichard Betts

Bruce

We do obviously try to get the models to agree with present-day observations as best we can (although this is never perfect), but contrary to popular belief, we don't tweak the models to get them to agree with the past change - this would be simply too computationally expensive and time-consuming, as the computing cost and processing time is huge (it takes several months to run a century-scale simulation, even on a supercomputer). So yes it's pretty satisfying when it agrees reasonably well with observations when driven by historical forcings - although again this is never perfect...

Jul 6, 2011 at 1:37 AM | Unregistered CommenterRichard Betts

Richard Betts wrote:

Theo

"I see your point, but I'm not actually making the decisions, just providing what evidence I can to help inform them."

Will no GCM "worker" ever give us an answer about physical hypotheses? If not, you will never address the question of "forcings," such as the effects of rising CO2 concentrations on cloud formation. Poor old Arrhenius understood that the impact of CO2 on Earth depended entirely on forcings. According to him, CO2 might redirect radiation but whether that would heat or cool Earth depends entirely on the forcings. Yet forcings are not effects of radiation; they are effects of effects of radiation. In other words, redirected radiation might cause greater cloud cover and then the greater cloud cover might cause Earth to cool. The only way to know is to actually study forcings and develop empirical generalizations about them. As I see the research, no one in climate science is doing this and the Schmidts would shun anyone who attempted it. Yet creation and testing of genuine physical hypotheses is exactly what Svensmark is doing.

Svensmark is creating and testing physical hypotheses about the effects of cosmic rays on cloud formation. The important point here is that in Svensmark's theory (or set of hypotheses) there is a predicate something like "___is a cloud." That is, Svensmark's hypotheses actually refer to natural features of planet Earth; that is, natural features in addition to heat transfer and radiation. He resists the temptation to treat natural phenomena as "expressions" of underlying changes in heat transport caused by radiation. For those reasons, Svensmark can address the question of forcings while the GCM people have no way to refer to the phenomena in which forcings would occur. With their obsession with modelling the radiation budget, the GCM folk treat natural phenomena such as ENSO or cloud formation as if they were optical illusions that hide the underlying reality of radiation.

Svensmark's theory passes every test that scientific methodology can pose. What Svensmark is doing is genuine science. His results might not prove to be earth shattering and might not answer all scientific or policy questions about global warming, but he is practicing as a scientist and his work will live in the annals of science.

It is unconscionable for the GCM people to claim that their GCMs are compatible with scientific method, that they can substitute for actual physical hypotheses, or that they can be used for prediction. It is also unconscionable for the GCM people to assert that their models are the best way to address scientific questions about global warming. As we have known since the work of Arrhenius, the only way to address the questions of global warming is to address the forcings and the only way to do that is by creating and confirming physical hypotheses which describe natural regularities on planet Earth.

If the GCM people want to retreat into game theory and employ Bayesian statistics and subjective probabilities then they are not doing science. Let them have the honesty to say so. Then they can get on with their policy recommendations for everyone who find computer models adorable.

Finally, since all GCM people seem to be wholly ignorant of the terminology of scientific methodology, let me finish by pointing out that what GCM people call "hypotheses" are not. A typical example in a peer reviewed journal states that the hypothesis is that "rising CO2 concentrations will cause an increase in global average temperature." That statement is not falsifiable. All predictions must come with a time stamp. Because this hypothesis makes no reference to time then it is not falsifiable and has no place in science. For a clear and simple example of a set of scientific hypotheses, Google Newton's formulation of Kepler's Three Laws. With a GPS, an astronomical calendar, a Walmart telescope, and Kepler's Three Laws you can predict to within minutes the time that you can observe a given phase of Venus from your front lawn. Until GCM folk can come up with actual predictions, they should stop talking about forecasts. Instead, they should talk about their educated hunches.

Jul 6, 2011 at 3:58 AM | Unregistered CommenterTheo Goodwin

@richard betts

'Latimer Alder: yes, routinely updated forecasts are what is needed. We do issue forecasts of global mean temperature for the coming year every December, but you are rightly asking for more than that. Keep an eye on the literature for papers by my colleagues Doug Smith and Adam Scaife who are leading on this kind of work'

Thanks for your positive and civil reply. You stand out among your fellows just for achieving both of those things.

I'll be very interested to see the Smith and Scaife work. Ciao

Jul 6, 2011 at 6:16 AM | Unregistered CommenterLatimer Alder

Theo

"It is important to recognise one important fact about the climate models: they are hypotheses. Newer and more sophisticated and perhaps better hypotheses, but hypotheses all the same" - Andrew Montford, The Hockey Stick Illusion (page 384)

Jul 6, 2011 at 7:47 AM | Unregistered CommenterRichard Betts

Thanks for your answers Richard.

I keep thinking that GCMs need far more work before they can say anything sensible in the long term.

First, they underestimate natural variability: the past is too flat. This chart is a comparison of CRUTEM and the average of 2 runs from your HadCM3. There is very little oscillation in the model temperature while the CRU shows larger variations. R^2 is 0.38 for the annual temperatures and 0.65 for the decadal.

Second, the regional modelling is not that good, there are regional biases of up to 4K and 400% differences in specific humidity. That is serious, because water vapour is the cause of the main projected increase in radiative forcing, not CO2.

Third there are the clouds.

Fourth there are the oceans, which are begging to disagree in the non-trivial figure of tens of zettaJoules.

etc.

Those issues need to be resolved before we place any trust in GCM future projections, especially in the long term.

Jul 6, 2011 at 8:36 AM | Unregistered CommenterPatagon
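Patagon's comparison of annual and decadal R^2 can be illustrated with a short sketch (synthetic series below, since the actual CRUTEM and HadCM3 values aren't reproduced in the comment); the point it makes is simply that decadal averaging removes the interannual variability that the model lacks, so the decadal R^2 is bound to look better:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
trend = 0.006 * (years - years[0])                  # shared slow warming, K
obs = trend + rng.normal(0, 0.12, years.size)       # "observations": trend plus large year-to-year noise
model = trend + rng.normal(0, 0.04, years.size)     # "model": same trend but much flatter, as Patagon describes

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

def decadal(s):
    return s.reshape(-1, 10).mean(axis=1)           # non-overlapping 10-year means

print(f"annual  R^2: {r_squared(obs, model):.2f}")
print(f"decadal R^2: {r_squared(decadal(obs), decadal(model)):.2f}")   # higher, as expected
```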

An IPCC author quoting HSI. Maybe things really are changing. Will you be citing it in AR5 Richard? :)

Jul 6, 2011 at 8:58 AM | Unregistered CommenterPaulM

Richard, you mention that the GCM models are not fitted to reproduce past data. I completely accept this is true: these models are expensive enough to run as it is; if you had to do some kind of non-linear curve fitting exercise of all the GCM parameters then the Met Office really would need a few more teraflops for its computers ;-)

But can you comment on how much room there is for tuning of the models and their inputs? The cooling or temperature standstill after World War II is blamed on aerosols. How well do you feel that the forcing associated with aerosols being emitted at the time is known? It seems to me almost impossible that it is well enough known in terms of the basic physics let alone the emissions budget for aerosols at the time. The temptation to input just the right amount of aerosol forcing to 'get' the desired temperature curve must be pretty strong.

Jul 6, 2011 at 10:18 AM | Unregistered CommenterJeremy Harvey

An IPCC author quoting HSI.

Well, it's only a sample of one but I vote Richard Betts the best such 'visitor' we've ever had. Thanks to him. Which reminds me:

... it takes several months to run a century-scale simulation, even on a supercomputer.

This doesn't exactly aid independent replication, does it Richard? I do have real issues with this. That's the flip side of the 'excitement' I started with on your arrival.

I think we should start to talk about a hierarchy of openness of GCMs:

1. Open source code
2. Open source code and initial data
3. Open source, data and ability to step through the calculation showing every intermediate value exactly as in the original, without rounding errors.

Then we should start to talk about the set of published results of GCM runs. Those used by the main IPCC reports would be top of the tree, where it's vital to have as much openness as possible, because of the policy implications.

It would be a totally different world. And, I think, a necessary one.

Jul 6, 2011 at 10:28 AM | Unregistered CommenterRichard Drake

Richard Betts writes:

"It is important to recognise one important fact about the climate models: they are hypotheses. Newer and more sophisticated and perhaps better hypotheses, but hypotheses all the same" - Andrew Montford, The Hockey Stick Illusion (page 384)

OK, if you do not wish to discuss the standards that apply to physical hypotheses then I will take this quotation as making the simple claim that climate models are hypotheses. In that case, each and every one of them is not only radically false on the basis of predictions but they never so much as got into the ballpark of confirmed prediction. So, why do people continue to work on them? Love of the computer? Faith?

Jul 6, 2011 at 5:06 PM | Unregistered CommenterTheo Goodwin

@ GSW (Jul 5, 2011 at 8:15 PM)

I got the years off by 1 before; yes, it should be for 2000–2009.

The most important question in global warming is, arguably, this: what will the average global temperature be in each of the coming decades? Allen/Hadley made what is effectively a prediction for one decade. The prediction turned out to be spot on. There is little to be gained by discussing things like std.deviation: the central point of the prediction is virtually on top of the observation.

There can be discussions about how strong that is as evidence for the goodness of the climate model. I was just answering the original, statistical, question.


@ Theo Goodwin

Science broadly comprises two main activities: gathering data and theorizing about that data. Theories are virtually never exact; they are almost always approximations. Often though, approximations are most useful, even when we have near-exact theories (e.g. the approximations of Newtonian mechanics).

A given GCM constitutes a hypothesis: roughly, that the climate system works approximately like the GCM. Some of those models/approximations will be better than others. None will be perfect, but they can still be useful. There can be debates about how good/useful current GCMs are, and I believe that there are reasons for substantial criticisms. But work is ongoing to improve GCMs. Such work is surely valid science—developing, testing, and improving hypotheses.

My own view is that the climate system has so many feedbacks, both positive and negative, and so many nonlinear processes, that our only hope of understanding the system is to work on building GCMs. That work includes gathering data to calibrate and check the models. There has been a lot of progress over the past 15 years or so.

Jul 6, 2011 at 6:59 PM | Unregistered CommenterDouglas J. Keenan

Richard Drake

I actually do think that open source GCM code would have massive benefits, not least in terms of having more people checking and testing the code and unearthing bugs, but also of course through being able to contribute new stuff. We have definitely found this by making the JULES land surface model freely available.

There are significant practical issues in making such huge pieces of software available, such as technical support. With several hundred thousand lines of code, vast amounts of I/O and many options to choose from, it's not trivial to get it up and running. However, as I say, there are those who share the view that these models should be more accessible - we'll see how the internal debate goes!

But I really would encourage you to get hold of the JULES code:

http://www.jchmr.org/jules/

This is essentially the land surface scheme in the new Met Office Hadley Centre model HadGEM2-ES which we are running for AR5 (just with some minor differences). The JULES code shows how the surface energy and moisture budgets are calculated, and also how the terrestrial carbon cycle is treated. You can even get it running with a global dynamic vegetation model if you want. Some readers of this blog may be particularly interested in the code for plant physiological responses to CO2 - ie: CO2 fertilization and the effect of stomatal closure on water use efficiency. As I have mentioned previously here, some of our colleagues don't like us including those processes, but we think they are needed.

Go on, get the code and start using it... it all helps to demonstrate the importance and usefulness of sharing the models!

Jul 6, 2011 at 7:15 PM | Unregistered CommenterRichard Betts

Thanks Richard. I will download :)

Jul 6, 2011 at 7:36 PM | Unregistered CommenterRichard Drake

Douglas J Keenan writes:

"Science broadly comprises two main activities: gathering data and theorizing about that data. Theories are virtually never exact; they are almost always approximations. Often though, approximations are most useful, even when we have near-exact theories (e.g. the approximations of Newtonian mechanics)."

This is a Red Herring. Of course Newtonian Mechanics is now an approximation in the arena of physics and one that we rely upon totally as we work within this solar system, launching satellites and planetary probes and such. But the relationship between Newton and the more complicated physics of our time has been explicated fully and one can deduce Newton's formulations simply by limiting the ranges of variables found in the higher level theory.

What I object to is the claim that a statement such as "increasing CO2 concentrations cause increasing global temperatures" is a scientific hypothesis. That is not an approximation of anything. It cannot be deduced from a higher science. It certainly is not a hypothesis because it cannot be falsified. There are no time stamps for its so-called predictions. Read what I wrote above regarding Newton's formulation of Kepler's Three Laws. There you have prediction, and anything worthy of the name should meet that standard. If you have no predictions then you certainly have no hypotheses.

"A given GCM constitutes a hypothesis: roughly, that the climate system works approximately like the GCM."

What does this mean? The climate is not computer code.

"Some of those models/approximations will be better than others. None will be perfect, but they can still be useful."

They are not hypotheses and they cannot produce predictions. So, what are they good for? Nothing scientific.


"There can be debates about how good/useful current GCMs are, and I believe that there are reasons for substantial criticisms."

Yes, as I have done here. Would you care to engage me on this? Or contribute your own criticisms. They do not belong to science and the good name of science should not be associated with them.

"But work is ongoing to improve GCMs. Such work is surely valid science—developing, testing, and improving hypotheses."

To improve them how? So that they can now do what? What hypotheses? If you would offer a particular hypothesis, maybe we could have a fruitful discussion of it.

"My own view is that the climate system has so many feedbacks, both positive and negative, and so many nonlinear processes, that our only hope of understanding the system is to work on building GCMs."

In all the history of humanity, never have so many excuses been made as for climate models and climate science. Do you really believe that climate science will inspire the creation of a new mathematics as happened in the case of String Theory? I do not. Climate science is part of Natural History. It will not prove to be nearly as complicated as contemporary cosmology. And let me anticipate the Chaos excuse. Tipping points might introduce chaotic behavior into a system but the study of the tipping points themselves is purely deterministic.

"That work includes gathering data to calibrate and check the models. There has been a lot of progress over the past 15 years or so."

Finally, we get to data. It is way beyond ironic that, pushed to the wall, Hansen and others have introduced what is for them a classic "ad hoc hypothesis," namely, that China is producing enough aerosols to counteract warming from CO2. Two points. One is that this is the first time that any Warmista has addressed the importance of natural processes (apart from heat transfer caused by radiation) in their calculations. Yet they have been forced to do it. Two is that the models did not include China's production of aerosols. So, if the models are hypotheses then they failed to cover some of the known important data, the aerosols. For lack of the aerosols, the "projections" from the models have turned out to be completely worthless. If they were predictions, we could only conclude that they have set a new record for conclusive falsification. How can you defend something that is both incomplete and false to experience?

The need at this time in climate science is to learn scientific method and to start using the terminology of hypothesis and confirmation in an intelligible fashion. Climate scientists and modelers are hopelessly confused about the relationships that exist between a set of hypotheses and what they purport to explain, on the one hand, and, on the other hand, between the hypotheses and the evidence for them. Climate scientists cannot tell us what they purport to explain. The modelers model only processes of heat transfer caused by radiation. None of them address natural processes such as the history and effects of aerosols in the atmosphere, until pushed into a corner. Why? Because climate scientists look for only one thing: rising temperature numbers. That is not science.

Jul 6, 2011 at 11:19 PM | Unregistered CommenterTheo Goodwin

Theo

My colleagues Smith et al (2007) made the very specific prediction that "at least half of the years after 2009 [up until 2014] are predicted to be warmer than 1998, the warmest year currently on record." Maybe I am lost in the nuances, but I don't understand why this cannot be tested and shown to be either right or wrong within the next few years.

Also, testing the predictive capability on shorter timescales, are you flying transatlantic anytime soon? If so then keep your fingers crossed that our GCM can correctly forecast the position and speed of the jet stream, because your airline will be relying on it to decide where to route your plane and how much fuel to carry.

This is actually the same model used for climate projections, just run at shorter timescales. We are clearly happy with its skill in quite some detail on the timescale of days. At longer timescales (decades) we are reasonably happy with projections of long-term global mean temperature, albeit with fairly large uncertainties, and are even also happy with some of the general long-term regional precipitation trends as long as we can establish that there are credible physical mechanisms behind them. However as we have discussed there are also many cases where we are not happy with the skill of the model and have much more work to do. But we are at least now at a stage where we can attempt near-term (interannual) forecasts, evaluate them, and improve the model (and hindcasting is also important here).

Jul 7, 2011 at 12:41 AM | Unregistered CommenterRichard Betts
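The Smith et al. (2007) statement quoted above is easy to score once the observations are in; a minimal sketch, with made-up anomaly values standing in for the real record:

```python
# Hypothetical annual global-mean anomalies (K) for 2010-2014 -- illustrative numbers only,
# to be replaced by the real record when scoring the forecast.
anomaly_1998 = 0.55
anomalies = {2010: 0.56, 2011: 0.42, 2012: 0.47, 2013: 0.50, 2014: 0.57}

warmer = [yr for yr, t in anomalies.items() if t > anomaly_1998]
verified = len(warmer) >= len(anomalies) / 2
print("years warmer than 1998:", warmer)
print("'at least half of the years warmer than 1998' satisfied:", verified)
```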

BH: The two assessments are obviously very different - Lucia is saying that the models are doing rather badly, while Allen (Mitchell) et al are saying that they are doing fine. It seems to me that they cannot both be correct, but as a non-statistician I am not really in a position to say much about who is right. I have had some email correspondence with Myles Allen, who is quite certain that looking at sub-decadal intervals is meaningless. However, I have also read Matt Briggs' imprecations against smoothing time series, and his fulminations against smoothing them before calculating forecast skill.

Here's a partial reconciliation:

1) Lucia uses MEI to reduce the influence of natural variability associated with ENSOs. It does not correct for other sources of natural variability.

2) I absolutely agree with Myles Allen that you need to use "low-passed" versions of the data if you want to compare slow secular effects like anthropogenic CO2 forcing to data.

3) Matt Briggs is right as far as he goes.... smoothing data and then decimating it is nothing more than low-passing the data, and retaining only the non-aliased portion of the data. If you smooth the data without decimating it, you are introducing a large artifactual correlation into your data.

It is completely appropriate to take e.g., decade-only averages of data (one per decade) then use those to compare with model trends. What Lucia strives to do is shorten the period over which one can make a comparison between model and data trend. There is nothing wrong with this either, but one has to correctly model the unaccounted for natural variability over this period and the uncertainty intervals necessarily get increased (and I think Lucia's method still needs work in that sense).

Jul 7, 2011 at 3:14 AM | Unregistered CommenterCarrick
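Carrick's distinction between smoothing-then-decimating and smoothing alone is easy to demonstrate; a minimal sketch, using a 10-point running mean on white noise:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                     # white noise: lag-1 autocorrelation near zero

def lag1(s):
    """Lag-1 autocorrelation."""
    return np.corrcoef(s[:-1], s[1:])[0, 1]

window = 10
smoothed = np.convolve(x, np.ones(window) / window, mode="valid")  # running 10-point mean
decimated = smoothed[::window]                # keep one value per window: non-overlapping averages

print(f"raw series:              lag-1 = {lag1(x):+.2f}")          # ~0
print(f"smoothed, not decimated: lag-1 = {lag1(smoothed):+.2f}")   # ~0.9, the artefactual correlation
print(f"smoothed and decimated:  lag-1 = {lag1(decimated):+.2f}")  # back near 0
```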

@Carrick:

>2) I absolutely agree with Myles Allen that you need to use "low-passed"
>versions of the data if you want to compare slow secular effects like
> anthropogenic CO2 forcing to data.

I disagree with that. You can low-pass the outputs, not the inputs. Or, the human eye can do a good job of that.

The actual models are surely being iterated with a time scale much shorter than a decade, otherwise they would not require a supercomputer.


>If you smooth the data without decimating it, you are introducing a large
>artifactual correlation into your data.

True. I'd prefer not smoothing.


>It is completely appropriate to take e.g., decade-only averages of data
>(one per decade) then use those to compare with model trends.

It's certainly mathematically valid. The problem is that the number of data points accumulates only very slowly, and the nature of statistical tests is that the error expected from random sources scales inversely with the square root of the number of data points.

It is not very useful to make predictions where the standard deviation is bigger than the signal you're looking for.

Smoothing also means that you can't say anything useful about the recent past on a scale less than your smoothing interval.

Jul 7, 2011 at 4:06 AM | Unregistered CommenterBruce Hoult

Jul 7, 2011 at 12:41 AM | Richard Betts

At least this is another confirmation that the weather forecasting models are the same as the climate models, as you would expect. How old a weather forecast does an airline look at then, compared to the real weather at take off?? They would really risk not carrying enough fuel based on a weather forecast?? I'm flying back across the Atlantic in a few weeks so I'll keep my fingers crossed.

I agree with the main points of Theo's arguments though. GCMs aren't explicit hypotheses. You at least need to completely explain the physical mechanisms of what is programmed into the GCM and why.

Jul 7, 2011 at 4:36 AM | Unregistered CommenterRob B
