Saturday, May 12, 2012

Stocker in Oxford

Simon Anthony sends this report of Thomas Stocker's recent talk in Oxford.

Yesterday I attended a talk at Wolfson College, Oxford, given by Thomas Stocker, co-chair of the IPCC's AR5 Working Group 1, on "Climate Change: Making the best use of scientific information".  He's an intelligent, well-mannered and rational man, in a position of great influence.  It's therefore all the more concerning to see the weakness of the evidence and arguments which have, it seems, convinced him of the reality and urgency of AGW, and which he feels should convince everyone else.

Now one wouldn't expect the head of an IPCC working group to pour scorn on the evidence for AGW (after all, the Pope is unlikely to ask Richard Dawkins to write one of his encyclicals to the faithful).  However, while nothing he had to say was novel, I think it's reasonable to assume that Prof Stocker brought along the very best evidence he had, not leaving the really good arguments back at home.  So it's all the odder that what he had to say was so weak.  

He told us that the IPCC's unequivocal view was that the climate had got warmer.  He seemed to think that sceptics would disagree with this.  (I suppose some might, but not many, so the statement rather obviously begs questions of speed, typicality and relative importance of the various causes.)

His main concern was with communicating key information so that non-specialists (public and policy makers) reached the same conclusions that he and his colleagues had.  He therefore showed three graphs, of global temperature, sea level and snow cover which he thought were conclusive.  The charts were all from AR4's Summary for Policy Makers and it's true they all showed the behaviour you'd expect in a warming world.  Unfortunately the start dates for the data sets were 1850, 1870 and 1920 respectively, so giving no comparison with longer term behaviour and all therefore again begging the obvious questions.

He discussed the uncertainty in "forecasts" from various "scenarios" (or whatever they're called in climate science-speak) for future temperatures up to 2100.  He described the difficulty in explaining the uncertainty in these "predictions" (I'm stuck in Oldspeak) due to the variations between and within the models.  These included parameters, initial conditions, physical processes, natural variability, economic assumptions and so on.  These factors were combined together "mathematically" and then subject to "expert" interpretation before being delivered to policy makers.

It was striking that not once did he suggest that the models' uncertainties (or "errors" in Oldspeak) should be established by comparing their "predictions" against measured data.  The only sources of uncertainty with which Prof Stocker seemed concerned were between and within models.  It seemed comparison with what was supposedly modelled was not relevant.

There was another troubling note in that Prof Stocker referred several times to "deniers".  It was, at the very least, unfortunate that a man of such seniority should look on those who disagree with his views in such terms.  It struck a discordant note from a man who otherwise seemed polite.  It was also ironic in that he said that he and his colleagues mustn't respond in equivalent terms to the "provocations" of climate change deniers.  (He seemed put out that he'd had to spend "several hours" responding to FOI requests.  Apparently these mostly came from the UK - I don't know who's doing it, but keep it up.  To his credit - or at least not adding to his debits - he did say, albeit reluctantly, that climate scientists should continue properly to respond to such requests.)

Also to his credit - or at any rate avoiding an obvious trap - in response to a question he insisted that scientists shouldn't take activist roles and shouldn't be influenced by WWF, Greenpeace or Heartland.

Something which has constantly surprised me is how otherwise intelligent and rational people can come to believe strongly in something for which the evidence is weak or, in some cases, absent.  I suppose the most striking example is Newton who, judging by the amount of work he devoted to it, attached far more importance to alchemy than to science and mathematics.  I suppose in Newton's case it might have been, at least in part, because he lived quite an isolated life and didn't discuss his work much with others.

Now obviously the weakly grounded beliefs of modern climate scientists aren't in the Newtonian league (and nor, equally, is their work), but they do seem to hold very strong beliefs based on weak or equivocal evidence.  Of course they don't tend to spend their time in hermit-like isolation (see below), but they are seldom if ever obliged to engage directly with lucid sceptics.  So I wonder if the strength of climate scientists' weakly supported beliefs arises because, like Newton, they seldom or never face opposition, and because they constantly reinforce one another's beliefs.

I also wonder (hope?) if, in some more rational future, "AGW" will be investigated mainly by psychologists as a powerful example of a recurrent and damaging aspect of human behaviour.

Oh, and this is somewhat BTW, and possibly unfair, but I looked at the website for WG1 and came across a list of the meetings and workshops for the group.  The noble men and women in WG1 have, since 2009, endured travel to and stay in the following locations: Honolulu, Oslo, Venice, Geneva, Bali, Panama, Boulder, Hanoi, Kuala Lumpur, Stanford, Belgium (hmmm, someone slipped up), Geneva again, Kunming (China), Okinawa, Gold Coast (Australia), Lima, Brest, Kampala and Marrakech.  They are plainly terrified of the effect of all those CO2 emissions.


Reader Comments (118)

May 13, 2012 at 7:30 PM | Simon Anthony

Hi Simon

Thanks for your question.

When the models are developed, we use data from recent decades to get the present-day state as accurate as possible - see Martin et al (2006).

Then we see how well the models compare with the observed changes over the last century or so, see Stott et al (2006).

Not the same paper, but the second does refer to the first.

Cheers

Richard

May 14, 2012 at 10:58 PM | Registered Commenter Richard Betts

May 13, 2012 at 7:03 AM | Arthur Dent

In my case, most of the contact between the 8 authors of my chapter is electronic, mostly by email but a bit by Skype. So far in AR5 we have met face-to-face twice, at the main meetings for all authors (approx 250).

The main meetings for all authors last 4 days, and involve meetings of individual chapter writing teams, cross-chapter meetings on areas of common interest, and plenary sessions where we hear updates from all chapters and have presentations from the chairs and TSU on general issues like writing style and uncertainty quantification.

More details are here.

Cheers

Richard

May 15, 2012 at 8:56 AM | Registered Commenter Richard Betts

"When the models are developed, we use data from recent decades to get the present-day state as good as possible ... Then we see how well the models compare with the observed changes over the last century or so ... "

Richard - are you not omitting the third step? ;-)

May 15, 2012 at 9:58 AM | Registered Commenter matthu

@ James Evans, Richard Betts

There's something about those charts of decadal temperature predictions which looks a bit odd, at least to me: the 90% confidence intervals remain the same throughout the 10 years of predictions.

Now, I think it's fairly well established that 1-day weather forecasts are much more accurate than 5-day weather forecasts (i.e. the 90% confidence intervals, say, for a 1-day forecast are much smaller than for a 5-day one). I'd similarly expect a temperature forecast for 10 years into the future to have wider confidence intervals (I'd go so far as to say "much wider") than a 1-year forecast. And yet they seem (at least to my admittedly less than 20:20 vision) to be the same. I wonder why that is.

May 15, 2012 at 10:33 AM | Unregistered Commenter Simon Anthony

@ James Evans, Richard Betts

Just another passing thought on those 10-year temperature predictions. This is fairly rough but suppose you'd taken the temperature at the start of each of the 10-year periods (1985, 1995, 2005) and drawn a straight horizontal line for your forecast, along with the same sized "confidence intervals" as given by the Met Office.

I think this very cost-effective method (involving a ruler and a pencil) would have matched the MO in remaining more or less entirely within the 90% CI band. I also think it would give the MO's model a run for its money if you calculated the rms difference between predicted and measured temperature over each 10-year interval. In this regard, for the most recent interval the rather simpler method would have done significantly better than the MO's forecast.
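
To make the comparison concrete, here's a rough sketch of the calculation I have in mind (the anomaly values and the model's trend are invented placeholders, not actual Met Office or HadCRUT numbers):

```python
import numpy as np

def rms_error(forecast, observed):
    """Root-mean-square difference between a forecast series and observations."""
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    return np.sqrt(np.mean((forecast - observed) ** 2))

# Invented illustrative anomalies (deg C) over one 10-year interval --
# NOT real HadCRUT or Met Office numbers.
observed = np.array([0.40, 0.42, 0.38, 0.45, 0.41, 0.44, 0.39, 0.43, 0.46, 0.42])

# "No-change" (persistence) forecast: a horizontal line at the starting value.
persistence = np.full_like(observed, observed[0])

# A hypothetical model forecast with a steady warming trend.
model = observed[0] + 0.02 * np.arange(len(observed))

print(f"persistence RMS error: {rms_error(persistence, observed):.3f} C")
print(f"model RMS error:       {rms_error(model, observed):.3f} C")
```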

Now the "no-change" estimate with a start-date of 1985 would have got current temperatures wrong by being ~0.3 degrees low. I wonder what the Met Office's forecasts from 1985 predicted for today's temperatures, or even (since the model has changed in the meantime) what the current MO model would forecast for today, given 1985 initial conditions and parameter values derived from pre-1985 data.

Richard, can you shed any light on this?

May 15, 2012 at 12:59 PM | Unregistered Commenter Simon Anthony

Richard Betts,

Thanks for the response.

May 15, 2012 at 5:54 PM | Unregistered Commenter James Evans

Richard Betts,

A question just occurred to me as I was stood outside having a gasper. Why was the forecast updated? I mean, will a new forecast replace the current one in a year or two? Or will the Met Office stick with the current one?

Thanks.

May 15, 2012 at 6:02 PM | Unregistered Commenter James Evans

Hi Simon

Thanks for the interesting questions!

You are right that for the daily/weekly local forecasts, confidence intervals are small for day one but increase for the following days. However, it does not logically follow that the same should be true for forecasts of global annual temperatures - the uncertainties in forecasting next year's global temperature anomaly are very large, much larger than for forecasting tomorrow's local temperature. Year-to-year processes do not follow the same patterns as day-to-day processes. For the decadal forecasts, the uncertainty becomes large rapidly because of difficulties in predicting internal variability in the system.

I think that a model started in 1985 conditions would have given a best estimate of warming that was above what has actually occurred, but the lower end of the confidence range would probably still include the actual observed record. That exact study has not been done, but that would be my guess based on what we see of other forecasts started at more recent times (2005) and of long-term projections started at pre-industrial conditions and looking at the difference between 1985 and now.

Cheers

Richard

May 15, 2012 at 11:45 PM | Registered Commenter Richard Betts

Hi James

The forecast was updated because after a year you can use more recent initial conditions to start the forecast in the right place, and hopefully give an improved forecast as you get nearer the target date. It's just like a forecast for the coming weekend: we start with a 5-day forecast issued on Tuesday, then a 3-day on Thursday and a 1-day on Friday, so the idea is that you keep an eye on the forecast throughout the week and hone your plans accordingly. As I said to Simon above, the reduction in uncertainty is often not so great with successive decadal global mean temperature forecasts, but at least it gives you the chance to see whether the forecast has changed substantially in the light of recent data. Yes, the forecast will be updated either next year or the year after, not only due to the availability of new observational data but also due to a new model.

Cheers

Richard

May 15, 2012 at 11:52 PM | Registered Commenter Richard Betts

@ Richard Betts

Thanks for your reply. A few comments/questions:

"the uncertainties in forecasting next year's global temperature anomaly are very large, much larger then forecasting tomorrow's local temperature"

I don't think that can be right. According to the MO decadal predictions, "next" year's global temperature is forecast to within (90% CI) +/- ~0.2 degrees. I'd be surprised if anyone thought their forecast for tomorrow's temperature was accurate to within a degree. So, unless I'm missing something, forecasting tomorrow's local temperature is apparently much more uncertain than forecasting next year's global temperature anomaly.

Are the confidence intervals in practice just set by calculating the standard deviation of the measured interannual temp anomaly variations?

"Year-to-year processes do not follow the same patterns as day-to-day processes. For the decadal forecasts, the uncertainty becomes large rapidly because of difficulties in predicting internal variability in the system."

The first part is clearly true for obvious astronomical reasons but, I think, not necessarily relevant. As for the second part, from papers to which you linked, it seems that the MO uses the same computer model for decadal forecasts as daily weather forecasting (the time scales differentiated principally by time and space resolution). So surely they're both subject to the same uncertainties?

From what I've read, uncertainties roughly double in extending the forecast from 1 to 5 days (approximately the same kind of behaviour as Brownian motion, I seem to remember). Do you know a paper which shows the dependency of forecast uncertainty on the relevant variables (time and space resolution and length of forecast I guess)? And is there an equivalent for climate forecasts?
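
As a back-of-envelope check on the Brownian-motion analogy (my assumption, not anything taken from an MO paper): if forecast errors accumulate like a random walk, the uncertainty grows as the square root of the forecast length, and √5 ≈ 2.2, which is consistent with the rough doubling from 1 to 5 days. A minimal sketch:

```python
import numpy as np

# Assumed scaling (not from any Met Office document): if forecast errors
# accumulate like a random walk, uncertainty grows as sqrt(t).
sigma_1day = 1.0  # arbitrary units: uncertainty of a 1-day forecast

for days in (1, 2, 5, 10):
    print(f"{days:2d}-day uncertainty: {sigma_1day * np.sqrt(days):.2f}x the 1-day value")

# sqrt(5) ~ 2.24, so a rough doubling between 1-day and 5-day forecasts
# is consistent with random-walk error growth.
```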

"I think that a model started in 1985 conditions would have given a best estimate of warming that was above what has actually occurred, but the lower end of the confidence range would probably still include the actual observed record. That exact study has not been done"

It would be interesting if such a study were to be done so that one could see whether your intuition is correct. If one simply extends the 1985-1995 prediction in a straight line, today's predicted temperature anomaly would be ~1.2 degrees, leaving current measurements well outside the confidence range (unless the 90% confidence range became so wide as to make predictions too vague to be useful). It seems the model would have to make a fairly dramatic downward shift in its rate of increase sometime after 1995 to get anywhere near today's anomalies.
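
For illustration, the straight-line extrapolation I mean is just this (the slope and 1985 anomaly below are hypothetical values chosen to show the arithmetic, not the actual 1985-1995 forecast):

```python
# Straight-line extrapolation of a forecast trend. The slope and start
# value are hypothetical, chosen only to illustrate the arithmetic --
# they are not the actual MO 1985-1995 forecast.
rate_per_year = 0.03   # assumed forecast warming rate, C/yr
anomaly_1985 = 0.4     # assumed 1985 anomaly, C

anomaly_2012 = anomaly_1985 + rate_per_year * (2012 - 1985)
print(f"extrapolated 2012 anomaly: {anomaly_2012:.2f} C")  # ~1.2 C
```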

May 16, 2012 at 9:42 AM | Unregistered Commenter Simon Anthony


According to the MO decadal predictions, "next" year's global temperature is forecast to within (90% CI) +/- ~0.2 degrees. I'd be surprised if anyone thought their forecast for tomorrow's temperature was accurate to within a degree. So, unless I'm missing something, forecasting tomorrow's local temperature is apparently much more uncertain than forecasting next year's global temperature anomaly.

Hi Simon

I mean uncertainty relative to the natural variability. The forecast for the 2012 global mean temperature anomaly says:

2012 is expected to be around 0.48 °C warmer than the long-term (1961-1990) global average of 14.0 °C, with a predicted likely range of between 0.34 °C and 0.62 °C, according to the Met Office annual global temperature forecast.

so the uncertainty is ±0.14 °C.

However if you look at HadCRUT4 you'll see that the interannual variability is generally smaller than that. So, the uncertainty is large compared to the signal we are trying to capture.

Contrast that with the daily temperatures for a specific location - they can differ by several degrees or more from day to day, and the uncertainty is generally much less than this, so in this case the uncertainty is small compared to the signal we are trying to capture.
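
To put that contrast in numbers (only the ±0.14 °C figure comes from the forecast quoted above; the daily-forecast values are rough illustrative guesses, not official ones):

```python
# Uncertainty relative to the variability being predicted. Only the
# +-0.14 C figure comes from the 2012 forecast quoted above; the rest
# are rough illustrative guesses, not Met Office values.
cases = {
    # name: (forecast uncertainty in C, typical size of the variations in C)
    "annual global anomaly":   (0.14, 0.10),  # interannual swings ~0.1 C
    "daily local temperature": (1.5,  5.0),   # day-to-day swings of several C
}

for name, (uncertainty, signal) in cases.items():
    ratio = uncertainty / signal
    verdict = "large" if ratio > 1 else "small"
    print(f"{name}: uncertainty/variability = {ratio:.2f} ({verdict})")
```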

the MO uses the same computer model for decadal forecasts as daily weather forecasting (the time scales differentiated principally by time and space resolution). So surely they're both subject to the same uncertainties?

Same basic processes but there's a difference between local, one-day effects and averages over the whole world and whole year.

For one location on one day, we only have to worry about one weather system, eg: one front or anticyclone, whereas for the global mean temperature we have to worry about hundreds / thousands / tens of thousands over the whole world over the whole year.
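
A toy illustration of why the averaging matters (synthetic numbers, purely for intuition): the spread of the mean of N roughly independent contributions shrinks as 1/√N.

```python
import numpy as np

# Toy model: the global mean averages over many roughly independent
# weather systems, so the spread of the mean shrinks as ~1/sqrt(N).
rng = np.random.default_rng(1)
per_system_spread = 3.0  # assumed spread of one system's contribution, C

for n_systems in (1, 100, 1000):
    trials = rng.normal(0.0, per_system_spread, size=(5000, n_systems))
    print(f"N={n_systems:5d}: std of the mean = {trials.mean(axis=1).std():.3f} C")
```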

Cheers

Richard

May 16, 2012 at 7:32 PM | Registered Commenter Richard Betts

Richard,

Thanks, I hadn't appreciated that you do a new one of these every year or two. That's quite a few forecasts that must have been made. Is there somewhere where I can see all the decadal global temperature forecasts that have been made?

James

May 16, 2012 at 8:01 PM | Unregistered Commenter James Evans

@ Richard Betts, James Evans

If I could add to James' request for info: are the numerical data used to plot the charts of forecast temperature anomalies available and, if so, where might I see them?

May 16, 2012 at 8:58 PM | Unregistered Commenter Simon Anthony

Hi James and Simon

The first decadal forecast was published in this paper in 2007, and I think the two figures you (James) found are the two updates since then. You could try asking the lead author for further info.

Data for the latest set of decadal forecasts (done for IPCC AR5) are available via the 5th Coupled Model Intercomparison Project (CMIP5). An overview of the various model simulations (including decadal simulations as well as the more traditional centennial ones) is here and the data portal is here.

This includes data from other modelling centres as well as the Met Office Hadley Centre.

Cheers

Richard

May 16, 2012 at 10:52 PM | Registered Commenter Richard Betts

I've just noticed that Judith Curry has published a paper on the decadal hindcasts in CMIP5 (that have been done for IPCC AR5) - she has a blog post about it.

May 17, 2012 at 12:19 PM | Registered Commenter Richard Betts

Richard,

Yes, I noticed that too. I haven't read the paper yet. Thanks for taking the time to answer my questions. Most useful.

James

May 17, 2012 at 6:52 PM | Unregistered Commenter James Evans

I "cherry picked" the following from Dr. Curry's paper:

"....... the models predict less warming or even cooling in the earlier decades compared to observations and too much warming in recent decades."

This could be explained if the models had been tuned to show increasing temperatures in line with increasing CO2 levels, using low pass or bandpass filters to extract the warming signal over decadal timescales.
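
To illustrate the kind of filtering I mean (a sketch with synthetic data; it implies nothing about how any real model is actually tuned):

```python
import numpy as np

# Extracting a slow "decadal" signal from a noisy annual series with a
# simple moving-average low-pass filter. Synthetic data only.
rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
trend = 0.01 * (years - years[0])               # slow warming signal, C
noise = rng.normal(0.0, 0.1, size=years.size)   # interannual "weather" noise, C
series = trend + noise

window = 11                                     # ~decadal window, years
smoothed = np.convolve(series, np.ones(window) / window, mode="valid")

print(f"raw std: {series.std():.3f} C, smoothed std: {smoothed.std():.3f} C")
```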

The "radiation scheme" within the models is only as accurate as the representation of albedo (cloud cover) within the grids. I would love to know how this works.

May 18, 2012 at 8:17 AM | Unregistered Commenter Roger Longstaff

