Predicting climate 100 years from now
Mar 10, 2010
Bishop Hill in Climate: Models

These are notes of a lecture given by Prof Tim Palmer on some of the fundamentals of weather prediction. The notes were taken by Simon Anthony. This is well worth a read, and I'm certainly struck by how little we know about how to forecast the climate.

If we can't forecast next month's weather, what hope for predicting climate 100 years from now?

Lecture at Dept of Earth Sciences, University of Oxford by Professor Tim Palmer, Royal Society Professor at Oxford, previously at the European Centre for Medium-Range Weather Forecasts.

[In contrast to simplistic fixed view of climate change preferred by journalists and politicians, TP adopts more traditional scientific view: create and develop models, make predictions, compare predictions with actual measurements, revise/replace models, try to understand models’ limitations. He seems happy to talk about uncertainties.  That said, he did sign the Met Office “Statement from the UK Science Community”… http://www.timesonline.co.uk/tol/news/environment/article6950783.ece .  Taken together with his final suggestion of the need for a “CERN for Climate”, I’d say he seems like a good scientist who believes in the importance of his science, trying to argue the best case for that science but not necessarily too concerned about “collateral damage”.]

Why ask this question?

Following Climategate, Glaciergate and the repeated failure of the Met Office's seasonal forecasts, this is a question the public and commentators often ask rhetorically to argue that long-term climate predictions must be nothing more than guesswork.

An answer for the public

The failure of short-term prediction doesn't necessarily mean long-term forecasts won’t work but you need to be clear what’s being predicted on different time-scales.

An illustration is the Lorenz model: a simplified weather model which showed "sensitive dependence on initial conditions" (aka the "butterfly effect"): two initially very close states may diverge to very different later states.

Lorenz's model “flips” between two different states; when it flips or how long it stays in one state before going to the other is unpredictable. However, the probability of being in either state over a long period is entirely predictable.
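
A minimal numerical sketch (mine, not from the lecture) of the standard Lorenz-63 equations illustrates both points: two runs started a whisker apart soon bear no resemblance to each other, yet the long-run fraction of time spent in each lobe of the attractor (taken here as the sign of x) is essentially the same for both.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Runge-Kutta step of the classic Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(s, n_steps=100_000):
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        s = lorenz_step(s)
        out[i] = s
    return out

a = trajectory(np.array([1.0, 1.0, 1.0]))
b = trajectory(np.array([1.0, 1.0, 1.0 + 1e-8]))   # the "flap of a butterfly's wings"

# "Weather": after ~25 model time units the two runs are completely different...
print("state separation at step 2500:", np.linalg.norm(a[2500] - b[2500]))
# ..."climate": the fraction of time spent in each lobe is all but identical.
print("fraction of time with x > 0:", (a[:, 0] > 0).mean(), (b[:, 0] > 0).mean())
```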

Warm and cold winters and greenhouse gases

The two states might be interpreted as "cold" and "warm" winters; while it isn't possible to predict whether a particular winter will be cold or warm, the proportion of each type of winter can be predicted.

When an extra "forcing" term is added to the Lorenz model, the system's flips are still unpredictable but the relative probabilities of the two states change in a predictable way. The forcing might be interpreted as the effect of greenhouse gases being added. You can’t say for sure whether this coming winter will be warm or cold but the model predicts that warm winters will be increasingly probable.
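
The same toy system can be forced. The exact forcing TP used isn't recorded in these notes; the sketch below assumes one common toy choice, adding a small constant term to the first two equations. The flips remain unpredictable, but the fraction of time spent in each regime shifts steadily as the forcing grows.

```python
import numpy as np

def occupancy(f, n_steps=200_000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Fraction of time the forced Lorenz system spends in the x > 0 regime.

    The forcing f is added to the x and y tendencies (an assumed toy choice;
    the exact form used in the lecture is not recorded in these notes)."""
    s = np.array([1.0, 1.0, 1.0])
    count = 0
    for _ in range(n_steps):
        x, y, z = s
        ds = np.array([sigma * (y - x) + f,
                       x * (rho - z) - y + f,
                       x * y - beta * z])
        s = s + dt * ds              # forward Euler is enough for a qualitative demo
        count += s[0] > 0
    return count / n_steps

for f in (0.0, 2.0, 4.0):
    print(f"forcing {f}: fraction of time in the x > 0 regime = {occupancy(f):.2f}")
```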

TP doesn't mean that this is a realistic model. It's only used to make the obvious point that even though you can't predict a single throw of a die, the probability of a particular number coming up over many throws is nonetheless predictable. Weather is a particular event while climate is the probability of particular events occurring.

Hurricane Fish and the butterfly

The butterfly effect in real weather is exemplified by Michael Fish's 1987 misfortune. At that time the Met Office made one prediction, based on the measured data, for any given time-scale. On October 15th, 1987, that single forecast, on which MF relied, didn't predict the UK hurricane.

Nowadays people do ensemble forecasting – they start not just from the actual measurements but from a number of similar, slightly altered initial values – and sometimes also average over different weather models. All these forecasts are run and the probability of various future weathers is assessed. Ensemble forecasts re-run using the data available to MF predict, with significant probability, hurricanes in SE UK, as well as lots of other outcomes. The initial conditions in October 1987 were unstable, with a range of very different local predictions.
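
As a cartoon of the procedure (again on the Lorenz toy rather than a real forecast model; the ensemble size and perturbation amplitude below are purely illustrative), the forecaster perturbs the best-estimate initial state many times, runs every member forward, and quotes the fraction of members in which the event of interest occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_run(state, n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 toy 'atmosphere' forward with forward Euler."""
    for _ in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
    return state

analysis = np.array([-5.0, -6.0, 22.0])        # the single "best estimate" of today's state
members = [analysis + rng.normal(scale=0.1, size=3) for _ in range(50)]

# A deterministic forecast gives one answer; the ensemble gives a probability.
deterministic = lorenz_run(analysis, 800)
event = [lorenz_run(m, 800)[0] > 0 for m in members]    # "event": ending in the x > 0 regime
print("deterministic forecast says event:", deterministic[0] > 0)
print("ensemble probability of event:   ", np.mean(event))
```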

Forecast probabilities
 
If the public better understood probability, the Met Office could give the relative probabilities of different forecasts. Such predictions can be quite good. For example, ECMWF predicts the probability of precipitation throughout Europe, then looks at all the places for which the predicted probability is, say, 60% to see whether the fraction that actually experience precipitation is indeed 60%. The predictions aren't perfect but work well (to within ~2-3%) across the whole range of probabilities.
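
In other words, the probability forecasts are being checked for reliability (calibration). A minimal sketch of that bookkeeping, using made-up forecast and observation arrays in place of the real ECMWF archive:

```python
import numpy as np

def reliability_table(forecast_prob, observed, bins=np.arange(0.0, 1.1, 0.1)):
    """Group probability forecasts into bins and compare the stated probability
    with the frequency of the event actually occurring in each bin."""
    forecast_prob = np.asarray(forecast_prob)
    observed = np.asarray(observed, dtype=float)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (forecast_prob >= lo) & (forecast_prob < hi)
        if in_bin.any():
            rows.append((0.5 * (lo + hi), observed[in_bin].mean(), in_bin.sum()))
    return rows   # (bin centre, observed frequency, number of forecasts)

# Synthetic example: a well-calibrated forecaster standing in for the real archive.
rng = np.random.default_rng(1)
p = rng.uniform(size=10_000)                 # forecast probabilities of precipitation
obs = rng.uniform(size=10_000) < p           # events occur with roughly those probabilities
for centre, freq, n in reliability_table(p, obs):
    print(f"forecast ~{centre:.2f}  observed {freq:.2f}  (n={n})")
```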

So the straightforward answer is that short-term failures of weather prediction are due to the perceived requirement of giving a single forecast rather than a range of forecasts with different probabilities.  If the probabilities can be accurately predicted then very long-range climate forecasts may still be possible.

Weather influences climate

But that was the easy message: the real relationship between weather and climate is subtler and more complex because initially small-scale weather effects may become amplified to affect climate. It’s not obvious beforehand when such complications will occur.

For example, using the same models that work well at predicting short-term weather probabilities, you can make regional seasonal forecasts and assess them in a similar way. TP has found that the predictions are OK for, say, the Amazon and Central America but completely wrong for Northern Europe – no better than chance. It seems that climate models may have systematic biases; for example, persistent blocking anti-cyclones (as the UK has experienced for the past few months) are typically under-represented in climate models.

“Robust” prediction goes wrong

Should the reliability of ensemble forecasts matter for regional climate predictions? One school of thought holds that, since the lifetime of a blocking anti-cyclone is very brief compared to, say, 100 years, such features don't matter.

TP thinks this may be simplistic. For example, IPCC AR4 described a supposedly robust signal of warmer and drier European summers in future, typified by that of 2003. But these predictions used a model with a grid spacing of ~160km. When the calculations are repeated on a grid spacing of ~20km, the signal is much weaker and more fragmentary. The difference seems to be due to the higher frequency of blocking anti-cyclones at the finer resolution.
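
A rough back-of-envelope sum (my numbers, not TP's) shows why such refinement isn't routine: an eight-fold reduction in grid spacing means roughly 64 times as many grid columns, and the time step normally has to shrink by a similar factor to keep the integration stable, so the atmospheric model becomes several hundred times more expensive to run.

```python
# Rough cost scaling for refining a ~160 km grid to ~20 km (illustrative numbers only).
coarse_km, fine_km = 160, 20
refinement = coarse_km / fine_km                 # 8x finer in each horizontal direction
columns = refinement ** 2                        # 64x more grid columns
timesteps = refinement                           # ~8x more time steps (CFL-type constraint)
print(f"~{columns * timesteps:.0f}x more computation per simulated year")  # ~512x
```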

The point is that climate models running on computers can’t include features smaller than their grid size but the actual climate may amplify the importance of such features so that they make a significant difference to climate predictions.
 

Possible fundamental problems

How much resolution is needed to capture climate change details? For example, convective instabilities (~km scale) aren't included in climate models; should they be? Does higher resolution reduce uncertainty? There’s no good theory for estimating how well climate simulations converge with increasing resolution.   Even worse, the equations themselves change with finer resolution as new features have to be included.

The underlying unknown is whether there is an irreducible level of uncertainty in the climate equations.

How do you test predictions of climate 100 years hence?

Obviously you can't use the traditional method of comparison with real measurements. You can only make and test relatively short-term predictions. If these prove accurate, with known limitations, and the same models are used for the 100-year predictions, then you may have some confidence in the longer-term predictions.

We need bigger computers

All of which invites the question: does climate science have enough computing power to establish its own limitations?   Perhaps there’s a need for a “CERN for climate science”, something apparently to be proposed here… http://www.21school.ox.ac.uk/news_and_events/events/201001_Bishop.cfm  …by a Dr Robert Bishop, president of something called the “International Centre for Earth Simulation Foundation”.  [I haven’t been able to find out any more about the ICESF so it may currently be only a glint in Dr Bishop’s eye.]

 

 
