Thursday
Jan 10, 2013
by Bishop Hill
Spot the difference
Jan 10, 2013 Climate: MetOffice Climate: Models
I'm still suffering. Even whisky isn't working. It must be serious.
In the meantime, Paul Homewood has found something interesting about the Met Office's forecasts.
Reader Comments (216)
Ah! In answer to my own question, it just occurred to me that they are probably 5-year runs joined together from 1960 onward. If so, I'm not sure why that couldn't be more explicitly stated.
Maybe it shouldn't be on a page titled Decadal Forecast? ;)
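To make the stitching idea concrete, here is a rough sketch of what "joined 5-year runs" might look like; the segment length, the start years, the re-initialisation from observations, and every number in it are my guesses, not anything the Met Office has published.

# Hypothetical sketch: nine 5-year hindcast segments from 1960 onward,
# each re-initialised from an observed value. All numbers are invented.
observed = {year: 0.01 * (year - 1960) for year in range(1960, 2006)}  # fake anomaly series

def run_segment(start_year, init_value, length=5):
    # Stand-in for a model run: it just drifts away from its initial value.
    return [(start_year + i, init_value + 0.005 * i) for i in range(length + 1)]

segments = [run_segment(start, observed[start])   # re-initialise from 'observations'
            for start in range(1960, 2005, 5)]    # 1960, 1965, ..., 2000: nine runs

# Plotted end to end, the segments look like one continuous hindcast,
# even though each one only ever ran five years from a fresh start.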
It seems to me that the Met has only done one thing wrong. You can't blame them for trying to model the climate. It seems difficult or downright impossible, but one doesn't mind them having a go. The problem is that right now there is no indication that they have it modelled to any sort of duplication of reality, but they pretend they do. Now, if there is an output from this month's model that says something, and last month or last year it said something different, it is plain from that fact alone that the model cannot be used to formulate policy. It can only be used as a step to a better model.

So the mistake of the MO is to publish the results, knowing very well that they depend on parameters, that some processes are not well understood, that the actual achievement of a decent working model would be a scientific and computing feat of unprecedented skill. The scientists who run the model shouldn't let their results get anywhere near the ambitious bureaucrats who want to influence policy or can't admit to politicians that all that money doesn't actually produce useful predictions and may never do so.
All they need to do is stop issuing model output in press releases as if it were reliable. It is, but not in the way they mean.
Oh, and putting a gag on their chief scientist to stop her saying daft things might help too.
Were the MO to disband now, what impact would it have on global temperatures in 50 years?
Jan 11, 2013 at 7:12 PM | Martin A
This comment is probably a bit late now. When the UK had an electronics industry and I worked in it, the designers designed things, manufacturing made them, and the Test Engineers took the specifications and tested against them. When systems failed in the field, the first question went to the Test Engineers: "why hasn't your test detected this failure mode?" Sometimes the testing was inadequate, sometimes the manufacturing process made for Early Life Failures, sometimes the design wasn't up to handling all the conditions met in the field, and sometimes the components had built-in problems (early 1K DRAMs - yes, 1 kilobit! - were a disaster in the early days). Modelling of anything seems to be exactly the same.
Hopefully on topic as well!
Jan 12, 2013 at 10:11 AM | Justin Ert
I'm sure the MO could provide a projection.
Jan 12, 2013 at 8:43 AM | The Leopard In The Basement
I noticed that as well; at the top of the page they redefined 'decadal':
"Decadal forecasts, also called 'near-term' climate predictions, range up to a decade ahead".
...and I don't understand this lack-of-computing-power argument. They managed to run the hindcast back a further 25 years (five extra 5-year runs) and then didn't have any time left to run the forecast out 5 more years?
redc
To do the forecast out to 10 years and have sufficient confidence in it, we'd have had to do the hindcasts over 10 years too in order to check the performance over that longer period, and that would have further increased the computing cost.
Hi Richard
Ballpark, how much would it cost to go the extra 5 years? Surely we can get more funding from government to make that happen?
Also, just curious: how long in computer time would it take?
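While we wait for an answer, here is a back-of-envelope way to see Richard's scaling point; the unit cost, the number of start dates and the ensemble size below are placeholders I made up, not Met Office figures.

# Crude scaling sketch, not real numbers: if a coupled model costs about the
# same per simulated year, doubling run length doubles the cost per start
# date, for every hindcast start date as well as the forecast itself.
cost_per_model_year = 1.0   # arbitrary unit; the real core-hour figure is unknown to me
n_start_dates = 10          # assumed: hindcast start dates plus the forecast
ensemble_size = 1           # assumed 1 for simplicity; real ensembles are bigger

def campaign_cost(run_length_years):
    return cost_per_model_year * run_length_years * n_start_dates * ensemble_size

print(campaign_cost(5))    # 50.0  -> 5-year runs
print(campaign_cost(10))   # 100.0 -> 10-year runs: roughly double the bill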
Barry Woods -
Further to your list of overly-warm predictions on the previous page of comments, I might add from the 2007 Smith et al. paper describing the new model (thoughtfully provided by Dr Betts - thanks!):
"further warming during the coming decade, with the year 2014 predicted to be 0.30° ± 0.21°C [...] warmer than the observed value for 2004. Furthermore, at least half of the years after 2009 are predicted to be warmer than 1998, the warmest year currently on record."
Good point, rhoda (9:20 AM)
I agree. The computer models have been of some benefit in helping extrapolate a few hours to a few days ahead from existing weather conditions - this is approximately the lifetime of the major features in the troposphere which dominate our weather, e.g. a depression crossing the Atlantic, before they dissolve or evolve into something much more or much less intense. But climate modelling is a different beast altogether. Naive observers such as Paul Nurse of the Royal Society are so impressed that our satellites, radiosondes, radar and remaining ship and surface stations can produce so much data, and that the computers can absorb it to produce a decent representation of current and imminent conditions, that they are readily seduced into thinking good things about the models when it comes to climate. Some may recall as much from his shallow TV programme on scepticism and science:
[https://sites.google.com/site/mytranscriptbox/home/20110124_hz]
Paul Nurse could have been replaced by an articulate child in that clip, and we'd have been none the worse off for the replacement.
I remember how computer models of climate were regarded with some amusement in the 1970s. You could get the models to do almost anything, sometimes without even having to touch them once underway. Ice-covered sphere, iceless sphere, and much in-between. The ice-covered sphere, as I recall, was actually a hard one to pull out of. Once a model went there, you might have to wait quite a while for anything to change!

Since then, of course, computers are bigger and faster, pampering techniques to keep outputs in acceptable bounds have no doubt improved, and flux adjustment proved handy when the wheeze of 'forcing' came along to give the modeller a sense of being able to model all sorts of things, as long as they could be, in the modeller's mind at least, converted to an 'external forcing'. There is no doubt unlimited work for the modellers, the programmers and the subject-matter experts to do with GCMs. But I wish they would do it behind closed doors, and not rush out from time to time clutching press releases to tell us of our doom and get political activists all excited about this new lever for interfering in our lives.

Perhaps the genie is too far out of the bottle, albeit a doddery old genie that can't seem to do much for us except provide fuel for totalitarians and do-gooders. So a moratorium on GCMs would not work. A code of ethics might help. Or better still, lavish funding for really critical audits of models and their limitations.
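For anyone who hasn't seen why the ice-covered sphere is such a trap, a toy zero-dimensional energy-balance model with ice-albedo feedback shows the two stable states; the parameter values below are textbook-style choices of mine and have nothing to do with any real GCM.

# Toy energy-balance model: high albedo when frozen keeps the planet frozen.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1361.0       # solar constant, W m-2
EPS = 0.58        # effective emissivity (crude greenhouse stand-in, assumed)

def albedo(T):
    # High albedo when ice-covered, low when warm, linear ramp in between.
    if T <= 260.0:
        return 0.7
    if T >= 290.0:
        return 0.3
    return 0.7 - 0.4 * (T - 260.0) / 30.0

def equilibrate(T, steps=100000, dt=0.01):
    # Relax temperature toward energy balance from an initial state T (kelvin).
    for _ in range(steps):
        net = S0 * (1.0 - albedo(T)) / 4.0 - EPS * SIGMA * T**4
        T += dt * net / 10.0   # arbitrary heat-capacity scaling
    return T

print(equilibrate(230.0))  # starts icy  -> stays near a cold state (~236 K)
print(equilibrate(300.0))  # starts warm -> settles near a warm state (~292 K)

With these made-up numbers, the frozen start never escapes: its high albedo rejects so much sunlight that the cold equilibrium is self-sustaining, which is the "hard one to pull out of".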
I found a page on the MO which helped answer some of my questions.
Decadal Forecasting - What is it and what does it tell us?
Not sure what the merit is of showing the nine 5-year hindcasts back to 1960. Surely they can only be meaningfully looked at individually. I guess the cost in time and money is the same for each. The only effect, at the scale shown, of shorter runs reaching further back in time is an apparent closer agreement with long-term trends.
Also interesting that the 5-year segments don't all seem to start from a point on the actual temperature line. If they can start nearly 0.2 degrees out (see 1980) when refreshed with real-world data, what are we supposed to make of their continued fluctuations over the next 5 years?
I'm not sure what we are supposed to be impressed by; taking the 1980 segment, you can see a projected valley that totally contradicts the actual sharp peak in the overlaid real temperatures. But not to worry: it gets reset in 1985 to real-world figures and we are off again for another five (hindsight) years, which look vaguely more like the real temperature shape this time.
The whole exercise and deployment of the information seems incredibly arbitrary, un-anchored in rigour.
Ironically the projected temps to 2017 don't mean anything to me after looking at this. But I am interested in the questions this may have raised among the more PR-minded at the MO. Their selective shyness and publicity-seeking isn't very convincing either.
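The check being described is easy to make mechanical. A sketch of it, where all the values, including the deliberately-off 1980 start, are invented to mimic the chart rather than read from it:

# How far off the observed line does each hindcast segment start?
# Every value below is invented for illustration, loosely mimicking the chart.
observed_at_start = {1960: 0.00, 1965: 0.02, 1970: 0.01, 1975: 0.05,
                     1980: 0.10, 1985: 0.12, 1990: 0.20, 1995: 0.25, 2000: 0.35}
segment_start     = {1960: 0.01, 1965: 0.03, 1970: 0.00, 1975: 0.04,
                     1980: 0.29, 1985: 0.13, 1990: 0.22, 1995: 0.24, 2000: 0.36}

for year in sorted(observed_at_start):
    offset = segment_start[year] - observed_at_start[year]
    note = "  <-- nearly 0.2 degC out, as noted above" if abs(offset) >= 0.15 else ""
    print(f"{year}: start offset {offset:+.2f} degC{note}")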
Richard Betts
would you accept that the Met Office has been overconfident in its past predictions about global warming, and that as a result politicians have not fully understood just how complex and difficult the whole climate thing is?
As a secondary point, something that I really hate - expressions like 'this gives us more confidence'. Utterly meaningless PR drivel. Yesterday I was 1% confident that I would walk again. Today I am 2% confident. So you might say today I am doubly confident, but it is the language of scoundrels, and the Met Office and those representing it should know better.
You should come to my computer shop. I am offering up to 80% off on all items.
Thanks
Dolphinhead
John Shade
Excellent post, and your withering footnote for the Nurse/Bindschadler transcription brings back all the spiteful malevolence thrown at sceptics in that nasty BBC piece.
How many think that the Met Office, or any group of climate scientists for that matter, would rush to publication the moment they found something that challenges the consensus status quo?
How carefully do you think they would time it? Do you think they would wait so the results don't get included in the upcoming IPCC report?
Did the Met Office know about these results showing a pause extending into the future at the same time it tried to criticize David Rose for correctly interpreting temperature data showing a pause?
[snip - probably a bit OTT]
Richard Betts:
I previously wrote:
"knowing very well that they depend on parameters, that some processes are not well understood, that the actual achievement of a decent working model would be a scientific and computing feat of unprecedented skill. "
Are any of my three assertions there incorrect? Why would anyone want to advise policy based on a model which is still producing rather different results as it is improved?