Saturday, Dec 27, 2014

Schmidt and Sherwood on climate models

Over the last week or so I've been spending a bit of time with a new paper from Gavin Schmidt and Steven Sherwood. Gavin needs no introduction of course, and Sherwood is also well known to BH readers, having come to prominence when he attempted a rebuttal of the Lewis and Crok report on climate sensitivity, apparently without actually having read it.

The paper is a preprint that will eventually appear in the European Journal for Philosophy of Science and can be downloaded here. It is a contribution to an ongoing debate in philosophy of science circles as to how computer simulations fit into the normal blueprint of science, with some claiming that they are something other than a hypothesis or an experiment.

I'm not sure this is a particularly productive discussion as regards the climate debate. If a computer simulation is to be policy-relevant, its output must be a usable approximation to the real world, and it must be validated to show that this is the case. If climate modellers want to argue that their virtual worlds are neither hypothesis nor experiment, or to use them to address otherwise intractable questions, as Schmidt and Sherwood note is done, then that's fine, so long as climate models remain firmly under lock and key in the ivory tower.

Unfortunately, Schmidt and Sherwood seem overconfident in GCMs:

...climate models, while imperfect, work well in many respects (that is to say, they provide useful skill over and above simpler methods for making predictions).

Following on from this, the authors examine climate model development and testing, and both sections are interesting. For example, the section on tuning models includes this:

Once put together, a climate model typically has a handful of loosely-constrained parameters that can in practice be used to calibrate a few key emergent properties of the resulting simulations. In principle there may be a large number of such parameters that could potentially be tuned if one wanted to compare a very large ensemble of simulations (e.g. Stainforth et al 2005), but this cumbersome exercise is rarely done operationally. The tuning or calibration effort seeks to minimise errors in key properties which would usually include the top-of-the-atmosphere radiative balance, mean surface temperature, and/or mean zonal wind speeds in the main atmospheric jets (Schmidt et al 2014b; Mauritsen et al 2012). In our experience however tuning parameters provide remarkably little leverage in improving overall model skill once a reasonable part of parameter space has been identified. Improvements in one field are usually accompanied by degradation in others, and the final choice of parameter involves judgments about the relative importance of different aspects of the simulations...

This tallies with what Richard Betts has said in the past, namely that modellers are using the "known unknowns" to get the model into the right climatic ballpark, but not to wiggle-match. However, I'm not sure that users of climate models can place much reliance on them when there is this clear admission that the models are nudged or fudged so that they look "reasonable".
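To make the shape of that calibration loop concrete, here is a minimal toy sketch in Python. Nothing in it comes from any actual GCM: the "model", its two parameters and the target values are all invented purely to illustrate the idea of turning a handful of loosely-constrained knobs to minimise a weighted mismatch in a few emergent diagnostics.

```python
# Toy illustration of the tuning/calibration loop described in the quote.
# The "model", its parameters and the targets are entirely invented.
import numpy as np
from scipy.optimize import minimize

# Hypothetical targets: TOA radiative balance (W/m2) and global-mean
# surface temperature (deg C). Illustrative values only.
TARGETS = np.array([0.0, 14.0])
WEIGHTS = np.array([1.0, 0.1])  # relative importance is itself a judgment call


def toy_model(params):
    """Stand-in for a model run: maps two loosely-constrained parameters
    (imagine a cloud entrainment rate and an ice-fall speed) to the two
    emergent diagnostics. The functional form is made up."""
    entrainment, fall_speed = params
    toa_balance = 2.0 - 1.5 * entrainment + 0.3 * fall_speed
    mean_temp = 13.0 + 0.8 * entrainment + 0.5 * fall_speed
    return np.array([toa_balance, mean_temp])


def cost(params):
    # Weighted sum of squared errors in the chosen emergent properties
    return float(np.sum(WEIGHTS * (toy_model(params) - TARGETS) ** 2))


result = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print("tuned parameters:", result.x)
print("resulting diagnostics:", toy_model(result.x))
```

In a real GCM each evaluation of the cost function is an expensive simulation, which is presumably why, as the quote says, only a handful of parameters and a few key metrics are used and large tuning ensembles are rarely run operationally.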

The section on model evaluation is also interesting:

The most important measure of model skill is of course its ability to predict previously unmeasured (or unnoticed) phenomena or connections in ways that are more accurate than some simpler heuristic. Many examples exist, from straightforward predictions (ahead of time) of the likely impact of the Pinatubo eruption (Hansen et al 1992), the skillful projection of the last three decades of warming (Hansen et al 1988; Hargreaves 2010) and correctly predicting the resolution of disagreements between different sources of observation data e.g., between ocean and land temperature reconstructions in the last glacial period (Rind and Peteet 1985), or the satellite and surface temperature records in the 1990s (Mears et al 2003; Thorne et al 2011). Against this must be balanced predictions that did not match subsequent observations—for instance the underestimate of the rate of Arctic sea ice loss in CMIP3 (Stroeve et al 2007).

I was rewatching Earth: Climate Wars the other day, and laughed at the section on the credibility of climate models, which essentially argued that because Hansen got the global response to Pinatubo correct we should believe what climate models tell us about the climate at the end of the next century. Of course, we'd shout it from the rooftops if Hansen's model had got it wrong, but I think some recognition is due of what a small hurdle this was.

Similarly, how much confidence should climate modellers have in Hansen's 1988 prediction? As the Hargreaves paper cited notes, Hansen's GCM overpredicted warming by some 40% as assessed over its first 20 years. This was better than a naive prediction of no warming, but it was still a long way out. Moreover, it should now be possible to redo Hargreaves' assessment at the 25-year mark, and it is more than likely that the naive prediction would now outperform the GCM.
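To put "better than a naive prediction" in quantitative terms: Hargreaves assesses Hansen's forecast against a naive benchmark, and a standard form for that kind of skill score (her exact definition may differ in detail) is

$$S = 1 - \frac{\mathrm{RMSE}_{\mathrm{model}}}{\mathrm{RMSE}_{\mathrm{naive}}}$$

where the naive forecast here is simply no warming at all. A perfect forecast gives S = 1, while S ≤ 0 means the model did no better than the benchmark. On that sort of measure the 40% overprediction still left the skill positive at the 20-year mark; the question is whether a 25-year update would drag it down to zero or below.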

And what about the Arctic sea ice predictions? You have to laugh at the authors' shamelessness in picking Arctic sea ice here. Look, it's worse than we thought! Nevertheless, Stroeve et al 2007 makes for an interesting read, with computer model simulations presented alongside observational data going back to 1950. The early figures in this dataset were apparently taken from a Met Office paper, a read of which reveals that they were interpolated from other data points. The paper also contains these words of caution:

Care must be taken when using HadISST1 for studies of observed climatic variability, particularly in some data sparse regions, because of the limitations of the interpolation techniques, although it has been done successfully...

Data-sparse regions like the Arctic, then?

I think I'm right in saying that there has been another paper published recently which reconstructed sea ice levels from old satellite photos and showed that the Met Office figures were too high, but I can't lay my hands on it at the moment.

So, Schmidt and Sherwood is an interesting read, but I'm not sure that the poor policymaker will draw much comfort from it.

 


Reader Comments (54)

...'climate models, while imperfect, work well in many respects (that is to say, they provide useful skill over and above simpler methods for making predictions).'

I am not sure that is entirely correct, given the output of some models; they could not be called more skilled than, say, flipping a coin and calling heads or tails. But then this does match the 'heads you lose, tails I win' approach extensively used in climate 'science', so perhaps in their frame of reference, as opposed to that of those actually doing good science, the authors may be right.

Still, the question can be asked: without models, what have they got?
And the answer to that is 'not a lot', and nowhere near enough to keep their gravy train and ideological crusade on track; so, given that, how good do you think they will claim their models are?

Dec 27, 2014 at 11:29 AM | Unregistered CommenterKnR

The poor policy maker won't read the paper (nor be encouraged to do so). Instead he or she will be told The Science is Settled, and will do what he wants to do, namely throw money away.

What we need is a paper comparing the successful prediction rate of climate models with something similar e.g. picking the winner of horse races by using a pin when blindfolded.

Dec 27, 2014 at 11:34 AM | Unregistered CommenterGraeme No.3

You sure do know how to celebrate Christmas!

Dec 27, 2014 at 11:47 AM | Unregistered CommenterBob

One thing is certain, you wouldn't use unvalidated models like these in industry, where things have to work, profits have to be made and people's jobs depend on getting things right. You'd be given your P45 pretty smartly if you tried to use something as worthless as a climate model. But when you work at the taxpayers' expense, anything goes.

Dec 27, 2014 at 11:47 AM | Registered CommenterPhillip Bratby

Appalling English

"(that is to say, they provide useful skill over and above simpler methods for making predictions)"

"they provide useful skill". What on earth does that mean?

Dec 27, 2014 at 12:11 PM | Unregistered CommenterJeremy Poynton

Is there no limit to the extent of self-delusion by people like Schmidt**?

**That they imagine they can continue to fool us.

Dec 27, 2014 at 12:12 PM | Unregistered CommenterNCC1701E

The sad fact is that despite the £$millions spent on "state-of-the-art" super-computers and GC models, the vast majority are outperformed by the simple model first proposed by Callendar (1938);
http://onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/pdf

As discussed at Climate Audit
http://climateaudit.org/2013/07/26/guy-callendar-vs-the-gcms/

Dec 27, 2014 at 12:31 PM | Unregistered CommenterDon Keiller

@J Poynton: the typo is really 'useful kills'.

Dec 27, 2014 at 12:36 PM | Unregistered CommenterNCC1701E

Investment pundit firms are not very keen on league tables based on their past performance, in comparison to the FTSE index or equivalent. This is because no firm wants proof they did worse than the FTSE, and if they did do better, they want proof they were the best, not just better than average.

There seems to be an emphasis in climate models on providing the scariest scenario, rather than the most accurate model.

Could we have a league table, with the top spot being for the most pointless, stupid and inaccurate climate model? With the actual real earth being used as a benchmark? (Not other models)

This would be an opportunity for scientists to either distinguish, or simply self extinguish.

Dec 27, 2014 at 12:59 PM | Unregistered CommenterGolf Charlie

"So, Schmidt and Sherwood is an interesting read, but I'm not sure that the poor policymaker will draw much comfort from it."

As Graeme No.3 said earlier, I doubt that the policymaker will read it. Absolutely right! The politicians have already set out their policy and only a massive change to a colder climate is likely to have any effect on them. Whenever our climate minister, Liz Truss, is asked any awkward question I squirm with embarrassment as she trots out some benign gibberish, preceded by her trade mark "what I would say...." - She seems to think this sounds intelligent. See her in action at this link - https://www.youtube.com/watch?v=rb4vh0mcFK4 with Andrew Neil, if you can bear to watch.

The policy has been decided, the hapless ministers are merely the "frontman" (or should I say front person!) put forward to try and sell said policy to the largely ignorant, and increasingly sceptical, public.

Dec 27, 2014 at 1:33 PM | Unregistered CommenterDerek

Schmidt et al 2014

"While individual parameterizations are calibrated to process-level data as much as possible, there remain a number of parameters that are not as strongly constrained but that nonetheless have large impacts on some emergent properties of the simulation. We use these additional degrees of freedom to tune the model for a small selection of metrics."

http://pubs.giss.nasa.gov/docs/2014/2014_Schmidt_etal_3.pdf

It's a dream of mine to be able to use other people's money to play Xbox.

Dec 27, 2014 at 2:07 PM | Unregistered CommenterDaveO

The paper's assertion that CMIP5 models show useful skill is simply false.
Take actual temperatures, not anomalies. Models are all over the map. Hence cannot get important features like dew point, evaporation, cloud condensation. Hence have no skill at important phenomena like cloud cover (albedo) and precipitation, according to AR5 WG1 chapter 7. And at a macro level, CMIP5 sensitivities are about twice those of all recent observationally constrained estimates.
Moreover, no matter how tuned, lack of model skill is INHERENT. Essential convection cell processes (tropical thunderstorms) take place in grid cells on the order of 10 km or less on a side. The finest-resolution GCMs are 100 km, and most are 250 km. As grid cells shrink, there are more of them, and the time steps must also shrink. 'Accurately' modeling global tropical convection is computationally intractable, and according to AR5 will remain so for at least several more decades.
Specifics are in the essays Cloudy Clouds, Models All the Way Down, and Humidity Is Still Wet in the ebook Blowing Smoke.
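A rough order-of-magnitude sketch of that scaling, assuming only the horizontal resolution changes: going from 100 km to 10 km grid spacing means roughly

$$\left(\frac{100\,\mathrm{km}}{10\,\mathrm{km}}\right)^{2} \times \frac{100\,\mathrm{km}}{10\,\mathrm{km}} \approx 10^{3}$$

times the computation per simulated year (100 times as many grid columns, and about 10 times as many time steps via the CFL stability condition), before counting any extra vertical resolution or added physics.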

Dec 27, 2014 at 2:54 PM | Unregistered CommenterRud Istvan

Guy Stewart Callendar. Hey, I looked him up to make sure the spellings are correct.
================

Dec 27, 2014 at 2:58 PM | Unregistered Commenterkim

Jeremy, perhaps there's a misspelling. Try 'useless skill' instead.
==========

Dec 27, 2014 at 3:01 PM | Unregistered Commenterkim

"Once put together, a climate model typically has a handful of loosely-constrained parameters that can in practice be used to calibrate a few key emergent properties of the resulting simulations." Gavin Schmidt/ Steven Sherwood


It's laughable that the Chief Scientist of the Met Office can say about their computer models

"So they are not in a sense tuned to give the right answer, what they are representing is how weather, winds blow, rain forms and so forth, absolutely freely based on the fundamental laws of physics.

The parameterisations are essentially simple empirical formulas trying to characterise aspects that are not understood well enough to model properly. So claiming that models incorporating parameterisations are 'based on the laws of physics' is being economical with the truth (or, more bluntly, bollocks).

Tuning a model based on partial understanding can improve its ability to reproduce past history but without necessarily any benefit to its accuracy as a model of the physics of the situation. My spreadsheet table can reproduce past history with complete accuracy but its ability to predict future data is zero.
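A minimal numerical sketch of that point (the "temperature" series and all the numbers below are made up purely for illustration):

```python
# Fit a polynomial exactly through ten "historical" values, then ask it
# about the "future": a perfect hindcast, a useless forecast.
import numpy as np

x = np.arange(10)                 # ten years of history
y = 0.02 * x + 0.1 * np.sin(x)    # a made-up temperature-like series

coeffs = np.polyfit(x, y, deg=9)  # degree-9 polynomial through 10 points
hindcast_error = np.abs(np.polyval(coeffs, x) - y).max()
print("max hindcast error:", hindcast_error)  # essentially zero

print("'forecast' for year 15:", np.polyval(coeffs, 15))
print("underlying series at year 15:", 0.02 * 15 + 0.1 * np.sin(15))
```

The fit reproduces the past essentially perfectly, while the extrapolation will generally bear no relation to the underlying series - exactly the danger with any tuning exercise judged only on how well it hindcasts.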

Dec 27, 2014 at 3:06 PM | Registered CommenterMartin A

The climate models are built without regard to the natural 60 and, more importantly, 1000 year periodicities so obvious in the temperature record. Their approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years or so. They back-tune their models for less than 100 years when the relevant time scale is millennial.
The entire UNFCCC -IPCC circus is a total farce- based, as it is, on the CAGW scenarios of the IPCC models which do not have even heuristic value. The earth is entering a cooling trend which will possibly last for 600 years or so.
For estimates of the timing and extent of the coming cooling based on the natural 60 and 1000 year periodicities in the temperature data and using the 10Be and neutron monitor data as the most useful proxy for solar “activity” check the series of posts at
http://climatesense-norpag.blogspot.com
The post at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
is a good place to start. One of the first things impressed upon me in tutorials as an undergraduate in Geology at Oxford was the importance of considering multiple working hypotheses when dealing with scientific problems. With regard to climate, this would be a proper use of the precautionary principle.
The worst scientific error of the alarmist climate establishment is their unshakeable faith in their meaningless model outputs and their refusal to estimate the possible impacts of a cooling rather than a warming world and then consider what strategies might best be used in adapting to the eventuality that cooling actually develops.

Dec 27, 2014 at 3:22 PM | Unregistered CommenterDr Norman Page

"the skillful projection of the last three decades of warming (Hansen et al 1988; Hargreaves 2010) "

What about the "skillful projections" of Libby and Pandolfi who in 1979, predicted warming from the early '80s until 2000?
http://joannenova.com.au/2011/05/climate-scientists-who-were-right-30-years-ago/

For all we know, Hansen et al might have read Libby's and Pandolfi's before making their "skillful projections."

Dec 27, 2014 at 3:33 PM | Unregistered Commenterkramer

The word "delusion" came to mind but I was beaten to it. It is very dangerous to attach importance to parts of a model that cannot simulate the system it is designed to simulate.

As PB pointed out, you wouldn't get away with this sort of pseudo science in industry. Just imagine that these people were designing a passenger plane.

I would love to see proper scientists and engineers from industry audit the work of the climate community and apply standard quality processes.

Dec 27, 2014 at 3:54 PM | Unregistered CommenterSchrodinger's Cat

Merry Xmas, Richard!

Dec 27, 2014 at 4:02 PM | Unregistered CommenterBitter&twisted

"...In our experience however tuning parameters provide remarkably little leverage in improving overall model skill once a reasonable part of parameter space has been identified..."

Say what? 'parameter space'? In the program? In memory? Perhaps in Gavinator's brain?

Otherwise his phrase 'remarkably little leverage' closely resembles the old adage "herding cats"; claiming that either works at all is fallacious.


"...The most important measure of model skill is of course its ability to predict previously unmeasured (or unnoticed) phenomena or connections in ways that are more accurate than some simpler heuristic. Many examples exist, from straightforward predictions (ahead of time) of the likely impact of the Pinatubo eruption (Hansen et al 1992), the skillful projection of the last three decades of warming (Hansen et al 1988..."

Hansen was not accurate about the three decades of warming, along with his yearly claims of impending super El Niños. Skillful? Only in Gavinator's shrine to Hansen in GISS's lowest basement.

Just exactly how are Hansen's predictions about Pinatubo accurate? Did he accurately predict when temperatures would change and exactly where? After the fact observations do not match either GISS's GCMs or Hansen's hysterical claims.

GISS blindly ignores GCMs' absolute failures to model climate. GISS also ignores climate's refusal to march in lockstep with CO2 or GISS's climastrology beliefs.

Dec 27, 2014 at 4:30 PM | Unregistered CommenterATheoK

The attraction of climate models to the kind of charlatan that would champion them is obvious. These people have a God complex, they also have total control of the fantasy world that is their models. QED

Dec 27, 2014 at 4:50 PM | Unregistered CommenterSteve Jones

AtheoK poses an interesting question: just how well did Hansen predict the effects and timing of the Pinatubo eruption on global temperatures? And if he got that one right, does that then translate into a model that can accurately predict, before the event, what will happen when we have the next eruption?

This should be an easy skill to validate because the global temperature response to volcanic eruptions seems to be both rapid and short lived, and thus the attribution of temperature changes to the eruption should be reasonably strong.

If the climate scientists can take parameters of an eruption (such as mass and height of earth expelled into the air, prevailing wind directions, etc.) and arrive at repeated, accurate predictions of their effects, then they may gain some credibility.

In other words, we're conducting an experiment where we follow Boyle's guideline of only changing one parameter at a time.

At another extreme, there may well be some sense in following a Bayesian analysis of temperatures since Bayesian analysis is becoming increasingly well founded - the Kalman filter approach. Do we consider such to be models?

(I recall a beautiful demonstration (published) of the application of Kalman filter analysis to UK grid frequency data measured at one-second intervals. The analysis was able to predict, reliably, the occurrence of frequency falls of more than 0.1 Hz within 10 s - events considered to require generation response to stabilise the grid).

Dec 27, 2014 at 5:08 PM | Unregistered CommenterCapell


tuning parameters provide remarkably little leverage

This is climate-speak for "we can get whatever we want by tuning the parameters".

Dec 27, 2014 at 5:15 PM | Unregistered Commenterpax

In our experience however tuning parameters provide remarkably little leverage in improving overall model skill once a reasonable part of parameter space has been identified.

We tune but we don't fine tune.

Dec 27, 2014 at 5:17 PM | Unregistered CommenterSpeed

Do you think we, i.e. readers of BH, could put together a proposal to get government funding or a private equity grant to develop a model that predicts the future of global warming and the career paths of the climate scientists?

Dec 27, 2014 at 5:20 PM | Unregistered CommenterAnoneumouse

I've noticed from looking at the SST anomaly corrections that the temperature record appears to be considered a scientific data set rather than an engineering one. What I mean is that it appears to be okay to model, say an adjustment, without subsequent extensive characterisation of it. Maybe some experimentation is done but not covering the breadth of variables that would come into play with something like an airplane engine. From here models are used on the back of more models since initial experiments appear to get you in the ballpark.

The problem is that this approach breeds confirmation bias. And it also demonstrates a lack of understanding of the use of the data set. In the case of SST data it's used as if it is engineering gold standard and hence influences policy. It's nowhere near that standard simply because a lot of hard testing work hasn't been done.

Dec 27, 2014 at 5:45 PM | Unregistered CommenterMicky H Corbett

Sorry, I meant to add that this approach also seems to be how models are used. Get some sort of initial boundary condition reasonably right and then rely on theory. It's a big assumption.

Dec 27, 2014 at 5:49 PM | Unregistered CommenterMicky H Corbett

This is a repeat of a posting I wrote earlier; it is still pertinent.
Morley Sutter | August 13, 2012 at 10:12 am
There is a delightful book that deals with models gone wrong titled “Structures: Or Why Things Don’t Fall Down” by JE Gordon, an engineer. It has many examples of things which failed or did not work even though they were derived from models that showed that they should be stable structures or function well. Models are not reality, that is why they are called models; no amount of statistics or computer power or statistical manipulation can obviate the need for verification in real time and real life. I recommend that JE Gordon’s book be read by all who work with models including climatologists. It is a good antidote to human hubris.

Dec 27, 2014 at 6:00 PM | Unregistered CommenterMorley Sutter

The worst scientific error of the alarmist climate establishment is their unshakeable faith in their meaningless model outputs and their refusal to estimate the possible impacts of a cooling rather than a warming world and then consider what strategies might best be used in adapting to the eventuality that cooling actually develops.
Wise words, Dr Page, and words which emphasise that the whole farrago of AGW/ACC is nothing to do with science and everything to do with being a political tool for the establishment of global government control. I keep bringing to mind the Roman empire, where the government grew increasingly large, and the politicians less connected with the public as they vied with each other to expand their own personal empires, utterly oblivious to the reality outside their windows. That did not end very pleasantly for anyone, and, I suspect, neither will it this time.

Dec 27, 2014 at 6:21 PM | Registered CommenterRadical Rodent

The political arm of climate science has taken over quite a few institutions, organizations, and agendas over the years. I, for one, enjoy watching this process. What odds do they have on the field of philosophy of science? Will recalcitrant editors be replaced with ecofriendly ones? Will the peer review process be "modeled" according to the author and position on certain topics? Popcorn, please.

Borrowing language from "known known" sociopath Rumsfeld... come on, folks.

Dec 27, 2014 at 7:09 PM | Unregistered CommenterBrute

Predictions?
I thought they only made projections.

Dec 27, 2014 at 7:37 PM | Unregistered CommenterTony Hansen

As I have noted on other occasions, they are attempting to model a system (the Earth's climate) that is both open and chaotic. The 'open' nature of the system means that it is inherently impossible to recognise -- let alone, quantify -- all relevant variables, and the 'chaotic' quality merely exacerbates the difficulty of processing data that are, due to the system being open, useless for all but the shortest-term predictions/projections. As computers become more powerful they just take less time to generate meaningless results.

Because of this, even if there happens to be an occasional accurate prediction, it must be presumed to be fortuitous.

It is a depressing indication of the state of science that this rubbish is given any credence at all.

Dec 27, 2014 at 9:05 PM | Unregistered CommenterSceptical lefty

"the skilful projection of the last three decades of warming (Hansen et al 1988; Hargreaves 2010) "

Seriously? !!!! It's a good thing I had finished my coffee when I read that. ! :-)

The temperature is currently below Hansen's "no increase in CO2" scenario.

The only comparison to Hansen's work should be to the one where CO2 emissions are going gangbusters.

And Hansen's projection for that is wildly inaccurate.

Not to mention that there has been a plateau for more than half of those 30 years.

Only someone with an extremely tenuous grasp on reality could ever call Hansen, skilful !!

Dec 27, 2014 at 9:10 PM | Unregistered Commenterthe Griss

In the Introduction Schmidt & Sherwood refer to Friedrich Hayek’s Nobel Prize lecture, saying

von Hayek (1974) contrasted the new need for complex simulation in economics to discover its emergent properties with the “simplicity” (in his view) of the physical sciences.

If Schmidt & Sherwood had understood the short speech, they would realize that Hayek would conclude that meaningful “complex simulation” is impossible in economics. The subject deals with phenomena of organized complexity, in which the character of the structures showing it depends not only on the properties of the individual elements of which they are composed, and the relative frequency with which they occur, but also on the manner in which the individual elements are connected with each other. This seems to be the nature of climate as well, which is why I find the title of Hayek's lecture a fitting description of climate science: The Pretence of Knowledge.

Dec 27, 2014 at 9:35 PM | Unregistered CommenterKevin Marshall

"imperfect"

A real scientist wouldn't use such poetry to describe something objectively. Climate models aren't diamonds.

Andrew

Dec 27, 2014 at 10:10 PM | Unregistered CommenterBad Andrew

...climate models, while imperfect, work well in many respects (that is to say, they provide useful skill over and above simpler methods for making predictions).

Hold on, hasn't Dr S gone on record to say that GCMs don't make predictions?

Dec 27, 2014 at 10:16 PM | Unregistered CommenterAdam Gallon

Edward Lorenz knew the limitations of his art and cautioned meteorologists/climatologists to only approach problems that are tractable.

Christopher Essex has been telling the likes of Schmidt and Sherwood, and the rest, that their problem is not tractable.

And they just carry on.

In a sense, I don't blame them. Or I'm not surprised. Who else in the history of science has actually 'fessed up and said "Well, actually, this is probably not a problem that can be solved in a finite amount of time. I'm going to give up, and start collecting sea-shells instead"?

Ain't gonna happen.

Dec 27, 2014 at 10:17 PM | Unregistered Commentermichael hart

I found this quite interesting, partly because of where it led.

I had previously read Hargreaves JC (2010) "Skill and uncertainty in climate models" that they use as the basis for claiming Hansen had skill. This had been given a bit of a working over in the blogosphere as I recall. I was interested in whether she had followed up given her conclusion "It is important that the field of uncertainty estimation is developed in order that the best use is made of current scientific knowledge in making predictions of future climate."

This led to Hargreaves J. C., Annan J. D., "Can we trust climate models?" WIREs Clim Change 2014 (that was discussed at Judith Curry's earlier in the year http://judithcurry.com/2014/06/20/can-we-trust-climate-models/ ).

Buried in that is the statement "A recent analysis suggests that dynamical forecasts based on climate models perform clearly worse than empirical methods [in decadal forecasts]". My own sense has been that it is these shorter-term forecasts that are the critical ones right now. The analysis quoted is Suckling EB, Smith LA, "An evaluation of decadal probability forecasts from state-of-the-art climate models." J Clim 2013. There is a draft pdf available online and it is a good read.

Poking around I found CATS at LSE http://www.lse.ac.uk/CATS/Home.aspx and an excellent presentation given earlier this year by Director LE Smith "The User Made Me Do It: Seamless Forecasts, Higher Hemlines and Credible Computation. Its OK to say we know we don’t know" at the 'Climate Science Needed to Support Robust Adaptation Decisions' Workshop, GeorgiaTech, Atlanta, USA ( http://www.eas.gatech.edu/sites/default/files/SmithTalkGT.pdf ) .

It looks as though it would have been good to hear.

Dec 27, 2014 at 10:49 PM | Unregistered CommenterHAS

In the Hansen Pinatubo paper they cite he predicted 1992 would be "at least" 0.4C colder because of the eruption.

1992 turned out 0.4C colder than this year.

22 years of no man-made global warming then.

Dec 28, 2014 at 1:32 AM | Unregistered CommenterFergalR

Just as we have seen global warming morph into climate change morph into extreme weather ... to next morph into ocean acidification, it is interesting to observe the climate community - barring this above - starting to back away from claiming the models to be a better representation than real world observation. What is up, and where is Mr. Post-Normal Science ("bollocks", rather, is the term used in this household), Mike Hulme, when you most need him?

Various typo suggestions ... quite so!

It's a lovely day here in rural Somerset. Freezing cold, a clear blue sky, buzzards mewing above, and the dogs ever so frisky on their morning walk. Best wishes to all.

Dec 28, 2014 at 11:52 AM | Unregistered CommenterJeremy Poynton

Predictions?
I thought they only made projections.

Dec 27, 2014 at 7:37 PM | Tony Hansen
==============================================

No, no, no - they create "model-based evidence". Magic!

Dec 28, 2014 at 11:54 AM | Unregistered CommenterJeremy Poynton

"In some cases, research groups using individual models
have made surprising predictions, for example that
global warming would not diminish Antarctic sea ice in
the short term (Manabe et al 1992), or that global-mean
surface temperatures would cool temporarily during the
last decade despite continued heat buildup in the system
(Keenlyside et al 2008). The first of these surprising
predictions has been borne out, while the second was
an over-prediction of what turned out to be a reduction
in the mean surface warming rate rather than a reversal.
These were not robust predictions across multiple
models and it remains unclear as to whether these predictions
were based on the ’right’ reasons, so it cannot
be claimed that the community at large foresaw these
things, but they show the ability of models to explore
unexpected possibilities."

This is Schmidt and Sherwood handing themselves compliments under the table while admitting they don't deserve it.

The 'community', as Schmidt calls it, makes such an enormous *range* of predictions it can never go wrong.

Dec 28, 2014 at 12:38 PM | Registered Commentershub

The Galileo Movement (on Facebook) has brought this to my attention, which I would like to share with you. It certainly puts Schmidt & Sherwood into perspective – especially when you look at the date of the article!

Look also at the date given when wine-making ceased in this country; makes you wonder about the local wines claimed for the Pepys era.

Dec 28, 2014 at 1:10 PM | Registered CommenterRadical Rodent

"Usefull skill"
Typical garbage-speak only our Gav and UnReal Climate come out with.

Dec 28, 2014 at 1:56 PM | Unregistered CommenterStacey

It's interesting that they offer Hansen et al. 1992 on Pinatubo as a successful prediction. It's very obvious in the models-vs.-observations graphs (e.g., this one) that models tend to exaggerate the cooling effects of eruptions. The paper, as noted above by FergalR, predicted a drop of over 0.4 K, while the actual effect was a drop of about 0.2 K.

A factor of two isn't bad -- but in the end a factor of 2 covers the disagreement between, say, the models' mean ECS of 3.2 K and the observation-based ECS of ~2.0 K (e.g. Otto et al.).

Dec 28, 2014 at 2:20 PM | Registered CommenterHaroldW

Imperfect? They don't even demonstrate basic adequacy!

Gavin's preferred GISS model was of course proven recently to be the worst of all at spatial correctness. The reason they like to focus on the Arctic is because they do not recreate temps correctly anywhere else on Earth. But worse still, they are only matching the model up to the equally bogus GISS data extrapolations across unmonitored space. It's an absolute farce!

Dec 28, 2014 at 2:45 PM | Unregistered CommenterJamesG

@Salopian: my task in life is to educate those who have been taught incorrect physics. If that means you, so be it. Dec 27, 2014 at 10:25 PM | NCC1701E


You appear to have ejected your warp core...

Dec 28, 2014 at 4:57 PM | Unregistered CommenterDavid Jay

From Jonathan Leake's article in the Sunday Times today:

"Peter Stott, the Met Office head of climate attribution, said: “Current global average temperatures are highly unlikely in a world without human influence on the climate.”

With advice like this, based, of course, on model outputs (though not stated), what chance do policy makers have?

Dec 28, 2014 at 4:59 PM | Unregistered CommenterTim

Tim
Peter Stott, the Met Office head of climate attribution, and therefore someone out of a job if AGW is BS.

Oddly, it is not unusual to find that those whose jobs depend on dealing with an issue can always find it, and, if they are any good, always find it's 'worse than we thought'.

Dec 28, 2014 at 5:54 PM | Unregistered CommenterKnR

This thread has probably done its dash, but I should just add a bit more about the 'Climate Science Needed to Support Robust Adaptation Decisions' Workshop, GeorgiaTech, Atlanta, USA and Leonard Smith (Smith LA) from CATS at LSE, since I commented above in some haste. In slower time I realised that Judith Curry had been a participant too and had posted a series on the workshop, with the Smith (and Palmer) presentations on climate models being at http://judithcurry.com/2014/02/18/uk-us-workshop-part-iv-limits-of-climate-models-for-adaptation-decision-making/

The point that I didn't make explicit was that in significant contrast to the breezy confidence that all is well with climate models exuded by Gavin Schmidt and Steven Sherwood, Leonard Smith is arguing (quoting Curry):

"... climate models were not up to the standards in relation to other computational fluid dynamics groups in terms of validation, verification, uncertainty quantification, and transparency, stating ‘Trust can trump uncertainty.’" [Meaning by this last comment that ‘It’s OK to say that we know we don’t know’ and by doing so users will accept the limitations (and be able to take them into account).]

"Smith describes the following limits to transparency: dangerously schematic schematics; showing anomalies versus a real-world quantity (which can hide systematic error that are larger than the observed anomalies); equidismality (rank order beauty contests of climate models, without comparison of some absolute measure of quality); buried caveats; burying the bad news about model performance".

All of which sounds pretty much like what Schmidt and Sherwood is designed to do. In particular Smith draws attention to a number of examples including the failure of climate models to model absolute temperature, and the limited extent to which they include significant mountain ranges eg the Andes, and that these limitations tend to be glossed over. I for one was unaware of the limited inclusion of surface elevation.

Curry goes on: "Smith provides the following summary of the demonstrated value of weather/climate forecasts as a function of lead time:

Medium-range: Significant, well into week two +
Seasonal: Yes, in some months and regions
Decadal: Not much (global and regional)"

Another section of the workshop looked at other possible ways to give users more useful information on the decadal scale (the particular focus of the workshop was regional forecasts).

All of which is a refreshingly honest appraisal to contrast Schmidt and Sherwood with.

I'm with Smith: if you want users to trust you, tell it how it really is rather than insisting all is fine and dandy. In the end you get caught out and you're dead meat.

Worse still for the advocates is the death by a thousand cuts that is currently going on.

Dec 29, 2014 at 5:20 AM | Unregistered CommenterHAS
