Garden shed tinkerers
There is a fascinating layman's introduction to climate models over at Ars Technica. Author Scott Johnson starts out with the standard potshot at global warming dissenters, takes a look at how a GCM is put together and talks to lots of climate modellers about their work and all the testing they do; it has something of the air of a puff piece about it, but that's not to say that it's not interesting.
Here's how it opens:
Talk to someone who rejects the conclusions of climate science and you’ll likely hear some variation of the following: “That’s all based on models, and you can make a model say anything you want.” Often, they'll suggest the models don't even have a solid foundation of data to work with—garbage in, garbage out, as the old programming adage goes. But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?
Climate models are used to generate projections showing the consequences of various courses of action, so they are relevant to discussions about public policy. Of course, being relevant to public policy also makes a thing vulnerable to the indiscriminate cannons on the foul battlefield of politics.
Skepticism is certainly not an unreasonable response when first exposed to the concept of a climate model. But skepticism means examining the evidence before making up one’s mind. If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools.
"Useful scientific tools"? Well yes, I think I would agree with that. The article describes how a divergence of model and real-world behaviour can help uncover gaps in our knowledge. This is great - this is what a GCM should be for. What it isn't is a prediction of the future - something we can pin policy measures on. But while the article is entitled "Why trust climate models?", in fact to its credit, the article doesn't push this more expansive claim about the usefulness of climate models very much. The case seems to be that because modellers do a lot of testing against historic data we should trust the models. Not convincing at all, in my opinion.
The flimsiness of the case also becomes clear when Steve Easterbrook makes his entrance:
Easterbrook has argued against the idea that an independent verification and validation protocol could usefully be applied to climate models. One problem he sees is that climate models are living scientific tools that are constantly evolving rather than pieces of software built to achieve a certain goal. There is, for the most part, no final product to ship out the door. There's no absolute standard to compare it against either.
To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well. Whether you add it or not then depends on what you're trying to achieve. Is the primary test of the model to match certain observations or to provide the most realistic possible representation of the processes that drive the climate system? And which observations are the most important to match? Patterns of cloud cover? Sea surface temperature?
Here, Easterbrook seems to be making a pretty strong case that climate models have no part to play in the policy process. How can you have a model that can't be built and tested to engineering standards informing policy? How can the public trust the moving feast that he describes? And if models really are being built without a specific goal in mind then the funding councils surely have some fairly pointed questions to answer.
The public is being asked to fork out lots of money on the basis of climate model output. Climate modellers have to decide if they are going to be garden-shed tinkerers or engineers whose findings are robust enough to inform the policy process.
Reader Comments (105)
The model cannot be better than its weakest part, which can be a variable that is given zero effect when it should be given some effect. Explanation by analogy, from the script of the great movie Dr Strangelove. This quote comes from the scene in the US war room, when the president (Muffley) learns that a rogue general has likely initiated global oblivion.
.........................
Muffley:
There's nothing to figure out General Turgidson. This man is obviously a psychotic.
Turgidson:
Well, I'd like to hold off judgment on a thing like that, sir, until all the facts are in.
Muffley:
(anger rising) General Turgidson, when you instituted the human reliability tests, you assured me there was no possibility of such a thing ever occurring.
Turgidson:
Well I don't think it's quite fair to condemn a whole program because of a single slip up sir.
Doug McNeall Sep 6, 2013 at 9:56 AM
That link gives me a 404 error?
ConfusedPhoton Sep 6, 2013 at 9:57 AM
Precisely.
I don't understand why climate modellers refuse to acknowledge that their models simply reflect their assumptions about how the climate works.
All modellers are building models that assume that additional CO2 will cause an increase in temperature, and that the increase in temperature will lead to positive feedbacks and higher temperatures.
I have no problem with that - the model then shows what will happen IF ALL THE ASSUMPTIONS USED TURN OUT TO BE CORRECT.
But that is it. That's all they have done.
If the model makes accurate predictions over a long period - say ten years at least - then it can be said to be robust. But until it does so, it is simply a model of what might happen if the model and its assumptions are correct.
Hindcasting is nice but proves nothing except that the model works how the modellers thought it would. It does not prove that the model has good predictive capability.
And would all modellers please recognise that if the forecasts of your model are wrong because things get hotter than predicted, your model is just as wrong as if things remain cooler than predicted. Your model is not "more right" if temperatures go up 2 degrees instead of a forecast 1 degree.
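To make the hindcast point concrete, here is a toy curve-fit sketch in Python (invented numbers, nothing to do with any real GCM): a model tuned to reproduce the historical record matches it by construction, yet its forecasts can still be nonsense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'historical' temperature anomaly, 1950-2000: gentle trend plus noise.
years = np.arange(1950, 2001)
history = 0.01 * (years - 1950) + 0.1 * rng.standard_normal(years.size)

# 'Tune' the model by fitting a high-order polynomial to the history.
x = (years - 1975) / 25.0                       # rescaled for numerical stability
model = np.poly1d(np.polyfit(x, history, deg=8))

# Hindcast skill looks excellent, by construction.
rmse = np.sqrt(np.mean((model(x) - history) ** 2))
print(f"hindcast RMSE: {rmse:.3f} degC")

# But the 'forecast' for 2001-2020 extrapolates wildly.
future_x = (np.arange(2001, 2021) - 1975) / 25.0
print(f"forecast for 2020: {model(future_x)[-1]:+.1f} degC")
```

The hindcast error is tiny and the forecast is absurd, which is exactly the distinction the comment above is drawing.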
Jack, it is quite normal in protein-folding computations to 'constrain' the molecule within the 'correct' result that is already known from the 3-dimensional structure as experimentally determined. For any significantly sized problem it is always a question of WHEN, not IF, the model will leave the rails without some 'expert' guidance.
There are good theoretical reasons to argue that it is not possible to compute through the complexities by brute force (not in a universe the size of the one we appear to live in, at least). If modellers had solved the certainly larger problems of climate, then it would truly be Nobel prizes all round.
As such, "climate" is the aggregation of many, many subdisciplines drawn from physics, chemistry, biology, etc. People in all these areas certainly have sufficient skill to comment on aspects of the larger picture, they just don't choose to call themselves climate scientists.
Does not work like that, Doug.
The article runs to 3 pages, but despite saying "The results are compared to observations of things like changing global temperatures", it fails to make any mention of the hot issue of the moment - the increasing discrepancy between the models and observations, as noted in the recent papers by Fyfe et al., von Storch, etc.
This, plus the cherry-picked choice of 'experts' such as Easterbrook, shows that this is just a propaganda piece.
Thanks for that summary Paul. I've been concentrating on other forms of software development and their relationship to the transformation of business through the internet and haven't got around to the article in Ars Technica - a publication that I've found pretty helpful on a range of subjects in my professional and non-climate life. But propaganda comes easily in this area. We should be grateful for The Register and Andrew Orlowski in that regard.
I'm sure that Richard Betts would want to comment on this paper in relation to model shortcomings.
Overestimated Global Warming Over the Past 20 Years (2013)
John C. Fyfe, Nathan P. Gillett & Francis W. Zwiers
Nature Climate Change 3, 767–769.
link here
http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309
If the models are so amazing, how have they managed to fail their first real world test against observations - the current plateau in world average temperature? If they cannot even predict 15 years into the future with any reliability, and given that the error terms accumulate exponentially, then to make public policy decisions on the future predictions of already wrong models is utterly irresponsible.
As I described on one of the earlier threads, all this is virtual world climate science masturbation. It may be fun to do, it may have great academic interest (like studying far distant stars), but it has no predictive capability at all that has been validated in the real world. And all the while, to get the initial conditions required even to run these models, other computer model outputs are being used as input for GCM models.
IF climate scientists wanted to demonstrate the validity of their models they just have to make very clear, agreed statements about some future climate state that we can measure in 5, 10 or 15 years time. And then shut up and go and play with their models quietly in a corner until we can evaluate their predictions.
I have seen no testable predictions published for any model to date that have been correct; all the justification is "hindcast". Funny how they are starting to "hindcast" the plateau in global temperatures now - after all, by 2001 we were told the "science is settled". Met Office winter predictions, anyone?
Lee Trevino had a tale about a guy with a vicious slice who kept hitting rough on the right. Lee asked him why he didn't just aim left. The guy replied "because I might hit a straight one".
A climate scientist might say "because we modelled my backswing and it says I should be hooking. I just need to hit further until it comes back round. Anyway, I'm still hitting fairway if I ignore the rough."
With regard other "large complicated" models, I attended a conference on a risk assessment software package used by a very large percentage of the investment banks and other people who got caught out by the 2008 crash. Yes, their models turned out wrong (after many years of doing pretty well), but boy were they ever trying to find out why and make sure it didn't happen that way again.
One of the best lines - and profound in its way - was when someone said that their models were based on reality, but then reality went wrong. After laughing, everyone thought about it and realised that that was the problem - they hadn't 'modelled' reality properly and all of their subsequent efforts were now based on doing just that.
It didn't seem that hard for the people to admit they were wrong in this forum - why is it so hard for climate modellers to admit the same thing?
Not one of the non-climate models mentioned is being used to justify the expenditure of trillions of dollars and to justify fundamentally changing the way our economy works. And further, to double the price of energy and leave the old and poor to freeze in the dark. All the while further enriching land-owners and banks that trade in the mythical evil carbon (dioxide, strictly, but accuracy would kill the 'black & dirty' image - much as the model-supported policy kills the old in winter). Oh, and don't forget the Mafia!
All in line with the political prejudices of the modellers (or those who purport to justify the models). Oh, and I'm sure that there are exceptions to the last sentence.
I would guess that a significant problem with GCMs, as opposed to other large models like the Lawrence Livermore nuclear simulations, is that we only have one Earth with one temperature history. What you would like to do (if you were trying to verify and validate your model) is keep the developers 'blinded' to some extent - i.e., develop and tune the model against one dataset and then, when they knock on your door and say 'it's finished', verify it against an entirely different dataset. That's possible with things like nuclear explosions and oil reservoirs, but isn't possible for climate data - there would be no way to prevent the developers comparing their model to any or all of the publicly available historical data.
As has been said above, if you develop and tune your model by comparison to historical data, you surely won't be surprised when you find that its output matches historical data. This is still true even if you 'train' your model against, say, 1900-1960 data and then let it hindcast 1961-2000 - if you then go back and tweak things to get a better match for the 1961-2000 period, you're still cheating (your 1961-2000 data has now become training data too), and can have no confidence that predictions for 2001-2050 will be of any value (presumably the modellers know all this, and perhaps this is what's behind their claim that they can't do proper V&V, which would be worrying!).
As an example, it would be interesting to have modellers develop a detailed new model for the entire globe, but using only Western hemisphere data for initialisation/training. When they get a sufficiently good match on various parameters (temps, rainfall, humidity, etc) down to small spatial scales in the Western hemisphere, an independent group would take the model's Eastern hemisphere predictions and compare them to historical records, and give the modellers a score for accuracy out of 100 (and it should be decided in advance what score is considered a pass). However, this could only work if the modellers could be kept completely in the dark about the Eastern hemisphere historical data, which seems impossible to achieve.
Clearly, I'm making lots of assumptions here about how climate models are or are not currently tested, but unless more information is forthcoming, we have to make assumptions.
I work in software development in the pharmaceutical industry and we (not just programmers) are held to very high standards in terms of experimental design, blinding, openness, validation, documentation, auditability, etc. There are good reasons for this, but it is worrying indeed that trillion-dollar decisions are made based on software that has (probably) been tested less rigorously than software to count the number of headaches experienced during a clinical trial of 200 subjects.
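To sketch the 'training data contamination' point above in code (a toy line-fit on invented data, not any real validation protocol): the moment you tune against the holdout, its score stops telling you anything about the future.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented observations, split into a training period and a holdout period.
obs = np.cumsum(0.02 + 0.1 * rng.standard_normal(100))
train, holdout = obs[:60], obs[60:]

def rmse(params, data, offset=0):
    """Toy 'model' (a straight line); lower RMSE means a better fit."""
    t = np.arange(data.size) + offset
    return np.sqrt(np.mean((params[0] + params[1] * t - data) ** 2))

grid = [(a, b) for a in np.linspace(-2, 2, 41) for b in np.linspace(0, 0.1, 41)]

# Legitimate: tune on the training period only, then score ONCE on the holdout.
best = min(grid, key=lambda p: rmse(p, train))
print("one-shot holdout RMSE:", round(rmse(best, holdout, offset=60), 3))

# Illegitimate: keep tweaking until the holdout score improves. The number
# looks better, but the holdout has silently become training data and no
# longer says anything about genuine out-of-sample (future) performance.
cheat = min(grid, key=lambda p: rmse(p, holdout, offset=60))
print("tuned-on-holdout RMSE:", round(rmse(cheat, holdout, offset=60), 3))
```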
I am reminded of a good thread here a year and a half ago when everyone was posting their own anecdotes and experiences with computer modelling.
http://www.bishop-hill.net/blog/2012/3/20/mathematical-models-for-newbies.html?currentPage=2#comments
Rob Potter:
Because very expensive policies in a vast number of countries and international organisations have already been based on the results of the models. As with finance, great amounts of money have been put behind models - but here it's taxpayer money. The various banking bailouts make the two situations closer, without question. But CAGW money and power-seeking is even more lethal to the truth.
"Three 'fields' spring to mind, in descending order of respectability: astrology, homeopathy, climatology ;)"
How about alchemy?
But that is not fair. The alchemists of old (including Newton) who were trying to turn base metals into gold had no knowledge of chemistry or nuclear physics. So what is the excuse for Slingo, Betts, Edwards, et al., who have precipitated this nonsense?
Look at this figure: http://cdn.arstechnica.net/wp-content/uploads/2013/08/AR4faq-8-1-figure-1-l.png
How do the climate models know the exact year of each volcanic eruption?
"Investment bankers have looked at their derivatives and complex financial products and they are confident that they are fine".
Everyone happy with that?
Climate scientists liking their models is not exactly unbiased verification. If you want to change the world, publish everything for public scrutiny. EVERYTHING.
MikeC - simple. They are told the exact dates and magnitudes of eruptions.
Obviously, for the future they simply estimate a typical range of expected dates and magnitudes.
That's part of the reason that the future projections are so full of features (i.e. wiggly lines). The models assume that atypical events, such as ENSO events, volcanic eruptions and solar outbursts, continue to occur at about the same rate as before, but with semi-random timings.
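A minimal sketch of that distinction in Python (illustrative years, magnitudes and rates only, not any modelling centre's actual forcing inputs): hindcasts read known eruptions from a prescribed table, while projections can only draw eruption timings at random.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hindcast: known historical eruptions are prescribed as forcing inputs
# (illustrative years and magnitudes only, in W/m^2).
HISTORICAL_ERUPTIONS = {1963: -0.3, 1982: -0.4, 1991: -0.5}

def volcanic_forcing_hindcast(year):
    return HISTORICAL_ERUPTIONS.get(year, 0.0)

# Projection: future eruption dates are unknowable, so assume eruptions occur
# at roughly the historical rate, with random timing and magnitude.
def volcanic_forcing_projection(year, rate_per_year=0.1):
    if rng.random() < rate_per_year:
        return -rng.uniform(0.1, 0.6)
    return 0.0

print([volcanic_forcing_hindcast(y) for y in range(1990, 1994)])
print([round(volcanic_forcing_projection(y), 2) for y in range(2014, 2024)])
```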
The single biggest problem with risk assessment software packages is that their primary function is to allow senior financial managers to claim that their investments are safe, regardless of what the real-world risks of their investment strategies actually are.
The climate models serve a very similar purpose. They hide the very large uncertainties concerning our knowledge of the earth's climate systems inside a shiny wrapper, one which lends the GCM outputs an aura of scientific credibility and which promotes the illusion that the product inside is constructed using a disciplined scientific process.
OMG: self-fisking prose
Guess it really is
Among people who manage models for a living, the first question asked when a new model is proposed is "What is our level of understanding of that phenomenon?" If the project is to proceed one step farther, someone has to give a reasonable answer. For example, someone might say that we want to model the night time sky as seen from any point on Earth. Our level of understanding there is totally complete in the disciplines of astronomy and physics. But suppose someone says that we want a model of human growth that will enable us to select the nutrients taken in on a daily basis and determine their effects on human growth? What is our level of understanding for this project? We do not know enough to get this project off the ground now or in the foreseeable future. Why is it that someone proposed a model of Earth's climate that would permit calculation of global average temperature to two decimal places of a degree per decade, and a professional modeling team said "Yes, our knowledge is adequate"? Only a group of academics who saw the possibility of endless funding for speculation would consider such a project.
If you read the article carefully, you will detect that the author believes that the model itself is part-and-parcel of the enterprise of knowledge and of the scientific process. For the author, it matters not that the knowledge necessary to construct the model did not exist when it was proposed and will not exist in the foreseeable future. The author has an entirely new theory of science according to which parts of scientific knowledge will emerge from simulations of processes that we have not yet understood. He must truly love some supercomputer and its code because he is treating it as an android that partially invents itself.
There is an omission in the article which indicates clearly that the author has no experience evaluating management of models. He says nothing about metadata. The first thing a modeler wants to see when evaluating a model is the metadata. The metadata includes what the author calls "parameterizations" but also includes all changes in inputs, changes to modules, changes of modules used, and all similar information for each run of the model. But the really important part of the metadata consists of who made the change, why he/she made the change, what was expected from the change, what resulted from the change, users' responses to these changes, how these changes and results fit into the history of changes and results for this model, and a modeler's summary of what was learned from this run of the model.
If climate modelers would publish this information for the public then many could achieve a rather good understanding of modelers' progress and their likelihood of success.
I can see one reason that climate modelers would not report metadata. Modelers in industry construct models for specific purposes and at every opportunity compare those models to the reality that they model. Industry modelers get constant feedback. Users do a good job of helping with this comparison. But climate modelers have only dribs and drabs of reality or settled theory for comparison to their models. This sad state is a result of the fact that they do not have an adequate understanding of what they are trying to model. Apparently, climate modelers are committed to the belief that part of the needed understanding will be produced by running the model. Their metadata is always the same: not there yet. As a name for these people, "Garden Shed Tinkerers" is spot on.
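For concreteness, here is one hypothetical shape such a per-run metadata record might take, covering the fields described above (all names and values invented for illustration; no modelling centre publishes exactly this):

```python
from dataclasses import dataclass, field

@dataclass
class ModelRunMetadata:
    """Hypothetical per-run metadata record of the kind described above."""
    run_id: str
    changed_by: str              # who made the change
    change_description: str      # inputs changed, modules changed or swapped
    rationale: str               # why the change was made
    expected_effect: str         # what was expected from the change
    observed_effect: str         # what actually resulted
    user_responses: list = field(default_factory=list)
    lessons_learned: str = ""    # modeler's summary of what this run taught

record = ModelRunMetadata(
    run_id="2013-09-06-r42",
    changed_by="A. Modeller",
    change_description="Replaced cloud microphysics module v3 with v4",
    rationale="v4 has a more realistic ice-nucleation scheme",
    expected_effect="Better tropical cloud-cover match",
    observed_effect="Cloud cover improved; sea surface temperature fit degraded",
    lessons_learned="Trade-off between cloud and SST skill, as the article notes",
)
print(record.run_id, "-", record.observed_effect)
```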
Beta Blocker
"The single biggest problem with risk assessment software packages is that their primary function is to allow senior financial managers to claim that their investments are safe, regardless of what the real-world risks of their investment strategies actually are."
The point I was making is that these senior financial managers are quite prepared to change their models when shown to be wrong. The software is irrelevant (and this is not the forum to discuss it).
Trouble is, Doug, I'm being taxed now - while you're still stabbing around on your Fisher Price toy.
Just tell the boss you won't have any useful results for some time to come. Everyone will believe that projection.
"...And which observations are the most important to match? Patterns of cloud cover? Sea surface temperature?"
Why, the ones already in the climate diddler's tiny, little, biased mind, of course.
Sep 6, 2013 at 10:40 AM | Stuck-Record
Disembodied voice: "Have you been mis-sold a climate projection?"
An excellent question. Perhaps an example is the modelling of a nuclear detonation, where models are used in the design of new weapons. The models became necessary when the nuclear test ban treaties came into force, and there were no methods to test new designs. I wonder how well they work?
Sep 6, 2013 at 11:00 AM Roger Longstaff
I've never had the remotest involvement with nuclear weapon design (nor any other aspect of them) but I've read everything I can lay hands on about their design and construction as a fascinated outsider.
I'm pretty sure that you'll find that while modelling is no doubt used to study speculative new designs, all current nuclear weapons are essentially designs from the days of testing with actual explosions. The electronics, safety systems, housing and packaging will undoubtedly have changed, but anything that could in any way affect the reliability or yield will not have been changed, because of the impossibility of having the necessary level of confidence in their performance.
There are a number of complex and highly accurate models which are in daily use.
There are many celestial mechanics models which have been used since the 18th century. In those days the models predicted the positions of the known heavenly bodies 3 years in advance (ships often went on long voyages to places where Amazon didn't deliver). Accuracy was not great but good enough for celestial navigation.
At the beginning of the 20th century the models had advanced so much that it was possible, using EW Brown's 'Tables of the Motion of the Moon', to predict the position of the moon with an angular accuracy of better than 10 arcseconds. Today, computers and a long history of accurate observations have increased the accuracy of predictions to, for example, around 1.5cm error in the distance of the moon and correspondingly small errors in predictions for other bodies.
The models contain all the constants and variables required to enable long term (~ several hundred years) predictions of, for example, eclipses and transits with very small errors in the time of occurrence. The main uncompensated variable is the erratic (from a celestial mechanics point of view) nature of earth rotation.
One fine model which contributes to the overall accuracy of the numerical integrations of ephemerides is DH Eckhardt's 'Theory of the Libration of the Moon' [1981].
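For anyone who wants to poke at this class of model directly, a minimal sketch using the skyfield Python package and the JPL DE421 ephemeris (assuming skyfield is installed; the ephemeris file is fetched on first run, and the accuracy belongs to the JPL integrations, not to this snippet):

```python
from skyfield.api import load

# Load a JPL-integrated ephemeris (fetched on first run) and a timescale.
eph = load('de421.bsp')
ts = load.timescale()
earth, moon = eph['earth'], eph['moon']

# Predict the Moon's apparent position and distance at a chosen instant.
t = ts.utc(2013, 9, 6, 12, 0, 0)
ra, dec, distance = earth.at(t).observe(moon).radec()

print("RA ", ra)
print("Dec", dec)
print(f"Distance {distance.km:,.0f} km")
```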
Martin, I have not worked on nuclear weapon design either, so I was just speculating about open source stuff I remembered reading, such as:
http://www.state.gov/t/avc/rls/202014.htm
They seem to be referring to "validations against collections of re-analyzed data from previous underground nuclear explosive tests". Let us hope that they are better at it than the climate modellers!
If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools.
Let's try this out on some other "sciences", shall we:
If anyone has scrutinized the workings of astrology, it’s astrologers—and they are confident that, just as in other fields, their models are useful scientific tools.
If anyone has scrutinized the workings of iridology, it’s iridologists—and they are confident that, just as in other fields, their models are useful scientific tools.
We could continue indefinitely.
The fact that climate scientists are the most up-to-date with climate science is axiomatic, as it is the definition of what makes them climate scientists. That says nothing at all -- nothing -- about the validity of their work.
I have a great deal of respect for Doug and Richard. Over the years I have begun to understand their situation (dangerous, I know) and probably, in the short term, it will end in disappointment.
However I am of the opinion that this is not one way traffic. I am sure that these "good guys" realise they are in the real world and very soon V&V of any of their model predictions will happen online in real time.
It is inevitable; Homo sapiens will make it so.
Theo Goodwin @ 6.58 & 7.35: Spot on!
Doug: I appreciate your taking the time to comment here. I have two questions:
1) How can we trust the results from models when they require choosing particular values for one to two dozen parameters whose precise values aren't known? I'm under the impression that Stainforth (2005) and later papers showed that 1000 random selections of various parameters gave similarly good representations of the earth in a simplified model, but produced wildly different climate sensitivity. The parameters interacted in surprising ways, making optimization of one parameter at a time an unreliable process. It seems like a wide variety of future climates are possible for any one emission scenario if ensembles with a variety of parameters are used, and the IPCC's ensemble of national models only scratches the surface.
2) We have excellent historical information for one climate change catastrophe: the southward retreat of the monsoon rains that led to the drying of North Africa and the [re]creation of the Sahara desert about 6000 years ago. This may have been associated with changes in the tilt of the earth's orbit. Can you hindcast this climate change catastrophe?
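To illustrate the worry in question 1, here is a zero-dimensional energy-balance toy in Python (all parameter ranges invented; a real perturbed-physics ensemble perturbs dozens of GCM parameters, not three scalars): many parameter settings reproduce today's global temperature about equally well while implying very different sensitivities.

```python
import numpy as np

rng = np.random.default_rng(7)
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
F_2XCO2 = 3.7      # forcing from a CO2 doubling, W m^-2

def equilibrium_T(albedo, epsilon):
    """Zero-dimensional energy balance: absorbed solar = emitted longwave."""
    return ((S0 * (1 - albedo) / 4) / (epsilon * SIGMA)) ** 0.25

# Perturbed-parameter ensemble over invented 'plausible' ranges.
n = 10_000
albedo = rng.uniform(0.28, 0.32, n)
epsilon = rng.uniform(0.58, 0.65, n)    # effective emissivity
feedback = rng.uniform(0.8, 2.0, n)     # feedback parameter, W m^-2 K^-1

# Keep only ensemble members that 'hindcast' the observed ~288 K well.
T = equilibrium_T(albedo, epsilon)
good = np.abs(T - 288.0) < 0.5

# All survivors match the observation, yet sensitivity still varies widely.
sensitivity = F_2XCO2 / feedback[good]
print(f"members passing the hindcast test: {good.sum()}")
print(f"sensitivity range: {sensitivity.min():.1f} to {sensitivity.max():.1f} K")
```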
Doug, where are you? Surely you have finished work by now?
Perhaps my own experience here might throw a little light on the problem. I have spent the last twenty years of my life trying to develop a model of eye movement control in reading. There have been a lot of people involved in this enterprise and we've actually not done at all badly. If you accurately measure six or seven properties in the real world, you can do a fairly good job of predicting how long people will inspect a given word. Yes - that took twenty years!
This apparently trivial task has proved horribly complicated and, ominously, none of the current models, for all their computational sophistication, are capable of discriminating between the many rival theories of the reading process (i.e. theories of the actual behaviour). Worse, most of the models don't yet incorporate physiological and brain mechanisms underlying eye movement control, albeit a great deal is known about these processes. Some brave souls have tried, but it has proved even more horribly complicated.
The simplest way of characterising the complications is to point to interactions between the properties used as input. Interactions involving more than three variables set the limit on interpretation. Higher-level interactions are there, but you can't understand them because it is simply too difficult. They also (and more dangerously) license such a wide range of "plausible" interpretations that models become useless in any case - if the test of "use" is their ability to discriminate between rival theories of the process.
My point is this: I would be absolutely horrified if some arm of government suddenly decided to use eye movement control models of reading to guide educational policy on how to teach reading. We are simply nowhere near that stage of development, and may never be. Much as I'd like to be an "advisor", I would be taking the shilling on false pretences if I claimed I knew how to shape educational policy. I would be on a very slippery slope because, albeit a layman on policy, I do have ideas and it would be very tempting to be in a position to impose them.
I believe people involved with models of the climate have allowed themselves to slip down this slope. They must be perfectly well aware that one simply cannot use results from an immature discipline to guide energy policy, or indeed any other policy - the two activities have really nothing to say to each other. Models of the climate are clearly intellectually challenging. These challenges will persist for generations because some of the problems look insoluble at the moment. But their owners should show a little humility - their ambitions as to how we should live our lives are probably no better informed than those of anybody else. We should not all be made green just because the chap writing the code thinks it's a nice idea.
Alan Kennedy
Bravo!
Another person who models something rather simpler than climate and recognises the difficulties.
Alan, that is what is missing from climate scientists and their pronouncements on model-based climate prediction.
Humility.
Martin A:
Very important point. But we should certainly (IMHO) know more about all kinds of modelling with policy implications. It's only through such examples that we'll see how humility arises, as Alan says - through hard experience of real testing. A process without that possibility is never to be trusted.
Doug McNeall:
Doug, you might want to rephrase this. It is exactly this success that worries so many of us.
Fantastic post by Alan Kennedy but also many others. This blog is amazing in that this thread brought forth a large number of people with real life modelling experience who together laid out the problems involved in modelling the climate.
We have a government which is basing policy on climate models. Who is at fault here: the modellers, their employers (e.g. the MO), the government, or all three? This thread leads me to think it is all three.
However seriously challenging modelling the climate is, why is anyone even bothering to try? Any person attempting to model the climate knows before he/she starts that they are going to fail. Breakthroughs are not going to happen through increasing computer power or increasing the complexity and skills of modelling. Improvements will only come through research into the climate and how it works. Climate modellers are therefore being dishonest and taking taxpayers' money dishonestly. The MO is dishonest because it is well aware of the limitations of its models. The government is guilty of crass stupidity because even good common sense should tell them that these models are worthless.
Optimistic? In a less flattering style, you mean 'not fit for prime time' or, better stated, 'not worth beans, let alone money'.
In a world where alarmists use these models to frighten children and adults, presuming 'optimistic' borders on wilful negligence whenever the models' results are 'human interpreted' to mean doom or disaster by anyone. Loud noises should have emanated from any modelers involved when their models were used for anything but guesses. Massive economic changes should have brought out shrieks of protest from the modelers, for a very long time.
Oh!? Like what, exactly? Humidity? Temperature? Arctic ice? Drought? Floods? Storms? Winter? Summer? TMax? TMin? Tropical hot spot? Anything? Remember, all robust findings need explicit details.
Or do you mean the averages developed over many many runs hoping for information within the noise? Running means? Smoothed, beyond recognition, data?
How about the error margins and bars? Do we get details on all error estimates from every step of the model's calculations? The good Bishop, Steve Mc and many of the Bishop's readers always have to reverse engineer model results to figure out what is going on. Will all details, including error estimates, be released?
Oh yes, and in many of those, code and data are provided so even amateurs can participate.
All - and note that word, all - other models are held to rigid scrutiny, with the authors eager to fix any and all errors found.
Yes, I worked with financial business models. I had bosses and customers who let me know immediately when a model was wrong. I had to find and report explicitly what was wrong, either with the model or with what circumstances were not correctly modeled. Since these models included employee productivity, that was a very frequent demand.
Models that failed, failed! If I thought a calculation had possibilities, I or another would try and refine it. Calculations did not get included till merit was demonstrated by matching observations.
Many people think models are like those 'imitation models' used in SciFi shows. The aggressive climate catastrophists play on that belief in pushing their agendas. All too many of us know the reality behind 'computer models and simulations': the layers of math and data needed, along with the hours of design, testing, programming and verifying.
Over the years, frequent requests here and on other blogs for detailed information on climate model workings, projections and especially about their misuse have been ignored. If you plan to answer questions proposed by participants here, I request that you especially answer this one.
Why do the climate modelers knowingly support the misuse of model projections, and the feeding of climate model outputs into other computer models downstream? In the last few weeks a number of papers have been released that 'used climate model output' for their input, yet not a whisper of caution from the model developers.
All the climate modelers have to do is come out and inform the media, downstream chained modelers, the public, politicians and so on that "...I don't think that climate scientists generally claim that level of V&V (engineering quality of 'validation and verification')...". That would allow the world to assess model outputs.
Reference to weather modelers is twisting logic and reference. Weather modelers study each run to see why a model is developing a projection; then, based on their years of education and experience, they try to judge the movement and impacts of air masses and low/high pressure interactions.
Confused Photon nailed it with his summation, and Sandys with his observation that your supplied links are bogus.
Sep 7, 2013 at 10:26 AM | Unregistered CommenterAlan Kennedy
Excellent! Just the kind of reasoning and good judgement that should have been applied in the case of climate models.
Excellent post, ATheoK. You write:
' " ...Basically, the adequacy of the models is different for different things. You shouldn't throw out all of the information that you gain from the models, because one small part of the model is inadequate or experimental. It would be foolish to ignore robust information gained from these things - and no matter which way you slice it, lots of the information from climate models is robust..."
Oh!? like what, exactly? Humidity? Temperature? Arctic ice? Drought? Floods? Storms? Winter? Summer? TMax? TMin? Storms? Tropical hot spot? Anything? Remember, all robust findings need explicit details. '
Do some climate modelers have a clouds module? (Computer models are modular.) If so, why not give the world a detailed look at what you are doing with it? Give us that detailed look including the metadata that I described in my post above.
Look at this figure: http://cdn.arstechnica.net/wp-content/uploads/2013/08/AR4faq-8-1-figure-1-l.png
How do the climate models know the exact year of each volcanic eruption?
Sep 6, 2013 at 3:26 PM | MikeC
For eruptions before the model was written, the dates are on record and can be included in the starting conditions. It is not possible to predict the dates of eruptions after the model runs have taken place. Thus an eruption in 2010 would cause the real temperature record thereafter to be lower than a forecast based on model runs from 2007.
There's not really any solution to this, unless you have developed a reliable way of predicting eruptions?
Give us that detailed look including the metadata that I described in my post above.
Sep 7, 2013 at 10:52 PM | Theo Goodwin
Try this as a starting point for study of the current GISS model. As a professional in the field, you'll probably know what you are looking for better than me.
http://www.giss.nasa.gov/tools/modelE/
Let us know how you get on.
DNFTT
The problem isn't that people are skeptical about climate models. What people don't believe is that a climate model can accurately predict the future.
It is perfectly reasonable to model the climate on a computer. Used correctly, it can be a powerful tool to help refine our understanding of the very complex system that is climate. Used correctly, you run your model out a short time and then compare what happens to what you predicted would happen. You then stop and figure out why the real world and the model don't agree, or why they do agree. This allows one to see whether our understanding of the underlying science is right or not.
What a climate model is utterly useless for is running out predictions of the future and then basing any policy decision on them, because the models don't work as magic crystal balls that predict the future. We are nowhere near the point where we can accurately say we know what drives the climate. Certainly not enough to make more than generalized guesses about the future.
I'll point you to this... how accurate are the models used to forecast the weather? You do have that day-by-day accurate weather and temperature forecast for the next year handy, right? Oh wait, they don't actually work more than 5-7 days out. And yet we are expected to believe that it is possible to accurately forecast years into the future with a computer model, when it can't accurately predict the weather more than 7 or so days ahead?
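The standard explanation for that forecast horizon is sensitivity to initial conditions, which a few lines of Python can reproduce with the Lorenz (1963) toy system (textbook parameters; this is an illustration of chaos, not a claim about any operational model):

```python
import numpy as np

def step(v, dt=0.01, s=10.0, r=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz (1963) system."""
    x, y, z = v
    return v + dt * np.array([s * (y - x), x * (r - y) - z, x * y - beta * z])

# Two 'forecasts' from initial states differing by one part in a million.
run1 = np.array([1.0, 1.0, 1.0])
run2 = run1 + np.array([1e-6, 0.0, 0.0])

for n in range(1, 3001):
    run1, run2 = step(run1), step(run2)
    if n % 500 == 0:
        print(f"t = {n * 0.01:5.1f}  separation = {np.linalg.norm(run1 - run2):.3e}")

# The separation grows roughly exponentially until it saturates at the size
# of the attractor: the tiny initial error soon swamps the forecast, which is
# why deterministic weather forecasts degrade after a handful of days.
```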
"I'll point you to this... how accurate are the models used to forecast the weather? You do have that day by day accurate weather and temperature forecast for the next year handy right? Oh wait they don't actually work more than 5 - 7 day out. And we are expected to believe that it is possible to accurately forecast years into the future with a computer model but that they can't accurately predict the weather more than 7 or so days into the future?
Sep 8, 2013 at 7:21 PM LamontT "
The Chief Scientist of the Met Office, in testimony to Parliament (if I remember correctly the context in which she said it) said that the Met Office's climate models are tested several times a week because the same models are used to forecast the weather.
Entropic Man is Gavin Schmidt and I claim my £5