The validity of climate models: a bibliography
A reader sent me some correspondence he had received from the Met Office. He had been enquiring what evidence Prof Slingo et al. had for the validity of the output of GCMs, and received in return the following bibliography:
Airey MJ, Hulme M, Johns TC (1996) Evaluation of simulations of terrestrial precipitation in UK Met Office/Hadley Centre climate change experiments. Geophysical Research Letters 23:1657-1660
Allan RP, Ramaswamy V, Slingo A (2002) Diagnostic analysis of atmospheric moisture and clear-sky radiative feedback in the Hadley Centre and Geophysical Fluid Dynamics Laboratory (GFDL) climate models. Journal of Geophysical Research-Atmospheres 107:7
Allan RP, Ringer MA, Slingo A (2003) Evaluation of moisture in the Hadley Centre climate model using simulations of HIRS water-vapour channel radiances. Quarterly Journal of the Royal Meteorological Society 129:3371-3389
Arritt RW, Goering DC, Anderson CJ (2000) The North American monsoon system in the Hadley Centre coupled ocean-atmosphere GCM. Geophysical Research Letters 27:565-568
Bellouin N, Jones A, Haywood J, Christopher SA (2008) Updated estimate of aerosol direct radiative forcing from satellite observations and comparison against the Hadley Centre climate model. Journal of Geophysical Research-Atmospheres 113:15
Bodas-Salcedo A, Ringer MA, Jones A (2008) Evaluation of the surface radiation budget in the atmospheric component of the Hadley Centre Global Environmental Model (HadGEM1). Journal of Climate 21:4723-4748
Collins M (2000) The El Nino-Southern Oscillation in the second Hadley Centre coupled model and its response to greenhouse warming. Journal of Climate 13:1299-1312
Collins M, Tett SFB, Cooper C (2001) The internal climate variability of HadCM3, a version of the Hadley Centre coupled model without flux adjustments. Climate Dynamics 17:61-81
Collins WJ, Bellouin N, Doutriaux-Boucher M, Gedney N, Halloran P, Hinton T, Hughes J, Jones CD, Joshi M, Liddicoat S, Martin G, O'Connor F, Rae J, Senior C, Sitch S, Totterdell I, Wiltshire A, Woodward S (2011) Development and evaluation of an Earth-System model-HadGEM2. Geoscientific Model Development 4:1051-1075
Cooper C, Gordon C (2002) North Atlantic oceanic decadal variability in the Hadley Centre coupled model. Journal of Climate 15:45-72
Corte-Real J, Qian B, Xu H (1999) Circulation patterns, daily precipitation in Portugal and implications for climate change simulated by the second Hadley Centre GCM. Climate Dynamics 15:921-935
Cusack S, Slingo A, Edwards JM, Wild M (1998) The radiative impact of a simple aerosol climatology on the Hadley Centre atmospheric GCM. Quarterly Journal of the Royal Meteorological Society 124:2517-2526
Gordon C, Cooper C, Senior CA, Banks H, Gregory JM, Johns TC, Mitchell JFB, Wood RA (2000) The simulation of SST, sea ice extents and ocean heat transports in a version of the Hadley Centre coupled model without flux adjustments. Climate Dynamics 16:147-168
Hardiman SC, Butchart N, Osprey SM, Gray LJ, Bushell AC, Hinton TJ (2010) The Climatology of the Middle Atmosphere in a Vertically Extended Version of the Met Office's Climate Model. Part I: Mean State. Journal of the Atmospheric Sciences 67:1509-1525
Hewitt HT, Copsey D, Culverwell ID, Harris CM, Hill RSR, Keen AB, McLaren AJ, Hunke EC (2010) Design and implementation of the infrastructure of HadGEM3: the next-generation Met Office climate modelling system. Geoscientific Model Development Discussions 3:1861-1937
Hewitt HT, Copsey D, Culverwell ID, Harris CM, Hill RSR, Keen AB, McLaren AJ, Hunke EC (2011) Design and implementation of the infrastructure of HadGEM3: the next-generation Met Office climate modelling system. Geoscientific Model Development 4:223-253
Inness PM, Gregory D (1997) Aspects of the intraseasonal oscillation simulated by the Hadley Centre Atmosphere Model. Climate Dynamics 13:441-458
James PM (2006) An assessment of European synoptic variability in Hadley Centre Global Environmental models based on an objective classification of weather regimes. Climate Dynamics 27:215-231
Johns TC, Durman CF, Banks HT, Roberts MJ, McLaren AJ, Ridley JK, Senior CA, Williams KD, Jones A, Rickard GJ, Cusack S, Ingram WJ, Crucifix M, Sexton DMH, Joshi MM, Dong BW, Spencer H, Hill RSR, Gregory JM, Keen AB, Pardaens AK, Lowe JA, Bodas-Salcedo A, Stark S, Searl Y (2006) The new Hadley Centre Climate Model (HadGEM1): Evaluation of coupled simulations. Journal of Climate 19:1327-1353
Jones PD, Hulme M, Briffa KR, Jones CG, Mitchell JFB, Murphy JM (1996) Summer moisture availability over Europe in the Hadley Centre general circulation model based on the Palmer Drought Severity Index. International Journal of Climatology 16:155-172
Joshi MM, Webb MJ, Maycock AC, Collins M (2010) Stratospheric water vapour and high climate sensitivity in a version of the HadSM3 climate model. Atmospheric Chemistry and Physics 10:7161-7167
Martin GM, Bellouin N, Collins WJ, Culverwell ID, Halloran PR, Hardiman SC, Hinton TJ, Jones CD, McDonald RE, McLaren AJ, O'Connor FM, Roberts MJ, Rodriguez JM, Woodward S, Best MJ, Brooks ME, Brown AR, Butchart N, Dearden C, Derbyshire SH, Dharssi I, Doutriaux-Boucher M, Edwards JM, Falloon PD, Gedney N, Gray LJ, Hewitt HT, Hobson M, Huddleston MR, Hughes J, Ineson S, Ingram WJ, James PM, Johns TC, Johnson CE, Jones A, Jones CP, Joshi MM, Keen AB, Liddicoat S, Lock AP, Maidens AV, Manners JC, Milton SF, Rae JGL, Ridley JK, Sellar A, Senior CA, Totterdell IJ, Verhoef A, Vidale PL, Wiltshire A, HadGEM2 Development Team (2011) The HadGEM2 family of Met Office Unified Model climate configurations. Geoscientific Model Development 4:723-757
Martin GM, Ringer MA, Pope VD, Jones A, Dearden C, Hinton TJ (2006) The physical properties of the atmosphere in the new Hadley Centre Global Environmental Model (HadGEM1). Part I: Model description and global climatology. Journal of Climate 19:1274-1301
Osprey SM, Gray LJ, Hardiman SC, Butchart N, Bushell AC, Hinton TJ (2010) The Climatology of the Middle Atmosphere in a Vertically Extended Version of the Met Office's Climate Model. Part II: Variability. Journal of the Atmospheric Sciences 67:3637-3651
Pope VD, Gallani ML, Rowntree PR, Stratton RA (2000) The impact of new physical parametrizations in the Hadley Centre climate model: HadAM3. Climate Dynamics 16:123-146
Pope VD, Pamment JA, Jackson DR, Slingo A (2001) The representation of water vapor and its dependence on vertical resolution in the Hadley Centre Climate Model. Journal of Climate 14:3065-3085
Ringer MA, Martin GM, Greeves CZ, Hinton TJ, James PM, Pope VD, Scaife AA, Stratton RA, Inness PM, Slingo JM, Yang GY (2006) The physical properties of the atmosphere in the new Hadley Centre Global Environmental Model (HadGEM1). Part II: Aspects of variability and regional climate. Journal of Climate 19:1302-1326
Slingo A, Pamment JA, Allan RP, Wilson PS (2000) Water vapor feedbacks in the ECMWF reanalyses and Hadley Centre climate model. Journal of Climate 13:3080-3098
Spencer H, Sutton RT, Slingo JM, Roberts M, Black E (2005) Indian Ocean climate and dipole variability in Hadley Centre coupled GCMs. Journal of Climate 18:2286-2307
Stratton RA (1999) A high resolution AMIP integration using the Hadley Centre model HadAM2b. Climate Dynamics 15:9-28
Turner J, Connolley WM, Lachlan-Cope TA, Marshall GJ (2006) The performance of the Hadley Centre climate model (HadCM3) in high southern latitudes. International Journal of Climatology 26:91-112
Walters DN, Best MJ, Bushell AC, Copsey D, Edwards JM, Falloon PD, Harris CM, Lock AP, Manners JC, Morcrette CJ, Roberts MJ, Stratton RA, Webster S, Wilkinson JM, Willett MR, Boutle IA, Earnshaw PD, Hill PG, MacLachlan C, Martin GM, Moufouma-Okia W, Palmer MD, Petch JC, Rooney GG, Scaife AA, Williams KD (2011) The Met Office Unified Model Global Atmosphere 3.0/3.1 and JULES Global Land 3.0/3.1 configurations. Geoscientific Model Development 4:919-941
Wang KY, Shallcross DE (2005) Simulation of the Taiwan climate using the Hadley Centre PRECIS regional climate modeling system: The 1979-1981 results. Terrestrial Atmospheric and Oceanic Sciences 16:1017-1043
Webb M, Senior C, Bony S, Morcrette JJ (2001) Combining ERBE and ISCCP data to assess clouds in the Hadley Centre, ECMWF and LMD atmospheric climate models. Climate Dynamics 17:905-922
Woodward S (2001) Modeling the atmospheric life cycle and radiative impact of mineral dust in the Hadley Centre climate model. Journal of Geophysical Research-Atmospheres 106:18155-18166
Perhaps readers would like to pick a title at random and see how much comfort it gives.
Reader Comments (106)
So we have all these papers "proving" that the UKMO is up to it as far as models are concerned, yet the data show the opposite.
I prefer empirical data.
Out of my depth here, I'm afraid.
Write out one hundred times:
Model simulations are not experiments
Model simulations are not experiments
Model simulations are not experiments ...
I did not spot any titles in the list that referred specifically to validation of models. But the first one you list looks interesting: this paper describes the development of the Met Office's modelling software.
Toward the end of the paper, the authors state the following.
"Our study indicates that for climate science,
at least as practiced at the Met Office, such model valida-
tion is routinely performed, as it is built into a systematic
integration and regression testing process, with each model
run set up as a controlled experiment. Furthermore, climate
science as a field has invested in extensive systems for
collection and calibration of observational data to be used
as a basis for this validation. Climate models have a sound
physical basis and mature, domain-specific software devel-
opment processes. "
The need to validate models whose purpose is to predict future climate against their ability to actually do so seems to have eluded the authors of the paper.
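To make the distinction concrete: the "validation" the quoted paper describes is software regression testing - checking that a changed model still reproduces a stored reference run within tolerance - which establishes consistency of the code, not predictive skill against the real climate. A minimal sketch of such a test in Python, with the file name, driver function and tolerance all invented for illustration:

import numpy as np

def regression_test(new_run, reference_file="reference_run.npy", tol=1e-6):
    """Pass if a new model run reproduces the stored reference run.

    This checks software consistency only: both runs could share the
    same physical errors and the test would still pass.
    """
    reference = np.load(reference_file)
    return np.allclose(new_run, reference, atol=tol)

# new_run = run_model(config)    # hypothetical model driver
# assert regression_test(new_run)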
I think this is a ploy. Not only is it difficult to get these papers even when they appear to have hyperlinks; they also do not appear to be what we are being told they are.
For example - from Bellouin et al 2008
"These values are still significantly more negative than the numbers reported by modeling studies. By accounting for differences between present-day natural and preindustrial aerosol concentrations, sampling biases, and investigating the impact of differences in the zonal distribution of anthropogenic aerosols, good agreement is reached between the direct forcing derived from MODIS and the Hadley Centre climate model HadGEM2-A over clear-sky oceans. Results also suggest that satellite estimates of anthropogenic aerosol optical depth over land should be coupled with a robust validation strategy in order to refine the observation-based estimate of aerosol direct radiative forcing. In addition, the complex problem of deriving the aerosol direct radiative forcing when aerosols are located above cloud still needs to be addressed."
I seriously doubt these papers will show any real, serious validation!
Bish - I think it would be useful if you updated the post with the question-and-answer text from the Lords which prompted the request for these references, as well as the text of the request to the MO and that of their reply.
Also, for info on verification and validation, here are some refs I posted on the Qs for Lord D. discussion thread:

For sources on model V&V issues and methodology, look to organisations such as NAFEMS, ASME and ASTM. AFAIK there are NO established V&V standards for GCMs - if anybody has a reference, please post it.
http://www.nafems.org/search/validation/0/0/0/0/
http://www.asmeconferences.org/VVS2012/index.cfm
http://www.astm.org/Standards/D6589.htm
ConfusedPhoton - I don't think it is a ploy. I think the reality is that there is no proper validation and verification of these models, and hence there will be no references to support it.

IMO the notion that there is a long history of V&V of these models in the peer-reviewed literature is just another part of The Big Myth, and it is only now, thanks to a direct and specific enquiry, that this notion will be shown to be false.
Could be wrong - maybe I'll change my mind once I've had a look at some of the papers.
DNFTT
I picked out one which seemed to compare empirical observation against the Hadley Centre climate model and for which I could find the paper. It found significant departures.
http://centaur.reading.ac.uk/30593/7/jgrd14409.pdf
Read all about Met Office model validation here:
http://www3.nd.edu/~gmadey/sim06/Classnotes/Validation/pope.pdf
(Vicky Pope, no less)
DNFTT x2
IMO it is notable how significant threads get a lot of disruptive posts which inhibit progress. It would be good if this key topic got a proper exploration.
nby - seconded
The recent WUWT post on the shambolic coding underlying some GCMs poses, to my mind, a far bigger question than wading through "a truck load of print-outs" - albeit that those printouts are provided on the likely basis that nobody will bother... and they probably contain a few unintended gems.

In critical software systems (and given the use that GCMs are put to, who would deny they are critical?) it is often mandatory that the code follows a very particular methodology and is subject to independent review / audit - iirc there has been scathing criticism of various snippets of GCM code, and some code has been withheld?...

It is trivial to contrive a required answer with computer code; there is a jungle of gotchas lurking that coders have to be aware of, and sloppy, undocumented code always slips through when there are no consequences...

I'd be interested to know how much code has been formally audited. In my view, code proffered to control a critical system should be subject to independent third-party review and extensive testing.
When critical code is mentioned I always think of this, and return here on a depressingly regular basis...
Perhaps a new wing of the Climate Hall of Shame should be considered - the main area is getting rather cluttered.
Martin A - thanks, reference noted, I'll read it later this weekend.
Not one of these "experiments" explains the lack of a tropospheric "hot spot", reproduces the 15-year hiatus in global temperature rise, or predicted the fall in atmospheric water vapour that has been measured.

The lack of a hot spot and the lack of an increase in atmospheric water vapour fatally undermine the "positive feedback" scenario required for significant increases in global temperature. In fact they point to a climate sensitivity of about 1.5 °C per CO2 doubling.
What the Met Office is guilty of is serial neglect of reality v its precious models.
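For the arithmetic behind Don Keiller's 1.5 °C figure: with the standard logarithmic CO2 relation, equilibrium warming is the sensitivity per doubling multiplied by log2 of the concentration ratio. A back-of-envelope sketch in Python (the sensitivity and concentrations are illustrative inputs, not measured results):

import math

def equilibrium_warming(sensitivity, co2_now, co2_pre=280.0):
    """Warming implied by a given climate sensitivity (degC per CO2 doubling)."""
    return sensitivity * math.log2(co2_now / co2_pre)

print(equilibrium_warming(1.5, 560.0))  # full doubling -> 1.5 degC
print(equilibrium_warming(1.5, 400.0))  # ~400 ppm -> about 0.77 degC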
Well, this is a bit superficial, but it takes me more than 30 minutes to have any hope of anything better when it comes to reviewing papers! I picked on the Bodas-Salcedo paper because it was the first one I tried that was available for free, gave it a quick skim, and then concentrated on the conclusions.

The Bodas-Salcedo et al. paper is available at the AMS site.

To get some context for the relative sizes of the biases reported below: the IPCC, much taken with "radiative forcings" (RF), reports this in AR4: "Using the global average value of 379 ppm for atmospheric CO2 in 2005 gives an RF of 1.66 ± 0.17 W [per sq metre]". The model assessed in this paper reports a bias in incoming solar radiation over land of 17.2 W per sq metre, and a bias in incoming long-wave radiation of 6 W per sq metre. And we are expected to trust them when it comes to projecting the presumed forcing effect of CO2?
S_{s,d} is the "surface incoming solar radiation"; L_{s,d} is the "downward surface LW flux".
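To make that comparison explicit, a back-of-envelope ratio of the quoted biases to the quoted forcing (arithmetic only; it says nothing about how such biases propagate through a simulation):

# Ratio of the reported surface-flux biases to the AR4 CO2 forcing.
co2_forcing = 1.66        # W per sq metre (AR4 central estimate)
bias_solar_land = 17.2    # W per sq metre, incoming solar over land
bias_longwave = 6.0       # W per sq metre, downward long-wave

print(bias_solar_land / co2_forcing)  # ~10.4x the CO2 forcing
print(bias_longwave / co2_forcing)    # ~3.6x the CO2 forcing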
Immediate thoughts? A recollection of Johnson's cheeky quip about a dog walking on its hind legs comes to mind with regard to GCMs used for climate modelling: "It is not done well; but you are surprised to find it done at all."
In the "Proceedings of the ECLAT-2 Helsinki Workshop , 14-16 April, 1999, A Concerted Action Towards the Improved Understanding and Application of Results from Climate Model Experiments in European Climate Change Impacts Research - Representing Uncertainty in Climate Change Scenarios and Impact Studies", a paper by Mike Hulme and Timothy Carter started with this:
"Climate change is an inexact field of science. It has long been recognised, by researchers and decision makers alike, that uncertainties pervade all branches of the subject. However, as the science of climate change has progressed, the effectiveness with which uncertainies have been identified, analysed and represented by scientists has abjectly failed to keep pace with the burgeoning demand for usable information about future climate and its impacts (Shackley et al.., 1998; Rotmans and Dowlatabadi, 1998; Jaeger et al, 1998; Jones, 1999)."
"Uncertainty is a constant companion of scientists and decision-makers involved in global climate change research and management. This uncertainty arises from two quite different sources - incomplete’ knowledge and ‘unknowable’ knowledge. ‘Incomplete’ knowledge affects much of our model design, whether they be climate models (e.g. poorly understood cloud physics) or impact models (e.g. poorly known plant physiological responses to changing atmospheric nutrients). Further advances in science (and computing technology) should lessen uncertainty stemming from this source. ‘Unknowable’ knowledge arises from the inherent indeterminacy of future human society and of the climate system. Human (and therefore social) actions are not predictable in any deterministic sense and we will always have to create future greenhouse gas emissions trajectories on the basis of indeterminate scenario analysis (Nakicenovic et al.., 1998). Uncertainties in climate change predictions arising from this source are therefore endemic
The climate system, as a complex non-linear dynamic system, is also indeterminate (Shukla, 1998) and even with perfect models and unlimited computing power, for a given forcing scenario a range of future climates will always be simulated."
It wasn't long after this series of meetings that the science became "certain."
With regard to the paper available here: it appears to suggest that a more recent model is better than an older one, but it does not say the model is good.
And I am sure the last line will surprise Kevin Trenberth: "This should lead to more reliable climate change predictions."
If you're being taught good science, one thing you learn is to watch just how often an author cites themselves. Here, seven of the references used to justify Slingo's stance have Slingo herself as an author. That is not quite, but not far from, the idea of "it must be true because I say so" - and that is before you start to find out whether these references really do offer the validity they are claimed to.

Bottom line: unless the error margins are made so wide it is virtually impossible to be wrong, their predictive qualities have been rubbish. And no amount of references will cover over that fact.
What the Met Office is guilty of is serial neglect of reality v its precious models.
Aug 3, 2013 at 12:52 PM | Don Keiller
I've just been re-reading the wonderful book Systemantics by John Gall. [avoid later editions - they suffer from the problems he describes in the original edition]
He explains how, within a system (eg the Met Office climate research) external reality pales into invisibility and the system becomes the reality, for those trapped within it.
Mike (Aug 3, 2013 at 11:41 AM). Go easy on that terminology. It is the mis-use of it that I am sure you and I would agree is the problem, not the terminology itself.
I have helped with many statistically-designed experiments on industrial processes, and I have also worked on experiments with computer simulations of such processes. It is a perfectly legitimate way to explore how such models work. For example, in an experiment on a model, I might try runs with the conveyor speeds at two levels (high, low), equipment breakdown rates at two levels (high, low), inventories of raw materials at two levels (high, low), and so on. This is an experiment on the model, partly in case it will expose errors in the software, partly in case it may be good enough as a rough guide to the real process (or more generally, as a source of hypotheses that might be worth investigating on the real process).
As with a few other words deployed by those wishing to alarm us over climate, the difference between experiments on reality and experiments on computer simulations can easily be ignored. Referring to computer experiment results as 'experimental results' without qualification is a bit similar to the sleight of hand involved in talking of 'climate change' and 'climate change deniers'. They mislead the lay-person.
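A minimal sketch of the two-level factorial experiment on a simulation model described above: run every combination of high/low factor settings and compare outputs. The simulate() function is a toy stand-in for any process model; the factor names follow the comment's example:

from itertools import product

# Factors at two levels (low, high), as in the comment's example.
levels = {
    "conveyor_speed": (1.0, 2.0),
    "breakdown_rate": (0.01, 0.05),
    "raw_inventory":  (100, 500),
}

def simulate(conveyor_speed, breakdown_rate, raw_inventory):
    # Toy stand-in: throughput rises with speed, falls with breakdowns,
    # and is capped by available inventory.
    return min(raw_inventory, conveyor_speed * 400 * (1 - breakdown_rate))

factor_names = list(levels)
for combo in product(*(levels[name] for name in factor_names)):
    run = dict(zip(factor_names, combo))
    print(run, "->", simulate(**run))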
Gavin Schmidt
Some extremes will become more common in future (and some less so). We will discuss the specifics below. Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
http://www.realclimate.org/index.php/archives/2011/02/going-to-extremes/
Freeman Dyson (some say the greatest scientist not to win a Nobel Prize)
“My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do.
The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.” –
http://www.edge.org/documents/archive/edge219.html#dysonf
IPCC Third Assessment Report - Climate Change 2001
In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system's future possible states by the generation of ensembles of model solutions.
http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/505.htm
In the past, the MO have recommended Collins, Tett and Cooper to me, and I've read it (unfortunately it doesn't seem to be online at the moment). The evidence presented there at least does quite a good job of supporting the opposite of what Slingo wants to suggest: it reports on 1000-year unforced GCM runs which plainly don't simulate the kind of medium- and long-term variability that might be anticipated from reconstructions. Another useful paper they've pointed me at before (not on this list) supports this interpretation of Collins et al. by demonstrating that temperatures simulated by GCMs simply follow the forcing.

The list of papers here is so long that you've got to think they're trying to blind with science. I don't know whether one of the Met Office scientists will point out the ones they think best support Slingo's position?
Thank you for publishing the list of references Andrew.
Aug 3, 2013 at 12:55 PM | John Shade - John, I agree. The models search for an AGW signal in "noise" that is more than an order of magnitude greater. In some fields (eg sonar), with a priori knowledge of the signal, a good understanding of the noise and long integration times, very impressive signal-to-noise detections can be realised, and validated. But this is not the case for GCMs - the noise is not understood and the a priori nature of the signal (anthropogenic CO2) may well be incorrect.
I would ask BH readers the following - as there is no empirical evidence to support the AGW hypothesis, do you think that these references are sufficient to validate GCM methodology (and therefore the AGW hypothesis), either individually or taken as a whole, as claimed by the Met Office?
John Shade
Spin it how you like: a computer simulation is not a "climate change experiment".
The day anyone starts running actual climate change experiments is the day I look for a spaceship out!
And yes, you do have to watch the terminology. And the longer you let them get away with thinking that tweaking their computer programs is an "experiment" within the established meaning of the word (ie the result might in some way be meaningful) the more they will be telling us that it must be so, "because the models say so".
Here is the latest GCM prediction of the death spiral for Arctic Ice: http://stevengoddard.wordpress.com/2013/08/03/latest-death-spiral-forecast/
My education came through an online discussion with two climate "scientists". It became clear that the physical systems underlying these so-called models or simulations are hugely complex, chaotic and involve countless feedbacks which are not well understood.

At the end I asked, "Can computers model these systems?" No, was the answer. This was before climate denialism became a class "A" academic offence.
eSmiff reports that the IPCC Third Assessment Report (Climate Change 2001) says:
"The most we can expect to achieve is the prediction of the probability distribution of the system's future possible states by the generation of ensembles of model solutions."
This means a single model run under a Monte Carlo regime, not the meaningless spaghetti of results from many different models that is usually presented and averaged. It is also only true if the Monte Carlo model is mathematically and physically complete, which means accurate evolution of albedo, ocean currents/heat content, aerosols, clouds, bio-sequestration of CO2, etc.

So the first challenges are to identify the best model, then start to improve it, and assess whether it will be useful on a local and regional level so it can be used to inform what actions should be taken to mitigate dangerous predictions - eg how will the jet stream change?
I won't be holding my breath.
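To illustrate the ensemble point: a Monte Carlo ensemble means many runs of one model under perturbed initial conditions, yielding a probability distribution, which is not the same thing as averaging structurally different models. A minimal sketch using the Lorenz-63 system (textbook parameters) as a stand-in for a chaotic climate:

import numpy as np

def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0/3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return state + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

rng = np.random.default_rng(0)
finals = []
for _ in range(100):                                  # 100-member ensemble
    state = np.array([1.0, 1.0, 1.0]) + rng.normal(0, 1e-3, 3)
    for _ in range(4000):                             # integrate 20 time units
        state = lorenz_step(state)
    finals.append(state[0])

# The forecast is the distribution, not a single trajectory.
print("ensemble mean:", np.mean(finals), "spread:", np.std(finals))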
To quote Terry Pratchett, "It's turtles all the way down."
I do not see any report here that provides the validation of any GCM. I would expect to see a report entitled "Validation of the XXX GCM".
These are the exact words of the Met Office when they provided the references:
"Thank you for your email regarding references referred to in a response to a recent Parliamentary Question. Attached is a list of peer-reviewed literature since 1996 that have established the validity of general circulation modelling. If you are interested in reading papers prior to these you will find the relevant literature listed in the reference lists of these papers."
The enquiry was made in good faith, and I believe that the response from the Met Office was as well. However, I have yet to find anything in the literature that comes even close to establishing the validity of general circulation modelling. But there is a lot of reading left to do. Perhaps Richard Betts or Tamsin Edwards could point us to the key publications?
I have some questions about noise.
Does noise include identifiable signals not of interest? Can noise include unrecognized but possibly related signals? Can very low frequency signals go undetected, except possibly in their interaction with other very low frequency signals? What if they can be "sensed" but not analyzed? How would one determine that they are unimportant?
Is there any generally accepted standard for the breadth of understanding of the constituents of a sample before the characteristics of a signal detected can be thought to be identified?
How likely is it that the length in time of our instrumental data is too short to provide a basis for the longer term inferences that are being drawn? Is that an area in which someone could say that because of the signals we can see in the data, we know we don't have enough yet to do anything serious with it - just keep looking?
Is it possible to ask intelligent, hopefully useful, questions in this subject without any real grounding in the physics of the thing - assuming there is any physics to the thing?
"Is there any generally accepted standard for the breadth of understanding of the constituents of a sample before the characteristics of a signal detected can be thought to be identified?"
An excellent question. Let me try to give an example.
Many years ago I worked on passive sonar systems for the RN. One of the problems was to understand and model natural acoustic variability, so that the signal (target) could be detected and classified. In this case, intelligence on the target was necessary in order to implement narrowband filters to increase the S/N ratio. Other filtering methods were also employed.

But how to test the validity of the equipment and analysis? Sea trials were organised in which red and blue forces put to sea and then tried to sneak up on each other and obtain firing solutions. When the trial was over, the warships returned to port and their analysis of the theoretical engagement was compared with ground truth (reality). After repeated sea trials, a very good estimation of the validity of the equipment and methodology was evident.
This is what validation means to me - going into battle with a good chance of success. Applied to GCMs it means having sufficient confidence in their ability to predict future climate states that you are prepared to spend £100 billion on climate mitigation strategies.
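A toy version of the narrowband-filtering idea described above: a weak tonal, invisible in broadband noise, becomes detectable in a narrow FFT bin, and the S/N in that bin grows roughly 10 dB per tenfold increase in integration time. All signal levels and frequencies are invented for illustration:

import numpy as np

fs, f0 = 1000.0, 123.0          # sample rate (Hz), target tonal (Hz)
rng = np.random.default_rng(1)

def snr_in_bin_db(seconds):
    """S/N (dB) of the tonal's FFT bin after 'seconds' of integration."""
    n = int(fs * seconds)
    t = np.arange(n) / fs
    x = 0.05 * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, n)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / n
    k = int(round(f0 * seconds))          # bin containing f0
    noise_floor = np.median(spectrum)     # robust noise estimate
    return 10 * np.log10(spectrum[k] / noise_floor)

for s in (1, 10, 100):                    # broadband S/N is about -29 dB
    print(s, "s:", round(snr_in_bin_db(s), 1), "dB")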
Very short of time so not able to read papers at present. Just looking through the comments, the following in a comment by Martin A caught my eye:

"climate science as a field has invested in extensive systems for collection and calibration of observational data to be used as a basis for this validation"

Maybe I am being over-sensitive, but that reads as though the main purpose of the collection of observational data is to validate climate models? Statements of this type, along with the constant "upgrading" of observational data sets that always results in a colder past and a warmer present, really do make my "confirmation bias" antennae twitch.
I simply repeat the question I always ask.
The IPCC admits to our poor understanding of the climate. So why does ANYBODY believe in the output of models that must be defective from first principles?
Regards
Paul
Roger Longstaff
I spent most of my life learning, understanding, using, and abusing the English language. A fair amount of that time was spent trying to persuade PR people to come clean in their press releases. I am a past master of the art of dissembling, and of listening to others do the same, and so I ask...
Roger, where precisely does that say anything about the validation of any or all of the general circulation models as opposed to the vague theory of modelling in general being valid?
Answer: it doesn't.
And it isn't even good grammatical English.
Roger Longstaff,
"Perhaps Richard Betts, or Tamsin Edwards, could point us to the key publications?"
Don't hold your breath. They'll be here if they can think of some self-serving spin.
Could be a long wait.
I see no reason why one needs to have studied the physics of the thing to ask intelligent useful questions on this subject. Just as one does not need to have studied X to ask questions of Y where:
X = medicine, Y = GP
X = automobile engineering, Y = garage mechanic
X = homeopathy, Y = practitioner of homeopathy
X = nuclear engineering, Y = nuclear installation public enquiry
X = accounting, Y = accountant
X = investment analysis, Y = financial advisor
etc etc
That raises the question of the extent to which many climate scientists really have a grounding in physics or the basic principles of science, in view of their readiness to use concepts ('radiative forcing', for example) that exist only in models and which are intrinsically incapable of being the subject of physical observation.
They give the appearance of using physics and they use words and equations from physics, but is it really physics?
How very interesting.
Many years ago, my dad was a scientific civil servant involved in organising and evaluating trials of underwater military systems, including sonars.
He told me it was a constant battle between the conflicting objectives of:
[A] Establishing the limitations of the systems and avoiding the production of misleadingly optimistic assessments of their capabilities (which could lead to catastrophic results if they were relied on by military commanders).
[B] Confirming that correct strategic decisions had been taken by the management of the development programmes.
The professionalism of the scientific civil service of the time meant that [A] was never sacrificed to achieve [B], although there were pressures to do so.
"Roger, where precisely does that say anything about the validation of any or all of the general circulation models as opposed to the vague theory of modelling in general being valid? Answer: it doesn't."
Ouch! I cannot believe that the MO would torture the English language to provide such an exit strategy. Would it stand up in court?
These citations are uniformly "verba," nothing more. In cyberspace, the term is "sock puppetry"-- posting comments without attribution, then quoting your own verbiage as authority for heaping Ossa upon Pelion. Since AGW Catastrophists inhabit fact-free bubbles to begin with, their Mutual Admiration Society bandies meaningless fabrications ("models") with abandon... but that is no excuse for any independent, rational observer of integrity to grant them the slightest credibility.
Why not, Roger?
Just as inquiries are for setting up, not for answering, so enquiries are for responding to, not for answering.
They don't advertise it as part of the PPE course but Advanced Obfuscation is essential if you want better than a 2.2!
Thanks for the comment Martin. It sounds like your dad and I worked at the same place - AUWE Portland?
As you say, the lack of validation then would have been catastrophic in a time of war - all of our subs would have gone to join the "missing heat" in the deep ocean!
I have another maybe more general question. Has any academic ever suffered professionally at his own institution for publishing nonsense that wasn't also fraudulent or plagiarized? I guess I'm probing for institutional sensitivity to embarrassing publications by their denizens.
Aug 3, 2013 at 12:55 PM | John Shade - John, I agree. The models search for an AGW signal in "noise" that is more than an order of magnitude greater. In some fields (eg sonar), with a priori knowledge of the signal, a good understanding of the noise and long integration times, very impressive signal-to-noise detections can be realised, and validated. But this is not the case for GCMs - the noise is not understood and the a priori nature of the signal (anthropogenic CO2) may well be incorrect.
Aug 3, 2013 at 1:47 PM | Unregistered CommenterRoger Longstaff
Very well said. No number of runs of GCMs can address this problem. The field of climate science is in its infancy. Unlike the field of sonar, climate science has not attained a critical mass of well-confirmed hypotheses that could support an understanding of the noise. Runs of GCMs simply illustrate the a priori assumptions of climate science.
I would add to the Met Office list:
McKitrick, Ross R. and Lise Tole (2012) “Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods” Climate Dynamics, 2012, DOI: 10.1007/s00382-012-1418-9 http://link.springer.com/article/10.1007%2Fs00382-012-1418-9
McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) "Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets". Atmospheric Science Letters, DOI: 10.1002/asl.290
(pre-prints available at http://www.rossmckitrick.com/model-testing.html)
These papers pose simple hypotheses, namely that the models match the observations they are trying to simulate, and then test the hypotheses using sound statistical methods. A surprising feature of past IPCC reports is that, despite having an entire chapter in each one on the subject of "model evaluation" they don't actually get around to model "testing". My Climate Dynamics paper has particular relevance for anyone who thinks climate models get spatial details correct: 20 of 22 GCMs individually contribute either no significant explanatory power or yield a trend pattern negatively correlated with observations.
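For readers who want to see the shape of such a test, here is a minimal sketch of a trend-equivalence check: fit a trend to each series, inflate the standard errors for AR(1) autocorrelation, and test whether the trends differ. The data are synthetic; the papers above use panel methods on real datasets:

import numpy as np

def trend_and_se(y):
    """OLS trend with an AR(1)-adjusted standard error."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                   # effective sample size
    se = np.sqrt(np.sum(resid ** 2) / (n_eff - 2) / np.sum((t - t.mean()) ** 2))
    return slope, se

rng = np.random.default_rng(2)
obs = 0.001 * np.arange(360) + rng.normal(0, 0.2, 360)  # synthetic observations
mod = 0.002 * np.arange(360) + rng.normal(0, 0.2, 360)  # synthetic model, steeper

(b1, s1), (b2, s2) = trend_and_se(obs), trend_and_se(mod)
z = (b2 - b1) / np.sqrt(s1 ** 2 + s2 ** 2)
print(round(z, 2))   # |z| > 2 rejects equivalence at roughly the 5% level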
'A surprising feature of past IPCC reports is that, despite having an entire chapter in each one on the subject of "model evaluation" they don't actually get around to model "testing".'
Aug 3, 2013 at 7:48 PM | Unregistered CommenterRoss McKitrick
It seems to me that they are unwilling to take up the question. They are unwilling to address what would count as a test of a model. I do not understand why this is not a scandal of the first order.
I commented, not too long ago, in the discussion section that the recent "improvements" made to HadGEM3 invalidate the ability of all previous models to detect or simulate the effect of a doubling of CO2. This is because these "improvements" represent a change in the global energy balance which is much larger than the radiative forcing of increased CO2.

In any other branch of science or engineering, such a change would require the revision of older forecasts and their implications. It is only in this special discipline that older forecasts seem to be cast in stone and unquestionable.
If there really is no evidence at all to support GCM validation, then I suggest that we delete the "C" from CAGW, and add it to GCMs:
CGCMs - Catastrophic General Circulation Models
For they surely predict catastrophe. But not the one that they envisaged.
This looks like the Met office version of 'El Libro de las Profecías'
Senna the Soothsayer finds the Garden of Eden