Continental hindcasts
I recently emailed Richard Betts, inquiring about evidence that climate models could correctly recreate the climate of the past ("hindcasts") at a sub-global level. Among other things, Richard pointed me to FAQ 9.2 from the IPCC's Fourth Assessment Report, a continental-scale comparison of model output with all forcings (red band), natural-only forcings (blue band) and observations (black line). This figure also appears in the Summary for Policymakers as SPM 4. There is a similar analysis at subcontinental level in the same chapter of the report.
For now I'm going to focus on the continental-scale analysis.
This on the face of it looks like reasonable evidence of some hindcast skill for a group of climate models, at least as far as temperatures are concerned (Richard says the skill is much less for precipitation, say).
Surprisingly for such an important finding, this seems to have been put together especially for the Fourth Assessment Report rather than being based on findings in the primary literature. (Richard has also pointed me to some papers on the subject, and I'll return to these on another occasion.)
With a bit of a struggle, it is possible to find some details of how FAQ 9.2 was put together: see here. The detail is quite interesting and leaves me wanting to know more. For example, here's how the model runs were chosen.
An ensemble of 58 “ALL” forcing simulations (i.e., with historical anthropogenic and natural forcings) was formed from 14 models. [...] An ensemble of 19 “NAT” forcing simulations (i.e., with historical natural forcings only) was formed from 5 models. See Note 1 below for the list of simulations. Models from the multi-model data archive at PCMDI (MMD) were included in these ensembles if they had a control run that drifted only modestly (i.e., less than 0.2 K/century drift in global mean temperature).
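The screening rule quoted above amounts to fitting a trend to each model's control run and excluding any model whose drift exceeds 0.2 K per century. A minimal sketch of that kind of filter, with invented control-run data (the function names and numbers are mine, not the AR4 authors'):

```python
# Hypothetical sketch of the AR4-style screening rule: keep a model only if
# its control run drifts by less than 0.2 K/century in global mean temperature.
# The "model" series below are invented for illustration.

def drift_per_century(temps, years):
    """Least-squares trend of a control-run temperature series, in K/century."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    slope = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps)) / \
            sum((y - mean_y) ** 2 for y in years)
    return slope * 100.0  # K/year -> K/century

def passes_screen(temps, years, threshold=0.2):
    """True if the control run's absolute drift is below the threshold."""
    return abs(drift_per_century(temps, years)) < threshold

# Invented control runs over a 100-year span
years = list(range(1900, 2000))
stable = [14.0 + 0.001 * (y - 1900) for y in years]   # drifts 0.1 K/century
drifty = [14.0 + 0.005 * (y - 1900) for y in years]   # drifts 0.5 K/century

print(passes_screen(stable, years))  # True: included in the ensemble
print(passes_screen(drifty, years))  # False: excluded
```

Note that this only checks the control-run drift criterion; the supplementary material does not say how ties between model versions or ensemble members were handled.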
This immediately raises the question of why there are so many more models behind the red "ALL" band than the blue "NAT" band. Surely you would want to have the same models in the two bands. Otherwise you'd have an apples-to-oranges comparison, wouldn't you?
Also, I'm struck by the sharp warming shown in each and every continent. I had always believed that the majority of the warming was in the Arctic, but perhaps I am mistaken.
Lastly, I have a vague idea that there is some history behind this figure - did Tom Wigley do a figure like this once or is my memory deceiving me?
Note: This post is about sub-global hindcasts. Comments on radiative physics or other off-topic subjects will be snipped.
Reader Comments (94)
This on the face of it looks like reasonable evidence of some hindcast skill for a group of climate models
One could see it differently. Start with the 'Global' chart: over the period 1900 to 1940 the warming rate was at least as high as the warming rate from 1960 to 2000 [when the anthropogenic effect kicks into the results], and this early warming falls well outside the confidence limits of the ensemble mean (~1/8 of the population spread). The failure is very clear in the 'Global Ocean' chart.
There is some big 'thing' that the models do not understand; in the Global Ocean this 'thing' seems to be maybe twice as powerful as the anthropogenic effect.
So the first half is definitely wrong and the second half right: 50:50. I don't think that calling heads correctly half the time is very skilful. OK, they seem to have failed slightly less badly in the Global Land chart, and so have diced and sliced the land regions (rather than the ocean basins) to hint at some skill.
OK the models always got it roughly right in the last third of the period, but would a climate model that failed to show 1970-2000 global warming have been nurtured by its creators or funders for very long?
What strikes me most about the diagrams is the divergence in actual temperature value between the putative 'natural' and anthropogenic 'forced' values: about 0.5-1.0 deg C, which, given all the recording error margins, station record and urbanisation issues, together with the controversial natural/anthropogenic ratio issue is, let's face it, trivial. I recall an old 60's vintage schooldays atlas. It had neat little bar graphs for each major city with the twelve months along the x axis, vertical bars for monthly average rainfall in inches on the y1 axis, and an average temperature curve showing the annual variation on the y2 axis. And the strangest thing of all is that it's still dead accurate, and still the best depiction of climate in all those places, for practical purposes.
The process of natural selection applies to the evolution of climate models used by the IPCC:
Question: If a GCM wasn't able to reproduce the historical temperature record, would the IPCC use that model in their report?
Hint: Stainforth produced physically reasonable modifications of the Hadley GCM that had climate sensitivities ranging from 1.5 to 11. Reasonable models that can't do so, certainly exist.
If you had a climate model that couldn't reproduce the historical record with one set of anthropogenic forcings, would you look for a different set of forcings with different amounts of aerosols that would allow you to better match the historical record?
Do you think the IPCC forced everyone to use the same set of forcings?
Question: Is anyone shocked that most continents show exactly the same trends as the whole global? If one switched the natural or anthropogenic graphs for North and South America or any other pair of continents, would this create a contradiction? Have we learned anything new by looking at continental regions?
Climate models make very different predictions about changes in future precipitation, but the "fingerprint" of man's influence on historical precipitation is hard to detect. (Susan Solomon has a Science paper claiming to do so.) In an intermodel comparison done in preparation for AR4, most models predicted an increase in precipitation in the Amazon. Only one predicted a major decrease, because it predicted that a good chunk would become savanna. Bonus question: guess which model was highlighted by the WGII SPM?
Even if the Climate Models could accurately predict past and future climate (they cannot) they would still not be Evidence.
All Science needs to be based on real world observed evidence.....Empirical Stuff.......Not BS (Bureaucratic Science) as is the domain of the AGW Warmers.
Apr 7, 2012 at 7:15 PM | John Shade
"When hindcasting, are the model runs monitored as they proceed, and if so, are runs which are clearly deviating well away from the target values discarded?"
John, read the http://www.climateprediction.net project documentation (brief and pretty good), and yes, you are basically correct with that assumption. As I've said before, this really is curve fitting pushed to the limit.
If we can hurl a Rolls Royce into the air and have it come down a bucket of bolts, surely then we can hurl a bucket of bolts into the air and have it come down a Rolls Royce?
To judge from recent correspondence on this site, some of those involved in creating and running these models have little more than a technician-level education, so they can't understand why the four major mistakes in the physics have occurred, or how they must be corrected before the models have any utility beyond being lines of redundant computer code.
This isn't science, it's alchemy.
PS In 2010, US cloud specialist G. L. Stephens wrote that the models use twice the real optical depth of low-level clouds [to offset exaggerated CO2-GW].
As far as I can tell, the paper he submitted for publication has been rejected, presumably so as not to upset the politicians or disrupt the gravy train.
So, as well as being alchemy, we have political censorship. It's time for all independent scientists to shut this down by asserting the truth whatever it may be.
The problem with Fig.SPM.4 here is that the results were pre-determined by the modellers. That would be OK if the results had been tested and not falsified by econometric (LSR) analysis using the raw data on the observed natural and anthropogenic "forcings". That was not done, and never has been done, least of all by Richard Betts and the UK Met. Why not? It is not difficult. I have done it, and in every case the coefficients on the anthropogenic forcing variables are not statistically significant, whereas that on atmospheric water vapour (to which the anthropogenic contribution is trivial) always is.
The underlying issue here is the misuse of the term "experiment" by modellers. Adding or subtracting natural/anthropogenic variables from models is NOT the same as the physical experiments done by Tyndall in 1861 (who added or removed CO2 and H2O from a cylinder of air before heating one end of it), or in effect by John Snow with his analysis of water supply sources during the 1854 cholera outbreak in Soho, or the simultaneous evaluation of independent variables by econometric multivariate regressions. The interesting question is why the SPM preferred to use an invalid methodology rather than one available for 150 years.
I'm guessing that in the next IPCC report they'll use EVEN BIGGER crayons to draw those graphs. You can get rid of all sorts of unfortunate wobbles in the graphs that way.
Apr 8, 2012 at 6:59 AM dogsgotnonose
Please don't be so rude about alchemy.
Alchemy produced some useful things and was conducted (on the whole) in a spirit of enquiry. It evolved into the science of chemistry. Investigating the physical world prior to the establishment of the scientific method could not have been easy.
But what excuse does "climate science" have?
Apr 8, 2012 at 12:37 AM Frank
Obviously not.
Even I, given half a morning, could program a model that would reproduce the historical temperature record. I could probably even do it in Fortran, if pressed.
Frank: Do you work for the Met Office?
Well , Bob Tisdale doesn't think much of the 'new improved' climate models ...
Preview of CMIP5/IPCC AR5 Global Surface Temperature Simulations and the HadCRUT4 Dataset
http://bobtisdale.wordpress.com/2012/04/05/preview-of-cmip5ipcc-ar5-global-surface-temperature-simulations-and-the-hadcrut4-dataset/
For the UK Met Office, hindcasting is forecasting and vice versa. You see, what they do is make a forecast, say, 20 yrs ago which shows the temps continuing to rise, but with 20% uncertainty bars. Then in 20 yrs' time they can say:
the Met Office model did 'forecast' it - or at least, it was within the uncertainty range even if it was not the central estimate.
So easy, this forecasting business, isn't it? Except, of course, when you have to make it before the event.
Frank asks:
Question: If a GCM wasn't able to reproduce the historical temperature record, would the IPCC use that model in their report?
Yes, but not until its output had been 'parameterised' to give the right answer. Hint: Stainforth produced physically reasonable modifications of the Hadley GCM.
If you had a climate model that couldn't reproduce the historical record with one set of anthropogenic forcings, would you look for a different set of forcings with different amounts of aerosols that would allow you to better match the historical record?
No Frank, that is not how real models work. You first fully understand the system you are trying to model, then you create the model; when it doesn't follow the empirical data, you go back to the real world and try to determine why. You do not adjust the model without reason. You do not 'try' another parameter that you suspect will modify the output (answer).
Do you think the IPCC forced everyone to use the same set of forcings?
No, but all GCMs are so dominated by CO2 that other parameters change the output only marginally. That is, you run the model backwards, then you adjust to arrive as close as possible to the historic data (which has itself been adjusted to death), and then you think that gives you the certainty to run it forward with a guaranteed accurate prediction.
Question: Is anyone shocked that most continents show exactly the same trends as the whole global?
NO: the data has been adjusted, and at base it may well have shown only marginal differences between continents. It is not a useless exercise, though, because according to your comment only MOST continents show a similar trend. A real scientist would ask WHY only most and not all.
Bonus question: Get which model was highlighted by WGII SPM?
You mean "which model guessed nearly right by some massive piece of luck, because it is the only time this model has guessed right".
Frank, you are someone easily fooled by computers and their magic. Team up with your friend and colleague Richard. The two of you might be able to convince some of us that you really can predict, even if the UK Met Office never gets it right when they have to predict before the event.
Kind regards to all at Exeter and UEA
Can we see the run that produced the fifteen years flat? Or am I to assume that because I could draw a flat line for fifteen years without going outside the error bars (or is it a range of runs?) then that counts as a prediction? Anyhow, the first thing to do is discard ALL the models and runs which did not show the flat bit. Only those which show it, by whatever criterion, can be right.
Rhoda: you fail to appreciate just how badly broken, probably fraudulent climate modelling really is.
1. Imaginary 'back radiation' used to beef up the claim that IR from the earth's surface is at the black body level in a vacuum exaggerates real warming by a factor of 2.6 [2009 data].
2. Hansen's 1981 claim that the '33 K' is all GHG warming, when the meanest intellect can work out that ~24 K is lapse rate, means the IPCC-claimed CO2-AGW is at least 9.62 times too high.
3. To offset this, the models use double the real low-level cloud optical depth and variable net aerosol cooling, the cloud part of which is based on a mistake by Carl Sagan. Correct this and this cooling is warming, the real GW mechanism.
4. To maintain the IPCC fiction, 'the team' has captured key journals like Science and Nature, which only publish papers supporting the fraud. As shown in Eschenbach's new article on WUWT, the latest Nature paper has an error as bad as Mann's Hockey Stick: an apparently deliberate hiding of inconvenient data showing that during the non-industrial Holocene, T has fallen as CO2 has risen.
It looks to me as if this latest Nature paper is the last-ditch attempt by the fraudsters to confuse the public in the run-up to a desperate [for the Marxists] Rio conference.
Geronimo,
The following paper provides an overview of climate sensitivity estimations -
http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
Sensitivity from response to solar cycle -
http://www.amath.washington.edu/research/articles/Tung/journals/solar-jgr.pdf
Sensitivity from ocean temperature changes -
http://www.gfdl.noaa.gov/bibliography/related_files/jmgregory0201.pdf
Sensitivity derived from response to volcanic eruptions -
http://www.agu.org/pubs/crossref/2005/2004JD005557.shtml
http://homepages.see.leeds.ac.uk/~earpmf/papers/ForsterandGregory2006.pdf
Paleoclimate derived sensitivity -
http://www.nature.com/ngeo/journal/v2/n8/abs/ngeo578.html
"Can you cite the papers that have found the short term natural variations" - I'm not sure what you want here. There are short term natural variations. This is pretty basic.
Sea level rise - http://sealevel.colorado.edu/
September Arctic sea ice extent - http://nsidc.org/arcticseaicenews/
Comparison of above to models -
http://www.ccrc.unsw.edu.au/Copenhagen/Copenhagen_Diagnosis_FIGURES.pdf
Figures 13 and 16.
Full report - http://www.ccrc.unsw.edu.au/Copenhagen/Copenhagen_Diagnosis_LOW.pdf
Your Grace, I do not understand why you are spending any time blogging about climate models. These models are based on the idea that CO2 causes climate change via 'forcing' or 'climate sensitivity' (it always helps to have more than one name); unless you believe that CO2 does affect the climate in this way, any results from models based on this idea will be meaningless and any agreement with reality purely coincidental.
As a postgrad I earned some money helping with maths tutorials. One of the things I found hardest was to explain to students that although they had got the expected answer, they had used the wrong method and so were wrong.
Here is a video of Nir Shaviv at EIKE: http://www.youtube.com/watch?v=L1n2oq-XIxI. Nir explains that climate models have a very bad record of matching extreme events (e.g. volcanoes) when run backwards. However, if you minimise the difference between the output of climate models and past observations, you obtain the best fit when climate sensitivity is reduced so that CO2 has no effect on temperature (surprise, surprise). Note: I found the link to Nir's talk on your blog some months ago. If you have not watched this video I urge you to do so.
Climate modelling is hard, as climate is an example of a nonlinear dynamic system, or chaotic in more popular jargon. It's pretty funny that people are seriously using computers, which have finite precision when representing floating-point numbers, when it is known that in some circumstances chaotic calculations can blow up: tiny differences in starting conditions produce huge differences in answers. It was, after all, a weather man, Edward Norton Lorenz, whose work led to the saying that if a butterfly flaps its wings in Neasden it can lead to a hurricane in Bognor Regis.
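The sensitivity-to-initial-conditions point can be shown in a few lines with the logistic map, a standard chaotic toy system (not a weather model): two trajectories that start a billionth apart become completely different within a few dozen steps.

```python
# Demonstration of sensitive dependence on initial conditions using the
# logistic map x -> r*x*(1-x) at r = 4, which is chaotic on [0, 1].

def steps_to_diverge(x0, eps, max_steps=100, r=4.0):
    """Iterate the map from x0 and x0+eps in parallel; return the step at
    which the two trajectories first differ by more than 0.5, or None."""
    a, b = x0, x0 + eps
    for i in range(1, max_steps + 1):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        if abs(a - b) > 0.5:
            return i
    return None

# A 1e-9 perturbation roughly doubles each step, so it reaches order one
# within a few dozen iterations:
print(steps_to_diverge(0.3, 1e-9))   # some step well under 100
print(steps_to_diverge(0.3, 0.0))    # identical starts never diverge: None
```

Of course, this shows why individual weather trajectories are unpredictable; whether the same limitation applies to long-run climate statistics is exactly what the argument above is about.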
I'm now going to veer off topic and talk about robotics. Early robots were largely useless; their inventors had decided there needed to be a model of the world. Early examples such as Shakey were connected to a mainframe (for the model of the world). Usually Shakey just jerked forward a few inches, then sat whilst the computer recomputed its map of the world for 15 minutes. Progress was made when Rodney Brooks realised that you didn't need a map of the world, as the world serves as a pretty good map for, err, the world.
In a similar way, Piers Corbyn has realised that there is no need for huge supercomputers. Here he is explaining his ideas at EIKE: http://www.youtube.com/watch?v=tbGWLgpylKc and here is a pdf of his slides:
http://www.eike-klima-energie.eu/fileadmin/user_upload/Bilder_Dateien/4th_climate_energy_conference_munich/Piers%20Corbyn%20IV%20International%20Conf%20on%20Climate%20%26%20Energy%20Munich%2025-26Nov%202011.pdf
People are paying Piers for his weather forecasts, presumably because they believe they are more accurate than the Met Office's. Curiously, the Met Office uses a model based on CO2; Piers uses a model based on the position of the sun and moon. To make a weather prediction for a particular date he simply looks back in time to when there was a similar solar/lunar orientation and asks what the weather was like then. CO2 climate sensitivity forms no part of his methodology.
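The "look back for a similar configuration" idea described above is what forecasters call an analogue method. A toy sketch of its shape, with entirely invented state values and weather records (this illustrates the general nearest-neighbour idea only, not Piers Corbyn's actual, proprietary technique):

```python
# Toy analogue forecast: find the archived date whose (hypothetical)
# solar/lunar state is closest to today's, and reuse the weather seen then.
# The archive entries, state variables and distance weights are all invented.

archive = [
    ((70.0, 0.25), "wet and windy"),    # (solar index, lunar phase 0-1)
    ((110.0, 0.50), "settled, warm"),
    ((90.0, 0.75), "cold snap"),
]

def analogue_forecast(solar, lunar):
    """Return the weather recorded on the nearest archived analogue."""
    def distance(entry):
        (s, l), _ = entry
        # crude weighted squared distance; the 50.0 scale is arbitrary
        return ((s - solar) / 50.0) ** 2 + (l - lunar) ** 2
    return min(archive, key=distance)[1]

print(analogue_forecast(105.0, 0.45))  # "settled, warm"
```

The obvious objection, which applies to any analogue method, is that skill depends entirely on whether the chosen state variables actually determine the weather and on how densely the archive samples them.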
Whatever success the models are having going backwards, they are not doing too well going forwards. On page 39 of Piers' pdf you can see that the trend for temperatures since 2000 is down, yet all the IPCC projections are up.
Models rely on input data. Is this right? It is commonly said that temperatures have risen around the world since 1900. Have they? Much of the evidence comes from datasets which have been 'adjusted'.
Michael Palmer, writing here: http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/ surveyed 600 weather stations in known locations which have at least 90% complete temperature records. He found that from 1900 to 2000 the trend was -0.0073 C/year for rural stations and -0.0069 C/year for urban stations.
I thought about doing something similar for UK weather stations, but 600 seemed like too much work. So I settled for one, and the first station I looked at had a continuous record from 1890 to 2010. The trend is 0.007 C/year. BUT the annual mean temperature for 1890 was 7.5C and 6.8C for 2010.
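Those two numbers only look contradictory if "trend" is read as the difference between the endpoints. A least-squares trend weights every year, so a record can end lower than it started and still carry a positive fitted slope if the intervening decades were warm. The series below is invented purely to illustrate that arithmetic (it is not the commenter's station data):

```python
# Invented 13-point decadal record: starts at 7.5 C, ends at 6.8 C, yet the
# ordinary-least-squares trend works out positive (~0.007 C/year), because
# the later decades before the final dip are warm.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

years = list(range(1890, 2011, 10))
temps = [7.5, 7.0, 7.1, 7.2, 7.3, 7.4, 7.6, 7.8, 8.0, 8.2, 8.4, 8.5, 6.8]

print(round(temps[-1] - temps[0], 1))     # -0.7: ends below its start
print(round(ols_slope(years, temps), 4))  # 0.0073: yet the fitted trend is up
```

So a 0.007 C/year fitted trend and a 2010 mean below the 1890 mean are not mutually exclusive; which of the two better summarises "warming" is a separate argument.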
Here is a story about the GISS dataset being adjusted to make temperatures lower at Reykjavik in the early 1900s, producing a greater temperature rise to 2000: http://notalotofpeopleknowthat.wordpress.com/2012/01/24/giss-make-the-past-colder-in-reykjavik/ You can download the monthly means as originally published: http://icelandweather.blog.is/blog/icelandweather/entry/1230185/
Here is another story, of CRUTEM4 not reflecting record temperatures in North America in 1913: http://www.real-science.com/smoking-gun-that-crutem4-is-useless
So if the models are based on a false premise and the input data is wrong, what is the point in looking at them, whether they run backwards or forwards?
I think that all here agree that hindcasting cannot validate computer models of complex systems - only predictions that can be compared with reality can. However, there seems to be some confusion over whether climate models accurately predicted "no significant warming over the last 15 years". I therefore have a suggestion.
In my world of aerospace I have to prepare reports on vehicle performance that require trajectory modelling. This involves codes that have been validated against wind tunnel and flight data. However, when they are used for new applications, and results are reported on, I always freeze the code, with hardwired input data, in order that I can always return to it if questions arise. Sometimes this happens and I re-run the code and find a mistake. Nobody likes the "sinking feeling" of finding a mistake; however, the responsible scientist / engineer admits it, and then proceeds to put the situation right to the best of his / her ability.
So here is the suggestion - all climate model forecasts that appear in official documents of the IPCC, Met Office, etc., reference the code and the hardwired input data that produced them - officially catalogued in a read-only database - in order that they can be re-examined if questions arise.
Rob Burton (Apr 8, 2012 at 3:18 AM)
Had a quick look, thanks. My first, rather cynical, reaction is that this looks like it could be a project for the harvesting of 'good' runs along with their associated initial configurations and other settings. Where 'good' of course means conforming to political requirements. But surely not, surely things have not degenerated this far?
Frank: Some people appear to have mis-read your post
"Question: Is anyone shocked that most continents show exactly the same trends as the whole global? If one switched the natural or anthropogenic graphs for North and South America or any other pair of continents, would this create a contradiction? Have we learned anything new by looking at continental regions?"
The Mannic Depressives believe that the MWP was confined to the Northern region, and they control WG1, so they should be shocked. Shocked, I tell you!
My guess is that the single model that showed drought in the Amazon was the one used in WG2.
There are mysteries about the models that could be easily resolved. It's believed that they are tweaked to produce the hindcasts, primarily with extra aerosols to offset extra heat; do they then take that same "tweaked" model and use it for forecasting? Engineers have been using models for as long as they've been around, admittedly in areas of science that are pretty well understood, and, as someone has pointed out already, once they have tweaked the hindcasts they use exactly the same parameters for the forecasts.
I'm not sure that the Mannic Street Preachers working for the IPCC do that, and such is my state of mind with the IPCC and its output that I wouldn't believe them if they said they did, without proof and documentation.
mdgnn: Many thanks, I was lamenting your daily input on back radiation not being available yesterday and am pleased to note that you have fitted it into the model meme. I wonder if there is such a disease as Back Radiation Deprivation Syndrome? If there is I was suffering the first twinges yesterday, along with Martin A it appears.
Roger Longstaff: I too have pondered the use of quality systems in the production and maintenance of the models. Although there isn't the slightest reason to doubt Richard Betts in the matter, I'm left with the suspicion that there won't be an audit trail to the model output that showed the cooling period. Another puzzle is that Vicky Pope, apparently now Deputy Politician in Chief at the Met Office (congratulations Vicky), said in 2008 that the cooling period was a natural phenomenon and we should expect temperatures to rise soon. If they had forecast the hiatus in warming, why didn't she simply say "our models forecast a hiatus in warming"? But she didn't.
Since it's Easter Sunday and I suspect we're all half-asleep, let me throw this small pebble into the pond. Just for fun.
Can anyone (Vicky Pope would be good) explain why, if the 1998-2008 cooling period was a "natural phenomenon", all the other recent warmings and coolings haven't been "natural phenomena" as well?
I agree with this:
From the supplementary material pdf linked to they detail the comparison as follows:
This makes no sense to me. Only the same models should have been used in both cases and they should have been run the same number of times. And also why run the models a seemingly arbitrary number of times?
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
* Attributed to John von Neumann by Enrico Fermi.
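The quote is about over-parameterisation: give a model enough free parameters and it can match any finite record exactly, which says nothing about forward skill. A small, self-contained demonstration (the data points are arbitrary; the polynomial-fitting code is a generic Vandermonde solve, not anything from a climate model): five parameters - a degree-4 polynomial - pass exactly through any five points, then extrapolate wildly.

```python
# Fit a degree-4 polynomial (five free parameters) exactly through five
# arbitrary points, by solving the Vandermonde system with Gauss-Jordan
# elimination. Perfect hindcast, meaningless forecast.

def poly_through(points):
    """Coefficients c0..c4 of the unique degree-(n-1) polynomial through points."""
    n = len(points)
    A = [[x ** j for j in range(n)] + [y] for x, y in points]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def evaluate(coeffs, x):
    return sum(c * x ** j for j, c in enumerate(coeffs))

points = [(0, 1.0), (1, -2.0), (2, 0.5), (3, 3.0), (4, -1.0)]  # arbitrary "data"
coeffs = poly_through(points)

# The fit is exact at every data point...
print(all(abs(evaluate(coeffs, x) - y) < 1e-9 for x, y in points))  # True
# ...but one step beyond the data it swings to about -19:
print(round(evaluate(coeffs, 5), 2))  # -19.0
```

The analogy to tuned hindcasts is loose, of course: GCMs are not curve fits in this literal sense, but the warning about judging a many-parameter model by its in-sample match is the point of the quote.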
anivegmin: Thanks for the response. I probably didn't phrase the question about natural variations well, but as it happens Mike Jackson has half explained it above. Vicky (Deputy Politician in Chief UK Met Office) Pope explained the current hiatus as coming from natural causes, well I was asking what are they and how have we measured them?
If you don't hear from me in the next 10 weeks it's because I'll be trying to read the first source you sent me!
anivegmin: You're not a reincarnation of BBD are you?
Could I ask a silly question - maybe it has been answered in the comments elsewhere - I am not a scientist by any means.
As I understand it the method of creating a model (either to try and predict the future or to recreate the pre-measurement past) is to use measured data for a period, and then match your model's predictions against measured data from another period - the creating and verifying steps ?
However as I understand from various places the observed data has been subject to quite a little, er, "tweaking" - some of it explained (e.g. for temps replacement instruments, moved stations etc.) and some not (Dr Hansen I believe does quite a bit).
If the latter is the case, how can a model be created or calibrated accurately?
Just wondering.
anivegmin writes:
"The usual litany of comments, some verging on the anti-scientific......"
and:
"Commenter's posts in the various threads about models on this site seem to think that the models live in a world of their own...."
I will be the first to agree that authors of published science can often get frustrated by criticisms that appear to assert that the authors have wholly ignored issues that the authors really have given much thought to. This is a frustration for critics too. We do not wish to waste anybody's time [well I know I don't]. But just because the authors believe they have addressed the issues adequately, that doesn't mean it is necessarily true. It is too easy for scientists and 'non-scientists' alike to feel insulted or become angered by "questions that can't be asked" [Yes, I have heard academics giving papers in esteemed journals a good rubbishing, but at the same time defending the reviewers because the reviewers couldn't ask the obvious question for fear of causing outrage].
Unfortunately, if a scientist's work is going to be used to justify changing the economic base of the industrialised world, then that scientist had better start getting used to answering those questions.
Getting back wholly on-topic, the image in this article is attributed to the IPCC in 2007. The figure shows data plotted up to the year 2000 using, I think, data up to 2005 [centred decadal means].
There is a lot of discussion that could be had about this graphic. But in short:
It is not a forecast now. It was not a forecast in 2007.
Hind-casts don't cut it. See my comments to the open letter by iwannabeasceptic on this blog:
http://www.bishop-hill.net/discussion/post/1729708
I've just had an OMG moment. My last post on the Bishop Hill blog [linked above] actually DID contain an AWESOME prediction for the month of March, and I had completely forgotten.
OMG! Totally righteous. No, you're welcome. Really. Don't mention it. Please.
Do the skeptics who comment on the validity of General Circulation Models (GCMs) have an understanding of their fundamental assumptions? They tend to focus on secondary matters such as feedback mechanisms when the models’ most basic assumptions are beyond the realm of acceptance.
The dynamical equations contained in these complex “sophisticated” models are derived by applying Newton’s second law to a single parcel of air which is assumed never to mix with the atmosphere. Once the equations are derived they are arbitrarily “transformed” from a Lagrangian to an Eulerian frame of reference. Such a transformation allows the equations to be treated as though they have general applicability to all diffusive atmospheric conditions. This is contrary to assumptions of applicability only to a single indivisible air parcel as contained in the derivation of the equations.
The models would be rejected out-of-hand by any student of logic.
I am always interested in the ability of different models that have been validated by hindcasting to produce radically different forecasts.
Could it be that hindcasting is a flawed procedure?
I notice some of the data is "forced"...
Is that the same as when I play a card trick on someone, and force a card on them they think they have chosen of their own free will?
Douglas Leahey: my understanding is that they are finite-difference grids of the Navier-Stokes momentum equations.
There's not much that can go wrong with finite difference except the large grid size and round-off errors.
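The "grid size" caveat above is easy to quantify for a simple case. The standard central-difference approximation to a second derivative (the kind of operator a finite-difference dynamical core discretises) has truncation error of order h², so halving the grid spacing should cut the error by roughly four. A minimal check on a known function:

```python
import math

# Central-difference second derivative on a grid of spacing h:
# (f(x-h) - 2 f(x) + f(x+h)) / h^2 = f''(x) + O(h^2)

def second_diff(f, x, h):
    """Central-difference approximation to f''(x) with grid spacing h."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

exact = -math.sin(1.0)  # d2/dx2 of sin(x) is -sin(x)
err_coarse = abs(second_diff(math.sin, 1.0, 0.10) - exact)
err_fine = abs(second_diff(math.sin, 1.0, 0.05) - exact)

print(round(err_coarse / err_fine, 1))  # ~4.0, as expected for O(h^2)
```

This is only the truncation-error half of the story; as the spacing shrinks further, the cancellation in the numerator eventually makes floating-point round-off dominate, which is the other failure mode mentioned above.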
"Layman's possibly stupid question. If I ask ten people the way to a town and they point in several different directions then why would I think that if I had asked just two people that their averaged direction would be any more accurate than the averaged ten?" -- artwest
Great analogy, given the fact that these ten people you're asking are NOT IN TOWN. Just as the model "ensembles" (or ragouts, if you prefer) they haven't been able to find their way back to town in ten years. Why should we believe any of them?
For all areas there seems to be a modelled divergence starting at 1950. The anthropogenic trend is flat or even negative for a short initial interval (why?) then rises exponentially. The natural modelled component invariably declines, perhaps gently, but always remorselessly. What is the justification for this decline? Has natural warming not only been 'thieved' into the anthropogenic modelled 'account', but then some, for good measure?
Pull the other one.
'mydogsgotnonose' commented on Continental hindcasts:
Douglas Leahey: my understanding is that they are finite difference grids of the Navier Stokes momentum equilibrium. There's not much that can go wrong with finite difference except the large grid size and round off errors.
I don't understand this response. If the physical assumptions used in an equation's derivation are indefensible then the finite difference schemes used to integrate them cannot be relevant. The equations must still be rejected.
Douglas Leahey
I have studied other IPCC multi-model "models". You have to look at the individual runs, many of which are in the Annexures. The range of answers is so wide that you can find many pairs of outputs with exactly opposite predictions. However, when you take the whole lot together, you land up with the answer you wanted. This is cherry-picking carried to a really fine art form - science it is not.
It doesn't matter whether you use a Lagrangian or Eulerian frame of reference.
'mydogsgotnonose' commented on Continental hindcasts:
It doesn't matter whether you use a Lagrangian or Eulerian frame of reference.
Well I couldn’t agree more.
The dynamical equations expressed in the GCMs are explicitly derived for a single parcel of air of 1 m3 volume which is assumed to maintain its integrity and not mix with the environment (e.g. Haltiner and Martin 1957). It is irrelevant what frame of reference is used to express the equations. Such a physical assumption remains unacceptable for equations which are being used as the basis for predicting global-scale atmospheric motions.
(How the movement of a single small parcel can be rationally expressed in Eulerian coordinates does nonetheless escape my understanding.)
Reference: Haltiner, G.J. and F.L. Martin, 1957: Dynamical and Physical Meteorology. McGraw-Hill Book Company Inc., New York, Toronto, London.
Douglas Leahey
Frank’s Question: Is anyone shocked that most continents show exactly the same trends as the whole global? If one switched the natural or anthropogenic graphs for North and South America or any other pair of continents, would this create a contradiction? Have we learned anything new by looking at continental regions?
This seems a very important point – there is hardly any continent scale information to be had from the graphs.
There is one exception: Australia which, in contrast to the others, cooled prior to 1950. However, yet again the models failed to hindcast this.
I don’t think that it is crystal-ball gazing to predict that ‘evidence will be found’ for a spike in CO2 or some such that will allow the models to match the data.
At the end of the day it boils down to whether you can go along with the belief in the transubstantiation of the world’s climate into a wafer of silicon