Dangerous climate change?
This is a slightly edited version of a comment Richard Betts left on the discussion forum. I thought it was quite challenging to much of what we hear about climate change in the mainstream media and therefore worthy of posting here as a header post. (Richard, for anyone visiting for the first time, is head of climate change impacts at the Met Office).
Most climate scientists* do not subscribe to the 2 degrees "Dangerous Climate Change" meme (I know I don't). "Dangerous" is a value judgement, and the relationship between any particular level of global mean temperature rise and impacts on society is fraught with uncertainties, including the nature of regional climate responses and the vulnerability/resilience of society. The most solid evidence for something with serious global implications that might happen at 2 degrees is the possible passing of a key threshold for the Greenland ice sheet, but even then that's the lower limit, and it would probably take centuries to take full effect. Other impacts like drought and crop failures are massively uncertain, and while severe negative impacts may occur in some regions, positive impacts may occur in others. While the major negative impacts can't be ruled out, their certainty is wildly over-stated.
While really bad things may happen at 2 degrees, they may very well not happen either - especially in the short term. There may be a commitment to longer-term consequences such as ongoing sea level rise that future generations will have to deal with, but imminent catastrophe affecting the current generation is far less certain than people make out. We just don't know.
The thing that worries me about the talking-up of doom at 2 degrees is that this could lead to some very bad and expensive decisions in terms of adaptation. It probably is correct that we have about 5 years to achieve a peak and decline of global emissions that give a reasonable probability of staying below 2 degrees, but what happens in 10 years' time when emissions are still rising and we are probably on course for 2 degrees? If the doom scenario is right then it would make sense to prepare to adapt to the massive impacts expected within a few decades: we would have to start spending billions on new flood defences, water infrastructure and storm shelters, and it would probably also make sense for conservationists to give up on areas of biodiversity that are apparently "committed to extinction". However, none of these things makes sense if the probability of the major impacts is actually quite small.
So while I do agree that climate change is a serious issue and it makes sense to try to avoid committing the planet to long-term changes, creating a sense of urgency by over-stating imminent catastrophe at 2 degrees could paint us into a corner when 2 degrees does become inevitable.
*I prefer to distinguish between "climate scientists" (who are mainly atmospheric physicists) and "climate change scientists", who seem to be just about anyone in science or social science who has decided to see what climate change means for their own particular field of expertise. While many of these folks do have a good grasp of climate science (atmospheric physics) and the uncertainties in attribution of past events and future projections, many sadly do not. "Climate change science" is unfortunately a rather disconnected set of disciplines, with some not understanding the others - see the inconsistencies between WG1 and WG2 in IPCC AR4, for example. We are working hard to overcome these barriers but there is a long way to go.
Reader Comments (285)
Mike, we need seven men that can shoot straight!
I'm off to the pub.
Cheers, R.
Nov 16, 2011 at 6:40 PM | Roger Longstaff
Hi Roger,
We can be pretty certain that the recent CO2 rise is human-caused, because the rate of rise of atmospheric CO2 is only about half of the rate of anthropogenic emissions. In other words, there is more than enough CO2 being emitted by human activity to account for the increased concentration in the atmosphere.
We know emissions and CO2 concentrations rather well, and if emissions are larger than the change in CO2 concentration in the atmosphere then the difference must be due to a net sink somewhere else - ie: the natural part of the carbon cycle must be currently a net sink.
So, (uptake by ocean waters + uptake by photosynthesis of land vegetation) is larger than (release by outgassing + release by respiration).
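[Editor's note: a back-of-envelope sketch of the "about half" figure may help here. The numbers below are illustrative round values, not taken from Richard's comment; the conversion factor of roughly 2.13 GtC per ppm of atmospheric CO2 is a standard approximation.]

```python
# Back-of-envelope check of the "rise is about half of emissions" claim.
# Round illustrative numbers; ~2.13 GtC of carbon corresponds to 1 ppm of CO2.
GTC_PER_PPM = 2.13

emissions = 9.0                  # assumed anthropogenic emissions, GtC/yr
observed_rise_ppm = 2.0          # assumed observed CO2 rise, ppm/yr
observed_rise_gtc = observed_rise_ppm * GTC_PER_PPM   # ~4.3 GtC/yr

airborne_fraction = observed_rise_gtc / emissions
print(f"airborne fraction ~ {airborne_fraction:.2f}")  # ~0.47, i.e. about half
```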
Incidentally, you may find this animated visualisation of CO2 rise interesting - it puts the recent instrumentally-measured CO2 rise in the context of the CO2 concentrations over the last 800,000 years (from ice cores).
Hi Richard,
I am not sure that I fully understand, or agree with, your statement "We can be pretty certain that the recent CO2 rise is human-caused, because the rate of rise of atmospheric CO2 is only about half of the rate of anthropogenic emissions."
If I remember correctly, our current understanding is that anthropogenic emissions of CO2 account for about 3% of the total, but the current rise in CO2 concentration in the atmosphere (currently 0.04%) is the result of a net effect - the difference between total source and total sink. My speculation (which is not original) was that it is a change to the sink - the decreasing capacity of the (naturally warming) oceans to absorb CO2 - that is contributing, at least in part, to the rising levels of CO2 that we have seen in recent decades.
I think that a lot of our current estimates (excluding anthropogenic emissions from combustion, which we can quantify) are pretty uncertain, and which effects are dominating (eg. anthropogenic emissions or warming oceans, or other factors) is not as clear as some might have us believe.
Thanks for the video, but as I have previously stated, I think that the ice core methodology may be fatally flawed. However, I accept that the jury is still out on that one!
This is indeed a very productive and interesting thread, and I wish I could spend more time within it. This will have to be my parting shot however.
A useful post for references to studies of the limitations of global and regional climate models, and for some lively prose, is here: http://theresilientearth.com/?q=content/scry-me-river. Pointing out the severe limitations of such models will of course only encourage the modellers to ask for more money for better ones, while cynics end up howling at the moon. But some of those in power are surely always trying to see which way the wind (political kind!) is blowing, and may wish to judge the allocation of resources accordingly.
The budgeting (sources, sinks) of any atmospheric constituent is an interesting study, since so much is involved, with many uncertainties whether the study is of Argon or Oxygen or anything else up there. CO2 modelling is no exception - for example, researchers in that field have long enjoyed the 'missing sink' problem. Estimating human contributions is not done by direct measurement, but by proxies such as energy production data combined with some algorithm for computing associated CO2 releases. So how good is our data on such proxies? Some thoughts and further links are given here: http://sppiblog.org/news/china-india-indonesia-brazil-can%E2%80%99t-estimate-their-greenhouse-gas-emissions%E2%80%94latest-figures-are-from-1994
Even without us humans, there would be - and there have been - substantial variations in ambient CO2 levels. For example, warming seas are expected to release more CO2. Human release rates are widely taken to be around 3 to 4% of the natural ones, and this blogpost asserts that their net effect has averaged out to a 1 ppm per year increase: http://diggingintheclay.wordpress.com/2011/02/24/the-futility-of-trying-to-limit-co2-emissions/#more-1400 . It further asserts that 1 ppm per year is equivalent to an annual increase in warming from whole-world man-made additions of about 0.0032°C per annum. Now this is merely a blogpost, and people can say the most awful nonsense on blogs without much hesitation, but the referenced post does look to me to be a genuine effort on one person's part to make sense of the information they have had access to. And he does, in the end, merely appeal for 'due diligence' to be applied before massive interventions are made into our lives:
So where is the due diligence?
Although policies are already well underway and being implemented to reduce Man-made CO2, the Essential Due Diligence does not exist, and, as the analysis above shows, it is more than ever necessary to call for it. So, if sceptics are accused of asking the same questions over and over, perhaps it is time to take them seriously before we condemn the world community to an expensive and futile exercise.
Hi Roger
Thanks for your further question.
The clincher for me is this: if anthropogenic CO2 emissions are X but the change in CO2 concentration is only 0.5X then where is the other 0.5X of the anthropogenic emissions going? (It must be going back into the biosphere and oceans.)
It's a simple matter of conservation of mass - we are putting enough CO2 into the atmosphere to easily account for the measured CO2 rise.
It's a bit like saying that we've seen a half-pint glass of beer get filled up, and we know we've tried to pour a whole pint into it, so this easily explains why the half-pint glass is now full (and why we also have a mess on the pub floor).
:-)
Cheers
Richard
Richard,
With respect, I think that you have demonstrated the flaw in your argument.
Anthropogenic emissions are X, but total emissions are X + Y, where Y is natural emissions and Y is at least two orders of magnitude greater than X. Your analysis only applies if Y is constant, but my point is that Y varies with naturally rising and falling temperatures (for example between LIA and MWP values), and other factors such as volcanoes, and changes in Y may easily dominate over changes in X.
I think that the only thing that we can be certain about is that we can not be certain!
Hi Roger
You're right that the natural emissions are much larger than human emissions, but there are also the natural sinks to take into account. Let's call those Z.
Let's also use C for the overall change in atmospheric CO2.
So we have:
X + Y - Z = C
Now, we know from observations that C happens to be about 0.5X (ie: as I said earlier, the rate of rise of CO2 in the atmosphere is about half the rate of anthropogenic emissions).
So substituting C = 0.5X into our equation we get:
X + Y - Z = 0.5X
So, subtracting 0.5X from both sides we get:
0.5X + Y - Z = 0
And then adding Z to both sides we get:
0.5X + Y = Z
So since X, Y and Z are all positive numbers (as defined above) this shows that Z (the natural sink) must be larger than Y (the natural source).
(ie: if you add together two positive numbers, you must get a third positive number which is larger than either of the original positive numbers)
So it doesn't matter whether Y and Z are huge in comparison with X (which indeed they are), this still shows that Z is larger than Y, ie: the removal of CO2 from the atmosphere by natural processes is larger than the net emission to the atmosphere by natural processes.
The net natural exchange of CO2 between the atmosphere and the land+ocean (ie: Y-Z) is a negative number. ie: the natural processes are acting as a net sink overall.
QED :-)
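[Editor's note: here is a minimal numerical sketch of the X/Y/Z bookkeeping above. The figures are purely illustrative; none of them come from the comment itself.]

```python
# Numerical sketch of the X + Y - Z = C argument (illustrative figures only).
X = 9.0                  # assumed anthropogenic emissions, GtC/yr
C = 0.5 * X              # observed atmospheric rise, ~half of emissions

net_natural = C - X      # = Y - Z, rearranging X + Y - Z = C
print(f"Y - Z = {net_natural:+.1f} GtC/yr")   # negative, i.e. a net natural sink

# The sign of Y - Z does not depend on how huge Y and Z individually are:
Y = 200.0                # a huge natural source (illustrative)
Z = Y - net_natural      # implied natural sink, 204.5 here
assert Z > Y             # the natural sink must exceed the natural source
```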
Cheers
Richard
Hi again Roger
I forgot to add: you are of course absolutely right that Y (and Z) vary according to temperature and other climatic factors. This is the reason why the annual rate of rise of CO2 in the atmosphere varies a bit from year to year - we see a faster rise in an El Nino year and slower in La Nina, for example, and also a slower rise when a major volcano has temporarily cooled the planet by putting reflective aerosol particles up in the stratosphere (eg: Mt Pinatubo in 1992). My argument above applies only to the long-term average - you are right as far as year-to-year variations in the CO2 rise are concerned.
Hi Richard,
Just finished work for the day and poured a large drink - so too late for algebra! However, I think that we are now in agreement, after the second of your posts? I think it all depends on the rate of change of Y and Z, and how that compares with the rate of change of X (sounds like a bit of calculus is required?).
I'll have another think in the morning.
Cheers, Roger
Richard,
I thought that I had to come back, as you went to the trouble to explain your logic, which I now understand and accept.
The question remains - what happened in pre-industrial times, when X was effectively zero and (within the current interglacial period) temperatures cycled by 5-6C (?) over a roughly 2000 year period (Roman Warm Period - Dark Ages - Medieval Warm Period - Little Ice Age - ....). Were CO2 levels constant, as your video showed based upon ice core data, or did they fluctuate by 100 - 150 ppmv as shown by chemical analysis and stomata data? I do not know, but I would have thought that university physics and chemistry departments should be able to answer these questions for us. Do you think that it is worth trying to arrange this, as it seems to be crucial to the AGW hypothesis?
It also remains my personal opinion that even if there was a significant increase in atmospheric CO2 levels (say, a doubling to 0.08%) this would have a negligible effect on temperatures (but would in fact be highly beneficial to humanity because of increased plant growth), as planetary surface temperatures are consequent only upon insolation, gravity and atmospheric density (of an atmosphere well mixed by planetary rotation), as determined by the gas laws and conservation of energy. This model works for Earth, Mars and Venus, and does not require complex computer modelling, which cannot be verified.
Sorry to change the subject, but I needed to clear my mind. Tricky stuff, this CO2! Thank you for engaging in debate.
Cheers, Roger
OK, it took me a while, but I've had a look at some papers:
(a) "Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn", Etheridge et al., J Geophys Res. D 1996 p. 4115, http://dx.doi.org/10.1029/95JD03410, which is one of the key papers on the ice cores. (I didn't find a free version - I think this is paywalled).
(b) Ernst-Georg Beck's paper, "180 Years of Atmospheric CO2 Gas Analysis by Chemical Methods", Energy and Environment 2007, 18, 259 (available for free download here).
(c) One of the stomatal frequency papers, Kouwenberg et al, "Atmospheric CO2 fluctuations during the last millennium reconstructed by stomatal frequency analysis of Tsuga heterophylla needles", Geology 2005, p. 33, available free here.
I looked at a few other ones also but those are the main ones.
I think I stand by my previous conclusion: the chemical measurements in the pre-1950 period (as collated in (b)) are not reliable enough. Some of the measurements may have been made using inadequate titration methods, though I can't judge that for myself, but mostly, many of the measurements were made in places where the air sample is simply not 'background well-mixed air'. So the peak in the 1940s is almost certainly an artefact. Roger queried the issue of measuring heights of 0.5 vs. 2m above ground - but neither of these is guaranteed to give background air. You would need to be hundreds of metres above the ground - outside the so-called boundary layer - to guarantee that.
The stomata data are interesting, but Fig. 1 of that paper suggests that as a proxy for CO2 concentration this is not great. The correlation between CO2 concentration and stomatal frequency is weak, to put it mildly. So the results are suggestive of variable CO2 over the years - but not very strongly so - and even then, that may be due to local conditions, plants not growing in 'background air'.
The ice measurements claim amazingly low error bars, +/- 10 ppm or lower, which I find hard to believe. There's a complicated issue to do with when the bubbles actually got isolated from the open atmosphere, as the snow settled into 'firn' then ice. Diffusion could clearly lower the temporal resolution such that spikes would not be seen. But these measurements seem likely to me to be about right. The IPCC, not unexpectedly, probably overestimate the confidence in these values being right, but in my view not hugely so. I'll carry on having more doubts about other aspects of AGW science than this one.
The dialogue between Roger L and Richard B was nice, and emphasizes another important point: the fact that "Z - Y" - the amount of CO2 absorbed by natural sinks minus the amount emitted by non-anthropogenic sources - is very big. And probably not so well understood.
Jeremy,
Excellent work! Thanks for the references, which I will follow (the link to the stomata papers actually goes to Beck's, but googling the title you give gets rapid access to them).
Concerning the stomata stuff - a quick look yields more confusion for me. For example, I blatantly "cherry picked" the following:
"...therefore call into question the concept of the Intergovernmental Panel on Climate Change, which assumes an insignificant role of CO2 as a preindustrial climate-forcing factor"
but I do not want to do what I accuse others of, and I could have equally cherry picked the opposing viewpoint. The views of a biologist/botanist would be much appreciated (if anyone else is still reading this?).
We have discussed the chemical measurements, and the only comment I have is - can the measurements be replicated in a modern university chemistry lab, using the original methodology and equipment? I would have thought that this would be a relatively simple undergraduate project.
Which brings us back to the ice cores. Some months ago I spent several hours researching this, and came across many partisan papers and articles, both for and against. Some claimed that isotope ratios were the "knock out blow", then others refuted this. To me, the most convincing arguments against were that ice inclusions could not preserve original samples of air over time, given the inevitable changes in temperature and pressure, as the higher solubility of CO2 (with respect to O2 and N2) gave rise to consistently low measurements (ice inclusions have a thin inner film of water, down to -40C, consequent upon surface tension). (Please check - all of this is from memory!)
So what to do? Could a modern physics lab study the ice inclusion process? If the AGW hypothesis rests upon lower, pre-industrial CO2 concentrations it would surely be worth it.
I will continue to look at the references that you have found. I wonder if it is worth summarising this thread in order to list the outstanding questions and points of debate?
Regards, Roger
Dear Roger,
Sorry about the link error. Note I did not dig out any stunningly original references - I just read a few of the papers mentioned in this thread that people had linked to. I looked at a few other references, but they did not add anything material.
I forgot to reply to your query of Nov. 17 at 10:24, "I do not know, but I would have thought that university physics and chemistry departments should be able to answer these questions for us." Well, I fear it really is not so easy. The problem here is the classic one of reconstructing the past: it has now departed! And people in the 17th century were inconsiderate enough not to carry out modern measurements of CO2 concentrations at Mauna Loa or other such sites. So we have to resort to proxies. As with palaeotemperatures, proxies are problematic: you need to calibrate them carefully given the noise, and you need to be sure that the calibration curve you derive from the period when you can measure both the desired property and the proxy would still have held in the past.
From what I can see, the plant stomata data fail on both of these tests: the calibration curves are very noisy (Fig. 1 in the paper I mentioned), and other studies suggest that different species have different response curves, and it is hard to be sure evolutionary changes would not have occurred in the past also (see "A critical framework for the assessment of biological palaeoproxies: predicting past climate and levels of atmospheric CO2 from fossil leaves", G J Jordan, New Phytologist, 2011, vol. 192, 29, http://dx.doi.org/10.1111/j.1469-8137.2011.03829.x, sadly probably paywalled, but maybe you can google it). I'm sure these measurements provide some CO2 information - but I would not trust them over and above the ice cores.
About the ice cores: sure, CO2 can dissolve in the ice around an air bubble, and perhaps diffuse away. There is the bigger problem that the date at which the bubble gets isolated can be many decades after the corresponding layer of ice fell as snow. The people doing these studies use various techniques to model such effects and generate appropriate error bars. At my level of knowledge of what they do, my guess is that their error bars may be somewhat optimistically narrow, but I can't see strong evidence that they are way off-beam. Hence my overall conclusion that I presently feel that the ice core CO2 measurements are quite likely to be mostly right.
And we have climatologists taking ice core readings which may also be wrong for the reasons you have given, but they have "techniques to model such effects"*, and so the chances are that their findings - which by coincidence just happen to be the most convenient ones for all our anti-human environmental extremists to berate us with - are the correct ones.
I'm not saying they aren't but it would be nice to see some evidence that the IPCC has seriously considered the alternatives.
Please believe I'm not having a go at you. It's just that I find myself confused, like a lot of other people, and I am far from convinced that the climate research which finds its way into IPCC reports is as honest, truthful, and accurate as it ought to be.
* I seem to remember that Mann and Briffa, amongst others, also had "techniques to model such effects" and look what happened!
Fair point, Mike J - my language does sound very hand-wavy. Seriously, there are lots of papers on trying to model what happens in the ice. I looked at a few, and my view is that while lots of things could conceivably introduce some errors in the ice-core derived values, I doubt they could at the 50-100 ppm level. Errors of that magnitude in the older direct measurements, and in the stomata proxy-derived values, are otoh very easy to imagine.
You or someone else pointed out up-thread that the IPCC AR4 had not cited the stomata work. I haven't checked that - but if true, it is disappointing (again). A full and fair review should cite all serious results, albeit with a judgement call about which one is most reliable. Having looked at some papers (and with my professor-of-chemistry hat on, ie someone used to looking at the literature, albeit only on related topics), I've given my view on this: I trust the ice cores most, though as a lukewarmer who distrusts a lot of IPCC stuff I would not have been surprised to conclude otherwise. And I don't exclude natural variability having been bigger than the IPCC claim, but I doubt they are off by 100 ppm, say. So modern CO2 levels are higher than those in historical times, in my view.
Jeremy,
Two quick questions:
1. I found a summary of the stomata paper (concerning possible evolutionary changes). Could not DNA analysis give a definitive view on this?
2. As a chemistry prof., could you at least verify the methodologies reported by Beck, using accurately calibrated modern samples? (I agree that we have no samples from the past). At least this could rule out one potential source of error - or otherwise.
Cheers, Roger
Jeremy
I've no direct evidence that AR4 didn't include the plant stomata papers. I think my source was Jaworowski so, given his stance on the whole subject(!), this might not be 100% reliable.
My instinct (which lets me down regularly, of course!) tells me that CO2 is just that wee bit too convenient, and whenever anybody suggests that, they get shouted at or sneered at. That is definitely not the way to win friends and influence people if you want them on your side (ask any salesman or reporter - and I've been both).
Of course the alarmists may not care whether I'm on their side or not. Which is even more alarming.
I'm happy to go along with GW; AGW, obviously, to an extent - it would be weird if man didn't have some effect on his environment; CAGW, sorry, I'd need a deal of convincing!
Must go. Time I spent an evening in the bosom of the family!
Nov 16, 2011 at 5:02 PM Richard Betts
Thanks for the link to the HadGEM2 paper. However, it looks to me as though those models have in some sense been simultaneously fitted to all the significant climate observables. If so, then you can't use even accurate reproduction of current climate (supposing they actually achieved that, which they don't) as independent evidence that the models are "right"; all it shows is that you have enough fitting knobs to twiddle.
Instead such models can only be tested by comparing their predictions for changed climate with real future experience. Of course that's the only real evidence one should trust in any case, but it would be nice to have at least some hint that these models have predictive power, rather than merely fitting capacity.
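[Editor's note: a toy curve-fitting example illustrates the "fitting knobs" worry in miniature. It has nothing to do with any actual climate model; it simply shows that perfect in-sample agreement is compatible with hopeless prediction.]

```python
# Toy demonstration that fitting capacity is not predictive power.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(8)

# A degree-7 polynomial has enough knobs to pass through all 8 points exactly...
coeffs = np.polyfit(x_train, y_train, deg=7)
in_sample = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))
print(f"in-sample max error: {in_sample:.2e}")   # essentially zero

# ...but just outside the fitted range the "predictions" quickly diverge.
x_new = np.linspace(1.05, 1.25, 5)
print("out-of-sample predictions:", np.polyval(coeffs, x_new))
```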
JJ
That is the concern... what will the forcings be, even averaged out, for next year? What will the feedbacks be, averaged out, for next year? As I understand it, the IPCC supply parameters... based on what? Also, they have supplied parameters in the past, but the only way to get the modelled responses to approximate subsequent experience has been by talking about how the various forcings and feedbacks are outside the modelled range. I think that sulphates are an example of something that was not considered in earlier models. It might even be that no one involved in earlier IPCC drafts had realised that steel production, with associated coal mining etc., was shifting rapidly to India and China... hence the "unexpected aerosols" suddenly appeared to damp down the warming.
But... I have never seen any retrocasting take place... eg we were told to assume x,y,z... the actual forcings and feedbacks were a,b,c... how would our models have been different? That is not to say that these processes do not happen. I hope they do take place. Otherwise how can you tell the difference between a "bad" model and merely bad inputs? And yet the range of the model results in the last IPCC report was wildly divergent... Are any models getting weeded out this time?
Jonathan Jones
Anyone can back-fit if he sets his mind to it.
I demonstrated this in a blog post in April (http://standstoreason.wordpress.com/2011/04/01/racing-uncertainty/) though I really had the "divergence problem" in mind when I wrote it.
I think the only difference between climate scientists and those who "invent" useless racing systems is that the former do it for noble reasons and the latter for money.
On the other hand ...
Roger - sorry I misunderstood your question about testing the methods, and replied to an altogether less pertinent one. Yes, you could test the old analysis methods. I'm sure people have done so. I think Beck, and certainly Engelbeen, whose webpage I pointed to a few days ago on this thread, did so. I think most analysis methods are OK, at least when carried out with care. But in fact I suspect that almost all the measurements Beck mentions were reasonably accurate in terms of the air samples analyzed. The problem with those measurements was more with the samples used, not the analysis method.
By the way, Jonathan is onto something much more worthy of doubt in the IPCC edifice: the extent to which models are reliable. As someone whose research is based on modelling, I know from bitter experience how hard it is to get insight, let alone predictions, from computers. And we have the luxury of being able to test our predictions AFTER making them. That is invaluable in terms of keeping modellers honest...
Nov 18, 2011 at 9:32 PM | Jonathan Jones
Thanks, yes that is very true.
Obviously we can't wait 30 years to evaluate new versions of the model, so we run hindcasts by starting at 1860 and running forwards to the present day using observed GHG and aerosol concentrations, land cover change, solar irradiance and volcanic aerosol emissions, and then compare with obs. Although one could say that this could still be fiddled by further tuning, in practice this is impractical because of the computing resources required - it takes several months to run a centennial-scale simulation, so you simply can't keep tweaking and re-doing the runs. We really do have to just let it go and keep our fingers crossed! This is always the most nerve-racking time after a new version of the model has been finalised - have we made the whole thing worse even though we think we've improved individual components?
It's also worth pointing out that the estimates of future global mean temperature rise made by earlier versions of these models in the 1970s turned out to be pretty good!
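[Editor's note: for readers unfamiliar with what "compare with obs" involves, here is a minimal sketch of the sort of skill metrics used. Both series below are synthetic stand-ins generated on the spot, not Met Office data.]

```python
# Sketch of comparing a hindcast against observations with simple skill metrics.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1860, 2011)
# Synthetic "observed" anomalies: a slow trend plus noise (illustrative only).
obs = 0.006 * (years - 1860) + 0.10 * rng.standard_normal(years.size)
# Synthetic "hindcast": the same signal with a small bias and its own noise.
model = obs + 0.05 + 0.08 * rng.standard_normal(years.size)

bias = (model - obs).mean()
rmse = np.sqrt(((model - obs) ** 2).mean())
corr = np.corrcoef(model, obs)[0, 1]
print(f"bias = {bias:+.3f} K, rmse = {rmse:.3f} K, r = {corr:.2f}")
```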
Hi Mike
We don't back-fit - see my comment above to Jonathan.
Cheers
Richard
Richard
I believe you.
My point was simply that back-fitting can be fiddled very easily. The Met Office may not do it. Are you confident that (a) other people don't, and (b) if they do, that they never succumb to the temptation to make sure it comes out right?
Marginally off topic. Do you agree with the IPCC's decision to concentrate on "the dangers of extreme weather events such as heat waves, floods, droughts and storms", and that "those are more dangerous than gradual increases in the world's average temperature"? (There's a truism, if you like!)
I can't disagree with Maarten van Aalst, director of the International Red Cross/Red Crescent Climate Centre (whatever that is) that "our response needs to anticipate disasters and reduce risk before they happen rather than wait until after they happen and clean up afterward" but I'm still looking for the evidence that "risk has already increased dramatically."
Together with a certain "repositioning" by the UN in its definition of climate change, we now appear to have some back-tracking on the "warming" meme and replacing it with a "disaster" meme. I wonder if I'm hearing the first signs of bureaucrats searching for reverse gear.
Richard,
You don't do explicit back-fitting, but you do implicitly fit your models. The whole process of developing new improved models, whether through tweaking parametrisations, adding extra terms, or simply throwing away unstable models, is one huge implicit back-fit.
I'm also intrigued by your reference to "observed aerosol concentrations" for back predicting data to 1860. Where on earth are you getting historic observed aerosol data from?
Hi Jonathan,
Sorry, slipped up there on "observed aerosol concentrations", of course we don't have those. I originally just wrote GHGs and then remembered I should mention other forcings, so stuck aerosols and land use in too, but forgot to change "concentrations". For aerosols, we use past data/reconstructions of aerosol emissions, and the actual concentrations are simulated by the model, including atmospheric transport and also removal by deposition.
(For land use, the land cover is reconstructed from census data and maps, along with satellite data for more recent times)
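[Editor's note: a schematic illustration of the emissions-to-concentrations step may help - a toy one-box model with an assumed removal timescale. Real models resolve transport and wet/dry deposition explicitly; the lifetime and emission rate here are arbitrary assumptions.]

```python
# Toy one-box aerosol burden driven by emissions, with first-order removal.
tau = 0.02        # assumed aerosol lifetime in years (roughly a week)
dt = 0.001        # timestep in years
emissions = 50.0  # assumed emission rate, arbitrary units per year

burden = 0.0
for _ in range(int(5 / dt)):                  # integrate for 5 years
    burden += dt * (emissions - burden / tau)

# The burden equilibrates at emissions * tau: concentrations follow from
# emissions plus a removal process, as described in the comment above.
print(f"equilibrium burden ~ {burden:.3f} (E*tau = {emissions * tau:.3f})")
```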
Regarding the "implicit back-fitting" and the aspects of model development that you describe, do you regard that as a problem, and if so, what do you think is the alternative? What would you like to see?
Thanks Richard, I'm glad to see my memory is not as unreliable as I feared.
The trouble with aerosols is that the historic data on sources simply isn't good enough to allow accurate reconstruction of aerosol forcings. If you're not very careful they end up becoming just another fudge factor which you can use to handle unexplained lumps and bumps in the records, thus saving the phenomena.
The last time I looked at this, some groups were actually attempting to reconstruct aerosol records from the mismatch between historic temperatures and their models, and then (piling madness upon madness) other groups were using these fitted "aerosol" records to check the performance of their models. It sounds like things have improved since that particular low point?
To answer your more general question, if you want to construct models I don't see that you currently have any choice beyond massive implicit back-fitting. Attempts to set up "first principles" models don't work, as reality is simply too complicated. So you do what you can, and there's nothing wrong with that. But what you must not do is pretend that successful hindcasting counts as a verification of the models; you don't have sufficiently powerful internal checks to conclude this. The most it shows is that the models are not completely hopeless.
Models that can't hindcast are utterly hopeless. Models that have proven forecasting ability are potentially useful. Climate models sit in that horrible grey area, where we simply don't know whether to take them seriously or not.
Oh, and as Barry Woods reminds me, never, never, never use the word "experiment" when you are just playing with a model. It's an appalling abuse of language.
I agree with the points being made by Jonathan Jones. I would like to add some thoughts on computer models from my world of aerospace, dealing with the subject of validation / verification.
In the 1980s Computational Fluid Dynamics (CFD) became all the rage in aerodynamics. It was claimed that wind tunnels and flight tests would soon be things of the past, and that systems could be designed purely by computer. From memory, only one vehicle was ever designed this way (the Pegasus launch vehicle?). Needless to say, it crashed and burned! Nowadays, however, CFD is a valuable tool - when the codes have been verified using wind tunnel and flight test data they can "infill" over a wide range of conditions, and considerably aid and accelerate the design process.
The second area is performance modelling - the estimation of the range of an aircraft, or the payload of a launch vehicle. In-house performance models usually provide the key metric by which we optimise the highly multivariate design process. As soon as we have a solution we check it against other models that use different codes and are run by different analysts, using the same input data (mass model, engine performance, aerodynamic characteristics, etc.). It is only then that we have sufficient confidence to proceed to the next phase of the project. All models are subject to regular audit, by applying flight data from known, operational systems and comparing the model results with reality.
In both of these examples it is the verification / validation (I can never remember the difference) of the codes that is the key to success. This is obviously much more difficult, if not impossible, in the case of climate models, as pointed out by Jonathan, and others.
Richard, Jonathan
Bob Tisdale, mentioned up-thread, has a new article which I find persuasive:
Jones
Models that can't hindcast are utterly hopeless. Models that have proven forecasting ability are potentially useful. Climate models sit in that horrible grey area, where we simply don't know whether to take them seriously or not.
Tisdale
We've illustrated and discussed in a number of recent posts how poorly the hindcasts and projections of the coupled climate models used in the Intergovernmental Panel on Climate Change's 4th Assessment Report (IPCC AR4) compared to instrument-based observations.
In particular, I find his animated graphic showing the very poor hindcast skill of the 30+ individual ensemble members from AR4 instructive. Not to mention their collective, even worse, predictive skill for the period since AR4's publication.
http://api.ning.com/files/sG7qq74yCI4jFA8dOaylk4cVqpGj8mjRDVV2iy-DQcGADEjobxEYxQbG6sS6T-*ilbe1IIgBsvGVbF7VQ3UKZeonG1-OnAG-/Animation1.gif
As a soaring pilot, I have, over 35+ years, 2000+ hours of airborne interaction with lower-atmosphere physics, unsurprisingly leading to a thirst for meteorological knowledge which might provide competitive advantage. I could best be described as a lukewarmer on CO2 physics, with one exception: in my experience all cloud, from fog to cirrus, reduces W/m2 at the surface. Cirrus always reduces convection (thermal size, volume, rate of ascent & frequency) in the lower atmosphere; on multiple occasions I have witnessed the thinnest of cirrus turn off convection like a light switch. I find conjecture that cloud feedback might be positive in daylight to be in disagreement with observed behaviour.
Good to see this thread still going after so long! Richard, thanks again for taking part. Before I join in with Jonathan and Roger in the "Beat-up-Climate-Models" exercise, I should state that I find these models amazingly impressive given the complexity of the problem they address. As a computational chemist, all my research work involves trying to model things - molecules and their reactions - that are pretty complicated, but still much less so than the coupled ocean and atmosphere. Getting anything right in the global climate models is incredibly impressive. I should also add that I am confident that the "Climate Scientists" (not climate change scientists) who develop such models are almost all doing their work without any explicit reference to the policy implications, and that the cries of "fraud" are really unjustified.
But I think Jonathan & Roger make good points about computer models - stated most crisply by Roger in saying they need to be verified and validated. As I understand it, 'verification' means checking that the program does what it sets out to do (no division by zero, no out-of-bounds array accesses, etc.) and 'validation' means checking that the model outputs are in some sense true to the real thing. There was a thread on V&V at Climate Etc. last year which addressed many of the same points as we have here. By Climate Etc. standards it is even quite a short thread, and has lots of good comments.
Richard B is of course right to say that as it is impossible to suddenly conjure up thirty years of new, and accurate, global temperature data to test model behaviour, validation is hard. But that kind of problem is not unheard of in computational modelling. Any simulation generates many values for many observables. For some of these properties, especially simple ones such as scalar values or low-dimensional arrays, the programmer will be very conscious of the 'desired' output values, and may implicitly tune the model to generate such results. Even if it is possible to generate new measurements of such properties, it may be that the programmer has a good enough sense of the physics of the problem to be able to guesstimate those values, and tune the program accordingly. The model may be "right", but for the wrong reason.
The solution is to carry out a more challenging test of the model: try to reproduce other properties, especially ones with more potential degrees of freedom - things like covariance behaviour of two properties, or spatially resolved properties, etc. For example, for climate modelling, regional temperatures on sea and land. Precipitation. And so on. There's so much data of this type that even a skilled programmer couldn't tune a code to get all of these things the way he or she "knows" they "should" be. And indeed, as nicely discussed at Science of Doom (see also the 'Part 2' of the same series), the models don't really get these things right - this is also what Bob Tisdale looks at in the post linked to by Gras Albert above.
Does it matter that the models get these more detailed properties wrong? In my view, yes. It suggests that the physics in the models is neither correct nor, probably, complete. So their predictions should be treated with great caution. By the way, I'm aware that climate modellers do carry out all the sorts of tougher tests I mention above. But such tests get little exposure. When people try to market the results of models, they always go back to the simpler property of global temperature, and use success in hindcasting that to prove the models work.
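[Editor's note: a toy example of why these higher-dimensional tests are harder to pass. The fields below are random synthetic arrays, nothing more; the point is that a model can match the global mean exactly while getting the spatial pattern completely wrong.]

```python
# Toy example: identical global means can hide a hopeless spatial pattern.
import numpy as np

rng = np.random.default_rng(2)
obs_field = rng.standard_normal((36, 72))                # toy lat x lon anomalies
model_field = -obs_field + 0.1 * rng.standard_normal((36, 72))
model_field += obs_field.mean() - model_field.mean()     # force equal global means

print("global means agree:", np.isclose(obs_field.mean(), model_field.mean()))

# The centred pattern correlation exposes what the global mean hides.
# (A real test would also area-weight each grid cell by cos(latitude).)
r = np.corrcoef(obs_field.ravel(), model_field.ravel())[0, 1]
print(f"pattern correlation r = {r:.2f}")                # close to -1 here
```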
Unsurprisingly I agree with all Jeremy's points, especially the thanks to Richard Betts.
There are fascinating echoes of the discussion above in the Climategate 2011 files...
Thanks, Jonathan. I too enjoyed the fact that my comments above about models - and some of yours previously - sound rather lame and over-confident compared to some of the insider quotes in the released emails...
One of the more intriguing things about Climategate 2011 (which was not true of the 2009 release as far as I recall) is how close my personal views seem to be to the private views of many of the major players, at least among the proper climate scientists. It's a pity that so many of their public statements are so reprehensible.