Sunday, Dec 2, 2012

Quantifying Uncertainties in Climate Science

Another date for your diaries - the Royal Met Soc's meeting on uncertainty in climate science.

Climate models produce different projections of future climate change under identical pathways of future greenhouse gases. This meeting will highlight recent studies that have attempted to quantify those uncertainties using different approaches.

Programme:

14:10  Dr Jonty Rougier (University of Bristol) - Background and philosophy
14:40  Dr David Sexton (UK Met Office) - UK climate projections
15:10  Dr Tamsin Edwards (University of Bristol) - Palaeo-constraints on climate sensitivity
16:10  Dr Lindsay Lee (University of Leeds) - Constraining aerosol models
16:40  Prof Reto Knutti (ETH Zürich) - Projection uncertainties: the multi-model perspective
17:10  Dr Paul Williams (University of Reading) - Climate models: the importance of being stochastic

Details here.


Reader Comments (224)

Although I've forgotten exactly where and when, I do recall Richard Betts making memorable posts on blog threads calling alarmism to order. Those impressions remain, I can assure him.

Dec 4, 2012 at 12:17 AM | Registered CommenterPharos

Barry - somebody has moved around the keys on your keyboard. Please have them back in their original configuration :)

Dec 4, 2012 at 12:33 AM | Registered Commenteromnologos

Don Keiller re: O/T:

1) Response from MP who is surprisingly supportive but carefully neutral
2) Letter passed to culture and media secretary by MP
3) So far no response from the BBC Trust
4) Like you, additional covering letter and copy sent to John Redwood MP, essentially FYI

Dec 4, 2012 at 6:13 AM | Unregistered CommenterThinkingScientist

ThinkingScientist: I'm not chasing my inner-London Labour MP in the same way at present, I have to admit. I made some points to Michael Gove a while back in some emails, more than one of which he thanked me for! From the early days when he was largely unknown the boy's done alright. And I'm deeply appreciative of these efforts, from Don, yourself and no doubt others. The only freedom we really have is the one we actually use.

Dec 4, 2012 at 6:52 AM | Registered CommenterRichard Drake

Hi Richard,

Up early like me, I see! This is a long post, but I felt like saying it.

My concern has always been the hijacking of science by political/environmental activism. I have grave concerns that this is reverting science (to some extent) to a pre-Enlightenment time, when scientists were persecuted for not adhering to the (then, religious) consensus. Science is not about consensus, and no matter how many scientists (and pseudo-scientists) vote for something, it should carry no weight in a scientific argument.

I am not an atmospheric physicist, but I am certainly qualified enough to be a “climate scientist”, whatever that is. I know a lot about models and forward predictions and uncertainty. I have a very full and broad earth science training and 28 years of experience in the earth sciences. I have studied Oceanography and Soil Science, quaternary geology and glacial processes. I am a professional geophysicist and, judging by a couple of honours I have received, a fairly good one. Some years ago, when I first got concerned about this stuff following the claims in 2001 after the third IPCC report that the 1990s were the warmest decade for a thousand years (the hockey stick, of course), and later people like Gordon Brown started accusing people like me of being “flat earthers”, I got pretty angry and started reading technical papers. McIntyre and McKitrick were right about Mann’s papers; the hockey stick has no merit. O’Donnell et al were right about Steig et al 2009, concerning warming in West Antarctica. The literature is littered with failed papers and failed predictions.

When I first got started on this, I read a lot of Fred Singer’s output. Fred Singer has been vilified by the environmentalists, but I actually think he is interesting to read. In his youth he worked with Van Allen. He is an atmospheric physicist and was the first director of the US National Weather Satellite Service. He was a key player in satellite observation. He knows his stuff. And interestingly Singer has consistently stated that CO2 is a greenhouse gas, but that the effect of increasing CO2 is overstated. He estimated the effect for a doubling of CO2 alone would be about 0.6 degC over 100 years. I don’t think that is contentious. The contentious parts are the water vapour feedbacks and clouds. Some atmospheric physicists have pointed out that there is no empirical evidence for these feedbacks, or even their sign.

In 2001 it was claimed by Sir David King, Chief Scientific Advisor to the government at the time, that sea level would rise 6 m in 100 years. A decade later that should translate into 60 cm or 2 ft. Where is it? Just going on holiday to the seaside each year should be enough to spot that kind of change. In 2006 it was stated that 15 years of no trend in temperature would invalidate models. That has already happened. Up thread, Nic Lewis points out that the effect of sulphate aerosols has likely been overestimated. A look at the cross-correlation of the ice core temperature and CO2 suggests that cause and effect is the wrong way round for evil CO2. All this leans me towards the view that CO2 effects are small and are probably going to be swamped by natural cycles. Meanwhile the sun is going into one or more very weak cycles. The analogy for this might be the Dalton (or even Maunder) minimum.

As I stated earlier on this thread, I don’t think the uncertainties of these climate models are realistic. As far as I can tell they only deal with the uncertainty envelope of the proposed model; they do not consider that the model itself may be wrong. These are the types of problems I wrestle with every day. I watch presentations and attend conferences by oil companies that put huge resources into understanding these types of problems for reservoir modelling. They don’t have simple answers, not even the (evil) might of Exxon-Mobil or Shell research. In reservoir modelling the model predictions are tested quite quickly in time – after just a few years. It then becomes obvious if a model is useful or not. Climate modellers make absurd claims about the validity of their models, but they have little basis for such claims. And where the climate models have met the real world the predictions have been extremely poor – a 15 year temperature plateau being a case in point. They are falling into all the traps and problems that are well known in the (probably less) complex modelling in the oil industry, but the climate modellers are too inexperienced, naive and arrogant to see it. Reservoir modellers know that with sufficient free parameters any model can be made to fit historical data (“hindcasting”), but this tells you nothing at all about either the validity of the model or its predictive ability. Only hard measured data from future performance will tell you that. And while all this is informing public policy, the BBC listens to activists not scientists and decides to jump on an environmental/political bandwagon instead of staying impartial. And all I can do is write and complain. It may not be much, but it is something.

Dec 4, 2012 at 8:07 AM | Unregistered CommenterThinkingScientist

I think the unnamed virtue that far too many 'climate scientists' are missing is this: intellectual honesty (as implied by Athelstan above; reproduced below from the first page of comments).

Let me give an example of one of the climate scientists Athelstan names, my one-time professor at the University of Colorado at Boulder ("CU"), Mark Serreze.

Serreze gave a presentation about the National Snow and Ice Data Center (NSIDC) at CU in Boulder this past October, which I attended. And his very theme was the necessity of being honest with the data used by the center in its online mission. (See the many posts about NSIDC at wattsupwiththat.)

Mark Serreze was withering about the lost respectability of blowhard Al Gore. But when, during the Q&A, James Hansen's name came up, it was circle the wagons! "He's done some good things..." and therefore exceptions have to be made when it comes to throwing scientists turned nutty activists under the bus!

And amidst the PC university crowd gathered, the irony and hypocrisy of the moment passed without anyone crying out: "This is stupid!" A blatant double standard of judgement.

Therefore, I believe the failure of scientists to demand intellectual honesty is the root betrayal that motivates the kvetching and teeth grinding seen in these pages.

It is true that honest distinctions must be, and need to be, made. But the failure of climate scientists ever to police the rampant intellectual DISHONESTY - that's the fundamental bone of contention.


That's the mistake you make, Andrew. Richard Betts and Tamsin Edwards sound reasonable enough; I am quite sure Mark Serreze, Al Gore at his beachside property and Rajendra Pachauri are all affable and come across as sounding reasonable people.

In the end, there is a right and a wrong in all of this, and moreover a morality, a probity, which has significantly gone missing; and it follows that with such people compromise can never be reached, because polar opposites can never meet.

Dec 3, 2012 at 8:09 AM | Athelstan.

Dec 4, 2012 at 8:19 AM | Unregistered CommenterOrson

Richard Betts wrote:

"Nic, you are correct that satellite data show the 1st indirect effect* of aerosols to be smaller than assessed in AR4 (in the chapter that I was a lead author on).
However I'm not aware of any papers (or unpublished data) showing it to be zero. Do you have a reference for that, or is it just a "personal communication"? Can you provide a link?
Did you review the RF chapter in AR5 WG1?"

Richard,
Many thanks for your helpful response. I read the SOD of both the RF and the Clouds and Aerosols chapters in AR5 WG1 in my capacity as 'expert reviewer' of the Detection and Attribution chapter, but I wasn't a reviewer of them as I have no special expertise in those areas. (I did actually point out in my comments a couple of obvious errors in both those chapters, but I'm not sure if they will reach the relevant authors. Do you think I should notify Gunnar Myhre directly, as the error in the RF chapter concerns data shown in one of the main graphs, which would be awkward to rectify through the formal IPCC error correction protocol?)

The statement by Prof. Graeme Stephens that, based on the best CloudSat/CALIPSO measurements, the best estimate for the 1st indirect effect of aerosols is zero was made by him, on the record, to a climate science writer who interviewed him for a book. He also gave a presentation containing essentially this assertion at a 2009 GEWEX conference, as reported by Pielke snr. See http://pielkeclimatesci.wordpress.com/2009/10/09/major-issues-with-the-realism-of-the-ipcc-models-reported-by-graeme-stephens-of-colorado-state-university/. Clicking the “Earth observations and moist processes” link brings up the presentation. Slide 7 says "Even with this new precipitation effect added, and assuming all correlations are causal (which they aren't), the most we infer the IRF to be is ~0.1 Wm-2". I'm not aware that this has as yet been published in a peer reviewed journal. Maybe Prof. Stephens thinks that doing so wouldn't improve his career prospects! But why don't you ask him about it directly?

Even taking current mainstream satellite estimates of total aerosol forcing of -0.7 W/m^2, in line with the studies I cited, mean aggregate net Adjusted forcing has increased by about 2.1 W/m^2 between the decades to 1880 and to 2011, while the Earth's ocean etc. heat uptake (estimated at 0.08 W/m^2 circa 1880: Gregory et al, 2002) has only increased by about 0.45 W/m^2. Per the Met Office's HadCRUT4 temperature dataset (which has the highest 1880-2011 trend of the three main records), mean global temperature increased by 0.73 C between those two decades. There doesn't seem to be much difference in the effects of internal climate variability between the two decades: they both reflect similar stages in the ~60-70 year AMO cycle, and ENSO indicators show a greater tendency to La Nina relative to El Nino type conditions during the earlier decade, so if anything this 0.73 C increase in temperature is likely to overstate the underlying rise in global temperature.

A rise in global temperature of 0.73 C for a rise in forcing net of the Earth's heat uptake (radiative imbalance) of 2.1 - 0.45 = 1.65 W/m^2 implies a best estimate for climate sensitivity of 1.64 C for a doubling of CO2. So IMO the IPCC 2 to 4.5 C 'likely' range for climate sensitivity is now contradicted by the best observational evidence, even if the relevant lead authors decide to claim otherwise in AR5. Using Prof. Stephens' best estimate of zero cloud indirect aerosol forcing would push the change in total net forcing up to 2.5 W/m^2 and reduce the climate sensitivity estimate to 1.32 C, below the bottom of the IPCC 'very likely' range.
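(For anyone who wants to check the arithmetic, it is simple enough to script - input numbers as above, and assuming the conventional ~3.7 W/m^2 forcing for a doubling of CO2:)

```python
# Back-of-envelope energy-budget estimate of climate sensitivity (numbers as above).
F_2x = 3.7   # assumed forcing for a doubling of CO2, W/m^2
dT = 0.73    # HadCRUT4 warming between the decades to 1880 and to 2011, deg C
dF = 2.1     # increase in mean net forcing, W/m^2
dQ = 0.45    # increase in the Earth's heat uptake, W/m^2

print(f"ECS ~ {dT * F_2x / (dF - dQ):.2f} C")    # ~1.64 C
# With Stephens' zero indirect aerosol forcing, dF rises to ~2.5 W/m^2:
print(f"ECS ~ {dT * F_2x / (2.5 - dQ):.2f} C")   # ~1.32 C
```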

Apologies for the length of this response.

Dec 4, 2012 at 9:42 AM | Unregistered CommenterNic Lewis

@Thinkingscientist.
Thanks- let's keep each other posted on this so we can "cross-check" any responses.

Dec 4, 2012 at 9:57 AM | Unregistered CommenterDon Keiller

What a wonderful thread! It has everything, top scientists, vigorous debate, insults, apologies, ceasefires, state of the art science (is that a contradiction in terms?) and even hugs. Brilliant.

Hope to see some of you on the 12th, both at the event and afterwards for a drink. I will also bring a few calendars.

Dec 4, 2012 at 10:18 AM | Registered CommenterJosh

Of course sensitivity will be less than the IPCC estimates. It's like asking the organisers of a demo how many people there were... always and inevitably they will report many more than there actually were.

Dec 4, 2012 at 10:19 AM | Registered Commenteromnologos

Dec 4, 2012 at 8:07 AM | ThinkingScientist

Very similar background in Geophysics/Oceanography, but got scared off by stories of people working for companies like Schlumberger so ended up postgradding in Meteorology/modelling. My scepticism lit up at the hockey stick and what I thought I knew about the Holocene. Started to follow the blogs a lot around Yamal, fascinating at the time (Climategate too, obviously), and also when places like WUWT had good thought-provoking articles, unlike now unfortunately. From a science/uncertainty point of view I'm confident we are at a time when strong counterpoints will start appearing in the scientific press, with the possibility of a very good reputation to be made on the back of it.

Dec 4, 2012 at 10:47 AM | Unregistered CommenterRob Burton

Dec 4, 2012 at 10:18 AM | Josh
"What a wonderful thread! " It has everything, top scientists, vigorous debate, insults, apologies, ceasefires, state of the art science (is that a contradiction in terms?) and even hugs. Brilliant...

Agreed. BH at its best.

Dec 4, 2012 at 11:09 AM | Registered Commenterlapogus

lapogus
I'll second that. One reason why I've kept out of it. I know when I'm out of my depth.
Though I will endorse Maurizio's point. I don't think that what Nic Lewis or Richard Betts or Graeme Stephens or Gunnar Myhre have to say about climate sensitivity or clouds will find its way into AR5 unless it endorses the "pre-publicity" that we have heard from the IPCC that "it's even more worse than the worse we thought it was."
As we saw from AR4, the science is being made to serve the environmental/political cause. I can see no reason to assume that will change until, as I have been saying for long enough (hence a certain feeling of déjà vu when I read some of the comments up-thread), the environmental NGOs and other non-scientific special pleaders are barred from having anything to do with the IPCC.

Dec 4, 2012 at 11:57 AM | Registered CommenterMike Jackson

Quite interesting series of take-downs of problems with tree ring analysis continues:

http://ecologicallyoriented.wordpress.com/2012/12/04/severe-analytical-problems-in-dendroclimatology-part-four/

Dec 4, 2012 at 12:16 PM | Unregistered CommenterJ Jackson

@ThinkingScientist:

I have to disagree with you on the CO2-climate chicken and egg interactions. Just because in the past the egg came first, it doesn't mean a two-way feedback isn't possible. Today the chicken comes first, that's all.

But I agree this is a very important point: "As I stated earlier on this thread, I don’t think the uncertainties of these climate models are realistic. As far as I can tell they only deal with the uncertainty envelope of the proposed model, they do not consider the model itself may be wrong."

I agree that too many modellers (not just in climate; not just physical modellers but also statistical) do only consider the bounds of possibility within their model and not the wrongness of the model itself. But there are lots of people working on the latter too, *cough* like me *cough*.

The particular jargon we use in our field is parametric versus structural uncertainty. The UK Climate Projections (David Sexton's talk on the 12th; disclosure: my colleagues) explored the former with a "perturbed parameter ensemble", detuning the model to explore all the uncertainty in the parameters thought to be most poorly-defined and with most effect on climate sensitivity. They ran about 320 versions of the Hadley Centre climate model with different parameter values, and then supplemented this ensemble with emulation (statistical modelling to predict the output of the climate model in other areas of parameter space: this is also done in other fields, such as galactic evolution and, as I mentioned, reservoir modelling).
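(For the curious, a minimal sketch of what emulation means in practice - a toy function stands in for an expensive climate model, and the two "parameters" are purely illustrative:)

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(42)

# Pretend these are 20 expensive model runs over 2 uncertain parameters.
X_runs = rng.uniform(0.0, 1.0, size=(20, 2))
y_runs = np.sin(3 * X_runs[:, 0]) + X_runs[:, 1] ** 2   # toy "model output"

# Train the emulator on the ensemble...
emulator = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X_runs, y_runs)

# ...then predict (with uncertainty) in unexplored parameter space, cheaply.
X_new = rng.uniform(0.0, 1.0, size=(3, 2))
mean, sd = emulator.predict(X_new, return_std=True)
```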

They also attempted to quantify the latter. But this is very hard, of course, because (unlike weather forecasting) we can't keep testing the model forecasts with new data. Part of the structural uncertainty can be explored by comparing different climate models to see the range of their predictions. Reto Knutti (also talking) looks at this kind of thing too. But, as you will have already guessed, this doesn't go far enough because (a) climate models have some or many aspects in common with each other and (b) all models are wrong, i.e. the global "multi-model ensemble" doesn't necessarily span a space in which reality lies.

I work with Jonty Rougier, who is working on a new approach to structural uncertainty. It makes the assumption that all the climate models of the world are more similar to each other than they are to reality, and tries to estimate the distance between the ensemble and the real world in a simple and transparent way. If you're interested, an older draft of his paper is here and my blog post about applying it to palaeoclimate simulations is here. This is still work in progress but I will talk about it on the 12th.

Dec 4, 2012 at 12:57 PM | Unregistered CommenterTamsin Edwards

Thinkingscientist 8.07am

Wow, that's quite a tour de force. A great summary of some of the main issues, that I think many of us here would agree with.

Science being infiltrated by activists - tick.
Mann, Steig wrong - tick.
CO2 effect overstated - tick.
Naivety of computer modellers - tick.

But what will your detailed comment, mainly to the converted, on page 3 of this blog post achieve?
Have you thought of starting your own blog (having inspired Tamsin's)? There is a niche for a more detailed science blog, particularly now that CA has quietened down a bit.

Dec 4, 2012 at 1:02 PM | Registered CommenterPaul Matthews

Personally I think the greatest source of misunderstanding about climate predictions derives from frequentist vs Bayesian statistics.

Computer models of observable quantities, like weather and things underground? Wonderful. Test your models against reality, build up a frequency distribution of how well the model performed. No controversy here.

Computer models of un- (or less) observable quantities, like the statistical properties of the next century of weather, or the evolution of the universe? We can't keep testing their forecasts against reality to build up a frequency distribution of model success. So we are cornered (I think, though some in the climate community try other things) into a Bayesian mindset. Our probabilities are subjective assessments of what we think, given the available information (theory, simplified theory e.g. parameterisations, and past observations), rather than objective measurements of past success.
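(A stripped-down illustration of that mindset, with an entirely made-up forward model: the posterior is just prior belief reweighted by how well each candidate parameter value explains the one set of observations we have:)

```python
import numpy as np

theta = np.linspace(0.1, 10.0, 2000)   # candidate parameter values
prior = np.ones_like(theta)            # a subjective choice: uniform belief

def forward(t):
    return 0.45 * t                    # toy model: observable from parameter

obs, obs_sd = 1.2, 0.3                 # one noisy observation
likelihood = np.exp(-0.5 * ((obs - forward(theta)) / obs_sd) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * (theta[1] - theta[0])   # normalise to a density
```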

For those that are feeling keen to get into Bayesian statistical approaches to assessing uncertainty in complex models, here are two papers from other research areas. They are co-authored by Michael Goldstein, who was Jonty's mentor and has a strong focus on model structural uncertainty. Michael and Jonty's approach to uncertainty assessment is behind the UK Climate Projections 2009 (in spirit if not in exact implementation), which are used in UK decision-making and climate impacts studies. Michael applied the same sort of approach to these two other research areas:

- Technical report on Bayesian emulation of a reservoir model

- Paper on Bayesian emulation of a galaxy formation model.

You can see I'm trying to convince you that we do think about the difficulties you mention and that we use the same approaches to those difficulties in climate as in some other research areas. It's just that most computer model research areas (I think) are more straightforward because (a) they are simulating observable quantities and (b) the models are quicker, so you can explore more of the model envelope without resorting to emulation, and (c) have lower dimension parameter and output spaces, so it's easier to constrain the parameters.

Dec 4, 2012 at 1:25 PM | Unregistered CommenterTamsin Edwards

Just because in the past the egg came first, it doesn't mean a two-way feedback isn't possible. Today the chicken comes first, that's all.
Like most others on this site, Tamsin, I really appreciate your visits and Richard's and several other experts who have recently taken to giving us the chance to hear their arguments and listen to our views.
However ... I'm afraid I'm going to call you on this (from the depths of my ignorance!!), but relying on thinkingscientist's reference to Singer who, from what I have read both by and about him, has probably forgotten more about climate than some of the modern generation of climate scientists will ever know.
We know that in the past increased temperatures preceded increased atmospheric CO2, and a causal link has been posited based on (I presume, since I need to take this on trust) the laws of physics.
We don't know whether your "two-way feedback" is possible or not and to say "Today the chicken comes first, that's all" is purely an assertion.
Continuing — I hope — to be reasonably polite, I also find it just mildly insulting to those of us who have been arguing, not least on this thread, that climate scientists tend to wave sceptics away with an air of "you poor souls wouldn't understand".
Which, I'm afraid, is what comes across here though not intentionally I'm sure.
If you have any empirical evidence to back up that assertion, then let's have it. If you don't then please go and find it before asserting, as you have at least by implication, that it must be true.
Even the consensus that CO2 has any major part to play in atmospheric temperature seems to be being called into question by more than just a fringe element. If we are to understand and believe that it does then we need more than simply the "warmist" mantra of the last 20 years, "oh, but this time it's different!"

Dec 4, 2012 at 1:26 PM | Registered CommenterMike Jackson

Hi Tamsin,

On your first point, I agree with you. The ice-core paleo-climate data shows CO2 lags temperature. Therefore, this cannot be evidence for AGW, but it does not mean the theory is wrong (although I would argue the CO2 effect must be pretty small, based on ice core data). My real objection is the totally misplaced and scientifically illiterate propaganda promulgated by Gore et al. Not helpful.

I am very interested to hear you are working on how wrong the model might be, not just its error envelope. As you note, this is a difficult problem and of generic value in many fields, not just climate science. Parameter sensitivity with emulation is also used in other disciplines, including reservoir engineering (eg through Latin hypercube sampling).
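(For readers who haven't met it, Latin hypercube sampling stratifies each parameter's range so that a modest number of runs still covers the whole space - a sketch with invented ranges:)

```python
from scipy.stats import qmc

# 50 design points over 3 uncertain parameters, each range stratified.
sampler = qmc.LatinHypercube(d=3, seed=1)
unit_cube = sampler.random(n=50)        # points in [0, 1)^3

# Scale to (hypothetical) physical ranges, e.g. porosity, permeability, net-to-gross.
lower, upper = [0.05, 1.0, 0.3], [0.35, 1000.0, 0.9]
design = qmc.scale(unit_cube, lower, upper)   # one row per model run
```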

Your last two paragraphs very neatly encapsulate the problem, and I particularly like the statement about the fact that the climate models tend to resemble each other more closely than they do reality. This is an interesting area, and came up in the conference I attended last week, where they talked about eigenvalue transforms to work out how close realisations were to each other. I am looking forward to downloading and reading the papers you linked to.

I wonder, though, if you could go back to my original questions that inspired you to start a blog, particularly the one about history matching ("hindcasting") to the first time interval of the data and then forward modelling to predict the second half of the data. Is anyone doing this?
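(The test I have in mind, in toy form: tune on the first half only, then score the untouched second half - synthetic data and a polynomial stand-in for a model:)

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(100.0)
data = 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 22) + rng.normal(0, 0.1, t.size)

# "History match" on the first half only...
coef = np.polyfit(t[:50], data[:50], deg=3)

# ...then forward-model the second half and score it out of sample.
pred = np.polyval(coef, t[50:])
rmse = np.sqrt(np.mean((pred - data[50:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```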

Dec 4, 2012 at 1:27 PM | Unregistered CommenterThinkingScientist

Phil Jones' idea of peer review is below, and if any passing climate scientist cannot see the problem with the 'scientific process' here (or the lack of one), we might as well all go home: call it my 'intuition'.

Jones:
"I’ve never requested data/codes to do a review and I don’t think others should either. I do many of my reviews on travel. I have a feel for whether something is wrong – call it intuition. If analyses don’t seem right, look right or feel right, I say so. Some of my reviews for CC could be called into question!"

h/t to Hilary
http://hro001.wordpress.com/2012/02/02/phil-jones-keeps-peer-review-process-humming-by-using-intuition/

Dec 4, 2012 at 1:34 PM | Unregistered CommenterBarry Woods

@Mike Jackson

I'm sorry, the very last thing I want to be is patronising.

I think my brevity and tone for that statement came from the fact I was addressing ThinkingScientist, who I know is a geophysicist, so I switched into "terse academic mode". Banter, if you like, as demonstrated by the use of "chicken and egg". I was making the assumption that TS is familiar with the arguments about why CO2 and climate are linked with two-way feedbacks, and was inviting him to come back to me with a counter-argument. As you may know, this kind of tone is common at conferences and meetings, where people can be a bit cheeky and mildly insulting to each other when arguing their case. It's not really appropriate for online discussions though, because it's not face-to-face and because the audience is broader.

The other reason for the brevity (i.e. without references, which I would normally try to give) is that I'm supposed to be working on something else today, and mainly wanted to talk to TS about model uncertainty, but didn't feel I could let that point go unchallenged. You might also notice that I didn't feel such a strong urge to challenge the other points...

I can see now it came across as patronising - I *really* didn't mean to be. That's honestly the thing I hate most!

Hope that explains.

Dec 4, 2012 at 1:37 PM | Unregistered CommenterTamsin Edwards

One aspect seldom investigated is that climate models are like bonfire night fireworks. No matter how many warnings are written by the experts, there are going to be many users who will make a mess out of them simply because they do not fully understand what they are using.

That's also called 'broken telephone' and it manifests itself with certainties increasing the further away the paper's authors are from actual modellers.

A similar thing occurs very often with computer users, even those that should know better. I could simply find no way to make the Met Office blog guy understand, for example, that if the uncertainty is in the first decimal place then there is no meaning in providing results with two decimals.

In that case the problem is that computer scientists and engineers know what computer users don't. Namely, that any computation is bound to produce figures only meaningful up to a point.

What is the value of the ratio between a real-world circle's circumference and its diameter? An engineer will say 3.14; many climate scientists will use their computer to state it as 3.14159265 etc etc.
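(The fix is mechanical: round the result to the first significant digit of its uncertainty - a short sketch:)

```python
from math import floor, log10

def report(value, uncertainty):
    """Show no more precision than the uncertainty supports."""
    digits = -floor(log10(uncertainty))   # position of the error's leading digit
    return f"{round(value, digits)} +/- {round(uncertainty, digits)}"

print(report(3.14159265, 0.1))            # "3.1 +/- 0.1", not 3.14159265
```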

Dec 4, 2012 at 2:20 PM | Registered Commenteromnologos

Earlier in this thread I wondered if the Bish might invite Richard and Tamsin to review his book on "hiding the decline". You see, I suddenly remembered what Richard had said long ago on an old thread to do with whether current temperatures are within the range of natural variability.

"As for whether the NH temperatures of the last couple of decades were matched by similar warmth lasting a couple of decades at times in the last over the last millennium – well my understanding is that none of the palaeoclimate reconstructions show comparable warm periods over the last 1,300 years, but there are of course uncertainties in these especially for shorter time periods.

"Reconstructions of northern hemisphere temperatures over the last millennium (or starting earlier in some cases) are published in the following papers, which show varying levels of agreement or disagreement with each other for different times over the past few centuries:

Mann, M.E., R.S. Bradley, and M.K. Hughes, 1999: Northern hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations. Geophys. Res. Lett., 26(6), 759–762.

Mann, M.E., and P.D. Jones, 2003: Global surface temperatures over the past two millennia. Geophys. Res. Lett., 30(15), 1820, doi:10.1029/2003GL017814."

I have been wondering (lost in scientific ignorance) whether these two papers (picked from a list of other temperature reconstructions) are used by modellers and, if so, for what purpose. Perhaps Richard and Tamsin might cast light on the matter?

Dec 4, 2012 at 2:35 PM | Unregistered Commentersam

This is an excellent post, Bish. Strange as it really could not have been anticipated.
2 things strike me –

1. There is no doubt that sceptics have suffered all sorts of humiliations over the years at the hands of activist scientists, the media and politicians, and now we are expected to simply forget all that and welcome Richard Betts and Tamsin with open arms. I definitely have sympathy for Don Keiller as someone who knows first hand what it is like to face ‘the forces of evil’. However, as is always the case, we have to rise above. The view from the moral high ground is so much better! So, as I have done before, I applaud Richard and Tamsin for taking the time to visit our humble coffee shop.

2. There is also some attempt to look at the science! Woo hoo, as we say on Facebook! In his Think Progress post Richard Betts says ‘the actual scientific evidence for climate change …….. is pretty good’. How many times have we sceptics heard and seen words to that effect. The problem is that when we say, OK, let's have some, let's debate the science, all we get is a lofty silence. So Richard Betts, here is a challenge: with the approval of our gracious host, which we might infer if he allows this post of mine, how's about you and your colleagues at the Met Office post an article here setting out briefly the evidence for climate change that is ‘pretty good’ and then let the denizens cross-examine you/the Met Office. I appreciate that this will require a bit of planning, but can we float it as an idea?

Dec 4, 2012 at 2:46 PM | Unregistered CommenterDolphinlegs

An enjoyable and informative thread and discussion. It is good to see polite yet searching questions being discussed. IMO the length and content of posts is welcome. The agenda and speakers for next week's event make it look well worth a visit.

Tamsin - if you are still reading, I saw your response to Mike, and although I see you apologising for a possible misunderstanding of tone, I don't see that your response settles the CO2 lead/lag question? It appears to simply remain as an assertion at this point. I appreciate you are under time pressure today, but please do come back with the references you promised.

I would also like to say that as an observer of the climate science debate I welcome your input and I think the direction of work you mention Jonty has picked up is very promising - namely how the models perform versus a standard of reality rather than against each other. I would also note I find it worrying that it appears to be an area of work arriving so late in a debate where it has oft been claimed "the science is settled".

Omnologos - re: measurement of circles and engineers - I think the response is that a measure is only any use when its uncertainty is known.

Dec 4, 2012 at 3:26 PM | Unregistered Commenternot banned yet

Forget the media. Richard's troubles are closer to his professional home...

Dec 4, 2012 at 3:36 PM | Registered Commenteromnologos

Tamsin
Thanks for taking the time to reply. I can assure you I wasn't taking it personally — the skin has developed a thicker layer than that over the years — but I was trying to point up one of the problems that we sceptics face, more especially in dealing with those who, over the years, have only served to increase our scepticism by refusing to engage and who see every question as a personal affront to their scientific integrity.
Which, we have discovered in a few cases when we do get below the surface, is absent.

Dec 4, 2012 at 4:10 PM | Registered CommenterMike Jackson

"The particular jargon we use in our field is parametric versus structural uncertainty."

Tamsin, I don't really see any real difference between these; they are just 2 versions of the model being wrong. Obviously you can tweak the parameters for all eternity to see what difference it makes, which you can't do for your structural uncertainty.

Going back to my earlier response to Richard, where he has his model showing a 15 degree average warming for the Arctic. I would say with certainty that the model was wrong, and think someone a bit bonkers if they really thought the Arctic would heat by 15 degrees by 2100. (This is the average too. Without seeing the seasonal and day/night differences I would guess the model had higher increases for things like summer nights.) I would say there was a clear structural problem with this model (I'd guess nowhere near enough convection is occurring to transport surface heat up into the Troposphere) and would use this evidence to then try and improve the model. I'm sure there are lots of tests and simpler models that could be used to test various hypotheses.

In the end the only way you can ever test your model is to compare it to reality. For something like a 2100 climate prediction I'm sure there are many features, such as the Tropospheric hotspot, to compare with reality on earlier dates to see how your model is doing. I'll have a look at what you and Jonty are doing, but I'm a bit sceptical about what you might achieve.

Dec 4, 2012 at 4:31 PM | Unregistered CommenterRob Burton

Tamsin, you wrote:

"Personally I think the greatest source of misunderstanding about climate predictions derives from frequentist vs Bayesian statistics."

An interesting point. I will look at the papers you cited. I am generally in sympathy with a Bayesian approach, but IMO the subjective Bayesian approach used in many climate science studies has led to major errors in the estimation of, in particular, climate sensitivity. Where the relationship between parameters being estimated and the observable variables is highly non-linear and has a low signal-to-noise ratio, as is the case with climate sensitivity and ocean effective diffusivity, for instance, use of uniform and other naïve (or 'expert') priors can lead to posterior distributions for the parameters which are highly distorted, particularly as regards their tails. Unfortunately, taking an objective Bayesian approach (involving use of noninformative prior distributions) to minimise this problem seems to be virtually unknown in climate science. I have two climate science papers relating to this subject undergoing peer review at present.
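(The effect Nic describes is easy to demonstrate: because sensitivity S = F_2x / lambda, a prior that is flat in the feedback parameter lambda is strongly non-flat in S, and vice versa - so the supposedly "uninformative" choice matters. Illustrative ranges only:)

```python
import numpy as np

rng = np.random.default_rng(0)
F_2x = 3.7                              # forcing for doubled CO2, W/m^2

# A prior that is uniform in the feedback parameter lambda...
lam = rng.uniform(0.5, 3.0, 200_000)    # illustrative range, W/m^2 per deg C
S = F_2x / lam                          # implied sensitivity, deg C

# ...is anything but uniform in S: the change of variables weights it by 1/S^2,
# piling prior probability onto low sensitivities and thinning the high tail.
print(np.percentile(S, [5, 50, 95]))
```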

I'll be at the Imperial meeting on 12 December, and hope to meet you then.

Dec 4, 2012 at 4:33 PM | Unregistered CommenterNic Lewis

Lots of interesting questions - off to an event now, and more work to do afterwards, but I will try to return to as many as I can.

Two v.quick responses:

Nic, funnily enough I went to a talk about reference priors last week by James Joyce (who is critical of them). Myles Allen etc are keen, but I don't think all of the community are convinced they are sufficiently valid or useable in climate (i.e. high dimensional correlated parameter space and high dim output space). Let's talk on the 12th. Would love to hear what you're doing.

I agree that parametric and structural uncertainty are slightly arbitrarily defined. But you can think of the latter as the 'leftover' error at the model's best possible (tuned) parameter values. This will change over time as different processes are added to the models (e.g. replacing parameterisations with resolved processes).
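(In symbols, one common formalisation - Kennedy and O'Hagan style, not necessarily the exact formulation used here - with f the simulator and theta* the best tuned parameter values:)

```latex
y_{\mathrm{reality}}(x) = f(x, \theta^{*}) + \delta(x) + \varepsilon(x)
```

where \delta(x) is the structural ("leftover") discrepancy and \varepsilon(x) the observation error.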

Rushing off to philosophy of science event now! Sorry I didn't reply to everyone.

Dec 4, 2012 at 5:12 PM | Unregistered CommenterTamsin Edwards

May I say that this turned into a highly stimulating and informative thread.... With or without hugs or insults, this is the kind of thing the blogosphere does best: bringing together many people from widely scattered locations, interests, and experiences for some provocative discussion. The process can be messy, even painful at times, but there is much to be gleaned from this thread. Thanks to all....

Dec 4, 2012 at 5:43 PM | Unregistered CommenterSkiphil

Talking of philosophy of science, I have another problem with climate models.

It has been agreed by now that in the timescale of 10-20 years natural variability can mask any "climate change signal". Obviously most if not all natural variability is impossible to forecast with much accuracy.

This means that even if we had THE best possible model, we would still be unable to describe future climate in the timescale of most Governments (Reagan was in office 8 years, Mrs Thatcher for 11, etc).

So assuming THE best possible model exists and is within reach, it would still be pretty much useless (=NOT USEFUL) for policy purposes. Sort of like telling the Titanic captain (1) there is an iceberg far ahead, more or less in the direction of travel, but (2) there might be icebergs ahead too, and far less far away, if the direction of travel is changed slightly.

It all ends up with handwaving and guesswork. What would be really useful would be the ability to extend current weather forecasts to the seasonal scale at the very least.

There is a lot of fog ahead and the last thing we should focus on is the ability to constantly "see" (or pretend to see...) decades in the future.

Dec 4, 2012 at 6:32 PM | Registered Commenteromnologos

@Tamsin "Computer models of observable quantities, like weather and things underground? Wonderful. Test your models against reality, build up a frequency distribution of how well the model performed. No controversy here."

Except I am not sure you fully understand the problem. The "things underground" would be subsurface reservoir modelling. However, two points of clarification here. The first is that the static model is quite poorly known and has very sparse control of its properties and facies/geological distribution (well data); secondly, the reservoir engineer is trying to predict the long-term time response to production, but is doing it from a very well defined and rigorous understanding of the physics. This means that, in general, the physics of the dynamic fluid flow model is not uncertain; only the static framework is. This is a much easier prediction problem than in a climate model, where some of the physics is not properly understood (eg cloud response, water vapour feedbacks).

Yet despite this simpler problem, the lessons we learn from reservoir engineering are (a) there are so many free parameters that almost any static model, no matter how wrong, can be made to fit prior data (history matching in RE jargon; "hindcasting" in climate). To take a well known statistical example, I can fit an elephant with 4 parameters and with 5 I can waggle his trunk. With a hundred free parameters I can fit a herd of elephants and make them all waggle their trunks. In other words, even for the much simpler case of a reservoir model with known physics, the degrees of freedom can sometimes be so large that the problem may be ill-conditioned. The second lesson from reservoir engineering is (b) that you can fit a model very closely by history matching and everything looks great, but when the forward predictions meet the reality of actually putting the field on production it can very quickly be seen that the forward predictions are worthless, because some very small geological characteristic was not included in the model, and the effect of that geological property (eg a laterally continuous but only cm-thick shale creating a vertical permeability barrier) will radically change the flow characteristics. In a complex non-linear system, even a small omission in the initial model can have a dramatic effect on the quality of the predictions. For example, how sensitive are climate models to initial conditions? Do they use 2D or 3D initial conditions (temp, pressure etc)?
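(The elephant point in miniature - synthetic data, a polynomial standing in for a model: each added parameter improves the history match while the forward prediction deteriorates:)

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)
half = 20                               # first half = "history", rest = future

for deg in (1, 3, 6, 9):                # ever more free parameters
    c = np.polyfit(t[:half], y[:half], deg)
    hind = np.sqrt(np.mean((np.polyval(c, t[:half]) - y[:half]) ** 2))
    fore = np.sqrt(np.mean((np.polyval(c, t[half:]) - y[half:]) ** 2))
    print(f"deg={deg}  hindcast RMSE={hind:.3f}  forecast RMSE={fore:.3f}")
```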

Finally, I was rather surprised by your definition: "I agree that parametric and structural uncertainty are slightly arbitrarily defined. But you can think of the latter as the 'leftover' error at the model's best possible (tuned) parameter values. This will change over time as different processes are added to the models (e.g. replacing parameterisations with resolved processes)."

This seems to take the view that there is a gradual convergence of the model with reality, that we "tune" the model to match observations and any "leftover error" simply requires more parameters. I did not envisage this definition at all. In a reservoir model, it is sometimes the case that after tuning (history matching) there is a good fit with observation, but the model has no predictive capability at all. (It does work quite well a lot of the time, it should be pointed out!). By describing the structural uncertainty as the "leftover error", it is almost like a linear regression model: A straight line fit yields a predictor (the regression line) plus the "leftover error". Simply adding more parameters to make the residuals smaller does not improve the model if, for example, the model has a different response outside the measured observation range (unknown unknowns...).

Dec 4, 2012 at 6:41 PM | Unregistered CommenterThinkingScientist

Meanwhile, back at the AGU meeting:

Dec 4, 2012 at 7:05 PM | Unregistered CommenterRussell

http://fallmeeting.agu.org/2012/events/gc13e-the-national-climate-assessment-draft-findings-building-capacity-and-implementing-a-sustained-process-video-on-demand/

Dec 4, 2012 at 7:07 PM | Unregistered CommenterRussell

Dec 4, 2012 at 3:36 PM | omnologos

That Telegraph article was from 2009 - we've learnt a number of lessons from that year :-)

The projections for changes in UK snow are here. Somebody going to the RMS meeting could ask David Sexton about this if they wanted :-)

Dec 4, 2012 at 8:55 PM | Registered CommenterRichard Betts

Dec 4, 2012 at 9:42 AM | Nic Lewis

Glad to hear you took part in the review! If your comments on the RF chapter were clearly identified as such (ie on the spreadsheet you listed the chapter, section and line number) then they will reach the chapter authors - the TSU will be spending the next few weeks sorting all this out. However, if they were mentioned in passing in comments on the D&A chapter and were not clearly identified as relating to the RF chapter, then there is a risk it may not be realised what they apply to. If it wasn't clear, I'd suggest the best thing would be to email the WG1 Technical Support Unit on wg1@ipcc.unibe.ch and tell them which comments of yours include information relevant to the RF chapter even if they are identified as referring to the D&A chapter - refer to your own comments spreadsheet, mentioning the reference number if you kept a note of it (but your name ought to be OK if not), and give the number assigned to each of the relevant comments in the spreadsheet. Obviously I can't speak for the TSU and this is certainly no guarantee, but that's my informal advice on the best thing to try.

Re: Graeme Stephens - OK, yes, I'll ask him.

Dec 4, 2012 at 9:40 PM | Registered CommenterRichard Betts

Hi All,

I am wondering whether ThinkingScientist has touched on something of more general interest than mine own, in his or her phrase: "eg through Latin hypercube sampling", that being the question of experimental design.

I am aware that many may find it hard to grant that climate modelling could constitute an experiment, except perhaps one in futility, but that is also my point.

If an experiment has a point, I suggest that there are good ways and bad ways of performing it. Ones potentially able to supply evidence that will underpin that point or refute it.

Many, though I doubt most, experiments in modelled climates are not experiments in the sense I have given, but are then used as if they were. That would cover most of the archived climate model runs, which are projections based on assumptions for a purpose. I presume that purpose is to inform decision making. These projections may then be used by professional and amateur to aid their thinking and argument. I do not think that they are well designed experiments for that secondary scientific purpose.

I will now partially contradict myself by saying that I might consider them well designed for their intended purpose if I knew what that intended purpose was.

Were they intended to be some best guess of outcome, some best guess of our certainty in some outcome, or some best display of our ignorance in some outcome? Without knowing that, I have no comment to make on whether the process that generated them, the experiment, was well constructed.

Perhaps others with some relevant modelling experience do have some commentary on the way climate model experiments are constructed. I am aware that there are some amongst that community who think that much of the labour and expense is wasted for want of either better or more transparent design.

I will end with one such who commented on a climate ensemble as being an: "ensemble of lost opportunity".

Alex

Dec 4, 2012 at 9:58 PM | Unregistered CommenterAlexander Harvey

Back on aerosol forcing (see my comment on Dec 4, 2012 at 9:42 AM), I've just read another recent observationally constrained study that estimates aerosol forcing, along with climate sensitivity and ocean diffusivity: Ring et al, Causes of the Global Warming Observed since the 19th Century, 2012. Its best estimate of total aerosol forcing, using the HadCRUT4 surface temperature record, is -0.5 W/m^2, below current mainstream purely observational estimates but in line with Prof. Stephens' view that indirect aerosol forcing is very low. Their corresponding best estimate of climate sensitivity is 1.6 C - in line with the estimate I calculated in my earlier comment and very close to being in the IPCC 'very unlikely' range.

But the most interesting, indeed worrying, revelation in this study is in the very useful table that reconciles the current climate sensitivity estimate of 1.6 C to the estimate of 2.5 C in a study by the same group published in 2000. Most of the changes look normal enough - changes of dates, of data sets, etc. - and they all net out to zero. That leaves one large change, which accounts for the entire reduction in the estimated climate sensitivity from 2.5 C to 1.6 C. It is:

"Correct code error (3 N hemispheric coefficients in S hemispheric equation) & recalibrate …"

IMO, this shows a) that the more complex a model is, the less it should be trusted (and this study involves a fairly simple model!); and b) that no study should be trusted (or, indeed, published in a peer reviewed journal) if the authors have not made both full data and computer code, sufficient at least to reproduce all its main results, publicly available.

Dec 4, 2012 at 10:00 PM | Unregistered CommenterNic Lewis

@Nic Lewis ""Correct code error (3 N hemispheric coefficients in S hemispheric equation) & recalibrate …"

Debugging complex model code is a bitch, eh? You have to take it apart bit by bit and run regression tests on all the components. If you want to find errors in your code, either give it out publicly or try and sell it commercially...
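(In miniature, the kind of component-level check that would catch a hemisphere mix-up - hypothetical coefficients and a toy response function, nothing from the actual study:)

```python
# A minimal regression test: check each hemisphere's equation against a
# hand-computed reference case, so N coefficients can't sneak into the S equation.
def hemisphere_response(coeffs, forcing):
    return sum(c * forcing ** i for i, c in enumerate(coeffs))

def test_south_uses_south_coefficients():
    north, south = [0.1, 0.2], [0.3, 0.4]             # hypothetical coefficients
    assert hemisphere_response(south, 1.0) == 0.7     # 0.3 + 0.4, not 0.1 + 0.2
```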

Dec 4, 2012 at 10:34 PM | Unregistered CommenterThinkingScientist

ThinkingScientist has alluded to the problems of reservoir models even when the reservoir characterisation is well controlled from log, core and production data. When the model predictions are then fed into modelling of life-of-field economic projections used for management decision making, with gross assumptions on fiscal stability, rig contract rates, future market prices, pipeline tariffs, and maintenance and abandonment costs, the uncertainties get magnified yet again. Such is life for an industry that asks for and gets no subsidy, has to write off all the costs of unsuccessful exploration, and gets for its pains even more of a Bronx Cheer welcome from the present Secretary of State for Energy than Greenpeace ideological zealots.

Dec 4, 2012 at 10:45 PM | Unregistered CommenterPharos

Dec 4, 2012 at 8:55 PM | Richard Betts

“That Telegraph article was from 2009 - we've learnt a number of lessons from that year :-)
The projections for changes in UK snow are here.”

From the link:-

5. Conclusions

5.1. In summary, the RCM projections provide a useful dataset for the analysis of possible changes to future changes in snow. They simulate historical frequencies of snow days with some skill, but regional and seasonally varying biases are also present.

5.2. Significant future reductions in numbers of snow days, mean snowfall rates and the intensity of heavy events are projected for the end of the 21st century, consistent with the projections of warming temperatures. The sign of the changes is robust in most cases to the subset of modelling uncertainties sampled by the ensemble members. For changes in mean snowfall rate, the ensemble-mean changes are broadly consistent with those obtained from alternative ensembles of projections from global climate models, although this cannot be checked for the other metrics considered in this report due to a lack of daily data from the global model data archives.

5.3. The RCM ensemble possesses the advantage of accounting for high resolution regional influences of mountains, coastlines and land sea contrasts, however (as for other variables) it does not sample the full spread of possible outcomes consistent with present knowledge or modelling capability. The RCM ensemble should therefore be interpreted as providing a set of plausible alternative outcomes, but not as being suitable to attach likelihoods to different levels of change. Users requiring more detailed snow information than provided in this document should consider further analysis of the RCM data, whilst bearing in mind the limitations noted in this report.

I see none of the above as “Conclusions”; they are at best “thoughts” and at worst “lures”. However I do agree "we've learnt a number of lessons from that year :-)"

Dec 4, 2012 at 11:51 PM | Registered CommenterGreen Sand

Dec 4, 2012 at 9:40 PM | Richard Betts

Dec 4, 2012 at 9:42 AM | Nic Lewis

Glad to hear you took part in the review! If your comments on the RF chapter were clearly identified as such (ie: on the spreadsheet you listed the chapter, section and line number) then they will reach the chapter authors - the TSU will be spending the next few weeks sorting all this out. [...]. Obviously I can't speak for the TSU and this is certainly no guarantee, but that's my informal advice on the best thing to try.

Re: Graeme Stephens - OK, yes, I'll ask him.

Richard, please don't take this the wrong way, and I apologize in advance if my observations are phrased in such a way that you take exception to them (as you have in the past, on occasion!)

That being said .. I find it disappointing that you have chosen once again to focus solely on a minor point of bureaucratic procedure and have treated the rest of Nic's post as though it had not been written.

While I don't even pretend to understand the "jargon" (thanks, Tamsin!) it seemed to me that Nic had responded at considerable length to your:

Do you have a reference for that, or is it just a "personal communication"? Can you provide a link?

Yet, your reply did not even acknowledge Nic's response to these questions.

The mileage of others may certainly vary, but I find this to be discourteous and, well, a far too-frequent-for-comfort pattern that I've observed in your interactions here and elsewhere in the cyber-universe (examples available on request).

I have learned to expect this kind of behaviour from (amongst others) Mann, Gergis, Karoly, Ward, Wallis, Klein, Weaver, Allen and Schmidt - whom I notice you recently welcomed to the twitosphere - but I'd really like to not have to learn to expect it from you!

OTOH, on a brighter note ... while I'm here, I'd like to second both the implicit and explicit observations by Paul Matthews in his [Dec 4, 2012 at 1:02 PM] response to ThinkingScientist.

Dec 5, 2012 at 3:28 AM | Registered CommenterHilary Ostrov

Nic Lewis:

That leaves one large change, which accounts for the entire reduction in the estimated climate sensitivity from 2.5 C to 1.6 C. It is:

"Correct code error (3 N hemispheric coefficients in S hemispheric equation) & recalibrate …"

IMO, this shows a) that the more complex a model is, the less it should be trusted (and this study involves a fairly simple model!); and b) that no study should be trusted (or, indeed, published in a peer reviewed journal) if the authors have not made both full data and computer code, sufficient at least to reproduce all its main results, publicly available.

Wow. I mean, seriously, wow.

Dec 5, 2012 at 4:48 AM | Unregistered CommenterRichard Drake

@Hilary Ostrov 9.40 PM

And at the risk of possibly giving further offence, I cannot recall any time when posting here that Richard Betts has responded to one of my posts, even when I have posted a direct question to him. On one occasion I recall posting direct questions (twice?) in the thread and then re-posting in unthreaded too. Sounds of silence were all I got.

Richard always leaves me with the impression of some highly selective posting decisions being made. To be fair to Richard, if I had to worry about what my employer might think I would probably feel similarly constrained.

Dec 5, 2012 at 6:13 AM | Unregistered CommenterThinkingScientist

@Green Sand

And isn't the writing so turgid? The style reminds me of papers written by Michael Mann (eg 1998) rather than the clear writing of, say, McIntyre & McKitrick 2003. If I was reading stuff like this for decision making I would be very concerned that it was just flannel. No wait, it is!

For example, returning to one of the themes I have talked about on this thread, it is clear in 5.1 that the "hindcasting" works for snow frequencies, but there does not appear to be any predictive capability. And the conclusion in 5.2 that there will be less snow if it gets warmer seems rather obvious. Was a complex model required for that? And what if it doesn't get warmer, ie the model is wrong?

And I really hate the use of the word "skill". It's so pretentious.

Dec 5, 2012 at 7:19 AM | Unregistered CommenterThinkingScientist

 

This article may change your thinking on "climate sensitivities."

Dec 5, 2012 at 8:31 AM | Unregistered CommenterDoug Cotton

Since the Met Office has learned so much since 2009, can we assume that any climate science published prior to 2009 can be discounted, and what is the discount rate?

Dec 5, 2012 at 8:40 AM | Unregistered Commentersteveta

Unanswered Questions about climate models:

at 11.47 am

Repeated at 10.08 am

Dec 5, 2012 at 8:53 AM | Unregistered CommenterThinkingScientist

Hilary Ostrov - ThinkingScientist

one of the problems with blogland is that people can and do just walk away. Pretty soon this post will be off the BH home page. Which is why I suggested up thread that we try to formalise a blog post on the science. We would need to agree ground rules, like a response has to be made within a reasonable time. Given that the Met Office is part of government, their idea of a reasonable time will probably be something like 3 weeks. OTOH we all know they will not rise to the challenge, because the pretty good evidence for climate change that Richard Betts refers to is actually the output of computer models and thus does not qualify as evidence at all.

I would love to be proved wrong, Richard.

Dec 5, 2012 at 9:04 AM | Unregistered CommenterDolphinhead
