Friday, Aug 3, 2012

Rougier on trust and the IPCC

Commenters on the Rougier thread have been pointing to another very interesting discussion paper by the same author. In essence it is a call for the scientific establishment to move away from its current focus on massive and progressively more detailed climate models. The alternative proposed is simpler models, which can be run more often, thus helping policymakers to get a handle on the total uncertainties.

This quote in particular was relevant to recent discussions of trust and the IPCC:

The IPCC reports are valuable sources of information, but no one owns the judgements in them. Only a very naïve risk manager would take the IPCC assessment reports as their expert, rather than consulting a climate scientist, who had read the reports, and also knew about the culture of climate science, and about the IPCC process. This is not to denigrate the IPCC, but simply to be appropriately realistic about its sociological and political complexities, in the face of the very practical needs of the risk manager.



Reader Comments (27)

Vital point that I've been saying a slightly different way since 2010: the models used in IPCC reports must be re-runnable by all and sundry on 'affordable machines', so that not just all the source code but the specifics of every published 'finding' can be examined in the minutest detail in the new, open source, open content, pre-publish online review world.

What's affordable? That's up for some debate. But the principle must become firmly established - and we still have a few more years of Moore's Law to help us achieve it.

Aug 3, 2012 at 8:46 AM | Unregistered CommenterRichard Drake

"Only a very naıve risk manager would take the IPCC assessment reports as their expert, rather than consulting a climate scientist, who had read the reports, and also knew about the culture of climate science, and about the IPCC process."

How would a wise risk manager go about finding a good "climate scientist"?

Aug 3, 2012 at 9:21 AM | Unregistered Commenternot banned yet

Not commenting on the content, but papers like these could only be written in academia. If I sent a sales proposal like that to a customer I don't think it would do very well. It reads as though it was written purely because the funding to write it was already there.

p4 "So how might the investment be directed differently?"

A. Electrify all the railways. That would probably even have a CO2 mitigation effect.

Aug 3, 2012 at 9:29 AM | Unregistered CommenterRob Burton

I see at last a glimmer of recognition of mechanisms other than CO2-(A)GW to explain climate. Once you fix the broken aerosol optical physics you can explain the end of ice ages by biofeedback, AGW from Asian pollution reducing cloud albedo, and also why periodic Arctic ice melt causes regional climate change.

However, these people are walking a fine line between scientific honesty and having to kowtow to powerful carbonistas, e.g. the Committee on Climate Change which has some interesting connections: http://www.ipa.org.au/publications/1438/did-global-warming-send-lehman-brothers-broke

Aug 3, 2012 at 9:29 AM | Unregistered Commenterspartacusisfree

I propose a slogan for sceptics: "No model can be trusted until it is tested". Testing models of climate requires the passage of decades; they are therefore quite useless in the context of future climate. Models might be useful in forecasting weather, but even here consider how unreliable they are.

Aug 3, 2012 at 9:48 AM | Unregistered CommenterM0rley Sutter

Hi Andrew,

You've hit on a *really* interesting discussion here, and one that will play out in the climate science community over a number of years. I'd love to be able to chat at length about this (it is my current area of research) - but for now a few notes, and maybe others will step in ;)

What you have to remember about climate models is the path dependency. What were they built for? In many cases, they share physics with weather forecasting models, and so there is a long history of using them to inform decisions. Many of the processes that are useful for informing about climate change, however, are different from those in the weather models (long term ocean circulation is a good example). The climate models were built primarily for *understanding the system*. Basically, you know you understand the system well if you can build a working model of it.

So the move towards ever-higher resolution, and putting more earth system processes in, is a direct result of wanting to better understand the system. "It works when we make these assumptions*; how about when we take them away?" is a useful path towards this understanding.

Part of Jonty's frustration is shared by all climate modellers - there is never quite enough computational resource to do exactly what we want to do. A single climate model run with the latest version of the model might take 6 months. It'll also involve (probably) hundreds of people. An older, simpler version of the model runs fast, so you can do big ensembles (and get to the uncertainties) but doesn't *quite* have the new version of this particularly interesting atmospheric process, and so you can't use it for ....etc. etc. There will always be this tension between running the latest model, and running enough ensemble members to fully understand the uncertainties.

But, really crucially, it depends what question you are asking. You don't need the most complex climate model in the world to understand that perturbing the climate system in a big way will lead to big responses, and probably big disruption of (e.g.) human and ecological systems. This, broadly is the mitigation question.

However, if you are concerned about the impact on a specific, local system, then you'd better understand the processes that are important to that system, and how they might change with a changing climate. This will probably require many more computational resources, along with deeper understanding, and context. This is the adaptation problem.

So, the answer is to have a suite, or hierarchy, of models, ranging from the simple, but fast, well understood, and universally agreed upon (box models, EMICs and even older GCMs now fall into this category), all the way up to the recent Earth system models. Crucially, you need to make sure the understanding of the system makes sense all the way up and down the chain. If your Earth system model starts to tell you something different from your box model, then you need to go and review your understanding.
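For concreteness, the bottom of that hierarchy can be as simple as a zero-dimensional energy-balance box model, cheap enough to run in thousand-member perturbed-parameter ensembles. A minimal sketch in Python (the parameter values and ranges here are purely illustrative, not taken from any actual model):

```python
import numpy as np

SECONDS_PER_YEAR = 3.15e7

def box_model(forcing, feedback, heat_capacity, years, dt=0.1):
    """Zero-dimensional energy balance: C * dT/dt = F - lambda * T.

    forcing       F      in W/m2
    feedback      lambda in W/m2/K
    heat_capacity C      in J/m2/K (roughly an ocean mixed layer)
    Returns the temperature anomaly (K) at the end of the run.
    """
    t = 0.0
    for _ in range(int(years / dt)):
        t += dt * SECONDS_PER_YEAR * (forcing - feedback * t) / heat_capacity
    return t

# A perturbed-parameter ensemble: the simple model is cheap enough to run
# a thousand times, so the spread of responses (the thing one six-month
# Earth-system run cannot give you) comes almost for free.
rng = np.random.default_rng(1)
feedbacks = rng.uniform(0.8, 2.0, size=1000)  # W/m2/K, illustrative range
warming = np.array([box_model(3.7, lam, 4e8, 100) for lam in feedbacks])
print("100-yr warming, 5th-95th percentile: "
      "%.1f to %.1f K" % tuple(np.percentile(warming, [5, 95])))
```

The point is not the numbers, but that the whole ensemble runs in seconds, which is exactly what makes checking the big model against the small one practical.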

So, personally, I agree with Jonty that the knowledge and context provided by the people that built the models is really important. They should certainly take ownership of the expressions of uncertainty produced by the IPCC. This is really hard to do, because of the number of people involved, but it should be possible with this kind of hierarchical model approach. I am optimistic that this kind of approach will be adopted further over the coming years (it is already, to a certain extent).

Oh, and if you're interested in the Hadley Centre model that will be used for the IPCC simulations, you can find some details here.

http://www.geosci-model-dev.net/4/723/2011/gmd-4-723-2011.html

*and, no, I'm not talking about 'we assume that CO2 causes this much warming' here.

Aug 3, 2012 at 9:51 AM | Unregistered CommenterDoug McNeall

@M0rley Sutter: that makes me realise that weather forecasting models seem to do a better-than-nothing job at one or a few days' distance, but are no good at a month, and are (as far as I can tell) not much good at a few minutes. For example, on a showery April day they can't tell me, as far as I know, whether it'll rain in a few minutes' time.

Now, is there any time lapse on which climate models might be expected to do better-than-nothing? Any at all?

Aug 3, 2012 at 9:54 AM | Unregistered Commenterdearieme

@Doug - what is with this "ensemble" idea? Looks like throwing in the towel to me - admitting that one's own model is pants and trying to hide in the crowd.

It reminds me of playing the recorder at primary school - I just used to hum into it and hope the rest of the class knew what they were doing.

Aug 3, 2012 at 10:29 AM | Unregistered CommenterJack Hughes

It appears that some politicians are waking up: http://www.youtube.com/watch?v=6w-FKTQYMFk

Aug 3, 2012 at 10:37 AM | Unregistered Commenterspartacusisfree

Good software developers will test their code at multiple levels, starting with unit tests that test a method (or algorithm) in isolation, then integration tests that check the interactions between these methods, up to full system tests.

The reason for this is quite simple - if system testing throws up an issue, how do you know where it is unless you've got test results at a lower level?

As such, Rougier's call is very sound. It seems to me that the big problem with current models is that they've not been written in a functionally decomposed way, so these lower-level tests are either absent or impossible to write and/or run.
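To illustrate the kind of low-level test meant here, a minimal sketch in Python (the component is hypothetical; the function and its constants are invented for illustration, not taken from any real climate model code):

```python
import unittest
import numpy as np

# A hypothetical, functionally decomposed model component: a small pure
# function like this is exactly what makes unit testing possible.
def saturation_vapour_pressure(temp_k):
    """Magnus-type approximation over water, returned in Pa."""
    temp_c = temp_k - 273.15
    return 610.94 * np.exp(17.625 * temp_c / (temp_c + 243.04))

class TestSaturationVapourPressure(unittest.TestCase):
    def test_reference_value_at_freezing(self):
        # ~611 Pa at 0 C is a standard textbook reference point.
        self.assertAlmostEqual(saturation_vapour_pressure(273.15),
                               611.0, delta=1.0)

    def test_monotonic_in_temperature(self):
        # Physical sanity check: warmer air must hold more vapour.
        temps = np.arange(250.0, 320.0, 1.0)
        esat = saturation_vapour_pressure(temps)
        self.assertTrue(np.all(np.diff(esat) > 0))

if __name__ == "__main__":
    unittest.main()
```

If the full system run then misbehaves, tests like these tell you which pieces you can rule out.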

This puts another perspective on Mr Drake's point. Monolithic systems running on huge mainframes are not really that common any more. Distributed computing is considered a much more appropriate way of solving many of the problems we encounter with crunching significant volumes of data. For example, there are already folk out there experimenting with what you can achieve with arrays of Raspberry Pis - the very definition of "affordable".

The sad thing is that this is nothing new - distributed computing was seen as the way forward when I was at University in the Eighties.

Aug 3, 2012 at 10:44 AM | Registered Commenterthrog

'The IPCC reports are valuable sources of information,'

only if you assume that the information is valid and that it's actually the 'best' information available, not the information that is 'most useful' for the IPCC's political objectives, where quality comes second place.

Given their past behaviour, who would bet that is the case?

Aug 3, 2012 at 12:55 PM | Unregistered CommenterKnR

From the Ecclesiastical Uncle, an old retired bureaucrat in a field only remotely related to climate with minimal qualifications and only half a mind.

As my understanding of climate science is close to negligible I comment upon the Bishop’s header without reading the quoted paper.

Firstly, re models.
Gosh Bishop! I wonder whether the choice you discuss (between small, evidently deficient models run frequently and more comprehensive models that are too expensive) merits public funding, particularly as the cognoscenti here seem to acknowledge that resolution is a matter for the extreme long term.

Re the IPCC
This is very much 'drawing room' language and, in the interests of politeness, actually misleads. 'The IPCC reports are valuable sources of information …' leads the reader to suppose that the reports do not omit (maybe routinely) references to research at least as relevant as that which they do cite, to the evident prejudice of their value. Also, 'but no one owns the judgements in them' seems plain wrong: surely the IPCC does, because it writes the reports and does not deny ownership. Given the caveats about the reliability of the organization lower in the quote, does this error not further conceal the questions over the value of the reports?

And the term 'risk manager' seems a rather sophisticated import from the world of commerce or business, and may not mean much, say, to the scientist reading this paper who is also looking to an IPCC report for evidence. It would surely be better to be more direct and state that the IPCC is constrained by the policies of the governments that facilitate its work and cannot write reports that do not support those policies.

Aug 3, 2012 at 1:07 PM | Unregistered CommenterEcclesiastical Uncle

Why isn't Rougier head of the IPCC? If modelers had expressed themselves with the good sense of a Rougier there would have been no climate scare. (Maybe a little Mann scare.)

The fact that Rougier or someone like him is not running the modeling show for the IPCC tells me that good work is being suppressed by the IPCC.

Aug 3, 2012 at 2:46 PM | Unregistered CommenterTheo Goodwin

Doug McNeall writes:

"So, personally, I agree with Jonty that the knowledge and context provided by the people that built the models is really important. They should certaintly take ownership of the expressions of uncertainty produced by the IPCC. This is really hard to do, because of the number of people involved, but it should be possible with this kind of heirarchical model approach. I am optimistic that this kind of approach wil be adopted further over the coming years (it is already, to a certain extent)."

It seems to me that Rougier is more severe in his criticisms than you suggest: he says that climate science must gain additional insight into the physics if progress is to be made with the models. If that is indeed his point, I agree, because it seems obvious to me that climate science lacks some of the physical hypotheses needed to provide context for the modelers.

Aug 3, 2012 at 4:06 PM | Unregistered CommenterTheo Goodwin

Firstly, it seems that this is not an 'academic paper', so the authors "have taken the opportunity in this essay to be a little more polemical than we might be in an academic paper, and maybe a little more exuberant in our expressions" - which probably accounts for most of the bits I consider good!!

However there are others -

The authors identify some of the problems -

"The reason that we are suspicious of arguments about climate founded on experiences in meteorology is the presence of biological and chemical processes in the earth system that operate on climate policy but not weather time-scales. We believe that the ackonwledgement of biogeochemistry as a full part of the climate system distinguishes the true climate scientist from the converted meteorologist"

(It would seem that Julia Slingo falls under this category - see her statement to the Science and Technology Committee when asked if there was a problem with the scientific software -

"At least for the UK the codes that underpin our climate change projections are the same codes that we use to make our daily weather forecasts, so we test those codes twice a day for robustness." )

http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387b/38724.htm

However there is still that statement that -

"In a nutshell, we do not think that academic climate science equips climate
scientists to be as helpful as they might be, when involved in climate policy
assessment. Partly, we attribute this to an over-investment in high resolution
climate simulators, and partly to a culture that is uncomfortable with the
inherently subjective nature of climate uncertainty"

And their solution is simpler, lower-resolution models with many more runs to 'assist policymakers'.

Sorry I simply don't buy the 'solution' (and I've just run out of time thanks to other commitments but will come back to this!!)

Aug 3, 2012 at 5:23 PM | Unregistered CommenterMarion

Doug McNeall, I read your link about the Hadley model. I came away with little idea of how the radiative effects get into the picture. Does the program work from first principles, with insolation, DWIR, upward IR and all that? Is there in fact a Trenberth-style energy balance diagram in there somewhere, by implication at least? Or is the radiation part effectively a black box, as I have seen asserted here by critics?

Oh, and while we are here, what is the smallest time slice used in the calculations? Does anyone go down as far as to produce a different energy balance at 6pm than at noon, or is that just a bit too detailed?

Aug 3, 2012 at 6:18 PM | Registered Commenterrhoda

So our climate is an incredibly complex and chaotic system

http://wattsupwiththat.com/2012/01/21/the-ridiculousness-continues-climate-complexity-compiled/

The data would appear to be inadequate and vulnerable to subjective assumptions

http://wattsupwiththat.com/2009/11/25/climategate-hide-the-decline-codified/

and all models are wrong and the possibilities limitless.

http://allmodelsarewrong.com/limitless-possibilities/

So what does Rougier advocate to assist policymakers?

NOT higher resolution models to help understand our complex and chaotic climate system (which would also help meteorology)

NOT more investment in better technology and more accurate and more numerous temperature data collection points to avoid the need for subjective assumptions

But rather -

"For climate policy it is necessary to enumerate what might happen under different climate interventions: do nothing, monetise carbon, regulation for contraction and convergence, geo-engineering, and so on. And each of these interventions must be evaluated for a range of scenarios that capture future uncertainty about technology, economics, and demographics. For each pair of intervention and scenario there is a range of possible outcomes, which represent our uncertainty about future climate. Uncertainty here is ‘total uncertainty’: only the intervention and the scenario are specified—the policymaker does not have the luxury of being able to pick and choose which uncertainties are incorporated and which are ignored."

Whilst he considers internal variability to be only a tiny part of the total uncertainty.

It would seem that the problem being 'carbon' related is almost taken as a given.

So it would be interesting to find out if any of the 100 different simulator configurations chosen implied that it was not.

Aug 3, 2012 at 8:06 PM | Unregistered CommenterMarion

Re: Aug 3, 2012 at 10:37 AM | spartacusisfree

"It appears that some politicians are waking up: "

http://www.youtube.com/watch?v=6w-FKTQYMFk

Thanks for the link, Spartacus, I was enjoying it until Boxer spoke (at 4:04). Incredible, again that ludicrous 97-98%-of-scientists meme. A shame Senator Sessions didn't have this graph to hand in response to show up her nonsense:

http://wattsupwiththat.com/2012/07/18/about-that-overwhelming-98-number-of-scientists-consensus/

Aug 3, 2012 at 8:19 PM | Unregistered CommenterMarion

Isn't the problem with 'Climate Science' the breadth of the subject matter - no one person can really understand the whole in expert detail? There is a reliance on an overall message being created and on the contributions of colleagues. This leads to ... ahem ... confirmation bias amongst scientists, academics and institutions.

Aug 3, 2012 at 8:30 PM | Unregistered CommenterChairman Al

A reminder of the dangers of subjective assumptions from the climategate mails -

<0310> Warren: The results for 400 ppm stabilization look odd in many cases [...] As it stands we'll have to delete the results from the paper if it is to be published.

<1682> Wils: [2007] What if climate change appears to be just mainly a multidecadal natural fluctuation? They'll kill us probably [...]

<2267> Wilson: Although I agree that GHGs are important in the 19th/20th century (especially since the 1970s), if the weighting of solar forcing was stronger in the models, surely this would diminish the significance of GHGs.[...] it seems to me that by weighting the solar irradiance more strongly in the models, then much of the 19th to mid 20th century warming can be explained from the sun alone.

<2967> Briffa: Also there is much published evidence for Europe (and France in particular) of increasing net primary productivity in natural and managed woodlands that may be associated either with nitrogen or increasing CO2 or both. Contrast this with the still controversial question of large-scale acid-rain-related forest decline? To what extent is this issue now generally considered urgent, or even real?

<0953> Jones: This will reduce the 1940-1970 cooling in NH temps. Explaining the cooling with sulphates won't be quite as necessary.

<4944> Haimberger: It is interesting to see the lower tropospheric warming minimum in the tropics in all three plots, which I cannot explain. I believe it is spurious but it is remarkably robust against my adjustment efforts.

<4262> Klein/LLNL: Does anybody have an explanation why there is a relative minimum (and some negative trends) between 500 and 700 hPa? No models with significant surface warming do this

<4470> Norwegian Meteorological Institute: In Norway and Spitsbergen, it is possible to explain most of the warming after the 1960s by changes in the atmospheric circulation. The warming prior to 1940 cannot be explained in this way

http://foia2011.org/

Aug 3, 2012 at 9:13 PM | Unregistered CommenterMarion

Six months to run a model...
How many times do you say "oh shit" over that period?
...*off the record*, of course
(I always say it a few times, and my runs hardly ever take more than 24h)

Aug 3, 2012 at 9:21 PM | Unregistered CommenterPatagon

Six months to run a model...

"I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is."
- Deep Thought, Hitchhiker's Guide to the Galaxy.

Aug 4, 2012 at 4:02 AM | Unregistered CommenterStephen Rasey

Interesting new model (Thanks September 2011)

http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00640.1?af=R&&

Abstract

"Feedbacks in response to climate variations during the period 2000-2010 have been calculated using reanalysis meteorological fields and top-of-atmosphere flux measurements. Over this period, the climate was stabilized by a strongly negative temperature feedback (~ −3 W/m2/K); climate variations were also amplified by a strong positive water vapor feedback (~ +1.2 W/m2/K) and smaller positive albedo and cloud feedbacks (~ +0.3 and +0.5 W/m2/K, respectively). These observations are compared to two climate model ensembles, one dominated by internal variability (the control ensemble) and the other dominated by long-term global warming (the A1B ensemble). The control ensemble produces global average feedbacks that agree within uncertainties with the observations, as well as producing similar spatial patterns."

Oh, what was that .....

"These observations are compared to two climate model ensembles, one dominated by internal variability (the control ensemble) and the other dominated by long-term global warming (the A1B ensemble). The control ensemble produces global average feedbacks that agree within uncertainties with the observations"

What a surprise!! Seems like "Internal variability" wins!

Aug 4, 2012 at 2:14 PM | Unregistered CommenterMarion

Whoops, and

"Interesting new model (Thanks September 2011)"

Should read

"Interesting new paper (Thanks September 2011)"

my bad!

Aug 4, 2012 at 2:27 PM | Unregistered CommenterMarion

In response to Marion.

Well said. Below is a link to the heads of argument about risk, bearing precisely on what you say:

"For climate policy it is necessary to enumerate what might happen under different climate interventions: do nothing, monetise carbon, regulation for contraction and convergence, geo-engineering, and so on. And each of these interventions must be evaluated for a range of scenarios that capture future uncertainty about technology, economics, and demographics. For each pair of intervention and scenario there is a range of possible outcomes, which represent our uncertainty about future climate. Uncertainty here is ‘total uncertainty’: only the intervention and the scenario are specified—the policymaker does not have the luxury of being able to pick and choose which uncertainties are incorporated and which are ignored."

http://www.gci.org.uk/Four_Keys.html

Aug 4, 2012 at 5:32 PM | Unregistered CommenterA Meyer

Re: Aug 4, 2012 at 5:32 PM | A Meyer

Except, A. Meyer, that the words you quote are not my words, but I suspect you knew that anyway!

Nor do I advocate multiple model runs with slightly different parameter choices based on the same subjective assumptions as any form of 'proof' to reduce uncertainty.

Indeed it seems to me that this would be done for little more than propaganda purposes. And it is certainly not the best use of existing model resources, which could more helpfully be used at greater resolutions to enable more accurate localised forecasting, which is after all what our farmers, fishermen and others require!

But thanks for the link, more examples of mis-spent monies that could have been better used feeding the peoples of Africa, where the UN's ludicrous policies on biofuels have left famine.

Aug 4, 2012 at 7:14 PM | Unregistered CommenterMarion

Climate change becomes a 'policy issue' when calculating the rates of change needed for compliance with the objective of the UNFCCC is seen as a necessary prerequisite to actually achieving it.

For those who accept that, the question then becomes: what is the genus of model needed to strategically focus and quantify that effort, so as to avoid yet more model detail and the danger of not seeing the wood for an endless complexity of trees?

After 25 years GCI put forward this: http://www.gci.org.uk/CBAT/cbat-domains/Domains.swf

It's work in progress, but already various people who don't dispute the human causation of climate change and who do accept the need for UNFCCC-compliance see a range of virtues in this approach. Not least that it is a user-interactive animation where the maths is straightforward and the onus is on the user to choose the level of risk consistent with achieving that objective: http://www.gci.org.uk/Responses_to_CBAT.html

Oct 17, 2013 at 5:45 PM | Unregistered CommenterA Meyer
