
Dangerous climate change?

This is a slightly edited version of a comment Richard Betts left on the discussion forum. I thought it was quite challenging to much of what we hear about climate change in the mainstream media and therefore worthy of posting here as a header post. (Richard, for anyone visiting for the first time, is head of climate change impacts at the Met Office).

Most climate scientists* do not subscribe to the 2 degrees "Dangerous Climate Change" meme (I know I don't). "Dangerous" is a value judgement, and the relationship between any particular level of global mean temperature rise and impacts on society is fraught with uncertainties, including the nature of regional climate responses and the vulnerability/resilience of society. The most solid evidence for something with serious global implications that might happen at 2 degrees is the possible passing of a key threshold for the Greenland ice sheet, but even then that's the lower limit, and it would also probably take centuries to take full effect. Other impacts like drought and crop failures are massively uncertain, and while severe negative impacts may occur in some regions, positive impacts may occur in others. While the major negative impacts can't be ruled out, their certainty is wildly over-stated.

While really bad things may happen at 2 degrees, they may very well not happen either - especially in the short term (there may be a commitment to longer-term consequences such as ongoing sea level rise that future generations have to deal with, but imminent catastrophe affecting the current generation is far less certain than people make out). We just don't know.

The thing that worries me about the talking-up of doom at 2 degrees is that this could lead to some very bad and expensive decisions in terms of adaptation. It probably is correct that we have about 5 years to achieve a peak and decline of global emissions that give a reasonable probability of staying below 2 degrees, but what happens in 10 years' time when emissions are still rising and we are probably on course for 2 degrees? If the doom scenario is right then it would make sense to prepare to adapt to the massive impacts expected within a few decades, and hence we'd have to start spending billions on new flood defences, water infrastructure and storm shelters, and it would probably also make sense for conservationists to give up on areas of biodiversity that are apparently "committed to extinction" - however all these things do not make sense if the probability of the major impacts is actually quite small.

So while I do agree that climate change is a serious issue and it makes sense to try to avoid committing the planet to long-term changes, creating a sense of urgency by over-stating imminent catastrophe at 2 degrees could paint us into a corner when 2 degrees does become inevitable.

*I prefer to distinguish between "climate scientists" (who are mainly atmospheric physicists) and "climate change scientists" who seem to be just about anyone in science or social science that has decided to see what climate change means for their own particular field of expertise. While many of these folks do have a good grasp of climate science (atmospheric physics) and the uncertainties in attribution of past events and future projections, many sadly do not. "Climate change science" is unfortunately a rather disconnected set of disciplines with some not understanding the others - see the inconsistencies between WG1 and WG2 in IPCC AR4 for example. We are working hard to overcome these barriers but there is a long way to go.


Reader Comments (285)

Dr Edwards --
Let me say that the effort to make your analysis more easily reproduced (and examined) is appreciated. May I ask which non-proprietary software you are using?

Nov 14, 2011 at 5:10 PM | Unregistered CommenterHaroldW

Of course.

Basic processing: cdo and nco operators, Ferret, shell scripting, Perl.
Statistical analysis and plotting: R.

I used to use PV-WAVE/IDL. The move to R was also motivated by its use by all the statisticians I know, so I can share code and check my work with my colleague Jonty Rougier. Free software also means I'm more flexible - I can work on my laptop without being connected to Bristol machines for licences.

For PalaeoQUMP (now officially finished...), I'm required by the funders, NERC, to put my climate simulations on BADC. I just need to finalise the processing and analysis first. The idea is that others can analyse my ensembles (mid-Holocene and Last Glacial Maximum) without repeating my work. I also won't have time to do every possible analysis with that data.

For ice2sea, I don't yet know whether the datasets will be online. The climate simulations are available to ice sheet modelling groups (under licence agreement) for a model intercomparison. The ice sheet model simulations from Bristol and other groups will be put somewhere public if I'm allowed: that might depend on ice2sea or on who did them, I'm not sure.

Nov 14, 2011 at 5:23 PM | Unregistered CommenterTamsin Edwards

Dr Edwards,
Thanks. As you say, R seems to be the lingua franca for statistical analysis. I shall have to take the time to pick it up in order to keep current.

Nov 14, 2011 at 5:46 PM | Unregistered CommenterHaroldW


My earlier, angry rant about "science" referred to the likes of Mann and Hansen in the USA, and the Climategate scoundrels in the UK, not to you or Richard Betts. All actions have consequences, and the consequences of their fraudulent science (hockey sticks, New York flooding, fiddled data, etc) have been to raise fuel bills here in the UK by 15-20% more than they need to be and to land us with a bill for hundreds of billions of pounds. The hunger and deaths that I referred to are real.

As a scientist/engineer I support the academic study of climate, while accepting the uncertainty that arises from a multivariate analysis, with non-linear relationships and a large number of degrees of freedom. But I am sure that you would agree that it is the responsibility of scientists to ensure that their work does not result in unintended, harmful or possibly even catastrophic consequences.

Again, my apologies - my angry remarks were not directed at you or Richard.

Nov 14, 2011 at 6:17 PM | Unregistered CommenterRoger Longstaff

I notice the ICE2SEA project funding is €9.99 million and the project cost is €13.64 million, according to the European Commission's FP7 records.
Anyone any idea what the €3.65 million gap between the cost and the funding is for?

Nov 14, 2011 at 6:40 PM | Unregistered CommenterJustin Ert

Sorry, wrong thread!

Nov 14, 2011 at 6:41 PM | Unregistered CommenterJustin Ert

Well done, Roger Longstaff - you have transitioned back to graciousness! (And I must say my admiration for Tamsin Edwards has grown by leaps and bounds.)

I too get angry with what has happened in climate science - I remember it with affection as a worthy albeit often dull and plodding area of scholarship and discussion. That was before its political potential was spotted by malevolent types.

I also try to remember that people born after say 1980 or so may have been exposed to strident, heartfelt climate alarmism from their teachers and their lecturers all their schooling and undergraduate days, reinforced by the disgraceful decision of the BBC to back the alarmists. The realisation of the intellectual and moral bankruptcy of that alarmism will come to many as a threat to be instinctively resisted at almost any cost.

Nov 14, 2011 at 10:28 PM | Unregistered CommenterJohn Shade

Nov 14, 2011 at 3:15 PM | Roger Longstaff

Hi Roger,

Sorry to be absent again - as Tamsin suspected, I haven't run away, I've just been busy both with work and family, and also following up on the various forums on which the conversation has developed around my original post which started this thread. In particular, Bob Ward didn't seem happy with what I'd written, and said a number of things that required a response from me, but my response took a while to show up and required some chasing (it got stuck in a spam filter!)

Nov 14, 2011 at 11:17 PM | Unregistered CommenterRichard Betts

Nov 13, 2011 at 10:43 AM | Mike Jackson

Hi Mike

You are absolutely right: the fact that the pre-industrial baseline is so poorly constrained is one of the difficulties with trying to identify a particular threshold of global warming, and hence the definition of "dangerous" warming has to be a judgement call (one which involves many other non-science considerations as well as uncertain science).

Nov 14, 2011 at 11:22 PM | Unregistered CommenterRichard Betts

Hi again Mike

On your other point:

There is still a debate (quite heated in places) as to whether the feedbacks are positive or negative or even both under different circumstances. Unless the program can decide for itself which, somebody must have instructed it to assume one or the other.
At which point the question arises: where did the programmer get the information to make that decision?

It is a common misconception that positive feedbacks are somehow "built-in" to the models, but this is not the case. The water vapour feedback and cloud feedbacks are emergent properties of the simulation.
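To illustrate what "emergent" means here, consider a toy zero-dimensional energy balance (my own invented sketch - nothing remotely like a real GCM, and all the coefficients are made up): nowhere does the code say "the feedback is positive", yet amplification emerges because greenhouse trapping is tied, Clausius-Clapeyron fashion, to temperature.

```python
import math

# Toy zero-dimensional energy-balance sketch. Purely illustrative,
# hypothetical numbers - NOT how a GCM works. The point: no feedback
# sign or size is coded in; it emerges from the physical relationship
# between temperature and water vapour (Clausius-Clapeyron).

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(forcing, feedback_on):
    """Relax a toy surface energy balance to equilibrium; return T in K."""
    t = 288.0
    absorbed = 240.0 + forcing  # absorbed shortwave + imposed forcing, W m^-2
    for _ in range(5000):
        if feedback_on:
            # Greenhouse trapping tied to saturation vapour pressure
            # (Clausius-Clapeyron); coefficients invented for illustration.
            e_sat = 6.11 * math.exp(17.27 * (t - 273.15) / (t - 35.85))
            trapping = 100.0 + 25.0 * math.log(e_sat)
        else:
            trapping = 150.0  # fixed greenhouse trapping: no feedback possible
        olr = SIGMA * t ** 4 - trapping  # outgoing longwave radiation
        t += 0.01 * (absorbed - olr)     # step toward energy balance
    return t

for fb in (False, True):
    warming = (equilibrium_temperature(3.7, fb)   # ~2xCO2 forcing
               - equilibrium_temperature(0.0, fb))
    print(f"feedback_on={fb}: warming = {warming:.2f} K")
```

Running it shows the warming from a fixed 3.7 W/m2 forcing is larger when the trapping is allowed to respond to temperature than when it is held fixed - the sign and size of the feedback are outputs of the simulation, not inputs to it.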

Nov 14, 2011 at 11:28 PM | Unregistered CommenterRichard Betts

Nov 13, 2011 at 3:09 PM | Don Pablo de la Sierra

when will your models predict the weather more than a week from now?

When we've got a bigger computer of course! :-)

Joking aside, we know that forecast accuracy has progressively improved over recent decades as the models have improved, and part of this is the progressive increase in computing power which has allowed us to run at higher resolution and include more sophisticated sets of calculations. A colleague who is close to retirement said last week that the 5-day forecast now is as good as the 1-day forecast when he started. At the moment, the resolution of the models is still too coarse to capture the details of convective processes, so these have to be parametrized (i.e. their large-scale consequences are approximated). If we resolve the finer-scale processes then we anticipate further ongoing improvements, but this needs more computing power!

Nov 14, 2011 at 11:34 PM | Unregistered CommenterRichard Betts

Nov 13, 2011 at 4:00 PM | Pharos

I would like to respectfully ask Richard Betts whether the GCM's produce the marked asymmetry in tropospheric temperature trends between the northern and southern hemispheres described in the paper I referred to upthread at 9:12 PM Nov 10, 2011, and if so whether the model identifies a specific primary terrestrial or celestial mechanism for that asymmetry, and how is that asymmetry predicted to progress?

The simple answer is: it depends on the model. The same authors have published a comparison with the models used in the last IPCC report here. To be honest I am not sure how the AR5 generation of models does in this regard - although I would not be surprised if there were papers frantically being finalised as we speak (the strict deadline for papers to be submitted if they are cited in the WG1 First Order Draft is Friday!!)

However, if by a "celestial mechanism" you mean cosmic rays, none of the models include that. Although there does appear to be some evidence from authors other than Svensmark that this process may have a small influence, I'm not aware of a properly quantified process yet. The recent Kirkby paper may help a little here, but it addresses only one part of it.

Thanks for the question though - it's an interesting and important issue.

Nov 14, 2011 at 11:48 PM | Unregistered CommenterRichard Betts

Nov 14, 2011 at 6:40 PM | Justin Ert

I realise you said your post was on the wrong thread but I think I may know the answer so I'll say it here (can't find which other thread this is on!)

I expect that the difference between the cost and the funding comes from the fact that for most EU-funded projects, many institutes have to find matching funding from other sources. So I imagine that "cost" means the total cost of the project, as met by all funding sources, and "funding" means just the EU contribution.

Nov 14, 2011 at 11:54 PM | Unregistered CommenterRichard Betts

Richard, does the size and sign of the water vapor feedback emerge from a purely physics based model in which all processes are represented properly (up to well bounded truncation effects) and in which all relevant constants are obtained from separate direct measurements? Or does the model include non-physical parameterisations of any components and/or any tunable parameters?

Nov 15, 2011 at 8:22 AM | Unregistered CommenterJonathan Jones

This would make a good focus for a seminar:
'It is a common misconception that positive feedbacks are somehow "built-in" to the models, but this is not the case. The water vapour feedback and cloud feedbacks are emergent properties of the simulation.'
(Richard Betts, Nov 14, 2011 at 11:28 PM)

I must say that I thought there was of necessity a great deal of 'parameterisation' in GCMs to handle the vagaries of water vapour, including clouds, and much else besides. Of course there are feedbacks of all kinds, since there is a great complex of interactions going on in the system. The question, as ever, is not their existence but their sign and their magnitude and the contexts in which they matter most.

I remain impressed by the testimony of Richard Lindzen to the Senate of the USA in 1997. I note this passage in particular re feedbacks (where the italics and the emboldening have been added by me):

'The presence of the positive water vapor feedback in current models also increases the sensitivity of these models to other smaller feedbacks such as those due to clouds and snow reflectivity. The trouble with current models is that they generally lack the physics to deal with the upper level water vapor budget, and they are generally unable, for computational reasons, to properly calculate a quantity like water vapor which varies sharply both vertically and horizontally (Sun and Lindzen, 1993, Lindzen, 1995). Indicative of these problems is the recent work of J.J. Bates and D.L. Jackson at NOAA who found, using satellite data from infrared sounders, that, on the average, current models underestimate zonally averaged (averaged around a latitude circle) water vapor by about 20%. This is illustrated in Figure 2. It should be noted that this represents an error in radiative forcing of about 20 Watts per square meter, as compared with the forcing of 4 Watts per square meter due to a doubling of carbon dioxide (Thompson and Warren, 1982, Lindzen, 1995). More recent observational analyses by Spencer and Braswell (1997), using satellite microwave data, suggest that even Bates and Jackson have overestimated water vapor, and that the discrepancy with models is still greater. Under the circumstances, there seems to be little actual basis for the most important positive feedback in models. Given our inability to detect expected warming in the temperature data, one might reasonably conclude that models have overestimated the problem.

In some ways, we are driven to a philosophical consideration: namely, do we think that a long-lived natural system, like the earth, acts to amplify any perturbations, or is it more likely that it will act to counteract such perturbations? It appears that we are currently committed to the former rather vindictive view of nature.'

Now of course that was 14 years ago, and surely progress has been made since. But I would be pleasantly surprised, indeed delighted, if that progress was such that water vapour feedback parameters are no longer set by the hand of man. I'd love to hear more about this.

In the meantime, I note a more recent presentation by Lindzen, from January this year, in which this passage occurs:

'The larger predictions from climate models are due to the fact that, within these models, the more important greenhouse substances, water vapor and clouds, act to greatly amplify whatever CO2 does. This is referred to as a positive feedback. It means that increases in surface temperature are accompanied by reductions in the net outgoing radiation – thus enhancing the greenhouse warming. All climate models show such changes when forced by observed surface temperatures. Satellite observations of the earth’s radiation budget allow us to determine whether such a reduction does, in fact, accompany increases in surface temperature in nature. As it turns out, the satellite data from the ERBE instrument (Barkstrom, 1984, Wong et al, 2006) shows that the feedback in nature is strongly negative -- strongly reducing the direct effect of CO2 (Lindzen and Choi, 2009) in profound contrast to the model behavior. This analysis makes clear that even when all models agree, they can all be wrong, and that this is the situation for the all important question of climate sensitivity. Unfortunately, Lindzen and Choi (2009) contained a number of errors; however, as shown in a paper currently under review, these errors were not relevant to the main conclusion.'

Nov 15, 2011 at 9:32 AM | Unregistered CommenterJohn Shade

John Shade links to the paper Lindzen refers to.
My concern is that, according to this paper — and Lindzen does admit that there were certain errors, though he says they do not affect the main result — observations are saying one thing and the models (virtually all of them) are saying the opposite.
And this is the point I was trying to raise with Richard Betts in my semi-illiterate way.
I accept Richard's statement that "The water vapour feedback and cloud feedbacks are emergent properties of the simulation", but the near unanimity of the models and the difference between them and observations in the real world tell me that at some point the programming has (in effect) told them in which direction those feedbacks are to go, or that the assumptions that have been input inevitably lead to that result.
And the argument, which I have heard, that all the models say one thing and only one set of observations says something different won't wash. Simply asserting that you must be right because more outputs agree with you than disagree doesn't work.
(I'm not accusing you of any of this, Richard, by the way!)
It only needs one set of observations to be right for the models to be wrong. It ought to need only one set of observations to differ fundamentally from the models to call for a serious investigation of the accuracy of those models instead of (as seems to be the way in climate change science) a serious investigation into the sanity or source of funding or politico-religious background of whoever dared question them.

Nov 15, 2011 at 10:11 AM | Unregistered CommenterMike Jackson

Richard, Tamsin,

If you have time I would appreciate your comments on a post I made upstream, which seems to me much more important than the details of models:

Concerning "pre-industrial levels" of temperature and CO2, and how increasing anthropogenic CO2 may increase temperatures, I would like to make two simple points:

1. Pre-industrial temperatures were consequent upon a world emerging from the (perfectly natural) Little Ice Age, and we would naturally expect temperatures to rise for several centuries, and,

2. Pre-industrial levels of CO2 are far from certain. Many (myself included) strongly suspect that the ice core methodology of measurement is hopelessly flawed, and place greater reliance upon the thousands of chemical measurements by respected scientists, over more than a century, that showed that CO2 concentrations 100 years ago were the same as (or higher than) today.

Unless both of these points can be refuted there is no need for an AGW or CAGW hypothesis (and certainly not a theory), and clearly no need to consider expensive mitigation strategies.

Nov 15, 2011 at 10:14 AM | Unregistered CommenterRoger Longstaff

Dear Barry et al,

Re: 2C, 1C and who defines dangerous.

A slightly more careful examination of the slides, text and papers where I refer to 2C and 1C would point to my concern being with the logic (or absence of it) in those deeming 2C as dangerous. If a set of impacts associated with 2C is held to be collectively dangerous and later those same or similar impacts are considered to occur at 1C, then logic suggests, ceteris paribus, that 1C is now dangerous, at least to those who considered 2C to be so. That is as far as my argument and broad comments on 1C go, and in different forms I have stated this clearly on many occasions, for example:

“What constitutes an ‘acceptable’ temperature increase is a political rather than a scientific decision, though the former may be informed by science. By contrast, the correlation between temperature, atmospheric concentration of carbon dioxide equivalent (CO2e) and anthropogenic cumulative emission budgets emerges, primarily, from our scientific understanding of how the climate functions”

“Whilst it is legitimate to question whether temperature is an appropriate metric for representing climate change and, if it is, whether 2°C is the appropriate temperature (Tol 2007), this is not the purpose of this paper. Instead, the paper begins by considering the implications of the 2°C threshold for global emission pathways, before proceeding to consider the implications of different emission pathways on stabilisation concentrations and associated temperatures.”
(From Anderson, K. and Bows, A., 2008, Reframing the climate change challenge in light of post-2000 emission trends, Philosophical Transactions A, 366, 3863-3882.)


“The characterisation of 2°C as the appropriate threshold between ‘acceptable’ and ‘dangerous’ climate change is premised on an earlier assessment of the scope and scale of the accompanying impacts. However, these have since been re-evaluated with the latest assessments suggesting a significant increase in the severity of some impacts for a 2°C temperature rise (see for example Mann 2009; Smith, et al. 2009). Consequently, it is reasonable to assume, ceteris paribus, that 2°C now represents a threshold, not between ‘acceptable’ and ‘dangerous’ climate change, but, between ‘dangerous’ and ‘extremely dangerous’ climate change; in which case the importance of low probabilities of exceeding 2°C increases substantially.”

… and later …

“Moreover, given that it is a ‘political’ interpretation of the severity of impacts that informs where the threshold between ‘acceptable’ and ‘dangerous’ climate change resides, the recent reassessment of these impacts upwards suggests current analyses of mitigation significantly underestimate what is necessary to avoid ‘dangerous’ climate change (Mann 2009; Smith, et al. 2009). Nevertheless, and despite the evident logic for revising the 2°C threshold, there is little political appetite and limited academic support for such a revision. In stark contrast, many academics and wider policy advisers undertake their analyses of mitigation with relatively high probabilities of exceeding 2°C and consequently risk entering a prolonged period of what can now reasonably be described as ‘extremely dangerous’ climate change (assuming the arguments for the 2°C characterisation of what constitutes dangerous still holds.)”
(From Anderson, K., and Bows., A., 2011, Beyond dangerous climate change: emission pathways for a new world, Philosophical Transactions of the Royal Society A, 369, 20-44, DOI:10.1098/rsta.2010.0290)

As for my personal views on the impacts of 2C: I, like most folk on this site, am no expert on impacts (and I have made this clear on repeated occasions). Where I can, I read a breadth of literature on impacts, and for 2C I broadly concur with Richard Betts, an expert on impacts, at least with respect to the large-scale physical responses to 2C (e.g. the Greenland ice sheet). There are lots of uncertainties – though at more local levels I part company with Richard a little. My net understanding is that some people will suffer significantly, particularly those on the margins and in more vulnerable regions; similarly, others will likely benefit. But not everyone can join the fortunate few as they move from one nice home in Dublin to another in Brighton – though even such a ‘simple’ move may not prove quite as straightforward as glib comments can sometimes suggest. On balance, and despite the uncertainties, I judge that the rate of change of temperature, and particularly the rate of change of more local impacts, will prove overall negative, though if the temperature were to stop at 2C then whether the big events Richard refers to would be triggered is, as he says, very uncertain.

But in regards to 2C being dangerous and in the absence of a cabinet of wise Bishop Hill contributors putting themselves forward for election, we’re lumbered with those who have made the effort to step forward and for whom we’ve collectively voted (or sometimes chosen not to engage with). And it is those receiving our votes who, on our behalf, have deemed 2C as the appropriate metric and level for delineating dangerous from acceptable climate change – however ill or well informed we may think them to be.

However, all this is a bit academic. Ask Richard about 4C, 5C or higher – and I would hazard a guess he’d be less sanguine. Here I can claim at least some expertise. If the science is broadly correct in relation to temperature and cumulative emissions (Richard’s and others’ domain) then my colleagues’ and my own work demonstrates that we’re heading much more towards the cumulative emissions associated with at least 4C and far removed from anything associated with 2C. It is this that I am particularly concerned with. In my assessment, with which I think Richard would likely agree(?), we’re collectively claiming concern for 2C whilst merrily heading for 4C if not higher, and whilst the net balance of impacts is arguably uncertain at lower temperatures, this is not the case for 4C, 5C and perhaps a degree or three warmer. At these temperatures, potentially reached within just a handful of decades (also Richard’s work), it is difficult to envisage anything other than major and systemic damages; I’d be interested in Richard’s view on this.

As I have noted before, if blog sites like this are to perform a thorough critique of the many facets of climate change, then some contributors need to be a little more thorough and circumspect in their assessments of others’ analyses. Richard (Betts) and I will undoubtedly judge many issues on climate change differently, some perhaps significantly so. But overall I think there is much greater consensus than difference in our views on for example: the veracity of the basic science underpinning climate change; the rates of emissions growth; our failure to put in place mitigation measures and how this links with cumulative emissions and temperatures; and even the balance of dangers and opportunities at different temperatures.

PS. For those Bishop Hill folk evidently interested in whom I sit next to, I’ve also sat with representatives of Easy Jet, the IMO, Eon, politicians of many parties, telecom companies, National Grid, Tescos, BNFL (as was), the London Mayor, BP, and many more. I’ve taken no money, holidays, gifts etc. from any of them or from Caroline Lucas. Given I regularly travel by train and bus, I’ve also sat next to many other people – perhaps Barry needs a list of them all before he judges allegiances objectively?

Nov 15, 2011 at 10:17 AM | Unregistered CommenterKevin Anderson

(Drs) Richard(B), Tamsin, Richard(T)

Would it be possible for you or one of your colleagues to give us an update on the recent development of models and their use in the science? In particular, with reference to this from Dr Trenberth in 2007:

The current projection method works to the extent it does because it utilizes differences from one time to another and the main model bias and systematic errors are thereby subtracted out. This assumes linearity. It works for global forced variations, but it can not work for many aspects of climate, especially those related to the water cycle. For instance, if the current state is one of drought then it is unlikely to get drier, but unrealistic model states and model biases can easily violate such constraints and project drier conditions. Of course one can initialize a climate model, but a biased model will immediately drift back to the model climate and the predicted trends will then be wrong. Therefore the problem of overcoming this shortcoming, and facing up to initializing climate models means not only obtaining sufficient reliable observations of all aspects of the climate system, but also overcoming model biases. So this is a major challenge.
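Trenberth's point about differencing can be sketched with entirely invented numbers: a constant model bias subtracts out of a projected change, but a state-dependent bias (the non-linear case he highlights) does not.

```python
# Sketch of Trenberth's anomaly argument, with made-up numbers: a
# constant bias cancels when you project *changes* rather than absolute
# states; a state-dependent bias (non-linearity) does not cancel.

true_now, true_future = 14.0, 16.0  # hypothetical "real" global-mean temps, deg C
bias = -1.5                         # constant model bias

model_now = true_now + bias
model_future = true_future + bias

# The absolute model state is wrong...
print(model_now)                 # 12.5, not 14.0
# ...but the projected change is right, because the bias subtracts out:
print(model_future - model_now)  # 2.0, matching the true change

def state_dependent_bias(t):
    # A bias that grows with the state: the linearity assumption fails.
    return -1.5 + 0.3 * (t - 14.0)

change = ((true_future + state_dependent_bias(true_future))
          - (true_now + state_dependent_bias(true_now)))
print(round(change, 2))          # 2.6, not 2.0: the bias no longer cancels
```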

Also the recent response of Doug Keenan to the upcoming BEST publication, and this by Dr Tisdale:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

I too appreciate greatly all your contributions here and realise that your sporadic replies are limited by the time frame of your current work and AR5 deadlines. I hope that you do realise that the voracious requests for knowledge are born from a need to understand why we, as a society, are being influenced by the political interpretation of your work.

Nov 15, 2011 at 10:27 AM | Unregistered CommenterLord Beaverbrook

"Instead, the paper begins by considering the implications of the 2°C threshold for global emission pathways, before proceeding to consider the implications of different emission pathways on stabilisation concentrations and associated temperatures.” Nov 15, 2011 at 10:17 AM | Kevin Anderson

Dear Professor Anderson,

Excuse my ignorance - I am a concerned (and now confused) citizen, and no scientist - but what do you mean by "emission pathways"? In its "Emissions Gap Report" the United Nations Environment Programme describes them thus: "An 'emission pathway' shows how emissions change into (sic) the future".


(Written by a non-native English speaker, presumably, since things change in the future, not into the future).

And what do you mean in this context by a "2C threshold"? Is not a threshold - as used for generations in Physics - simply "a limit below (or above) which no reaction occurs"?

How could a limit in temperature, above or below which no reaction occurs, have any implications for how emissions change in(to) the future ?

Nov 15, 2011 at 3:02 PM | Unregistered CommenterCassio

Hi Cassio,

Ta for the response. An emission pathway is intended to differentiate the approach from simply setting an end point. In this regard, the pathway is one of annual emissions, which thereby describes an emission budget over time (the area under the pathway). This is particularly important in climate change, as cumulative emissions correlate well with temperature over a given period (i.e. the 2000-2100 emissions correlate well with the ~2100 temperature rise), whereas end points (e.g. 80% reduction by 2050) have virtually nothing to do with anything - unless they are accompanied by a pathway.
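As a toy sketch (numbers entirely invented) of why the end point alone tells you so little: two pathways hitting the same "80% by 2050" target can carry very different cumulative budgets, and it is the cumulative total - the area under the pathway - that matters for temperature.

```python
# Toy illustration, with invented numbers, of pathways vs end points:
# two annual-emissions pathways sharing the same start and end points
# can have very different cumulative emissions (area under the curve).

def cumulative(pathway):
    """Cumulative emissions: the area under the annual-emissions pathway."""
    return sum(pathway)

n = 40             # years 2011-2050 inclusive
e0 = 50.0          # hypothetical current annual emissions, GtCO2/yr
e_end = e0 * 0.2   # the "80% reduction by 2050" end point

# Pathway A: emissions grow 2%/yr for a decade, then decline linearly
# to the target (late action).
peak = e0 * 1.02 ** 10
pathway_a = [e0 * 1.02 ** i for i in range(10)]
pathway_a += [peak + (e_end - peak) * i / (n - 11) for i in range(n - 10)]

# Pathway B: steady linear decline from today to the same target
# (early action).
pathway_b = [e0 + (e_end - e0) * i / (n - 1) for i in range(n)]

print(f"cumulative A (late action):  {cumulative(pathway_a):.0f} GtCO2")
print(f"cumulative B (early action): {cumulative(pathway_b):.0f} GtCO2")
```

Both pathways end at the same 10 GtCO2/yr, yet the late-action pathway burns through a substantially larger budget - which is why a pathway, not an end point, is the meaningful constraint.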

As for the term 'threshold': here I mean the 'threshold' between acceptable and dangerous climate change - as determined by social and political processes etc., informed by but not defined by science. I think this broadly fits with your physics definition, as in physics a change cannot occur in isolation - i.e. a change will have an impact, however small. The issue then is: small to whom? The social/political process has deemed that the risk of collective impacts above 2C is dangerous - i.e. not small. You may disagree, and therefore 2C may not be a threshold you'd choose. But that's the point to some degree - the threshold is a social-political construct which you can engage with, and use argument to try to propose an alternative value or even metric.

Kind regards


Nov 15, 2011 at 8:58 PM | Unregistered CommenterKevin Anderson

Hi Kevin

Great to see you here!

Yes, I completely agree that the risks of major negative impacts probably increase with higher levels of warming, especially if these are reached rapidly. The point of my post was that it's simply not possible to define the "dangerous" level on the basis of the physical climate science alone; there are too many other factors to be taken into account, and in the end it comes down to what level of risk is deemed acceptable. As you say, it's not really the job of scientists (or indeed any other group of specialists) to make that kind of call on behalf of society - it's down to individuals to inform themselves and then decide their own view, and down to our elected representatives to do the same (taking into account the different areas of specialist advice and the views of the electorate).

By the way, don't let Bob Ward hear you say that we are "merrily heading for 4C if not higher" - he bit my head off for forgetting to say "if we reach 2 degrees" instead of "when". :-)

Actually, come to think of it, would you like to explain in more detail to Bob and others your views on our chances of avoiding 2 degrees, taking into account your consideration of the socio-economic aspects? I can only really comment on the estimated probabilities of a particular emissions pathway keeping us below 2 degrees or not - you know much more than me about the relative likelihood of such pathways. As I said in my original post, if (!) we become committed to a 2 degree warming and start planning adaptation to that on the basis of the worst-case impacts scenarios that are sometimes used to motivate us to work harder to stay below 2 degrees, we may find ourselves implementing major adaptation measures too soon, or unnecessarily.



Nov 15, 2011 at 11:40 PM | Unregistered CommenterRichard Betts

Hi Lord Beaverbrook

The statements you attribute to Kevin Trenberth still hold true - we do have a lot more work to do with the models. However they are really the only tool we have for bringing together our various bits of understanding of how the climate system works (or is thought to work) and estimating how things may change in the future on the basis of an internally-consistent application of all these bits of understanding.

Doug Keenan makes very thoughtful contributions, but in the quote you give above he is suggesting that the past warming is entirely natural. You won't be surprised to hear that I disagree with him there!

Nov 15, 2011 at 11:46 PM | Unregistered CommenterRichard Betts

Nov 15, 2011 at 8:22 AM | Jonathan Jones

Richard, does the size and sign of the water vapor feedback emerge from a purely physics based model in which all processes are represented properly (up to well bounded truncation effects) and in which all relevant constants are obtained from separate direct measurements? Or does the model include non-physical parameterisations of any components and/or any tunable parameters?

It inevitably does have some dependency on parametrizations, which are grounded in physical understanding as much as possible but by necessity also involve approximations and tuneable parameters. That's why we set up "perturbed parameter" ensembles of multiple variants of the model, exploring the plausible ranges of key parameter settings - eg: as in UKCP09 and QUMP.

However I can assure you that we do not deliberately tune the parameters to get a strong positive feedback. They are tuned to get the best performance against present-day climatology and to get a realistic weather forecast (remember I said it's essentially the same model used for both applications). If we deliberately tuned the model to try to get a scary climate change result then we'd probably mess up the weather forecast (no sarcastic comments to the effect that this has already happened please!)
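The general logic of a perturbed-parameter ensemble can be caricatured in a few lines. The "model", parameter range and observational constraint below are entirely hypothetical, not the Met Office setup: vary an uncertain parameter across its plausible range, keep the variants that reproduce a present-day observable, and read the uncertainty off the spread of their projections.

```python
import random

random.seed(0)

def toy_model(feedback_param, forcing):
    # hypothetical linear climate response: warming = forcing / (1 - feedback)
    return forcing / (1.0 - feedback_param)

observed_present = 1.0   # assumed present-day warming for a forcing of 0.7
accepted = []
for _ in range(1000):
    p = random.uniform(0.0, 0.6)   # sample the plausible parameter range
    if abs(toy_model(p, 0.7) - observed_present) < 0.05:  # matches "climatology"
        accepted.append(p)

# Projections of the accepted variants under a stronger future forcing:
future = [toy_model(p, 2.0) for p in accepted]
print(min(future), max(future))  # the spread is the parametric uncertainty
```

Note that the tuning criterion is present-day performance only; the spread in `future` falls out of the accepted parameter range, which is the point Richard makes about not tuning for the climate change result.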

Nov 15, 2011 at 11:55 PM | Unregistered CommenterRichard Betts

Nov 15, 2011 at 10:14 AM | Roger Longstaff

Your second point is quite novel - I'd not really heard the CO2 ice core records seriously doubted. I think they are pretty sound, but do you have a reference for the concern? (Sorry if I missed it earlier, this thread has developed much faster than I could keep up with!)

Nov 16, 2011 at 12:00 AM | Unregistered CommenterRichard Betts

"and in the absence of a cabinet of wise Bishop Hill contributors putting themselves forward for election"

Some of us have tried Dr Anderson, we have tried. If you're familiar with the old 'closed shop' arrangements that were once so rife in industry, you'd recognise the selection procedures adopted by pretty much all the main parties. As a result Parliament tends to be a gathering of those with a sycophantic nature. There are honorable exceptions, there must be upwards of a dozen or so principled folk there, but most do as bid rather than think.

You sit next to a similar set of people to me!

Nov 16, 2011 at 12:25 AM | Unregistered CommenterCumbrian Lad

Nov 15, 2011 at 11:55 PM Richard Betts

I understand that you don't deliberately tune the model to get strong positive feedback. Nevertheless, as you say above, the feedback is an emergent property of your parameterisations, not of the underlying physics. That has consequences. The emergence of the feedback in the models cannot properly be used as independent evidence that the feedback exists: you have built it into your models, albeit implicitly rather than explicitly. And since your parameterisations are obtained at least partly by tuning, there is an obvious concern that you have located a false optimum in parameter space, so that the sensitivity of your models to external perturbations may have little or nothing to do with reality.

How might one test for such a possibility? The obvious first approach is to tune models using one or more observables, and then see whether those tuned models correctly predict some quite different observable. For example: does a model tuned to follow past temperature variations correctly predict the absolute temperature? Does it get rainfall right? Does it have the right amount of intrinsic variability? And so on and so on; all obvious stuff which I'm sure you understand better than I do.
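The tune-on-one-observable, test-on-another idea might be sketched like this. The toy model and all numbers are invented for illustration: tuning fixes only the parameter that the chosen observable constrains, and a held-out observable then discriminates between variants that the tuning could not.

```python
def toy_climate(sensitivity, noise_amp):
    # hypothetical model: one parameter sets the trend, another the variability
    trend = 0.2 * sensitivity      # degrees/decade per unit sensitivity
    variability = 0.1 * noise_amp  # internal variability amplitude
    return trend, variability

obs_trend, obs_variability = 0.16, 0.12  # assumed "observations"

# Tuning on the trend alone fixes sensitivity but says nothing about noise_amp:
tuned_sensitivity = obs_trend / 0.2
for noise_amp in (0.5, 1.2, 3.0):
    trend, var = toy_climate(tuned_sensitivity, noise_amp)
    assert abs(trend - obs_trend) < 1e-9       # every variant matches the trend...
    print(noise_amp, abs(var - obs_variability) < 0.02)
# prints: 0.5 False / 1.2 True / 3.0 False    # ...but only one passes the held-out test
```

A model that matches the tuned observable but fails the held-out ones is exactly the "false optimum" worry above.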

As far as I recall the models are not doing terribly well on these basic tests, but I haven't kept up to date, so perhaps you're doing better than I remember?

Nov 16, 2011 at 8:12 AM | Unregistered CommenterJonathan Jones


Thank you for your reply.

A good starting point for the ice core concerns is Jaworowski:

The compilation of historical atmospheric CO2 concentrations is from Beck (google: Beck, atmospheric carbon dioxide, 2007).

These are both peer reviewed papers, that have caused a great deal of debate.

My point is one of pure logic - if temperatures (midway between LIA and MWP) and atmospheric CO2 concentrations (typical of the 19th and early 20th century) are both more or less exactly where we would expect them to be, then there is no evidence at all for AGW, and there can be no concern about harmful AGW or CAGW in the future. All talk of the impacts of 2C, or 4C, or whatever is just conversation, and we have no need to cause suffering or cripple our economy with senseless mitigation policies.

Do you at least agree with my logic?

Regards, Roger

Nov 16, 2011 at 8:55 AM | Unregistered CommenterRoger Longstaff

This is fascinating. I wonder if Richard agrees with Prof Anderson about 4C+ in perhaps a handful of decades?

Nov 16, 2011 at 9:08 AM | Unregistered CommenterMarkJ

My previous post should have been:

(midway between LIA and MWP VALUES)

I was not proposing time travel!

Nov 16, 2011 at 9:30 AM | Unregistered CommenterRoger Longstaff

Roger L, I would advise caution about the older measurements of CO2 levels. The 'consensus' view of CO2 concentrations (stable-ish in historical periods at ca. 290ppm or so, rising fairly rapidly due to CO2 emissions by burning fossil fuels) is, in my lukewarmer opinion, something that is pretty well established. I would only doubt it if I were provided strong evidence that it is incorrect. I don't think the Beck paper provides anything like such evidence. There's a RealClimate post about this, which I hesitate to draw your attention to for obvious reasons, but behind the inevitable snark, I think it makes a few good points. As a chemist, it is easy for me to see that making good CO2 measurements of well-mixed background air in the 19th century would have been difficult. When you breathe out, your breath is probably something like 40,000 ppm CO2. Many other human activities increase the local concentration significantly - and so do things like plant respiration. So getting a measurement corresponding to background CO2 is not all that easy. Modern measurements in the sorts of environments studied by the people cited by Beck yield highly variable concentrations, and almost always much above the background level.

Nov 16, 2011 at 9:55 AM | Unregistered CommenterJeremy Harvey


I take your point about CO2 measurements. However, Beck reported on thousands of measurements, some by Nobel laureates, that covered a multitude of conditions, and I do not think that they can be dismissed.

As you note, this has been debated on RealClimate, WUWT and in several other places - usually with extreme prejudice, one way or the other. You describe yourself as a "lukewarmer", I am a sceptic (as you may have guessed). However, we are all scientists, and when the results of science are used for overtly political ends it is our duty to ensure that the science has not been misrepresented, possibly leading to dreadful consequences.

If we could at least agree about the logic of the debate, perhaps we could make progress.

Regards, Roger.

Nov 16, 2011 at 10:43 AM | Unregistered CommenterRoger Longstaff

Agreed. I think it is plausible that the Nobel laureates got it wrong - not that they did the measurements wrong, but that they measured air in a region where CO2 was not at the background level. Getting access to well-mixed air with background levels of CO2 turns out to be tricky. This is the point that RealClimate made (along with all their snark) and although I'm not giving them a blanket recommendation, they do occasionally get things right, and my feeling is that they are most likely right on this.

Nov 16, 2011 at 11:05 AM | Unregistered CommenterJeremy Harvey

Thanks Jeremy.

When I get time I will go back to Beck's paper - I seem to remember he wrote about sampling methodology and locations, but I cannot remember the details.

Nov 16, 2011 at 11:08 AM | Unregistered CommenterRoger Longstaff

Richard (B)

'Doug Keenan makes very thoughtful contributions, but in the quote you give above he is suggesting that the past warming is entirely natural. You won't be surprised to hear that I disagree with him there!'

The bottom quote in my last post was from Bob Tisdale; apologies if that was not apparent.
I do find the center ground of the debate is more loosely defined according to who one listens to.

Am I wrong in thinking that the recent interest in the AMO, spurred by Berkeley's pre-release, is moving the scientific debate towards a greater influence of natural cycles on multi-decadal average temperatures than has been modelled?

Nov 16, 2011 at 11:19 AM | Unregistered CommenterLord Beaverbrook


I took a very quick look at Beck's paper. He states that the data were compiled selectively, usually in rural locations, at 2m above ground, away from industrial contamination. Various sampling locations are discussed - including Alaska, Europe and India, and ocean samples.

As he notes consistent diurnal and seasonal variations, it would seem that sensitivities were adequate for the job. But I am not a chemist, and cannot comment further, other than to say that he seems to make a strong case, albeit with an axe to grind (just like almost everybody else in this debate!).

Now, I must get back to the day job!

Nov 16, 2011 at 11:30 AM | Unregistered CommenterRoger Longstaff

Hi Roger, the day job has kept me for a while from replying. I agree that this is a topic where a non-expert can get misled by the sheer venom and axe-grinding nature of the two 'sides'. I also think it is a perfectly reasonable topic to discuss. Anyway, I found this website, where someone whose judgement I respect talks through the issues, and concludes, as I do, that the results put forward by Beck are all likely to be artefacts either of the measuring technique or, more often, of the huge variations between CO2 levels near human activities or growing plants and those in the well-mixed background air. I note that the person, Ferdinand Engelbeen, has posted about this on WUWT.

Nov 16, 2011 at 3:25 PM | Unregistered CommenterJeremy Harvey

Thanks Jeremy, for drawing attention to something that is cancerous about this debate - internet hatchet jobs.

The link that you give states in section 2: "..nearly all are measured near ground 0 - 1 m high". Now read Beck's paper: "Compilation of data was selective. Nearly all of the air sample measurements that I used were originally obtained from rural areas...... at a height of approx. 2m above ground..."

Some unscrupulous people (I will not call them scientists) are willing to spread lies in order to discredit the work of others, and we must all be aware of this!

For anybody here who is becoming confused about this, I urge you to read Beck's paper, carefully and in its entirety, and make up your own mind. The same applies to Jaworowski's paper.

Nov 16, 2011 at 3:56 PM | Unregistered CommenterRoger Longstaff

Hi Lord Beaverbrook

Thanks for the clarification.

The graph in Bob Tisdale's post is not really allowing a fair comparison of model behaviour with observations, as he's showing the multi-model mean. This is bound to show less internal variability than a single model because much of the variability will be averaged out. Individual model simulations show variability which is much more similar to the observed variability, at least in a statistical sense (ie: frequency and magnitude over a long period). You can't expect the year-by-year or even decade-by-decade internal variability to match the observed because it's essentially chaotic - you'd only expect it to match for specific dates if it was externally forced variability driven by forcings that vary over short timescales, eg: variations in the sun, aerosols (either man-made or volcanic), etc.
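The variance-reducing effect of the multi-model mean is easy to demonstrate with synthetic series. Here white noise stands in for chaotic internal variability (a simplification; real internal variability is autocorrelated), yet the averaging effect is the same: each member has realistic variability, but the ensemble mean does not.

```python
import random

random.seed(1)

def simulate(years=100, trend=0.02, noise=0.15):
    # one "model run": a linear trend plus uncorrelated internal variability
    return [trend * t + random.gauss(0.0, noise) for t in range(years)]

def detrended_std(series, trend=0.02):
    """Standard deviation about the known trend: the internal variability."""
    resid = [x - trend * t for t, x in enumerate(series)]
    mean = sum(resid) / len(resid)
    return (sum((r - mean) ** 2 for r in resid) / len(resid)) ** 0.5

members = [simulate() for _ in range(20)]
multi_model_mean = [sum(vals) / len(vals) for vals in zip(*members)]

print(detrended_std(members[0]))        # spread of a single member
print(detrended_std(multi_model_mean))  # much smaller: averaging cancels uncorrelated noise
```

With 20 members the noise in the mean shrinks by roughly a factor of sqrt(20), which is why comparing observed wiggles against a multi-model mean is the unfair comparison described above.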

For the last decade or so, an apparent flattening of the general upward trend does happen in the models for occasional decades. So, it may just be internal variability. On the other hand there may indeed be some contribution from external forcing - I think there was quite a bit of discussion about this a few weeks ago, if I remember correctly.

So in summary, yes, there is a lot going on in terms of trying to understand and predict variability as well as the long-term trend. Since much of what society needs from climate science is actually to do with regional precipitation forecasts on timescales of a season to a few years ahead, which is extremely hard to say the least, we have our work cut out!

Nov 16, 2011 at 4:45 PM | Unregistered CommenterRichard Betts

Nov 16, 2011 at 8:12 AM | Jonathan Jones

Hi Jonathan

Thanks for your important questions.

Yes, we do look at all that, and I think the overall answer for the current generation of models is probably that they do OK but could do better!

The paper on the atmospheric component of the model we are using for AR5 is here and the paper for the "Earth System" version (ie: including the ocean and also biogeochemical feedbacks such as vegetation, the carbon cycle and atmospheric chemistry and aerosols) is here. You'll see that it does OK in many respects and less well in others.

The current focus for the next model (HadGEM3) is specifically on improving the simulation at regional scales - previously the priority has been on the global mean energy balance and general global-scale climate patterns, but now we are focussing in on the bits that really matter for regional near-term forecasts.

Nov 16, 2011 at 5:02 PM | Unregistered CommenterRichard Betts

Nov 16, 2011 at 9:08 AM | MarkJ

Kevin was actually referring to a paper of mine!

This is largely a review paper discussing previous work, including some presented in IPCC AR4, but also presenting some new work done by ourselves which looked at climate projections for the IPCC's high emissions scenario using a set of models that were more comprehensive than those used for that scenario in AR4.

Please note that this paper specifically addresses the question of when 4 degrees might be reached for the high emissions scenario and with high climate sensitivity and strong climate-carbon cycle feedbacks - the uncertainties are discussed in the paper.

Nov 16, 2011 at 5:09 PM | Unregistered CommenterRichard Betts

Roger / Jeremy
I read one of Jaworowski's papers the other day (did you know he died on Saturday, by the way?) when I was trying to get a handle on this whole business of CO2 readings from a century ago and the way they differ from the ice core data.
I need to take it carefully because unlike you my scientific knowledge is a bit .. er
I agree entirely that there is too much triumphalism on both sides of this whole debate with bloggers (and scientists) leaping on every new paper which "finally proves" this or "finally puts paid to" that. It inevitably entrenches positions and that is hardly helpful.
On the immediate question of measurements, I agree that is quite possible for even Nobel laureates to draw wrong conclusions from the data and to make simple mistakes in collecting readings. You just need to look at the siting of some of the temperature recording stations to see that that problem has not been solved yet!
But these were at least observations and as such deserve consideration. I have a problem with the ice cores because there seems to be a perfectly genuine scientific disagreement as to whether or not they are a reliable metric for CO2 concentration.
I realise that Jaworowski is highly partisan in this matter, but Wagner et al. (2002) postulate that plant stomata show that the idea of an almost flat level of CO2 over the past 11,500 years is not correct, and that far from being only in the range of 260-264 ppm, CO2 ranged from about 270-326 ppm.
My problem is two-fold.
First it is not surprising that the IPCC opted for the ice core data because it fits the AGW paradigm. (This doesn't mean it's wrong but in the light of a lot of other evidence there is at least the suspicion that the authors did not "go looking for trouble"!)
Second, in the light of known variations in temperature and CO2 concentrations in the past, I find it a little surprising that over any period of several thousand years CO2 should not vary by more than 2ppm about a mean.
Looking at this objectively I would be inclined to believe, in the absence of other evidence, that the stomata readings are more likely than the ice cores. Add in the observations from scientists who ought to have known what they were doing (albeit there is always the possibility they got it wrong) and I would argue that the burden of proof is very much on those who believe the ice cores.
It seems to me that there are two items of evidence here that cast sufficient doubt for us not to be betting the farm on CO2 being the villain that the (A)GW — and more especially the (CA)GW — exponents claim that it is.
Sorry if this rambles a bit but I wanted to make my thought processes on this quite clear. (Mainly to me!)

Nov 16, 2011 at 5:22 PM | Unregistered CommenterMike Jackson

Mike, Roger, Jeremy,

I'll have a look at that, it sounds quite interesting. While I must admit that my current position is that I'd be surprised if the conventional view of past CO2 turned out to be particularly wrong (I've found it convincing enough in the past), I'm interested enough to look at the counter-arguments!

Nov 16, 2011 at 5:29 PM | Unregistered CommenterRichard Betts

Roger, your advice that I (and others) should actually read the various papers in detail, and carefully consider their different arguments, is a good one. I will try to find the time to do this. I note that the 'consensus' view is supported by approximate CO2 budget arguments as well as by the measurement data. I think that anyone will readily accept that the ice core measurements might be questionable, on their own. But on top of the measurements suggesting fairly stable CO2 pre-1950, any large changes in the CO2 concentration pre-1950 would have had to have a cause. While the fluxes of CO2 into and out of the sea, and through respiration and photosynthesis in plants, are very large compared to human emissions, they are roughly in equilibrium, and it is difficult to come up with credible mechanisms that would cause large changes in CO2 concentration over a period of decades.
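Jeremy's budget argument can be put as a back-of-envelope mass balance. The flux numbers below are illustrative round figures, not measured data (the ~2.12 GtC per ppm conversion is standard): if the large natural fluxes roughly cancel, the atmospheric rise is set by human emissions minus the net natural uptake.

```python
GTC_PER_PPM = 2.12   # approx. GtC of atmospheric CO2 per ppm (standard conversion)

annual_human_emissions = 9.0   # GtC/yr, assumed round number
natural_sources = 210.0        # GtC/yr released by ocean and land (illustrative)
natural_sinks = 215.0          # GtC/yr absorbed by ocean and land (illustrative)

# Natural fluxes are huge but nearly cancel; the small imbalance is a net sink.
net_atmos_gain = annual_human_emissions + natural_sources - natural_sinks  # GtC/yr
ppm_per_year = net_atmos_gain / GTC_PER_PPM
airborne_fraction = net_atmos_gain / annual_human_emissions

print(ppm_per_year)       # roughly 1.9 ppm/yr with these figures
print(airborne_fraction)  # roughly 0.44: under half of emissions stay airborne
```

The point of the sketch is the asymmetry: the natural fluxes dwarf the human ones, yet because they nearly cancel, a modest human input is enough to drive a steady rise, and any alternative explanation must supply a comparably persistent imbalance.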

Nov 16, 2011 at 5:55 PM | Unregistered CommenterJeremy Harvey

What a great thread.
On the subject of CO2, as Mike says, there are several papers from the plant stomata field that suggest that the variation in CO2 levels in the past was greater than the narrow range (~20ppm) claimed by the IPCC. These papers were not cited by the IPCC. See here for more details and links. It seems likely to me that ice core results could be smoothed by diffusion.

Thanks to Richard for posting links to the long technical paper, which pre-empted a question I was going to ask. I used to share an office with one of the authors.

Thanks to Kevin for joining the discussion - but he should be aware that most readers here will not be convinced by citations of Mann!

Nov 16, 2011 at 6:15 PM | Unregistered CommenterPaul Matthews


Thank you for this - I thought that I was a minority of one!

I did not know that Jaworowski had died. Beck has also passed on. It seems that they passed their knowledge to us just in time.

I agree with every word that you say.

Cheers, Roger

Nov 16, 2011 at 6:34 PM | Unregistered CommenterRoger Longstaff

Thanks Jeremy,

I wonder if ocean outgassing, during the past 150 years or so of (natural) warming, is the reason for currently rising CO2 levels (due to the decreased solubility of CO2 in warmer water)?

Nov 16, 2011 at 6:40 PM | Unregistered CommenterRoger Longstaff


Thank you for the link - I was unaware of this.

Nov 16, 2011 at 6:53 PM | Unregistered CommenterRoger Longstaff

Minority of two at least, Roger!
I don't want to get over-involved in this, this evening (other things to do), but I am always worried that my views might be coloured by a longstanding cynicism about the motives and methods of environmental activists, with several of whom I have had fruitless and often ill-tempered (on both sides) discussions over several years.
The CO2 meme and all the baggage that goes with it is manna from heaven to this lot and, as we know from Donna Laframboise's research (and Hilary Ostrov's as well), one of the most radical of them has got its collective feet firmly under the table at the IPCC with Greenpeace not far behind.
As far as I personally am concerned I need a deal of proof that what the enviro-extremists say is not any old lie designed to further their aim of shepherding us back to the Dark Ages and I have little confidence that governments and the UN have not been suborned by the intensive lobbying that environmental NGOs — fully aware that their anti-civilisation message hasn't got a cat in hell's chance of ever being accepted by the people in open and honest debate — have brought to a fine art (backed all too often by our money!).

Nov 16, 2011 at 7:35 PM | Unregistered CommenterMike Jackson

Mike Jackson:

I agree entirely that there is too much trumphalism on both sides of this whole debate with bloggers (and scientists) leaping on every new paper which "finally proves" this or "finally puts paid to" that.

Mike, I agree fully; however, the phrase du jour seems to be "puts the final nail in the coffin of ..." ;)

Nov 16, 2011 at 7:49 PM | Unregistered CommenterHaroldW
