Sunday, Feb 10, 2013
by Bishop Hill
Helding forth on TCR
Climate: sensitivity
A few days ago Andy Revkin put a tweet out asking for comments on an old blog post by Isaac Held. Held had been looking at transient climate response (TCR) which he estimated at 1.8, a number that he observed was on the low side.
I pointed the tweet and the post out to Nic Lewis, who has been examining it in detail and seems to have uncovered a few issues which are appended in a comment to Held's post. These issues might lead to still lower estimates of TCR.
Reader Comments (40)
Like a feather in a vacuum in a gravitational field.
==========
Going down like a lead balloon?
Held's estimate is way too high because of ENSO and AMO.
He used Forster/Rahmstorf to account for ENSO, but that method simply isn't doing the job.
Forster/Rahmstorf did a linear regression against the ENSO index, but the ENSO index is in no way linearly related to the temperature effect of the ENSO process. The main difference arises in the aftermath of El Nino events, when warm water pools drift out of the index region and continue to warm for years.
It would therefore be much better to use start and end points where PDO and AMO were in very similar states, such as 1945-2005.
If you do that, you arrive at around 1.02 K transient (and about 1.65 K equilibrium).
http://judithcurry.com/2013/01/25/open-thread-weekend-7/#comment-289107
And possibly further reduction with new estimates for black carbon.
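For what it's worth, the endpoint arithmetic described above can be sketched in a few lines of Python. The input numbers below are illustrative assumptions of mine, not Manfred's actual data; the point is only the mechanics of scaling observed warming by the forcing ratio.

```python
# Endpoint-based TCR sketch: pick start and end years where AMO/PDO states
# match, then scale the observed warming by the ratio of the forcing for a
# CO2 doubling to the net forcing change between those endpoints.
# All inputs below are assumed, illustrative values.
F_2X = 3.7        # W/m^2 per CO2 doubling (commonly used value)
delta_T = 0.45    # K, assumed warming between the matched endpoints
delta_F = 1.6     # W/m^2, assumed net forcing change over the same span

tcr = delta_T * F_2X / delta_F
print(f"transient response: {tcr:.2f} K per doubling")
```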
I find it absolutely amazing that the temperature effect of the ENSO process has not yet been studied in climate science.
The background is that in the aftermath of El Ninos, huge warm water pools drift polewards out of the ENSO index region. So they are no longer represented by the ENSO index, but continue to warm. This is well documented in sea surface temperature satellite images:
http://www.youtube.com/watch?v=ELDkYJWHNiU
or by other evidence, such as the occurrence of tropical fish off Alaska after El Ninos.
http://www.elnino.noaa.gov/enso4.html
It should also be noted that there are no comparable leftover cold water pools after La Nina events. Tropical cold water simply sinks back down once the upwelling stops, while warm water may stay on top.
A consequence is the step-function rises in temperature after El Ninos, clearly visible in temperature records. So while Forster/Rahmstorf computed almost no correlation between temperature and the ENSO index, the correlation between temperature and the ENSO process is in reality huge.
http://bobtisdale.files.wordpress.com/2012/05/4-rest-o-world.png
Looks to me like an excellent example of respectful debate over high-calibre work. But do we have sufficient faith in the reliability of any of the surface temperature records - that the station selections, homogenization adjustments, UHI effects and so on introduce no bias? I must confess I don't yet, and perhaps only SSTs and satellite data should be used.
Manfred
According to the UAH satellite figures the 1997/98 El Nino boosted peak global temperature from an anomaly of 0.1C to 0.66C, an increase of 0.56C in six months.
http://www.drroyspencer.com/latest-global-temperatures/
If ENSO is capable of storing and releasing heat on this scale we do indeed need more understanding of heat flow rates between atmosphere and ocean, and of the effect of CO2 on the process. Since the process is clearly non-linear and highly variable, it is likely to make TCR estimation considerably more uncertain.
A consequence is the step-function rises in temperature after El Ninos, clearly visible in temperature records. So while Forster/Rahmstorf computed almost no correlation between temperature and the ENSO index, the correlation between temperature and the ENSO process is in reality huge.
http://bobtisdale.files.wordpress.com/2012/05/4-rest-o-world.png
Feb 10, 2013 at 10:18 PM | Manfred
You may well be right, though the relationship is probably stochastic and non-linear.
The UAH temperature figures from Dr. Spencer show an increase in anomaly from +0.1C to +0.66C in less than a year associated with the 1997-1998 El Nino. Short term swings of that magnitude make estimating TCR a very uncertain process.
http://www.drroyspencer.com/latest-global-temperatures/
Scuse my ignorance, but what is "Transient Climate Response"?
Re: Martin A
From IPCC WG1
TerryS - Thank you. I need to think about what it really means - it's not completely obvious to me at first reading of the definition.
Held has responded already:
Nic's best number seems to be about 1.45 K/doubling, a sensitivity that would apply for the next 100 years or so. Manfred is correct that if the AMO is real (we don't know this), it would imply a somewhat smaller number. If these numbers are correct, then 1.45 * log2(600/380) = 1.45 * 0.66 = 0.95 °C of warming is expected, perhaps 0.75 °C if the AMO turns out to be a real phenomenon.
[I predict we'll know the answer to this in the next 10 years.]
Equilibrium climate sensitivity is reached after a few thousand years, and is roughly double TCR. I think the number Manfred used, about 60% larger than TCR, is close to the lower bound in full AOGCMs.
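The arithmetic in the quoted passage is easy to check; a short snippet using only the figures quoted above (nothing else assumed):

```python
import math

TCR = 1.45  # K per doubling, Nic Lewis's figure as quoted above
doublings = math.log2(600 / 380)   # CO2 rising from 380 to 600 ppm
warming = TCR * doublings
print(f"{doublings:.2f} doublings -> {warming:.2f} K of warming")
```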
Choosing the end points to argue for the higher number, as Held did in his response to Nic, just made me cringe.
While not wishing to be seen going on about something which may be beyond my competence, I have asked a couple of times for a reasoned justification that there is such a thing as CS applicable over centennial timescales. Or that one may reasonably add forcings arithmetically (redundant? Probably) and get to an explanation for any temp/time graph. What if it is a drunkard's walk? What if there are so many cycles and state changes and flipflops going on that you can't do that? It means that whenever you take the line and subtract the forcings you think you know, and the transients for oceanic shifts or volcanoes or whatever, what you are left with is NOT CO2 forcing, but the sum of all unknowns.
Now, where may I find the justification for this approach? It seems to me that it is not safe to accept it without question.
rhoda
The major ongoing forcings have all made themselves known. These are CO2, particulates, aerosols and variations in solar input.
Less certain are the ocean/ atmosphere interactions such as AMO and ENSO.
ENSO boosted global temperatures by 0.5C in less than six months to produce the 1998 peak so beloved of the cherry-pickers, but can be identified and filtered out of the long term trend.
AMO, on the limited data available, produces temperature swings of about +/- 0.25C on a 60-65 year cycle. Again, it can be filtered out.
The energy flows through the system are predictable, and match observation within the uncertainties to be expected of real world measurements. The details of the process are not completely understood, but the degrees of uncertainty are, and they are smaller than the sceptics would have you believe.
The gross uncertainties you are concerned about are a straw man.
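The "filter it out" step EM describes for the AMO can be sketched as subtracting an assumed sinusoid from the anomaly series. The amplitude, period and peak year below are illustrative assumptions, not fitted values:

```python
import math

AMP = 0.25     # K, assumed AMO amplitude (the +/- 0.25C swing mentioned above)
PERIOD = 62.0  # years, assumed cycle length (within the 60-65 year range)
PEAK = 1995.0  # assumed year of an AMO maximum

def amo_component(year):
    # idealised sinusoidal AMO contribution to the global anomaly
    return AMP * math.cos(2 * math.pi * (year - PEAK) / PERIOD)

def remove_amo(year, anomaly):
    # subtract the assumed cycle, leaving the residual trend plus noise
    return anomaly - amo_component(year)

print(remove_amo(1995.0, 0.45))  # at the assumed peak, the full 0.25 K is removed
```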
Entropic, just a couple of weeks back you were saying, right here, that heat was going into the deep ocean and you didn't know whether it was going to be down there two hundred or two thousand years. If you give that possibility a look, why can't you consider that there are so many cycles, some known, some unknown, that nobody can sort them out? I proposed no straw man, I asked for justification of a concept, that of climate sensitivity over a centennial timescale. If you need magic ocean heat transfers to make it work, it is already suspect. And now you are certain you have a handle on ALL the unknowns. You should be considering the possibility that you are wrong, but no, the certainty oozes out of your comments.
rhoda
Try this as a review on the subject.
http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
EM,
Which textbook are you citing from? Even AR4 WG1 doesn't know whether 6 of its 9 alleged forcings are actually non-zero, and for one of them it doesn't even know whether the 'forcing' is negative or positive. One of the others (CH4) is also shown in the draft AR5 to have been massively exaggerated in all previous models. An honest assessment of this shambles is that the 'ongoing' forcings are very poorly understood as far as the AR4 models go, and if they deign to add in observational data, they understand nothing at all.
Now, do the forcings which are 'known' all include water vapour feedback? Or none of them? Or indeed just the CO2 forcing? Whichever alternative you choose, how do you justify it? How, following that, do you justify adding them up to get a figure for all forcings?
Entropic Man, you might wish to consider adding ozone to your list. We're not restricted to 400 characters here like the BBC do.
Rhoda,
The 'known' 'forcings', of which the IPCC admit 6 of the 9 are ill-understood, are just added together. See p. 203 of AR4 WG1 Ch2. The ones they think they understand include CH4, but as I mentioned above, they can't model CH4. It has practically flatlined for the entire history of the IPCC despite repeated modelling efforts to project its atmospheric concentration upwards. If it's not in the atmosphere, it's not forcing anything at all.
rhoda, I think you will find support from Henk Tennekes for some of your concerns about deep ocean modelling/knowledge, and climate modelling as well, in very readable essays here: http://scienceandpublicpolicy.org/images/stories/papers/commentaries/tennekes_essays_climate_models.pdf
'Henk Tennekes is [was] Director of Research Emeritus at the Royal Netherlands Meteorological Institute, Emeritus Professor of Meteorology at the Free University (VU) in Amsterdam, and Emeritus Professor of Aerospace Engineering at Pennsylvania State University. He is the coauthor of A First Course in Turbulence (MIT Press, 1972).' [http://mitpress.mit.edu/authors/henk-tennekes]

rhoda, michael hart, Say No To Fearmongers:
To discuss all the factors, all the forcings and all the uncertainties would take half the published literature on the subject for the last 40 years, and I'm not going that far! This link might start you on the uncertainty question.
I am having problems posting here at the moment. Several posts have gone up late or not at all. I tried to give you this link after lunch, but it did not "take".
Second attempt...
http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
Redundant? I don't think so. You sometimes hear people talking about adding things logarithmically, i.e. applying a nonlinear transformation and then adding the results, so adding the word "arithmetically" makes it clear that you really are simply adding things.
A linear system (as any fule kno) obeys the superposition principle: if its response to an input x1 is y1 and its response to an input x2 is y2, then its response to an input (x1 + x2) has to be (y1 + y2), no matter what x1 and x2 may be.
Is the climate system linear in terms of its internal variables and its response to inputs? It seems unlikely in the extreme.
- Think how the properties of water change as the temperature varies from -5°C to +5°C.
- The response to changes in CO2 is often quoted as being logarithmic, i.e. far from linear.
- Saturation effects (e.g. water vapour) are highly nonlinear.
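Martin A's superposition test is mechanical enough to demonstrate in code. The logarithmic formula below is the commonly cited simplified expression for CO2 forcing (5.35 ln(C/C0) W/m^2); the linear example and input values are illustrative:

```python
import math

def linear_response(x):
    # a linear system: doubles its input, so superposition must hold
    return 2.0 * x

def co2_forcing(conc_ppm):
    # simplified logarithmic CO2 forcing in W/m^2, relative to 280 ppm
    return 5.35 * math.log(conc_ppm / 280.0)

# superposition holds for the linear system...
x1, x2 = 3.0, 4.0
assert linear_response(x1 + x2) == linear_response(x1) + linear_response(x2)

# ...but fails for the logarithmic one
c1, c2 = 400.0, 560.0
combined = co2_forcing(c1 + c2)
summed = co2_forcing(c1) + co2_forcing(c2)
print(f"f(c1+c2) = {combined:.2f}, f(c1)+f(c2) = {summed:.2f}")
```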
How about also adding:
- Cosmic ray intensity.
- Solar wind intensity.
- Earth's magnetic field (strength and orientation).
Testing. I'm having trouble posting on this site.
EM, your posts are turning up in sequence but late so you have to go back for them.
Uncertainty is all I am trying to express. I don't buy adding all the effects; I'd expect the only way you could do it would be... a computer model. And that would, could, only work if you knew the processes with a degree of certainty we haven't got. So one might speculate on this, that or the other behaviour, but one would be obliged to admit that this was speculative. Of course if one were to provide checkable predictions at any level (not global temps in x years, but observable short-timescale results) one might claim to have something. Nobody can do that.
rhoda
Thank you, I was getting rather frustrated at the delays! I shall try to cultivate patience.
Prediction is difficult, especially about the future.
First you take existing data about the past.
Second you process it through graphs, formulae, physics, computer models, human intuition or whatever works until you have identified the pattern in the data.
Thirdly you extend that pattern into the future and try to analyse the consequences.
For simple systems like pendulums, the process is easy and reliable and feedback is immediate. As the systems get more complex and your ability to get complete information decreases, prediction becomes less certain and the feedback is delayed.
Climate change is a nightmare in this respect. The system is complex and, in the information sense, noisy. The underlying physics is not too difficult, but confirming that the Earth is behaving in accordance with it is harder.
For example, for the latter years of the 20th century all the temperature records show an average temperature gain of about 0.1C/decade superimposed on +/- 0.1C of noise. It takes 20 years of data to show that warming is taking place to 95% statistical significance, the point at which scientists regard it as beyond reasonable doubt.
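EM's "20 years" figure can be sanity-checked from the standard error of an OLS trend. The sketch below assumes white noise, which understates the real (autocorrelated) case, so it comes out a little under 20 years:

```python
import math

TREND = 0.01   # K/yr, i.e. the 0.1C/decade mentioned above
SIGMA = 0.10   # K, assumed year-to-year noise standard deviation (white noise)

def trend_se(n_years):
    # OLS standard error of a slope fitted to n annual points t = 0..n-1
    sxx = n_years * (n_years**2 - 1) / 12.0   # sum of squared time deviations
    return SIGMA / math.sqrt(sxx)

n = 2
while TREND / trend_se(n) < 1.96:   # two-sided 95% detection threshold
    n += 1
print(f"roughly {n} years of annual data needed")
```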
This is consistent with the behaviour of CO2, but with uncertainties. Using radiation physics you can predict how changes in CO2 should influence temperatures, incoming and outgoing radiation, and infra-red spectra. Observation agrees with the physics: the temperature rise, the radiation and the spectra all agree with theory. Unfortunately it is hard to establish statistically significant links showing this in the real world because of the noise. It's like trying to follow a conversation at a noisy party.
The direct effect of CO2 is mixed up with secondary effects like increased humidity, with solar variations, ENSO and AMO cycles, Milankovitch cycles and, worst of all, weather. The full effect of the CO2 already released is still working its way through the system, and the effect of future releases can only be estimated.
This is why it is hard to supply your wish for short-term results.
What makes it all contentious is that:
1) If there are real consequences to what we are doing, mitigation would need to be done now to reduce the consequences later. The longer this is delayed, the worse the long-term outcome.
2) If there is no long-term effect, then such effort wastes a lot of resources.
That is the bet. Those on sceptic sites like this have chosen to back horse 2)
Those who perceive a real problem are backing horse 1).
Consider one final example. In 2008 the authorities in New York and New Jersey received reports warning of the increasing risk to the area from storm surges. A $5 billion plan to protect the city was drawn up, but never implemented (too expensive). That bet cost the US $60 billion+ after Sandy.
Rhoda: your reasoning seems excellent to me. I suspect EM is indulging in very complicated semaphore signals, otherwise known as 'handwaving'.
"direct effect of CO2 is mixed up with secondary effects like increased humidity"
Let's just take this one. I would expect any temperature change from any known or unknown effect to have a parallel effect on humidity/water vapour. When warmists talk about it they always put it in terms of CO2 having that result, but never the others. That's why I asked above: is there a water vapour component in all the additive effects? Or in none? Or in CO2 only? Which approach is valid? Why?
Of course if we had the water bit right, just that alone, we would have something. But the role of water in shifting heat up down and sideways is not well understood. To the extent that if you find somebody who claims they understand it you may be sure that they are deceiving themselves and/or you.
Illustration: How many times have you seen it written that the sun shines, the back radiation comes and heats up the ground causing water vapour? Is that what really happens? Well, when you put wet washing on the line, it gets colder as the water leaves it. Any housewife knows that. Why do climate folks put it in terms of getting warmer? The water is sucking the heat out, using it for the state change and taking it elsewhere. Nothing is getting warmer.
Two points. Firstly, the 95% is of warming taking place, not of it being caused by humans, or even of how much has been caused by humans. Secondly, physicists (and others) might disagree about scientists regarding 95% as "beyond reasonable doubt". They require 5 standard deviations (about 99.99994%) for their proof. You should say "the point at which climate scientists regard it..."
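The sigma-to-confidence conversion TerryS refers to is a one-liner with the normal error function; two-sided 5 sigma works out at about 99.99994%:

```python
import math

def two_sided_confidence(sigma):
    # fraction of a normal distribution lying within +/- sigma standard deviations
    return math.erf(sigma / math.sqrt(2))

for s in (1.96, 5.0):
    print(f"{s} sigma -> {two_sided_confidence(s) * 100:.5f}%")
```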
The closest I can find is a plan for a harbour barrier for New York. It is unlikely this would have been completed in the short period between 2008 and 2012, and it would only have offered protection to a small part of the affected area. Staten Island, Long Island, New Jersey, Rhode Island and Delaware (and more) would have had no protection from it.
The $60 billion+ figure is a little high; Roger Pielke has it as $50 billion +/-.
..Consider one final example. In 2008 the authorities in New York and New Jersey received reports warning of the increasing risk to the area of storm surges. A $5billion plan to protect the city was drawn up, but never implemented( too expensive). That bet cost the US $60billion+ after Sandy...
I understand that the authorities did claim to have addressed this risk - by spending money cutting back on CO2 emissions. This is claimed to lower the risk of 'extreme weather', as I am sure you will have heard.
In retrospect, this would have been money pointlessly spent which lowered the amount of funds available to really address the problem. This goes to the heart of the argument about spending money to mitigate a presumed future catastrophe - even if you assume that the AGW link is indeed true, there are much, much better ways of addressing the issue than cutting CO2 emissions...
Alexander
"handwaving"
I'm trying to discuss the science, something rather lacking among comment writers here.
rhoda
All sources of warming contribute to evaporation, not just CO2. Evaporation does cool the surface, but the latent heat is returned when the water vapour condenses at higher altitude. "Elsewhere" is still within the system.
Terry S
I agree that the significance is for warming, rather than for a particular cause. That's as good as you'll get on this timescale and with temperature changes of this size.
That 5 sigma significance level is used in particle accelerators, with very large sample sizes under controlled conditions. That is unrealistic for areas of study such as climate, which have smaller sample sizes and a much noisier environment. Insistence on unrealistic levels of proof is, however, a good way of disguising denial behind a veneer of science.
Obama asked Congress for $60billion, and many regarded it as too low. It won't be confirmed until all the bills are in.
Dodgy Geezer
"there are much, much better ways of addressing the issue than cutting CO2 emissions..."
Suggestions welcome.
I am not insisting on an unrealistic level of proof, I am pointing out that your assertion that scientists accept 95% as "beyond reasonable doubt" is incorrect.
Scientists in Physics, Chemistry, Biology, Medicine etc, would not make this claim, it seems to be limited to climate scientists.
If you want an example of how much reasonable doubt there is with 95% then, assuming there is a 50/50 chance a coin is double headed, flip it 4 times. If you get heads each time you can be >93% certain it is double headed, flip it 5 times and you can be 97% certain. So your 95% certainty is equivalent to no more than 4 or 5 flips of a coin.
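TerryS's coin arithmetic follows from Bayes' rule and checks out exactly: from a 50/50 prior, four heads give 16/17 (about 94.1%) and five give 32/33 (about 97.0%). A sketch:

```python
from fractions import Fraction

def posterior_double_headed(num_heads, prior=Fraction(1, 2)):
    # Bayes' rule: a double-headed coin always shows heads, while a fair
    # coin shows num_heads heads in a row with probability (1/2)^num_heads
    p_all_heads_fair = Fraction(1, 2) ** num_heads
    return prior / (prior + (1 - prior) * p_all_heads_fair)

for n in (4, 5):
    p = posterior_double_headed(n)
    print(f"{n} heads -> {p} = {float(p):.3f}")
```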
$150 million to Alaskan fisheries
$8 million to Homeland Security/Justice Department
$41 million to eight military bases
$13 billion to preventing future damage
$4 Kennedy Space Center
£207 million VA Manhattan Medical Center
etc.
The $60 billion has been padded a little bit.
Entropic Man,
Re $60 billion, you are comparing apples and oranges and pears.
Obama's $60 billion legislation is about all the "wish list" items which politicians came up with, much of it beyond direct storm damage (improved infrastructure etc.). One can debate the merits of those various proposals for improvements in the area, but several tens of billions in that number would not be strictly necessary to repair direct storm damage.
Separate from proposed federal expenditures (much of which is not for direct repairs/damages from Sandy), there are combined private/public costs of fixing what Sandy actually broke. It is those bills which still have fairly wide estimates, although Pielke, Jr's figure of $50 billion may be as good as any for now.
Estimates I've seen suggest that around $20-25 billion will be paid out by private insurance companies. That is not part of the federal "$60 billion" in spending proposed by the Obama admin.
The $5 billion (or whatever the final amount, which would likely have been 2, 3, or 4 times as much as originally quoted) would not have protected the entire coastline from southern New Jersey to Brooklyn/Long Island. One can debate merits of various tidal and infrastructure schemes, but if $5 billion had been allocated in 2008 or 2009,
1) it is highly unlikely the work would have been completed in time for Sandy, and
2) it is highly unlikely the project would have prevented more than some fraction of the damages all along the New Jersey and Long Island coastlines.
P.S. Or, to put the $5 billion proposed for tidal barriers another way: as I understand it, that project was only about protecting NYC (worthy as that goal is), not about preventing all of the damage a storm like Sandy can do along the New Jersey and Long Island coastlines.
Thus it is far from accurate to say that the $5 billion could have prevented all of the damages of Sandy, even if by some unprecedented miracle the relevant agencies could have completed such a project by 2012.
TerryS
I give you three links in which 95% confidence is regarded as acceptable.
http://explorable.com/significance-test
http://www.kflapublichealth.ca/files/static/Understanding_Facts_-_Figures.pdf
http://en.wikipedia.org/wiki/Confidence_interval
This was my source for the Sandy bill.
http://www.washingtontimes.com/news/2012/dec/7/white-house-says-federal-bill-sandy-60-billion/
Perhaps you can link me to a more reliable source.
If you are really set on 5 sigma data on climate change over a short time scale, you will need to describe how to get it. Nobody else on either side regards it as remotely possible!
You might also like to toss a few coins and see how long it takes you to get 5 in a row the same.
You lot remind me of the GOP. Memo: Never buy any coastal property on the US East Coast. :-)
Nice, Entropic Man: you are corrected on your wildly inaccurate statements, so you resort to a content-free sneer. You have not advanced the discussion. Hmmmm... It is never worth trying to have an intelligent discussion with a troll...
Skiphil.
I am rightly rebuked for my sarcasm, kindly restrain yours. The corrections to my figures were in the detail and did not change my substantive point.
I still see a city that knew of sea level rise trends over decades and was specifically warned 5 years ago of the storm surge risk.
They did nothing about it.
They suffered storm surge flooding two years ago.
They did nothing about it.
They were storm surge flooded again this year, at a cost in damage which greatly exceeded the cost of enhanced flood defences.
Re: EM
> I give you three links in which 95% confidence is regarded as acceptable.
Acceptable is not "beyond reasonable doubt" and that is what you are claiming.
Here is an extract from an article in Chemistry and Industry Magazine.
Even though they are 95% confident the measurement breaches the limit, it is not "beyond reasonable doubt".
> This was my source for the Sandy bill.
And this source delves into the actual contents of the bill instead of just looking at the headline http://www.nypost.com/p/news/national/little_help_here_1kW6aQ8fElj4CKwbheEV0N
> If you are really set on 5 sigma data
Obviously you are not reading my replies so I will repeat what I said in the previous comment:
I am not insisting on an unrealistic level of proof, I am pointing out that your assertion that scientists accept 95% as "beyond reasonable doubt" is incorrect.
> You might also like to toss a few coins and see how long it takes you to get 5 in a row the same.
The odds of getting 5 in a row are 1 in 16 or 6.25% and I am 95% confident that I can achieve this within 85 flips of a coin.
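The "95% within 85 flips" claim can be checked exactly with a small Markov chain over the current run length (a run of 5 of either face absorbs); it does come out at roughly 95%:

```python
def prob_run_of_5_within(n_flips):
    # probs[k-1] = P(current run length is k and no run of 5 yet), k = 1..4
    probs = [1.0, 0.0, 0.0, 0.0]   # after the first flip the run length is 1
    absorbed = 0.0
    for _ in range(n_flips - 1):
        extend = [0.5 * p for p in probs]   # next flip matches: run grows
        absorbed += extend[3]               # a run of 4 just became a run of 5
        # a mismatching flip resets any surviving run to length 1
        probs = [0.5 * sum(probs), extend[0], extend[1], extend[2]]
    return absorbed

p = prob_run_of_5_within(85)
print(f"P(run of 5 within 85 flips) = {p:.3f}")
```

A quick sanity check on the recursion: in exactly 5 flips the only way to get a run of 5 is all-heads or all-tails, probability 2/32 = 1/16, which the function reproduces.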
Your NY Post link made interesting reading. I wonder if it was coincidence that all the spokesmen cited belonged to, or shared a viewpoint with, the Republican Party. A good example of partisan reporting.
Your link to the article in Chemistry and Industry Magazine used aflatoxin as an example of the use of 95% confidence limits. I think you have misinterpreted it.
Sub-sample C showed toxin levels for which the upper 95% confidence boundary was below the compliance limit. Sub-sample B had toxin levels for which the 95% confidence limits straddled the boundary. Both samples were deemed compliant because the FSA could not demonstrate to 95% significance that they were non-compliant.
Analysis of sub-sample A produced results in which the lower 95% confidence boundary was still above the compliance limit. The FSA were at least 95% confident that the sample was non-compliant and refused it entry.
If this had gone to a criminal court, the court would have accepted the FSA's 95% confidence as "beyond reasonable doubt".
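The three-way decision described above (confidence interval wholly above, straddling, or wholly below the limit) is easy to mimic in a few lines. Everything here, the limit and the measurements alike, is hypothetical, standing in for the FSA's actual aflatoxin figures:

```python
import math
import statistics

LIMIT = 10.0   # hypothetical compliance limit, arbitrary units

def ci95(samples):
    # normal-approximation 95% confidence interval for the sample mean
    mean = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - half, mean + half

# hypothetical measurement sets mirroring sub-samples A, B and C above
sub_a = [11.2, 11.5, 10.9, 11.8, 11.4, 11.1]   # CI wholly above the limit
sub_b = [9.6, 10.3, 9.9, 10.5, 9.7, 10.2]      # CI straddles the limit
sub_c = [8.1, 8.4, 7.9, 8.3, 8.0, 8.2]         # CI wholly below the limit

for name, data in (("A", sub_a), ("B", sub_b), ("C", sub_c)):
    lo, hi = ci95(data)
    verdict = "non-compliant" if lo > LIMIT else "compliant"
    print(f"sub-sample {name}: CI = ({lo:.2f}, {hi:.2f}) -> {verdict}")
```

Only sub-sample A is rejected: the regulator must be at least 95% confident the limit is breached before refusing entry, exactly the asymmetry EM describes.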