Extreme weather
An interesting article about the wet British summer comes to us from Michael Hanlon in the Mail.
Perhaps the most dramatic and visible impact of climate change to date has been the reduction in Arctic sea ice cover and, particularly, thickness, seen in the last 20 years or so. The ice grows back in winter of course but, the evidence suggests, each year (on average) perhaps a little thinner than before. This makes the North Atlantic a little warmer than otherwise, reducing the temperature gradient between Polar and Tropical air and hence taking some of the wind (literally) out of the jetstream’s sails.
So, the overall effect of global warming will be to make our summers cooler and damper. The trouble is, this contradicts what most of the computer models have been saying to date, namely that in Britain we can expect hotter, drier summers and milder, damper winters. I spoke to Kate Willett, a climate scientist at the Met Office who agrees that the picture is confusing. “Yes, this contradicts the model of wetter winters and drier summers,” she says. It is also true, she adds, that since 2007 Arctic sea ice levels have been exceptionally low but it is not true that the last five summers have been exceptionally bad – those of 2010 and 2011 were average or a little above average in terms of sunshine and temperature.
This is almost beyond parody, so I'm not even going to try.
Interestingly, Hanlon says that there is a consensus that weather will become more extreme in a warming world. Is this right?
Reader Comments (109)
Jul 18, 2012 at 5:37 PM | Roger Longstaff
Apart from:
1. the observed warming was forecast in advance (in the 1970s) based on the expected CO2 rise
2. the outgoing longwave radiation is seen by satellite to be reduced in exactly the regions of the spectrum that correspond to where CO2 and other GHGs absorb radiation
3. the stratosphere has cooled while the troposphere has warmed, as would be expected from increased absorption of outgoing radiation in the atmosphere
The outgoing LWIR raises questions you have not answered. How much heat it represents was one; whether one of the CO2 bands was saturated was another. Then there was the lack of measurement of H2O-related bands. But most of all, why didn't Harries et al uncover the smoking gun? They had all the data to do so.
Hi Rhoda
I thought I'd answered the question about whether one of the CO2 bands was saturated on another thread - it was just that the band with a small change was in the part of the spectrum where absorption is already small. The other band, where absorption is larger, still showed a decrease in outgoing LW.
But for more experimental evidence, have you seen this paper, "Radiative forcing - measured at Earth’s surface - corroborate the increasing greenhouse effect"? It seems to be what you were asking about a while ago, ie ground-based measurements of changes in downward LW.
Cheers
Richard
1. "the observed warming was forecast in advance (in the 1970s) based on the expected CO2 rise" -
The "observed warming" seems to have been overstated by a factor of 2 (see a previous thread), and in any event we expect warming from LIA to MWP temperatures (the last ice fair on the Thames was less than 200 years ago). You are surely not saying that models in the 1970s were more accurate than ones 20 years later - that failed to predict this century's flatlining temperatures?
2. "the outgoing longwave radiation is seen by satellite to be reduced in exactly the regions of the spectrum that correspond to where CO2 and other GHGs absorb radiation" - I do not understand your point. There is radiative balance over the diurnal cycle at TOA - this is just conservation of energy.
3. "the stratosphere has cooled while the troposphere has warmed, as would be expected from increased absorption of outgoing radiation in the atmosphere" - I thought that satellites had FAILED to detect the rise in tropospheric temperatures predicted by GCMs!
@Richard "I thought that both satellite and radiosonde datasets showed a long-term increase in tropospheric water vapour, with the radiosonde datasets going back to the 1970s."
Doesn't matter what you thought, this is the actuality:
Climate models assume that as the world warms, the relative humidity stays constant. Relative humidity is a measure of how much water vapor is in the air compared to how much water the air could contain at a given temperature and pressure. And that carrying capacity increases as the air gets warmer, meaning that warming with a constant relative humidity results in an increase in total water vapor in the atmosphere.
It turns out, though, that as we have warmed over the last 50 to 60 years, relative humidity at most levels of the atmosphere has actually fallen off. So, the models are actually wrong here. They're overestimating the increase in water vapor from rising temperatures, and thus overestimating feedbacks and total warming.
Source: Data via KNMI climate explorer, compiled by Ken Gregory (http://www.friendsofscience.org/assets/documents/The_Saturated_Greenhouse_Effect.htm). Further discussion here http://www.climateaudit.org/?p=5416 including Paltridge, 2009
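The constant-RH arithmetic behind this is easy to check. A minimal Python sketch using the standard Magnus approximation for saturation vapour pressure (textbook constants, illustrative temperatures only) shows why fixed relative humidity implies rising absolute water vapour as temperature climbs:

```python
# Illustrative only: Magnus-type approximation for saturation vapour
# pressure. At constant relative humidity, actual vapour pressure
# rises roughly 6-7% per degree of warming.
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """Magnus approximation to saturation vapour pressure (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

rh = 0.70  # hold relative humidity fixed at 70%
for t in (14.0, 15.0, 16.0):
    e = rh * saturation_vapour_pressure_hpa(t)
    print(f"T = {t:4.1f} C -> vapour pressure = {e:5.2f} hPa")
```

The point at issue in the thread is not this arithmetic, but whether observed relative humidity has in fact stayed constant.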
Re: Jul 18, 2012 at 6:41 PM | Richard Betts
"the observed warming was forecast in advance (in the 1970s) based on the expected CO2 rise"
well perhaps you would care to link to this particular model, Richard, so we can see just exactly what was forecast and at what expected rate of CO2 rise, oh and any error bars that may be contained therein.
If the summers in the North Atlantic get warmer, the polar ice will melt faster, but then the North Atlantic will get colder, and the summers will cool, and then the polar ice won't melt as fast, but then if the ice melts slower, then everything isn't warming, but if it's not warming, the summers will get hotter, and then ....
"Listen to me, carefully, Norman. Everything I say is a lie." - Spock
Jul 18, 2012 at 8:03 PM | Roger Longstaff
Hi Roger,
"the observed warming was forecast in advance (in the 1970s) based on the expected CO2 rise" ... Richard may be referring to the following paper. It is well worth the read anyway.
Can you do a better job of critiquing it than I could? See my comment here at Dec 17, 2011 at 11:02 AM.
Jul 18, 2012 at 8:34 PM | Don Keiller
The Paltridge paper uses a reanalysis, which is part-model part-observations, basically assimilating the observational data into the model and kind of using the model as a more physically-based way of interpolating the data. However, you have to be careful as this can still be vulnerable to inconsistencies and step-changes in the observations, such as changes in instruments, which can give rise to artificial trends.
A careful study of RH observations from radiosondes, accounting for changes in instruments and sampling biases does not show any evidence of a change in tropospheric RH. This radiosonde record also agrees with surface records.
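For readers unfamiliar with the jargon: a reanalysis blends a model forecast with observations at every step. A toy scalar sketch of that blending (invented numbers, not any operational scheme) also shows why an undetected instrument change can leak into trends:

```python
# Toy data-assimilation step: blend a model forecast with an
# observation, weighted by their error variances. A step-change bias
# in the observations pulls every analysis, and hence the trend.
def assimilate(forecast, obs, var_forecast, var_obs):
    """Optimal-interpolation style update of one state variable."""
    gain = var_forecast / (var_forecast + var_obs)  # weight on the obs
    analysis = forecast + gain * (obs - forecast)
    var_analysis = (1.0 - gain) * var_forecast
    return analysis, var_analysis

# Equal trust in model and observation: the analysis lands halfway,
# so a 1.2 K observation bias shows up as 0.6 K in the analysis.
print(assimilate(forecast=285.0, obs=286.2, var_forecast=0.5, var_obs=0.5))
```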
Jul 18, 2012 at 3:42 PM | Philip Richens
Hi Philip
I mean variability at any timescale. I am not claiming that the climate does not vary naturally on long timescales, but the observational evidence (and, dare I say it, models and theory too) are all consistent with anthropogenic forcing playing an important role in what we've seen in recent decades.
Cheers
Richard
Jul 18, 2012 at 9:25 PM | Marion
See Philip's post at Jul 18, 2012 at 11:15 PM
A few days ago was St Swithun's Day, Swithun being a ninth century Saxon. According to legend, if it rained on St Swithun's Day, it would carry on raining for the next forty days (after which, since you'd practically be in September, you could, presumably, expect it to continue tipping down until the following May at the earliest). You don't have to believe the legend (although it's probably no more bonkers than what we get out of the Met Office and its super-computers) to understand that really wet summers are not "extreme weather" in even southern England, never mind the wetter parts of the UK.
Here are a few examples from the past that might repeat themselves in the coming months:
August 1912, 1952, 1956 - all very wet
September 1968 - widespread flooding in SE England and E Anglia
October 1987 - one of the wettest months of the century
October 1967 - wettest since 1903
November 1929 - wettest for 60 years
(extracted from The Weather of Britain by Robin Stirling 1997)
Britain has wet periods and dry periods; they can happen at any time of the year and they are rarely unprecedented. There is no need for hand-wringing and magical attribution. Get yourself a copy of the above book and mutter (for any weather phenomenon) 'lucky it wasn't as bad as ...' (fill in the blank from the book).
A certain organisation recently headquartered in the SW of England needs to get some perspective and hope that the rainfall in the SW this coming November is not as bad as it was in November 1929 when it was 11.5 times the amount that fell in November 1879.
Natural variability in precipitation (and the rest) takes some beating!
Jul 18, 2012 at 11:28 PM | Richard Betts
Hi Richard,
Thanks for the clarification, I appreciate your reply very much. I certainly accept that anthropogenic forcing has played a role in recent decades, but I remain unsure how important that role has been. Here is another interesting paper, which I think you may agree with regarding natural variability?
In section #3 of this paper, the left hand side of figure 1 shows unforced GCM generated global variability over a 1000 year period. The right hand side shows the instrumental record over a ~100 year period. Figure 2 shows a comparison between the spectra of the left and right hand sides of figure 1. Since only the instrumental record is used, comparison of the variability is limited to periods less than 100 years.
At the shorter time scales (less than 20 years), the agreement between model and observations looks quite good to me, in both figures.
At the longer time scales, comparison between the left hand side of figure 1 and reconstructions (ice-cores, recent multi-proxy; not used in the paper) is not so good (to the eye at least), even if I would agree that a comparison with earlier multi-proxy reconstructions would look fine (especially if you compare them with figure 1 as a whole!).
In figure 2, there is the hint of a divergence between model and observation beyond the 20 year period, although again I would agree that the briefness of the observational record makes this inconclusive. However, I imagine that if the model generated spectrum in figure 2 had been carried on into still lower frequencies, it would remain flat, because according to a number of later papers this is what happens with other models.
It does seem to be the case that there is low frequency variability in observations that is not seen in unforced model runs. If that variability is not caused by solar or volcanic forcing (as I've pointed out before, there is also evidence that this is so), then there are causes of long term variability that are not encoded into the model.
Is it not possible that these unidentified causes of the low frequency variability seen in earlier times are also responsible for the 20th C variations? At least I've not seen any good arguments that this is not the case.
Thanks,
Philip.
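Philip's comparison can be reproduced in outline with a few lines of Python. This sketch uses synthetic stand-in series (the HadCM3 output and instrumental record are not reproduced here); the method, comparing spectral power by timescale, is the same:

```python
# Estimate power spectra of a 'model' and an 'observed' temperature
# series and compare variability at long periods. Both series are
# synthetic placeholders for illustration.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
model = rng.normal(0.0, 0.1, 1000)               # stand-in unforced run
obs = rng.normal(0.0, 0.1, 100).cumsum() * 0.05  # stand-in with low-freq drift

for name, series in (("model", model), ("obs", obs)):
    freq, power = periodogram(series, fs=1.0)    # fs=1.0 -> cycles per year
    long_term = power[(freq > 0) & (freq < 1.0 / 20.0)].mean()
    print(f"{name}: mean spectral power at periods > 20 yr = {long_term:.4f}")
```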
Especially given the figures provided by Stott, eg "the second hottest November on record in the UK in 2011 was 60 times more likely than in the 1960s because of climate change".
-------------------------------------------------
I am no statistician, but even I can see what a load of the proverbial this statement is. It is like saying that because chestnut horses have won (in percentage terms) 60 times more races in the last 10 years than they used to, that proves that chestnut horses have 60 times more chance of winning races in the future.
Pathetic.
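For the record, figures like Stott's are normally constructed as a risk ratio: the probability of exceeding a fixed threshold in one (simulated) climate divided by the probability in another, not a bet on future form. A sketch with invented numbers:

```python
# How an "N times more likely" figure is typically built: compare the
# frequency of exceeding a fixed threshold under two climate
# distributions. All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
threshold = 2.0                            # anomaly (degC) defining the event
baseline = rng.normal(0.0, 1.0, 100_000)   # stand-in for 1960s climate
current = rng.normal(1.0, 1.0, 100_000)    # stand-in for a shifted climate

p0 = (baseline > threshold).mean()
p1 = (current > threshold).mean()
print(f"p0 = {p0:.4f}, p1 = {p1:.4f}, risk ratio = {p1 / p0:.1f}")
```

Whether the simulated distributions deserve that trust is, of course, exactly the point in dispute in this thread.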
Richard Betts’ claim [6.41 pm] that lower TOA IR in the CO2 absorption band implies greater CO2-AGW is a logical fallacy. The following is an attempt to derive the real science. Implicit is that ‘back radiation’ can do no thermodynamic work.
Because of its simple band structure, above ~200 ppmV in a long optical path at ambient temperature CO2 is in the ‘self-absorption’ mode. This is because the ~95% of CO2 molecules that are not activated can absorb thermal IR. IR from the Earth’s surface in that band competes for these inactivated molecules, so lower atmosphere DOWN emissivity increases as you get nearer the Earth’s surface and the temperature gradient changes to compensate.
The additional DOWN flux competes for the emission sites on the surface, reducing its emissivity in that wavelength band. So, the real origin of the GHE is lower IR flux in GHG bands, more in the ‘atmospheric window’. The reduction of TOA flux in a particular band is because there was less to start with.
The sequitur is that once self-absorption occurs in a GHG, its contribution to the GHE self limits. This inversion of present thinking will I suspect be controversial. There can be no GHG-AGW, never, nix, zilch. Taking account of the ~400% increase in IR assumed in the models, it’s time we stopped this charade. GIGO proves nothing.
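The saturation behaviour this argument leans on can at least be pictured with the ordinary Beer-Lambert law. A minimal sketch (arbitrary absorption coefficient; it ignores re-emission and line wings, which is where the mainstream account departs from this one):

```python
# Beer-Lambert sketch: the absorbed fraction of a beam flattens off as
# absorber concentration rises, so each extra increment matters less.
# The coefficient is arbitrary and chosen only to show the shape.
import math

k_path = 0.02  # absorption coefficient x path length, per ppmv (arbitrary)
for c_ppmv in (100, 200, 400, 800):
    transmitted = math.exp(-k_path * c_ppmv)
    print(f"{c_ppmv:4d} ppmv: fraction absorbed = {1 - transmitted:.4f}")
```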
Richard Betts, I read the downwelling paper you linked. Kudos to them for looking, in 2003, and claiming to be the first group to actually try to measure it. Unfortunately it seems to this Oxfordshire housewife that their data is rubbish. They found prima facie that the DWIR net was increasing at three times the GCM-assumed rate and it seems to me that if that were the case we would all know it. They then tried to correct the initial figure by hedging their bets. The water vapour was from elsewhere, of course. The temp increase they measured was way above the global average (all the sites in the same 200km square) and thus skewed the result. This study was done in Switzerland using Swiss stations. Not really significant. Good try, but doomed. They ought to have done it in the Atacama.
Jul 18, 2012 at 11:15 PM | Philip Richens
Thanks for the references Philip, I will read them with interest when I get time.
rhoda; you must realise 'downwelling LW' is a myth. There is no such energy flux. The pyrgeometers claimed to measure it really measure air temperature convolved with emissivity. Under a cloud they also measure enhanced IR in the atmospheric window, which comes from indirectly thermalised IR from the atmosphere (hence the very different IR spectrum compared with clear sky); the signal is much higher, which is why the meteorologists were confused.
In reality, the bulk of the pyrgeometer signal is artificial, calculated S-B emission of a black body at ambient temperature in a vacuum. The manufacturers sell lots of them but are clearly embarrassed by their misuse because they point out in their literature that to measure real energy flux, you need two, back to back: http://www.kippzonen.com/?product/16132/CGR+3.aspx
'Two CGR 3s can form a net pyrgeometer'
So, the IPCC 'Energy Budget' with its 333 W/m^2 'back radiation' [2009 data] is science fiction as are all the 'climate models' based on it. No professional scientist should accept this abuse of science.
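To put numbers on the Stefan-Boltzmann arithmetic being invoked, a quick sketch (standard constant; how pyrgeometer signals should be interpreted is the contested part, not this calculation):

```python
# Stefan-Boltzmann flux and its inverse: a black body at ~277 K emits
# the 333 W/m^2 that appears as 'back radiation' in the IPCC diagram,
# while a 288 K surface emits ~390 W/m^2.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(t_kelvin):
    return SIGMA * t_kelvin ** 4

def sb_temperature(flux):
    return (flux / SIGMA) ** 0.25

print(f"Black body flux at 288 K: {sb_flux(288.0):.0f} W/m^2")
print(f"Temperature emitting 333 W/m^2: {sb_temperature(333.0):.0f} K")
```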
Philip,
I have skimmed the discussion thread and I have no disagreement with your take on the physics. I also have sympathy for Spartacus, and others, who complain about "back radiation" nonsense. There is only radiation, convection and conduction, in a dynamic system constantly seeking equilibrium. In my view we should start from TOA calculations and fill in the detail in logical, verifiable steps. Feedback mechanisms are surely the key.
I have more concern about numerical models. You state "It does seem to be the case that there is low frequency variability in observations that is not seen in unforced model runs". In reading the literature I have found references to the use of low pass filters (Fourier, Shapiro) in the time step integrations, and even the use of pause/reset/restart techniques when data fields violate boundary limits or violate physical laws (conservation of mass, energy, momentum). All of this in a model that is seeking the effect of an a priori signal (CO2 concentration), over decadal timescales, in a multivariate system with non-determinate processes. It is my opinion that with such a simulation you will only see what you expected to see - and this could very easily be wrong.
Back to the day job.
Roger; their most basic error was to assume IR from the earth's surface is the S-B black body level in a vacuum, impossible because convection, conduction and radiation are coupled, as any professional should have realised. This error was made some time after the root climate modelling paper, Manabe and Wetherald 1967, which assumed SW DOWN = LW UP, a gross exaggeration but not wrong physics.
Spartacus, while not dismissing your arguments (because I can't understand them), I am debating Richard on his own terms. If he is right, there should be a discernible CO2 signal in the DWIR and a WV signal too. Similarly in the outgoing IR the extra absorb(p)tion in the CO2 bands should equate to an amount of heat still in the earth system. This is not a TOA equilibrium thing because the system only seeks equilibrium, it never gets there. If the IR is retained it must be another variable signal with CO2 ppm. Capable of being measured, along with a WV signature which would show rather shorter-term variation. Of course if Spartacus is right, there will be no variation with CO2 over time.
Much might be learned also by a simple measure of seasonal and diurnal variations at a desert site, as well as the variation of the various signals with inclination.
All of this depends on my understanding being up to the job. Not guaranteed, but if I am just being silly I am sure somebody will point it out. I am not a sock puppet, I have no qualifications in any form of science past A level physics.
Rhoda; if you did A-levels 15 or more years ago, you will be near degree level for many present graduates!
My post was to explain how you may get a reduction in TOA band emission for CO2 with no AGW. This assumes the GHG is in the self-absorption mode.
The rise in temperature due to the GHE will be a fixed level for a water planet. The ice age bistability is explicable by a step change in cloud albedo as biofeedback is reduced. [Got to correct Sagan's physics for this].
Spartacus, I took my A-levels in 1966. I can't remember an awful lot about them now. But there was no radiation beyond Newtonian optics and Michelson-Morley. Of course we thought weather was weather in those days. There used to be a map of climatic regions. They were fixed. I wonder how much actual climate change has been mapped in the Koeppen-Geiger system in the last 50 years.
@Richard you said
"A careful study of RH observations from radiosondes, accounting for changes in instruments and sampling biases does not show any evidence of a change in tropospheric RH." and kindly provided a link.
This is the Conclusion:
"Following the adjustment process the daytime temperature and specific humidity trends are increased and more consistent with the nighttime trends and are in the range of 0.1–0.4 K decade−1. Relative humidity trends are reduced from a negative value in the unadjusted data to near zero."
Well here we go again. What this paper is saying is the real data does not show what the models require, so we will "adjust" it until it does.
S.O.P. for climate "science".
Re: Jul 18, 2012 at 4:40 PM | susanc
"... regarding Arctic sea ice thickness mentioned in this article, I'd like to point out a few things that may be of interest from my own research. Most of this comes from a paper of mine on polar bears from 2008 available here
http://scienceandpublicpolicy.org/originals/some_things_we_know_and_don_t_know_about_polar_bears.html
modified slightly and updated with more recent data below, with refs.
-A frequently cited reference regarding ice thickness (Lindsay and Zhang 2005) concludes that Arctic sea ice is experiencing a continual decline that cannot easily be reversed, but this is not a data-based paper - it is a model based on what is now considered old, substandard data from coastal submarine surveys.
- Many statements made regarding sea ice thickness in the Arctic do not acknowledge the incompleteness of this data: one frequently cited study (Laxon et al. 2003) surveyed (via satellite) only ½ of permanent sea ice and did not include ANY of the region in the central Arctic Basin (above 81° N).
- Sea ice thickness in the huge Arctic Basin region is based on very few actual measurements (taken from submarines in the early period) that have been extrapolated to represent the entire region and used in various climate models to predict future conditions (Rothrock et al. 2003; Yu and Rothrock 1996); the newest data, from electromagnetic sounding surveys done from aircraft in 2009, highlight the inadequacy of the long-term record (Haas et al. 2010);
Regardless of the fact that they are all we have got, these data are insufficient for assessing long-term trends. We do not have an accurate long-term measure of ice thickness.
Note: the sea ice volume measure offered by PIOMAS (extent plus thickness) here http://psc.apl.washington.edu/wordpress/research/projects/arctic-sea-ice-volume-anomaly/
is another model; it is not a measure of actual total thickness. ..."
---------------------------------------------------------------------------------------------------------
Hi Susanc,
Many thanks for your post above. I feel we need to frequently remind ourselves of this, ie
"the incompleteness of this data"
where conclusions are drawn
"based on very few actual measurements ...that have been extrapolated to represent the entire region and used in various climate models to predict future conditions"
which not only applies to sea ice thickness but also so many of the other areas as included in climate models.
Models that, it would appear to me, are based on subjective (ie 'characteristic of or belonging to reality as perceived') rather than objective ('Uninfluenced by personal prejudices') assumptions.
After all our climate system is both incredibly complex and chaotic -
http://wattsupwiththat.com/2012/01/21/the-ridiculousness-continues-climate-complexity-compiled/
so that one can sympathise with our climate modellers as to the difficulties of deciding which parameters / criteria to include and of course on the quality of the 'data' on which they are working.
http://wattsupwiththat.com/2009/11/25/climategate-hide-the-decline-codified/
Certainly there can be very different perceptions of reality as shown by my comment above (Jul 18, 2012 at 3:10 PM) on the winter of 2010 where Julia Slingo, Chief Scientist of the Met Office, was telling us -
"This is not a global event; it is very much confined to the UK and Western Europe and if you look over at Greenland, for example, you see that it's exceptionally warm there."
when the UK (and much of the rest of the world) was experiencing exceptionally cold conditions!!
Jul 19, 2012 at 1:22 PM | Don Keiller
Well, here we go again with the old "the data is fiddled" meme!
If there are known biases in parts of the record but not others, then it is important to account for this in order to avoid "seeing" trends which are merely artifacts of changes in how the observations were made.
For example, in this case there was poor sampling of dry conditions in the early part of the record, giving it a wet bias. Also, modern sondes "burn off" water after passing through clouds, whereas early sondes didn't.
When these are accounted for, the supposed "drying" trend in RH turns out to be just an artifact of these wet biases in the early part of the record.
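The adjustment logic Richard describes can be sketched in its very simplest form: given a documented break (say, a known instrument change), align the earlier segment with the later one. A toy version (not the method of any particular paper, which would use reference series and changepoint tests):

```python
# Toy homogenisation: shift the pre-break segment of a series so that
# the segment means match across a known break point. Real methods are
# more careful (reference stations, significance tests, metadata).
import numpy as np

def adjust_known_break(series, break_index):
    """Shift the pre-break segment so the two segment means agree."""
    series = np.asarray(series, dtype=float)
    offset = series[break_index:].mean() - series[:break_index].mean()
    adjusted = series.copy()
    adjusted[:break_index] += offset
    return adjusted

raw = [52.0, 51.0, 53.0, 52.0, 48.0, 47.0, 49.0, 48.0]  # % RH, invented
print(adjust_known_break(raw, break_index=4))  # apparent 'drying' removed
```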
Well, the data is only ever adjusted in a direction favouring AGW. That ought to raise some kind of a concern, not that the fiddles themselves are necessarily wrong, but that nobody goes looking at data that happens to suit their position. What it means is we can't trust anything. And as a climate scientist you cannot really go with the 'here we go again' meme if you will not condemn all the fiddles and cheats that various players have been caught in. We the sceptics know a lot of you cheat and don't get cast out for it. Why would we not assume every adjustment was a cheat?
Now, how much is that change in CO2-related outgoing radiation in heat (or power) units? And why did Harries stop where he did, if it was not to avoid unwelcome conclusions?
Jul 19, 2012 at 7:07 AM | Philip Richens
Hi Philip
Thanks for your thoughtful post, as always. Good to see you reading my colleagues' papers!
You make an important point. While the internal variability of HadCM3 mostly agrees with observations out to periods of 30 years, and it does still have strong internal variability at longer timescales, this is indeed weaker than the obs. Given that the observed 20th Century changes are so much larger than HadCM3's internal variability, I think the difference between model and obs at low-frequencies would have to be much larger to make a strong case that the late 20th Century warming is entirely natural - but nevertheless the proportion of warming which is natural may indeed be larger than indicated purely from the model.
The IPCC AR4 statement on attribution of the late 20th Century was weaker than might have been expected if the models were taken to be perfect. The AR4 statement, as you know, was that it is "very likely" that most of the warming since the mid 20th Century is due to anthropogenic GHG increases. But if HadCM3 and other climate models were assumed to be perfect then it would be stated that all of the recent warming was anthropogenic - HadCM3 (for example) does not produce anywhere near the observed warming when anthropogenic forcings are excluded - see this figure from Stott, Tett et al, 2000.
So, yes, you are right that HadCM3 seems to underestimate long-term variability to some extent, but this is already accounted for in our interpretation of the models. (However, I suspect it is not always accounted for when people show the classic AR4 attribution figure in presentations, so when you see that figure presented you should make your point that the natural contribution to recent warming may be larger than the zero contribution indicated on that figure.)
Are you reviewing any of the IPCC AR5 drafts? I think it would be good if you could - you clearly have an eye for detail and a knowledge of the literature.
Cheers
Richard
" the difference between model and obs at low-frequencies would have to be much larger to make a strong case that the late 20th Century warming is entirely natural "
In fact, it is or it is not, and the model, the sum of someone's ideas and prejudices and the hostage to his programming skill, well the model has nothing to say in the matter. Observations. Measurements, they may tell you. The model never can, it can at the very best give an indication of where to look.
Quick questions regarding adjustments: can the public around the world actually hope to reconstruct / reproduce, for instance, the adjustments that were, and are(!), being made (for example by modellers) here and there?
If I remember correctly, there are some other threads on Bishop Hill which raise virtually the same questions, which appear, to me, not to be remotely resolved:
Some issues arose, for instance, with questions of whether the public can access all the data / metadata / algorithms / formulas / code / "modules" (however the details might be labelled) which were used for adjustments / homogenizations / modulation / calibration... (Apart from the technical question of whether the public can handle the data.)
Re: Jul 18, 2012 at 11:21 PM | Richard Betts
"The Paltridge paper uses a reanalysis, which is part-model part-observations, basically assimilating the observational data into the model and kind of using the model as a more physically-based way of interpolating the data. However, you have to be careful as this can still be vulnerable to inconsistencies and step-changes in the observations, such as changes in instruments, which can give rise to artificial trends.
A careful study of RH observations from radiosondes, accounting for changes in instruments and sampling biases does not show any evidence of a change in tropospheric RH. This radiosonde record also agrees with surface records."
Hi Richard,
"part-model part-observations, basically assimilating the observational data into the model and kind of using the model as a more physically-based way of interpolating the data."
but isn't this also true of other climate models, and this
"can still be vulnerable to inconsistencies and step-changes in the observations, such as changes in instruments, which can give rise to artificial trends"
again also true of other models.
And didn't
"A careful study of RH observations from radiosondes, accounting for changes in instruments and sampling biases "
http://journals.ametsoc.org/doi/full/10.1175/2009JCLI2879.1
include Peter W. Thorne as one of its authors, as did the paper on the homogenisation of radiosonde data that I have linked to elsewhere ie
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3816.1
And aren't the assumptions made subjective rather than objective, so what level of confidence can we have in the ability of the models to reflect reality?
Re:"This radiosonde record also agrees with surface records" but then the surface records too have been the subject of homogenisation and again biased by subjective assumptions, particularly true in relation to UHI (Urban Heat Island) effect where there seems to be much disagreement on the level of 'homogenisation' required.
http://climateaudit.org/2012/07/17/station-homogenization-as-a-statistical-procedure/
For example in the Met Office booklet "Warming, Climate Change - the Facts" it was stated "The urban heat island effect already warms central London by more than 10C on some nights. Increased urbanisation and release of waste heat would increase this still further - on top of the effects of global warming" as does the more recent updated version dated 2011 11.
(as you have severed the links to the publicly available versions perhaps you can provide the links here to the archived versions so that any interested readers can see for themselves).
Wouldn't a model need to be incredibly complex to nullify the different levels of UHI encountered in the various cities throughout the world? One would have thought it essential to retain highly detailed metadata to enable this with any degree of accuracy, so I'm totally at a loss as to why Menne should conclude that
"Metadata records are helpful, but we must be prepared to have less than comprehensive station histories".
http://wattsupwiththat.files.wordpress.com/2010/09/7_1wed_exeter-menne.pdf
What confidence can we have that UHI has been adequately accounted for in the homogenisation process - and properly reflected in the IPCC reports?
Jul 19, 2012 at 2:44 PM | rhoda
Actually, no it isn't. The "bucket correction" to the SST records removes a cold bias in the older period, making warming less than would be expected on the basis of the original, unadjusted data.
Jul 19, 2012 at 3:32 PM | rhoda
It's hard to see how measurements of a single entity such as the Earth without an equivalent control entity can give experimental "proof". We cannot do a controlled experiment in which we keep one Earth unchanged and then subject the other Earth to known perturbations. Measurements can show us whether the changes are in accordance with theory, but we can't physically remove a particular factor (like GHGs) and re-do the measurements to see whether we still get the same changes without it. This is why models are needed - in the absence of a real controlled experiment, we do the next best thing and do a controlled experiment on a mathematical representation of the Earth.
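The 'controlled experiment' Richard describes can be caricatured with a toy zero-dimensional energy-balance model: run it once with a ramping forcing and once without, then compare. Parameter values below are purely illustrative:

```python
# Toy forced-vs-control experiment on a zero-dimensional energy
# balance model: C dT/dt = F(t) - lambda*T + noise. Parameters are
# illustrative, not tuned to any real climate quantity.
import numpy as np

def run_ebm(forcing, heat_capacity=8.0, feedback=1.2, seed=0):
    rng = np.random.default_rng(seed)
    temps = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        noise = rng.normal(0.0, 0.3)
        temps[t] = temps[t - 1] + (forcing[t] - feedback * temps[t - 1] + noise) / heat_capacity
    return temps

years = 100
forced = np.linspace(0.0, 3.0, years)  # ramping GHG-like forcing, W/m^2
control = np.zeros(years)              # identical model, no forcing

print("forced run, final anomaly:", round(run_ebm(forced)[-1], 2), "K")
print("control run, final anomaly:", round(run_ebm(control)[-1], 2), "K")
```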
Hi marion
Yes, the problems I mentioned with the Paltridge paper would apply to other models if they were used for re-analyses, but most are not used for that. The point is, it's better to use observations than re-analyses for looking at trends in humidity.
I think the way to have confidence in the surface datasets is to see whether similar results are obtained by other people re-doing them using the original data, as in the BEST study for example.
I'm afraid I don't have links to an archived online version of the "warming" brochure - it has been taken completely offline so people don't keep finding old versions and claim that we are saying something that we're no longer saying. Old versions can be requested from the National Meteorological Library.
Cheers
Richard
rhoda: Here's a thought experiment. Take a standard GCM and instead of putting the correct IR energy into the lower atmosphere, 23 W/m^2, increase it times 5 [add 94.5 W/m^2 = [333-238.5]] and offset it by increased cloud albedo.
The average World temperature at any one time would remain the same as if the correct inputs were used, but because the kinetics of evaporation of water are a strong function of temperature, the water vapour content in the modified model would rise much more rapidly. The temperature rise would not be so strong because the energy is stored as latent heat. A non-meteorologist might claim this would lead to a higher risk of extreme weather even though meteorologists know that warmer average temperature will lead to fewer extreme events.
Here are the real data: http://www.leif.org/EOS/2012GL052094-pip.pdf the key chart is Figure 4: http://pielkeclimatesci.files.wordpress.com/2012/07/vonderhaar-et-l-20121.jpg
This shows that for the past 14 years, total precipitable water has been falling despite CO2 increasing. This graph shows that temperature has been near constant over the same period: http://stevengoddard.wordpress.com/2012/07/18/now-warming-since-kyoto-was-rejected-fifteen-years-ago/
Did I say this was a thought experiment? Sorry, my mistake......
@Richard, you may well be right about the validity of adjustments in the case of relative humidity.
However, climate "science" is riddled with so many ad hoc, undocumented, subjective and just plain wrong adjustments that I regard anything I read in climate "science" which has adjustments converting a trend in real data that does not agree with model assumptions into one that does as fatally flawed.
That is my bottom line; it is up to you guys, pushing AGW, to prove it beyond a scintilla of doubt.
At the end of the day what the IPCC and grasping Governments are proposing is a dismantling of our way of life on the basis of a "maybe" from you guys.
No Richard, your model may be useful for a lot of things, but it cannot act as a control. It's a guess. It may be an informed guess, but it is a guess. Subject to all the much-mentioned shortcomings of a model. You cannot even verify it. Nobody has validated your code to any known standard. You do not have an exact idea of the physical processes which are going on. You may have them within a range of uncertainty, but that is not enough. You don't have the carbon cycle down, nobody does. You don't even have the water cycle to any degree of certainty. You cannot use that to prove anything. With actual data, you'd be able to measure insolation, albedo, LWIR up and down in whatever bands and demonstrate what degree of warming was down to CO2 and its variation over time. Then you would have something on the road to proof. What you are doing now does not lead to proof, only to dispute and suspicion. Any reasonably informed person could run up a dozen experiments to tease out this data. Some of them would not be too expensive. In fact the data may already exist and need to be made public and analysed. Why are we looking at a bunch of inappropriate Swiss sites or going over comparative data from two different satellites and then not working out the answer?
Oh, and can the Met Office actually entertain the idea that AGW might be falsified and get involved in an effort to do so? Or is it too committed to a position? It really will be the last place to know when the ice age comes.
Apologies, due to working on two computers I am coming up as Rhoda Klapp sometimes and rhoda at other times. It's still me, either way.
Richard Betts (Jul 19, 2012 at 3:43 PM) said:
This is why models are needed - in the absence of a real controlled experiment, we do the next best thing and do a controlled experiment on a mathematical representation of the Earth.
This would be true if the mathematical representation of the Earth is sufficiently detailed (i.e. it includes all key physical processes and interactions). However, it is openly stated by the IPCC that some of the key details (e.g. the effect of clouds) are poorly modeled or even ignored, so there must be considerable uncertainty surrounding the conclusions drawn from them.
Professor Betts has previously referenced the IPCC's assessment of radiative forcing from a list of sources...
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-2.html
...that can then be used to judge the range of uncertainty with respect to the climate's 'sensitivity' to a doubling of CO2. However, this list of sources does not include those listed with a 'Very Low' Level of Scientific Understanding (LOSU) in this table...
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-1.html
...which suggests that, if all sources were considered, the true uncertainty may well be very much larger. Moreover, the absence of any key factor (e.g. cosmic rays) could also give rise to a significant bias in all models and so undermine any attempt at averaging out their uncertainties over multiple runs.
Maybe I've missed some subtle step in the IPCC process that was able to compensate for such uncertainties? If so, I'd be most grateful for Prof. Betts or any one else to correct my misunderstanding.
Dave, the subtle step is that they already know the answer.
Jul 19, 2012 at 4:58 PM | Don Keiller
Don
I agree that adjustments, however well justified, should be open, transparent and reproducible, and the data available for people to check. I believe this is the case with the humidity dataset we have been discussing.
You mention "dismantling our way of life" which presumably refers to mitigation policies. However, there is more than one potential response to AGW - the other is adaptation. Possibly we need a mix of both (but I don't know what the balance should be).
But my point is, understanding GHGs as a climate forcing is also important for improving predictability in order to inform adaptation - as is understanding other forcings, both man-made and natural. For adaptation, it is kind of irrelevant whether long-term external forcing is man-made or natural - if it's going to happen anyway, we need to know what it means so we can plan ahead to live with the changes. This includes making the most of opportunities (eg: new shipping routes in the Arctic) as well as reducing vulnerability to any increased risks (eg: flooding).
Rhoda
I was slightly confused about whether you were same person under two names or not! Thanks for clarifying.
Yes the Met Office could cope with AGW being falsified. Our main effort in climate model development is now on improving regional-scale forecasts on seasonal to decadal timescales. Understanding the response to GHG forcing is one factor that is needed for decadal timescales, but natural forcings and internal variability are also very important and (as we know from seasonal forecasting!) extremely challenging.
Re: Jul 19, 2012 at 4:10 PM | Richard Betts
"The point is, it's better to use observations than re-analyses for looking at trends in humidity."
The problem is, haven't the historical 'observations' been homogenised?
"Radiosonde humidity records represent the only in situ observations of tropospheric water vapor content with multidecadal length and quasi-global coverage. However, their use has been hampered by ubiquitous and large discontinuities resulting from changes to instrumentation and observing practices. Here a new approach is developed to homogenize historical records of tropospheric (up to 100 hPa) dewpoint depression (DPD), the archived radiosonde humidity parameter....When combined with homogenized radiosonde temperature, other atmospheric humidity variables can be calculated, and these exhibit spatially more coherent trends than without the DPD homogenization. The DPD adjustment yields a different pattern of change in humidity parameters compared to the apparent trends from the raw data. The adjusted estimates show an increase in tropospheric water vapor globally."
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3816.1
So by homogenising the historical raw data the trends now differ - and isn't this 'homogenisation' vulnerable to subjective assumptions?
Or will the IPCC simply look at trends from the raw data?
Interesting to see what the Paltridge report actually said -
" It made the point (not an original point, but on the other hand one that is not widely known even among the cognoscenti) that water vapour feedback in the global warming story is very largely determined by the response of water vapour in the middle and upper troposphere. Total water vapour in the atmosphere may increase as the temperature of the surface rises, but if at the same time the mid- to upper-level concentration decreases then water vapour feedback will be negative. (There are hand-waving physical arguments that might explain how a decoupling such as that could occur)
Climate models (for various obscure reasons) tend to maintain constant relative humidity at each atmospheric level, and therefore have an increasing absolute humidity at each level as the surface and atmospheric temperatures increase. This behaviour in the upper levels of the models produces a positive feedback which more than doubles the temperature rise calculated to be the consequence of increasing atmospheric CO2.
The bottom line is that, if (repeat if) one could believe the NCEP data ‘as is’, water vapour feedback over the last 35 years has been negative. And if the pattern were to continue into the future, one would expect water vapour feedback in the climate system to halve rather than double the temperature rise due to increasing CO2."
http://climateaudit.org/2009/03/04/a-peek-behind-the-curtain/
And haven't recent papers seemed to show that the positive water vapour feedback effect has been somewhat overstated in climate models?
http://pielkeclimatesci.wordpress.com/2012/07/16/new-paper-weather-and-climate-analyses-using-improved-global-water-vapor-observations-by-vonder-haar-et-al-2012/
Regarding the surface datasets, see the article I linked to -
http://climateaudit.org/2012/07/17/station-homogenization-as-a-statistical-procedure/
Re your comment
"I think the way to have confidence in the surface datasets is to see whether similar results are obtained by other people re-doing them using the original data, as in the BEST study for example."
But haven't there been major quality issues with the BEST data -
http://climateaudit.org/2011/11/06/best-data-quality/
http://climateaudit.org/2011/12/20/berkeley-very-rural-data/
and methodology
"Temperature stations are known to be affected by numerous forms of inhomogeneity. Allowing for such inhomogeneities is an interesting and not very easy statistical problem. Climate scientists have developed some homemade methods to adjust for such homogeneities, with Menne’s changepoint-based algorithm introduced a few years ago in connection with USHCN among the most prominent. Although the methodology was entirely statistical, they introduced it only in climate literature, where peer reviewers tend to be weak in statistics and focused more on results than on methods.
Variants of this methodology have since been used in several important applied results. Phil Jones used it to purportedly show that the misrepresentations in the canonical Jones et al 1990 article about having inspected station histories of Chinese stations “didn’t matter” (TM- climate science). More recently, the Berkeley study used a variant."
http://climateaudit.org/2012/07/17/station-homogenization-as-a-statistical-procedure/
Re: "I'm afraid I don't have links to an archived online version of the "warming" brochure - it has been taken completely offline so people don't keep finding old versions and claim that we are saying something that we're no longer saying. "
Does that mean you are now saying that the original statement
""The urban heat island effect already warms central London by more than 10C on some nights. Increased urbanisation and release of waste heat would increase this still further - on top of the effects of global warming"
contained in the two Met Office brochures was inaccurate and the Met Office is no longer claiming that?
Dave; in this report, G L Stephens uses satellite data to show the climate models use double the real low-level cloud optical depth: www.gewex.org/images/feb2010.pdf [See page 5]. I noticed the same effect at the same time. Sagan's physics applied to clouds is wrong.
Depending on how the models work, it's possible that the 5-times exaggeration of IR warming, offset by exaggerated cooling, could, through the high dependence of evaporation rate on air-water temperature difference, lead to artificial positive feedback through the water cycle, the opposite of what is observed when you look at real data [TPW].
This would be the case despite the hind-casting being done correctly, giving the appearance of a stable system.
Is there anywhere I can see radiosonde RH records plotted before and after on the same graph? When was the change in sondes made to burn off water? Is there a discontinuity in the graph at any given point? Did the two kinds run side by side to enable an actual idea of the difference?
I despise adjusted data. The bucket adjustment always looked like a farce to me, along with the idea of resting a temperature record on the lowliest sailor on a ship where it was essential to have a log entry, not that it be correct. Like the temp records on the DEW line where you had to walk a hundred yards outside in minus forty. Yeah right!
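Given the raw and adjusted series, the before/after plot rhoda asks for is trivial to make oneself. A sketch (the file and column names are hypothetical placeholders, not a real published dataset):

```python
# Plot raw and adjusted RH series on the same axes to eyeball any
# discontinuity introduced or removed by the adjustment.
# "rh_raw.csv" and "rh_adjusted.csv" are hypothetical inputs.
import pandas as pd
import matplotlib.pyplot as plt

raw = pd.read_csv("rh_raw.csv", index_col="year")
adj = pd.read_csv("rh_adjusted.csv", index_col="year")

fig, ax = plt.subplots()
ax.plot(raw.index, raw["rh_percent"], label="raw", alpha=0.7)
ax.plot(adj.index, adj["rh_percent"], label="adjusted", alpha=0.7)
ax.set_xlabel("Year")
ax.set_ylabel("Relative humidity (%)")
ax.legend()
plt.show()
```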
Marion: an interesting and detailed analysis. However, by ~200 ppmV in a long optical path at ambient temperature, the emissivity hence absorptivity of CO2 in dry air levels off. This means that with no further argument about the true nature of the GHE [mine is based on the physics of spectral line inversion], there appears to be no possibility of any CO2-AGW!
[The warming in the 'PET bottle' experiment is probably at the walls from pseudo-scattering of IR energy, kinetically favoured over direct thermalisation and Nahle has apparently proved this experimentally.]
@Richard "dismantling our way of Life"
How's this for starters?
The numpties in DECC want a massive change to windmills and other "renewables" and hamstring shale gas power with unproven and massively costly carbon capture.
This will have the effect of undermining our industrial base by making the cost of energy-intensive processes prohibitive. As a country, we cannot afford this.
I do not want to live in the Third World.
Maybe the bunny-huggers and yurt lovers who now dominate our "Government" do and it is organisations, like the MET Office, who are giving these misguided cranks the ammunition to do just that.
Re: Jul 19, 2012 at 3:19 PM | Richard Betts
Richard,
Wouldn't becoming an AR5 reviewer effectively place the reviewer under a type of 'gagging order' - see comment from
Steve McIntyre
Posted Jan 13, 2012 at 12:45 AM
"I’m accepted as a AR5 reviewer but have refrained from downloading First Draft documents sinc I am not prepared to agree to the confidentiality terms. IPCC’s main enforcement mechanism seems to be their threat to expel someone as a reviewer. Since they ignored my review comment. I dont see that this thread has much downside for me.
Otherwise I’m not sure what they can do against someone who doesn’t rely on government grants. (Anyone relying on government grants who defied IPCC would pay the price when he sought new funding – that’s for sure.)
Would they sue me for commenting publicly on their documents? If so, for what? Offhand, it seems an unattractive course of action for them, but you never know."
http://climateaudit.org/2012/01/12/stockers-earmarks/
Re: Jul 19, 2012 at 7:25 PM | Don Keiller
Hi Don,
Well there's certainly been a lot of hype about Renewables and Energy [In]Efficiency.
Prior to Copenhagen they were pushing Tradable Energy Quotas - according to Wiki -
"TEQs (Tradable Energy Quotas) is a proposal for a national emissions and energy trading scheme that includes personal carbon trading as a central element."
http://en.wikipedia.org/wiki/Tradable_Energy_Quotas
It seems that 'Common Purpose' were the preferred buzz words at one time
UK
David Fleming's guide to TEQs -
"Energy and the Common Purpose: Descending the Energy Staircase with Tradable Energy Quotas" (3rd edition), downloadable as a PDF.
http://www.theleaneconomyconnection.net/downloads.html#TEQs
US
"Our Common Purpose: Addressing Climate Change"
"But no nation can solve this crisis on its own. Climate change is a global challenge that demands a global solution. The united states is engaging developed and developing country partners around the world to forge the necessary international response and to achieve a successful international agreement." ie the update to Kyoto
http://www.state.gov/documents/organization/133389.pdf
Nowadays of course the preferred terminology is simply 'sustainable development' based on the Agenda 21 theme, and it is amazing how many times 'Agenda 21' cropped up in the Climategate mails - E.M. Smith of Chiefio reports -
http://chiefio.wordpress.com/2011/12/18/foia-agenda-21/
but yes, very much undermining our economy and enforcing massive social changes.
Amendment - Jul 19, 2012 at 5:44 PM | Marion
Apologies, some of the links in my post above were out of order - they have been corrected there. Further to the Met Office UHI statement quoted above:
This is how it was reported in the press -
http://www.telegraph.co.uk/news/7792212/Cities-to-get-hotter-at-night-predicts-Met-Office.html
and how it's currently reported on the Met Office website -
"How do urban heat islands affect human beings?
Currently, about half of the world's population live in urban areas. By 2050 it will be closer to two-thirds. Even where urban heat islands are not a major problem now, they could exacerbate some of the projected impacts of climate change such as heatwaves and hot summer spells.
During the summer, higher night-time temperatures can already lead to nocturnal heat-stress and disrupted sleep for city residents in some parts of the world, posing a bigger risk during a heatwave. During the day, roads, walls and roofs exposed to the Sun can become very hot, resulting in even greater discomfort. In 2003, the heatwave across Europe (recorded as the hottest summer on record) is estimated to have resulted in an additional 35,000 deaths, many of them in major towns and cities"
http://www.metoffice.gov.uk/services/climate-services/case-studies/urban-heat-islands