Wednesday, Aug 1, 2012

Climate heroism

More mathematically inclined readers will be interested in a discussion paper by Jonty Rougier, a colleague of Tamsin Edwards' at Bristol. I met Jonty while I was at the Met Office earlier in the year and found him very engaging, with a very sharp mind.

The paper is rather mathematical for me. It discusses the difficulties of calibrating systems that are sensitive to initial conditions and that have an attractor, describing the problem as almost intractable. It then goes on to list the further complications found in environmental systems, and in paleoclimate in particular, and concludes:

When these additional complications are added to the intractability of palaeoclimate reconstruction (climate definitely has sensitive dependence on initial conditions and an attractor), that enterprise must be seen as heroic in the extreme, and we must expect the uncertainties to be very large indeed. But, somewhat surprisingly, they are not; e.g., as shown in the celebrated hockey stick, which was used so much in the Third Assessment Report of the IPCC (Houghton et al., 2001); Montford (2010) provides a readable if slightly hair-raising account.


Reader Comments (52)

The draft chapter "Uncertainty in climate science and climate policy" by Rougier and Crucifix is also very interesting. It addresses in passing some interesting psychological aspects of the concept of uncertainty (and the mathematics are less daunting).

The pdf is available on Rougier's website.

Aug 1, 2012 at 10:37 AM | Unregistered CommenterAlan Kennedy

[Snip - manners]

Aug 1, 2012 at 10:49 AM | Unregistered CommenterGixxerboy

I draw comfort from the fact that the issues raised in HSI and the importance of statistics in defining the direction/promotion of climate study and climate policy are increasingly being recognised. Most of our academics have little choice over the direction of their studies and it is unsurprising if it has taken time for questions to be raised. Better late than never.

We should never forget that most of our academics are in favour of transparency/scrutiny and have chosen their profession in order to advance knowledge that will help humanity to progress to a better place.

Aug 1, 2012 at 10:57 AM | Unregistered CommenterChairman Al

I've been saying for a while that most recursive climate simulations suffer from exponential error. This makes CCM results into basically an expensive "dice roll" with less predictive power than, say, weather(now + x) = weather(now).
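
A minimal sketch of this point (a toy logistic map in Python, not an actual climate model): two runs started a millionth apart become effectively uncorrelated within a few dozen iterations.

# Toy illustration of exponential error growth under repeated iteration.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # two starts differing by one part in a million
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
# After a few dozen steps the separation is of order one: the run carries
# no more information about its start than a random draw from the attractor.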

Aug 1, 2012 at 11:23 AM | Unregistered CommenterAC1

The first paper I ever published included simulations of a system exhibiting what we then called "parametric sensitivity": that is to say, the predictions were sensitive to the values of the parameters in the Partial Differential Equations, in their Boundary Conditions, and in their Initial Conditions.

Perhaps that's why my initial reaction on reading about Climate Modelling was a snort of derision. Of course, as I read more my views changed; my estimate of the competence of many of the workers changed from derision to contempt, and my emotions came to include outrage, as I realised that some of them were crooks. I dare say that most are no worse than holy fools, which puts them well ahead of many workers in other areas of Climate "Science".

Aug 1, 2012 at 11:48 AM | Unregistered Commenterdearieme

@dearieme
I agree. I have been involved with modelling cardiac activation with a view to understanding cardiac arrhythmias. This has recently become a fashionable field for mathematicians as it involves a very stiff, highly non-linear set of coupled PDEs and ODEs.

The behaviour of any model depends critically on parameters that are fitted to an empirical, rather than first principles, model of how currents pass across cell membranes.

I have pointed out to a number of mathematicians that their models simply do not conform to anything measured experimentally. Do they care? Do they believe that they are making a valuable contribution to medical science? Absolutely!

Like you, I have been astonished at some of the models in climate science, their ill-conditioning and their sensitivity to initial conditions.

Aug 1, 2012 at 12:17 PM | Unregistered CommenterRCSaumarez

Bish, ignore the partially 'redacted' swear word (one routinely voiced in full, for example, on BBC's Top Gear as indicative of a person worthy of contempt).

Jonty Rougier's words imply a contempt for uncertainty and your own work. Does he really mean what he says?

[I have no idea where you are getting this from. He explains that the uncertainties are huge and then says that in the hockey stick they are not. Seems spot on to me]

Aug 1, 2012 at 12:49 PM | Unregistered CommenterGixxerboy

His comments about McShane and Wyner, in the para below where he refers to you, are inaccurate.
He says "The recent involvement of statisticians in palaeoclimate reconstructions does not seem to have increased the uncertainty very much;" citing M&W.

In fact M&W say in the abstract
"Our model provides a similar reconstruction but has much wider standard errors, reflecting the weak signal and large uncertainty encountered in this setting."
and in the conclusions,
"Climate scientists have greatly underestimated the uncertainty of proxy-based reconstructions".

I have emailed him.

Aug 1, 2012 at 1:13 PM | Registered CommenterPaul Matthews

Dearieme, I think we are all indebted to you for reminding us of the concept of the "Holy Fool". Sometimes the taxonomy of idiocy is complex, and I had long forgotten this class of idiot.

Aug 1, 2012 at 1:20 PM | Unregistered CommenterJeff Wood

Hmm, interesting paper, I'll need to read a bit more carefully but good to see something like this (even mildly heretical) coming from my old uni.

Paul, I may be giving the author the benefit of the doubt (and I may be biased, see above!) but I took that as meaning that the observation by McShane and Wyner did not seem to propagate much to the rest of the field of research. But I accept that if this is what was intended (and it may not be), the phrasing is ambiguous and would benefit from tweaking.

Aug 1, 2012 at 1:20 PM | Unregistered CommenterSpence_UK

Alan Kennedy is right - Rougier's paper on Climate Uncertainty is VERY interesting. (Find it here: http://www.maths.bris.ac.uk/~MAZJCR/climPolUnc.pdf.) This is the takeaway summary:

"In a nutshell, we do not think that academic climate science equips climate scientists to be as helpful as they might be, when involved in climate policy assessment. Partly, we attribute this to an over-investment in high resolution climate simulators, and partly to a culture that is uncomfortable with the inherently subjective nature of climate uncertainty."

They advocate shifting research funding from a small number of complex models that do not deal adequately with uncertainty to a greater variety of smaller scale ones that more rigorously address key aspects of it. In this world it would be much easier for statisticians sympathetic to Steve McIntyre's arguments to find funding. In the light of this I think the odd last sentence in the Rougier paper linked to by Bishop can be paraphrased as "I don't want the climate police on my back so I'm not going to stick my neck out by being too explicit about the poor state of climate statistics but here's a link to show you what a mess it is."

Rougier's Institute at Bristol has just received £4.2 million of funding for the Sustain initiative (http://www.maths.bris.ac.uk/research/highlights/SusTain). One of its aims is to "pick apart ad hoc methods (developed to analyse complex data sets), examine them for mathematical rigour versus their computability and quantify the limits of their applicability". Even if their work doesn't address climate science (and it probably will) it will almost certainly vindicate many of the concerns that Steve and others have about climate statistics. (And also reveal that it's just part of a bigger issue of statistical naivety in many fields.)

Aug 1, 2012 at 1:34 PM | Unregistered Commentercarbonneutral

"we must expect the uncertainties to be very large indeed. But, somewhat surprisingly, they are not; e.g., as shown in the celebrated hockey stick

Somewhat surprising indeed.

Aug 1, 2012 at 1:48 PM | Unregistered CommenterGeckko

Spence, thanks, you may be right: he may mean that the uncertainty has not increased very much in the general climate science community.

BTW I got an auto-reply saying he is away.

Aug 1, 2012 at 1:49 PM | Registered CommenterPaul Matthews

You all would do better just to consider that the better the data, the less the need for "statistics" to wring meaning out of it. With good physics (and good data), there will be good climate science, and it is the underlying physics (and the data...hopeless sigh) that stinks in climate science. And another clue: Anyone who speaks of "attractors", or attractor states, has already gotten his/her causality backward. Chaos theory--from which "attractors" sprang--is just part of the rising tide of general scientific incompetence today.

Aug 1, 2012 at 1:51 PM | Unregistered CommenterHarry Dale Huffman

Hang on, Harry, you don't need "Chaos Theory" to meet attractors - they arise naturally in examples of nonlinear dynamics where there is no question of chaos. For example, if your system consists of two ordinary differential equations it may have, for instance, two attractors and yet, since it consists of fewer than three ODEs, it won't exhibit chaos.

(Unless everything has changed since last I looked.)
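
A minimal sketch of such a system (my own toy example in Python, not anything from the paper): a planar double-well system with two point attractors which, having only two ODEs, cannot be chaotic.

# Forward-Euler integration of x' = y, y' = x - x**3 - 0.5*y, which has
# stable fixed points (attractors) at (+1, 0) and (-1, 0) and a saddle at the origin.
def simulate(x, y, dt=0.01, steps=5000):
    for _ in range(steps):
        dx, dy = y, x - x**3 - 0.5 * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

for x0 in (-2.0, -0.3, 0.3, 2.0):
    xf, _ = simulate(x0, 0.0)
    print(f"start x0 = {x0:+.1f} -> settles near x = {xf:+.2f}")
# Starts on either side of x = 0 end up on different attractors, yet
# nothing here is chaotic.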

Aug 1, 2012 at 2:14 PM | Unregistered Commenterdearieme

[I have no idea where you are getting this from. He explains that the uncertainties are huge and then says that in the hockey stick they are not. Seems spot on to me]

Aug 1, 2012 at 12:49 PM | Gixxerboy

I think, like me, your initial read is flawed. He is actually saying, I think, that there is a lack of uncertainty in the HS that should be there and that, if it were, the HS would not be viewed by its followers with such religious fervour.

Aug 1, 2012 at 2:39 PM | Unregistered Commenterstephen richards

(Unless everything has changed since last I looked.)

Aug 1, 2012 at 2:14 PM | dearieme

I think it is just a matter of convention. Chaos theory has become à la mode and has sequestered the use of 'attractors'.

Aug 1, 2012 at 2:41 PM | Unregistered Commenterstephen richards

Alan Kennedy and carbonneutral, "Rougier's paper on Climate Uncertainty"

Thanks for the link and what a pleasure to see this one - especially coming from Bristol (my old haunt as well!).

They say, "The reason that we are suspicious of arguments about climate founded on experiences in meteorology is the presence of biological and chemical processes in the earth system that operate on climate policy but not weather time-scales."

Maybe worth pointing out that apparently there are also important physical processes not currently incorporated into IPCC class simulators and likely to affect the simulation of long-term variability, e.g. land-ice and deep ocean currents.

Aug 1, 2012 at 2:43 PM | Registered CommenterPhilip Richens

Let me see if I get the background.

A proponent of stochastic models "demonstrates" the intractability of deterministic models, while in a parallel paper he states that "scientists are not as helpful as they might be ... when involved in climate policy assessment" .... partly because of ... "an over-investment in high resolution climate simulators".

Funding must be getting tight.

I don't get the "heroic" bit either.

While we can argue about data assimilation and deterministic models, boundary and initial conditions, plus attractors, it looks to me completely gratuitous to mention the hockey stick: principal component analysis of proxy data can hardly be incorporated into the main topic under discussion. It can hardly be called a model either. A numerical tool that produces a constant shape from noise can be called an algorithm, but not a model, and the problems discussed do not apply. Rougier is perfectly aware of that, so why the flattering diversion?

Aug 1, 2012 at 3:03 PM | Registered CommenterPatagon

Patagon writes:

"It can hardly be called a model either. A numerical tool that produces a constant shape from noise can be called an algorithm, but not a model, and the problems discussed do not apply."

What a pleasure it is to encounter someone who distinguishes between model and algorithm. (A poorly constructed computer program, no matter how large or sophisticated, can turn out to be a collection of algorithms but not amount to a model.)

Aug 1, 2012 at 3:32 PM | Unregistered CommenterTheo Goodwin

It is good to see any attempts at applying some discipline to the math for climate work. The paper points out the obvious....this is a hard problem with so many unknowns that claims of precision in reconstructions cannot be supported.

Coming from an engineering background, it has always seemed to me that the claims of precision by the climate people would never pass the most basic review. They have constantly claimed to have extracted a temperature "signal" with great precision. Unfortunately the claimed "signal" is much smaller than the margin of error and the noise within their models. Attempts to verify their models have been a spectacular failure...that was the essence of "hide the decline"... the model diverged from the known recent temps so any claim of being able to reconstruct temps reaching back hundreds of years using the same model became unsupportable.

They would generally get a failing grade even as first year engineering students.

Aug 1, 2012 at 3:54 PM | Unregistered Commenterfuninus

I found much to like in Rougier's paper. I especially like its dispassionate tone. His final sentence is:

"If we are to be successful in our application it will be through
applying physical insights to tune the data assimilation method, proceeding
sequentially, and focusing initially on those aspects of the system that|we
hope|can at least rule out bad candidates for X0 and 0."

It seems to me that he is saying that the problems will remain intractable until climate scientists come up with some (additional) well confirmed physical hypotheses that provide the modelers something more to work with. Climate science is in its infancy. No amount of modeling can cause it to grow.

Aug 1, 2012 at 4:22 PM | Unregistered CommenterTheo Goodwin

I recall talking to my Met Office fellow chorister back in Feb 09! He was truly amazing: he actually believed that the models were accurate & reliable because they were built to obey the laws of physics! He insisted on this because he asked me to explain how they got the temperature curve they had if the models weren't right. He just couldn't see the wood for the trees as I had to explain to him that the Met Office staff program their computer models to show warming for given amounts of carbon dioxide in the atmosphere, not that the model was showing warming all by itself, which he was implying! It's the institutional mind set of mass brainwashing, they really do believe they're that good at modelling climate.

Aug 1, 2012 at 4:26 PM | Unregistered CommenterAlan the Brit

This all loses credibility with the bizarre attempt to conflate "climate attractors" with Lorenz attractors - they are two totally different things, with the former being an a priori assumption based upon the observation of a billion years of negative feedback climate mechanisms on Earth (see Tamsin Edwards' blog).

Aug 1, 2012 at 4:27 PM | Unregistered CommenterRoger Longstaff

'we must expect the uncertainties to be very large indeed'

Yes, but this is climate science, where uncertainties of any shape and size are welcomed with open arms: by the magic of 'the cause' they put their wicked uncertain ways behind them and are born again as hard-core facts which are unchallengeable, except by evil science deniers.

So no problems really, for have not the prophets Mann and Jones shown us that if you have faith you don't actually need valid facts?

Aug 1, 2012 at 5:20 PM | Unregistered CommenterKnR

I don't actually understand why the HSI was referenced in this paper. With regard to the extract that Andrew posted, I don't understand what sensitivity to initial conditions has to do with paleoclimate reconstructions. If trees were actually any use for recording temperatures, or the Romans had read the thermometers they had made and written the results down every day, I would expect the uncertainty in the reconstructions to be very low. Unfortunately trees are useless and we haven't found the Romans' logbooks yet, so we don't really know the temperatures.

However some time ago we did name things like the Holocene climatic optimum, Roman warm period and Little ice age, so we must have some idea what the temperatures were like.

Aug 1, 2012 at 5:46 PM | Unregistered CommenterRob Burton

It doesn't make any sense, Rob.

Aug 1, 2012 at 8:38 PM | Unregistered CommenterPatagon

Re: Aug 1, 2012 at 1:34 PM | carbonneutral

"Alan Kennedy is right - Rougier's paper on Climate Uncertainty is VERY interesting. (Find it here: http://www.maths.bris.ac.uk/~MAZJCR/climPolUnc.pdf.)

"This is the takeway summary: "In a nutshell, we do not think that academic climate science equips climate scientists to be as helpful as they might be, when involved in climate policy assessment. Partly, we attribute this to an over-investment in high resolution climate simulators, and partly to a culture that is uncomfortable with the inherently subjective nature of climate uncertainty." They advocate shifting research funding from a small number of complex models that do not deal adequately with uncertainty to a greater variety of smaller scale ones that more rigorously address key aspects of it."

Interesting bit of spin you've put on that carbonneutral.

Tamsin has already cited the real reason on her blog

"there has been a fairly steady increase in resolution, in how many processes are included, and in how well those processes are represented. In many ways this is closing the gap between simulators and reality.... But the other side of the coin are, of course, the “unknown unknowns” that become “known unknowns”. The things we hadn’t thought of. New understanding that leads to an increase in uncertainty because the earlier estimates were too small."

... so they want to reduce the complexity of the models so they can reduce their uncertainty levels. Typical 'Climate Science', sacrifice an attempt at accuracy for hype.

"Many of those of us who spend our working hours, and other hours, thinking about uncertainty, strongly believe the climate modelling community must not put resolution and processes (to improve the simulator) above generating multiple predictions..."

And there's that word again - I thought the term was supposed to be 'projections'!!

http://allmodelsarewrong.com/limitless-possibilities/

Aug 1, 2012 at 11:14 PM | Unregistered CommenterMarion

One of the more interesting threads in the bishopric library. Particularly the commentary regarding "models" by RCSaumarez, dearieme, and Patagon.

Patagon saith: "A proponent of stochastic models 'demonstrates' the intractability of deterministic models, while in a parallel paper states that 'scientists are not as helpful as they might be ...when involved in climate policy assessment' .... partly because ... 'an over-investment in high resolution climate simulators'...."

I've never gotten far enough into the gears and cogs of actual climate models to find out, but I have a strong suspicion that at some point random number generators are employed to provide "climate like" behaviours. Although climate data are apparently chaotic, there are elements of cause-and-effect even at that level. Substitution of random number generators for the (unattainable) mathematical representation of the natural processes ignores some reality. Any cause-and-effect at this level is now that of the random number algorithm, not the physical system, itself. The ability of such a model to predict climate over long intervals is therefore limited by the stochastic subroutines. Model ensembles? No matter how high you pile it, it's still garbage.

Aug 1, 2012 at 11:43 PM | Unregistered Commenterjorgekafkazar

"Climate heroism"

Really not sure where you get that term from Bish.

After all they're trying to model an incredibly complex and chaotic system -

http://wattsupwiththat.com/2012/01/21/the-ridiculousness-continues-climate-complexity-compiled/

Use subjective assumptions to decide what parameters to include, and indeed how to address the data issues -

and we've already been given a hint of what the data is like from Climate modeller Harry

"seriously fed up with the state of the Australian data. so many new stations have been
introduced, so many false references.. so many changes that aren't documented. Every time a
cloud forms I'm presented with a bewildering selection of similar-sounding sites, some with
references, some with WMO codes, and some with both. And if I look up the station metadata with
one of the local references, chances are the WMO code will be wrong (another station will have
it) and the lat/lon will be wrong too."

"am very sorry to report that the rest of the databases seem to be in nearly as poor a state as
Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO
and one with, usually overlapping and with the same station name and very similar coordinates. I
know it could be old and new stations, but why such large overlaps if that's the case? Aarrggghhh!
There truly is no end in sight."

"the expected 1990-2003 period is MISSING - so the correlations aren't so hot! Yet
the WMO codes and station names /locations are identical (or close). What the hell is
supposed to happen here? Oh yeah - there is no 'supposed', I can make it up. So I have :-) "

http://wattsupwiththat.com/2009/11/25/climategate-hide-the-decline-codified/

GIGO springs to mind and to top it all they think they can narrow their 'uncertainties' by simplifying the models and increasing the number of runs!!

It may be 'heroism' to some but to me it's rather more that we're moving towards a UN sponsored idiocracy.

[I was alluding to the heroic assumptions that Jonty Rougier mentions in his article. Different to the usual type of heroism!]

Aug 1, 2012 at 11:43 PM | Unregistered CommenterMarion

GIGO is the result when models are 100% accurate.
Climate models are not 100% accurate and neither is the initial data (although not garbage). Once they run through a few times any signal will be reduced by exponential error.
Near perfect in, garbage out.

Aug 2, 2012 at 1:50 AM | Unregistered CommenterAC1

Marion,

"And there's that word again - I thought the term was supposed to be 'projections'!!"

Could someone show me a prediction from a model in the wild? You know, a raw prediction prior to any spin. I do not believe that they exist. All I have ever seen is a climate modeler saying that his/her model predicts some phenomenon. But is there something that ties the model to the prediction? Something natural and raw. Or must there always be some climate scientist who says that the model makes the prediction? In physics there is something raw and natural. The hypotheses in conjunction with initial conditions IMPLY some future phenomenon. Is there anything like this in the world of models?

Aug 2, 2012 at 3:22 AM | Unregistered CommenterTheo Goodwin

Aug 1, 2012 at 4:26 PM Alan the Brit

How very well put.

They (the Met Office) really do believe, with the utmost sincerity, that they are pretty good at modelling climate and that their models meaningfully predict future climate because:

- They are based on physical principles*
- They successfully reproduce (more or less) past climate variations.

*except for the bits that are not so well understood and are therefore "parameterised".

Aug 2, 2012 at 7:46 AM | Registered CommenterMartin A

[duplicate post deleted]

Aug 2, 2012 at 7:48 AM | Registered CommenterMartin A

"Could someone show me a prediction from a model in the wild?"

The tropical tropospheric hotspot appears to be a fairly solid prediction that gets mentioned frequently.

Aug 2, 2012 at 8:27 AM | Unregistered CommenterRob Burton

Some comments (apologies for stating the obvious to some)

1) Rougier is discussing data assimilation, a portmanteau term for the process of updating a model in the light of newly acquired data, and in the first instance (actually, pretty well throughout) he sticks to seeking to estimate the initial conditions of a dynamical system.
2) Attractors: an attractor is just a set of points to which a dynamical system may tend. No necessary connection with chaos. Think, for example, of the edge of the bottom of a cup as the attractor for the dynamical system corresponding to a marble dropped down the inner side of the cup as it is gently swirled: the marble ends up rotating around the edge. The marble doesn’t end up at a fixed point, but it does end up confined to the attractor.
3) Now the attractor comments become fairly clear. Suppose I observe the marble with some imperfect level of accuracy at some large time t. The marble will be on the attractor (the inner bottom edge of the cup) and the fairly precise position cannot tell me where the marble started. Worse, starting points a long way apart at the top of the cup will produce the same position of the marble at time t, so I can’t even estimate the starting position well. (A numerical sketch of this is given after the list.)
4) Now the “uncertainty” issue. I am certain that Rougier is (a) using understatement and (b) asserting strongly that the scientific uncertainty should be there and the fact that it isn’t is a very bad thing. In other words the models he discusses are statistically wrong in the sense that they substantially understate the uncertainty of their conclusions.
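
A minimal numerical sketch of point 3 (a toy Python model of the swirled cup, not anything of Rougier's; the equations and parameter values are illustrative assumptions):

import math

# Forward-Euler for r' = r*(1 - r**2), theta' = 2*pi: every trajectory is
# drawn onto the circular attractor r = 1 while circulating at a fixed rate.
def state_at(r0, t_end=20.0, dt=0.001):
    r, th = r0, 0.0
    for _ in range(int(t_end / dt)):
        r += dt * r * (1.0 - r * r)
        th += dt * 2.0 * math.pi
    return r * math.cos(th), r * math.sin(th)

for r0 in (0.05, 0.5, 3.0):
    x, y = state_at(r0)
    print(f"r0 = {r0:4.2f} -> position at t = 20: ({x:+.4f}, {y:+.4f})")
# Starting radii spread over a factor of 60 all arrive at (almost) exactly
# the same point: a late observation cannot recover where the marble started.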

Aug 2, 2012 at 8:47 AM | Registered Commentersauljacka

Re: Aug 2, 2012 at 8:47 AM | sauljacka

"4) Now the “uncertainty” issue. I am certain that Rougier is (a) using understatement and (b) asserting strongly that the scientific uncertainty should be there and the fact that it isn’t is a very bad thing. In other words the models he discusses are statistically wrong in the sense that they substantially understate the uncertainty of their conclusions."

Sauljacka, much as I would prefer to think that you are correct, unfortunately it simply isn't borne out by the statement that...

"In a nutshell, we do not think that academic climate science equips climate
scientists to be as helpful as they might be, when involved in climate policy
assessment. Partly, we attribute this to an over-investment in high resolution
climate simulators, and partly to a culture that is uncomfortable with the
inherently subjective nature of climate uncertainty."

http://www.maths.bris.ac.uk/~MAZJCR/climPolUnc.pdf

"An over-investment in high resolution climate simulatores...." ???

"a culture that is uncomfortable with the inherently subjective nature of climate uncertainty" ???

Aug 2, 2012 at 9:53 AM | Unregistered CommenterMarion

From the 2009 Trenberth et al. 'Energy Budget', the models apparently create 94.5 W/m^2 IR, a ‘Perpetual Motion Machine of the 2nd Kind’ offset by exaggerated cloud albedo, in particular doubled real low level cloud optical depth. Because increasing absorbed IR by a factor of ~5 in the models significantly increases evaporation from the oceans, most claimed ‘positive feedback’ by the water cycle is probably imaginary.

The problem comes from Meteorology which apparently imagines single pyrgeometers measure ‘Downwelling IR’ when they do not. They confuse it with the temperature radiation field. The manufacturers correctly warn that to measure net IR, which can do thermodynamic work, you need two instruments back to back.

Also, the assumption that UP IR from the Earth’s surface is that of an isolated black body in a vacuum is baseless, as is the similar assumption for ‘back radiation’ from the lower atmosphere. There are other mistakes, e.g. correct the IR physics and the assumed 238.5 W/m^2 DOWN IR at TOA vanishes. Also, because CO2 is in IR self-absorption mode by ~200 ppmV, long optical path, it appears there can be no CO2-AGW.

My view, in common with all professionals with substantial real-World heat transfer experience to whom I have spoken is that no climate model with these mistakes can predict climate. Yet the hind-casting gives the illusion they can. I am amazed this state of affairs is allowed to continue.

Aug 2, 2012 at 10:15 AM | Unregistered Commenterspartacusisfree

A few months ago some of us in Oz were looking at errors in that fundamental record, the temperature/time series for climate. People used to looking at numbers will sometimes start to feel that there is something odd, as did we. The cause turned out to be the conversion of deg F to deg C late in 1972. When expressed to one place after the decimal, the deg C has an ambiguous conversion back to deg F for some numbers. This ambiguity led to certain numbers being present far more often than by chance, then to the discovery that many pre-1972 readings of thermometers were rounded to the nearest whole number, i.e. nothing after the decimal in deg F. This has to have an effect on the error bounds of a series. Does any bright mind know of a way to determine if a series in deg F, first read to one place after the decimal, has then been rounded down to the nearest whole number, rounded up to a whole number, or split up/down at X.5?
Discussed at http://joannenova.com.au/2012/03/australian-temperature-records-shoddy-inaccurate-unreliable-surprise/
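
A minimal sketch of one diagnostic for this (Python; the sample values are hypothetical placeholders, and it only addresses the whole-degree question, not the rounded up/down/X.5 one). Whole-degree F readings converted to deg C and reported to one decimal can never end in '.5' (each run of nine whole F degrees maps to the tenths digits 0, 6, 1, 7, 2, 8, 3, 9, 4), so an empty '.5' bin in the pre-1972 series is a fingerprint of whole-degree F originals.

from collections import Counter

def tenths_digit_counts(series_c):
    # Tally the tenths digit of each reported deg C value.
    counts = Counter()
    for c in series_c:
        counts[int(round(abs(c) * 10)) % 10] += 1
    return counts

# Hypothetical reported values (one decimal place), not real station data:
sample = [21.1, 18.3, 25.0, 16.7, 22.8, 19.4, 20.6, 17.2, 23.9]
print(tenths_digit_counts(sample))
# In a long series, a (near-)empty '5' bin with the other digits roughly
# uniform points to whole-degree F originals; readings taken to 0.1 F
# populate all ten bins roughly equally.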

Aug 2, 2012 at 11:44 AM | Unregistered CommenterGeoff Sherrington

@Marion
I'm not sure I follow your concerns.
Rougier is fundamentally concerned with uncertainty and seems to be saying
1) a small number of very high resolution simulations is going to tell you very little
and
2) Climate scientists (as against statisticians) get very itchy about uncertainty, particularly model uncertainty and the admission of subjectivity.

Aug 2, 2012 at 12:49 PM | Registered Commentersauljacka

I learnt virtually everything I know about statistics and inference from Jonty. I collaborate and consult with him on everything I do.

I want to contribute to this thread but have to get some things off my plate today first. Back later to read it all properly...

Tamsin

Aug 2, 2012 at 12:51 PM | Unregistered CommenterTamsin Edwards

Re: Aug 2, 2012 at 12:49 PM | sauljacka

He didn't say 'a small number' in this - he said there had been an 'over-investment' in high resolution climate simulators, an odd phrase to use if simply asking for more investment.

"In a nutshell, we do not think that academic climate science equips climate
scientists to be as helpful as they might be, when involved in climate policy
assessment. Partly, we attribute this to an over-investment in high resolution
climate simulators, and partly to a culture that is uncomfortable with the
inherently subjective nature of climate uncertainty."

But interesting that you say "Climate scientists (as against statisticians) get very itchy about uncertainty, particularly model uncertainty and the admission of subjectivity."

Perhaps you could quote his actual words rather than paraphrasing or perhaps the author himself would like to clarify - I'd rather have his own views rather than your version of what he 'seems' to be saying.

Aug 2, 2012 at 3:30 PM | Unregistered CommenterMarion

The actual words from Rougier's paper:
"At the other end of the modelling spectrum, there are phenomenological models of low-dimensional properties of climate and its impacts ... There are several advantages to such models. First, they are small enough to be coded by the scientist himself, and can be carefully checked for code errors. Thus the scientist can himself be fairly sure that the interesting result from his simulator is not an artifact of a mistake in the programming. Second, they are often tractable enough to permit a formal analysis of their properties. For example, they might be qualitatively classified by type, or explicitly optimised, or might include intentional agents who perform sequences of optimisations (such as risk managers). Third, they are quick enough to execute that they can be run for millions of model years. Hence the scientist can use replications to assimilate measurements (including tuning the parameters) and to assess uncertainty, both within a statistical framework (e.g., using the sequential approach of Andrieu et al., 2010). Of course, ‘big modellers’ will be scornful of the limited physics (biology, chemistry, economics, etc.) that these phenomenological models contain, although they must be somewhat hastened by the inability of their simulators to conclusively outperform simple statistical procedures in tasks such as ENSO prediction (Barnston et al., 2012). But the real issue is one of ownership. A single climate scientist cannot own an artifact as complex as a large-scale climate simulator, and it is very hard for him to make a quantitative assessment of the uncertainty that is engendered by its limitations."

He (or his hypothetical decision maker) is also advocating more work from the more complex climate models: "What I need is a designed ensemble, constructed to explore the range of possible climate outcomes, through systematically varying those features of the climate simulator that are currently ill-constrained, such as the simulator parameters, and by trying out alternative modules with qualitatively different characteristics.”

If funding is unlimited, all this can happen whilst maintaining the current number of complex models. However, if funding is constrained (as it will be), yet the changes are important, then the number of complex models/groups that are being funded would have to fall.

In any event, the big takeaway message for me is that effective statistics is at the heart of the 'climate wars', that many in the younger generation of academics, and the professional statistician community when it gets involved, recognise either publicly or privately (there are still strong pressures in academia against being explicit on this) that standards have been very questionable, and that there are growing linkages with the broader issues of 'statistics improvement' in all fields that Bristol are addressing. Steve McIntyre has recently pointed out that he doesn't see his site as a generic climate science blog a la Bishop/Watts but in many ways as a statistics community, allowing highly experienced statisticians both in and out of academia to bring their expertise to bear on climate claims. I am quite sure that he will be getting much kudos in the statistics community, and many others, for this in coming years and decades.

Aug 2, 2012 at 4:21 PM | Unregistered Commentercarbonneutral

carbonneutral: Spencer has simple Excel spreadsheet models you can download for yourself. The problem with the big models is that the GCMs are fine, just that in hind-casting, bad IR and heat transfer mistakes can be hidden by assuming exaggerated cloud albedos. Because few people can get to that level of detail, this has been missed.

US cloud physicist G L Stephens and I noticed the cloud physics was wrong in early 2010, a doubling of real low level cloud albedo. This is why no IPCC climate models can be trusted. The positive feedback is an artefact and because the IR physics is so wrong, there is probably no CO2-AGW.

Aug 2, 2012 at 5:20 PM | Unregistered Commenterspartacusisfree

Apologies; a doubling of real low level cloud optical depth, about 10% more albedo.

Aug 2, 2012 at 5:24 PM | Unregistered Commenterspartacusisfree

@Marion
Agreed the author's own response would be better.
Why don't you email him?
I was simply seeking as a mathematician and statistician to intermediate some technical vocabulary and some academic coyness. I'm sorry that it incensed you.

Aug 2, 2012 at 5:27 PM | Registered Commentersauljacka

Re: Aug 2, 2012 at 5:27 PM | sauljacka

"Agreed the author's own response would be better.
Why don't you email him?
I was simply seeking as a mathematician and statistician to intermediate some technical vocabulary and some academic coyness. I'm sorry that it incensed you."

Actually I'm hoping that the good Bishop here will invite him to the forum, for all to benefit - and as for 'incensed' you really should stop ascribing emotions and meanings to others - you are welcome however to quote me! I for one prefer to go to source whenever possible.

Aug 2, 2012 at 5:48 PM | Unregistered CommenterMarion

"...[W]e've already been given a hint of what the data is like from Climate modeller Harry..." --Marion

IIRC, Harry is not a climate modeller, but a programmer ordered to create pandemonium out of chaos in the CRU (3.0?) global temperature program(s). What he found demonstrates an extraordinarily low level of programming competence amongst those climate buffoons boffins who had previously put their hammy fists to the oars of CRU's programs.

http://climateaudit.org/2009/11/23/the-harry-read_me-file/

Aug 2, 2012 at 7:06 PM | Unregistered Commenterjorgekafkazar

Alice Springs airport, in the desert centre of Australia, relates to comments by Harry the programmer, to estimates of error and to models to simulate climate processes - hopefully to predict outcomes.
Alice Springs provides a simple climate example with few confounding variables in the Dry. You can easily gather more supplementary information from the Bureau of Meteorology site, such as relative humidity and rainfall. They more or less go together sometimes.
Here's the observational data (shortened).

Example July 2012, the coldest July on the Tmin record for this 1942-2012 site. Where has global warming gone?

Tmin (deg C)   Day (July 2012)   Sunshine (hours)
   0.0               1               10.1
  -1.7               2               10.1
  -4.9               3               10.1
  -4.3               4                9.9
  -4.0               5               10.2
  -4.4               6               10.2
  -5.2               7               10.2
  -0.7               8               10.2
   2.4               9               10.2
   0.9              10               10.2
   2.9 ***          11                8.5   (asterisks: high Tmin, short sunshine)
  12.2 ***          12                5.8
   8.1 ***          13                9.9
   2.9 **           14                9.8
  -1.4              15               10.1
  -3.0              16               10.2
  -2.3              17               10.2
  -2.1              18               10.2
  -1.0              19               10.2
etc etc.

The postulate is that the rare cloudy days of July are followed by warmer nights than usual.
The challenge is to model this relationship, see if it is general, then show a prediction.
This must be about as simple as nature gives us. Have a go, modellers.
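
A minimal sketch of one way to start (Python, using just the 19 days listed above; the day-to-night pairing convention is an assumption and should be checked against the BoM definitions):

import numpy as np

tmin = np.array([0, -1.7, -4.9, -4.3, -4, -4.4, -5.2, -0.7, 2.4, 0.9,
                 2.9, 12.2, 8.1, 2.9, -1.4, -3, -2.3, -2.1, -1])
sun  = np.array([10.1, 10.1, 10.1, 9.9, 10.2, 10.2, 10.2, 10.2, 10.2, 10.2,
                 8.5, 5.8, 9.9, 9.8, 10.1, 10.2, 10.2, 10.2, 10.2])

same_day = np.corrcoef(sun, tmin)[0, 1]           # sunshine vs that row's Tmin
next_day = np.corrcoef(sun[:-1], tmin[1:])[0, 1]  # sunshine vs the next row's Tmin
print(f"same-day r = {same_day:.2f}, next-day r = {next_day:.2f}")
# A strongly negative r would support the postulate: less sunshine (more
# cloud) going with warmer minimum temperatures.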

Aug 3, 2012 at 8:42 AM | Unregistered CommenterGeoff Sherrington
