Tuesday
Nov 24, 2009
by
Bishop Hill
Code thread updated
I've been updating the code thread. Some readers, particularly Mark (thanks Mark), have been doing a fantastic job of uncovering juicy titbits. There are some, ahem, extraordinary comments in that code. Take a look.
Reader Comments (11)
This comment on the CA mirror site sums up the code:
I really thought code like this would be super complicated....
obsj04_f7.pro
;
; Corrected version of adjustment series should go linearly from 1950 value
; to zero in 1970 and stay zero thereafter
;
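The adjustment that comment describes is easy to picture. A minimal sketch (in Python, not the original IDL; the function name and the example 1950 value are assumptions, only the 1950/1970 endpoints come from the comment):

```python
# Hypothetical sketch of the adjustment described in the obsj04_f7.pro
# comment: the series runs linearly from its 1950 value down to zero
# in 1970, and stays at zero thereafter.
def tapered_adjustment(year, value_1950):
    """Return the adjustment applied in a given year.

    Assumption: years before 1950 keep the full 1950 value.
    """
    if year <= 1950:
        return value_1950
    if year >= 1970:
        return 0.0
    # Linear ramp between 1950 and 1970.
    return value_1950 * (1970 - year) / (1970 - 1950)
```

So a hypothetical 1950 adjustment of 0.4 would be halved by 1960 and gone entirely from 1970 onwards.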
Don't know if this one is new or old; it's the second "oops, we lost the data" I've seen.
mkinstr.pro
;
; Get precipitation field (1961-90 means contain the land mask, since
; we're using the New and Hulme dataset which is complete by interpolation)
; *** IN FACT, IT DOES HAVE SOME MISSING VALUES IN IT, WHICH HAVE BEEN
; SET TO THE 61-90 MEAN. I HAVE NOW RESET THESE TO MISSING, SO I HAVE
; AN INCOMPLETE DATASET ***
;
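The operation the mkinstr.pro comment describes can be sketched as follows (a hedged illustration only: the missing-value flag, the tolerance, and the data layout are all assumptions, not taken from the CRU code):

```python
# Hypothetical sketch of the reset described in the mkinstr.pro comment:
# values that had been filled in with the 1961-90 mean are treated as
# placeholders and reset to a missing-value flag, leaving an incomplete
# dataset.
MISSING = -9999.0  # assumed missing-value flag

def reset_filled_values(values, mean_6190, tol=1e-6):
    """Return a copy of `values` with entries equal to the 1961-90
    mean (within `tol`) replaced by MISSING."""
    return [MISSING if abs(v - mean_6190) < tol else v
            for v in values]
```

For example, with a cell mean of 42.0, `reset_filled_values([10.0, 42.0, 7.5], 42.0)` would flag only the middle entry as missing.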
O/T
Harrabin being an anal orifice again.
"Met Office project that, barring a very cold December, this year will be the fifth warmest on record. "
http://news.bbc.co.uk/1/hi/sci/tech/8377128.stm
I have had some experience of working with 'soft' scientists before, and these emails and the style of code seem so familiar. Any experimental data are 'truncated', and the 'outliers' are removed, just for starters. Their main 'expertise' seems to be in simple, clumsy statistical manipulation, and one might even imagine that this obsession with statistical jargon is their way of compensating for their inadequacy in mathematics and the 'harder' stuff. But even if their methods are made public, it's pretty difficult to prove that their tortuous massaging of the data is 'wrong' per se.
In this email
http://www.eastangliaemails.com/emails.php?eid=1017&filename=1254147614.txt
the author says:
Clearly they are simply applying arbitrary 'corrections' to make the data fit their pre-conceived ideas. They are making it up as they go along.
The age of stupid?
http://www.youtube.com/watch?v=0j7kefwziz4
This is discussing the recent finding of a flaw in the sea surface temperature data around the 1940s. He's hypothesizing, as everyone did, how much this would make the early 20th century rise more explainable. That's why he picked the largest possible value, 0.15C to hypothesize with. As he says, it would make it more explainable, but there would still be a blip to explain even if it was corrected to that maximum degree.
The reply by Phil Jones renders it moot anyway: the correction wouldn't drop the early 20th century temperature rise after all; Phil suggests it would increase the period afterwards instead.
Hi Clot
Thanks for your explanation of the '1940s blip' email. I have to say, though, that as a disinterested observer, it appears that there isn't a temperature measurement that isn't "flawed" and that the CRU doesn't want to 'correct' in some arbitrary way - if the code is anything to go by.
On nomenclature: we should probably stop referring to CRU's "databases" and refer instead to their "datamidden".
@ Clot, "This is discussing the recent finding of a flaw in the sea surface temperature data around the 1940s." Can you tell us more? Who found the flaw? Where can I read more about it? Thank you in advance for any help you can give.
Dearieme
LOL!