Climate cuttings 38
There are a lot of climate related stories around at the moment, so I thought I'd wheel out the Climate Cuttings series once again.
First up is Roy Spencer, discussing a new paper by Lacis (Schmidt) et al. The authors seem to be trying to sideline the role of water vapour in the climate system so as to leave the road clear for carbon dioxide. Their results, however, appear to rest on the assumptions they make. Pielke Snr wonders why Science published the paper at all, unless for propaganda purposes.
Stephen Goddard looks at Hansen's 1988 predictions and finds that warming of 8 degrees in the Antarctic is probably somewhat (ahem) off the mark.
Jeff Id looks at the proxies from the recent Ljungqvist reconstruction and finds that the temperature pattern in the reconstruction is rather robust.
The Hal Lewis resignation story rumbles on. Andy Revkin has taken a potshot at Lewis here, prompting a further response from another APS member, Roger Cohen, at WUWT.
Geoff Chambers, writing at Harmless Sky, notes the difficulties the Guardian has got itself into over research funded by oil companies.
I'm rather late to this one, but the environment spokesman for Germany's governing CDU/FDP coalition has come out as a sceptic, referring to climate change as an ersatz religion. The greens are not happy.
And lastly, help Steve McIntyre's blog win the title of Canada's top science blog by voting here.
Reader Comments (33)
Voted but I note that the results are hidden. (Why?)
Sorry to be pedantic, but isn't it Steven Goddard, not Stephen...
Glad to see the Bishop segue to real science (Pielke Snr) rather than relay some op-eds of the past few weeks.
The Bish is a favourite read. I'm only unhappy on the odd occasion when emotion and spin are relayed.
Here's a cutting. It has not been through the Science Media Centre.
Nuclear science versus climate science.
Here is a snapshot of how it was done in the 1930s.
James Chadwick was interned in Germany during WWI. In January 1932 he read a Curie paper mentioning a heavy particle. Immediately, he built apparatus resembling a discarded piece of plumbing and 3 weeks later announced the discovery of the neutron. This led to a publication - Chadwick, Sir James, “Possible Existence of a Neutron”, Nature, p. 312 (Feb. 27, 1932). A few evenings later he lectured to colleagues about this discovery of world importance, which led to a Nobel Prize in Physics in 1935. He had been elected a fellow of the Royal Society in 1927.
………………………………..
Why mention this?
1. Chadwick suffered personal discomfort from his work, and the senior Curies died from theirs.
2. Chadwick did his experiments before writing a note to Nature.
3. Chadwick told his colleagues of the work within weeks.
4. The Royal Society did not appear to influence Chadwick’s work.
5. There was no implication of impropriety levelled at Chadwick’s work.
6. The work was done at very low expense. Supercomputers did not exist.
7. His Nobel Prize has since been ranked among the top 100 in order of merit.
8. There was no serious challenge to the validity of his experiment or results.
9. Cockcroft and Walton built a rudimentary accelerator and split the atom on 14 April 1932, confirming Chadwick’s work 4 months after his idea.
Today, in climate science, we might expect –
1. Climate scientists are protected from many dangers discovered at the cost of scientists before them.
2. A preliminary announcement of an “advance” in climate science can be months to years before formal publication.
3. Some climate scientists have yet to release requested raw data from decades ago.
4. The Royal Society now sometimes expresses desirable social and economic outcomes from climate science work.
5. Accusations of impropriety are often levelled at the work of some climate scientists. Some have arguably confessed to it in “Climategate”.
6. Climate scientists seem more inclined to ask for funding before working.
7. The modern Nobel Prize has not always been seen as earned by deserving recipients.
8. Many climate science experiments and results are yet to be accepted and settled. Many experiments have not been definitive and inarguable.
9. Many climate science results remain to be validated by further work years after publication.
Note: The Chadwick story is told more fully in P.D. Smith, 'Doomsday Men', Allen Lane, 2007, whose account I have drawn on with acknowledgement here.
There's an interesting discussion about the APS response to the Hal Lewis resignation over at the Institute of Physics physicsworld.com website:
http://physicsworld.com/cws/article/news/44024#comments
Looks like a lot of physicists are disgusted with the APS response...
A general comment regarding the "modeling" papers discussed: a few days ago we had a discussion about numerical computing, that is, heavy-duty number crunching of the kind used in complex algorithms that perform massive numbers of multiplications, divisions and so on.
Unless very, very carefully done, such programs will rapidly accumulate massive amounts of error. Put it this way: if there is a 0.0001% error in each calculation, what is the error after 1 trillion calculations? Since 1 trillion calculations take just a few seconds on the most modern massively scalable computers used for such modeling work, imagine what the result of a couple of hours' running would be. It could be total garbage, and you have no way of knowing without verification against real-world data.
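To put a toy number on it (these figures are purely illustrative and have nothing to do with any actual climate model code): just summing a small constant ten million times in single precision already drifts visibly from the exact answer, while double precision stays far closer.

    /* Minimal illustration of accumulated rounding error (illustrative
       numbers only, not from any climate model). Summing 0.0001 ten
       million times should give exactly 1000; the single-precision
       total drifts because every addition rounds, and the rounding
       errors pile up. */
    #include <stdio.h>

    int main(void)
    {
        float  fsum = 0.0f;
        double dsum = 0.0;

        for (long i = 0; i < 10000000L; i++) {
            fsum += 0.0001f;   /* 32-bit accumulation */
            dsum += 0.0001;    /* 64-bit accumulation */
        }

        printf("float  sum: %.6f  (exact answer 1000)\n", fsum);
        printf("double sum: %.6f  (exact answer 1000)\n", dsum);
        return 0;
    }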
Every model simulation must be tested against real data. That is why they do wind-tunnel tests for aerodynamics, why they used to set off nuclear test blasts, and so on. They need real-world data to verify the code. What have they done to verify this climate modeling? Nothing that I can see. In fact, the paper you point to by Steven Goddard simply proves the point that Hansen's 1988 model was not valid.
"Garbage in, garbage out"
Geoff Chambers' post mentions a 'run-in' he had with the Moonbat and also with 'the snufkin' (Martin Porter from Derbyshire). According to Porter's blog, he and George attended a mysterious function in Wales called 'Dark Mountain' on 29th May this year. Described as a 'training camp for the unknown world', they were joined by Paul Kingsnorth, Dougald Hine and John Vidal. Not a 'tea-party', surely?
Toad,
CNN - An alternative eco-festival going against the 'green'
They've even got their own website: uncivilsation.co.uk
And a manifesto. It reads a little like moving eco-campaigning from saving the planet to saving civilisation.
Thanks Gareth, sounds like a sort of Glastonbury for those who are waiting for the Apocalypse: 'and over it all looms runaway climate change, which threatens to render all human projects irrelevant, climate change, which brings home at last our ultimate powerlessness'.
No wonder eco-fascists have no sense of humour!
Toad
It is very important to know the relationship between Kingsnorth and Monbiot. Kingsnorth is an extreme ecofascist who was assistant editor of the Goldsmith-owned (fascist) Ecologist. Monbiot's mentor, Crispin Tickell, was Thatcher's UN ambassador and wants to reduce the UK population by two thirds. His family are all extremely right wing.
Observation will show that Monbiot's articles are almost exclusively ecofascist, in that they seek to diminish, rather than improve, human activity.
Toad, Gareth, I've replied on the subject of the Snufkin and Dark Mountain at
http://ccgi.newbery1.plus.com/blog/?p=339
The Lacis paper is indeed simply a propaganda piece. The final sentence is a call to political action, and innumerable press releases have been prepared to spread the word. It is not "science"; it is just cynical manipulation. The scant scientific text indicates that the model parameters have been tweaked to give the expected output, based on the model's inherent assumptions. It is, of course, the same model that cannot "predict" past events, and therefore should not be trusted at all. It would be more interesting to discuss, for example, why the model cannot account for the impact of increased CO2 during the last decade, when the global temperature remained constant.
The gamble that Lacis, Schmidt, Mann, Jones et al have taken is that legislative changes on CO2 would bite before the climate trend changed for whatever natural-cycle reasons, so that they could claim to have saved the planet. (Not dissimilar to the shaman taking responsibility for a solar eclipse.)
Don P.
Writing from total ignorance (mine), is it possible that folks who do modeling involving millions of cumulative calculations are aware of the limitations of the hardware (and compilers) they run on, and build appropriate compensation into their algorithms?
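For what it's worth, the one compensation trick I have read about is "compensated" or Kahan summation, which carries the rounding error along explicitly and feeds it back into the next addition. A rough sketch of the idea (purely illustrative, not taken from any model code):

    /* Rough sketch of Kahan (compensated) summation: c carries the
       low-order bits lost in each addition and feeds them back into
       the next one. Illustrative only; real codes may use other
       safeguards (pairwise summation, higher precision, etc.). */
    #include <stdio.h>

    static float kahan_sum(const float *x, long n)
    {
        float sum = 0.0f;
        float c   = 0.0f;                /* running compensation */
        for (long i = 0; i < n; i++) {
            float y = x[i] - c;          /* correct for the previous error */
            float t = sum + y;           /* big + small: low bits of y are lost */
            c = (t - sum) - y;           /* recover what was just lost */
            sum = t;
        }
        return sum;
    }

    int main(void)
    {
        enum { N = 1000000 };
        static float x[N];
        for (long i = 0; i < N; i++) x[i] = 0.0001f;

        float naive = 0.0f;
        for (long i = 0; i < N; i++) naive += x[i];

        printf("naive sum: %f   Kahan sum: %f   (exact answer ~100)\n",
               naive, kahan_sum(x, N));
        return 0;
    }

(I gather it also has to be compiled without aggressive floating-point "optimisations", or the compiler will cheerfully simplify the correction term away.)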
Toad, E Smith
Kingsnorth on sharing his name with an evil coal burning power station, including the 'stop kingsnorth' t-shirts:
http://www.guardian.co.uk/commentisfree/2008/sep/11/law.climatechange
I loved that Lassie cartoon, so much so that I have printed it out and put it on my wall :)
The trouble with these CO2 hokum-science snake-oil salesmen is that their "science" is based on 19th-century ideas and 20th-century fantasies. A scientist in the pro-CO2, pro-AGW camp put it this way on the Lou Dobbs TV show (USA): the temperature was rising, and we studied all the different factors in the climate, and the only one that seemed to fit, and was rising in the same way, was CO2, so we looked to see if man was causing this rise in CO2... and so on.
This isn't science! As Judge Andrew Napolitano said on his Fox News TV show, this is a religion!
We are expected to believe dogmatic statements from people with vested interests, and when we ask for evidence of their claims, they can produce only hearsay statements, distorted and doctored data, and projected "results" derived from dubious computerised predictions: computer code containing such coders' remarks as "This is my rotten code again isn't it, I am not very good", "fudge factor", "my blunder again" and the like; the thousands of ramifications in the "Climategate" e-mails; and the Lord Oxburgh and Muir Russell apologias, which masqueraded as enquiries.
As the renowned Professor David Bellamy has said, "The science has simply gone awry".
See the well over a hundred full-length climate and related videos at the website The Fraudulent Climate of Hokum Science, where hokum climate science is exposed as fraudulent!
Re: Don Pablo de la Sierra
There is one experiment I would like to see done, and that is running one of the climate models with the code using 64-bit floating-point numbers, and then running the model again with exactly the same parameters but using 32 bits. It would be interesting to see how far into the future the models have to run before there is a divergence in the results.
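You can get a feel for the answer with a toy stand-in (a chaotic textbook iteration, the logistic map, not anything taken from a GCM): run the same recurrence in 32-bit and 64-bit precision from the same starting point and report when the two trajectories first disagree by more than 1%.

    /* Toy stand-in for the experiment above (not a climate model):
       iterate the chaotic logistic map x -> r*x*(1-x) in single and
       double precision from the same start, and report when the two
       trajectories first differ by more than 1%. Parameters are
       illustrative. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const float  rf = 3.9f;          /* even the constant is stored   */
        const double rd = 3.9;           /* slightly differently in each  */
        float  xf = 0.3f;                /* 32-bit trajectory */
        double xd = 0.3;                 /* 64-bit trajectory */

        for (int step = 1; step <= 200; step++) {
            xf = rf * xf * (1.0f - xf);
            xd = rd * xd * (1.0 - xd);
            if (fabs((double)xf - xd) > 0.01 * fabs(xd)) {
                printf("trajectories differ by more than 1%% at step %d "
                       "(float %.6f, double %.6f)\n", step, xf, xd);
                return 0;
            }
        }
        printf("no divergence of more than 1%% within 200 steps\n");
        return 0;
    }

On a toy like this the trajectories part company within a few dozen steps; how long a full model would take, and whether its statistics would also differ, is exactly what the proposed experiment would show.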
re TerryS
I'd bet not very far, especially after all the rounding errors, infilling and adjustments used in the training data.
Hairdryer, you mentioned that UEA had retained Luther Pendragon as PR consultants. What is your evidence for that other than their involvement with Muir Russell?
What is Psyience?
Re Steve McIntyre
It may have been from the CCE review notes, ie their involvement in the 4th February call, but I need to dig back further to see if they were engaged by UEA prior to Muir Russell, or as part of that engagement. I'm pretty sure I saw a reference prior to that, but can't find anything reliable at the moment.
TerryS
Generally, you want as much precision as you can get. But even with quad precision (128 bit) you still should check the result independently. Your computer program could still be in ga-ga land.
The problem is usually due to scaling. Floating-point hardware represents a number as an exponent and a mantissa. While this can accurately represent a wide range of numbers, addition and subtraction require the two numbers to be aligned to the same exponent, so the hardware has to shift the mantissa of one number or the other to the left or right. If it is shifted right, the lower bits of the mantissa are dropped; if shifted left, it is zero-filled. Do that enough and even quad precision is useless.
In the case of 32-bit floating point, you had to make goddamn sure that all the numbers were close to each other in magnitude. For example, adding 10000.001 and 0.000000000001 loses the small number entirely on the first operation, because the mantissa only carries about 7 decimal digits. You get roughly 15 to 16 with double precision, and about 34 with quad.
Now let's throw in mixed-mode arithmetic, where you have conversions between integer and floating point. You can have all the precision in the world and still get garbage, because conversion to integer truncates the fractional part: 1.999 becomes 1, not 2.
Now let's mix precision, say double and single precision in the same operation. The compiler may well convert the single-precision value to double, but it still carries only about 7 decimal digits of real accuracy -- garbage in, garbage out.
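Here is a small illustration of two of those failure modes, using the example numbers above (purely illustrative, nothing from any real model code):

    /* Two of the failure modes described above, using the numbers
       from the comment. Purely illustrative. */
    #include <stdio.h>

    int main(void)
    {
        /* Absorption: the tiny addend is far below the precision of
           the large one, so in single precision it simply vanishes. */
        float big  = 10000.001f;
        float tiny = 0.000000000001f;        /* 1e-12 */
        printf("float:  10000.001 + 1e-12 = %.9f\n", big + tiny);

        double dbig  = 10000.001;
        double dtiny = 0.000000000001;
        printf("double: 10000.001 + 1e-12 = %.15f\n", dbig + dtiny);

        /* Mixed-mode truncation: converting to integer drops the
           fractional part outright. */
        int truncated = (int)1.999;
        printf("(int)1.999 = %d\n", truncated);
        return 0;
    }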
As I said, when you are doing serious number crunching you must have very carefully crafted code, written by people who understand the fine points of how the compiler and hardware work, both when designing the code and when implementing it. The typical graduate-student programmer hasn't a clue. Just looking at the samples in the Harry Read Me file showed me that Harry was a rank amateur. And they want us to spend trillions to save the Earth based on that!
Software engineering is a serious engineering discipline for a good reason. If you want to ride a rocket to the moon, I would hope that the guidance program was written by a team of some of the most skilled software engineers, using solid and proven engineering principles, and tested thoroughly and rigorously, not whipped off one night by Harry during an "all-nighter".
@Don
I do not disagree with what you point out, but I look at it from a different perspective.
I started off on the PDP-8, progressing to the much underrated VAX/VMS. My first real programming task was to model take-off (V1, V2) speeds for aircraft based on many, many factors. My boss had been a senior guy on the Concorde project (essentially a paper-blueprint-and-slide-rule project). The problem with climate modelling is not a lack of consistent precision, but too much reliance on precision.
Before the charts I produced were published in the Pilots' Manual, they were of course verified. That task is probably a trivial one now, with the precision of calculations and curve fitting immensely improved, and those details are built into the planes' computers. But those "curves" still need to be verified.
Even though planes are now modelled before they fly, that first flight does not have 300 passengers on board. And test flights are made to verify the design parameters.
Cars are crash tested to destruction in a virtual world. Yet NCAP still wants to smash them against brick walls. And always will, no matter how wonderful the graphics of the virtual program.
And these are all rigid well defined structures with known coefficients for all the gases and materials involved. With highly developed modelling programs for finite element analysis and fluid dynamics.
Even in cases where they build something straight from the model, e.g. the finite element analysis of a large bridge, all that accuracy is somehow made redundant because the thing is over-engineered by a factor of 2 or 3.
I suppose, after all this, that whilst accuracy is an issue over long iterations and varying computational platforms, it is secondary to verification. And verification is difficult enough with known material and gas behaviour. How do you do it with the unknown behaviour of the climate over a short window in the planet's history? The fact that a weather forecast cannot be verified beyond, say, five days at most is a huge problem for the validity of current modelling...
Verification is THE problem with climate modelling...
(probably too early on a Monday morning to make myself clear.)
From the Pielke, Sr., blog article:
"However, this paper [Lacis et al 2010] perpetuates the narrow view that the since this gas does not condense and precipitate, it is THE dominate forcing since it 'sustains the current levels of atmospheric water vapor and clouds via feedback processes.'"
In a way, CO2 does 'condense and precipitate': it is soluble in rain and drops into the ocean, a local negative feedback.
Jiminy Cricket
Verification is THE problem with climate modelling...
We are in complete agreement. I was merely showing why one needs to know what the feck they are doing when writing such code. You are raising the much more fundamental question of whether they have the foggiest idea about WHAT they are modeling. I agree that they don't.
There is absolutely no question that every computation code needs very thorough verification. And that means PREDICTIVE verification as well.
PS I worked for DEC for about 8 years, mostly on the PDP-11 and VAX. I messed with the PDP-8 a bit, but was an RSX-11 A, B, C, D and M support specialist doing real-time programming, as we called it back then. I must have written a million lines of PDP-11 assembly code. Nice to find someone who knows what a PDP-8 was. Not many of us any more, I am afraid.
Jiminy Cricket, I started with a Data General Nova (though I thought it was less than 16-bit), then went to a PDP-8/e with 8K of core and an ASR-33. Then over a few years I promptly forgot all I had learned.
I liked DECs. I think by the time I started in computing, PDPs had been relegated to comms servers (DELNI or DEMSA?) fronting the 8840 cluster we had. Neat design compared to the IBM 3090 and Amdahl 5990 I also had to wrangle the network on. I still remember when our aircon failed and the rapid local warming prompted the MVS boxes to shut down, but the DECs kept plugging away, plus VMS was rather more user-friendly :)
I used a PDP-8 to solve differential equations for temperature profiles in nuclear fuel pellets. Programs stored on punched paper tape, looped in a sort of figure of eight shape between the fingers for storage. Added name/date etc of program by writing on the exposed end of the tape. What else can I recall? A machine so noisy, it was kept in a sentry-booth of sound-proofing material, in a glass-walled office that you only went into if you were working on the computer - too noisy for anything else. We also had time on a larger IBM computer 100+ miles away, shipping programs to it on punched cards, and getting fanfold output back from it about a week later by van. Unhappy days when a programming error meant the van brought back a large pile of paper fit only for reuse as a notepad.
Notes written to oneself on Hollerith cards, which would fit in the pocket protector with the mechanical pencils if you bought the right pocket protector.
Nothing like a good glass of Château de Chasselas, eh, Josiah?
You're right there, Obadiah.
Oh well, the full Monty.
add 6 inch K&E log-log duplex decitrig. or even better, the Pickett equivalent - gasp, aluminum.
Oh, the good old days!
But there is no going back. I found an IBM 360/370 emulator on-line that would run on a PC. I got it, along with VS, set it up as a 370/195 (hell, I have 4 gigabytes of main memory and 2 terabytes of disk on my puny PC, so I thought BIG) and fired it up. The emulator with the OS still ran several times faster than the original, and since there were no blinking lights, card reader chattering, and chain printer roaring away, I soon tired of it. What really got me was no 9-track mag tape drives spinning. It is just not the same having a little box sitting on the corner of your desk pretending it was a very large room of equipment.
And then there was the heat the damn things gave off. While at DEC in Maynard, we had an "office" cat who often slept on top of one of the PDP-11/70s during winter. It was nice and warm above the fans.
Ah, the clatter of Monroematics or Friedens churning out structural calcs. Remington marketed a typewriter named the Silentwriter. It didn't sell. Secretaries hated it - no clatter.
Before the discovery of 'ENTER' but after the disappearance of manual typewriters, I used to get asked what "carriage return" could possibly mean.