Oversensitive.org
Nic Lewis and Marcel Crok have just launched Oversensitive.org, a new website to document the responses to their GWPF report of the same name. Of particular interest is a post outlining Jonathan Gregory's claim that he had shown that the method used by Lewis underestimates climate sensitivity.
It seems that (surprise, surprise!) Gregory's case is based on the output of a climate model, although he neglects to say so. To put forward a hypothesis and to claim it as a proof is shoddy stuff, but all too predictable in the world of climate science.
Reading between the lines, it looks as if Gregory has misunderstood the Lewis method and is now rather stubbornly refusing to admit his mistake.
Reader Comments (11)
Last line needs an edit, Andrew. Cheers.
Off-topic, but related...
There is a new paper out (paywalled, unfortunately) suggesting a TCRE range of 0.8 to 1.0 K/TtC.
TCRE is the ratio of change in temperature, to the cumulative emissions of CO2. AR5 WG1 posited a near-linear relationship between temperature increase and cumulative CO2 emissions. (See figure SPM.10 or TFE.8 Figure 1) The slope, TCRE, was given as likely in the range of 0.8 to 2.5 K per TtC emitted. [Tt = teratonne = 1000 Gt = 1 Eg.] While the WG1 report did not give a "best estimate" (as far as I can tell), the TCRE implied by the figure is about 2.0 K/TtC. So a value in the range of 1.0 K/TtC, as this new paper provides, would be a significant reduction in sensitivity.
The authors have some discussion here. It seems to be based on a single model, so perhaps not a robust result. But intriguing nonetheless.
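For anyone who wants to see what these TCRE figures imply in practice, here is a minimal sketch of the near-linear relation described in the comment above, ΔT ≈ TCRE × cumulative emissions. The 1 TtC emissions total is purely illustrative (my assumption, not a figure from the paper or from AR5); the TCRE values are simply those quoted in the comment.

```python
# Back-of-envelope use of the near-linear TCRE relation:
#   delta_T ≈ TCRE (K/TtC) × cumulative emissions (TtC)
# The 1 TtC emissions figure is illustrative only; the TCRE values are the
# AR5 likely range / implied central value and the new paper's range as
# quoted in the comment.

def warming_from_emissions(tcre_k_per_ttc: float, cumulative_ttc: float) -> float:
    """Warming (K) implied by a given TCRE (K/TtC) and cumulative emissions (TtC)."""
    return tcre_k_per_ttc * cumulative_ttc

scenarios = {
    "AR5 likely low (0.8 K/TtC)": 0.8,
    "AR5 implied central (2.0 K/TtC)": 2.0,
    "AR5 likely high (2.5 K/TtC)": 2.5,
    "New paper low (0.8 K/TtC)": 0.8,
    "New paper high (1.0 K/TtC)": 1.0,
}

emissions_ttc = 1.0  # illustrative: one teratonne of carbon emitted
for label, tcre in scenarios.items():
    print(f"{label}: {warming_from_emissions(tcre, emissions_ttc):.1f} K of warming")
```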
Last line of post needs an edit, Andrew. Cheers.
HaroldW - is there a widely agreed definition of "robust"? - it's a term that seems often to be used in climate science.
What an apt name...
Good on Nic for the pushback; we shall see where this goes.
Martin A
It is creeping in everywhere; it seems to me to mean "strong, unassailable, we all agree, it must be correct"
or I could have misunderstood the gist :-(
Martin A -
Good question. In this context, what I mean is whether the result holds, or at least is modified only slightly, when weak assumptions are relaxed. For example, ECS is not currently constrained particularly narrowly -- if the model used has an ECS of 3 K/doubling, what happens in a similar model with ECS of 2 or 4? Or varying the assumptions made about carbon uptake...there are probably many more model parameters of relevance.
The authors write, "Structural model uncertainty (i.e. differences in model parameterizations) is larger than emissions pathway uncertainty for TCRE." So I imagine that the paper has already made sensitivity estimates for such key, but uncertain, parameters.
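As an illustration of the kind of parameter sweep HaroldW describes, here is a toy sketch of my own construction, not the paper's method. It assumes a crude logarithmic-forcing relation (warming = sensitivity per doubling × log2 of the atmospheric carbon increase), uses an "airborne fraction" as a stand-in for carbon-uptake assumptions, and takes a pre-industrial atmospheric carbon stock of roughly 589 GtC; it ignores transient ocean heat uptake, so the numbers only illustrate the procedure of checking robustness, not a real TCRE estimate.

```python
# A toy "robustness" sweep: does the derived TCRE move a lot when poorly
# constrained inputs (sensitivity per CO2 doubling, carbon uptake) are varied?
# Deliberately crude -- it ignores transient ocean heat uptake and non-CO2
# effects -- so it illustrates the procedure, not the paper's result.
import math

PREINDUSTRIAL_ATMOS_GTC = 589.0  # approximate pre-industrial atmospheric carbon stock (GtC)

def toy_tcre(sensitivity_per_doubling: float,
             airborne_fraction: float,
             cumulative_emissions_gtc: float = 1000.0) -> float:
    """Toy TCRE (K per TtC): warming divided by cumulative emissions."""
    atmos_after = PREINDUSTRIAL_ATMOS_GTC + airborne_fraction * cumulative_emissions_gtc
    warming = sensitivity_per_doubling * math.log2(atmos_after / PREINDUSTRIAL_ATMOS_GTC)
    return warming / (cumulative_emissions_gtc / 1000.0)  # convert GtC to TtC

for sensitivity in (2.0, 3.0, 4.0):            # K per doubling, as in the comment
    for airborne_fraction in (0.4, 0.5, 0.6):  # crude stand-in for carbon-uptake assumptions
        print(f"S={sensitivity} K/doubling, AF={airborne_fraction}: "
              f"TCRE ≈ {toy_tcre(sensitivity, airborne_fraction):.2f} K/TtC")
```

In this crude setup the implied TCRE varies by more than a factor of two across the swept values, which is exactly the kind of spread a robustness check is meant to expose and quantify.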
Bish writes:
This seems to be a fairly common - perhaps even "robust" - trait that can be observed in the words of members of the "In crowd": Mann, Lewandowsky, Allen, Gleick, Pachauri, Gergis, Stocker, Schmidt, Weaver and Klein are a few names that spring to mind. Although, to be fair, it should be acknowledged that there are a few among our own who also display this trait - albeit with far less consistency and frequency.
Or...stubbornness :)
Regards
Mailman
HaroldW - thanks for that. I'd agree that the normal meaning in a scientific context would be 'insensitive to details of assumptions, parameter values, etc.'. But in the context of climate it often seems to be used as an add-on bullshit word meaning something like 'not open to question', for example:
Not implying that your use came into that category, of course.
I think I'll open a discussion thread and see what comes up.
In software development, the term "robust" means that the software can withstand all tests that are "normal" and all or most "belligerency" tests. Normal tests are those which a user would typically perform. Belligerency tests stress the software (or try to) in ways that are not normally expected, such as loss of network, power failure, even virus or malware infection.

Since computer climate models cannot be easily tested, it's incorrect to say they are robust. When climate scientologists claim that nothing falsifies a climate model, because they are all already "wrong," they are poking robustness in the eye.

The only way climate alarmists will ever be able to claim true robustness is to stop playing around with minor statistical differences as compared to an unknown null hypothesis, and return to the scientific method. State your hypothesis. Make a prediction. Test whether the prediction occurs. If it does, you're on your way towards robustness, so long as other scientists can validate and repeat your results, and perhaps build on them. If your predictions are false, you're falsified and so you must move on. Which is where we are, I think, with the post-hoc analysis of the hypothesis that CO2 drives the Earth's climate and the myriad sloppy efforts to test that premise.
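For concreteness, here is a small sketch of the distinction drawn above between a "normal" test and a "belligerency" test. The example is my own and entirely hypothetical (the routine and test names are invented); it uses a trivial function that averages readings from some data source.

```python
# Hypothetical illustration of "normal" vs "belligerency" testing.

def average_readings(fetch):
    """Average the readings returned by fetch(); the fetcher may fail."""
    readings = fetch()
    if not readings:
        raise ValueError("no readings available")
    return sum(readings) / len(readings)

def test_normal_case():
    # "Normal" test: the kind of input a user would ordinarily supply.
    assert average_readings(lambda: [1.0, 2.0, 3.0]) == 2.0

def test_source_failure():
    # "Belligerency" test: deliberately hostile conditions, e.g. the data
    # source dies mid-run (standing in for loss of network, power failure).
    def dead_source():
        raise ConnectionError("network is gone")
    try:
        average_readings(dead_source)
    except ConnectionError:
        pass  # the failure surfaces cleanly rather than being swallowed
    else:
        raise AssertionError("failure was silently swallowed")

if __name__ == "__main__":
    test_normal_case()
    test_source_failure()
    print("both the normal and the belligerency test passed")
```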