John Shade posted this as a comment on the last post and commenters suggested that it was worthy of standing as a post in its own right. Having read it, I agree that it is well worth discussing.
One of the biggest breakthroughs in industrial statistics occurred in the 1920s when Dr Shewhart of the Bell Laboratories came up with a way to decide when it was likely that a cause of some of the observed variability in a system could be identified. He discovered that engineers and quality control managers, as well as machine operators, who ignored this method were liable to mount wasteful investigations into what they thought were odd or unacceptable data values, and almost inevitably to make changes to the production process from which the data came. Such interventions generally made the process worse, i.e. left it with more variability than it had before. There was a great effort in those days to reduce the noise in telephone connections, and part of this was aimed at reducing the variation from one telephone handset to the next. They dreamed of replacing the old phrase 'as alike as two peas in a pod' with 'as alike as two telephones'. But many well-intentioned efforts were making things worse.
The underlying notion is that in complex systems, such as manufacturing processes, there are a great many causes of variation – the people involved can generally come up with dozens at the drop of a hat. In a well-controlled system, this results in predictable outputs within ranges or limits. The inevitable zigs and zags observed when data is plotted and found to lie within these limits are all caused by something – they just look like random variation because so many factors are influencing the process in lots of little ways – but it would in general be very hard indeed to take a particular excursion, say the highest value observed last week, and find the reason or reasons why it happened. Such investigations are not only likely to be unproductive, they are liable to be harmful, since people charged with 'doing something' about a problem (that 'highest value' might be highly undesirable, for example) will generally find 'something to do' about it. Shewhart showed how that could often make matters worse if the process had not actually changed in any way when it produced the unusual-seeming value. Intervening in such circumstances – when last week's highest value (e.g. of a measured length of a manufactured part) was just ordinary variation – may simply add a new source of variation and leave the process worse than before.
The great practical value of his insights came in part from knowing when to leave a process alone, and in part from knowing when to mount an investigation to track down causes of change. In essence, his method was a signal detection system usable on the shopfloor, and it has been credited with a tremendous contribution to quality and productivity in the decades since.
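For readers who have not met it, the shop-floor version of Shewhart's signal detection can be sketched in a few lines of code. This is only an illustrative sketch, not Shewhart's own notation: it builds an 'individuals' chart, estimating the process spread from the average moving range and flagging only points beyond three-sigma limits – everything inside the limits is left alone. The data and function names are invented for the example.

```python
# Illustrative sketch of a Shewhart "individuals" control chart rule.
# Data and names are hypothetical; this is not a production implementation.

def control_limits(values):
    """Estimate 3-sigma limits from the average moving range,
    the usual approach for an individuals (X) chart."""
    n = len(values)
    center = sum(values) / n
    # Average absolute difference between consecutive points.
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    sigma = mr_bar / 1.128  # d2 constant for moving ranges of size 2
    return center - 3 * sigma, center, center + 3 * sigma

def signals(values):
    """Indices of points outside the limits -- candidates for a
    special-cause investigation. Points inside the limits are noise:
    investigating them is the 'tampering' Shewhart warned against."""
    lcl, _, ucl = control_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical part-length measurements: ordinary noise plus one odd value.
lengths = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 12.5, 10.1]
print(signals(lengths))  # only the ninth point (index 8) is flagged
```

The point of the rule is as much about what it does not flag as what it does: the ordinary zigs and zags stay inside the limits and prompt no intervention.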
Now industrial processes can be complex enough, but they are not as complex as the climate system, which has influential factors acting on a mind-boggling range of spatial and temporal scales. Furthermore, industrial process monitoring data can be of far higher quality than that which has been accumulated about climate. We also know that important factors such as orbital parameters do change, and that the system has had major transitions in the past between, most notably, periods with and without major long-lasting ice sheets. A simple monitoring system of the Shewhart kind would indeed allow the weather gods to note, using remote sensing, that something out of the ordinary had happened during such transitions. We could well do with some such system on the far shorter timescales of greatest interest to us – say of the order of a few decades. We are hampered by data quality and data sparseness problems, but producing a statistical model that would be widely accepted over such short timescales would be a highly desirable goal.
Those in the CO2 Control Knob camp need no such model. Observations which conflict with a cherished theory are a distraction. Theirs is a revealed truth which they are driven to share, and to 'get something done about'. They have, to pursue the industrial analogy a little further, won the attention of the management, and so all manner of memos and changes and 'improvements' are being instigated. We are to walk or cycle more. We are to switch off lights or use (toxic) lightbulbs that give off poor light but use less electricity. We are to build windmills in the factory grounds, put solar panels on the roof, and install diesel generators to cover the frequent occasions when neither provides adequate supplies. Meanwhile, important processes and urgent problems inside the factory are being neglected, and it looks like we might go out of business altogether.
Those who favour a calmer, and more scientific, approach cannot fail to notice that the CO2 Big Driver theory has not led to any improvement in predictive skill, and that there are many 'observational metrics' that contradict the simple-minded excursions of second-sight that the theory encourages in its followers. Such as snow being a thing of the past in the UK. Such as hurricanes getting 'worse', or tornadoes and other extreme weather events becoming more frequent in the States. Or sea levels rising in dramatically different ways from the past. Or polar ice sheets disappearing. Or Himalayan glaciers ceasing to be, and so on and on. Past and present data, though, can be brushed aside by the acolytes. The future data will be on their side. So they say.
So, merely observing that there is no convincing statistical evidence of anything extraordinary going on in key 'observational metrics' such as temperatures is to tread on the toes of the faithful. They are liable to get upset by any shift of attention away from the future to which they are so wedded. It is a threat, presumably, to their credibility amongst the wider public – so many of whom have yet to be converted to 'the cause'.