Friday, Jul 4, 2014

Lessons from the shop floor

John Shade posted this as a comment on the last post and commenters suggested that it was worthy of standing as a post in its own right. Having read it, I agree that it is well worth discussing.

One of the biggest breakthroughs in industrial statistics occurred in the 1920s when Dr Shewhart of the Bell Laboratories came up with a way to decide when it was likely that a cause of some of the observed variability in a system could be identified. He discovered that engineers and quality control managers, as well as machine operators, who ignored this method were liable to mount wasteful investigations into what they thought were odd or unacceptable data values, and almost inevitably make changes to the production process from which the data came. Such interventions generally made the process worse, i.e. with more variability than it had before. There was a great effort in those days to reduce the noise in telephone connections, and part of this was aimed at reducing the variation from one telephone handset to the next. They dreamed of replacing the old phrase 'as alike as two peas in a pod' with 'as alike as two telephones'. But many well-intentioned efforts were making things worse.

The underlying notion is that in complex systems, such as manufacturing processes, there are a great many causes of variation – generally the people involved can come up with dozens at the drop of a hat. In a well-controlled system, this results in predictable outputs within ranges or limits. The inevitable zigs and zags observed when data is plotted and found to lie within these limits do have causes – they only look like random variation because so many factors are influencing the process in lots of little ways – but it would in general be very hard indeed to take a particular excursion, say the highest value observed last week, and find the reason or reasons why it happened. Such investigations are not only likely to be unproductive, they are liable to be harmful, since people charged with 'doing something' about a problem (that 'highest value' might be highly undesirable, for example) will generally find 'something to do' about it. Shewhart showed how that could often make matters worse if the process had not actually changed in any way to produce an unusual-seeming value. Changing a process that had not in fact changed when it produced last week's highest value (e.g. of a measured length of a manufactured part) may just add a new source of variation that makes the process worse than before.

The great practical value of his insights came in part from knowing when to leave a process alone, and in part from knowing when to mount an investigation to track down causes of change. In essence, his method was a signal detection system usable on the shopfloor, and it has been credited with a tremendous contribution to quality and productivity in the decades since.

Now industrial processes can be complex enough, but they are not as complex as the climate system, which has influential factors acting on a mind-boggling range of space and timescales. Furthermore, industrial process monitoring data can be of far higher quality than that which has been accumulated about climate. We also know that important factors such as orbital parameters do change, and that the system has had major transitions in the past between, most notably, periods with and without major long-lasting icesheets. A simple monitoring system of the Shewhart kind would indeed allow the weather gods to note, using remote sensing, that something out of the ordinary had happened during such transitions. We could well do with some such system on the far shorter timescales of greatest interest to us – which are, say, of the order of a few decades. We are hampered by data quality and data sparseness problems, but a goal of producing a statistical model that would be widely accepted over such short timescales would be a highly desirable one.
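The kind of simple monitoring described above can be illustrated with a minimal individuals (XmR) chart – a sketch only, not Shewhart's full method. The data here is invented for illustration, and the 2.66 scaling constant is the standard XmR convention for turning the average moving range into 'natural process limits':

```python
# Sketch of a Shewhart individuals (XmR) chart: flag points outside
# "natural process limits" estimated from the average moving range.
# The constant 2.66 (= 3 / d2, with d2 = 1.128) is the standard XmR scaling.

def xmr_limits(data):
    """Return (centre, lower, upper) natural process limits."""
    n = len(data)
    mean = sum(data) / n
    # average absolute difference between successive points
    avg_moving_range = sum(abs(data[i] - data[i - 1]) for i in range(1, n)) / (n - 1)
    spread = 2.66 * avg_moving_range
    return mean, mean - spread, mean + spread

def signals(data):
    """Indices of points outside the limits -- candidates for investigation."""
    _, lo, hi = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lo or x > hi]

# A stable process: routine zigs and zags, nothing worth chasing.
stable = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.0]
# The same process with one genuinely unusual value appended.
shifted = stable + [13.0]

print(signals(stable))   # no signals: []
print(signals(shifted))  # the appended point is flagged: [10]
```

The point of the exercise is in the two outputs: the everyday wobble produces no signal (leave the process alone), while the odd value does (mount an investigation).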

Those in the CO2 Control Knob camp need no such model. Observations which conflict with a cherished theory are a distraction. Theirs is a revealed truth which they are driven to share, and to 'get something done about'. They have, to pursue the industrial analogy a little further, won the attention of the management, and so all manner of memos and changes and 'improvements' are being instigated. We are to walk or cycle more. We are to switch off lights or use (toxic) lightbulbs that give off poor light but use less electricity; we are to build windmills in the factory grounds, put solar panels on the roof, and install diesel generators to cover the frequent occasions when neither provides adequate supplies. Meanwhile, important processes and urgent problems inside the factory are being neglected, and it looks like we might go out of business altogether.

Those who favour a calmer, and more scientific approach, cannot but fail to notice that the CO2 Big Driver theory has not led to any improvement in predictive skill, and that there are many 'observational metrics' that contradict the simple-minded excursions of second-sight that the theory encourages in its followers. Such as snow being a thing of the past in the UK. Such as hurricanes getting 'worse', or tornadoes and other extreme weather events becoming more frequent in the States. Or sea levels rising in dramatically different ways from the past. Or polar ice sheets disappearing. Or Himalayan glaciers ceasing to be, and so on and on. Past and present data though can be brushed aside by the acolytes. The future data will be on their side. So they say.

So, merely observing that there is no convincing statistical evidence of anything extraordinary going on in key 'observational metrics' such as temperatures is to tread on the toes of the faithful. They are liable to get upset by any shift of attention away from the future to which they are so wedded. It is a threat, presumably, to their credibility amongst the wider public – so many of whom have yet to be converted to 'the cause'.


Reader Comments (24)

An excellent comment by John. It brings back memories of those statistics courses we occasionally had when I was gainfully employed in industry.

Jul 4, 2014 at 7:11 AM | Registered CommenterPhillip Bratby

A good perspective, except for one piece of mangled syntax – 'cannot but fail', which means 'will always fail'; and I'm pretty sure he didn't mean to say that.

Jul 4, 2014 at 7:16 AM | Unregistered CommenterPeter Milford

11 years ago I consulted to a sports car manufacturer which had a major problem with its body development, to the extent that the hatchet men from the ultimate owners were in to choose who would survive the cull. Shewhart analysis was being done in addition to the 6-sigma evolution normal in such industries.

I told them this was useless because as the missive from John Shade points out, Shewhart analysis is predicated on the factors giving the error being statistically independent of each other, so you can assume the frequency distribution of error is a Normal distribution.

Climate is nothing of the kind: everything is interconnected; it is not a 'component' or 'widget', it is a coupled engineering system whose stability is dependent on Proportional, Integral and Differential control where each has a range of time constants. The only sensible analysis is hard physics to displace the non-standard physics taught to Atmospheric Sciences, which they use as a loin cloth to protect their private parts as they worship the State Funded Cargo Cult, then to assemble the differential equations to give the range of outcomes at a given time from a perturbation.

Sorry, but Shewhart ain't on.

Jul 4, 2014 at 7:39 AM | Unregistered Commenterturnedoutnice

Excellent post by John Shade. "They have, to pursue the industrial analogy a little further, won the attention of the management, and so all manner of memos and changes and 'improvements' are being instigated."

Not to mention that the management has also become fantastically bureaucratic and top-heavy, with a maze of departments and sub-departments and committees and working groups - UNFCCC, UNEP, UNEA, IPCC, IPBES - all dedicated to initiatives like meat-free Mondays or paperless Wednesdays or sugar-free Fridays or redesigning the company logo to make it more Earth-friendly.

Jul 4, 2014 at 7:52 AM | Unregistered CommenterAlex Cull

As Shewhart would have predicted, agreement with whether Shewhart is on or not will form a normal distribution of agreement and disagreement.

Jul 4, 2014 at 7:56 AM | Unregistered CommenterLondon Calling

@London Calling: Shewhart On OR Not On is Bose-Einstein statistics....:o)

Jul 4, 2014 at 8:03 AM | Unregistered Commenterturnedoutnice

But the management do not understand Shewhart at all, because they run the rest of the various factories using arbitrary targets, central control and contingent incentives. No wonder they so easily accept the CO2 story.

Jul 4, 2014 at 8:23 AM | Unregistered CommenterSerge

@Serge: the real fun comes from the wicked analysis indicating their main control knob has near zero effect....:o)

Jul 4, 2014 at 8:28 AM | Unregistered Commenterturnedoutnice

It's worth a quick look at http://en.wikipedia.org/wiki/Walter_A._Shewhart

One thing that stood out for me was "he understood data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles)."

And instead of obsessing about the trend, people could benefit by wondering about variance.

Jul 4, 2014 at 9:14 AM | Unregistered CommenterJeremy Shiers

I was working in electronics manufacturing when TQM / Six Sigma became the methodology that was going to save the industry. We had presentations and courses on the history from the work of Deming and Juran to the latest stuff from Crosby, I can't actually remember any other names.

I do remember the confirmation of something that experience in test departments and failure investigations teach you the hard way is that changing one thing at a time and evaluating the result is the quickest way to identify the cause, even when there are two or more issues.

Jul 4, 2014 at 9:56 AM | Unregistered CommentersandyS

Good post, many thanks.

Climate is not a coupled engineering system; it is a coupled chaotic system which models can never simulate, especially if they start with the wrong premise.

Jul 4, 2014 at 9:58 AM | Unregistered CommenterJohn Marshall

@John Marshall: agree - 'engineering' is the wrong adjective; it should have been 'thermodynamic'. Nevertheless, chaotic is simply the description of a control system for which there are non-linear aspects leading to abrupt changes. To analyse this statistically you need to use different approaches, e.g. Weibull statistics.

However, the non-linear aspects of climate are superimposed on well-established engineering thermodynamic principles. This is why the latest research in Atmospheric Physics uses the coupling between irreversible intra-atmospheric thermodynamics and the irreversible thermodynamics of OLR. But they have got it wrong by claiming the atmosphere is a grey body emitter/absorber of radiation, when it is semi-transparent.

The latter is why it is akin to engineering systems, yet this has been rejected by Climate Science, which has therefore been led up a very expensive cul-de-sac from which there is no exit but full retreat, to 1965!

Jul 4, 2014 at 10:19 AM | Unregistered Commenterturnedoutnice

I ran some control charts when the Met Office was panicking about our recent winter rainfall. As you can see from the control charts on my wordpress blog, http://oldgifford.wordpress.com there didn't appear to be anything particularly abnormal apart from one recent blip.

Jul 4, 2014 at 10:41 AM | Unregistered CommenterAdran Kerton

sandyS:

I was working in electronics manufacturing when TQM / Six Sigma became the methodology that was going to save the industry. We had presentations and courses on the history from the work of Deming and Juran to the latest stuff from Crosby, I can't actually remember any other names.

I've never had the privilege of working in manufacturing but Deming was a key influence on the software thinker who most influenced me, from the mid-80s, Tom Gilb, whom I first came across through consultancy at Reuters on their vast and lucrative financial information systems (pre Bloomberg!)

I do remember the confirmation of something that experience in test departments and failure investigations teach you the hard way is that changing one thing at a time and evaluating the result is the quickest way to identify the cause, even when there are two or more issues.

That was the biggest lesson Tom took from Deming and from pioneering software projects at IBM and elsewhere, leading, according to the best historians, to the agile software movement. I've always been concerned that GCMs, because they take so long to run, cannot easily benefit from such techniques. Another reason I have my doubts about the wisdom of Doug Keenan's call in the discord-from-harmony thread yesterday. Well, in fact, my concerns should make me enthusiastic about increasing the power of the supercomputers involved tenfold, because that should make such testing techniques more feasible. But my other long-lasting concern has been how to apply the wisdom of crowds to these areas, as we've been benefiting from in some (not all) open source projects (sorry, users of OpenSSL!). I don't see many easy answers for climate modelling but thanks a lot for your reminders here.

Jul 4, 2014 at 10:43 AM | Registered CommenterRichard Drake

I was very moved to read the favourable remarks about the above comment here and under the original posting, and of course for the Hall of Fame entry, and for the above elevation to a post. The notion that you are on the same wavelength as others is a reassuring one, and encourages me to think my all too rare spells of analysis might be helpful and worth jotting down. Thank you all.

Peter Milford (7:16AM) is right about the mangling. The comment was quite quickly written, and posted too quickly – I used the grace period for edits several times to correct typos and worse. But I didn't get them all. To modify a well known remark, if I'd known the comment was going this far, I'd have taken better care of it.

'turnedoutnice (7:39AM)' raises several points, and points to interconnections within the climate system such that stability depends on various types of feedback controls. I do not agree with his assertion that Shewhart's core method requires the presumptions of the Normal distribution (and Jeremy Shiers' comment at 9:14 AM backs me up). The method works pretty well for process data of all kinds of shapes, or even lack of simple shape – and by 'working' I mean that the approach is practicable, informative, and gives good guidance re the condition of a process and opportunities for improving it. Wheeler makes a detailed case for this here. I also do not agree that related statistical methods are not relevant/helpful for systems with feedbacks. Box and others provide an excellent introduction to ways in which simple graphical methods can complement Shewhart charts when dealing with processes with known adjustment mechanisms (feedback controls). John Marshall (9:58AM) notes that there are chaotic aspects to the climate system which bring problems for simulations and I think this also weakens 'turnedoutnice's claim that 'hard physics' ought to do the trick. But see the comment at 10:19 AM which finds common ground.

Adran Kerton links to some Shewhart charts he created using UK precipitation data. His first chart suggests at least two distinct periods, and his second chart looks at the second of those. At the very least, such charts are good conversation pieces which can trigger productive discussions about what is going on. This is one of their strengths.

Jul 4, 2014 at 11:03 AM | Registered CommenterJohn Shade

@John Shade: the reason why I made the analogy of a sports car manufacturing process with climate is because to solve the problem of body distortion I had to find out why a casting, the largest ever tried in the automotive industry, was distorting. Because this was used as the datum for the rest of the assembly, the non-random distortion led to subsequent non-random distortion of the rest of the body.

The solution was very simple, but required going back to basic heat transfer physics; a trivial die modification reduced distortion dramatically; the project was allowed to continue. It's the same with climate science; they have got the heat transfer badly wrong. Because of this they add in 'distortion control' in the modelling which they hope will fix the problem, a warming bias in that part of the GCMs.

However, because they are based on chaotic systems, with the warming bias solved by cheating in hind-casting, the project is irretrievably lost. But to fix the key mistake would mean that the politicians would lose positive feedback and the Thermageddon scare story!

There is no way out except to correct mistakes made in 1965 by Sagan and Pollock in their attempt to analyse the Venusian atmosphere, which paper created the Enhanced GHE concept. Statistics can't solve a basic out-of-control process whether that is in manufacturing or mathematical modelling.

Jul 4, 2014 at 11:22 AM | Unregistered Commenterturnedoutnice

@ Jul 4, 2014 at 11:22 AM | turnedoutnice

Interesting example. I know of another instance of a suspension casting (for a light truck) that passed all its tests as a prototype but failed catastrophically (it killed the test driver) as a pre-production part. The cause was eventually traced back to a change in the supplier's casting process, between the development prototype and the production part, which changed the stress characteristics of the casting. That was discovered after an exhaustive, step by step analysis of what had changed between production and prototype using Kepner-Tregoe analysis techniques, in which the management and engineers had been trained and which the chief engineer freely acknowledged.

Jul 4, 2014 at 12:07 PM | Unregistered Commenteroldtimer

+10.
This is an excellent and insightful post.

Jul 4, 2014 at 1:13 PM | Unregistered Commenterhunter

The use of control charts is valuable/essential in certain areas of science and measurement. It is important to understand what you are trying to achieve and what method is going to give the information that is needed.

Thus, (Shewhart) Control Charts are widely used in chemical analysis. With each batch of samples a "control sample" is included. This has a known concentration but is treated in exactly the same way as the samples to be determined. The value of the control sample is plotted on a control chart and compared with estimates of the standard deviation of the method (typically aided by lines drawn at ±2SD and ±3SD). By applying simple, pre-determined rules it can be seen whether the analytical process is no longer "under control", that is, whether something has "gone wrong" in the application of the analysis method. An investigation is then carried out. Possible "wrongs" might be a new batch of reagents or a mistake in the application of the method by an inexperienced analyst.

Whatever the reason, the use of the control chart gives an early warning of a problem which needs to be assessed and fixed. The values for the unknown samples in the batch are scrapped and the analyses repeated once the problem with the method is determined and corrective action taken. There are British and International standards that give a step-by-step guide to the use of control charts in this way.
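The procedure just described might be sketched like this – the two rules and all the numbers below are illustrative choices only, not taken from any particular British or International standard:

```python
# Illustrative check of a control-sample series against pre-set rules:
# flag a point beyond +/-3SD, or two consecutive points beyond +/-2SD
# on the same side of the known value.

def check_control(values, known, sd):
    """Return a list of (index, rule) alerts for a control-sample series."""
    alerts = []
    for i, v in enumerate(values):
        dev = (v - known) / sd  # deviation in units of the method's SD
        if abs(dev) > 3:
            alerts.append((i, "beyond 3SD"))
        elif i > 0:
            prev = (values[i - 1] - known) / sd
            if (dev > 2 and prev > 2) or (dev < -2 and prev < -2):
                alerts.append((i, "2 consecutive beyond 2SD"))
    return alerts

# Control sample with known concentration 50.0 and a method SD of 1.0.
series = [50.3, 49.6, 50.1, 52.4, 52.6, 49.9, 53.8]
print(check_control(series, known=50.0, sd=1.0))
# [(4, '2 consecutive beyond 2SD'), (6, 'beyond 3SD')]
```

An alert would trigger the investigation described above: scrap the batch results, find the cause, repeat the analyses.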

Alan Bates, retired power station chemist

Jul 4, 2014 at 9:14 PM | Unregistered CommenterAlan Bates

These concepts were old news when I was appointed Chief Geochemist in a very successful mineral exploration company in 1973. My first official report was about quality control of chemical analyses.
What is some new that the topic surfaces again here?
Is the main point the lack of use of such techniques by climate workers?

Jul 5, 2014 at 6:18 AM | Unregistered CommenterGeoff Sherrington

So to summarise;

The road to hell is paved with good intentions.

That should do it.... :)

Jul 5, 2014 at 11:02 AM | Unregistered CommenterRightwinggit

Geoff,
Could you take another cut at your second to last sentence?

Jul 5, 2014 at 12:41 PM | Registered Commenterjferguson

John Shade,
The great gift you've given is the introduction of a pertinent activity to those of us who were unaware of it - a bit like recommending a useful book.

Maybe there will be readers who are familiar with a discipline addressed to determining whether a problem is soluble. I had an all too brief (spouses objected to too much talking shop) lunch with a fellow who had a PhD from Brown in this work and who had spent a career at the Navy Torpedo Factory pursuing it.

I didn't get to how one decided that a problem was susceptible to this sort of analysis, whether it was limited to those which could be mathematically described, or maybe earlier in their consideration when one was trying to grasp whether the thing could be mathematically described. Alas, I've lost contact.

He did share one bit that I was surprised by. Some of you may remember the scandal during the unpleasantness in the forties regarding a torpedo type (Mark III, was it?) which failed to detonate on striking a targeted ship - (well if not targeted, at least in the way). These torpedoes were designed to detonate on encountering the magnetic field of the target as they ran underneath having been set to run at a depth appropriate to the target. Few ships had effective armor on the bottom and the thought was that more bang for the buck could be realized with this scheme.

Accordingly, there had been no real provision for setting the things off if they actually ran into the target. The political reaction was to fix the contact detonator, while the real problem went unaddressed - the inability to accurately detect depth while under way.

Is the orderly determination as to whether the effect of a single perturber on our climate is a soluble problem susceptible to this activity?

Jul 5, 2014 at 1:28 PM | Registered Commenterjferguson

I am a complete newbie to this site who has much experience in computer modelling especially of the statistical variety.

John Shade's article is of immense value to science in general. Putting something new together at the invention stage has already involved much trial and error, but as long as the idea works as we wanted it to we may tend to forget the things that didn't work as we admire what we may intend to have produced on a larger scale. As someone who programs computers, I know there are many conflicting views about what you want a program to do at the design stage. One person may want ease of use whereas another may want freedom of access to all the data, such as reports on the fly. In the early days of computing the capacity of the machine dictated most of what could be done, and whilst we have much more powerful computers now the same rule holds tight. Computer languages, such as "C" and its derivatives, make much larger programs viable since routines can be switched in and out, and databases can be constructed in such a way that data entry and access to it is fast and relatively simple. Because "C" is modular at many different levels it is possible to write code that can serve many different purposes in many different programs. As an example, a calendar entry system with date and time notification can be constructed which simply cannot allow a non-existent date (historically or in the future). That is no mean feat considering calendars have changed so many times, and time has only become standardised in the past century. In essence this routine can be tested for accuracy and never fail.
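That date-validation idea can be sketched with a standard-library check – a hypothetical helper shown only to illustrate the "cannot allow a non-existent date" point (it uses the proleptic Gregorian calendar, so it sidesteps the historical calendar changes mentioned above):

```python
from datetime import datetime

def is_valid_date(year, month, day):
    """True only for dates that actually exist (proleptic Gregorian calendar)."""
    try:
        # datetime refuses to construct impossible dates, raising ValueError
        datetime(year, month, day)
        return True
    except ValueError:
        return False

print(is_valid_date(2012, 2, 29))  # 2012 is a leap year: True
print(is_valid_date(2014, 2, 29))  # no Feb 29 in 2014: False
```

The routine is exhaustively testable precisely because the rule it enforces is fully specified – which is the contrast being drawn with climate models.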

But not all programs are made equally, and not all problems fit neatly into a computer brain. Think of music and video in this digital age and you see compromise in serious action, leaving our senses to fill in some of what is missing, conveniently knowing our brains do that really well when it is our eyes we are using. But could we ever design a computer system to act like a human eye, which sees things that are not there, or makes believe it did see something?

I have no idea what is causing climate to change and whilst I trust science to help me find out I don't get the feeling it actually does know the cause, and, therefore, Mr Shade's item places the spotlight fairly and squarely on how good our models are and how that measurement is made. If we are going to rely on models to predict the future accurately then they should be able to predict the past with absolute accuracy, and, of course, they do not. Think about the calendar problem and multiply it by something much closer to infinity and you have the size of the problem we are asking computers to deal with. There must be compromise, and it will be ever greater the higher the resolution of the data gets. We are getting to know what we don't know but everyone should admit that getting to the final solution is going to take a very long time.

Jul 6, 2014 at 4:10 AM | Registered Commenterandrewdavid
