Wednesday, October 14, 2009
Six Sigma Metrics...Why?
I have a client who is in a tizzy over whether they are recording short-term or long-term process sigmas. They are not high-speed, high-volume manufacturers, so I told them to just use DPMO and not worry about process sigma. It's not an intuitive metric anyway, nor is it anything like an accurate estimate of what to expect. And once you start putting "long-term" and "short-term" qualifiers on it, you end up with really unhelpful, non-intuitive discussions. I have yet to see an explanation of "short-term sigma using long-term data" that helps anyone understand anything.

I'm not quite in the school of the purists who believe that, if you have a process in control, you don't have any undetected shifts; I can show you any number of real data sets where shifts slip through undetected...even using all four Western Electric zone tests. Still, in the interest of keeping some standard metric in common with the rest of the Six Sigma world, I have had all my clients who wanted process sigmas use the standard Motorola tables or the calculator at isixsigma.com. That calculator takes your number of defects and opportunities and looks up a sigma, applying the 1.5 sigma shift. The calculator's stated assumptions say that it takes long-term data but provides a short-term sigma. Why should we care, and how does that terminology help anyone or make sense? Long-term data are supposed to be data that come from a process that has run long enough for some shifts to have taken place. If we're going to talk long-term, the operational definition we should use is "data that display an overall standard deviation significantly different from their local dispersion statistics."

What all of this is intended to provide is an estimate of what we can expect from a process in the future, given a stable process. Isn't the idea derived from a capability study? Essentially, a Cpk of 2 (process mean six sigma units from the nearest specification limit) equated to "Six Sigma quality." If you buy the Motorola 1.5 sigma shift, then you gig yourself a sigma and a half, so it's really 4.5 sigma; instead of 2 ppb defective, you get 3.4 ppm. Now, a lot of people object to predicting parts per million non-conforming from a capability study. I'm not one of them, as long as everyone involved realizes that we can't take any of those predictions too literally or assign them too much precision. If I were claiming a process sigma of 6, and the count of defectives in the next million opportunities turned out to be 5 (or even 10 or 15), I wouldn't re-assess my sigma.

Another thing I've seen lately is using a transformation to turn perfectly good data into Poisson data, then deriving a DPMO and a sigma from that. Now there's a great example of what Don Wheeler would call "a victory of computation over common sense." If I have 100 defective parts in a run of 1,000, I have 10 percent defective. Assuming that 10% is stable over time, and one opportunity per unit, that equates very simply to a DPMO of 100,000. The opposite direction does make sense to me: if I'm getting more than one defect per unit (so I probably have Poisson data), it makes sense to transform the data using e^-DPU; that gives me an approximation to the binomial that lets me estimate DPMO. (There's a quick sketch of that arithmetic below.)

My biggest question, though, is why--other than to comply with a very short-lived tradition--should we use process sigma at all? It certainly doesn't provide any more information than DPMO, and we always have to translate it back into DPMO anyway.
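Here is that sketch, in Python. The defect counts are made up purely to show the arithmetic; the point is that the percent-defective case needs no transformation at all, while e^-DPU estimates the fraction of units that come through with zero defects.

```python
import math

# Simple case from above: defectives, one opportunity per unit.
defectives, units = 100, 1000
p_defective = defectives / units           # 0.10
dpmo_simple = p_defective * 1_000_000      # 100,000 -- no transformation needed

# Poisson case: more than one defect possible per unit (counts are hypothetical).
# If defects per unit (DPU) is stable, the chance a unit has zero defects is
# about e**(-DPU), so the fraction of units with at least one defect is
# about 1 - e**(-DPU).
defects, units_inspected = 1500, 1000
dpu = defects / units_inspected                    # 1.5 defects per unit
p_at_least_one_defect = 1 - math.exp(-dpu)         # ~0.777
ppm_defective_units = p_at_least_one_defect * 1_000_000

print(f"{dpmo_simple:,.0f} DPMO (simple percent-defective case)")
print(f"DPU = {dpu}: roughly {ppm_defective_units:,.0f} ppm of units have a defect")
```

The Poisson-to-binomial direction at least answers a question I care about (how many units have at least one defect); going the other way doesn't tell me anything I didn't already know.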
If we must continue to use process sigma, can we please just can all the short-term/long-term stuff and (in a nod to standardization, even if it's controversial) simply assume that shift happens, calculate DPMO from a stable process, and look the sigma up in the table? Maybe I'm wrong, and there is some really great reason to keep doing this, but I need an explanation and justification that makes sense. It seems to me that a lot of this is arbitrary and unnecessary.
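For the record, here is a sketch of what I understand the standard tables (and the isixsigma.com calculator) to be doing behind the scenes: convert the DPMO to a normal quantile, then hand the 1.5 sigma back. This is just my own illustration in Python with scipy, not anyone's official code.

```python
from scipy.stats import norm

def process_sigma(dpmo, shift=1.5):
    # Normal quantile for the observed defect rate, plus the assumed
    # 1.5-sigma shift that the standard tables build back in.
    return norm.isf(dpmo / 1e6) + shift   # isf = inverse survival function

print(round(process_sigma(3.4), 2))       # ~6.0: 3.4 DPMO maps to "six sigma"
print(round(process_sigma(100_000), 2))   # ~2.78 for the 10%-defective example

# The other direction: defect rates for a mean six sigma units from the
# nearest spec limit, with and without the 1.5-sigma shift.
print(norm.sf(6 - 1.5) * 1e6)   # ~3.4 ppm (shifted)
print(norm.sf(6) * 1e9)         # ~1 ppb per tail, ~2 ppb with two-sided specs
```

If that is all the table is doing, then the sigma column carries no information the DPMO column doesn't already carry--which is exactly the question above.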