Friday, November 20, 2009

More on Six Sigma Metrics

There is an excellent article in this month's Six Sigma Forum magazine about the process sigma. The authors have examined a lot of the literature about the metric and come to some very interesting (and, in my opinion, accurate) conclusions. Six Sigma Forum offers an email address to respond to each article; this was my response:

I was very excited to see this article. I have been questioning this for years, and had just started doing some research with a view toward writing a similar article. I heartily agree with most of the authors’ points. I think they did a great job of catching at least the high level of the controversy and their enumeration of the advantages, disadvantages and myths should be made required reading for anyone in a Six Sigma role, especially Master Black Belts and Black Belts.
An area to which I had planned to give a bit more attention is the Statistical Process Control component of the metrics equation. Before you can make any assumptions about capability or process performance over time, you must measure it over time, and it must display a reasonable degree of statistical control. Only then do you have the homogeneity of data that makes any assumption about an underlying distribution valid.
A foundation in SPC also sharpens the focus of the discussion surrounding the shift and the short-term/long-term question. While it is possible for a process in a state of statistical control to have some underlying shifts that go undetected on Shewhart charts, a sustained mean shift of 1.5 sigma will almost certainly be detected within 10 subgroups of its onset, if the four most common Western Electric Zone Tests are applied.
Now, if you’re taking four samples per day for your monitoring subgroup, from a high volume operation—say, 2,000 units per hour—that might mean 9 days before the signal shows up; you’d have run approximately 64,000 units from a process whose mean had shifted. In those situations, CUSUM or other schemes more sensitive to gradual sustained shifts might be more appropriate.
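To make the detection claim concrete, here is a minimal simulation sketch. The data are hypothetical (subgroups of four drawn from a standard normal, with the mean shifted up 1.5 sigma partway through), and the function name and rule thresholds are my own framing of the four common Western Electric zone tests, not anyone's official implementation:

```python
import random
import statistics

def western_electric_signal(means, center, se):
    """Index of the first subgroup mean flagged by the four common
    Western Electric zone tests, or None if no rule fires."""
    z = [(m - center) / se for m in means]
    for i in range(len(z)):
        if abs(z[i]) > 3:                                  # Rule 1: one point beyond 3 sigma
            return i
        if i >= 2:
            w = z[i - 2:i + 1]                             # Rule 2: 2 of 3 beyond 2 sigma, same side
            if sum(v > 2 for v in w) >= 2 or sum(v < -2 for v in w) >= 2:
                return i
        if i >= 4:
            w = z[i - 4:i + 1]                             # Rule 3: 4 of 5 beyond 1 sigma, same side
            if sum(v > 1 for v in w) >= 4 or sum(v < -1 for v in w) >= 4:
                return i
        if i >= 7:
            w = z[i - 7:i + 1]                             # Rule 4: 8 in a row on one side of center
            if all(v > 0 for v in w) or all(v < 0 for v in w):
                return i
    return None

# Hypothetical process: 40 subgroups of n=4 from N(0, 1), with the mean
# shifted up by 1.5 sigma starting at subgroup 20.
random.seed(1)
n = 4
se = 1 / n ** 0.5                      # standard error of the subgroup mean
means = [statistics.fmean(random.gauss(1.5 if i >= 20 else 0.0, 1)
                          for _ in range(n)) for i in range(40)]
detected = western_electric_signal(means[20:], center=0.0, se=se)
print("subgroups after the shift until a signal:",
      None if detected is None else detected + 1)
```

With subgroups of four, a 1.5-sigma shift in the process mean is a three-standard-error shift in the subgroup means, which is why the zone tests catch it so quickly.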
Having said all that, though, it’s unlikely that shifts of that sort will go completely undetected in a well-monitored process. What we are essentially saying is that some assignable-cause variation is going to show up randomly, and for time periods too short to be detected by our charts. In that case, I believe that the local measures of dispersion used for control chart factors provide a reasonable way to operationally define short-term and long-term variation, if you must. R-bar/d2 and S-bar/c4, used to calculate control limits, provide very good estimates of short-term (within-subgroup) variation. Comparing that estimate with the standard deviation for the entire set of data will reveal whether any significant shifting has taken place. This would provide a fairly unambiguous test for shifts. Whether and how you want to define and measure the magnitude of any shift detected using this method could be another discussion; the fact that this argument is taking place without a method is another source of confusion.
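The shift test described above can be sketched in a few lines. The data below are hypothetical, the function name is mine, and the d2 values are the standard bias-correction factors from the control chart constant tables:

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}   # standard d2 bias factors

def shift_check(subgroups):
    """Compare the short-term sigma estimate (R-bar/d2, within-subgroup)
    with the overall standard deviation of all the data pooled together.
    A ratio well above 1 suggests the mean has shifted between subgroups."""
    n = len(subgroups[0])
    r_bar = statistics.fmean(max(s) - min(s) for s in subgroups)
    short_term = r_bar / D2[n]
    overall = statistics.stdev([x for s in subgroups for x in s])
    return short_term, overall, overall / short_term

# Hypothetical data: identical within-subgroup spread; in the second set,
# the last five subgroups have shifted up by two units.
stable = [[10.1, 9.8, 10.0, 10.2]] * 10
shifted = stable[:5] + [[12.1, 11.8, 12.0, 12.2]] * 5
print(shift_check(stable))    # ratio near 1: no evidence of shifting
print(shift_check(shifted))   # ratio far above 1: the mean has moved
```

The same comparison works with S-bar/c4 in place of R-bar/d2 for larger subgroups.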
It seems that we not only have to discriminate between short and long-term sigma, but we have to have a “short-term sigma assuming long-term data” and “long-term sigma.” Apparently, we can also have negative process sigmas, with DPMO greater than one million! Just look at the commonly-used sigma calculator at www.isixsigma.com, and click on the link for more information about the calculations. Their explanation, that sigma is just a z-score, shows how far we have come from an understanding of capability in some of these discussions. I think most of this falls under Wheeler’s category of victories “of computation over common sense.”
I can understand if you want to gig yourself 1.5 sigma to make your process sigma align with the one in all the tables, accepting the Motorola shift. What I don’t understand is why you would then decide that, in the longer term, it’s going to shift another 1.5 sigma (this seems to be the logic behind the “benchmark Z” used in some software packages these days). So…now six sigma is actually three sigma by default? What’s the point?
I recently taught a Six Sigma certification exam prep course for my local ASQ chapter, and the primer for that course—a popular reference used by a huge number of applicants for that ASQ certification—suggested that, given binomial data (a proportion defective), you should use a log transformation to force the data into an approximation of the Poisson. I don’t know why anyone would do this, unless you really have to be able to have that negative process sigma and a DPMO of more than one million. For years, I have been doing just the opposite: transforming Poisson data to binomial using e^-DPU to estimate DPMO.
This brings up another excellent point from your authors: we need to figure out how we are going to count units and opportunities. My own belief is that we should limit ourselves to definitions that end up providing an estimate of proportion defective. An opportunity, in that case, would be the most discrete thing we could count; in other words, there could be no more than one defect per opportunity. This would get rid of the “negative sigma” nonsense.
I strongly endorse their recommendation that we use DPMO. The procedure they outline for finding DPMO is straightforward and useful. Calculating DPMO this way would provide a reasonable estimate. If we want to err on the side of safety, we might continue to use the 1.5 sigma shift for high-volume processes, or no shift for lower-volume processes. DPMO is a more intuitive metric, and would keep people from having to go to the table to translate DPMO to process sigma, and then decode it again later for anyone who wants to know what it means. That’s unnecessary rework, something we’d all like to avoid.
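As a sketch of the arithmetic behind the table lookup (my own function names, not any particular vendor's calculator): DPMO is just defects over total opportunities, scaled to a million, and process sigma is the z-value for that defect rate plus the assumed 1.5-sigma shift. Note that a DPMO above 500,000 gives a negative z, which is where the "negative sigma" nonsense comes from once the shift is stripped away:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def process_sigma(dpmo_value, shift=1.5):
    """The conventional table lookup done directly: the z-value for the
    observed defect rate, plus the assumed 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

d = dpmo(defects=34, units=10_000, opportunities_per_unit=1)
print(d, round(process_sigma(d), 2))   # 0.34% defective comes out around 4.2 sigma
```

A sanity check: 3.4 DPMO comes back as a process sigma of 6, matching the standard tables.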

Wednesday, October 14, 2009

Six Sigma Metrics...Why?

I have a client who is in a tizzy over whether they are recording short-term or long-term process sigmas. They are not high-speed, high-volume manufacturers, so I told them to just use DPMO and not worry about process sigma. It's not an intuitive metric anyway, nor is it anything like an accurate estimate of what to expect. And once you start putting "long-term" and "short-term" stuff in, you end up with really stupid, non-intuitive discussions. I have yet to see an explanation of "short-term sigma using long-term data" that helps anyone understand anything.

I'm not quite in the school of the purists who believe that, if you have a process in control, you don't have any undetected shifts. I can show you any number of real data sets that prove otherwise...even using all four of the Western Electric Zone tests. In the interests of trying to maintain some standard metric with the rest of the Six Sigma world, I have had all my clients who were interested in using process sigmas use the standard Motorola tables or the calculator at isixsigma.com. This calculator takes your number of defects and opportunities and looks up a sigma, applying the 1.5-sigma shift. The assumptions for the calculator say that it assumes long-term data but provides a short-term sigma. Why should we care, and how does this terminology help anyone or make sense?

Long-term data are supposed to be data that come from a process that has run long enough for some shifts to have taken place. If we're going to talk long-term, what we should use for an operational definition is "data that display a significantly different overall standard deviation than that of their local dispersion statistics." What all this is intended to provide is an estimate of what we can expect from a process in the future, given a stable process. Isn't the idea derived from a capability study? Essentially, a Cpk of 2 (process mean six sigma units from the nearest specification limit) equated to "Six Sigma Quality."
If you buy the Motorola 1.5-sigma shift, then you gig yourself a sigma and a half, so it's really 4.5 sigma; instead of about 2 ppb defective, you get 3.4 ppm. Now, there are a lot of people who object to predicting parts per million nonconforming from a capability study; I'm not one of them, as long as everyone involved realizes that we can't take any of those predictions too literally or assign too much precision. If I were claiming a process sigma of 6, and the count of defectives in the next million opportunities turned out to be 5 (or even 10 or 15), I wouldn't re-assess my sigma.

Another thing I've seen lately is using a transformation to turn perfectly good binomial data into Poisson data, then deriving a DPMO and sigma from that. Now there's a great example of what Don Wheeler would call "a victory of computation over common sense." If I have 100 defective parts in a run of 1,000, I have 10 percent defective. Assuming that 10% is stable over time, that equates very simply to a DPMO of 100,000 (assuming one opportunity per unit). It does make sense to me to do the opposite: if I'm getting more than one defect per unit (so I probably have Poisson data), it makes sense to transform the data using e^-DPU; that gives me an approximation to the binomial that lets me estimate DPMO.

My biggest question, though, is why--other than to comply with a very short-lived tradition--should we use process sigma at all? It certainly doesn't provide any more information than DPMO, and we always have to translate it into DPMO anyway. If we must continue to use process sigma, can we please just can all the short-term/long-term stuff and (in a nod to standardization, even if controversial) just assume that shift happens, calculate DPMO from a stable process, and use the sigma table? Maybe I'm wrong, and there is some really great reason to continue doing this, but I need an explanation and justification that makes sense.
It seems to me that a lot of this is arbitrary and unnecessary.

Wednesday, September 2, 2009

During a recent discussion in the LinkedIn Deming HR group, one of the discussants posted the following link:
http://www.ted.com/talks/view/id/618
I can't recommend it too highly. It's a talk by Daniel Pink, about the science associated with rewards for performance. Anyone who was a follower of Deming, and anyone who has read Alfie Kohn, is already familiar with the concept that reward for performance can be harmful. Daniel's discussion, especially his piece about the candle problem, is an eye-opener. I would like to try that experiment at a conference or with a large class sometime.
My question or concern is the same as Daniel's: why do we continue to do, in business, what the science says is exactly the wrong thing to do? In its most public fashion, we do it on a grand scale with CEO compensation.
Daniel does a good job of pointing out that pay for performance, when it's linked to any job that requires thinking or problem solving, does more harm than good. What his talk doesn't cover, though, is the systems thinking aspect of this topic; the fact that you can't measure the performance of anyone in isolation. It's often the system that creates most of the performance we attribute to individuals.
This is certainly a worry in education these days, with many government officials pushing for performance pay for teachers. Stuck in the old carrot-and-stick paradigm, with nothing to go on for metrics but aggregate standardized test scores, these schemes will go a long way toward further suboptimizing our education system. Instead, put the money you might use for rewards into building systems like those built by Geoffrey Canada in the Harlem Children's Zone. Canada showed that by taking a systems approach you can improve the performance of even the most underprivileged student populations, and put those students on an even playing field with the most privileged.

Monday, August 10, 2009

What Happened to the Deming Philosophy?

This is taken from part of a discussion on LinkedIn. Rafael Aguayo, a Consultant in Quality, Management and Strategy and Instructor at Stony Brook University, discussed this history and posed these questions today. See my response below.

Rafael's Post:

Two people in the same situation can have very different experiences. So let's consider some more objective measures of what has occurred. In the late 1980s and early 1990s, Quality in the US was largely associated with Deming. Media articles regularly referred to Deming as the preeminent quality expert. Success at Ford, Harley-Davidson, and many other companies that had adopted some or much of Deming's ideas created interest and excitement in quality, and other people and movements tried to position themselves as the next big thing. Specifically, I would mention Reengineering and Six Sigma. At the time there were many successes accomplished under the banner of TQM, and Six Sigma was a minor influence.

My assessment is that Deming-influenced quality represented 50% of the market. Yes, that is subjective, but I think anyone who was active at the time would have said that his influence and recognition were profound. Given the successes, the publicity, and the seeming strength of the quality movement, I would have expected that by 2009 every hospital and US corporation would have been talking red beads and funnels in addition to statistical tools.

Instead, when I joined this group there were 171 members, while the Lean Six Sigma group had about 30,000. That translates to a market share of 0.57%. Even allowing an order-of-magnitude error, this represents but 5.7% of the market. I have to ask what happened?
The logic of marketing is very different from formal logic. When GM discarded the Oldsmobile brand, they expected those buyers to buy other GM cars. Instead they went elsewhere. If a significant amount of success was achieved under the banner of TQM and the brand is then disavowed, those successes are disavowed with it. At the time I observed the disappearance of some successful and respected consulting firms, such as Joiner Associates. And the interest in Deming shrank precipitously. It is possible that this outcome was inevitable. Once Jack Welch endorsed Six Sigma it may have been inevitable. Also, the fiasco of Reengineering that some organizations, including ASQ, implicitly endorsed could not have helped. But whatever the causes, the current reality is disappointing. While you say that has not been “your experience,” your actions say something very different. By obtaining black belt certification and selling your services as such, you implicitly acknowledged that Deming or SoPK is not a viable marketing brand.

It is not just that the market for a more profound understanding of quality has shrunk. Today young people who appreciate the importance of quality and process need not even once see a demonstration of the Red Beads or the Funnel. They can go out and do their best, blissfully unaware that their actions are tampering, on a massive scale. Luckily there are still many people laboring in schools and in firms with a deeper appreciation of the fundamentals.

Maybe I am hallucinating, but the reality of today is so far from what I would have expected that I must ask the question what happened? And what can be done to turn the situation around?

My response:

I'd like to add a comment to at least offer my observations in answer to Rafael's question: "What happened?"
It is, of course, not an easy thing to determine. Part of it was some admitted hubris on the part of those of us who were, or aspired to be, "Deming Disciples." One of the things we admired about Deming was his unyielding and unflinching ability to speak truth to power. He was often seen as curmudgeonly in his approach, but he never let anyone doubt that he didn't suffer fools gladly, and he was unabashed about putting anyone--including CEOs--into that category, if they offered any evidence that they belonged there. He was also very compassionate and thoughtful, and freely offered help and advice to anyone interested in learning. He just didn't have much patience for those who thought they had nothing left to learn. So he was a bitter pill for many CEOs to swallow. They did it, when they thought he could help; and, as Rafael pointed out, for a while Deming was the one person that almost everyone relied on for help.

Once he was gone and the crisis of the '80s was over, Jack Welch and others were selling Six Sigma--not as a Quality initiative, but as a cost-cutting one. I think many executives jumped at Six Sigma because it seemed simpler, more prescriptive, more programmatic, less lofty and philosophical...maybe instant pudding. They certainly had no use for those Deming practitioners who (without Deming's extensive background or credibility) tried to act as Deming had. I have had Quality executives from major corporations tell me that "Deming was just a philosophy," implying that it was pie-in-the-sky, without any practical use for business. It's hard to get these people to listen to you after you explain how ignorant a statement that is...

Another thing that happened is that Six Sigma provides a roadmap that actually does work, when used well. Many companies had a lot of success with their Six Sigma projects. GE had some highly vaunted and publicized success...I will never know how much of it was real, because between making it mandatory and "firing the bottom 10%," who knows which GE numbers can be trusted? In any case, these projects can be very effective, when used as one component of an overall Quality Management System.

I think Rafael's insight about marketing is a good one. Many Deming practitioners were blindsided by Six Sigma, saw its statistical and other flaws, and concluded that it was the enemy, not worthy of consideration. We did get out-marketed, because we had no champion like Welch or Bossidy or Galvin touting huge success stories; most of the stories in Quality Progress and Quality Digest were about Six Sigma. Virtually all the mainstream business literature abandoned Quality; the only mention of it was the occasional Six Sigma story. Then Lean reared its head, and perversely became a competitor to Six Sigma.

When I joined Process Management International, they were working to develop a Deming-based Six Sigma methodology. We had people with a strong Deming foundation who had worked for Motorola and GE, and I think we were successful, with a sound methodology, presented as one set of tools in an overall transformational approach that took into consideration all the aspects of SoPK and the 14 points. At least we were able to continue to tell people about Deming, the SoPK and the 14 points, and to show the Red Bead and the Funnel. Interestingly, during a conference that included a lot of the Deming and JUSE elite, a consensus position was developed that saw Six Sigma as a [marketing] "vehicle" for quality...a way to explain it and to act as a lever for change, a foot in the door.

Would I have been happier teaching and consulting in "pure" Deming? That's all I wanted to do when I first retired from the Navy. No one was hiring for that, though, because the jobs for consultants who did that were few and far between. In any case, would I still do it? You bet...I do, as much as I can.

What are your thoughts?

Thursday, August 6, 2009

Back Again

Sorry I've been out of this loop for so long...my wife is fighting a life-threatening cancer, and dealing with that just sucks every ounce of spare energy and time out of your life.
I'm going to jump back in by recycling my latest answer in the Deming discussion group on LinkedIn. The question was about Six Sigma and Deming. Two of my favorite Deming Disciples, John Dowd and John Constantine, feel that Six Sigma is fundamentally flawed and has very little place in any discussion of Deming. This is what I wrote:

I generally agree with John Dowd and John Constantine on most things, and I agree that Six Sigma--as taught by many of the companies consulting in it these days--has some serious problems. Some of these approaches are, indeed, incompatible with the Deming philosophy. And some of the consultants using those incompatible approaches are large enough that the argument for "as generally taught" is probably sound. My suggestion, though, is that it doesn't have to be that way. Six Sigma is just a marketing vehicle; there is no standard for it (although ASQ would like to think that their SSBOK is one). As with TQM and every other quality approach, there are people who do it well, and people who don't. Unfortunately, those who don't couldn't care less about transformation, because many of them know very little about variation, and next to nothing about systems theory, psychology, or theory of knowledge. My disagreement is in what we do about this. I have been doing what I could, through my consulting practice, any conference appearances or workshops that I am able to do, and any writing that I can get published, to bring Deming principles into Six Sigma, and to use an approach in Six Sigma that is consistent with Deming. Clients who work with me only calculate the "1.5-sigma shift" and the "process sigma" as a curiosity and a metric for communicating with those less enlightened. They see and discuss the Red Bead, the Funnel, systems theory, SoPK, and the 14 points. They learn to see and use Six Sigma projects as one tactic in an overall process management system. I think if more Deming practitioners could find it in their hearts to do something like this, we'd have more success in using Six Sigma as one tactic in our overall aim, and reach more managers and executives (and potential managers and executives) with our message. This is what Lou Schultz and William Scherkenbach taught me many years ago.
As to "picking a target and pretending to improve the system by keeping centered on it," I think we need more context. It's true that arbitrary targets are harmful, but keeping a process centered on its target is really the basis for world-class quality; Taguchi defined it in 1960 as "on-target with minimum variation." Getting a process centered on its specified nominal value and constantly reducing variation around it has long been the goal of anyone trying to understand variation and use that understanding for improvement. I don't think that has changed; used properly, DMAIC projects can help a team achieve the kind of fundamental changes to a system that are needed when the process is stable but off-target or out-of-specification.

Wednesday, April 1, 2009

If It's Measurable (and Important) Why Aren't You Measuring It?

If you're not measuring something that's important to you, why not?
I am always amazed when I talk to executives or managers--sometimes at a conference, sometimes when they've hired me to help--and find that they are not measuring the things they claim they care about, at least not in any useful or meaningful way. As an example...a couple of years ago I was talking to a manager who wanted a Six Sigma project chartered around reducing scrap for a cut-off process. He said at the outset, "If I could just do something about scrap. It's KILLING me!" I asked, "How much scrap does the process produce?" He said, "Well...last quarter it was about 23%." End of quarter had been almost 2 months prior to this conversation; I said, "OK...what was it yesterday?" "Don't know...the last number I had was from last quarter. I do know it was up from the quarter before..." I got him to agree to attend our next "Statistical Thinking for Leaders" course, and we worked out a plan to start collecting his data differently (and at a useful frequency), so we could learn enough about his scrap to make it worth chartering a project. A colleague of mine, Charles Liedtke, put it this way..."Quarterly numbers? Would you manage your checkbook that way? Using one balance per quarter?" To my scrap manager's credit, at least he was measuring something. I have seen many managers attempt to charter projects without having any data (or any way to get the data). Even given Deming's admonition that the most important numbers are unknown and unknowable, there are measurements that ARE important...if you're not measuring it, and measuring it daily, and tracking it in some useful fashion like a control chart, then why not?

Friday, March 27, 2009

More on Performance Evaluation

Note: this was originally a comment in a LinkedIn Discussion. The question was "Two of the problems I currently face: 1. If work standards and numerical goals get eliminated, we have to establish a new performance monitoring system. How do we evaluate performance fairly ? Leadership is hard to measure. 2. How do we design a fair compensation / reward system after that? Any suggestion?"

I think we all wrote reams on this in the DEN several years ago.
The problems are many, but mostly they have to do with an understanding of what it means to measure, and what it means to know. I had a conversation with a young HR person a couple of years ago about the criteria they were using to put people into a training program. I asked whether the people's managers actually knew them well enough to be able to predict whether an individual would be successful in the program. She said, "Well, I hope they are using performance evaluations to select them...you know, some objective data instead of just a manager's opinion." I asked her where they got the "objective data" from, and (of course) she told me they came from scores and rankings assigned by the managers. Then I asked, "How is that different from manager opinions?" She looked at me as though I were an alien or an idiot or both, and said, "They assign NUMBERS!"
It's always dangerous to try to make an inherently subjective task objective, but in this case, it's impossible. Anyone with a modicum of understanding of General Systems Theory knows that you can't separate the performance of a person acting within a system from the performance of the system itself. It's one of the primary lessons of the Red Bead experiment. Deming usually pointed out that it's like trying to solve one equation with two unknowns.
It also ignores variation theory, although in many cases, the schemes put forward by management attempt to use variation theory as an excuse for some of the bad practices. In any group of people, for any given measure, there will be a distribution of performance. Because of system effects and interactions, the distribution is mostly the result of random variation. Some people will be at the higher end in some years and the lower end in other years, due to random variation.
Some people may end up in the upper tail of the distribution, more than three sigma away from the average. They may even stay there for a few cycles. These are people who are doing better than the system, and should be studied to find out how the system could be improved. They are the ones who should probably be rewarded more. If you asked everyone in the organization "who's our number one person?" all fingers would point at that person.
Some people might end up in the lower end of the distribution...they have managed to underperform the system. If you asked anyone who should go, they would point at that person. Those people are in need of help...maybe a different job in the same organization, maybe some training or a different manager, maybe a job in another organization.
Those people in the tails are relatively easy to identify. They are also rare. Most of the rest of the people are doing their best, and their performance is a result of random variation. Rating and ranking them is a step away from reality, and can't be done on any sound rational basis.
If you want to try some strategies that make sense, add "Abolishing Performance Appraisals," by Tom Coens and Mary Jenkins, to your reading list. They provide a very comprehensive treatment of the downsides, but also offer some great suggestions for "what to do instead."

Friday, March 20, 2009

"Making Six Sigma Faster"

I recently had a manager tell me that his company had decided that Six Sigma just “takes too long,” and that they were implementing a new mandate to “Make Six Sigma Faster.” I asked him why they thought Six Sigma takes too long. He told me that the average time to complete a project there had been 8-9 months; under the new program, Black Belts were going to be expected to complete projects (at least through the IMPROVE phase) in 90 days or less.
I couldn’t help it…I broke out laughing, and asked him “By what method?”
“Well,” he said, “we’re not sure yet, but we think if they start holding more meetings, and never go into a meeting without having the deliverable already drafted, that will help.”
I had asked Deming’s famous question for a reason. I was very familiar with this particular deployment, and I knew that the reasons their projects always ran long were many, but almost none had to do with the Black Belts or the number of meetings they held.
This organization had decided at the beginning that they just didn’t have time to do a couple of days of Champion training. They had decided instead that they could get along with a 2-hour teleconference and a required reading list. This was a big organization, and they had never bothered to set up any kind of listening posts or other pipeline-feeders, had no project portfolio management, had not coordinated with the PMO, and hadn’t trained any middle managers in SPC, Six Sigma familiarization, or anything else. Black Belts were pretty much expected to find their own projects, and in many cases had to hunt down anyone willing to sign on as a Champion (sometimes, they just picked another Black Belt, because “at least I got someone who understands Six Sigma”). Getting good data was another problem. Usually, there were no data available for even deriving a baseline, much less for stratification or for digging into cause systems to find “x’s.” Just collecting the data for a baseline might involve a couple of months’ worth of work. The organization often relied for stratification on “reason codes,” the use of which was consistently shown to be unreliable when tested using attribute agreement analysis.
These were the primary factors driving project lead times, but there was no plan to deal with these factors, because it meant getting leadership to change, and no one at my manager friend’s level had the ability to push that noodle uphill. So they were just going to do the usual thing…put it into the expectations for the Black Belts. All you have to do to get the variation narrower is tighten the specs, right?
This was Deming’s point; to paraphrase, “If you could cut the project time by 60 percent this year without a method, then why didn’t you do it last year? Must have been goofing off…”
Listen, executives: This is too important. Six Sigma, implemented properly and led from a systems perspective, is a proven methodology that will make your business better, your customers happier, your revenues higher, and your costs lower. But it’s not something you bolt on, walk away from, and just wait for the cash to roll in. You have to lead it, you have to be engaged, you have to remove obstacles and make Six Sigma a strategic component of a larger Quality Management System. In the end, if it fails, you can’t blame the Black Belts you didn’t support, or the culture you didn’t change, or even the consultants you didn’t listen to. It’s not that it won’t work at your company…but it certainly won’t work if you don’t lead it!

Tuesday, March 17, 2009

Courage a Habit?

(Originally posted as a LinkedIn Change Management answer. The question involved seeing courage as a habit, and reinforcing it in our daily work.)
I'm not sure I would characterize courage as a habit, but we could probably launch a whole new discussion group just to deal with the question of what courage is and how it's defined.
In the military, I knew any number of people endowed with great physical courage: they would--without hesitation--charge into a burning compartment to save a shipmate, jump on a grenade to save the rest of the people in their platoon, or do any number of other things that might result in death or medals for valor or both.
Many of these same people, however, would not stick their necks out an inch when it came to their performance evaluations or anything else that might jeopardize their careers. Performance evaluations were pretty much the arbiters of career advancement in the military, and this gave the person who signed your evaluation--and the committee that force-ranked you against your peers--unbelievable power over your ability to excel.
As a result, leaders who valued innovation might welcome an innovator willing to buck conventional wisdom. Since many military leaders are conventionalists and authoritarians, however, the rule at all too many commands is that "the nail that sticks up gets hammered down."
This is one of the reasons Deming was so adamant about his eighth point: "Drive out Fear." It's one of the reasons he hated performance evaluation. Besides the fact that it's unconnected to reality, performance evaluation is a carrot-and-stick approach that engenders fear, crushes joy in work, stifles innovation and keeps peers in artificial competition that sub-optimizes organizational performance.
If you want to reinforce courage in the workplace, you have to be in a workplace that values courage, that values "speaking truth to power." Developing some political tact, the ability to speak truth to power without insulting or diminishing those in power, can help, if the leadership is open to listening.
One of the things I have always used to some advantage is an agreed understanding of the leaders' goals. If they can articulate their goals up front, bringing in new ideas that advance those goals becomes easier. When knees jerk to squash a new idea, a good consultant can say, "I'm sorry...I'm confused now. I thought you wanted to get [goal] accomplished. This idea is the best we've seen yet for getting there. Do you want to get there or not?"
Another important habit is that of providing evidence for your case. Bringing data, well-analyzed and presented for clear understanding and insight, is also very helpful. Even though resistance is an emotional reaction, appeals to reason are still valuable in demonstrating the superiority of a proposed idea. It provides evidence for trialability and competitive advantage. It helps make the vision clearer and more concrete, therefore more feasible.
So, for those at the top...you want innovation? Create a climate for innovation. Drive out fear, articulate and share your vision to build a vision community in which every contribution toward reaching the vision is valued, innovation is a corporate strategy, and creativity is recognized as a core value.
For those not so near the top, develop good communication skills and tact, provide good clear evidence for your ideas, and persist as long as possible. If you continue to get hammered down, find another place to work that DOES value innovation; in the end, that organization will be more competitive and you will be much happier.

Friday, March 13, 2009

More Prevaricating about Sigma

The one anonymous comment I received the other day after my initial posting seemed to imply that I am somehow against Six Sigma. Nothing could be further from the truth. I am currently a Quality consultant, and a large part of my business has been (and continues to be) Six Sigma. My background goes beyond Six Sigma, though, and I do have some reservations about some of the common practices within Six Sigma.
Before I ever heard of Six Sigma, I was heavily involved in the Navy’s Total Quality Leadership initiative. I had studied all the quality gurus, consulted and written courses in Statistical Process Control and systems approaches to process improvement, and earned a master’s degree in Quality and applied statistics. I had been director of quality for a large overseas base and an internal consultant to the entire Department of the Navy.
From the viewpoint of statistical methods, I initially viewed Six Sigma with some suspicion. For one thing, there was the matter of the “1.5 Sigma Shift” applied blindly to all the calculations. For another, one of the first slides in the deck, in the first presentation I saw, was a direct copy of one I had seen at a Crosby presentation. It talked about what you might get in “a three-sigma world,” and listed the usual “Babies dropped on heads” and “Airplane crashes,” etc. This is a great marketing slide, especially when coupled with the one that inevitably follows it, showing the several-order-of-magnitude improvement in “a six-sigma world.” The Crosby presentation, of course, followed the initial slide with a slide showing the improvement in a “zero-defects world.”
It’s a persuasive slide. It deals with errors that we would, of course, want to get as close to zero as possible. We can’t tolerate any aircraft falling from the sky, and can’t tolerate any babies being dropped on their heads, so of course three sigma (Cpk = 1) is never going to be as good as six sigma (Cpk = 2). But the comparison is somewhat disingenuous, for a number of reasons:
The calculations are correct, unless you are using the common “1.5-Sigma Shift,” in which case the six-sigma world calculations are too pessimistic (more on the shift later).
The idea of three sigma and six sigma come from the world of Statistical Process Control (SPC) and Capability Studies for continuous data, where you have tolerance limits (usually two-sided), around a process average. The data in the slide are for errors, countable things, discrete data, and all the examples are about the types of errors where the only acceptable tolerance limit is the lower bound of zero. Normal distribution theory doesn’t usually apply in this situation.
I’ve never understood why three sigma was the starting point for these slides, but I’ve always suspected that it was to promote the misconception that SPC somehow “stops” at three sigma (because of the three-sigma control limits used in SPC), and so the other approaches touted in the second slide are superior. Nothing could be further from the truth; if you watch or read “The Japanese Control Chart” by Don Wheeler, you’ll see a Japanese company that gets to TWELVE sigma, just using a paper control chart.
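For readers who want to check the numbers behind those marketing slides, here is a minimal sketch of the arithmetic, assuming a normal distribution and two-sided limits at ±k sigma. This uses only the Python standard library; the function names are mine, for illustration.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def ppm_centered(k):
    """Two-sided defect rate (ppm) for a centered process with limits at +/- k sigma."""
    return 2 * (1 - phi(k)) * 1e6

def ppm_shifted(k, shift=1.5):
    """Defect rate (ppm) after the mean drifts toward one limit by `shift` sigma.
    Both tails are counted, though the far tail is negligible for k >= 3."""
    return ((1 - phi(k - shift)) + (1 - phi(k + shift))) * 1e6

print(round(ppm_centered(3)))      # 2700 ppm: the classic "three-sigma" defect rate
print(round(ppm_centered(6), 3))   # 0.002 ppm: six sigma with no shift
print(round(ppm_shifted(3)))       # 66811 ppm: three sigma with the 1.5-sigma shift
print(round(ppm_shifted(6), 1))    # 3.4 ppm: the famous "3.4 DPMO"
```

Note that the shifted three-sigma figure is the 66,807 DPMO usually quoted (the small difference comes from counting the far tail), and that the unshifted six-sigma rate is roughly two defects per billion, not 3.4 per million; the slides only hang together if the 1.5-sigma shift is applied.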
So I don’t really like that slide, but it worked OK as a marketing tool. From the viewpoint of statistics, I worried mostly about the mixing of the methods used to assess DPMO (Defects per Million Opportunities) and the 1.5-sigma shift. An excellent paper by Roger Hoerl got me over that. He pointed out that traditional views of capability left out the idea of mistakes or defects, and made a strong case for a metric like DPMO that can translate the idea of capability across distributions.
The “1.5 Sigma Shift” is another can of worms, but not for statisticians. Put simply, statisticians pay very little attention to it. It’s just a transformation that we will probably all be stuck with until someone with enough credibility to stop a flawed practice yells “STOP!” It’s relatively harmless, anyway, and it works as a “fudge factor” or a safety factor, so you generally end up beating expectations. While it is true that undetected shifts in even a well-controlled process might allow a lot of closer-to-spec product to be produced over many days, that does not justify universally applying an arbitrary value of 1.5 sigma. There are a lot of reasons for this…we’ll get into it another time.
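Going the other direction, the conventional “process sigma” reported for a DPMO figure is nothing more than an inverse-normal lookup plus the 1.5 shift. A minimal sketch (the function name is mine, and the shift parameter makes the fudge factor explicit rather than hidden):

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Convert DPMO to a conventional 'process sigma'.
    The 1.5-sigma shift is a convention, not a statistical result."""
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + shift

print(round(sigma_level(3.4), 2))     # 6.0: 3.4 DPMO is "six sigma" by convention
print(round(sigma_level(66807), 2))   # 3.0: the "three-sigma world"
```

Setting `shift=0` shows what the same DPMO would mean without the fudge factor, which is a useful exercise for anyone reporting sigma levels.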

Wednesday, March 11, 2009

A Rookie's First Blog

Welcome to my new Blog! As a new blogger, I thought I’d take the time to talk in this first post about my vision for this blog. My intention is to spark thought-provoking, open discussion on all aspects of enterprise improvement, operational excellence, continual improvement, “Big-Q” quality, Six Sigma, Lean, “Lean Six Sigma,” or whatever label you wish to use for a systemic approach to optimizing the performance of your organization for all its stakeholders.
It might seem odd that I don’t pin anything down with a label. Although I consider myself a Quality Consultant, I have recognized that over the past 20 years many of my fellow consultants have split improvement consulting into ever-smaller competitive niches. While there is much to be said for specialization and developing depth in a particular knowledge area, there is very little to be said for promoting that one area as “the” fix to a complex system. I have seen presenters at Quality conferences tout Lean over Six Sigma, Six Sigma over Lean, DMADV over DMAIC, “Lean Six Sigma” over everything else…it’s harmful to all of us, and harmful to all our clients, and it needs to stop. Now.
When he was alive, people at Deming seminars used to ask why Dr. Deming, with all his knowledge, didn’t provide specific prescriptions and methodologies. The short answer is that there is no substitute for knowledge. Quality professionals tended to study not only Deming, but Juran, Crosby, Kano, Ohno, Shingo and many others (e.g., Ackoff and Senge for Systems Theory, Kohn and Maslow for psychology, Lewis and Peirce for epistemology). Good quality practitioners were expected to know project management, standardization, lean (as they came to be called) tools, cost of poor quality, SPC, QFD, the “seven old tools,” the “seven new tools,” statistical theory, and systems theory.
Having said all that, there was a lot of variation in the Quality world. Some people followed their Gurus and only studied the others to find targets for disdain. Some became very dogmatic. Deming said that “the most important numbers are unknown and unknowable,” so some people took that to mean that you don’t worry about costs, even those that are knowable. Some decided that they could act as curmudgeonly and arrogant as the Gurus themselves. These things (and more) turned many executives off.
After Deming died, consultants found that very few hiring executives cared much for philosophical approaches. Some of this reluctance could be laid on some of the consultants themselves; some of the most dogmatic “Deming Disciples” spent much of their time quoting Deming, often arguing about “what Dr. Deming said” or “what Dr. Deming meant” like biblical scholars interpreting a prophet.
I think, though, that managers just didn’t want to deal on a conceptual basis…nothing in business school had prepared them for seeing the long view or managing a system. They just wanted simple methods they could install quickly and painlessly. Six Sigma seemed in many ways to fit that bill. It was proven at GE, Motorola, and Allied Signal. It was trainable, had a defined roadmap, a hierarchical structure that could (with a reasonably small amount of pain) be bolted on to existing structures. Importantly, it could be positioned as a cost-cutting initiative rather than a quality initiative.
In the beginning, it fell to Quality professionals to develop the initial Six Sigma training materials. Black Belts were trained, who became Master Black Belts, and trained other Black Belts, who became Master Black Belts, etc. Because many of those Black Belts had had little or no previous exposure to Quality concepts and principles, their understanding of the tools suffered. They passed their subset of knowledge on to others, and more concepts and tools were lost in each generation…classic “rule four of the funnel.”
Management, too, fell victim to the dilution. Busy executives became too busy to take the time to learn enough to effectively lead Six Sigma as a strategic initiative, and upper/middle managers became too busy to take a few days to learn to be an effective champion. Process owners received little or no training to help them understand and cope with the changes and resource needs. Project selection became less strategic, sometimes devolving into Black Belts looking for their own projects and trying to find their own project champions. Some of these Black Belts ended up laid off; they sometimes found work with consulting companies.
As the focus narrowed from optimizing systems to local improvements and cutting costs, much of the rigor was diluted, and the conceptual underpinnings were often cut or minimized in the training. Deming’s fourteen points, seven deadly diseases and System of Profound Knowledge were treated as an important historical footnote…maybe included in introductory material, maybe not. Because Black Belts were not trained in simpler, non-project or small-project approaches, and all responsibility for tools rested with the Black Belts, many opportunities for process standardization & control, continual improvement, and reduction of waste were missed.
The Toyota Production System (Americanized to “Lean Manufacturing”) came in to help fill some of the gaps in Quality Management Systems. Because lean provides a relatively simple set of tools for daily management and doesn’t require much statistical knowledge, it quickly caught on. In some organizations, it was seamlessly integrated into an overall quality system; in others, it became another flavor of the month, replacing Six Sigma. In still others, the boardroom decided to let Six Sigma and Lean compete for viability. Some consulting companies began selling “Lean Six Sigma.” Even simpler approaches, like shop-floor standardization, were lost in many organizations.
I could go on, but it’s already been a long path to get to an explanation of why I don’t call this a “Six Sigma” blog or a “Lean” blog, or even a “Lean Six Sigma” blog, even though I hope and fully expect to be discussing all these topics in depth and detail, as time goes on. What I’m going for is the kind of reasoned and mostly respectful discourse we used to have in the Deming Electronic Network. Hopefully, we can all suspend our assumptions, bring our knowledge to the table, learn and have fun!