# Is SPC obsolete?

In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability. In this broad sense, you could not call it obsolete, but common usage is more restrictive. The semiconductor process engineers who apply statistical design of experiments (DOE) to the same goals don’t describe what they do as SPC. When manufacturing professionals talk about SPC, they usually mean Control Charts, Histograms, Scatter Plots, and other techniques dating from the 1920s to World War II, and this body of knowledge is, in the 21st century, definitely obsolete.

Tools like Control Charts or Binomial Probability Paper have impressive theoretical foundations and were designed to work around the information technology of the 1920s. Data was recorded on paper spreadsheets, you looked up statistical parameters in books of tables, and you computed with slide rules, adding machines or, in some parts of Asia, abacuses (see Figure 1). In Control Charts, for example, using ranges instead of standard deviations was a way to simplify calculations. These clever tricks addressed issues we no longer have.
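The range shortcut can be made concrete with a few lines of Python. The subgroup values below are made up for illustration; d2 ≈ 2.326 is the standard control-chart constant for subgroups of five:

```python
import statistics

# A subgroup of 5 hypothetical bore-diameter readings (mm)
subgroup = [10.02, 9.98, 10.05, 9.97, 10.01]

# 1920s shortcut: estimate sigma as the range R divided by the
# tabulated constant d2 (d2 = 2.326 for subgroups of size n = 5)
R = max(subgroup) - min(subgroup)
sigma_from_range = R / 2.326

# Modern approach: compute the sample standard deviation directly
sigma_direct = statistics.stdev(subgroup)

print(f"range-based estimate: {sigma_from_range:.4f}")
print(f"sample std deviation: {sigma_direct:.4f}")
```

Both estimates land close together; the point is that a shop-floor operator in 1925 could get the first with a subtraction and a table lookup, while the second required squaring and square roots.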

Figure 1. Information technology in the 1920s

Another consideration is the manufacturing technology for which process capability needs to be achieved. Shewhart developed control charts at Western Electric, AT&T’s manufacturing arm and the high technology of the 1920s. The number of critical parameters and the tolerance requirements of their products bear no comparison with those of their descendants in 21st century electronics. For integrated circuits in particular, the key parameters cannot be measured until testing at the end of a process that takes weeks and hundreds of operations, and the root causes of problems are often interactions between features built at multiple operations, too complex to be understood with the tools of SPC. In addition, the quantity of data generated is much larger than anything the SPC techniques were meant to handle. If you capture 140 parameters per chip, on 400 chips/wafer and 500 wafers/day, that is 28,000,000 measurements per day. SPC dealt with a trickle of data; in current electronics manufacturing, it comes out of a fire hose, and this is still nothing compared to the daily terabytes generated in e-commerce or internet search (see Figure 2).
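A quick back-of-the-envelope script confirms the arithmetic of those figures:

```python
# Daily measurement volume for the scenario in the text:
# 140 parameters per chip, 400 chips per wafer, 500 wafers per day
parameters_per_chip = 140
chips_per_wafer = 400
wafers_per_day = 500

measurements_per_day = parameters_per_chip * chips_per_wafer * wafers_per_day
print(f"{measurements_per_day:,} measurements per day")  # 28,000,000
```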

Figure 2. Data, from trickle to flood, 1920 to 2011

What about mature industries? SPC is a form of supervisory control. It is not about telling machines what to do and making sure they do it, but about checking that the output is as expected, detecting deviations or drifts, and triggering human intervention before these anomalies have a chance to damage products. Since the 1920s, however, lower-level controls embedded in the machines have improved enough to make control charts redundant. The SPC literature recommends measurements over go/no-go checking, because measurements provide richer information, but the tables are turned once process capability is no longer the issue. The quality problems in machining or fabrication today are generated by discrete events like tool breakage or human error, including picking wrong parts, mistyping machine settings or selecting the wrong process program. The challenge is to detect these incidents and react promptly, and, for this purpose, go/no-go checking with special-purpose gauges is faster and better than taking measurements.

In a nutshell, SPC is yesterday’s statistical technology to solve the problems of yesterday’s manufacturing. It doesn’t have the power to address the problems of today’s high technology, and it is unnecessary in mature industries. The reason it is not completely dead is that it has found its way into standards that customers impose on their suppliers, even when they don’t comply themselves. This is why you still see Control Charts posted on hallway walls in so many plants.

But SPC has left a legacy. In many ways, Six Sigma is SPC 2.0. It has the same goals, with more modern tools and a different implementation approach to address the challenge of bringing statistical thinking to the shop floor. That TV journalists describe all changes as “significant” reveals how far the vocabulary of statistics has spread; that they use it without qualifiers shows that they don’t know what it means. They might argue that levels of significance would take too long to explain in a newscast, but, if that were the concern, they could save air time by just saying “change.” In fact, they are just using the word to add weight, to make the change sound more, well, significant.

Over decades, the promoters of SPC have not succeeded in getting basic statistical concepts understood in factories. Even in plants that claimed to practice “standard SPC,” I have seen technicians arbitrarily picking parts here and there in a bin and describing it as “random sampling.” When I ask why Shewhart used averages rather than individual measurements on X-bar charts, I have yet to hear anyone answer that averages follow a bell-shaped distribution even when individual measurements don’t. I have also seen software “solutions” that checked individual measurements against control limits set for averages…
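Both points – why Shewhart charted averages, and why checking individuals against limits set for averages is a bug – can be shown with a short simulation (simulated uniform data, not measurements from any actual plant):

```python
import random
import statistics

random.seed(42)

# 20,000 individual measurements from a decidedly non-normal process
# (uniform on [0, 1]); values are simulated for illustration
individuals = [random.uniform(0.0, 1.0) for _ in range(20000)]

# Group into subgroups of n = 5 and take the subgroup averages
n = 5
averages = [statistics.mean(individuals[i:i + n])
            for i in range(0, len(individuals), n)]

# Averages spread about sqrt(n) times less than individuals do
sd_individuals = statistics.stdev(individuals)
sd_averages = statistics.stdev(averages)
print(sd_individuals / sd_averages)  # close to sqrt(5), about 2.24

# The software bug described above: checking *individuals* against
# 3-sigma limits computed for *averages* floods the floor with alarms
center = statistics.mean(individuals)
lcl, ucl = center - 3 * sd_averages, center + 3 * sd_averages
false_alarm_rate = sum(1 for x in individuals
                       if not lcl <= x <= ucl) / len(individuals)
print(false_alarm_rate)  # roughly a fifth of all points flagged
```

A histogram of `averages` would also look far more bell-shaped than one of `individuals`, which is the answer the author was waiting to hear.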

I believe the Black Belt concept in Six Sigma was intended as a solution to this problem. The idea was to give solid statistical training to 1% of the work force and let them be a resource for the remaining 99%. The Black Belts were not expected to be statisticians at the level of academic specialists, but process engineers with enough knowledge of modern statistics to be effective in achieving process capability where it is a challenge.

## 118 comments on “Is SPC obsolete?”

1. Mike Clayton

Nice historical view.
Modern SPC is really Statistical Monitoring and Adjustment, as Box and Hunter have pointed out, leaving realtime control to the automation folks, but setting up a hierarchy of monitoring to catch bad sensors or drifting tools, for example. In the 1990s, sensor-based fault detection was added in the semiconductor industry, slowly, along with run-to-run adaptive control (adjustments based on models), and in this 21st century that evolved further. Earlier, in the “process” industries such as oil refineries and chemical plants, multivariate statistical process monitoring, sensor-based fault detection, and process tuning algorithms were developed. So if you like, you can say Old SPC >>> New SPM&A, but it does not have a “ring” to it. Even in the 1920s, SPC was really an open-loop human-intervention system with long feedback loops, not realtime control. So to that extent, nothing has changed in my opinion, except the sensorization and automation improvements, and the massive amount of useful or misleading data they create to be sorted out by experts or expert systems.

2. Old SPC, based on manual control charts produced by operators with pencil, paper and sampling, may be dead, especially in automatically controlled processes.
But we should not forget that we still have lots of manual processes, and some automatic processes where process parameters cannot be calculated theoretically (for instance, solder processes).
When inspection and test data are collected automatically, SPC charts based on 100% control could be made available to the operators in real time, in order to detect trends which require immediate operator action, and also to avoid operator over-reaction (treating common-cause variability as a special cause).
SPC concepts, as explained by Deming, are also important for Management in order to interpret data adequately, especially nowadays when instant data is available in real time. I have seen many instances of management over-reaction and tampering leading to an increase of variability and process degradation.
It is true that a capable process (Cpk > 1.5) might not require any control, but unfortunately the state of the art in many processes is far from that.
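For readers who have not met the index, a Cpk like the 1.5 cited above is the distance from the process mean to the nearest specification limit, in units of three standard deviations. A minimal sketch, with made-up data and spec limits:

```python
import statistics

# Hypothetical solder-joint thickness readings (mils) and spec limits;
# both the data and the limits are invented for illustration
measurements = [5.1, 5.6, 4.5, 5.0, 5.4, 5.2, 4.4, 4.9, 5.5, 5.0]
LSL, USL = 4.0, 6.0

mu = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Cpk: distance from the mean to the NEAREST spec limit,
# divided by three standard deviations
cpk = min(USL - mu, mu - LSL) / (3 * sigma)
print(f"Cpk = {cpk:.2f}")
```

With this spread, Cpk comes out well below 1.5, which is the situation the comment describes: a process that still needs watching.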

3. In Lean plants, you normally have team boards with charts of multiple metrics showing actual performance over time against target values. The metrics of quality usually include a first-pass yield and a list of problem reports traced back to the team. The use of these boards by team leaders and supervisors leaves traces in the form of manual annotations.
Even in manual processes, I don’t recall ever seeing a Shewhart-style control chart on any of these boards. The only place I recall seeing them is in hallways that visitors go through, beautifully printed, and free of manual annotations.

4. “In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability.”
This is a great point, one I end up having to remind people of all the time. Shewhart’s charts are not the entirety of Statistical Process Control – only a subset. The notion that they are always applicable is unfounded – although they are very handy where they do fit.
“Tools like Control Charts or Binomial Probability Paper have impressive theoretical foundations and are designed to work around the information technology of the 1920s.”
There are still many shop floors where the cost structure supports the technology of the ‘20s. In many places I end up having to utilize paper charting because many of the SPC programs do not offer the correct chart for the application.
“Another consideration is the manufacturing technology for which process capability needs to be achieved.” “What about mature industries? SPC is a form of supervisory control. It is not about telling machines what to do and making sure they do it,”
This is true – but not entirely the death knell of SPC. What I have found in precision machining is that the variation that may have been seen in 1930 may very well have been handled well with Shewhart charts. But at higher precision, Xbar-R charts actually encourage overcontrol and ignore critical within-part variation. Xbar charts create unreasonably tight control limits, because they use the wrong statistics. They end up frustrating operators, and cause implementers to make up “special rules” for machining that are still not correct. To them, it may seem obsolete – but wrong and obsolete are two different things. A more modern, yet still simple, approach, the X hi/lo-R chart, resolves many of these issues. It makes charting not only a monitor for special causes, but provides feedback to the operator on when to make adjustments and when to change tools. It can accurately compare the improvement of one type of tool to another. What Xbar-R charts do in precision machining is plot measurement error from samples of statistically insignificant size. What kind of decisions can be made from that? X hi/lo-R even provides better information than automatic tool-wear adjustment algorithms, which mask valuable data with their overcontrol.
Blind rubber-stamping of Xbar-R charts can lead to frustration and ultimately to SPC’s demise on the shop floor. You need to do your upfront work. You need to have a proper understanding of what is a common or a special cause. Something as simple as skipping a CNX evaluation of the process variables can leave you charting noise rather than a true variable that affects output. You need to do your FMEA to know possible reaction plans ahead of time, so you are not standing there staring at the chart as your process wanders off. You need to pick the correct chart for the process. Also, charting with no one paying attention to the charts is a sure death of the implementation.

“I believe the Black Belt concept in Six Sigma was intended as a solution to this problem.”
If we rubber stamp black belt concepts the way we rubber stamp capability indices and X-bar R charts, there will be little improvement in the overall picture.

To add to those comments, it also helps when it is implemented correctly. The key is, you have to *think.* As much as people think SPC is plug and chug, or as much as they wish it were, it is not. You have to actually think. If you are not willing to make that investment, be prepared to fail. But, remember, wasting a resource is much different from obsolescence. Much different. Chances are, if you *think* it is obsolete, you have really done it wrong. Those are the messes I end up having to clean up.

• As you say, “There are still many shop floors where the cost structure supports the technology of the ‘20s.”
While it is true that many shop floors still use the information technology of the 1920s, I am not sure it is a question of cost. I still own an Aristo 0968 slide rule with a magnifying cursor from my student days, but I would never use it to get work done. In operations, it may have been wise to ignore the information technology of the 1960s and 70s, when computers were still expensive, hard-to-use clunkers, but I don’t think it is today.

5. Comment from Joseph (Joe) E. Johnson in the Continuous Improvement, Six Sigma, & Lean Group on LinkedIn:

SPC and QC will never be obsolete, just not used as much as they should be, due to (short-term) cost advantages and “systems”. Six Sigma is/was the system du jour. Prior to that it was TQM; before that, kaizen, house of quality, etc. The problem rests in Sr. Mgt’s lack of statistical use and proper application. SPC is a tool; an analogy is a screwdriver. Pounding the screw in with a screwdriver doesn’t work well. The operator gets frustrated and throws the screwdriver out. He/she then uses a hammer to pound the screw in and declares victory.

6. Comment from Steve Ruegg in the Continuous Improvement, Six Sigma, & Lean Group on LinkedIn:

SPC is still (and will always be) vital to success. On-line, automated SPC systems are still SPC. And I have always found a flaw in relying on automated SPC process controls. They are designed to make the “proper” adjustments to the process, and even to alert the operator to the OOC condition, but there is really a great deal of value potentially lost. There is the high probability that an OOC condition will merely be “corrected” by the system, with a process adjustment. But the Special Cause that created the OOC condition will still exist. Operator intervention and investigation is still necessary to find, correct, eliminate, and prevent the recurrence of those Special Causes. If not, the system will tend to “tweak” in response to these Special Causes, and they may become Common Causes over time.

… the paper versions also are a good way to train the operators to write down anything unusual they notice in the process, at or near the time a data point occurs, so that identifying special causes can be easier. Also a good place to document any corrective actions to a special cause.

7. Comment from Bob Mallett in the Continuous Improvement, Six Sigma, & Lean Group on LinkedIn:

SPC is not dead! Even in today’s highly computerized manufacturing environments, I usually start the operation out using paper, so that the operators and line supervisors gain understanding and depth of knowledge before automating.

8. Comment from Attila Fulop in the Continuous Improvement, Six Sigma, & Lean Group on LinkedIn:

SPC is not the way to analyse complex processes any more, as it was 20+ years ago. Still, it has its place in managers’ process control toolbox. A simple hand-written chart is sometimes more appropriate than multi-parameter computerized systems.
Of course it cannot and should not replace modern IT.

9. Comment in the Lean Six Sigma discussion group on LinkedIn:

Nice. Had a question as a part of a job interview where I was asked to provide a specific example of my previous use of statistical process control. After expounding on how many of our processes are now too complex to make holistic use of control charts, I broke down a customer service organization with AVERAGE timeliness and accuracy of product delivery as an example of relevant SPC. I was then asked why I was using averages instead of specific measurements and was questioned as to relevance of the control chart in customer service process control. These questions made me realize the person who had asked the original question had not been trained in Six Sigma and did not wholly understand SPC. Six Sigma (SPC 2.0) is gaining in relevance every day but has yet to permeate the business environment.

10. Comment in the Lean Six Sigma discussion group on LinkedIn:

I have not worked in real Six Sigma projects, but I was and am involved in different projects that aim to keep processes under control.
In my opinion, as an engineer you can have a feeling for different technical aspects, but to actually understand what happens in a process (the process behaviour) is not possible in the absence of the “view from the helicopter” ensured by SPC tools.
In my opinion, SPC is not obsolete.

The use of computerised tools made some (many) people believe that they no longer need statistical knowledge; they think it is enough to know how to use such tools. And that’s not right! This is why many preach (openly or subversively) against SPC.

The rule applied is that what you do not understand, you reject… So they return to trial and error, and to daily visits to the church and prayers to God to make the process work!

11. Comment in the Lean Six Sigma discussion group on LinkedIn:

In many industries, pass/fail checks are still used to approve or reject the product, without considering the data to analyze the process and improve it, so in my opinion SPC is even misused.
In addition, from my experience, SPC is also a good tool to avoid over-actuation by operators in processes with large internal variability. It’s a good tool to say, “please, just touch it when it’s out of control.”

12. Comment in the Lean Six Sigma discussion group on LinkedIn:

I have not worked in real Six Sigma projects, but I was and am involved in different projects that aim to keep processes under control.
In my opinion, as an engineer you can have a feeling for different technical aspects, but to actually understand what happens in a process (the process behaviour) is not possible in the absence of the “view from the helicopter” ensured by SPC tools.

In my opinion, SPC is not obsolete.

The use of computerised tools made some (many) people believe that they no longer need statistical knowledge; they think it is enough to know how to use such tools. And that’s not right! This is why many preach (openly or subversively) against SPC.

The rule applied is that what you do not understand, you reject… So they return to trial and error, and to daily visits to the church and prayers to God to make the process work!

• Sanda, Super reply!

13. Comment in the Operational Excellence discussion group on LinkedIn:

Statistical thinking has been missing in action for many organizations for a long time but that doesn’t mean that SPC is obsolete. Your blog post discussed the increasing amount of data available in the modern environment. In my opinion the presence of increased data also increases the need for proper use of SPC to ensure correct sampling (you pointed out that problem) and to ensure that we react properly to the data (not over-reacting to events when they are not statistically significant) which leads to process tampering.

14. I don’t think one can ever claim that a “scientific” tool like SPC is dead or obsolete. One can make a distinction, however, between being able to “tell when the process has gone out of control” and “error-proofing the process” so that the former doesn’t even occur! Additionally, any repeatable and sustainable tool/system should be simple enough for the people at the Gemba (shop floor) to understand, implement and then teach. Many “Lean” places do exactly that, where the Team Members on the floor use primitive monitoring and recording tools, namely pencils and paper, to do SPC charts and arrive at intelligible understandings.

All in all, SPC may not be as fashionable as it once was, and it may have given way to Poka-Yoke and the like, but as a systematic and scientific approach, it is not disputable!

15. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

I always use it to assess system or process capability. It helps refine the process and is a good visual analysis of the process. It might be an old method, but it works.

16. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

It’s a good tool when actually used to monitor and improve a process. It’s a bad tool when it’s only done to satisfy a random quality system auditor.

17. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

No, it is a tool and only a tool. Is a common screwdriver obsolete when Phillips screwdrivers and hammers are available? A good toolbox has several tools, and if the only tool you have is a hammer, every problem looks like a nail.

Where does this sort of question come from?

18. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

I agree with the previous comments. If obsolescence is present, it’s much more likely that the obsolete component is the person who should know when and how to use SPC correctly… but doesn’t.

19. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

I also agree with all of the previous comments. Even though SPC is “dated,” Dr. W. Edwards Deming taught the Japanese to embrace it after WWII, and they surpassed the U.S. in terms of further defining effective quality control measures and making them useful. However, I feel it is only effective if the user has an understanding of what they are looking at. One can gain a simple yet powerful understanding of the various methods by applying them manually… then move on to other means such as electronic data collectors.

20. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

Are academic grades obsolete? Not when you are trying to help students learn and schools improve. Does it matter to measure and compare our children’s math scores with those in other (competing) countries?

Likewise, SPC is not obsolete. Very few companies (including those with 6 sigma) sustain a process-oriented mindset to use it properly. SPC measures the performance of processes that are important to the business so that company costs are minimized and quality problems are reduced for the customers. DOE is a way to economically test factors that influence the attributes of a product for current and future customers.

In addition, when companies grow or when conditions change, processes also have to be changed to meet the different situations.

There is an investment cost in training frontline, engineering and management personnel to do more than just collect and plug in numbers. It can provide insight to process owners who care to listen, think and understand.

• Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

I would like to see us move to a strength-based education system where students are able to master what they enjoy most and the other subject matter is integrated into their area of strength. We need to understand how learning affects us overall in our successful employment. Most importantly, if we understand our strength and passion, it is easier to see where we fit in any industry. We are the only ones who limit ourselves once we become adults, but adults limit us as children. If it is not a threat to health or life, we must let children explore and remain curious. Of course it is helpful when adults become parents to support that choice as they would any other career choice. Children need their parents’ values to provide security while they explore. Values give them stability and assure them of a shared vision as a family. When others take over the parenting and instill their values, a child can easily lose respect for the parents’ authority and value system. Teachers and other authoritative individuals do not necessarily share the parents’ values. Teachers should present the curriculum and measure their success as teachers on the results. Parents should reinforce the love of learning, keep curiosity and questioning alive, and support the application of the learning process as it applies to life. Grade systems are given way too much weight.
(The application of grades in industry amazes me as well – it should be a collaborative, seamless discussion for improvement on realized strengths.)

21. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

Only if it is obsolete to attempt to minimize the economic impact of making either Mistake 1 or Mistake 2, on the shop floor and in the management of people. [See items from The New Economics below.] And only if having a catalyst for thinking is obsolete. [See items from Wheeler and Chambers below.]

from The New Economics
“There are two mistakes frequently made in attempts to improve results, both costly (Out of the Crisis, p. 318). We shall study them in detail in Chapter 8.
Mistake 1. To react to an outcome as if it came from a special cause, when actually it came from common causes of variation.
Mistake 2. To treat an outcome as if it came from common causes of variation, when actually it came from a special cause.
Shewhart prescribed procedures aimed at minimum economic loss from the two mistakes (Ch. 8).”
“A process may be in statistical control; it may not be. In the state of statistical control, the variation to expect in the future is predictable. Costs, performance, quality, and quantity are predictable. Shewhart called this the stable state. If the process is not stable, then it is unstable. Its performance is not predictable. (More in Chs. 7 and 8.)
Management of people (leader, supervisor, teacher) is entirely different in the two states, stable and unstable. Confusion between the two states leads to calamity.”
“Textbooks lead readers to suppose that the principles contributed by Dr. Shewhart are control charts on the shop floor. Actually, this application constitutes only a small fraction of the needs of industry, education, and government. (See the table on p. 37.) The most important application of Shewhart’s contribution is in the management of people, as may be obvious from pages of this book.”

In the posted link, the article states -“In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability.”
A different view –
From Understanding Statistical Process Control
Donald J. Wheeler, David S. Chambers
“SPC is ultimately a way of thinking with the charts acting as a catalyst for this thought process.”
“. . . Statistical Process Control has always been, first and foremost, a way of thinking which happened to have some techniques. Without the right concepts – without an understanding of “Production Viewed as a System,” and without an understanding of how to use the Deming Cycle [PSDA] for continual improvement – the techniques are of little use. . . .”

22. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

It is definitely not obsolete but not a solution to every quality problem. It can empower the operators and engineers to act proactively but is not a replacement to other quality related activities/tools like root cause analysis, DOE, FMEA and Common Sense. When decision makers consider it a solution to all quality issues, they get disappointed after investing time and money and later abandon it.

23. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

Tim provides some great insights. To add to those comments, it also helps when it is implemented correctly. Blind rubber-stamping of Xbar-R charts can lead to frustration and ultimately to SPC’s demise on the shop floor. You need to do your upfront work. You need to have a proper understanding of what is a common or a special cause. Something as simple as skipping a CNX evaluation of the process variables can leave you charting noise rather than a true variable that affects output. You need to do your FMEA to know possible reaction plans ahead of time, so you are not standing there staring at the chart as your process wanders off. You need to pick the correct chart for the process. Also, charting with no one paying attention to the charts is a sure death of the implementation.

Zaheer is also correct. Shewhart charts especially are *monitoring* a process, looking for problems; they are not corrective or preventive actions in and of themselves. They provide evidence that your corrective or preventive actions are effective – or that they missed a possible special cause (which is not only possible, but important).

The key is, you have to *think.* As much as people think SPC is plug and chug, or as much as they wish it were, it is not. You have to actually think. If you are not willing to make that investment, be prepared to fail. But, remember, wasting a resource is much different from obsolescence. Much different. Chances are, if you *think* it is obsolete, you have really done it wrong. Those are the messes I end up having to clean up.

24. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

It is not obsolete, but it is used more for special applications than as a control method: tracking calibration results for stability, or using PC-based control charts to provide evidence of conformity for process parameters or product characteristics.

25. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

@used for special applications
Someone renowned for his applications wrote:
“Textbooks lead readers to suppose that the principles contributed by Dr. Shewhart are control charts on the shop floor. Actually, this application constitutes only a small fraction of the needs of industry, education, and government. (See the table on p. 37.) The most important application of Shewhart’s contribution is in the management of people, as may be obvious from pages of this book.”

The book referred to is The New Economics, where the following can be found starting on p. 37: “The most important application of the principles of statistical control of quality, by which I mean knowledge about common causes and special causes, is in the management of people (Ch. 6).”

26. Although I mean everything I wrote in the post that started this discussion, the title was intended to provoke responses, and it did. While most of you think SPC is not obsolete, none claims that it is embedded in daily shop floor practices and making a clear and obvious contribution to performance. You are saying that it has the potential to do so: it could, and it would, if only it were implemented correctly and supported by management.
I didn’t see the testimonial of a passionate practitioner who could inspire a 25-year old engineer to get involved. Such a testimonial might go as follows:

“At ACME Manufacturing, SPC is credited with our outstanding quality performance and strongly supported by management. In process development, we combine statistical design of experiments with our knowledge of the physics and chemistry of the processes to achieve a Cpk ≥ 1.5 on all critical characteristics before moving to Production. Then we maintain control charts on all these characteristics on the shop floor. The high Cpk’s allow us to detect problems before they have a chance to affect our products. We respond to all alerts with both immediate countermeasures and root cause analysis, using the other tools of SPC.”

I would not expect this story to be true everywhere or even in a majority of plants. I would settle for one. This is the vision you get from the SPC literature. However, in 30 years in Manufacturing, I have never seen it realized anywhere.
These techniques have been around for almost 90 years. Deming died 19 years ago; Shewhart, 45 years ago. How much longer should we wait?

27. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

The mission of a company should be contained in its values, play to its strengths, and reflect how it differs from others in meeting customer needs. It is difficult to be profitable and successful otherwise.

Without getting too carried away and being overly data-driven, SPC feedback can indicate uncontrolled weaknesses in the CHOSEN or DESIGNED processes that haven’t been accounted for. Where do things need improvement? When incorporated properly, a collaborative team uses SPC objectively to look at and listen to processes, examine execution, and study or validate market/customer responses. For its passions to thrive in reality, a company has to check them with meaningful and timely data. Can we be more effective by operating differently? Is the manifestation of our passion in this specific business idea or model economically doable? A leader asks, “Does the excellence level of our execution match the quality requirements of our customers?” If not, something has to change.

You are the owner. The degree of application is up to you in your own context. Education perhaps. Second-hand, I’ve heard of SPC used to examine performance consistency in professional baseball athletes. Donald Wheeler published an objective case study of a nightclub business entitled, “SPC at the Esquire Club.” Scientists can be held accountable for their claims if they manipulate or present data untrue to science.

28. Comment in the Continuous Improvement, Six Sigma, & Lean Group on LinkedIn:

In general, I love Michel’s insights, but this one perplexes me.
I agree entirely with Steve Ruegg when he says that “On-line, automated SPC systems are still SPC.” Michel’s article seems to draw a very small box around what the author considers SPC based on some sort of “restrictive common usage” without regard to basic semantics.

Do you have a process? Are you exerting control over it with respect to threshold limits and data patterns? Did you determine the thresholds and patterns that trigger actions/reactions based on statistical confidence relative to empirically-derived expectations of process stability?

If the answers to those three questions are “yes”, you’re practicing SPC.

If the answer to the first question is “yes”, but not the other two, you’re scaring me — particularly if you’re dealing with complex, high-tech, high-volume parts.

My vote: SPC is definitely NOT obsolete.

• What I had said in my original post was, essentially, that statistical process control was still relevant but SPC was obsolete. The following experience I had a long time ago might throw light on what I mean by this distinction.
At the time, I was working on the specifications of a statistical process control software module for semiconductor wafer processing. In this process, you made hundreds of dies per wafer and processed the wafers in lots of 24. It struck me that it made sense to monitor separately the variability within wafers, between wafers in a lot, and between lots, as due to different causes.
It was not overly difficult to structure the data to produce plots with control limits that made statistical sense for all these different levels of variability. The quality managers in the plants, however, refused this idea, on the grounds that it didn’t conform to the SPC standards as set by Shewhart in the Western Electric Statistical Quality Control Handbook.
I agree with your semantics, but I didn’t invent the restrictive small box you are talking about: I found myself standing outside of it.
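The idea of monitoring within-wafer, between-wafer, and between-lot variability separately can be sketched with simple pooled variances. This is a rough illustration with invented numbers, not the specification that was actually written; a production system would use proper nested ANOVA:

```python
import statistics

# data[lot][wafer] = list of die measurements; numbers invented for illustration
data = {
    "lot1": {"w1": [1.0, 1.2, 0.9], "w2": [1.5, 1.4, 1.6]},
    "lot2": {"w1": [2.0, 2.1, 1.9], "w2": [2.4, 2.5, 2.3]},
}

# Within-wafer variation: variance of dies around their wafer mean, pooled
within = [statistics.variance(dies) for lot in data.values() for dies in lot.values()]
within_wafer = statistics.mean(within)

# Between-wafer variation: variance of wafer means within each lot, averaged over lots
wafer_means = {lot: [statistics.mean(d) for d in wafers.values()]
               for lot, wafers in data.items()}
between_wafer = statistics.mean(statistics.variance(m) for m in wafer_means.values())

# Between-lot variation: variance of lot means
lot_means = [statistics.mean(m) for m in wafer_means.values()]
between_lot = statistics.variance(lot_means)

print(within_wafer, between_wafer, between_lot)
```

Each of the three quantities could then feed its own chart with its own limits, since the three levels of variability have different causes.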

29. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

Let’s put this into perspective. Are horseshoes obsolete? After all, doesn’t just about everyone drive cars nowadays? But wait a minute, people still do ride horses, don’t they?

So what’s the point? The point I am trying to make is that maybe it isn’t as important as it once was, but it is hardly obsolete. There is, and probably always will be, a place for SPC. It is just one of many tools that quality specialists use. But it is just that…a tool. The key is really using the right tool for the right application. That is where I have to give the author credit. I think he has pointed out that sometimes there may be newer and better methods to achieve the desired goal, some of which never really occurred to me.

30. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

@ Michel’s story
“The most important application of Shewhart’s contribution is in the management of people. . . .” (p. 178)
“This simple illustration should put to rest forever use of measures of dispersion like Cpk, as it has no meaning in terms of loss.” (p. 226)

These quotes from The New Economics indicate Deming would likely have created a different story or testimonial relative to the value of distinguishing between common cause and special cause variation (i.e., in utilizing SPC). And knowledge of variation is incomplete without appreciation for a system, psychology, and a theory of knowledge.

31. Comment in the ASQ – The American Society for Quality discussion group on LinkedIn:

Under process automation, it looks like it is being done by the system.

32. Comment in the Lean Six Sigma discussion group on LinkedIn:

Hi Michel.

As you probably already know, using SPC on the shop floor is not an easy job, because it is not a simple indicator of whether things are going wrong or not.

Even though I have not met a situation such as you describe, I have met some of interest. Let me describe a case.

This is a company with some major assembly processes as well as some processing ones.
They implemented two procedures that make use of SPC:

1. One is used in order to define (during the process development phase) the controls to be applied to the most important product characteristics. In this case, they establish the actual effectiveness of different methods of inspection and then decide which would be most adequate for the purpose.

2. The other is used to identify the level of variability in specific processes/process steps and uncover the variables on which improvement is really needed.

Another tool I found largely used was the Shainin chart, used for the control of set-ups.
This suggests to me an explanation for the situation you described.

In the mass-production industries (where SPC would be more useful than in short production runs), the trend is to use highly specific equipment, usually designed for one or two operations. This equipment is normally fitted with poka-yoke devices that tend to control every single operation and part. So why use SPC in these cases?

But SPC is still used, as I stated before, in product and process development phases and for data processing in the “Analysis” phases of problem solving.

Shewhart helped us look into process variation and made us understand it. Still, I have to recognise that the use of his charts by operators is not going to spread as long as the trend is to use operators with very basic skills to produce high-quality products.

33. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

I think the contribution of SPC (along with other tools) to performance is clear in the automotive industry, where the overall GD&T strategy is more stringent in most areas of the vehicle than it was 20 years ago. The implementation of critical characteristics (measured by SPC, with higher Cpk expected) contributed to tighter door gaps, better closing efforts, improved trim fits, and fewer squeaks and rattles. All quality indicators point to better engineering and operational control in this industry.

The idea of using a third of the normal tolerance drove better tooling and process engineering upfront. Organizations pay more attention to tool wear and punch life because of SPC requirements. Preventative maintenance programs were driven by SPC. I could go on and on.

@Michel – I believe our quality department and management team could accurately and truthfully deliver your testimonial for SPC above. I don’t think it is a question of SPC being obsolete; perhaps it is mundane in most high-performing operations.

34. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

Nothing is obsolete or modern except by comparison. Do we have another system that could replace SPC in terms of quality results? I see a proliferation of SPC software in the market. I don’t see a replacement for SPC. What I see is SPC applications that could improve the process but not replace SPC as we know it. Therefore, it is still a control chart, whether done manually or with software. The basic concept remains the same. No, SPC is not obsolete.

• In the art of analyzing data, nothing has been frozen for the past 80 years except standard SPC, and, frankly, it is to the state of the art as a slide rule is to an iPad. My slide rule still works, and I could still use it if I had to. I wouldn’t be very good at it because I have not needed it for decades.
Computers have been invented since standard SPC was developed. Statisticians have developed numerous techniques that require computers and enable manufacturing engineers to solve process variability problems that are beyond the range of standard SPC. As I indicated earlier, these techniques are extensively used, for example, in semiconductor process development, but they are called “yield enhancement” rather than SPC.

35. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

It’s very true that my experience mirrors Michel’s in that most organizations I’ve experienced don’t use SPC, underuse it, misuse it, or outright abuse it. I also agree wholeheartedly with the ideal testimonial he presents.

But I also agree completely with Roland Johnson’s very succinct points — to which I would only add the following rhetorical and metaphorical question:

As human beings, we apparently only use a very small fraction of our brain’s theoretical capabilities. (And some of us use much less than others.) This has presumably been true for a very long time. Does it follow then that thinking is “obsolete”?

36. Comment in the SME Society of Manufacturing Engineers discussion group on LinkedIn:

Anything you don’t know how to use is obsolete for you.

37. Comment in the Lean & Six Sigma Community discussion group on LinkedIn:

It can never be. It’s the basics!!

38. Comment in the Lean & Six Sigma Community discussion group on LinkedIn:

SPC will be obsolete once variation no longer exists. I don’t think we are there yet 🙂

39. Comment in the Lean & Six Sigma Community discussion group on LinkedIn:

Tried and true tools like SPC might not always be in style, but are never obsolete (like wingtip shoes!).

40. Comment on Michel Baudin’s LinkedIn page:

For improvement purposes, yes. Although too polite to say so, that was a key difference between the Deming approach and T. Ohno’s at Toyota. Mr. Ohno believed that every defect had an assignable cause. It is impossible to fix all at once. He wanted data used to find the most important causes and to test solutions. While he was at Toyota, that resulted in single-digit fabrication and double-digit assembly ppm nonconformance rates. By the time Jim Womack’s group arrived, Toyota was evolving away from the Ohno system.

41. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Read this article that I co-authored, which is in the new (January) issue of Quality Progress. The successes that I achieved with supplier SPC were largely related to the implementation of “useful” SPC, rather than the “doing” of SPC. I showed them how to “use” SPC information from their in-process and finished-product SPC charts to identify critical parameters in the process and to put those critical parameters under SPC control (among other practical SPC applications). This is a supplier in the silicon wafer industry. They were “doing SPC” but were falling into the classical downfall I see in many industries. They (and many others) have been taught how to “do SPC,” but have never been shown how to “use SPC” to its full power.

It is interesting that this industry (semiconductors) seems to shun the use of real SPC, because I recall that nearly all of the SPC classes I have taken used silicon wafers as the examples. But what I have found is that the Shewhart rules are rarely all used. The responses I get are mainly that “if we use all of the rules, we would always be out of control!” Ummmmm…. DUH!!!!! This being the case, SPC is NOT being “used” — and really not even being “done.” If you are not using all of the OOC rules, then you cannot find all of the special causes within your process, and can never achieve an acceptable Cpk.

I have also seen the semiconductor industry devise all kinds of new ways to calculate “control limits” and pick the limits that they like the best – to make the process appear to be the most in-control. Granted, in some applications, alternative control limit calculations are necessary, but to just pick limits that look the best is not appropriate (or as I like to say, it is statistically illegal).

The key to a successful working SPC system is to use it as it was designed by Shewhart. The trick is to find someone to work with who can make it practical (useful), and to appropriately develop the sampling size, frequency, and location to get the most “use” of the data. The real problem is that most people only know how to “do SPC!”

42. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Hello friend,
SPC has its relevance in certain specific areas as online control, not as a postmortem activity, which is the system prevailing in most places.

Attribute charts should be simplified by deleting the control limits. Instead of aiming to control the process within limits, it is better to aim for perfection, i.e., zero defects. In many countries, SPC is neither understood properly nor practiced. Unfortunately, industry in many parts of the world has not taken advantage of the many statistical tools available.

43. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Anand – I totally disagree with your comment on removing control limits from attribute charts. The purpose of limits is to show when a process has gone out of control. For an attribute chart – tracking waste, for example – in many operations, these things are “in control at an unacceptable level.” The desire is to reduce the level of waste, which may only be accomplished by forcing the process to go out of control. To do this, a “special cause” must be introduced intentionally into the process. This could be a new piece of equipment, new gauges (e.g., better temperature sensors), training, or procedures. If the introduction of this new special cause does not significantly impact the process, the waste will remain in a state of control, and you did not improve the process. If the new special cause did work as desired, the process will go out of control — shifting toward zero, or reducing its variability. Without control limits on the attribute chart, you cannot know that the change you implemented actually worked.

Let’s say that you spend \$2500 on a new thermocouple system on a process. If the process goes out of control downward (less waste) by 1% actual, you can calculate how much you gain from saving that 1% (let’s say \$50,000). You now have evidence that your \$2500 investment resulted in a true savings of \$50K. If you can’t show a statistical shift (with control limits changing in a positive direction), you can’t say that the new thermocouple did indeed have an impact on the waste.

44. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Hello Steve,
I feel there is a communication gap.
In my opinion, Control Limits do not play any significant role, especially if you are on the journey of continuous improvement. Your main objective is to reach near zero level as early as possible and maintain that level by continuous vigil. Assignable causes do come and go. They can be detected even without the control limits. You are on a journey of continuous improvement and suddenly you notice deterioration in the performance. It is an indication of the presence of an assignable cause. If you are alert you can notice it, even without the control limits.
Another drawback: for illustration, say a process is producing on average 4% defectives, with lower and upper control limits at 2.5% and 5.5%. As per control chart theory, the process is under control as long as it produces between 2.5% and 5.5% defectives. This sets a wrong mindset, and improvement activities suffer. Considerable time is spent in the calculation of control limits. If it happens to be a p chart, fresh control limits have to be calculated whenever the sample size changes. Because of these drawbacks, many times these charts are either withdrawn or not maintained regularly. Some of my own theory-based colleagues did not approve of my approach. I feel strongly that we have to be more practical in our approach to quality. We should think of a system that is easily understood and easy to operate. I have tried it in many places with significant improvement in a very short time. An article titled “The master chart: a potent and simple tool for continuous improvement” was published in the journal Total Quality Management (UK), vol. 7, no. 3, 1996, pp. 257-266.
I have made even the X-bar and range chart redundant in many cases by converting the machine-dominant system to set-up dominance through technical innovation. I believe in developing robust processes where the need for control and inspection is minimal.
Regards
Anand K N

Author: High Quality at Economic Cost–Concepts, Approaches, Techniques in Design and Manufacture

45. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Anand,
Control limits and the application of the Western Electric rules are essential in attribute control charts.
Obviously if you have to do it by hand it would be time consuming but today we have plenty of applications which do this automatically for us.
I have used attribute control charts to control solder quality in real time: the operator performing the operation gets an update of the chart every 5 minutes with control limits and a graphical indication of an out of control situation after applying all the rules automatically.
If the operator changes a solder parameter he can quickly get a feedback to see if the process has improved overall or got worse (maybe he eliminated a short circuit but produced 3 more open circuits).
The fact that the process is in control does not mean it is OK: all it means is that it has not significantly changed, either for better or for worse.
The lower control limit in the case of defects is also important: if I get points below this limit I should still investigate to understand the cause: Is it a real improvement or is it the visual inspector falling asleep and not catching defects?
The purpose of control charts is to avoid over-reaction and under-reaction in the operator actions. They are also useful to confirm if the process has improved or got worse after a modification.

46. Comment in the Continuous Improvement, Six Sigma, & Lean Group discussion group on LinkedIn:

Anand – you are quite wrong in your assessment. You stated that as long as defects are between 2.5% and 5.5% they are in control. That is only true if you are using “points beyond the control limits” as the only rule! What about all of the other rules for OOC??? You can see a mean shift (process improvement) without ever dropping below 2.5%. If you have a trend down or have many points below the mean, then your process did improve! And the goal is to force changes that make the process improve – not to ever believe that between 2.5% and 5.5% (in control) is good. But you MUST use all of the rules!

But hey – I only have 25 years experience in using, teaching, and consulting the best ways to use SPC in practical and effective ways, with tremendous success. But if you think you know better than Shewhart, Deming, Juran, Crosby, and the rest of the experts, then go ahead and make up your own way to try to find success. I guess the rest of us are just wrong…

47. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Hoo boy, what a mash-up.

First of all, I disagree with your starting premise, “In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability.” In fact, in common usage, Statistical *Quality* Control (SQC) is the phrase you’re looking for. [Many excellent texts on the topic are so titled.]

Second of all, to base your reasoning on high-tech manufacturing alone ignores the on-going *essential* – and very successful – application of SPC in many, many other “mature” industries, and in many diverse areas of those industries.

Thirdly, the use of SPC as a way to introduce the concepts and principles of variation, and distinguishing between random (Common Cause) variation and real effects (Special Cause variation), is key to making good decisions about process behavior and taking appropriate actions to *improve* process behavior – in an on-going, *continual* approach to improvement.

Fourth, the use of SPC is an excellent way to know the capability of your measurement system *before* applying any of the many other statistical methods (like DOE) and being completely misled by measurement error. [Even Gage R&R has nothing to say about *how* to improve the accuracy and precision of a measurement system – only how its total variation is distributed.]

And finally, on a more specific point, the use of range as a measure of variation was never just about ease of calculation – witness the XmR chart.

More people need to learn and apply SPC, not only in their work but also in their life (read about diabetics controlling their blood glucose levels, hypertensive people controlling their blood pressure) – not fewer, as your blog implies.

48. I do not understand your objection to my broad definition of SPC. The only difference with the one in Wikipedia is that, instead of calling it a method, I broaden the scope to the use of any statistical tool towards the same purpose. To me, it is a subset of SQC, which also includes, for example, tools to design final inspections on outgoing products that have nothing to do with process capability.

The reason I think it should be broadened is that I fail to see any value in restricting yourself to a fixed set of tools from 80 years ago. Neither manufacturing, nor statistics, nor information technology have been asleep in that time.

Ranges and standard deviations both measure spread. In the case of the XmR chart, since you only consider two data points, they are identical except for a constant factor of $\sqrt{2}$. For larger samples, the range R is easier to calculate than the standard deviation S, as can be seen from the formulas for both:

$R = Max\left ( X_{1},...,X_{n} \right )- Min\left ( X_{1},...,X_{n} \right )$

$S = \sqrt{\frac{\sum_{i=1}^{n}\left ( X_{i}- \overline{X}\right )^{2}}{n-1}}$

It made a difference manually, but doesn’t with software. Other than that, I don’t see any reason to use ranges rather than standard deviations.
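The $\sqrt{2}$ relationship for two data points is easy to verify numerically. A small sketch, with illustrative numbers:

```python
import math
import statistics

def sample_range(xs):
    """Range: the difference between the largest and smallest values."""
    return max(xs) - min(xs)

# For a subgroup of two points (as in an XmR chart's moving ranges),
# range and sample standard deviation differ only by a factor of sqrt(2)
pair = [4.1, 4.7]
assert math.isclose(sample_range(pair), math.sqrt(2) * statistics.stdev(pair))

# For larger subgroups they diverge: the range ignores the interior points
subgroup = [4.1, 4.7, 4.3, 4.9, 4.5]
print(sample_range(subgroup), statistics.stdev(subgroup))
```

With software doing the arithmetic, the standard deviation costs nothing extra to compute, which is the point of the paragraph above.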

49. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

As per Dr. Wheeler, Shewhart developed a simple and effective way to define the voice of the process – he called it a control chart. While there are several different types of control charts, they are all interpreted in the same way, and they all reveal different aspects of the voice of the process. Control charts characterize the behavior of the time series.

Shewhart further wrote that a process “will be said to be in control when, through the use of past experience, we can predict, at least within limits, how the process will behave in the future.” Thus, the essence of statistical control is predictability, and the opposite is also true. A process that does not display a reasonable degree of statistical control is unpredictable.

Shewhart’s terminology of “controlled variation” and “uncontrolled variation” must be understood in the context of predictable and unpredictable, rather than in the sense of being able to “exert” control. The user does not get to “set the limits.” We should talk about “predictable processes” and “unpredictable processes.”

Thus as per Dr. Wheeler, SPC is not about statistics, process control or conformance to specifications. While it can be used in all of these ways, SPC is much more than any one of these narrow interpretations. It is, at its heart, about getting the most from processes and continually improving those processes and outcomes.

He further goes on to state, “Where there is merely training in the use of tools, SPC will fail. However, organizations that promote education and allow the way of thinking to take root will be transformed. For this reason, SPC’s “success” appears spotty, and it will continue to do so in the future.”

Several years ago, Robert T. Amsden heard Kaizen author Masaaki Imai describe the attitude of some Japanese consultants. If asked to help improve an organization’s productivity 10 percent to 15 percent, they would politely decline. If the organization wanted 30-percent to 50-percent improvement, they would still refuse. However, if the organization said it was looking for 100-percent improvement, or even greater, they would jump at the opportunity to work with it.

What Imai was saying didn’t immediately sink in. Later, when Robert T. Amsden viewed “An Address to Dow Chemical Company,” a 1984 video by Bill Conway, then president of Nashua Corp., Imai’s message finally got through to him. Conway recalls how he directed the company’s hard memory disk division to use SPC and the Deming philosophy to improve yields. When the project began, yield was 63 percent; seven months later, the division had surpassed the target of 95 percent. Production volume increased 300 percent without adding employees or floor space. This was exactly what Imai had said was possible. And this was accomplished in the United States, with an American’s ideas.

They accomplished all these improvements, as well as others, by gaining control over business processes through SPC, explains Conway. The division identified the manufacturing processes’ 26 or so fundamental variables. Using about 125 control charts, it brought these processes into statistical control and then narrowed the operating ranges. The division learned the lessons so well, it could run a special order for a customer and, immediately afterward, easily change the process parameters back to standard disk production. This example illustrates SPC’s incredible potential.

Dr. Wheeler on “But is SPC obsolete?”
No. No other technique can boast the same combination of ease of use and statistical sophistication as the control chart. SPC’s empirical approach has been the dominant investigative technique for 400 years and shows no signs of faltering. The control chart’s unequaled ability to facilitate communication always will be beneficial. And the unparalleled simplicity with which it identifies opportunities for improvement always will have a place in organizations that wish to stay in business.

• You say: “No other technique can boast the same combination of ease of use and statistical sophistication as the control chart.”

“Ease of use” is not a term I would apply to control charts. Shop floor operators are much more comfortable with go/no-go gauges, stop ropes and Andons than with an Xbar-R chart. Even the best application attempts I have seen were riddled with mistakes and confusion, for example comparing sample averages with tolerances for individual values.

And, yes, it was sophisticated and clever statistics in 1930. The field has not been asleep since.

“SPC’s empirical approach has been the dominant investigative technique for 400 years and shows no signs of faltering.”

Are you equating SPC with the scientific method in general? Otherwise, I can’t figure out what you mean by “400 years.”

Dan Strongin and you repeatedly invoke Shewhart, Deming and Wheeler about the irrelevance of the normal distribution to Control Charts. What I see them doing is using the math of the normal distribution to compute all the parameters with which to set limits and issue alarms, and then vigorously assert that it makes no difference.

There is a branch of Statistics called non-parametric, distribution-free, or robust, in which you find confidence intervals about data without making any assumption on their distributions. But Control Charts are not that way.

If I were to argue the case, I would do it otherwise. Chemists routinely apply perfect-gas formulas to gases they know are not perfect. This is justified whenever the added accuracy of an exact formula would not change whatever decision you are making. You could argue that Control Charts are doing the same.

You could also say that measurements are not just any random variables and that the actual departures from normality that you observed are mild enough not to invalidate the method. They may depart from normality in being skewed, more concentrated around the mean, or less concentrated around the mean, but you don’t expect physical measurements on work pieces coming out of the same machine to have multiple modes or have a distribution without a mean or standard deviation.

You could add that training large numbers of people to use control charts in a variety of contexts is complex enough as it is, and that taking into account multiple distributions would have made it impossible…

Instead of making these kinds of arguments, Shewhart, Deming and Wheeler just assert that the concerns are invalid. I read Wheeler’s criticism of the “Probability Approach to Control Charts” as trying to have it both ways. You can’t simultaneously use probability theory to calculate your parameters and dismiss it as irrelevant.

Yes, the Central Limit Theorem brings sample averages closer to normality than individual values, but that applies only to Xbar charts. Shewhart invokes Chebyshev’s inequality, but it only gives you roughly an 89% probability of having all data within three sigmas of the mean for just about any distribution that has a mean and a standard deviation. Extensions of this inequality for unimodal and for symmetric distributions give you higher confidence, but not the 99.73% probability that you get with normality. And, in Statistical Method from the Viewpoint of Quality Control, Shewhart repeatedly quotes this “99.73%.” It also strikes me as trying to have it both ways.
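These probabilities are textbook results and easy to reproduce. A short sketch using the normal distribution, Chebyshev’s inequality, and the Vysochanskij–Petunin refinement for unimodal distributions (standard formulas, not taken from any of the sources quoted here):

```python
import math

def normal_within(k):
    """P(|X - mu| <= k*sigma) for a normal distribution."""
    return math.erf(k / math.sqrt(2))

def chebyshev_within(k):
    """Lower bound for any distribution with finite mean and variance."""
    return 1 - 1 / k**2

def unimodal_within(k):
    """Vysochanskij-Petunin lower bound for unimodal distributions."""
    return 1 - 4 / (9 * k**2)

k = 3
print(f"normal: {normal_within(k):.4f}, "
      f"Chebyshev >= {chebyshev_within(k):.4f}, "
      f"unimodal >= {unimodal_within(k):.4f}")
```

At three sigmas, the normal distribution gives 99.73%, Chebyshev guarantees only 1 − 1/9 ≈ 88.9%, and the unimodal bound 1 − 4/81 ≈ 95.1%, which is the gap the paragraph above is pointing at.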

50. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I appreciate the modern adaptation of referring to control charts as process behavior charts. In this sense the information gleaned is more appropriately viewed as a lag measure of process capability. Provided the process behavior charts are representative of the population (which may be debatable), they may still serve a valid purpose; however, as Michel astutely pointed out, the arena for application is more conducive to processes of lower volume. I also concur that organizations benefit from having a small population of resident Black Belts or Master Black Belts to serve the rest of the organization at a level of statistical depth that will be understood by the audience.

51. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Michel, in my observation SPC is obsolescent mainly in the manufacturing industries where the sample size is small. From an economic point of view, it is not feasible to put SPC to work on small-lot runs of critical products. These days, industries are designing more critical and expensive products that become part of major assemblies with higher cost. No one wants to risk a mistake in the process and then sit and wait for the results of SPC analysis. The SPC tool is purely based on the data presented for analysis; for that very reason, the accuracy of the data is very important, as is the accuracy of the tool used to measure the variable. Besides all of the above, one thing never mentioned is how the person uses the measuring tool itself. The basis of using a precision measuring instrument is the force applied, which varies from person to person; optimum use of the measuring tool comes only with practice. Instead, we have gage R&R studies that give more importance to the gage itself than to the person who takes the measurement. When you look at all these stringent requirements, complying with them is not very rewarding to the manufacturer. So the odds are against using SPC on short-run, expensive projects.
However, the SPC tool may show advantages in long-run projects where, day in and day out, the same product is produced; measurements will show some trend during analysis, and that trend can be converted into an action plan to bring the process to the expected level. It is a lengthy process, and the manufacturer must absorb the cost of any rejections due to unacceptable variation.
The bottom line is that no one tool fits all. My caution to those who are investing in SPC tools and equipment: do your math again to make sure it is the right decision and guarantees a return on investment. If you decide to use the tool anyway, you are taking a very high risk for your company and for the product as well. There are more negative comments than good ones about this tool; that is why I started with the word obsolescent. It is only a matter of time before this tool is obsolete in the majority of manufacturing shops. You may find more benefit in proactive process control than in reactive process control, or “policing the process,” which would be the best term for it.

52. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I would want to comment on “SPC is obsolescent mainly in the manufacturing industries where the sample size is small.”

The subgroup size is not the most important characteristic of the chart. Rational subgrouping is more important than the subgroup size, even if that requires subgroups of size one. The charts will work with subgroups of size 1, and they will do so even when the data are not normally distributed.
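To make the point concrete, here is a minimal sketch (mine, with made-up data, not from the discussion) of how the natural process limits of an individuals (XmR) chart are computed from subgroups of size one:

```python
# A minimal sketch of XmR (individuals & moving range) limits, using made-up
# data. The scaling constants are the standard ones for subgroups of size 1:
# 2.66 = 3/d2 with d2 = 1.128, and 3.267 = D4, both for moving ranges of 2.

def xmr_limits(data):
    """Return (center, LNPL, UNPL, mR-bar, mR UCL) for an individuals chart."""
    x_bar = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    lnpl = x_bar - 2.66 * mr_bar   # lower natural process limit
    unpl = x_bar + 2.66 * mr_bar   # upper natural process limit
    mr_ucl = 3.267 * mr_bar        # upper limit for the moving-range chart
    return x_bar, lnpl, unpl, mr_bar, mr_ucl

measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]
x_bar, lnpl, unpl, mr_bar, mr_ucl = xmr_limits(measurements)
print(f"center={x_bar:.2f}  limits=({lnpl:.2f}, {unpl:.2f})  mR UCL={mr_ucl:.2f}")
```

Note that the limits come from the average moving range rather than a global standard deviation, which is part of why the chart does not lean on a normality assumption.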

53. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

If control charts are obsolete, what is there to replace them?

There are nowadays more control chart types than in the 1920s, and computers have made it easier to calculate control limits and draw charts. Even if some things have changed, the purpose of control charts is still the same as in the early days: the separation of common causes from special causes.

It is quite common to draw a time series chart of measurement data. Then questions arise: is something special happening in the process? Is there a trend? Is the process stable? Without proper decision limits we just have to guess. Guessing brings along tampering. Tampering is quite common, but far from the best way to manage.

Please let me know: which tools do the same as control charts, that is, create reasonable decision limits between common and special causes in processes?

54. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

If Statistical Process Control is obsolete, what would replace it in continuous process industries, in industries where there is a lot of physics in the process: temperature, pressure, flow, time, humidity, where there is a lag between processing the materials and seeing the finished product? Or, are those the “mature industries” you reference that don’t meet the definition of “high tech”? Sorry, I don’t see anything in the blog to convince me it’s obsolete.

• I don’t have a one-size-fits-all answer, but here are a few partial ones, including some previously given in the discussion on Lies and Statistics.

A rotary cement kiln is not something you would consider high technology, and it has parameters like temperature, pressure, flow, time, humidity, and others that influence the outcome. Hirotsugu Akaike, a statistician I met in Japan in 1980, worked on this problem and proposed a solution involving a multidimensional autoregressive-moving-average (ARMA) model, using what has since become known as the Akaike Information Criterion (AIC) to prevent overfitting. His approach was implemented in software and commercialized in Japan for embedding in kiln controllers. Admittedly, it works upstream from SPC, on the control parameters of the process rather than on its output, but it is statistics applied to process capability.
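As a rough illustration of the AIC idea (a simplified, univariate sketch of my own; Akaike's kiln work used multivariate ARMA models, not this toy example):

```python
import numpy as np

# A simplified, univariate illustration of the AIC idea mentioned above:
# fit AR(p) models by least squares and keep the order with the lowest AIC.
# (Akaike's kiln controllers used multivariate ARMA models; this is only a sketch.)

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model with intercept; returns (coef, RSS)."""
    n = len(x)
    y = x[p:]
    lags = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(y)), lags])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss

def aic_order(x, max_p=6):
    """Pick the AR order minimizing AIC = n*ln(RSS/n) + 2*(number of parameters)."""
    best_p, best_aic = None, None
    for p in range(1, max_p + 1):
        _, rss = fit_ar(x, p)
        n_eff = len(x) - p
        aic = n_eff * np.log(rss / n_eff) + 2 * (p + 2)  # p lags + intercept + variance
        if best_aic is None or aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

# Simulate an AR(2) process: x[t] = 0.6 x[t-1] - 0.3 x[t-2] + noise
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print("selected AR order:", aic_order(x))
```

The penalty term is what keeps the fitted model from chasing noise: a higher order always lowers the residual sum of squares, so AIC only rewards it when the improvement is large enough.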

More generally, many machines now have computer controls. In the 1930s, none did. These controllers — whether they are CNCs, PLCs, SCADA systems, or custom-developed — make sure the machines follow the right sequence of steps, with the proper parameters, be they feeds and speeds, flow rates, temperatures, pressures, etc., so that what is done to each workpiece is much more tightly controlled than 80 years ago.

Often, these controllers can take measurements on the workpiece itself, using, for example, spindle probes in machining centers, eliminating the need for operators to do it. Based on these and other sensors, the controllers issue alarms and stop the machines as needed to prevent the production of defectives. This is part of Jidoka.

If that is not enough, you incorporate go/no-go checking with a gauge into the manual transfer between machines. SPC specialists prefer measured variables to go/no-go attributes on the grounds that a measurement is richer information. The difference here, however, is between a go/no-go check on every part as it comes off the machine, and measurements that you cannot afford to make on every part. If you have a policy of immediately stopping the machine when it makes a defective part, the go/no-go check on every part can provide better and more timely information.

Then you have Andon lights to communicate the status of the equipment and ensure a prompt response.

I didn’t mention mistake-proofing, because it is about preventing human error rather than detecting process capability problems. It is key to achieving high levels of quality, once you have achieved process capability. It is a 3rd-level technique.

First, you use statistical methods and process knowledge to achieve process capability. That’s Level 1. It gets you to 3% defective.

Then you change production to one-piece flow lines with go/no-go gauge checking as needed between operations. This provides rapid detection of discrete problems like tool breakage. That’s Level 2, and it gets you to 0.3% defective.

The main problem then becomes human error, and mistake-proofing then gets you to 15 ppm. That’s Level 3.

Level 4 is the combination of Change-Point Management and JKK (Jikotei Kanketsu).

• After going through these discussions I am convinced that SPC is not obsolete! In my opinion SPC will not die. While I was working in a manufacturing set-up during the initial part of my career, SPC concepts were relevant and used. As I moved to the software industry, SPC concepts were of little use and analyses were subject to everyone’s interpretation! I also observed a lot of resistance to SPC concepts from the development and management community (there are exceptions, though!). Even I thought it was dead!

I suggest that there should be a serious change in the way SPC concepts are taught today (by whom? I don’t know!). I feel books and syllabi should make room for relevant case studies from different industries. Teaching methods should target the people who use it, not just the elite and learned. The objectives should be a) to teach the most commonly used SPC tools in today’s context, b) to show where and when they are applicable, and c) to help users interpret probable outcomes and arrive at common conclusions.

As Sanda stated earlier, the rule applied by users is that “what you do not understand, you reject…”

” If you don’t use it, you lose it”…. Matthew 25: 14-30

The question is, how do we continue to use the well-conceived concepts of our gurus, without a common understanding among us and without a plan to carry the legacy forward?

Thanks to every contributor here for giving so much of learning experience 🙂

55. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

There are some key questions coming out (from a different thread). I have attempted to list them; please add to or modify them for my sake.

Question 1 – Are control limits both economic and probability limits?

Question 2 – Was the individuals chart used by Shewhart?

Question 3 – Is it true that the XmR chart is not attributed to Shewhart?

Question 4 – Is there evidence that 3-sigma limits always provide the right balance for all processes, all conditions and all costs?

Question 5 – If anyone knows of any studies that support t=3 vs. any other value for i) various Type I and II errors, ii) various costs resulting from those errors, and iii) other multipliers than 3, please send me the references of the studies.

Question 6 – Is Deming wrong about control charts not being tests of hypotheses?

• Question 1 – Are control limits both economic and probability limits?

The two epithets are not supposed to be incompatible. Statistical decision theory is about losses. You set decision thresholds so as to minimize your losses on average. Skewed distributions commonly arise as a result of asymmetrical risks. If you cut a rod too long, you can rework it, but if you cut it too short, you have to scrap it. As a result, your length distribution will be skewed towards the high end of the spec.
I don’t believe, however, that the choice of 3 sigmas arose from a formal risk calculation, which would have been unlikely to produce a round number.
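The rod example lends itself to a small numerical sketch (mine, with invented costs and specs, not from the discussion): with asymmetric scrap and rework costs, the loss-minimizing aim point sits above the middle of the spec.

```python
import numpy as np

# A made-up numerical sketch of the point above: with asymmetric costs
# (scrap a short rod, rework a long one), the loss-minimizing aim point
# sits above the middle of the spec. All numbers are invented.

rng = np.random.default_rng(1)
sigma = 0.5
lo, hi = 99.0, 101.0              # spec limits on cut length
scrap_cost, rework_cost = 10.0, 1.0

def expected_loss(target, n=200_000):
    """Monte Carlo estimate of expected cost per part when aiming at `target`."""
    x = rng.normal(target, sigma, n)
    return (scrap_cost * (x < lo) + rework_cost * (x > hi)).mean()

targets = np.linspace(99.5, 100.5, 21)
best = min(targets, key=expected_loss)
print(f"loss-minimizing aim point: {best:.2f} (spec midpoint is 100.00)")
```

With scrap ten times as costly as rework, the optimum aim is noticeably above the midpoint, which is exactly the asymmetry that skews the resulting length distribution.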

Question 2 – Was the individuals chart used by Shewhart?

I don’t know.

Question 3 – Is it true that the XmR chart is not attributed to Shewhart?

Based on one of the documents I received from you or Dan Strongin, it comes from Britain in the 1940s, not Shewhart in the US.

Question 4 – Is there evidence that 3-sigma limits always provide the right balance for all processes, all conditions and all costs?

The Six Sigma people obviously disagree.

Question 5 – If anyone knows of any studies that support t=3 vs. any other value for i) various Type I and II errors, ii) various costs resulting from those errors, and iii) other multipliers than 3, please send me the references of the studies.

For what it’s worth:
Burr, I.J. (1967) “The effect of non-normality on constants for Xbar and R charts,” Industrial Quality Control, Vol. 23
Schilling, E.G. and P.R. Nelson (1978) “The effect of non-normality on the control limits of Xbar charts,” Journal of Quality Technology, Vol. 8.

Question 6 – Is Deming wrong about control charts not being tests of hypotheses?

Without control limits, the charts just visualize time series, and that is often all that is needed to understand what is going on. The minute you slap on control limits, however, with every new point, you are testing the hypothesis that there is a special cause of variation.
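The hypothesis-test view can be checked numerically; this sketch (mine, not from the discussion) estimates the false-alarm rate implied by 3-sigma limits for an in-control normal process:

```python
import numpy as np

# A quick numerical check of the hypothesis-test view stated above: for an
# in-control normal process, 3-sigma limits flag roughly 0.27% of points,
# i.e. about one false alarm every 370 points on average.

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, 1_000_000)        # in-control process, mu=0, sigma=1
false_alarm_rate = np.mean(np.abs(x) > 3.0)
print(f"false-alarm rate: {false_alarm_rate:.4f} (theory: 0.0027)")
```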

56. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

(1) Michel’s “point” was that SPC is obsolete; he based his general conclusion on *one* specific industry.
(2) With “on-stream” analyzers sampling at a high frequency, the data collected and analyzed with SPC are hardly “lagging.”
(3) The beneficial analysis using SPC of the autocorrelated data present in rapidly sampled sources has been extensively covered in the literature.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I don’t subscribe to one-size-fits-all in any aspect of CPI. I do find merit in the idea that, regardless of the industry in question, there is cause to seriously analyze how and why SPC is employed. If the determination is in favor of sustaining SPC, then at least recognize it as a lag measure and make appropriate decisions that are weighted by other factors as well. In this train of thought, if process control charts are renamed Process Behavior Charts, then SPC could be recognized as part of an overall feedback loop from the output to the appropriate value stream input point. It may seem to be semantics, yet there is an underlying difference in the way an organization should be set up to analyze and respond in a Control Chart vs. a Behavior Chart environment. Of course, if analysis reveals that neither is an advantage, then other tools are available.

57. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Both of the following are comments by Dr. Wheeler, taken from his articles on the internet. Just FYI.

• “No other technique can boast the same combination of ease of use and statistical sophistication as the control chart.”
• “SPC’s empirical approach has been the dominant investigative technique for 400 years and shows no signs of faltering.”

I would want to comment on “Dan Strongin and you repeatedly invoke Shewhart, Deming and Wheeler about the irrelevance of the normal distribution to Control Charts.”

As per Henry R. Neave and Donald J. Wheeler’s paper, “Shewhart’s Charts and the Probability Approach,” — “However, as one reads about “control charts” in today’s technical journals, it does not seem that one is reading about the same technique that was described by Shewhart.”

There has been a lot of distortion against the original work of Shewhart.

Before the understanding of SPC could mature, it took off in the direction of becoming a system, as opposed to a problem-solving tool. Most businesses have applied SPC in a narrow format; if one were to reassess the SPC philosophy and turn the direction of Statistical Quality Control, they would find a realizable gold mine awaiting them.

The definition that I go with for SPC is — SPC is not about statistics, process control or conformance to specifications. While it can be used in all of these ways, SPC is much more than any one of these narrow interpretations. “It is, at its heart, about getting the most from processes and continually improving those processes and outcomes.” It is a problem-solving tool. Control charts and process capability analysis are two basic tools of SPC.

To me, after Shewhart, Dr. Wheeler is “the” authority on the subject of SPC. So, while there are unanswered questions, the quest is to get those out through such discussions.

I will try and attempt to address your questions and comments.

• You wrote: “Before SPC understanding could mature, it took off in the direction of becoming a system, as opposed to becoming a problem-solving tool.”

I think what you mean is that it became a ritual. Such is the common fate of all approaches that become the focus of external audits. Regardless of whether it works or not, it has to look a standard way for the auditors to tick off the items in their checklist and certify you.

You wrote: “The definition that I go with for SPC is — SPC is not about statistics, process control or conformance to specifications. While it can be used in all of these ways, SPC is much more than any one of these narrow interpretations.”

If it’s not about statistics and process control, then why is it called “Statistical Process Control”? And it is process control in an unusual sense of the term. Outside of SPC, process control refers to programming automatic machines and making sure they follow instructions, by means, for example, of feedback control loops. SPC, on the other hand, focuses on the output of the process and on the goal of achieving and maintaining process capability.

Narrow interpretations are good. Narrow interpretations support communication. With broad interpretations, everything is everything, and we cannot have a conversation and know exactly what the other person is talking about.

I was surprised to find Shewhart’s own words equating quality with conformance to specification. Of course, in general, we don’t do this anymore but, when you are worrying about process capability, it is still what you have to do. If you get the process to routinely make parts with critical dimensions within tolerances, the job of SPC is done for now; the job of quality assurance is not.

Shewhart’s view of continuous improvement is cycling through specification, production and inspection, which I take to mean that, whenever you have achieved process capability, you tighten the tolerances and do it again. He may have been the first one to express this view. I don’t recall any notion of continuous improvement in the works of Taylor or even Gilbreth. To them, there was a “one best way,” and, once you had found it, by definition, you couldn’t improve on it. It was an optimization rather than a continuous improvement mindset.

This being said, I see no reason to call “SPC” everything that is intended to “improve processes and outcomes.” Techniques like Andon, successive inspection, mistake-proofing, cells, kanbans, etc., are used in continuous improvement but nobody would consider them part of SPC.

In business, it is common for acronyms to be disconnected from their original meaning. When NCR and KFC respectively expanded beyond cash registers and fried chicken, their names no longer meant “National Cash Register” and “Kentucky Fried Chicken.” But it doesn’t usually happen with ideas. “SPC” was introduced as a short way to say “Statistical Process Control,” and I see no value in using it to mean anything else.

58. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

SPC is, generally, a form of feedback control – the feedback, and control, are typically manual rather than “automatic.” But that difference does not eliminate SPC as a form of feedback control. And even the manual part can be automated…

Michel, you may have been surprised by Shewhart’s equating quality with conformance to specifications, but that was *the* definition of quality back then. It wasn’t until 1960 that Taguchi articulated a new (and improved) definition: “On target with minimum variation.”

And Deming in his _Out of the Crisis_ (1980s) pointed out that “the supposition that everything is all right inside the specifications and all wrong outside does not correspond to this world.”

And so, I disagree with your statement: “If you get the process to routinely make parts with critical dimensions within tolerances, the job of SPC is done for now.” Hardly…parts with dimensions within tolerances is not where any quality manufacturer (or his customers) would want to leave his process. SPC would continue to be used – some would say until the nirvana of “six sigma” is achieved. 🙂

And your last two paragraphs would seem to contradict your blog’s assertion that SPC includes “…application of statistical tools…” – which is really SQC – disconnecting SPC from its commonly accepted meaning.

• I have to be very careful in my choice of words. I meant automatic feedback control, the kind you got 200 years ago in a steam pressure regulator with rotating balls and now with PID boxes.

The Taguchi definition you quote still strikes me as too narrow a view of quality. I go with Juran’s “agreement of reality with expectancy.” It is centered on customer experience and treats parameters you can measure as substitute characteristics.

In the paragraph about the job of SPC, the key phrase was “for now,” with the next paragraph elaborating on Shewhart’s cycle and its historical significance, as I see it.

I don’t understand your concern about the last two paragraphs. To me, if you use statistical tools, whatever they may be, for the purpose of getting a vacuum forming machine to consistently put out parts you can use, that’s SPC, and it’s the version that does not become obsolete because it moves with the needs and the technology.

If you apply the same tools to acceptance testing of resin pellets at Receiving, it’s not SPC.

59. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

You said “If you apply the same tools to acceptance testing of resin pellets at Receiving, it’s not SPC.”

Ah, alas and alack, we did indeed apply the same tools at “Receiving” – and it *was* SPC. Our suppliers had to submit Certificates of Analysis (COAs) with each shipment (lot, batch, etc.). We only had to analyze their COA data over time using histograms and SPC charts to see how well, and *what,* they were doing. For example, a truncated tail on a histogram told us they were “inspecting quality in” by making out-of-spec material but not shipping it (to us at least) – an embedded cost we were paying for. Or, a histogram with a missing mid-section told us they were shipping the “heart-cut” to someone else who had narrower spec limits than we did. Or, a histogram with an unusual “stack” of values at one end or the other – it’s called “flinching” and happens when a test result is just outside the spec limit and the supplier, knowing the measurement system itself has variation, re-tests until the result is just *inside* the spec limit.
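The truncated-tail signature is easy to reproduce with simulated data; this sketch (all numbers invented, not the commenter's actual data) shows what a screened shipment looks like:

```python
import numpy as np

# A simulation of the "truncated tail" signature described above: a supplier
# whose process runs high makes out-of-spec material but only ships what
# passes inspection. All numbers are invented for illustration.

rng = np.random.default_rng(3)
spec_lo, spec_hi = 9.0, 11.0
produced = rng.normal(10.6, 0.5, 10_000)                           # process runs high
shipped = produced[(produced >= spec_lo) & (produced <= spec_hi)]  # 100% screening

# A centered, capable process tapers off well inside the limits; here the
# shipped data stays dense right up to the upper spec limit and then stops dead.
near_limit = np.mean(shipped > spec_hi - 0.1)
print(f"shipped parts within 0.1 of the upper limit: {near_limit:.1%}")
```

A histogram of `shipped` would show the sharp cutoff at the spec limit that the comment describes, along with everything the customer is silently paying for in the supplier's scrap.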

• Supplier quality versus SPC:

The time frame is different. You work on arriving shipments rather than workpieces coming out of your own machines, which means you often don’t have the process sequence information and traceability is an issue. As the example of chips supplied to automotive shows, you may not have adequate technical knowledge of your suppliers’ processes. And you don’t have authority over your supplier’s employees. You may use some of the same tools, but you are tackling a different problem, and the range of actions you can take is also different.

60. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Michel, are you just trying to incite us? You can’t really believe SPC is obsolete unless you stick to a narrow-minded definition under which it consists of doing the charts by hand the way Shewhart did at Western Electric. That would be like saying the lightbulb is not Edison.

SPC is part of Intel’s and Toyota’s overall processes, with the sole aim of watching for “OOC” or “unpredictable” conditions. How delightfully Shewhartian!

Those examples I must pull from second-hand experience. First-hand, though I do not always use process behavior charts, for various reasons, where I do they are the most elegant solutions to the problems I help people confront: how to better manage the whole tamale, not just the production side of the business.

I have used them to help food processors better understand their raw material, thereby having a more realistic idea of what they can and cannot expect from their raw material, and tell their customers, and the real overall costs in supplying their products when their assumption about the raw material and its variability was wrong.

You’ll like this: using some of what I shared with them, they developed a very complex kanban system that consisted of lowering either a red or a green bin to signal the brineyard that they needed either red or green peppers. The silly thing saved them thousands of dollars a season, for an investment of… nothing more than brains. They had to abandon it and buy radios when they rebuilt a shed and it blocked the view. Pity. Great ROI!

The machines of nature, though wildly sophisticated, don’t have andon lights, so those old fashioned charts helped the plant isolate who were the better farmers, and the worse, and got them to talk to the other farmers about what they do, and lo and behold, they are seeing substantial improvement in the raw material.

I have shared them with production managers who use them to help predict how much of each thing to produce, to level production and maximize throughput dollars, virtually eliminating late deliveries and excess inventory on key items, the Pareto items, the ones that matter.

I have used them in analyzing sales per item in a supermarket to predict how much to stock and manage the shelves, minimizing inventory, freeing up do-re-mi, and at the same time balancing the inventory over the month so they could run on purchase-to-sales reports rather than wait for the P&L: real time, in a supermarket deli. All with pencil and paper, by the way. Oddly enough, the Pareto principle applied here, as around 18% of all the items were responsible for 90% of the profits. Another silly old archaic tool.

But by far the most important application has been in keeping managers from pushing their people when what they were seeing was a predictable part of the process, thereby eliminating fear, and allowing pride and joy in work to flourish, leading to an almost 800% upshot in profit in a small company, increased productivity, increased quality, more loyal customers.

None of this would have been possible for them to understand and create for themselves without XmR charts and SPC thinking. And all of that represents SPC, as it shows the use of simple statistics, run through an even simpler and more elegant tool that balances the empirical with the theoretical, to create breakthrough improvement for the overall process that is called the business.

So I am honored to be included in any group that includes Gagandeep and Wayne! And to have you mention me in a sentence with Shewhart and Deming. My five seconds of fame, or 4.9999999999999997 for those in Six Sigma.
Logical Error Number Four, if I remember, is to use one example to try to prove a general theory, AKA hasty generalization.

One can take any aspect of something and selectively choose cases where it doesn’t apply. Let me go to the absurd and prove Andon lights are obsolete by the same, though more obvious, rhetorical technique.

How would you use Andon lights in bars? There are more people employed in bars than in high-tech, high-speed manufacturing, and in bars Andon lights are utterly useless. THEREFORE, Andon lights are obsolete. It is a logical error: using selected examples from a single context.

All kidding aside, since you did incite us: from what you write, it seems you might not be that familiar with the Loss Function. If you haven’t read Taguchi, I would highly recommend him. But then, you probably have. If not, that’s OK; most people haven’t. His definition of quality is anything but narrow: just the opposite, really.

Though I am not enough of an expert to take up that sword, just a Taguchi dilettante.

One more tidbit: you mentioned in the last discussion that you didn’t see the difference between PDSA and PDCA; here’s a post I wrote on it. Hope you like it, and if you don’t, you will lie and say you do… (wink) http://demingcollaboration.com/pdca-pdsa-is-it-aa-simple-as-a-c-or-an-s/

• First-hand is better than second-hand. Another semiconductor company I knew first-hand made chips for automotive applications and spent ~$20M/year on failure analysis. It received failed chips from the field, chemically stripped them layer by layer down to raw silicon, took measurements, and observed them with an electron microscope to identify the cause of the failure, and all of it was reported to the customer. On the face of it, it looked like an elaborate effort at solving process problems. When you looked closer, however, it became clear that the sole purpose was pacifying the customer.

By the time the chips came back from the field, four months had passed since they had been made. By the time the analysis was complete, the operation that had caused the failure had undergone two or three engineering changes, making the analysis technically pointless. Appearances may be deceiving.

The semiconductor industry is a relevant case because it uses statistical tools more than any other industry. Semiconductors and pharmaceuticals are the largest manufacturing customers of statistical software. You can check with suppliers. The pharmaceutical industry uses the tools because they are mandated for drug approval, so their use is externally driven. The semiconductor industry uses them out of technical necessity.

I have worked in other industries too, from aluminum foundries to frozen foods and car assembly, and I have never seen any realization of the vision of control charts used everywhere by operators to control processes, as advertised in the literature.

I am not swayed by your second-hand examples, because either I have first-hand knowledge of the company (Intel), they are ancient (Toyota), or they are unclear about actual technical content. Again, if you agree that SPC is the application of statistics to the achievement and maintenance of process capability, it doesn’t become obsolete, because you are not limited to using yesterday’s technology that was developed for yesterday’s problems.

Your first-hand examples, on the other hand, are interesting. The applications you describe, however, strike me as mostly out of the realm of SPC: stocking shelves, managing inventories, triggering replenishment,… Great stuff, but it is not about getting machines to do what you want them to.

I learned Taguchi methods from his disciple Madhav Phadke in “Quality Engineering Using Robust Design.” It is useful stuff.

About PDCA/PDSA, I think both are overemphasized in the US, compared to Japan. About PDCA, you need to remember that it is an English-language acronym that I believe was made in Japan. If you consider other such acronyms, like SMED or TPM, they don’t necessarily make much sense to Americans. “Single-Minutes Exchange of Die” is not understood in the US without an explanation that “Single-Minutes” means less than 10 minutes. “Total” in TPM means with involvement by everyone, which isn’t the most common usage.

Yes, the “Check” in PDCA doesn’t sound right in English, but I am not sure that Japanese speakers see the nuance between the two foreign words Check and Study. Do we want to endorse and import back a Japanese misuse of English, or do we want to correct it and make an issue of it? With SMED, we took the first course of action; with PDSA, Deming took the other.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

“I am not swayed by your second-hand examples, because either I have first-hand knowledge of the company (Intel), they are ancient (Toyota), or unclear about actual technical content.”

Well, thanks for the compliment! What were you, a lawyer in a past life?

According to our beloved scientific method, all you need is one outcome that contradicts a theory, and it’s back to the lab. On the one hand, I fully believe you never saw SPC while working with Intel, great. On the other hand, we have:

1. The previous link, which has a diagram with “SPC” clearly written on a line, with the caption “SPC: Monitor the statistical data and stop the tool if trending to OOCs (Out of Control Conditions), 100% of the Fab.” C’mon, that is pure Shewhart… You don’t need a bunch of people on the floor with charts hanging to qualify.

2. If that is not enough, there is this from Intel, 2006: http://cache-www.intel.com/cd/00/00/33/05/330517_330517.pdf (see slide 5). Hard to argue; it clearly says SPC.

And the coup de grâce:

THE INTEL QUALITY SYSTEMS HANDBOOK, downloadable from their site!

READ IT AND WEEP!

Section 6.4 Manufacturing Systems, page 43

“Statistical Process Control (SPC) tools allow Intel Manufacturing to identify problems early and prevent excursions. Station-manufacturing SPC is a key tool to reduce variability of key parameters for all of the manufacturing and job processes/products.

In‐line process control monitoring is ongoing, and significant parameters are tracked using tools such as control charts.”

Page 36 on Suppliers Detection: “Suppliers should provide updated control charts and timely closure for any out of control situation.”

Go ahead, try to wiggle out of it now! I love sparring, but I have real work to do, so either you get it this time or I’m afraid I can no longer afford the time to lock horns on this. Sorry!

Cripes, they even have an SPC database for suppliers on their site: https://supplier.intel.com/sps_spc/

You ask if SPC is obsolete, then you narrowly define specs for SPC and Shewhart that fit your argument. Fight fair!
Do you really limit SPC only to trying to get machines to do what you want them to do? Boy, you are alooooone on that one.

The purpose of SPC is not limited to controlling machines, it is more about understanding processes. The control part means stable, predictable, as in a state of statistical control, not “do what I tell you to do ya big lump of steel.”

A process is defined by the Oxford as “a series of actions or steps taken in order to achieve a particular end:” so, despite the compliments, I beg to differ, the things I work on are processes and can benefit. My version of SPC is about using it for UNDERSTANDING, not just keeping things under control. Cause it aint what ya don’t know as much as what you think you know that aint so that kills us!

• In Intel documents, you have references to SPC in a PowerPoint presentation and a supplier handbook.

What are you supposed to conclude from that?
Just because there is a manual, does it always mean that, when you go out to the shop floor, what you see matches what the manual says?
Have you never seen a customer organization mandate that its suppliers do things it doesn’t do?
And PowerPoint presentations always tell the truth and nothing but the truth?
What do you believe, documents or your own eyes?

This being said, a company’s documents may be half-true. They may simply embellish the truth without being outright lies. They are worth looking at, at least to see the facade the company wants to present. What does the Intel presentation actually say?

Slide 5, which Dan quotes as a refutation of what I have been saying, defines SPC as follows:
“The use of statistical techniques to analyze a process, take appropriate actions to achieve and maintain a stable process, & improve process capability.”

It seems to me to be consistent with the definition I like to use and that several of you argue against for reasons that escape me:
“In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability. In this broad sense, you couldn’t say that it is obsolete…”

And the other PowerPoint Dan pointed out to me earlier was by an Intel engineer who reported using EWMA (Exponentially-Weighted Moving Average) charts that are not part of the standard package commonly sold as SPC. I have never used EWMA charts, so I have no opinion of their usefulness. Gagandeep, however, tells me that Don Wheeler has no use for them and excludes them from SPC, making the Intel people heretics.
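Since EWMA charts come up repeatedly in this thread, here is a minimal sketch of the EWMA statistic itself (the function and the λ = 0.5 example are illustrative choices of mine, not from any particular SPC package):

```python
def ewma(values, lam=0.2, z0=0.0):
    """Exponentially-weighted moving average: z_t = lam*x_t + (1-lam)*z_(t-1).

    A small lam smooths heavily and reacts slowly; lam = 1 reproduces the
    raw data. The chart's control limits (not shown here) narrow the usual
    3-sigma band by a factor that depends on lam.
    """
    z, out = z0, []
    for x in values:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

print(ewma([1, 1, 1, 1], lam=0.5))  # [0.5, 0.75, 0.875, 0.9375]
```

The smoothing gives each point a memory of its predecessors, which is what makes the chart sensitive to small sustained shifts that a Shewhart chart would miss.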

If you ask about cells, kanbans, andons, kamishibai, etc., it’s not difficult to google pictures of contemporary manufacturing shop floors using these techniques, massively. All I have seen on SPC in the past couple of days from this group is 50+ year-old pictures from Toyota, a reference to a speech by an executive from Nashua 30 years ago, and first-hand accounts of the use of the same tools by Wayne and Dan in applications other than process control in manufacturing.

If you have been using statistical tools to achieve and maintain process capability in manufacturing operations, in the past, say 20 years, please share your experience. Details are welcome.

61. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I have first-hand experience from the semiconductor industry, having worked in a semiconductor factory for 20 years.

At least the factory I was in and those I was in contact with used SPC routinely. We used simple xbarR-, xbars-, and XmR-charts. Sometimes EWMA charts.

We also tried automatically moving limits in EWMA charts, but they were too complicated to be taken into wide use, however bright and theoretically correct. So we rapidly turned back to the simple ones.

They were not done by pen and paper, but implemented in the production control system. Otherwise they were old stuff.

Of course we were not stuck with only the simple control charts, but also used other statistical tools in process characterization, analyses and experimentation: different graphs, ANOVA, ANOM, linear regression, cluster analysis, data mining etc.

Even if we had knowledge of other statistical tools and used them, no other tool undermined the status of the simple control charts. They were simple, understandable and enlightening. They clearly showed how the process was behaving.
To my mind, SPC control charts (or process behaviour charts if you like) simply are good and they do their job well.

They are robust. They do their job regardless of the underlying distribution. The limits have been calculated using normal distribution (something must be used), but they are not restricted to it.

Of course the probability of a false alarm or such varies with different distributions. But control charts should not be used as a probability tool, and no confidence limits should be attached to them anyway, just because we do not know the underlying distribution and because all distributions in the real world waver.

They also withstand (at least slight) autocorrelation. In severe cases modifications should be made, I guess, but this has not occurred yet in my professional life, so I have not examined it any further.

There have been some attempts in the literature to calculate “economical control limits” starting from the losses from the different types of decision errors, but at least I have not seen them in use. They may be theoretically correct, but unnecessarily complicated to be taken into use. So at least I have stuck to Shewhart’s justification for 3-sigma limits: “…it seems to be an economically reasonable value…”. I guess that 2-sigma limits would be too narrow and 4-sigma too wide. So, let it then be 3 sigma.

So, simple control charts are good, robust and they do their job well. However I am more than enthusiastic to learn of tools that are even better. This has been an interesting conversation (thanks Michel for your provoking claim), but I have not seen anything yet that could replace simple SPC control charts.

• Thanks. I hope you don’t mind, but I have follow-up questions.

To the extent you are allowed to share this, I was wondering which operations this was on, oxidation, deposition, photolithography, etching, etc.

Were the characteristics measured inside the production equipment or on measurement instruments outside the equipment?

How was this work integrated with overall yield enhancement efforts?

Who was using the charts? Was it production operators, process engineers, technicians?

Were the charts on display all the time near the equipment, displayed only when the controller detected an event requiring investigation, or displayed on request off-line on a screen outside the fab area?

What kind of actions were taken in response to alarms?

62. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

In my last experience with SPC, there were no charts. Everything was done in real time by data fed into the SPC software. When an alarm condition happened, quality and manufacturing literally received an alarm e-mail. The e-mail would contain info on the WECO rule broken, the data that caused the alarm, the date and time of the incident (in my case, the clean room that failed), and a brief explanation that engineering had prepared in advance as to probable causes and actions. SPC is old, but still an essential tool in monitoring a process. All the math hasn’t changed (did we really expect the math to change?). Just the results modernized.

• So much for visualization, and having operators manually update charts on the line side! Some participants in this thread think it is of great value. In what you describe, all the analysis is done under the hood by software, and the responders are notified of alarms by email.

In a first phase of automation, you take a manual procedure and implement it in software. Then you realize that it gives you the opportunity to do other things that you couldn’t manually. In a second phase, you migrate from horseless carriages to cars.

Shewhart’s charts were designed to be drawn, maintained, and analyzed manually by people. The techniques used were constrained both by the information technology of the 1920s and by the need for the tools to be understood at some level by their human users. By taking people out of the data collection and analysis loop, you remove these constraints but at the same time sever the relationship between operators and the procedure.

Shewhart modeled process characteristics as the sum of a constant and white noise. All the calculations are based on the assumption that the fluctuations are normal and that the measured values on all workpieces are independent. As Lasse said, “something must be used,” and you decide that the discrepancies between reality and the model are negligible with respect to the objectives you pursue.

Software allows you to have a plan for every characteristic. You are not restricted to one variable at a time; you can use multivariate time-series models if appropriate. If there is autocorrelation in the process, you can use autoregression, etc. Of course, you have to know what you are doing, both statistically and in terms of process physics and chemistry. But that has always been the case. And then your system must communicate in ways that human responders can understand and act on, which is another challenge.
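As a sketch of the autoregression remark above, one common approach is to fit a simple AR(1) model and chart the residuals instead of the raw, autocorrelated readings (illustrative code; the function name and method are my own, not a standard package’s):

```python
import statistics

def ar1_filter(x):
    """Estimate the lag-1 coefficient phi of an AR(1) process and
    return (phi, residuals); the residuals, being approximately
    independent, can go on an ordinary individuals chart."""
    m = statistics.fmean(x)
    d = [v - m for v in x]
    # least-squares estimate of phi in d_t = phi * d_(t-1) + e_t
    phi = sum(a * b for a, b in zip(d, d[1:])) / sum(a * a for a in d)
    residuals = [d[i] - phi * d[i - 1] for i in range(1, len(d))]
    return phi, residuals
```

Charting the residuals restores the independence assumption the standard limits rely on; charting the raw series instead would inflate the false-alarm rate.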

You don’t say whether your application is from the semiconductor industry. It is an industry of particular interest, again, because its processes are chronically out of control: the minute you have successfully brought an operation under control, you have to switch to the next generation and do it again. It doesn’t happen with shock absorbers.

In this industry, you have unit process engineers who worry about a single operation, and would use SPC in its broad sense, and you have process integration engineers who worry about the complex interactions between device characteristics determined at all operations that may make a working or a defective circuit.

The process integration part is perceived as strategic and the key to yield enhancement, and this is where most of the investments in IT and statistical tools go. The data collection for this happens at test at the end of the process, and the analysis results in tweaks at all operations, which the unit process engineers then implement.

The reasons I was given for this emphasis on process integration were (1) that the key characteristics simply could not be measured between operations inside the process, and (2) that subtle interactions between the 500+ operations of the process were the key. I have never heard work at this level called SPC.

Your statement that “the math hasn’t changed” is pessimistic. We have acquired many new analytical tools since the 1920s, which we can use because contemporary IT allows us to.

63. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Today, I would want to talk about the Questions that have come up around SPC in this group (these Questions are listed in a previous post in this thread).

Before I answer let me build a context first.

“Quality” is a much more complicated term than it appears. The more common a word is and the simpler its meaning, the bolder very likely is the original thought which it contains and the more intense the intellectual effort which went into its making.

The philosophical development of the idea of quality can be traced to Plato, who invented the word “Poiotes,” to define Quality (circa 400 BC), which was later translated to “Qualitas” by Cicero. The English word “Quality” is derived from “Qualitas.”

The modern quality movement began with Walter Shewhart in the 1920s.

Later, he was supported by Dr. Deming, Dr. Juran, Dr. Feigenbaum, Dr. Crosby, et al.

The work that Dr. Deming did was more holistic and no one can really refute his contribution and SoPK.

In today’s time the work of Dr. Shewhart is being taken forward by Dr. Wheeler.

Dr. Donald J. Wheeler is an American author, statistician and expert in quality control; he holds M.S. and Ph.D. degrees in statistics. He is the author of 22 textbooks.

He is a Fellow of both the American Statistical Association and American Society for Quality.

Recently he was awarded the 2010 Deming Medal by the American Society for Quality.

To me he is the authority on the subject of SPC after Dr. Shewhart.

So, I reached out to him with the Questions that have been raised in this forum.

What I am going to share next are his inputs to the Questions raised in this forum.

Question – Was the individuals chart used by Shewhart?

As per Dr. Wheeler, “Shewhart charted individual values on a p-chart. Here we also see that the limits are not probability limits because, rather than using the binomial distribution to achieve some specified alpha level, he used three sigma limits.” “The p-chart, np-chart, c-chart, and u-chart are just special cases of the X chart.”

Question – Is it true that XmR charts are not attributed to Shewhart?

“The XmR chart was invented in 1941 by W. J. Jennett. Jennett worked at a General Electric plant at Wembley, England.”

Question – Are control limits both economic and probability limits?

“No, they are not probability limits, and they never were probability limits. See Shewhart in the last paragraph on page 277 of his 1931 book. Probability limits are limits computed in one way, while Shewhart computed his limits in a completely different way. Statisticians never understand this because they are so used to the first computational approach.”

Question – Is there evidence that 3-sigma limits always provide the right balance for all processes, all conditions and all costs?

You need to read Dr. Wheeler’s book “Normality and the Process Behavior Chart.” Or perhaps, his manuscripts number 200 and 220 from his Web site: spcpress.com.

There was a comment posted in the forum, “When you say “This is why process behavior charts, with their three-sigma limits, yield very few false alarms” you are only referring to Type I errors. The cost of Type II errors can be far greater! The wider the limits, the higher the Type II error rate. Shewhart himself said that we need empirical data to select the limits–yet he didn’t provide it in his book. He only calculates Type-I errors.”

In reply to the above comment,

“The fact is we do not want to detect every little signal, but only those that are large enough to be of economic import. In practice, this means generally larger than 2 or 3 sigma shifts. Smaller shifts simply do not have that big of an impact upon the product stream. You are using the language of the analysis of experimental data, where you perform a single analysis, and make a single decision. A process behavior chart is a sequential procedure. Every time you add a data point you perform an act of analysis. Thus, there is the risk of a false alarm, and there is the opportunity to wait for more evidence. But when you get a signal on a process behavior chart, you will almost always have an assignable cause. Since the purpose is to know when to take action, you want it this way.”

“Shewhart used the inequalities as existence theorems.” Dr. Wheeler has done studies involving almost 5000 probability models, and three sigma limits will cover more than 98% of any mound shaped distribution and more than 97.5% of any reasonable j-shaped distribution you might use.

As per Dr. Wheeler, “What we are talking about here is not estimation, but the characterization of the process behavior. This is completely different from everything we do in statistics.”

And, on three-sigma limits, this is what Dr. Deming had to state: “Shewhart perceived that control limits must serve industry in action. A process, even in statistical control, wavers. Control limits can thus not be associated with any exact probability of looking for trouble (an assignable cause) when there is none, nor with failure to look for trouble when an assignable cause does exist. It was for such reasons that he used 3-Sigma control limits. Experience of over 50 years shows how right he was.”

A few more comments related to the Questions posted.

ON Type II errors being even more costly and missed at 3 sigma:

“The idea of a Type II error is not really appropriate for a process behavior chart, and I say this even though I was the one who first published the Power function for the Western Electric Zone Test rules.”

ON the idea that special cause variations can be signs not of trouble, but of improvements.

“See manuscript 209 on my Web site. Spontaneous improvements are rare. Spontaneous deterioration is much more common.”

ON finding opportunities to improve and not just eliminating trouble, the limits should be narrower, as that reduces the Type II error.

“This is absolutely wrong. It shows a fundamental lack of understanding of what three sigma limits are all about.”

ON the claim that, per probability theory, they are probability limits; to make them empirical you have to do the study rather than merely accept someone else’s word.

“Wrong.”

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I’m not sure who else you are quoting, but Shewhart’s statement on p. 277 is based on prior statements. Here is a sample of numerous such statements:
– p. 121: “a phenomenon that can be predicted, at least within limits associated with a given probability, is said to be controlled.”
– p. 275, referring to his formula (85): “(85) gives the probability that the statistic ϴ will have a value lying within the limits ϴ1 and ϴ2.”
– p. 276: “How then shall we establish allowable limits on the variability of samples? Obviously, the basis for such limits must be empirical. Under such conditions it seems reasonable to choose limits ϴ1 and ϴ2 on some statistics such that the associated probability P is economic in the sense now to be explained.”
– p. 277: “Tchebycheff’s theorem tells us that the probability P that an observed value ϴ will lie within these limits so long as the quality standard is maintained satisfies the inequality P > 1 – 1/t^2. We are still faced with the choice of t.”

And here is an application after p. 277:
– p. 329, on using Criterion V to detect lack of control: “if the probability P is less than 0.001, take this fact as an indication of lack of control.”

How many times does he have to say that there is a probability associated with the limits for the limits to be also probabilistic?

Question 4 – Is there evidence that 3-sigma limits always provide the right balance for all processes, all conditions and all costs?
The Six Sigma people obviously disagree.

Are you saying that Six Sigma people disagree that it always provides a right balance? In other words, are you saying that Six Sigma people believe that there are times/conditions when other limits are better economically?

Or does this answer mean that Six Sigma people obviously disagree that there is any evidence for the claim?

“The fact is we do not want to detect every little signal, but only those that are large enough to be of economic import. In practice, this means generally larger than 2 or 3 sigma shifts. Smaller shifts simply do not have that big of an impact upon the product stream.”

That may be okay for Wheeler, but he certainly doesn’t speak for all industries, companies, or clients. When I worked at Bausch & Lomb, they didn’t use 3-sigma limits because, for their product–contact lenses–even small, microscopic changes can and do have a large impact. A slight consistent defect on a lens can cause large liability problems for the manufacturer because of severe damage to clients’ eyes.

Dan had asked for peer-reviewed data supporting my questioning of orthodoxy. B&L did have a product recall for lens solutions, which is a type of peer review. J&J has had product recalls, as have other companies, some resulting in deaths. Were these large or small changes? We don’t know, but we do know that for some products, small changes can cause severe consequences to the customer and later to the company.

Is it better or worth using 3-sigma limits? No–B&L does 100% inspection.

P.S. What does “Six Sigma people” mean? Am I one of them? I don’t think Deming was–certainly Shewhart was not.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

The three-sigma limit is probably the least important of the WECO rules. Each rule shows a specific failure warning. The 3-sigma limit detects a wild swing of a single data point. Others show a process which is oscillating about the mean, offset from the mean, etc. A data point swinging past the three-sigma limit is easy to catch if the process limits are set correctly. The other indicators are warnings of less obvious and insidious issues. The WECO rules are designed so that each rule has the same probability of occurring in random data as the 3-sigma limit. No super science involved. 3 sigma and the WECO rules just work. Developed through trial and error over a long period of time (artisan science vs. mathematical).
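The WECO zone tests the commenter describes are easy to state in code. Here is a toy checker, not anyone’s production software; the rule numbering and thresholds follow the commonly published Western Electric set:

```python
def weco_violations(points, mean, sigma):
    """Flag points that break one of the four classic WECO rules.

    Returns (index, rule) pairs:
      rule 1: one point beyond 3 sigma
      rule 2: 2 of 3 successive points beyond 2 sigma on the same side
      rule 3: 4 of 5 successive points beyond 1 sigma on the same side
      rule 4: 8 successive points on the same side of the center line
    """
    z = [(x - mean) / sigma for x in points]
    alarms = []
    for i in range(len(z)):
        if abs(z[i]) > 3:
            alarms.append((i, 1))
        for side in (1.0, -1.0):
            last3 = z[max(0, i - 2):i + 1]
            if len(last3) == 3 and sum(side * v > 2 for v in last3) >= 2:
                alarms.append((i, 2))
            last5 = z[max(0, i - 4):i + 1]
            if len(last5) == 5 and sum(side * v > 1 for v in last5) >= 4:
                alarms.append((i, 3))
            last8 = z[max(0, i - 7):i + 1]
            if len(last8) == 8 and all(side * v > 0 for v in last8):
                alarms.append((i, 4))
    return alarms
```

As the commenter notes, rules 2 through 4 catch the quieter, more insidious shifts and oscillations that never trip the 3-sigma line.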

64. When I say “Six Sigma people,” I mean the Master Black Belts who first explained it to me and gave me a copy of Mikel Harry’s 1990 training book from Motorola University.

The idea was that having the plus or minus 3-sigma interval within tolerances was insufficient when faced with stackup issues among large numbers of characteristics. You needed plus or minus 6 sigmas, or 3.4 defects per million opportunities, in order to have decent quality in finished products with many defect opportunities.
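The 3.4 defects-per-million figure can be reproduced from the normal tail, under the Motorola convention that the process mean drifts by 1.5 sigmas over the long term (a sketch):

```python
import math

def one_sided_tail_ppm(z):
    """Parts per million beyond z standard deviations, one-sided normal tail."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

# 6-sigma design minus the assumed 1.5-sigma long-term drift
# leaves 4.5 sigmas between the mean and the nearest tolerance limit.
print(round(one_sided_tail_ppm(4.5), 1))  # 3.4
```

Without the 1.5-sigma shift, a true 6-sigma tail would be about 0.001 defects per million, which is why the published figure puzzles people who compute it straight from the normal table.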

In this discussion, I find much use of the argument of authority. X must be true because Dr. Y said so, and I confess to using it too. It is counterproductive. Instead of exchanging technical information on what works and what doesn’t based on our own experience and thought, this turns into a scholastic debate about dogma.

As engineers, we should not accept anything as true just because anybody asserts it, no matter who it is. If the Wright brothers had done that, they wouldn’t have flown. Instead, they established through their own measurements that the published figures on lift were wrong.

@Dan – If I understand you correctly, when you say “As to empirical proof, what happened at Western Electric once Shewhart introduced his ’empirical’ limits… history is proof. ‘Ye shall know them by their fruits’,” you are attributing the business success of AT&T in the decades after 1930 to the use of Control Charts at Western Electric.

It is similar to justifying Lean/TPS by the success of Toyota. Except that, for AT&T, causality is harder to establish. Until deregulation in the 1980s, having a monopoly on the US telecommunications market might have had something to do with the company’s success.

Speaking of authorities, I was surprised when reading Out of the Crisis, that Deming thought deregulation would destroy the US telecommunications industry!

65. The whole theory of Shewhart’s control charts is based on the null hypothesis that measurements on successive work pieces are independent, identically distributed, normal random variables. You use this hypothesis to set control limits, and, with every point you add, you test the actual distribution against it. If any point is outside the limits, you reject the null hypothesis at a 0.27% level of significance.

I am not defending or attacking the theory but just stating my understanding of what it is. It sounds like classical statistical decision theory, and I just don’t understand why anybody would say that these limits are based on anything else. The question remains, however, of why use this level of significance rather than another.

If asked to justify it, I would consider the impact of false alarms, not when you are adding one point on one chart but many points on many charts every day.

Let us assume you use this technique throughout your plant and add 100 points to charts every day. Let us also assume that all your processes are and stay under statistical control, so that all the alarms you get are false. How many do you get?

Based on 3-sigma limits, the probability that none of your charts will generate an alarm in a given day is 99.73% to the power of 100, or 76.31%. It means, roughly, that you will get at least one false alarm on the average every four days.

Lower the threshold to 2 sigmas, and the probability of having no false alarm in a day drops to 95.5% to the power of 100, or barely 1%. False alarms will then be a daily occurrence, sending your responders on wild goose chases and destroying the system’s credibility.

That’s why you use 3 sigmas rather than 2. But why 3 sigmas rather than 4, 5, or 6? Every time you raise the bar, you reduce the number of false alarms. In this hypothetical plant in a state of perfect statistical control, you could raise it as much as you want, because all the alarms you get are false…

In a real plant, however, you use the tool to detect real changes, and higher thresholds decrease the sensitivity of your tests. With even 4 sigmas, you would miss most of the events you would actually want to catch.

I would call this a risk analysis.
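The arithmetic above can be checked in a few lines:

```python
# Expected false-alarm load when 100 in-control charts each get
# a new point every day, as in the hypothetical plant above.
charts_per_day = 100
p_quiet_day = {}  # probability that no chart alarms on a given day
for label, p_inside in (("3-sigma", 0.9973), ("2-sigma", 0.9545)):
    p_quiet_day[label] = p_inside ** charts_per_day
    print(f"{label}: P(no false alarm today) = {p_quiet_day[label]:.2%}")
# 3-sigma: about 76%, i.e., roughly one false alarm every four days
# 2-sigma: under 1%, i.e., false alarms every day
```
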

If you can fit the +/- 6 sigma interval within tolerance limits, control charts are useless.

If the measurements are not normally distributed or not independent, that is a different question, and one we have already discussed extensively. I think the key issue is the ratio of true to false alarms. If your process is in perfect statistical control, as assumed above, all your alarms are false and the control charts are a waste of effort.

The worse your process, the higher your ratio of true to false alarms, and the more useful the charts are. If it is bad enough, investigating even false alarms makes you uncover real problems.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I think we need to be more specific. You say “The whole theory of Shewhart’s control charts is based on the null hypothesis that measurements on successive work pieces are independent, identically distributed, normal random variables.”

First, we need to specify which control charts we are talking about. Shewhart described control charts on the sample mean, standard deviation, correlation coefficient, and proportion defective.

Second, we need to distinguish between the population distribution (he calls it universe) and the sampling distribution. Which is it that is normal?

For the sample mean, the Xbar chart, the individual observations (universe) are not required to be normally distributed. His reason: “In Part IV, however, we saw that no matter what the nature of the distribution function (58) [population distribution] of the quality is, the distribution function of the arithmetic mean [sampling distribution] approaches normality rapidly with increasing n (p. 289).” This is an application of the central limit theorem–to this chart.

For the sample standard deviation, he acknowledged that he knew its distribution function only for a normal universe/population, and he approximated it based on normality. The sampling distribution of standard deviations, however, is not normal.

• You are correct that I was discussing only measured variables and said so when I wrote of “measurements on successive work pieces.” I didn’t want to discuss the whole array of p-charts, c-charts, and others.

As discussed before, it is the sampling distribution that needs to be normal, and CLT gives it to you approximately within a broad range and even with small sample sizes, although there are distributions for which convergence requires 1,000 values.

It doesn’t apply to standard deviations. If the underlying population is normal, with independence between points, the sample variance scaled by (n − 1)/σ² follows a chi-squared distribution with n − 1 degrees of freedom, from which you calculate a control limit for the sample standard deviation.

And it doesn’t apply to charts of individual values.
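The chi-squared point above is easy to illustrate by simulation (a sketch, not production code):

```python
import random
import statistics

# For samples of size n from a normal population, (n-1)*S^2/sigma^2
# follows a chi-squared distribution with n-1 degrees of freedom,
# whose mean is n-1; the simulated average should land near that.
random.seed(42)
n, trials, sigma = 5, 20000, 2.0
scaled = [
    (n - 1) * statistics.variance([random.gauss(0, sigma) for _ in range(n)])
    / sigma ** 2
    for _ in range(trials)
]
avg = sum(scaled) / trials
print(round(avg, 2))  # close to n - 1 = 4
```
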

I am just describing the math he is using in terms of standard statistical decision theory. You can philosophize around it all you want, but that is the bottom line. It is how the numbers are calculated. It’s what you find under the hood. You are, of course, free to interpret them as you see fit. In Gagandeep’s Wheeler quotes, I only see assertions, but no argument.

66. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

This is one of the most bizarre threads I have ever come across in LinkedIn.
Most comments seem to fall into two camps:
1. Those who argue we are beyond using a simple, robust, reliable tool because we now have computers, in-line analysers, deeper understanding of math, better definitions of quality etc etc than Shewhart did. Therefore, time to ditch the horse and buggy and get with the hydrogen powered jet propelled car.

2. Those who not only believe in SPC, but extend its use to 6-sigma, massive over-analysis, and over-reliance on numbers coming off the statistical gymnastics we can now perform on a computer with powerful software.

My 2 cents worth are this:
A) SPC is a simple, powerful tool that is great for many processes that have variable attributes that are critical for quality, end-user acceptance etc. Certainly worth having in the toolkit.
B) It is simple for everyone to understand and gives us one way of seeing how our processes are behaving.
C) You don’t need a computer, a keyboard, fancy software, computer-literate operators, or fancy in-line analysing machines that sample a million times a second. The technology of a pen and a paper chart, while old-fashioned, has a wonderful robustness.

Sadly, in the last 10 years I’ve not met a QC/QA or Technical Manager who still knows how to construct simple, reliable SPC control chart systems. Sure, they can purchase $20K analysers, install remarkable Six-sigma software and produce masses of data and reports on the 99.999999% compliance to spec results.
But how do you make sensible use of this vast arsenal, which can be very useful, when you don’t understand the simple, elegant and time-tested principles that underly SPC?
I shake my head with disbelief at the number of processes I see when I am with clients that manage to be, at the same time, both minutely analysed and largely out-of-control.

Besides all of this, my view is that the primary reason we use SPC is quite simply to stop operators (or should I say process owners) over-adjusting. There are only two outcomes from any SPC test result. Either, firstly, something has changed or gone wrong, when we trigger the ‘Taking Action’ rules. Or, most of the time, everything is running normally, SO LEAVE THE DAMN THING ALONE AND STOP FIDDLING.

Unfortunately, lots of on-line sensors and in-line measures and lots of statistical analysis usually lead to the bizarre combination of lots of fiddling and lots of variability and lots of ‘out of control’.

Anyway, plenty more I could say, but SPC is simple, elegant, useful in some applications and sadly neglected in favour of usually over-engineered, overly technical control systems.

• I agree with you that this thread is bizarre. I am also surprised by the large number of posts, by the confusion they reveal, and by the unusual resort to the argument of authority. It is more like a 16th century theological debate than a 21st century discussion of how to make products with consistent characteristics.

I think it reveals that, while elegant, SPC is neither simple nor easy to understand. From your profile, I see that you have spent almost 20 years in the food industry, which has its specific quality challenges, like making products that taste the same in spite of variability in the ingredients, or freshness issues… And I am sure you have plenty of ideas on how to address them.

You feel that all the high-level computer hardware and software paraphernalia is unnecessary, and I’ll take your word for it in food processing. That does not prevent it from being indispensable in other industries, like semiconductors.

You ask how you can make sensible use of this vast arsenal, which can be very useful, when you don’t understand the “simple, elegant and time-tested principles” that underlie SPC. To me, it’s like saying that you can’t understand computers unless you know how to use a slide rule.

Using modern tools requires many skills that have only a minor overlap with the old ones. You need to know how to locate errors in databases and filter or correct defective data. Then there are challenges in deciding which analysis tools to use, how to interpret the results, and how to communicate your conclusions.

You are describing a situation among your clients where they have lost the old skills, not learned the new ones, and are clueless as to what their needs are. It seems you have plenty of work to do.

67. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Shewhart summarized the conclusions by saying: “… the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.”

So, YES, the ±3-StDev. limits have their base in the math of the normal distribution; but the result EXTENDS to almost all distributions: the area under the density curve between [Mean − 3 StDev.] and [Mean + 3 StDev.] is large for any probability distribution under consideration.

Dr. Wheeler has done studies involving almost 5000 probability models, and three sigma limits will cover more than 98% of any mound shaped distribution and more than 97.5% of any reasonable j-shaped distribution you might use.

I further shared the following calculations on this thread:
___

For “The remarkable thing we learn from the calculation is that, in each case, the three-sigma limits cover virtually all of the common-cause variation,” I would want to share some numbers in support, for

– Normal dist. (mu=0, s=1 | Bell Shape): the area btw. –3 StDev. and +3 StDev. = 99.73%
– Chi-Square dist. (df=2 | Skewed): the area btw. –3 StDev. and +3 StDev. = 98.17%
– Chi-Square dist. (df=225 | Bell Shape): the area btw. –3 StDev. and +3 StDev. = 99.70%
– Exponential dist. (lambda=0.2): the area btw. –3 StDev. and +3 StDev. = 98.17%

We see that numbers are close to each other irrespective of the shape (Bell / Skewed).
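These coverage figures can be checked numerically with closed-form CDFs and nothing beyond the Python standard library; a minimal sketch (note that for both the chi-square with df=2 and the exponential, the lower limit of the interval falls below zero, so the coverage reduces to 1 − e^(−4) ≈ 98.17%):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Normal(0, 1): coverage of Mean +/- 3 StDev.
norm_cov = normal_cdf(3) - normal_cdf(-3)

# Chi-square with df=2 is exponential with scale 2: mean=2, sd=2.
# The interval [2 - 6, 2 + 6] truncates at 0, so coverage = F(8).
chi2_cov = 1.0 - math.exp(-8.0 / 2.0)

# Exponential with mean 5 (sd 5): interval [-10, 20], coverage = F(20).
expo_cov = 1.0 - math.exp(-20.0 / 5.0)

print(f"Normal:      {100 * norm_cov:.2f}%")   # 99.73%
print(f"Chi2(df=2):  {100 * chi2_cov:.2f}%")   # 98.17%
print(f"Exponential: {100 * expo_cov:.2f}%")   # 98.17%
```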

If we look at the formulae for Control Charts (the original ones) there is NO distribution aspect in them.

For e.g., for a p-chart — CL = p ± 3*sqrt(p(1-p)/n)
For e.g., for a c-chart — CL = c ± 3*sqrt(c)

This is the pure form of limits which is distribution free.
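These two formulas can be sketched directly; the numeric inputs below (a 2% average fraction defective, samples of 200, an average of 4 defects per unit) are illustrative assumptions, not values from the discussion:

```python
import math

def p_chart_limits(p_bar, n):
    """Three-sigma limits for a p-chart: p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n)."""
    half_width = 3.0 * math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - half_width), min(1.0, p_bar + half_width)

def c_chart_limits(c_bar):
    """Three-sigma limits for a c-chart: c_bar +/- 3*sqrt(c_bar)."""
    half_width = 3.0 * math.sqrt(c_bar)
    return max(0.0, c_bar - half_width), c_bar + half_width

# Illustrative: 2% average fraction defective, samples of 200
lcl, ucl = p_chart_limits(0.02, 200)
print(f"p-chart: LCL={lcl:.4f}, UCL={ucl:.4f}")

# Illustrative: average of 4 defects per unit
lcl, ucl = c_chart_limits(4.0)
print(f"c-chart: LCL={lcl:.2f}, UCL={ucl:.2f}")
```

Note that the limits are clipped at the natural boundaries (0 and, for proportions, 1), as is conventional when the computed lower limit is negative.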

Now, w.r.t. “However, as one reads about “control charts” in today’s technical journals, it does not seem that one is reading about the same technique that was described by Shewhart;” let me share Probability Limits method for c-chart as described on http://www.jmp.com

Refer the link http://www.jmp.com/support/notes/36/484.html

So, yes there are Probability Limits being calculated; but they do not represent Shewhart’s work.

Shewhart never plotted limits based on distributions.

His work was distribution “free.”

Although the control chart constants were created under the assumption of normally distributed data, the control chart technique is essentially insensitive to this assumption. The normality of the data is neither a prerequisite nor a consequence of statistical control.

It is a fact that in real life, in “most” cases, we do not end up with data sets which are normal.

So, a fundamental question: if I were to put a percentage to “in most cases,” conservatively say 90%, does it imply that Shewhart created the Control Chart tool under SPC for ONLY 10% of the opportunities? A man who is credited with the modern quality movement, and whose work became the foundation for Dr. Deming’s?

If we Google it, 95% of the results talk of Normality; but it is a Myth.

If you really deep dive, you do get your answer that it indeed is a Myth.

I am reminded of a saying by Rudyard Kipling “I keep six honest serving men (they taught me all I knew); their names are What and Why and When and How and Where and Who.”

68. What Shewhart says of empirical evidence is true of any kind of data analysis. If we pay attention to Nate Silver, it’s not for his mastery of Bayesian theory but for his ability to predict election results with great accuracy.

There are three levels to any system that deals with data. The first is the way it looks to people or machines that interact with it, including what input it receives and in what form, and what output it produces: charts, flashing lights, automatic adjustments, etc.

The second level is its internal logic, the model it is based on and how its different components interact. This is where distributions and risk calculations reside.

The third level is the way it maps to external reality: how effectively it addresses the problem it is intended to solve.

This is not specific to SPC. In manufacturing, you have to address these issues in Production Control or Maintenance as well as in Quality. And you do so in airline reservation systems too, when you use models of no-shows to overbook flights with the goal of filling planes without denying a seat to any passenger with a reservation.

Why do you say that p-chart limits are distribution-free? The formula you give is for a null hypothesis that the percentage defective follows the normal approximation of a binomial distribution.

69. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Hello Michel, I am sorry for my late response to your direct questions concerning my experience in a semiconductor factory. I have no problem sharing the information, but I have been too busy with other issues and have not been here for a while. I will now try to give answers. However, it has been many years since I was with the company, and I was not a process engineer, so my answers may not be completely accurate. I will try anyway.

QUESTION: “To the extent you are allowed to share this, I was wondering which operations this was on, oxidation, deposition, photolithography, etching, etc.”

ANSWER: More or less in all of them. Thin film thicknesses were controlled with an optical reflectometer or ellipsometer. Sheet resistivities were measured with a four-point measuring system. Gate oxide quality was measured with a CV-measuring system. An optical stress measuring system was used for film stresses. CDs and overlay accuracy in masking and etching were measured optically.

In addition, visual inspection was done at the masking and etching stages (alignment and linewidth structures), with attribute results.

In addition to these in-line controls, off-line controls were done as well, such as junction depths, refractive indexes of films, phosphorus content of glassivation…

QUESTION: “Were the characteristics measured inside the production equipment or on measurement instruments outside the equipment?”

ANSWER: Mostly, the measurements were done with separate measuring instruments. If I remember correctly, only some equipment had internal measurement systems. Plasma etching had an end-point detector, the sputterer had a time control system depending on target age…

In thin film deposition, the production lots were accompanied by test wafers, on which the measurements were done. The lots couldn’t proceed without the measurement results being entered into the production control system (PROMIS).

The wafers also had special control devices, which were measured after the wafers were completed. They were measured electrically, so no measurements could be done before completing the wafers.

QUESTION: “How was this work integrated with overall yield enhancement efforts?”

ANSWER: The measurement results had three purposes. One was to stop lots from proceeding if results were out of spec. The second was to keep the process stable by eliminating special causes, and the third was to use the data for finding causes of bad yield or some other problem.

QUESTION: “Who was using the charts? Was it production operators, process engineers, technicians?”

ANSWER: Control charts were mainly used by process engineers. The measurement results were fed into the process control system (PROMIS), which automatically sent an alarm to the engineer in case of a control chart rule violation. In addition, the charts were gone through in weekly meetings amongst process engineers (and quality). If a lot was put on hold, the operator was obliged to inform the shift leader and write a comment into PROMIS. All lot comment history was readily available in PROMIS.

QUESTION: “Were the charts on display all the time near the equipment, displayed only when the controller detected an event requiring investigation, or displayed on request off-line on a screen outside the fab area?”

ANSWER: The charts could be seen on a PC on request, but I am not sure whether the operators looked at them regularly. I recall that the charts were also visible after every measurement, but the operators perhaps did not use them much. Process engineers got an alarm for every chart rule violation. They could see the graphs over the network.

QUESTION: “What kind of actions were taken in response to alarms?”

ANSWER: I cannot classify them. I guess all kinds of actions. The process engineer collected more data, went downstairs to interview operators, did tests and experiments, etc. Normal problem solving. We did not have any formal procedure for problem solving on control chart alarms, as we did for out-of-spec cases.

One thing perhaps is worth mentioning here: We once transferred our technology to a foreign silicon foundry. During the qualification period, they counted every week the number of control chart violations (they had WE rules). It was nice to see a downward curve in the number of alarms, which indicated that the process was steadily moving to a more and more stable state.

I hope this answers your questions at least to some extent.

70. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I would want to comment on “Lower the threshold to 2-sigma, and the probability of having no false alarm in a day drops to 95.5% to the power of 100, or barely 1%.”

First, let me do the Theory:

In designing a control chart, both the sample size to be selected and the frequency of selection are important factors. Alongside Type I and Type II errors (alpha and beta risks), we also employ the Average Run Length.

The Average Run Length (ARL) is a very important way of determining the appropriate sample size and sampling frequency.

The ARL gives us the length of time (or number of samples) that should plot In-Control before a point plots outside the control limits. The desire is to have a large ARL when the process is in control and a small ARL when the process has shifted an unacceptable amount.

If p = probability that any point exceeds the control limits; then ARL = 1/p.

Next, let me bring in Study:

Case 1 – 99.73% of the data lies within 3-StDev. of the Mean (0.27% of the data lies outside the control limits)

ARL = 1/0.0027 = 370.4

Case 2 – 95.45% of the data lies within 2-StDev. of the Mean (4.55% of the data lies outside the control limits)

ARL = 1/0.0455 = 22.0

Thus the Shewhart control chart has an average run length of 370.4 with 3-sigma limits and 22.0 with 2-sigma limits.
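The ARL arithmetic above can be sketched in a few lines, assuming a normal in-control distribution:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def arl(k_sigma):
    """In-control Average Run Length for +/- k-sigma limits under normality.

    ARL = 1/p, where p is the probability of a point falling outside the limits.
    """
    p_outside = 2.0 * (1.0 - normal_cdf(k_sigma))
    return 1.0 / p_outside

print(f"3-sigma: ARL = {arl(3):.1f}")  # ~370.4
print(f"2-sigma: ARL = {arl(2):.1f}")  # ~22.0
```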

BUT,

We also need to look at the Average Production Length (APL). APL is the total amount of production between a shift in the process mean and the signal of that shift, and it is a criterion for evaluating a control scheme.

And, for a Shewhart chart the minimum feasible value for APL is 2401 units.

(Source: Keats, J. B., Miskulin, J. D. and Runger, G. C. (1995). Statistical Process Control scheme design, Journal of Quality Technology, 27(3), pp. 214-225).

So,

The statement “False alarms will then be a daily occurrence, sending your responders on wild goose chases and destroying the system’s credibility,” in essence doesn’t hold true.

We need to have at least 2401 in-control production units.

For example, assume it is “economical” for a supplier to check no more than 2% of production for control purposes. The process is known to have a mean of 100 mm with a standard deviation of 0.1 mm, and a Cpk equal to 1.33. The ABC Corp. consumes 250 pieces per day. Therefore, we want to detect a one sigma process shift before producing 250 pieces.

Thus, to detect a 1-sigma shift of the mean (d), with a 2% sampling rate (r) and 2401 in-control production units (APL-o), for this setup we need to sample and check a subgroup of 6 pieces every 294 pieces. The expected number of pieces made before a chart signal is 225 (APL-d).

Thus, if the process sigma changes by one unit, the control chart will indicate a significant change before 225 parts are produced for ABC Corp. Since 225 is fewer than 250, the plan provides the protection that ABC Corp. requires.

This is why Control Limits are often referred to as action limits. The goal of a control chart is to detect a shift in the process mean after as few items as possible are produced, while minimizing the number of false alarms.

Even if we get a false alarm when routine variation causes a point to fall outside the lines by chance, we still investigate it for special or common cause. We don’t classify it as special cause just because it went out of control.

I would want to comment ON “The formula you give is for a null hypothesis that the percentage defective follows the normal approximation of a binomial distribution;” and, ON “Please define distribution free.”

The Binomial Distribution gives the probability of an exact number of successes in “n” independent trials, when the probability of success “p” (0 ≤ p ≤ 1) on a single trial is constant.

But,

A binomial distribution with a “high success rate” can be approximated by a Gaussian distribution if the following condition is met: n ≥ 5/min(p, q).

Also,

A binomial distribution with a “low success rate” can be approximated by a Poisson distribution.
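Both approximations are quick to see at work with the standard library alone; the parameter values below are illustrative assumptions:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def normal_pdf(x, mu, sigma):
    """Normal density, used as a continuous approximation to the pmf."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Low success rate: Binomial(n=100, p=0.02) vs Poisson(lambda = n*p = 2)
n, p = 100, 0.02
for k in (0, 2, 5):
    print(k, round(binom_pmf(k, n, p), 4), round(poisson_pmf(k, n * p), 4))

# Moderate p: Binomial(n=100, p=0.5) vs Normal(mu=50, sigma=5)
n, p = 100, 0.5
sigma = math.sqrt(n * p * (1 - p))
for k in (45, 50, 55):
    print(k, round(binom_pmf(k, n, p), 4), round(normal_pdf(k, n * p, sigma), 4))
```

In each pair of printed columns, the exact binomial value and its approximation agree to two or three decimal places.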

So, when I say it is distribution free, what I mean is that nowhere does the formula take into consideration, based on p, whether it is Binomial, Poisson, or Gaussian, even though the formula has its base in the normal approximation of a binomial distribution, as per Michel.

The fact that studies have been done involving almost 5000 probability models, and three sigma limits cover more than 98% of any mound shaped distribution and more than 97.5% of any reasonable j-shaped distribution you might encounter, is proof that the 3-Sigma limits are distribution free.

Back to my earlier post,

So, though Shewhart initially experimented with limits based on probability distributions, he ultimately wrote:

“Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.”

71. In the accounts from both Lasse and Drew, Control Charts are neither posted on the shop floor nor used by production operators; they are instead used for automatic alarm generation, with the alarms passed by email to process engineers who view the charts on demand on computer screens.

This is fundamental, because it is consistent with the traditional view of quality as the job of specialists rather than something involving everyone, as opposed to Art Smalley’s example from the 1950s at Toyota that Dan pointed out (http://bit.ly/WTxpdx).

It means that, whatever their job title may be, there is a group of responders in the plant organization tasked with acting on alarms generated by control charts. This may mean stopping the machine, putting together a problem-solving team, running tests and experiments on the machine, and implementing both immediate countermeasures and long-term solutions.

To manage how the responders’ group performs and how large it should be, you cannot just consider what happens when you add one point to one chart; you must instead consider the consequences of having multiple charts routinely updated within the plant, all of which generate alarms that this group must respond to.

The average run length (ARL) parameter Gagandeep describes is indeed useful, but it should be clear that it is the mean number of points between FALSE alarms. When you look at the probability of a point being within 3 sigmas of the mean, 98% looks close to 99.73%, and that is the reason Gagandeep says that departures from normality in the underlying distribution are not a problem. When you look at ARLs, however, the picture is different.

As Gagandeep pointed out, the 99.73% figure corresponds to ARL = 370, meaning that you have one false alarm every 370 points. On the other hand, a distribution with only 98% of the points within 3 sigmas of the mean will give you a false alarm on the average every 50 points, about 7 times more often! This is not a minor difference.
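The difference this makes at plant scale can be sketched; the figure of 100 charts each receiving one new point per day is an illustrative assumption, echoing the thought experiment quoted earlier in the thread:

```python
def false_alarm_summary(coverage, n_charts=100):
    """For a given in-control coverage probability per point, return the
    false-alarm ARL and the probability of an alarm-free day when each
    of n_charts gets one new point per day (illustrative assumption)."""
    p_false = 1.0 - coverage
    arl = 1.0 / p_false
    p_quiet_day = coverage ** n_charts
    return arl, p_quiet_day

# Normal coverage (99.73%) vs a heavier-tailed distribution (98%)
for cov in (0.9973, 0.98):
    arl, quiet = false_alarm_summary(cov)
    print(f"coverage {cov:.2%}: ARL {arl:.0f}, quiet-day probability {quiet:.1%}")
```

With normal tails, about three days in four pass without any false alarm; at 98% coverage, an alarm-free day becomes a rarity, which is the 7-fold difference in a form the responders’ group would actually feel.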

How important are false alarms? Their absolute number is not the issue. What really matters is the ratio of true to false alarms. If, as in semiconductors, your processes are chronically out of control, you may get, say, 10 true alarms for every false alarm. And your processes are so unstable that even an investigation launched on the basis of a false alarm is likely to provide real improvement opportunities.

If, on the other hand, you have processes that are as mature and stable as drilling, the picture is different. I was suggesting earlier the thought experiment of a process that NEVER goes out of statistical control. Based on the ARL, it will still generate alarms, and they will ALL be false, sending the responders on wild goose chases. It is like testing a population for a disease that none of its members has. Since tests are not perfect, some will come out positive, and the positives will all be false.

In a real manufacturing situation, not only Control Charts but even more modern statistical methods lose their effectiveness when you are so close to statistical control that lack of process capability is no longer the main source of defects.

Rather than a drift in mean or spread in a measured characteristic, your defects are then caused by discrete events like tool breakage or operator errors. That is the point where one-piece flow with go/no-go checking takes over, followed by mistake-proofing.

72. @Gagandeep – You seem to be forever parsing Shewhart’s words. The way the same material in math, science, or technology is explained evolves over time. I could find no occurrence of expressions like “null hypothesis” or “level of significance” in his writings. Yet, in the way I was taught the subject 40 years later, these are central concepts that support clear and concise explanations of methods, including Shewhart’s control charts.

I suspect that, when he was writing in the 1930s, the general framework of statistical decision theory either didn’t exist yet or was just emerging. It would explain why he didn’t use it. I am sure he meant something by “probability limit,” but I have no idea what, and the expression makes no sense to me.

And, frankly, I have a hard time with the idea that it makes any difference to manufacturing quality in 2013.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

@Michel,

I would want to comment on “I could find no occurrence of expressions like “null hypothesis” or “level of significance” in his writings.”

There is no linkage between Shewhart’s work and Hypotheses.

Shewhart’s charts characterize a time series as being either predictable or unpredictable. When a test of hypotheses is applied to a time series, it presumes the time series to be predictable and looks for changes in the parameters of that predictable time series.

In the words of Deming, “Some books teach that use of a control chart is test of hypothesis: the process is in control, or it is not. Such errors may derail self-study.” “Rules for detection of special causes and for action on them are not tests of a hypothesis that a system is in a stable state.”

In DJW article No. 232, Dr. Wheeler concludes with the following paragraph, “Shewhart’s approach to the analysis of data is profoundly different from the statistical approach. This is why people end up with such confusion when they try to “update” Shewhart by attaching bits and pieces from the statistical approach to what Shewhart has already done. Shewhart provided us with an operational definition of how to get the most out of any process. Nothing extra is needed to make process behavior charts work. We do not need to check for normality or transform the data to make them “more normal.” We do not have to use subgrouped data in order to receive the blessing of the central limit theorem before the chart will work. We do not need to examine our data for autocorrelation. And we do not need to wait until our process is “well-behaved” before computing limits. All such “extras” will just mess you up, get in your way, leave you confused, and keep you from using one of the most powerful data analysis techniques ever invented.”

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

@Gagandeep. I think you are misinterpreting my comments by combining two distinct and independent issues that are addressed in different ways.

Issue 1: Regardless of what Shewhart did, can control limits be probability limits? Yes. You yourself do this each time you calculate ARL—it is based on the probability of a point occurring beyond the control limits—that is the definition of a probability limit. When Wayne/Drew (?) says other rules have the same probability, these probabilities require probability-distribution assumptions, making the limits probability limits.

Issue 2: What did Shewhart mean?
That answer, unfortunately, can only be obtained by referring to what he says and interpreting it if he is ambiguous. Quoting Wheeler or anyone else but Shewhart does not provide any evidence of what Shewhart meant—only their interpretation.

Let’s agree: repeating (or quoting others who repeat) that they are not probability limits is not evidence that the claim is true.

Here, as with distribution-free, we need to define the terms. By probability limit, I mean a limit that encompasses a certain probability of being within or beyond. I’m not sure what you mean—nor what Wheeler means. By the way, Wheeler just told me “So, three-sigma limits are not, never have been, and never will be probability limits.” Note that his claim mixes the two issues—and is his opinion, without defining the term (and it contradicts the answer to Issue 1).

I see my definition as one way to interpret Shewhart. Why? There are more than a dozen quotes where he defines the limits as including a probability, e.g.:
– p. 121: “a phenomenon that can be predicted, at least within limits associated with a given probability, is said to be controlled.”
– p. 275, referring to his formula (85): “gives the probability that the statistics ϴ will have a value lying within the limits ϴ1 and ϴ2.”
– p. 276: “How then shall we establish allowable limits on the variability of samples? Obviously, the basis for such limits must be empirical. Under such conditions it seems reasonable to choose limits ϴ1 and ϴ2 on some statistics such that the associated probability P is economic in the sense now to be explained.”
– p. 277: “Tchebycheff’s theorem tells us that the probability P that an observed value ϴ will lie within these limits so long as the quality standard is maintained satisfies the inequality P > 1 – 1/t^2.”
– p. 329, on using Criterion V to detect lack of control: “if the probability P is less than 0.001, take this fact as an indication of lack of control.”

Dan disagrees, quoting Shewhart on p. 277: “and on empirical evidence on which to justify the choice of limits.” But this argument depends on empirical justification meaning not probabilistic. This is fallacious reasoning. If I say my house has two floors, I am not implying it doesn’t have a garage. That the limits are empirically justified is not incompatible with their being probability limits.

No matter how you interpret Shewhart’s comment about the limits being empirically justified, how do you explain the quote above from p. 329? He must be assuming a probability distribution to calculate the probability, and he is assuming the limits define a probability of 0.001, regardless of the distribution assumed. That is all I mean by probability limits—and it does not mean they aren’t or can’t be empirically justified.

Each person has to provide their definition of “probability limit” and reconcile that definition with not just a selected quote but all of Shewhart’s statements.

P.S. Of course, the disagreement might simply be about what to call the limits. That’s why the definition is critical. My definition specifies how it can be used—that is its practicality—and shows that that is how Shewhart used it. I guess I could quote Shakespeare on a rose by any other name….

• @Gagandeep –

You quoted Deming saying: “Rules for detection of special causes and for action on them are not tests of a hypothesis that a system is in a stable state.”

How does this make any sense? Obviously, when you are applying a rule, you are doing a test.

You also quote Wheeler saying: “Shewhart’s approach to the analysis of data is profoundly different from the statistical approach.”

Then why does Shewhart call his book “STATISTICAL Method from the Viewpoint of Quality Control”?

I don’t see anything wrong with transforming data to make it talk. You do it all the time when you show scatter plots with semi-logarithmic or bi-logarithmic axes. The commonly used lognormal distribution is DEFINED by the logarithms of a variable being normal. You also transform time or space series in more complex ways when you calculate power spectra.

73. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Having collected SPC information from a process gives me results from a set of interacting causes that (in concert with my measurement and recording) generated the process output figures. The set of interacting causes that created those results will never be seen again. They are unique to the frame; let’s call it frame 1. I can utilize probability theory to make inferences about frame 1.

The results that will happen in the future, when I next perform the process, will come from a different set of interacting causes. A different frame; we can call it frame 2. How can my study of frame 1 (and any inferences associated with that study) be useful in making inferences about frame 2? The theory of probability does not provide a rationale for me to apply inferences from a study of one frame to a different frame. The rationale that might be of use comes from subject matter expertise rather than statistical expertise.

The confidence I have in electing (deciding) to extend the 3-sigma (or any other) limits out into the future and use them to provide action signals (categorizing the process behavior as predictable or unpredictable) is not statistically quantifiable. It is based upon my level of confidence that the set of interacting causes I anticipate for frame 2 (the future) will be sufficiently similar to the set of interacting causes of my results for frame 1 (the past). It is a confidence born of specific knowledge of the process, of the set of interacting causes producing the results. My confidence has its basis not in probability theory, but in process knowledge (subject matter expertise).

Since the act of deciding to extend the process predictability limits into the future is not probability based, in what way is it wrong to claim the process limits are not probability limits?

• As I understand what you are saying, knowledge of the process is necessary to interpret the data it puts out and make sensible predictions. It is true for manufacturing processes, but it is also true in every domain whenever you are trying to use data on the past and present to predict the future, whether you are talking about sales, baseball scores, or the weather.

You will not get anywhere by applying statistical algorithms without understanding the dynamics of the domain. When statistics works, it is always as an add-on to domain knowledge, never as a substitute for it.

As for “probability limits,” again, I don’t know what it means, but I can think of one analysis technique that is still in use today and was invented without any recourse to probability theory. Candlestick charts are still used by stock traders, mostly in Japan. They were invented in 18th century Japan by rice trader Munehisa Honma, who also invented futures trading, made a fortune and lived a long life. He based trading decisions on patterns observed in Candlestick charts. It was purely empirical.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

@Tim. I completely agree with you on your description of how probability/statistics can’t be used to make predictions about the future. Despite this, Shewhart does define control as being able to predict within limits and with a certain probability what will happen in the future. Deming and others have accepted this view even though Deming himself would have made the same argument you did (the term frame comes from him, I think). Ironic (or just inconsistent)?

Since statistics is the use of probability models to describe real-world phenomena, the check is whether using this model helps us. There are applications of this: e.g., chances of rain not now but later today or tomorrow.

To be fair to Shewhart, he did qualify his statement: “the product to have been controlled up to the present does not prove that we can predict the future course of this phenomenon. We always have to say that this can be done provided the same essential conditions are maintained [p. 148].” It is the science and conditions assumed that are the basis for the prediction (as with the weather)—statistics provide the value of what we predict and the range in which it will occur. I agree with this view.

You ask “Since the act of deciding to extend the process predictability limits into the future is not probability based, in what way is it wrong to claim the process limits are not probability limits?” (I interpret “process limits” as control chart limits.)

In this way it is wrong.
There is an application of probability models that does “predict” the future. And one we all use. If you were (in the future) to flip a fair coin, what is the probability of getting heads? For limits, if you were to flip that fair coin 30 times, what is the probability of getting from 12 to 18 heads?

It is this sense of probability limits that is used with the 3-sigma rule: What is the probability of getting a point beyond the control chart limits assuming that the probability distribution is such-and-such (depending on the statistic and its average and the standard error)? This is what Shewhart says repeatedly and defines the control limits.
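The coin-flip question above can be answered exactly from the binomial distribution; a minimal sketch:

```python
import math

def binom_prob_range(n, p, lo, hi):
    """P(lo <= X <= hi) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(lo, hi + 1))

# Probability of 12 to 18 heads in 30 flips of a fair coin
prob = binom_prob_range(30, 0.5, 12, 18)
print(f"P(12 <= heads <= 18) = {prob:.3f}")  # 0.800
```

So the model assigns about an 80% probability to that interval for a future run of 30 flips, which is exactly the sense of “predicting within limits, with a given probability.”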

74. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

As I understand (or misunderstand) Deming, he offers that one’s confidence in the future, in prediction, is qualitative, even when informed by statistical (enumerative) studies.

An illustrative story [or not] relative to inference and future results

I have two types of wheat seed. I conducted a test to see which one would generate higher-quality results. The measured quality characteristic was flour yield as a percentage of cleaned wheat. I planted two 10-acre fields side by side in the state of Washington. Both fields were treated the same during planting, growing, and harvesting. The fields were harvested and the wheat conveyed to separate silos. Probabilistic samples of each field’s wheat were taken to evaluate the quality of the wheat.

I need to decide which field, if either, gave better results. I use probability theory (sampling and inference) as an aid to my enumeration (counting) of the results. Let’s say my samples told me that seed A produced higher-quality flour than seed B, and that I have confirmation the difference is statistically significant. Then the theory of probability is a basis for determining (calculating and quantifying) my level of confidence in the conclusion that seed A yielded higher-quality flour.

The interacting set of causes that generated my results is unique to the frame that I studied (last year in Washington). A decision to plant seed A next year in a different location, say in Kansas, will not be based upon probability theory. The basis of the decision, the prediction, that seed A will be better than B in Kansas next year, rests upon knowledge of the similarities of conditions (soil; weather; planting, growing, and harvesting techniques . . . ) between Washington last year and Kansas next year. It will be based upon understanding of the interacting causes, on subject matter expertise. Probability theory will not be the basis for my confidence that a particular seed type will be the proper choice for Kansas.

My confidence can range from “darn certain” to “pretty doubtful.” It is not quantifiable with probability theory. What justifies my “darn certain,” “pretty doubtful,” or any other place between the extremes, is not probability theory; subject matter knowledge sways me.

My confidence in my understanding (having reviewed it with a statistician who has expertise and who worked closely with Deming) can be categorized qualitatively as “nearly certain.” I recognize that being “darn certain” and “absolutely wrong” can exist simultaneously. Thus I am open to new knowledge from anyone willing to share.

• Ronald Fisher‘s Studies in Crop Variation from the 1920s were actually the basis for his development of many of the classical tools of statistics, in particular analysis of variance (ANOVA) and design of experiments (DOE).
In making predictions, probability theory and subject matter expertise are not mutually exclusive but complementary, with probabilistic models helping you avoid biases and quantifying risks. And probability theory is not always needed. A molecular biologist who was renowned as an experimenter once told me that she had never heard of DOE. The outcomes of her experiments were binary: either a result was there or it wasn’t.
Agriculture as a domain is much more variable than anything we deal with in a lab or in manufacturing. Differences in, as you say, soil; weather; planting, growing, and harvesting techniques cause variations that may dwarf the impact of seed choice. In addition, agriculture is an intrinsically batch process: you plant your seeds, watch them grow for months, and then harvest the whole crop in a few days.
This means that it can take a year before you can measure the results of an experiment. In your example, if you wanted to make sure that your choice of fields in Washington did not influence the outcome, you would need to switch seeds between the fields and grow another crop, which would take another year. Then, to make sure it wasn’t the influence of the weather, you would have to grow the crops enough times to filter it out… After 10 years, you might be sure.
By contrast, in a machine shop, you work with variations on the order of 1 mil on dimensions in multiple inches, in the controlled environment of a factory, with the well-known physics of metal cutting, and with process times for single workpieces measured in seconds for automotive applications and, at worst, in weeks in aerospace, which means cycles of learning that are orders of magnitude shorter than for agriculture.
It is not necessarily a simpler challenge, but it is a different one, for which Shewhart developed specific statistical tools, and Shingo later non-statistical tools for more mature processes.
Once you complete a process capability study on a machine, you use the results as a basis for control on this machine. That’s the whole point of the study. But what about using the same parameters on other machines? That they apply is your null hypothesis; if your controls refute it, then you conduct another study on the new machine.

75. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Then, with an enumerative problem, I utilize probability theory to study the past (for example, did method A of producing the parts give better results than method B?). I need expertise in probability theory. The analytic problem of predicting which method will give better results in the future requires that I utilize subject matter expertise.

So I can use probability theory to study what already exists and subject matter expertise to evaluate how the results of that study will pan out in the future. Yes?

• Whatever works… What I don’t understand is why you would not use subject matter expertise to understand the past, and why you would not use probability to predict the future.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

“So I can use probability theory to study what already exists and subject matter expertise to evaluate how the results of that study will pan out in the future. Yes?”
Yes. I wouldn’t worry about whether it’s enumerative or analytic.

A. Statistical analysis uses probability models (distributions). An analysis estimates characteristics of that model, e.g., its mean, standard deviation, or shape, as it is applied to data (frame or universe). From the characteristic estimates you can then estimate probabilities. For example, I flip a coin 100 times and get an estimate of the probability, p, of a single head; or test the probability against 0.5 to see if it is fair; or compare it to the probability of another coin.
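As a sketch of the coin example, here is an estimate of p from 100 simulated flips, with a normal-approximation z-test of the fairness hypothesis. The simulation and the 1.96 threshold are illustrative assumptions, not part of the original comment:

```python
import random
from math import sqrt

random.seed(42)  # reproducible simulated coin

# Flip a simulated fair coin 100 times and estimate p = P(heads)
n = 100
heads = sum(random.random() < 0.5 for _ in range(n))
p_hat = heads / n

# Normal-approximation z-test of the null hypothesis p = 0.5
z = (p_hat - 0.5) / sqrt(0.25 / n)
print(p_hat, round(z, 2))  # |z| > 1.96 would reject fairness at about the 5% level
```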

B. Now that you have that description (through estimates from data or just theoretical assumptions, e.g., it’s a fair coin), you can estimate what would happen in the future—ASSUMING conditions stay the same. For example, will you flip the coin the same way, will the wind factors be the same, will the distance it travels be the same, will it land on the same surface, etc.?

How do you determine which conditions are relevant? In order of effectiveness:
1. Scientific laws whenever possible (they also help identify the relevant conditions).
2. Your knowledge of the conditions—good idea to document them in the initial study.
3. Subject matter expertise of similar situations.
4. Guess.

C. You might also develop specifications for those relevant conditions.

D. In the future, you have to control (possibly statistically) the relevant conditions to be within those specs or be similar enough to the original conditions in A.

In your wheat example, (A) the initial study would give you estimates of (average) yield. [Enumerative]

Your prediction of what the average yield would be next year would depend on the relevant conditions being approximately the same (or more conducive). You would have to (B) determine what conditions were relevant, (C) set specs on those conditions (e.g., amount of fertilizer, fertilizer components), and (D) then control those conditions within their specs. Now you estimate the future average yield will be (about) the same as you got previously.

B-C are analytic studies and could be done statistically. You might use designed experiments for B and C (see Michel’s post) and control charts on the measures of the conditions for D. The prediction of average yield, however, is not probabilistic in this sense: a) the prediction is outside the frame and b) requires the assumption of similar conditions.

One way to test this conclusion is to see whether you can make the prediction without the statistics vs. make the prediction only with the statistics (i.e., without the assumptions of conditions staying the same). As Michel said, use both, but recognize what each provides.

76. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

To me, the math used to calculate the mean and the standard deviation may come from probability theory but where the limits are placed does not. The theoretical is used as a convenience.

Sure could be seen as arguing over how many angels are on the head of a pin, mea culpa… Why does it matter?

Very real consequences come from seeing them as probability limits, including twisting of the original intent of the charts. Questions that come from seeing the math as the source include things like:
• “there must be a better way to compute them” based on theory alone
• and transforming data

These weaken the tool in practice by discouraging people not trained in, and uncomfortable with, statistical math from using it. The great benefits the charts offer to our world don’t happen unless they are used.

So, as I see it, we agree that probabilistic math is used to define the center line and a factor to use in placing the limits, but for me it stops there. The placement of the limits is a choice from observations in the physical world. As it clearly states above, if the math didn’t work, we would have to find other math.

• You wrote: “… the math used to calculate the mean and the standard deviation may come from probability theory but where the limits are placed does not…”

The logic of that statement escapes me.

Some understanding of mathematical statistics is necessary, and the absence of this knowledge in manufacturing organizations has severely limited the diffusion of these methods, the Black Belt system being an attempt to fill this gap.

Attempts to apply these tools without the requisite knowledge result in mistakes like checking averages against limits set for individual values.

As for transformations, your “raw” data is usually already a voltage transformed by software into a length, temperature, weight, decibels, etc. Some further transformations may be perfectly valid, if you know what you are doing. I understand this is a big “if.”

77. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

The choice of 3 as a multiplier: reworded, the mean and standard deviation calculations may come from probability theory, but the choice of 3 as the multiple above and below came from empirical observation. As Shewhart wrote, if the math hadn’t worked, they would have had to find a new mathematical definition.

I am aware we may never bridge the gap, as I think the disconnect is on the issue of approximation. Looking for a way to help people place limits on their unique data, and to use limited monetary resources wisely, a “limit” was found that would ensure that, when surpassed, it was worth spending to explore.

That limit was found to be above or below 99 to 100% of all occurrences, again, based on trial and error.

In looking for a workable way for other people to calculate the limits, given that finding where to place a limit by looking at a table of data was obscure, statistics was turned to, and probability limit calculations were found to come close enough to help: not exact, but close. I doubt you would characterize that as a rigorous probabilistic argument.

The transformations I am talking about are those that compensate for the lack of a normal distribution; sorry not to be precise. This, too, puts people off using the charts.

As to some basic knowledge of statistics, I completely agree, along with clear training in the rules for the charts and some of the pitfalls.

The way I understand it, SPC is a rule-based system for establishing and maintaining the predictability (stability) of processes, where you stick with preset limits once stable, and you only recalculate limits when there is a fundamental change in the process,

the rest is from Mike Clayton “and only if you have studied the actions that really correct for quantifiable shift and drift(such as tool wear), and can assure that all operators have access to that knowledge, in form of action plans that are very specific. No tweaking.”

Head over to the discussion in the ASQ Statistics Division group on hand-done vs. computerized charts. The people there are experienced in high-speed manufacturing and statistics, both applied and theoretical, and can answer much better than I can, especially on the relative obsolescence of SPC. http://www.linkedin.com/groups/Computerized-Process-Behavior-Charts-versus-2115190%2ES%2E213891763?qid=cccfa05c-2388-461a-812f-05e3f7fecd66&trk=group_most_popular-0-b-ttl&goback=%2Egmp_2115190

• The application of probability in question here is statistical decision theory. It has been developed specifically for situations in which you don’t have perfect knowledge and must approximate.

The range of probabilities of not having a false alarm whenever you add a point has been quoted earlier in this thread as being between 98% and 99.73% for a range of distributions. As I have explained earlier, it is not a small range when you consider not one point but 50 or 100 different points added to charts and checked against limits every day.
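The compounding effect described here is easy to verify: if each point independently has a false-alarm probability q, the chance of at least one false alarm among n points added in a day is 1 − (1 − q)^n. A minimal sketch, using the per-point rates implied by the 98% and 99.73% figures quoted in the thread:

```python
def p_any_false_alarm(q, n):
    """Probability of at least one false alarm among n independent points,
    each with per-point false-alarm probability q."""
    return 1.0 - (1.0 - q) ** n

# Per-point false-alarm rates of 2% and 0.27%, for 50 and 100 daily points
for q in (0.02, 0.0027):
    for n in (50, 100):
        print(f"q={q}, n={n}: {p_any_false_alarm(q, n):.1%}")
```

At 100 points per day, even the 0.27% normal-theory rate produces a false alarm roughly every fourth day, while the 2% rate makes false alarms near-daily; that is why the spread between distributions matters in practice.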

I don’t know why you keep bringing up “high speed,” when the determining factor for the relevance of statistical tools to quality improvement is not speed but technology. Making and packing detergent powder is high-speed manufacturing; fabricating state-of-the-art integrated circuits, high technology.

In mature processes, capability is not an issue, and the statistical tools are ineffective. That is why you see plants producing top quality auto parts that don’t have a single employee trained in statistical tools. Quality enhancement at this level is pursued by non-statistical tools, like one-piece flow, stop ropes, andons, go/no-go gauges, mistake-proofing, JKK,…

In high technology, where processes are always immature, you need statistical tools, but the old SPC can’t cut the mustard. As Lasse and Drew pointed out, measurements on individual operations are automatically collected and analyzed, with specialized responders for software-generated alarms that are nowadays based on more advanced methods than Shewhart’s, as your Intel 2006 article indicated.

From my own experience of the industry, whatever you do on individual operations is necessary but minor. What determines yield is the integration of hundreds of operations in a process whose outcome cannot be seen until it is finished. And the yield enhancement groups are using much more sophisticated tools than SPC.

Browsing through a just published handbook of quality management, I was struck by the absence of any discussion of the range of applicability of the various tools described. In addition, this book fails to cover the methods used by the few companies that are actually able to make 1 million units of product without a single defective.

78. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

Michel, please start a conversation over in the ASQ statistics division on this as there you will both find people who disagree as strongly as I do, but also the requisite level of experience and statistical training for you to listen. Your point of view is not shared.

• Sometimes, entire professions get so wrapped around their own dogmas that they lose touch with reality. It happened to management accountants in the 1980s. They were so attached to techniques developed in the 1910s at Dupont that they failed to see how irrelevant they had become. They did things like give you unit costs that were calculated by Materials, Labor, and Overhead, in which the Overhead category was 90% of the total.

They got wake-up calls from some of their own, like Robert Kaplan, but also from outsiders like Eli Goldratt. And, since then, there have been several efforts to update the field, from Kaplan’s Activity-Based Costing and Goldratt’s Throughput Accounting to Brian Maskell and Orrie Fiume’s Lean accounting.

When I look at the literature from the American quality establishment, or the program of courses offered by the ASQ, I see a similar situation. In cars, the J.D. Power and Associates Initial Quality Survey for 2012 has Toyota and its Lexus and Scion brands back at or near the top in most categories. Incidentally, Honda/Acura is not far behind, and, in some categories, ahead. Such results should make the quality profession curious about the quality side of the Toyota Production System, but it doesn’t act interested.

The methods to achieve these results are not secret. They can be learned. More than 10 years ago, Kevin Hop and I developed a course on Lean Quality that was well received, … in China. But these methods are not statistical. You don’t need to know them to pass a certification exam, pass ISO-900x audits, or receive the Malcolm Baldrige Award.

There are other elephants in the Quality room that the profession seems to be ignoring. The 2001 Firestone tread separation debacle, or the 2010 Toyota recalls should make it clear that a major part of quality management is emergency response. When such events happen, quality managers have to step in with immediate countermeasures, organize the search for permanent solutions, and communicate about their actions with customers and the media. I could not find a word about this in the latest Handbook for Quality Management. The word “recall” is not in the index. On the other hand, you have several pages on Maslow’s hierarchy of needs.

Unlike Shigeo Shingo, I am not saying we should get rid of statistical tools in quality engineering and management. I think instead that we should use the right tools where they apply, whether they are statistical or not.

If you juxtapose quotes from Mikel Harry and Shigeo Shingo, you have the appearance of irreconcilable differences, until you realize they came from radically different industrial backgrounds, and that each one was expressing as universal truths statements that applied to specific and different contexts.

79. Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

I still think that you should not attach probabilities to control charts. It may be confusing and miss the point.

If you knew your distribution, and if you could guarantee that it stays perfectly stable in the future, then you could calculate the probabilities of whatever events you like.

But you don’t know your distribution accurately enough, and you cannot guarantee that it stays unwavering in the future. Hence you don’t know your probabilities (I mean in the real world; in the artificial world of computer-generated random numbers it sure is possible).

But does this mean that control charts are obsolete? No! Sure, it would be nice to know whether the probabilities were 0.27%, 1%, 2%, or whatever, but I think it cannot be done with any statistical tool. No tool can provide accurate probabilities of some future event.

Anyway, using statistical tools is far, far better than not using them (guessing? simply looking at the data? tampering?). The control charts show reasonably well how the process is behaving, and they are reasonably good decision tools, regardless of the fact that they cannot provide the exact probability of some special event in the future.

In this discussion, nothing has come forth for me to replace control charts with something else. I still have them as invaluable tools in my statistical tool box.

• In the most classical SPC model, you start by doing a Process Capability Study, during which you take measurements of a quality characteristic that you deem representative of normal behavior. It’s a judgement call, but it often involves excluding outliers.

You then estimate the mean and standard deviation of the characteristic from this sample, and use these parameters to set Control Limits based on standard formulas.

Implicitly, what you are doing is fitting a probabilistic model to your data, and this model is that the characteristic follows a normal distribution, and that the measurements are independent and identically distributed (i.i.d.). It is implicit because the literature doesn’t say so, but all its calculations are based on it.
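A minimal sketch of this implicit model-fitting step, on invented measurements. It uses the sample standard deviation for simplicity; the classical charts estimate dispersion from ranges instead, but the idea is the same:

```python
from statistics import mean, stdev

def control_limits(sample, k=3.0):
    """Center line and k-sigma limits fitted to a capability-study sample,
    under the implicit i.i.d. normal model."""
    m, s = mean(sample), stdev(sample)
    return m - k * s, m, m + k * s

# Invented measurements of a quality characteristic, outliers already excluded
study = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 9.96, 10.02]
lcl, center, ucl = control_limits(study)
print(round(lcl, 3), round(center, 3), round(ucl, 3))
```

Each new point plotted against these limits is then a test of the null hypothesis that the fitted model still holds.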

What you do afterwards when you plot your Control Charts is not assuming that this model remains valid but instead using it as a null hypothesis that you test against. What every limit crossing is telling you is that the model no longer applies.

What determines the usefulness of the tool when you use it systematically, day in and day out on a factory floor, is the ratio of real to false alarms that it produces. If the process is unstable, you will get many real alarms and the tool will be useful. On the other hand, if the actual process fits the model perfectly all the time, you will have nothing but false alarms and the tool will be useless.

Why were Control Charts used at Toyota in the early 1950s? Because the processes were unstable.

Why, as Art Smalley reports, did Toyota stop plotting Control Charts 30 years ago? Because its processes were so stable that the charts were useless. They were actually worse than useless, because they started people chasing non-existent causes.

Providing probabilities for future events is the whole purpose of probability theory.

Having stable processes does not mean that your products are defect-free, only that the defects are not caused by drifts in your processes. Discrete events like sudden tool failures and human errors still remain; the Lean approach to further progress is to use tools like one-piece flow, go/no-go gauges, and mistake-proofing that are not statistical in nature.

The quality characteristics of a part coming out of an operation over time are a time series, and there are time series models in multiple dimensions that can represent, for example, correlations between characteristics of the same part, and autocorrelations between them over time. The EWMA (Exponentially Weighted Moving Average) that Dan quoted as used at Intel is an example of such a tool.
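For illustration, a minimal EWMA sketch on invented data; the weight λ = 0.2 is a common textbook choice, not a prescription:

```python
def ewma(series, lam=0.2, start=0.0):
    """Exponentially weighted moving average of a time series;
    lam is the weight given to the newest observation."""
    z, out = start, []
    for x in series:
        z = lam * x + (1.0 - lam) * z
        out.append(z)
    return out

# A gradual upward drift stands out in the smoothed values
data = [0.1, -0.2, 0.0, 0.3, 0.4, 0.5, 0.6, 0.7]
print([round(z, 3) for z in ewma(data)])
```

In an EWMA chart, each smoothed value is compared against its own control limits; small sustained shifts that stay inside Shewhart limits accumulate in the EWMA and eventually cross, which is what makes it more sensitive to drift.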

Shewhart couldn’t use such tools because they didn’t exist in 1931 and neither did the technology to make calculations based on them. Had they been available, he might still have chosen not to use them, as being impossible for his manufacturing audience to understand. But then, in the ensuing 82 years, even his Control Charts have been widely misunderstood.

• Comment in the Lean Six Sigma Worldwide discussion group on LinkedIn:

@Lasse. Statistical inference is the application of probability models. Probability models are theoretical distributions used to understand real-world phenomena. We don’t “know the distribution” of any real-world phenomenon, nor can we “guarantee that it stays unwavering in the future”—not even distributions for flipping a coin.

However, the key condition in your statement is “accurately enough.” That is why I said before that probability models are useful or not—but we shouldn’t say they are correct or not. Is using a 50-50 distribution for a coin accurate enough? Depends on the situation. We do this with hypothesis testing—yet no one argues that we don’t know the distributions nor can we guarantee they will stay that way.

As far as accuracy is concerned, any finite population (which all real-world populations are) is always discrete. Yet we often model them with continuous probability distributions. Such models are useful because they are “accurate enough”—but incorrect.

I think the confusion you are making is that there are (at least) two stages in using control charts. The first is to determine whether the data you collected during a specified time shows the process to be stable or not during that time (what Michel referred to as hypothesis test). This is done by assuming that points beyond the control limits have a low probability. We can estimate that probability by assuming a probability distribution. Do we need to? Depends. Is assuming it is less than 1% accurate enough to be useful? Most people think so.

The second stage has two options depending on whether you conclude it is stable or not. If stable, then you might do other analyses, e.g., predicting future performance not only of stability but of process capability. This prediction could be that the probability of getting a defect is such-and-such. How accurate does that estimate need to be? That prediction is relative to specifications, not control limits.
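The capability prediction described here can be sketched under a normal model. The process parameters and spec limits below are hypothetical, chosen only to show the calculation:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def defect_probability(mu, sigma, lsl, usl):
    """P(characteristic outside the spec limits) under a normal model."""
    return norm_cdf((lsl - mu) / sigma) + 1.0 - norm_cdf((usl - mu) / sigma)

# Hypothetical stable process: mean 10.0, sigma 0.05, specs at 9.8 and 10.2,
# i.e., the specs sit 4 sigma on either side of the mean
p = defect_probability(10.0, 0.05, 9.8, 10.2)
print(f"{p:.2e}")
```

How much this estimate can be trusted depends, as the comment says, on whether the relevant conditions stay within their specs.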

If unstable, then you do analyses to find causes of the instability (what was called analytic studies). This does not require probabilities or probability models. Although the “statistical” tool DOE can be used, you don’t need probabilities to analyze the data as it is a causal relationship you seek.

To help close this discussion (or extend it!), I have two questions for those who insist that control chart limits are not and should not be probability limits:
1. Why does Wheeler recommend using probability limits for c- and u-charts?
2. With respect to the accuracy issue, why does he claim these probability limits “provide a balance between sensitivity to process improvements and deterioration, while minimizing the chance of false alarms”? [In other words, they do exactly what he says control limits that are not “probability limits” do.]