Acceptance Sampling In The Age Of Low PPM Defectives

Today, some automotive parts manufacturers are able to deliver one million consecutive units without a single defective, and pondering quality management practices appropriate for this level of performance is not idle speculation. Of course, it is only achieved by outstanding suppliers using mature processes in mature industries. You cannot expect it during new product introduction or in high-technology industries where, if your processes are mature, your products are obsolete.

While still taught as part of the quality curriculum, acceptance sampling has been criticized by authors like W. E. Deming and is not part of the Lean approach to quality. For qualified items from suppliers you trust, you accept shipments with no inspection; for new items or suppliers you do not trust, you inspect 100% of incoming units until the situation improves. Let us examine both what the math tells us about this and possible management actions, with the help of 21st century IT.
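What the math tells us can be sketched quickly. Under a single-sampling plan, a lot is accepted if a sample of n units contains at most c defectives; at defect rates in the low ppm range, almost every sample passes, so the sample carries almost no information. The plan parameters and defect rate below are illustrative assumptions, not figures from any published standard:

```python
from math import comb

def prob_accept(p, n, c=0):
    """Probability of accepting a lot under a single-sampling plan:
    draw n units, accept if at most c are defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# At 10 ppm defective, even a 1,000-unit sample passes 99% of the time,
# so it tells you almost nothing about whether the process has drifted.
print(round(prob_accept(10e-6, 1000), 4))  # ≈ 0.99
```

At these defect levels, distinguishing a 10 ppm supplier from a 100 ppm one would take samples far larger than any practical shipment, which is one reason acceptance sampling loses its relevance.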


Why Your Lean and Six Sigma Improvement Efforts Aren’t Driving Better Results | IndustryWeek | John Dyer

“Don’t expect a positive ROI from your lean and Six Sigma investments if they are nothing but a pretty picture.

I once had a plant manager tell me his factory had implemented Six Sigma, but there was not a single statistical process control chart.  How is that possible? Another had the control charts in place but refused to allow the operator to shut the process down when it indicated an out-of-control condition.  Another plant claimed it was lean but had a dozen bins of parts stacked on the floor as part of a two bin system.  Another plant routinely violated the daily production plan by rescheduling orders, and then the plant blamed the supply chain for causing it to frequently run out of parts (which then drove it to change the schedule… a vicious circle).”

Source: www.industryweek.com


A summary of mistakes about Lean

In an invitation to the Lean Enterprise Academy’s Lean Summit 2014, David Brunt included the following summary of Lean since 1990:

“Early implementations focused on empowered teams and continuous improvement (kaizen) or attempts to replicate a pre-defined box of tools such as 5S, SMED, SPC and kanban. For others lean became synonymous with kaizen events – that were actually kaikaku – radically reconfiguring individual operations. For some, this led to them developing their version of Toyota’s famed Production System (TPS) including their own schematic ‘house’ or ‘temple’ of lean along with departments of continuous improvement specialists.”

It is a pretty accurate account of what happened, the only major omission being the omnipresent VSMs, and it goes a long way towards explaining why the vast majority of these efforts failed. They were limited at best to superficial details of TPS, included elements that were not part of TPS, and misjudged implementation priorities. Let us go through the list:

  1. “Empowered teams.”  As a manager you have a team to work with. What decisions should you allow this team to make on its own? This is best subjected to the sleep-at-night test. Knowing that you are responsible for the outcome, what can you delegate to the team and still sleep at night? It obviously depends on the team. If it is a team of production operators with 10 years of TPS practice behind it, the answer will not be the same as if they are beginners. Implementations that start with empowering teams put the cart before the horse.
  2. “Continuous improvement (kaizen).” Lean, or TPS, are often described as approaches to continuous improvement (CI), when CI is in fact only one component of the system. You cannot convert a plant from mass production to Lean manufacturing by continuous improvement, because it is not about tweaking details. For example, if you have implemented cells in machining or assembly, you can make them perform better with CI, but you have to have cells first, and that is beyond the scope of CI.
  3. “Replicate a pre-defined box of tools.” It can work, if your situation is sufficiently similar to the one you are copying, you really know what the tools are, and you master them.
    • SMED and Kanban are tools of TPS but often misunderstood. For example, you often see SMED used to try to increase equipment utilization instead of flexibility, and Kanban is often confused with the two-bin system or even reorder-point.
    • SPC is not part of TPS. This is so shocking to American and European professionals trained by the Quality establishment that they just inserted it back in, regardless of what Toyota actually did. The latest examples of SPC control charts at Toyota are from the 1950s.
    • 5S is part of TPS, but is mistakenly assumed easy to implement because its technical content is trivial. In fact, the absence of technical content is what makes it difficult to implement and certainly unfit for an initial project.
  4. “Kaizen events” are an American invention and not part of TPS. As Brunt points out, the name is misleading, because what they do is not Kaizen. The popularity of this method over the past 25 years and the confusion created by the name have in effect prevented Lean implementation from including the real Kaizen.
  5. “Departments of continuous improvement specialists.” The creation of these departments has often made Lean implementation into a function alongside Production Control, Maintenance, or Quality Assurance, with the result of making it a professional specialty instead of part of everybody’s job. It works to make a good show for outside visitors, but not for much else. This department cannot be large enough to have the capacity to do all that needs to be done. Even if it did, it does not have the authority to make the changes take root in daily operations.

These efforts failed because the approach was simplistic. Both the technical and managerial content of TPS run deeper and take a while to learn. A successful implementation, particularly in a different industry, is not based on copying tools but on understanding underlying principles and deploying them as appropriate to the new context.

Standards and Opportunities for Deviation

“Every time you set a standard, you create possibilities for deviations, and the need to respond,” David Meier said. The setting was a presentation on problem-solving at a corporate in-house conference. It struck me as a concise statement of why managers should issue standards only where clearly and unquestionably useful.

Unnecessary standards were discussed in an earlier post, but the topic is worth revisiting in greater detail: first, because companies have so many of them, making work life more complex than it has to be, and second, to elaborate on their impact on the organization.

Standards are rules set for others to follow, and rarely welcome, particularly when these others are thereby required to change behaviors they consider rational and appropriate, and sometimes more advanced than the standard. If you don’t enforce your unnecessary standard, you hurt the credibility of all your standards, including the useful ones; if you do, you turn into Captain Queeg, as he “tried to run the ship properly, by the book.”

All pens different

All standard pens

The reception desk in the lobby of the building where the conference took place had a holder full of pens for visitors to sign in. No two pens were alike. A Captain Queeg would have had none of that; he would have mandated a model of pen and a color of ink, and set up regular audits to monitor compliance.

Office 5S

The example David gave was of office 5S zealots in Germany who had marked a location for a computer mouse on an individual’s desk and written him up for not complying. Last year, Mark Graban had posted a similar example of “5S” at a desk in the UK, shown on the right.

It reminded me of my experience of working in a Japanese office in the 1980s. It was a room with about 30 gray metal desks arranged in facing rows without any partitions. Everywhere else I have worked, each desk had its own supplies of staplers, staple-removers, scissors, glue sticks, etc., but it was not so in that office. These were shared resources, stored in a transparent plastic chest in the center of the room, with a label for each compartment.

This arrangement sounds right out of a Staples commercial, but that was the way it was. What struck me about it, however, was that the sharing created the need for labeled locations and for the discipline to return the items to assigned locations after use. This approach might make sense in offices used in hotelling mode. Everywhere else, however, each office worker has a dedicated desk that comes with a set of tools that the employee organizes as he or she sees fit.

In the 21st century, the tidiness of desks does not have much to do with the performance of an office. What really makes a difference is the way information is organized on its computer network, and that is not visible when you walk through the office. But effective ways to do this are a whole other discussion. In factories, 5S in the offices is sometimes justified “to show solidarity with the shop floor.” It has been suggested to me that a better way to show solidarity might be to make the offices as hot, smelly and grimy as the shop floor.

Sometimes, the consulting companies that guide 5S implementation in client offices do not practice it in their own. In one particular firm, as the consultants were in the field most of the time, they had no permanent desks and grabbed whichever was available when they happened to be in town. With such a system, you might have expected the rooms to be numbered, and a magnetic board at the entrance with a token for each consultant present, to mark on a map of the facility where he or she could be found, but the managers felt that such a board would have been “ugly.” They never told me why they didn’t number the rooms. To locate a consultant, you had to call his or her cell phone and then follow instructions like “go left at the top of the stairs and it’s the third door to the right.”

Besides the size of steel balls at the end of motorcycle brake handles and company email addresses listed in my earlier post, there are many other things that are better off not standardized, and prominent ones include the analytical tools used in problem-solving.

The institutions of the quality profession in the US still promote as a standard the 80-year-old tools of SPC, as if the art of collecting and analyzing data had not evolved since 1930. These tools are obsolete, but it would be a mistake to replace them with another set as a new standard.

There should be no standard. The professionals who work on improving quality should learn to use a broad variety of tools, and should be free to use whichever they think may help in their current circumstances. And they should always be on the lookout for new approaches and new technology.

Likewise, Value Stream Mapping (VSM) has, in the past 15 years, been elevated in the US to the position of a standard that all Lean practitioners are mandated to use. The need to map flows of materials and information in a manufacturing supply chain is often real, but there is no need for it to be done in a specific format with a specific set of symbols.

In fact, what happens in both situations is that formal compliance with the standard rapidly becomes the only criterion against which the outputs of quality or flow improvement teams are reviewed. The tools, their appropriate use, and their range of applicability are understood neither by the teams mandated to use them nor by the auditors who make sure they do, and the result is wallpaper.


Is SPC obsolete?

In the broadest sense, Statistical Process Control (SPC) is the application of statistical tools to characteristics of materials in order to achieve and maintain process capability. In this broad sense, you couldn’t say that it is obsolete, but common usage is more restrictive. The semiconductor process engineers who apply statistical design of experiments (DOE) to the same goals don’t describe what they do as SPC. When manufacturing professionals talk about SPC, they usually mean Control Charts, Histograms, Scatter Plots, and other techniques dating from the 1920s to World War II, and this body of knowledge is definitely obsolete in the 21st century.

Tools like Control Charts or Binomial Probability Paper have impressive theoretical foundations and were designed to work within the limits of the information technology of the 1920s. Data was recorded on paper spreadsheets, you looked up statistical parameters in books of tables, and computed with slide rules, adding machines or, in some parts of Asia, abacuses (See Figure 1). In Control Charts, for example, using ranges instead of standard deviations was a way to simplify calculations. These clever tricks addressed issues we no longer have.

Figure 1. Information technology in the 1920s
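The range-based shortcut can be seen in how classic X-bar/R limits are computed: dispersion is estimated from subgroup ranges via tabulated constants, avoiding square roots entirely. A minimal sketch, using the standard published Shewhart constants for subgroups of five; the function name and data layout are mine:

```python
# Tabulated Shewhart chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Control limits from subgroups of 5 measurements each,
    using ranges instead of standard deviations, 1920s-style."""
    xbars = [sum(s) / len(s) for s in subgroups]   # subgroup averages
    ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)              # grand average
    rbar = sum(ranges) / len(ranges)               # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, D4 * rbar),
    }
```

Every operation here is a sum, a difference, or a single multiplication by a constant looked up in a book: exactly what an adding machine or an abacus could handle.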

Another consideration is the manufacturing technology for which process capability needs to be achieved. Shewhart developed control charts at Western Electric, AT&T’s manufacturing arm and the high technology of the 1920s. The number of critical parameters and the tolerance requirements of their products bear no comparison to those of their descendants in 21st-century electronics. For integrated circuits in particular, the key parameters cannot be measured until testing at the end of a process that takes weeks and hundreds of operations, and the root causes of problems are often interactions between features built at multiple operations, too complex to be understood with the tools of SPC. In addition, the quantity of data generated is much larger than anything the SPC techniques were meant to handle. If you capture 140 parameters per chip, on 400 chips/wafer and 500 wafers/day, that is 28,000,000 measurements per day. SPC dealt with a trickle of data; in current electronics manufacturing, it comes out of a fire hose, and even that is nothing compared to the daily terabytes generated in e-commerce or internet search (See Figure 2).

Figure 2. Data, from trickle to flood, 1920 to 2011

What about mature industries? SPC is a form of supervisory control. It is not about telling machines what to do and making sure they do it, but about checking that the output is as expected, detecting deviations or drifts, and triggering human intervention before these anomalies have a chance to damage products. Since the 1920s, however, lower-level controls embedded in the machines have improved enough to make control charts redundant. The SPC literature recommends measurements over go/no-go checking, because measurements provide richer information, but the tables are turned once process capability is no longer the issue. The quality problems in machining or fabrication today are generated by discrete events like tool breakage or human error, including picking wrong parts, mistyping machine settings or selecting the wrong process program. The challenge is to detect these incidents and react promptly, and, for this purpose, go/no-go checking with special-purpose gauges is faster and better than taking measurements.
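The speed advantage of 100% go/no-go checking over sampled measurement for catching discrete events can be illustrated with a toy simulation; the part stream and checking intervals below are invented for illustration:

```python
def first_detection(parts_ok, check_every=1):
    """Index of the first defective caught when inspecting
    every `check_every`-th unit (1 = 100% go/no-go checking)."""
    for i, ok in enumerate(parts_ok):
        if i % check_every == 0 and not ok:
            return i
    return None

# Hypothetical run of 1,000 parts: the tool breaks at part 501,
# after which every part is defective.
stream = [True] * 501 + [False] * 499
print(first_detection(stream, check_every=1))   # 501: caught immediately
print(first_detection(stream, check_every=25))  # 525: 24 more bad parts made
```

A cheap gauge check on every part bounds the damage from a tool breakage to a single unit, which no sampling scheme, however statistically sophisticated, can do.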

In a nutshell, SPC is yesterday’s statistical technology to solve the problems of yesterday’s manufacturing. It doesn’t have the power to address the problems of today’s high technology, and it is unnecessary in mature industries. The reason it is not completely dead is that it has found its way into standards that customers impose on their suppliers, even when they don’t comply themselves. This is why you still see Control Charts posted on hallway walls in so many plants.

But SPC has left a legacy. In many ways, Six Sigma is SPC 2.0. It has the same goals, with more modern tools and a different implementation approach to address the challenge of bringing statistical thinking to the shop floor. That TV journalists describe all changes as “significant” reveals how far the vocabulary of statistics has spread; that they use it without qualifiers shows that they don’t know what it means. They might argue that levels of significance would take too long to explain in a newscast, but, if that were the concern, they could save air time by just saying “change.” In fact, they are just using the word to make the change sound more, well, significant.

Over decades, the promoters of SPC have not succeeded in getting basic statistical concepts understood in factories. Even in plants that claimed to practice “standard SPC,” I have seen technicians arbitrarily picking parts here and there in a bin and describing it as “random sampling.” When I ask why Shewhart used averages rather than individual measurements on X-bar charts, I have yet to hear anyone answer that averages follow a bell-shaped distribution even when individual measurements don’t. I have also seen software “solutions” that checked individual measurements against control limits set for averages…
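The point about averages is worth checking numerically: even when individual measurements come from a strongly skewed distribution, subgroup averages are both tighter and closer to bell-shaped, which is why limits computed for averages must never be applied to individual measurements. A sketch using only Python's standard library, with an exponential distribution as an arbitrary choice of skewed process:

```python
import random
import statistics

random.seed(42)
# Individual measurements from a deliberately skewed process:
# exponential with mean 1, nothing like a bell curve.
individuals = [random.expovariate(1.0) for _ in range(100_000)]
# Subgroup averages of 5, as plotted on an X-bar chart.
averages = [statistics.mean(individuals[i:i + 5])
            for i in range(0, len(individuals), 5)]

def skewness(xs):
    """Sample skewness: 0 for a symmetric distribution."""
    mu, s = statistics.mean(xs), statistics.stdev(xs)
    return sum((x - mu) ** 3 for x in xs) / (len(xs) * s ** 3)

print(statistics.stdev(individuals), skewness(individuals))  # ~1.0, ~2.0
print(statistics.stdev(averages), skewness(averages))        # ~0.45, ~0.9
```

The spread of the averages shrinks by a factor of √5, so control limits set for averages sit far inside the natural spread of individuals; software that compares individual measurements to those limits will flag perfectly normal parts.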

I believe the Black Belt concept in Six Sigma was intended as a solution to this problem. The idea was to give solid statistical training to 1% of the work force and let them be a resource for the remaining 99%. The Black Belts were not expected to be statisticians at the level of academic specialists, but process engineers with enough knowledge of modern statistics to be effective in achieving process capability where it is a challenge.