Michael Ballé opens his 1/29/2018 Gemba Coach column with “all methodologies are about making a better use of our minds.” Are they? Tom DeMarco and Tim Lister disagree. In Peopleware, they describe methodologies as follows:
“A Methodology is a general systems theory of how a whole class of thought-intensive work ought to be conducted. It comes in the form of a fat book that specifies in detail exactly what steps to take at any time, regardless of who is doing the work, regardless of where or when. The people who write the Methodology are smart. The people who carry it out can be dumb. They never have to turn their brains to the ON position. All they do is start on page one and follow the Yellow Brick Road, like happy little Munchkins, all the way from the start of the job to its successful completion. The Methodology makes all the decisions, the people make none.”
Six years ago, one of the first posts in this blog — Is SPC Obsolete? — started a spirited discussion with 122 comments. Reflecting on it, however, I find that the participants, including myself, missed the mark in many ways:
- My own post and comments were too long on what is wrong with SPC, as taught to this day, and too short on alternatives. Here, I am attempting to remedy this by presenting two techniques, induction trees and naive Bayes, that I think should be taught as part of anything reasonably called statistical process control. I conclude with what I think are the cultural reasons why they are ignored.
- The discussions were too narrowly focused on control charts. While the Wikipedia article on SPC is only about control charts, other authors, like Douglas Montgomery or Jack B. Revelle, see it as including other tools, such as scatterplots, Pareto charts, and histograms, topics that none of the discussion participants said anything about. Even among control charts, there was undue emphasis on just one kind, the XmR chart, which Don Wheeler thinks is all you need to understand variation.
- Many of the contributors resorted to arguments from authority, asserting that an approach must be right because of who said it rather than because of what it says. With all due respect to Shewhart, Deming, and Juran, we are not going to solve today’s quality problems by parsing their words. If they were still around, perhaps they would chime in and exhort quality professionals to apply their own judgment instead.
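The naive Bayes technique mentioned above fits process-control data in a few lines. The following is a minimal sketch, not a recommendation of any particular library: the machines, shifts, suppliers, and record counts are all hypothetical, and a real application would use far more data and continuous measurements as well as categories.

```python
from collections import Counter, defaultdict

# Hypothetical quality records: (machine, shift, supplier, outcome).
# All names and counts are made up for illustration.
records = [
    ("M1", "day",   "A", "good"),   ("M1", "night", "A", "good"),
    ("M2", "day",   "B", "good"),   ("M2", "night", "B", "defect"),
    ("M1", "night", "B", "defect"), ("M2", "day",   "A", "good"),
    ("M2", "night", "B", "defect"), ("M1", "day",   "B", "good"),
]

def train(records):
    class_counts = Counter(r[-1] for r in records)
    feature_counts = defaultdict(Counter)  # keyed by (feature index, class)
    for *features, label in records:
        for i, value in enumerate(features):
            feature_counts[(i, label)][value] += 1
    return class_counts, feature_counts

def predict(class_counts, feature_counts, features):
    total = sum(class_counts.values())
    scores = {}
    for label, count in class_counts.items():
        # P(class) times the product of P(feature | class), assuming the
        # features are independent given the class ("naive"); add-one
        # smoothing over the 2 values each feature takes here
        p = count / total
        for i, value in enumerate(features):
            p *= (feature_counts[(i, label)][value] + 1) / (count + 2)
        scores[label] = p
    return max(scores, key=scores.get)

model = train(records)
print(predict(*model, ("M2", "night", "B")))  # → defect
```

The point is not the code but the reasoning: each categorical factor contributes its conditional frequency, and the product ranks the likely outcomes, which is well within the reach of anyone who can read a Pareto chart.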
Five years ago, I pointed out several omissions in the ASQ’s History of Quality pages, which have not been corrected. Specifically, I faulted them for ignoring the TPS/Lean approach to quality, the role of interchangeable parts technology, and the Roman philosopher Cicero, who coined the word “quality.” The first page, however, also contains what I think is an error of commission, where it credits the guilds of medieval Europe as precursors in the field, as follows:
“From the end of the 13th century to the early 19th century, craftsmen across medieval Europe were organized into unions called guilds. These guilds were responsible for developing strict rules for product and service quality. Inspection committees enforced the rules by marking flawless goods with a special mark or symbol.[…] Inspection marks and master-craftsmen marks served as proof of quality for customers throughout medieval Europe. This approach to manufacturing quality was dominant until the Industrial Revolution in the early 19th century.”
“Six Sigma as a problem-solving methodology causes many hang-ups for Japanese managers. Many Americans seeking training in Six Sigma in Japanese organizations face resistance with little explanation as to why. This often leads to frustration and contempt towards management. They write off the Japanese resistance to the training as resistance to change, preventing growth and feeling unrepresented.”
Sourced through Nipponica
Michel Baudin’s comments: In this post, Ian Moore makes the case that rejection of Six Sigma by Japanese organizations is rooted in the national culture, which is ironic, given that Six Sigma’s Black Belt concept was borrowed from Japanese martial arts with the obvious intent of creating the perception of a connection to Japanese culture.
I own two dishwashers in two homes, different models from the same brand, bought in the same store, and both on a service contract. For the first one, the model number is SHE55R56UC; for the second one, SHE65T55UC. Today, we needed help on the first one, but customer service shipped us parts for the second one, which the repair technician discovered when unpacking them.
Chet Marchwinski recently exhumed a 2011 discussion about Poka-Yoke that had been started by the following question:
I’m a manufacturing engineer and since I have started participating in kaizen workshops, I have noticed that production supervisors tend to disconnect some of the poka-yokes we’ve put in place in the machines. When I challenge them about this they argue that operators can’t run production and cope with the complexity of our machines. I am perplexed by this and wondered whether you’d have a comment.
In short, I can think of two reasons for production supervisors to disconnect Poka-Yoke:
- No production supervisor in his right mind would disconnect devices that make the work easier for operators. If they do disconnect them, the most likely explanation is that the devices described as “Poka-Yoke” actually add work for the operator. If you have to pick a part from one of ten open bins in front of you, you spend precious seconds finding the right one; if all bins are covered with lids except the right one, not only are you physically prevented from picking the wrong one but you don’t have to look for it. It makes your job easier. On the other hand, if you have to scan a bar code on the part to validate the pick, it adds to your workload, and your supervisor will pull the plug on it during the next production rush.
- The manufacturing process is not ready for Poka-Yoke. A production supervisor is quoted in the question as saying “operators can’t run production and cope with the complexity of our machines.” This suggests that the line has process capability issues that must be addressed before implementing Poka-Yoke.
The following paragraphs elaborate on these points.
What passes for “business intelligence” (BI), as advertised by software vendors, is limited to basic and poorly designed charts that fail to show interactions between variables, even though scatterplots and elementary regression are taught to American middle schoolers and to shop-floor operators participating in quality circles.
But the software suppliers seem to think that it is beyond the cognitive ability of executives. Technically, scatterplots are not difficult to generate, and there are even techniques to visualize more complex interactions than between pairs of variables, like trendalyzers or 3D scatterplots. And, of course, visualization is only the first step. You usually need other techniques to base any decision on data.
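To show just how elementary the arithmetic behind a scatterplot with a regression line is, here is a minimal sketch that fits an ordinary least-squares line to hypothetical process data; the temperatures and shrinkage values are made up for illustration.

```python
# Hypothetical process data: oven temperature (°C) vs. part shrinkage (%).
temps = [180, 185, 190, 195, 200, 205, 210]
shrink = [1.1, 1.3, 1.4, 1.6, 1.9, 2.0, 2.2]

n = len(temps)
mean_x = sum(temps) / n
mean_y = sum(shrink) / n
# Ordinary least squares: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(temps, shrink)) / \
        sum((x - mean_x) ** 2 for x in temps)
intercept = mean_y - slope * mean_x
print(f"shrinkage ≈ {intercept:.2f} + {slope:.4f} × temperature")
```

Any charting tool can overlay this line on the scatterplot; the fitting itself is middle-school algebra, which is the point.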
Some of the standard charts used in manufacturing for decades don’t meet today’s criteria for effective visualization. But using them is now a tradition: they are taught in school and their value goes unchallenged. It is time to challenge it. If we were seeing these charts for the first time in 2015, would we consider the information they provide useful, and would we want to use the classical formats? This post suggests answers in the case of the venerable Pareto chart.
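For reference, the arithmetic behind a classical Pareto chart is trivial: rank the categories by descending count and accumulate percentages. The following sketch does this for hypothetical defect tallies; the categories and counts are invented for illustration.

```python
# Hypothetical defect tallies by category for one week of production.
defects = {"scratch": 42, "misalignment": 18, "short shot": 97,
           "flash": 11, "contamination": 7}

total = sum(defects.values())
order = sorted(defects.items(), key=lambda kv: -kv[1])  # descending counts
cumulative = 0.0
for category, count in order:
    cumulative += 100.0 * count / total
    print(f"{category:14s} {count:3d}  cumulative {cumulative:5.1f}%")
```

The bar-plus-cumulative-line format is then just one of several possible renderings of this ranking, which is what the rest of the post examines.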
According to Chip Chapados, the concept of one-piece flow emerged from the need to rapidly detect defects in engine castings when Kiichiro Toyoda was reverse-engineering a Chevrolet engine in 1934, and it was originally called “one-by-one confirmation.”
“You shouldn’t believe everything you read on the internet, but according to some of the more reliable sources, during World War II:
- Over 12,000 Bomber Command aircraft were shot down
- 55,500 aircrew died.
- The life expectancy of a Lancaster bomber was 3 weeks
- Tail-gunners were lucky if they survived four missions.”
Michel Baudin’s comments:
This is a great story both about effective visualization of series of events in space-time and about proper interpretation in the face of sample bias.
Manufacturing, thankfully, is less dangerous than flying bombers in World War II was, but it is still more dangerous than it should be. Posting the locations of injuries on a map of the human body is also an effective way to identify which body parts are most commonly affected, and which safety improvements are most effective.
But are all injuries reported? Many organizations blame the victims for lowering their safety metrics, and discourage reporting. As a consequence, we can expect under-reporting and a bias towards injuries severe enough that reporting is unavoidable.
If you get data on an entire population, or if you thoughtfully select a representative sample, you can avoid bias, but many of the most commonly used samples are biased, often in ways that are difficult to figure out.
Customer surveys of product quality, for example, are biased by self-selection of the respondents. Are unhappy customers more likely to take the opportunity to vent than happy customers to praise? If so, to what extent? The effect of self-selection is even stronger for posting reviews on websites.
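The magnitude of this self-selection effect is easy to simulate. The following sketch assumes a made-up population of mostly satisfied customers in which unhappy ones are several times more likely to post a review; the ratings and response rates are hypothetical.

```python
import random

random.seed(0)

# Assumed true ratings for 1,000 customers: mostly satisfied (mean 4.46).
population = [5] * 700 + [4] * 200 + [2] * 60 + [1] * 40
# Assumed probability that a customer with each rating posts a review:
# unhappy customers are 6 to 8 times more likely to vent.
review_prob = {5: 0.05, 4: 0.05, 2: 0.30, 1: 0.40}

reviews = [r for r in population if random.random() < review_prob[r]]
true_mean = sum(population) / len(population)
review_mean = sum(reviews) / len(reviews)
print(f"true mean {true_mean:.2f}, review mean {review_mean:.2f}")
```

Under these assumptions, the posted reviews understate satisfaction by about a full point on a 5-point scale, even though every individual review is honest. The bias is in who responds, not in what they say.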