Dec 17 2020
QRQC at Valeo | Rob van Stekelenborg
“Recently, Michel Baudin […] invited practitioners to further contribute to the knowledge on QRQC, among which myself. As I feel a brief answer on LinkedIn would not do justice to the richness of QRQC, I decided to dedicate a post to the topic. Without ambition, however, to try and be complete in this post, which I feel is not possible with a vast topic like QRQC. But let’s dive in and share some of my experiences with and views on QRQC, the way I experienced and lived it at Valeo at the time.”
Source: Dumontis
Michel Baudin‘s comments: Thanks to Rob van Stekelenborg for stepping up and sharing all of these details.
Of particular interest is his description of Kazuo Kawashima’s rejection of the Pareto principle:
“I also still remember one of Kawashima-san’s strong coaching sessions in 2002 when he proverbially threw the well-known principle of Pareto or 80-20 analysis into the garbage bin; still a quality 101 tool you would say. He explained in his own personal way (he was quite a strong personality, I can tell from experience…) how stupid we were to first collect enough data for a long enough period of time to be able to make such a chart, and to only then start problem-solving.”
Michel Baudin‘s comments: According to Franck Vermet, Pareto was a bone of contention between Kawashima and Ken Sato, who brought QRQC into Faurecia. Sato used Pareto analysis to select problems to solve.
In Vade retro, Pareto!, Cécile Roche weighed in on Kawashima’s side. In her experience, Pareto diagrams were not nearly as useful as expected in setting priorities. I also put out a paper in IE Magazine on Revisiting Pareto, back in 2011.
If defects are sufficiently rare, it makes sense to run every one of them to ground. If, on the other hand, you are overrun with defects, you need triage, and this is where Pareto analysis is supposed to help.
Relying exclusively on occurrence counts, however, ignores how difficult each problem is to solve. A less frequent defect that is easier to eliminate may be lower-hanging fruit than the most frequent one.
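To make the point concrete, here is a minimal sketch in Python; the defect names, counts, and effort estimates are made up for illustration. It contrasts a plain count-based Pareto ranking with a ranking that divides occurrences by an estimated effort to eliminate each defect:

```python
defects = {
    # name: (occurrences per month, estimated effort to eliminate: 1 = easy ... 5 = hard)
    "scratch on housing": (120, 5),
    "missing clip": (45, 1),
    "connector misaligned": (30, 2),
    "label smudged": (15, 1),
}

# Classic Pareto: rank by occurrence count alone.
by_count = sorted(defects.items(), key=lambda kv: kv[1][0], reverse=True)

# Alternative: rank by payoff per unit of effort (occurrences divided by effort).
by_payoff = sorted(defects.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)

print("Pareto order:", [name for name, _ in by_count])
print("Payoff order:", [name for name, _ in by_payoff])
```

In this made-up data set, the "missing clip" defect jumps ahead of the more frequent "scratch on housing" as soon as effort is taken into account.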
#qrqc, #valeo, #faurecia, #dumontis, #pareto
Jim Hudson
December 17, 2020 @ 11:34 am
I like to have my clients do 20/80 (if only from the gut when data is missing), and then actually pick the top problems to solve using an ‘Effort vs. Impact’ chart. I will also usually invoke Stephen Covey’s ‘Urgent vs. Important’ quadrant to ensure the executive leadership team is thinking in terms of leverage. At the same time, if it’s the beginning of a transformation, I will make sure to pick something that has a high probability of being successful, so that the first one doesn’t provide resistors with ammunition to say “see, I told you it wouldn’t work.”
Michel Baudin
December 20, 2020 @ 3:43 pm
I have trouble swallowing the “only from the gut when data is missing.” You really cannot use a data-based tool without data. If you compare people’s gut feel for the relative importance of items with what the data actually say, they usually differ. In addition, if you are charting gut feel, the people to whom you show these charts will assume they are based on data, creating confusion at best.
Joerg Muenzing
December 18, 2020 @ 5:24 am
I found Pareto-prioritization simple, but also limiting, because it is based on a single dimension: occurrences. Using multiple dimensions – severity or cost × occurrence × ease of elimination – is more useful in my view, similar to what you have suggested in the last sentence of the post.
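For illustration, a composite score along the lines Joerg describes might be computed as in the sketch below; the scales and defect data are invented, and the simple product is only one way to combine the dimensions:

```python
defects = [
    # (name, severity 1-5, occurrences per month, ease of elimination 1-5)
    ("scratch on housing", 2, 120, 1),
    ("missing clip", 4, 45, 4),
    ("connector misaligned", 5, 30, 3),
]

def priority(severity, occurrences, ease):
    """Higher score = address sooner."""
    return severity * occurrences * ease

# Rank defects by the composite score rather than by occurrence count alone.
ranked = sorted(defects, key=lambda d: priority(*d[1:]), reverse=True)
for name, sev, occ, ease in ranked:
    print(f"{name}: score {priority(sev, occ, ease)}")
```

With these invented numbers, the most frequent defect comes last, which is the point of going beyond a single dimension.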