Nov 14 2011
Last Call! Manufacturing Data Mining and Beyond 6σ: 2 Webinars on 11/15-16/11 http://ow.ly/7sIFi, #lean, #datamining, #sixsigma
By Michel Baudin • Events • 0 • Tags: Data mining, Information systems, Lean, Quality, Six Sigma
Oct 24 2011
Prof. Mei-Chen Lo, of National University and Kainan University in Taiwan, worked with operations managers at two semiconductor companies to establish a list of 16 dimensions of data quality. Most are not parameters that can be measured; they are better treated as questions to ask about a company’s data. I learned of this list from her at an IE conference in Kitakyushu in 2009 and found it useful by itself as a checklist for a thorough assessment of a current state. Her research is about methods for ranking the importance of these criteria.
They are grouped into four main categories, each of which breaks down further into individual dimensions.
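As an illustration of how such a checklist can be organized and used, here is a sketch built on the widely cited Wang–Strong data-quality taxonomy. This is an assumption for illustration only: these category and dimension names are not necessarily Prof. Lo’s exact list.

```python
# Hypothetical data-quality checklist, following the Wang-Strong taxonomy.
# These names illustrate the structure; they are NOT Prof. Lo's exact list.
DATA_QUALITY_CHECKLIST = {
    "Intrinsic": ["Accuracy", "Objectivity", "Believability", "Reputation"],
    "Contextual": ["Relevancy", "Timeliness", "Completeness",
                   "Appropriate amount", "Value-added"],
    "Representational": ["Interpretability", "Ease of understanding",
                         "Consistent representation", "Concise representation"],
    "Accessibility": ["Accessibility", "Access security"],
}

def assessment_questions(checklist):
    """Turn each dimension into a question to ask about a company's data."""
    return [f"How good is the {dim.lower()} of this data?"
            for dims in checklist.values() for dim in dims]
```

Used this way, each dimension becomes a prompt in a current-state assessment rather than a metric to compute.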
Prof. Mei-Chen Lo’s research on this topic was published as “The assessment of the information quality with the aid of multiple criteria analysis,” European Journal of Operational Research, Vol. 195, No. 3 (16 June 2009), pp. 850-856.
By Michel Baudin • Management • 4 • Tags: Data mining, Information systems, Information technology, IT, Lean manufacturing, Manufacturing, Quality
Apr 22 2010
Originally posted on The Lean Edge on 4/22/2010
Quality not central to Lean? Says who? Lean is about simultaneously improving all dimensions of performance, including quality.
Quality professionals frequently miss this, because what they learned primarily addresses process capability issues that are central only in high technology, where, if your process is mature, your product is obsolete. This is the context where statistical approaches like Six Sigma make a difference.
Modern machine tools, on the other hand, can easily hold required tolerances, and most quality problems are not due to lack of process capability. They are instead due to discrete failure of the equipment or human error.
The main issue with discrete equipment failures is to detect them quickly so that they affect few parts and can be diagnosed before their trail is cold. With one-piece flow, defects are detected immediately instead of being buried in WIP, and this is why conversion from batch production to one-piece flow typically yields large improvements in quality.
The next step is having machines stop as soon as they start producing defectives, but this still leaves human error, and that is addressed by mistake-proofing.
Beyond these approaches, there is also management to prevent deterioration over time, and planned responses to potential new problems.
This is a hierarchy of approaches. Actual numbers vary, but, in orders of magnitude, statistical tools will get you from 30% defectives to 3%, one-piece flow to 0.3%, mistake-proofing to 15ppm, and I know of one case of a Toyota supplier achieving <1ppm on some parts.
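As a back-of-the-envelope check on these orders of magnitude, the rates above can be put on a common parts-per-million scale. This is a sketch using the post’s illustrative numbers, not measured data:

```python
# Illustrative order-of-magnitude arithmetic for the hierarchy above.
# Each stage's rate is the post's rough example, not measured data.
def ppm(fraction_defective):
    """Convert a fraction defective to parts per million."""
    return fraction_defective * 1_000_000

stages = {
    "starting point": 0.30,     # 30% defectives
    "statistical tools": 0.03,  # 3%
    "one-piece flow": 0.003,    # 0.3%
    "mistake-proofing": 15e-6,  # 15 ppm
}
for stage, rate in stages.items():
    print(f"{stage}: {ppm(rate):,.0f} ppm")
```

Each step in the hierarchy is roughly a factor-of-ten reduction, which is why the techniques are complementary rather than competing.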
By Michel Baudin • Management • 0 • Tags: Lean manufacturing, Quality
Apr 22 2010
This is in response to Mike Micklewright’s question on Why Is Quality So Rarely Central In Lean?:
“I see so many internal Lean “experts” using “Lean” as a means to increase efficiencies and productivity, and therefore, reduce costs. They still do not see the connection to quality. They see quality and the reduction of variation in significant product characteristics as something that is outside of the Lean scope and something that should be handled by the quality folks independently of the lean effort. What a shame! If you agree with this observation, why does this exist and what can we do to change this perception?”
Following is my response:
Quality not central to Lean? Says who? Lean is about simultaneously improving all dimensions of performance, including quality. Quality professionals frequently miss this, because what they learned primarily addresses process capability issues that are central only in high technology, where, if your process is mature, your product is obsolete. This is the context where statistical approaches like Six Sigma make a difference.
Modern machine tools, on the other hand, can easily hold required tolerances, and most quality problems are not due to lack of process capability. They are instead due to discrete failure of the equipment or human error. The main issue with discrete equipment failures is to detect them quickly so that they affect few parts and can be diagnosed before their trail is cold. With one-piece flow, defects are detected immediately instead of being buried in WIP, and this is why conversion from batch production to one-piece flow typically yields large improvements in quality.
The next step, which Dennis alluded to, is having machines stop as soon as they start producing defectives, but this still leaves human error, and that is addressed by mistake-proofing. Beyond these approaches, there is also management to prevent deterioration over time, and planned responses to potential new problems.
This is a hierarchy of approaches. Actual numbers vary, but, in orders of magnitude, statistical tools will get you from 30% defectives to 3%, one-piece flow to 0.3%, mistake-proofing to 15ppm, and I know of one case of a Toyota supplier achieving <1ppm on some parts.
By Michel Baudin • Management • 0 • Tags: Lean, Mistake-Proofing, One-piece flow, Poka-Yoke, Process capability, Quality, Six Sigma
Nov 22 2011
Key details on Poka-Yoke/Mistake-Proofing
The following are my inputs to a discussion on AME’s LinkedIn group initiated last August by Xola Qhogwana, which also included contributions from Steve Bathe, Richard Foster, Karen Wilhelm, Steven Wade, Wesley Bushby, Ron Turkett, and Trevor Krawchyk.
When to use Poka-Yoke
Poka-Yokes prevent human error, and are therefore relevant when and only when human error is the main cause of your quality problems.
If you have process capability issues, focus on resolving them, not on preventing human error. What you need is deep understanding of your technology combined with statistical tools to enable your process to consistently hold tolerances.
If your process is capable but you are still producing in batches, focus on converting to flow to prevent your defectives being buried in WIP. Your problem is that it takes you too long to detect problems, not human error.
If your process is capable and you practice one-piece flow, then the defects you still produce are due to human error. At this point, and not before, Poka-Yoke is the relevant technique.
For details, see When to use statistics, one-piece flow, or mistake-proofing to improve quality.
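The decision logic above can be summarized in a few lines of code. This is a hypothetical helper for illustration, not a tool from the discussion:

```python
# Hypothetical decision helper summarizing the guidance above:
# statistics for capability problems, flow conversion for batch
# production, and Poka-Yoke only when human error remains.
def next_quality_focus(process_capable, one_piece_flow):
    """Return the relevant improvement technique for the current state."""
    if not process_capable:
        return "statistical tools / process capability work"
    if not one_piece_flow:
        return "convert from batch production to one-piece flow"
    return "mistake-proofing (Poka-Yoke)"
```

For example, a capable process still running in batches gets flow conversion, not Poka-Yoke: `next_quality_focus(True, False)` returns `"convert from batch production to one-piece flow"`.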
Poka-Yokes do not require extensive a-priori analysis
Poka-Yokes are usually small devices, such as a permanent magnet to suck up a panel already containing a metal bracket, or a hole in a container to prevent overfill.
Doing an FMEA to decide whether to design and implement a Poka-Yoke is more expensive than just doing it. If you suspect a process might need a Poka-Yoke and you have an idea of what it might be, just go ahead, try it, and document it in your company-specific Poka-Yoke library to inspire others. Don’t over-analyze it upfront. On the other hand, if you are building a spacecraft, you should definitely do an FMEA.
If it adds labor, it’s not a Poka-Yoke
By definition, a Poka-Yoke device adds no labor. Manually scanned barcodes on parts to validate picks, for example, do not qualify as Poka-Yokes because they add labor. A barcode that is automatically read, or an RFID tag, on the other hand, would qualify. A Poka-Yoke has to become part of the process in the long run. If you look at the old big red book of Poka-Yoke from Productivity Press, you will notice that none of the examples adds labor, and there is a reason: any device that adds labor is likely to be bypassed under pressure.
This even happens with safety devices. Take, for example, the traditional approach of requiring two buttons to be pressed to start a press. How many times do you see plants where one button is taped down so that the press can be started with just the other one? By contrast, safety light curtains add no labor, and are not bypassed.
Using barcode reading for data acquisition effectively eliminates keyboarding errors because it is faster. If it weren’t, operators would revert to keyboarding and typos would creep back in; this is exactly what happens after two or three failed attempts at scanning a code. A barcode on a workpiece that is automatically read can be a Poka-Yoke: the workpiece passes under a reader in the proper orientation and under good lighting, and the barcode is reliably read. Under these conditions, it can even drive the lighting of the proper bins in a pick-to-light system. It does not work as a Poka-Yoke if an operator has to wave a barcode gun in front of a part for pick validation.
Just because you use a device with the intent of preventing mistakes doesn’t mean it works. You have to make sure it does, and not just at the time you implement it. If you don’t pay attention, Poka-Yokes tend to deteriorate and to be set aside, for example when new operators are assigned to the station.
By Michel Baudin • Technology • 3 • Tags: Lean manufacturing, Mistake-Proofing, Poka-Yoke, Quality