Christoph Roser has more impressive credentials than most Lean consultants: a PhD in Engineering, a research job at Toyota labs, stints in operations at Bosch, and a professorship at Karlsruhe University of Applied Sciences. So, if anyone is qualified to write a theory of Lean, he is, and he is trying his hand at it in production planning and scheduling.
Organizations that produce documents — whether they are publications for sale, standard tests for schools, legal templates, or work instructions for production — face challenges that differ from manufacturing, because data and materials don’t flow the same way. The production of a document by a team is a process of collaborative editing, not a fixed sequence of standardized operations.
With electronic documents, you need a revision management system to prevent inconsistent updates, you need to cap the number of documents in process to control lead time, and, if the team is saturated, you may need to improve the workflow or increase the team size.
Tools like 5S are irrelevant in this context, because the work takes place inside a computer network, not in the physical office, and setting up an effective network — with the right software properly configured — requires information systems professionals who are current with the state of the art. What looks like rework in this context is a collaborative editing process that must be managed, not eliminated.
Philip Marris told me of the case of a machining cell in an auto parts plant where management was ready to buy more machines because it was “lacking capacity,” but he was able to find a cheaper way to increase capacity by 17% in 15 minutes.
Unlike manual assembly cells, in which work can be balanced among stations, cells that involve machines always have one machine that is slower than all the others, and reallocating work among machines with different capabilities is not an option. In particular, almost all machining cells have a bottleneck, and the situation Philip described involved this Bottleneck and the machine feeding it, the Feeder. The cell practiced one-piece flow. Therefore, if the Feeder had worked perfectly, the timelines of the Feeder and the Bottleneck would have been as follows:
The Feeder would have started one piece at the beginning of each takt interval, and, since it is faster than the Bottleneck, it would have finished the piece before the end of the interval. The Feeder then would have waited for the bottleneck to pick up the piece before starting the next one. The Bottleneck would have been working 100% of the time; the Feeder would not.
But what Philip discovered by observing operations was that the Feeder had microstoppages. When the Feeder was hit by a microstoppage, the delay it caused passed to the Bottleneck, which was prevented from working 100% of the time, as shown below:
This reduced the capacity of the entire cell. In the actual case, even with its microstoppages, the Feeder had enough capacity to feed the Bottleneck on average, just not on a takt-by-takt basis. The microstoppages caused the output of the Feeder to fluctuate and disrupt the operation of the Bottleneck.
To anyone trained in Lean, the only appropriate solution was to eliminate the microstoppages… But it was easier said than done. Sometimes, all it takes is slowing down the machine, or changing a maintenance policy from “clean for one minute” to “clean until it is clean.” But it is not always that simple.
Microstoppages are often unreported because they are fixed on the fly by production operators. To understand microstoppages, you need to monitor the machine to observe when they occur and trace their causes. Eliminating them may require you to modify chutes, fixtures, jigs or dies, or even the basic process, and it can take time, but you need to do it if you want one-piece flow to work.
In the meantime, what do you do? Buying more equipment is an expensive solution, especially when you don’t expect to need it once you are rid of the microstoppages. A cheaper countermeasure is to protect the supply of parts to the bottleneck against fluctuations by decoupling the two machines with a buffer of WIP. You can set the size of this buffer by trial and error, knowing that it is not a long-term solution.
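The effect of such a decoupling buffer can be checked with a quick simulation. All the numbers below — takt, cycle times, stoppage frequency and duration — are made-up assumptions for illustration, not data from Philip's case; the point is only that a buffer of a few pieces lets the Bottleneck run near 100% even while the Feeder's microstoppages persist:

```python
import random

# Toy model of the Feeder/Bottleneck pair: the Bottleneck's cycle
# equals the takt (1.0), the Feeder is faster (0.8) but suffers a
# 0.5-long microstoppage on 20% of pieces. Assumed numbers only.
TAKT = 1.0
FEEDER_CYCLE = 0.8
STOP_PROB = 0.2
STOP_DELAY = 0.5

def bottleneck_utilization(buffer_cap, n=10_000, seed=0):
    """Fraction of time the Bottleneck works, with a WIP buffer of
    `buffer_cap` pieces between the two machines (0 = one-piece flow)."""
    rng = random.Random(seed)
    f_done = [0.0] * n   # Feeder completion time for each piece
    b_start = [0.0] * n  # Bottleneck start time for each piece
    b_done = [0.0] * n
    for i in range(n):
        # The Feeder may start piece i once piece i-1 has left it,
        # which requires buffer space: the Bottleneck must already
        # have started piece i - 1 - buffer_cap.
        start = f_done[i - 1] if i > 0 else 0.0
        if i - 1 - buffer_cap >= 0:
            start = max(start, b_start[i - 1 - buffer_cap])
        cycle = FEEDER_CYCLE
        if rng.random() < STOP_PROB:
            cycle += STOP_DELAY  # microstoppage hits this piece
        f_done[i] = start + cycle
        b_start[i] = max(b_done[i - 1] if i > 0 else 0.0, f_done[i])
        b_done[i] = b_start[i] + TAKT
    return n * TAKT / (b_done[-1] - b_start[0])

print(f"one-piece flow:  {bottleneck_utilization(0):.1%}")
print(f"buffer of 5 WIP: {bottleneck_utilization(5):.1%}")
```

With these assumed numbers, the unbuffered cell loses several points of utilization to the Feeder's fluctuations, while a buffer of a few pieces recovers nearly all of it — which is why trial and error on the buffer size works on the shop floor.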
Of course, manufacturing engineers understand that you cannot have one-piece flow with microstoppages. So why did they ignore their own wisdom? The most likely explanation is a demand from a corporate “Lean group” to implement one-piece flow everywhere and “damn the torpedoes!” These engineers had complied not because they thought it was a good idea, but because it was required to keep their jobs.
Technically, Philip sees this story as a case study in the addition of Theory of Constraints (TOC) thinking to Lean; I just see it as due consideration of equipment issues in cell design, as I was taught it more than 25 years ago. From a management standpoint, I see it as an example of the local consequences of half-baked corporate mandates.
Festo has implemented a manufacturing system that allows it to produce parts for linear handling systems economically in batch sizes of just one item. At its global production centre for electric and pneumatic drives at St. Ingbert-Rohrbach in Germany, the company has introduced a “one piece flow” method of assembly, based on Toyota’s production system.
See www.drives.co.uk
“I see so many internal Lean “experts” using “Lean” as a means to increase efficiencies and productivity, and therefore, reduce costs. They still do not see the connection to quality. They see quality and the reduction of variation in significant product characteristics as something that is outside of the Lean scope and something that should be handled by the quality folks independently of the lean effort. What a shame! If you agree with this observation, why does this exist and what can we do to change this perception?”
Following is my response:
Quality not central to Lean? Says who? Lean is about simultaneously improving all dimensions of performance, including quality. Quality professionals frequently miss this, because what they learned primarily addresses process capability issues that are central only in high technology, where, if your process is mature, your product is obsolete. This is the context where statistical approaches like Six Sigma make a difference.
Modern machine tools, on the other hand, can easily hold required tolerances, and most quality problems are not due to lack of process capability. They are instead due to discrete failures of the equipment or to human error. The main issue with discrete equipment failures is detecting them quickly, so that they affect few parts and can be diagnosed before their trail is cold. With one-piece flow, defects are detected immediately instead of being buried in WIP, and this is why conversion from batch production to one-piece flow typically yields large improvements in quality.
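The claim about defects being buried in WIP can be made concrete with a toy model (my illustration, not from the original discussion): suppose a process suddenly starts producing 100% defectives at a random position within a transfer batch, and the defect is only caught when pieces reach the next operation — which, with batch production, happens only once the batch is complete and moved.

```python
import random

def defectives_before_detection(transfer_batch, trials=10_000, seed=1):
    """Average number of defectives made before the next operation sees
    the first one, when a fault strikes at a uniformly random position
    inside a batch and the batch only moves downstream when complete.
    Toy model with made-up assumptions, for illustration only."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        fault_at = rng.randrange(transfer_batch)  # 0-based fault position
        total += transfer_batch - fault_at        # rest of the batch is bad
    return total / trials

for b in (1, 50, 500):
    print(f"transfer batch {b:>3}: "
          f"~{defectives_before_detection(b):.0f} defectives before detection")
```

With a transfer batch of one, at most one defective exists before detection; with a batch of 500, hundreds are already buried in WIP by the time the first bad piece is seen, and the trail back to the cause has gone cold.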
The next step, which Dennis alluded to, is having machines stop as soon as they start producing defectives, but this still leaves human error, which is addressed by mistake-proofing. Beyond these approaches, there is also management to prevent deterioration over time and to plan responses to potential new problems.
This is a hierarchy of approaches. Actual numbers vary, but, in orders of magnitude, statistical tools will get you from 30% defectives to 3%, one-piece flow to 0.3%, mistake-proofing to 15ppm, and I know of one case of a Toyota supplier achieving <1ppm on some parts.