Is it a Bad Idea to Pay a Lean Consultant Based on a Percentage of Cost Savings? | Mark Graban

See on - lean manufacturing
Blog post at Lean Blog : The price paid for most management consulting work is based on either a daily rate or some variation of a flat-rate fee based on what is being delivered. Enterprise software pricing is also often fixed. In both cases, the client pays this with some expectation of benefits and even an “ROI” for the customer.[..]

Michel Baudin's comments:

I agree with Mark, and I am happy when clients report that they get ten times in benefits what our services cost. A daily fee for work done on site and a fixed fee for deliverables for offsite work are simple arrangements; paying a percentage of benefits, whether cost savings or revenue increases, is a complicated arrangement, conducive to misunderstandings and disagreements.


Renault: An international school of Lean Manufacturing opens at Flins | Automotive World

See on - lean manufacturing
"Jose-Vicente de Los Mozos, Executive Vice President, Manufacturing and Supply Chain, of the Renault Group, inaugurated the International School of Lean Manufacturing yesterday."



Michel Baudin's insight:

When I visited this plant in 1994, I never imagined that it would be the site of an international school of Lean 20 years later.

We were working at the time on Lean implementation with CIADEA, the Renault licensee in Argentina. It had originally been a subsidiary, was sold to local entrepreneur Manuel Antelo in 1992, and was repurchased by Renault in 1997.

At the time, my hosts in Flins thought that Lean was just a way to cut heads and that implementing 5S would cause production to drop.

Times change.


Is it Lean’s Fault or the Old Management System’s? | Mark Graban

See on - lean manufacturing
Blog post at Lean Blog :

"[...]The problem is the culture doesn’t change overnight. Leaders have years or decades of old habits (bad habits) that run counter to Lean thinking. They might (might!) be trying to change, but people will still fall back into old habits, especially when under pressure.

I hear complaints (in recent cases) coming from different provinces in Canada that say things like:

  • Lean is causing hospitals to be “de-skilled” by replacing nurses with aides.
  • Lean drives a focus on cost and cost cutting, including layoffs or being understaffed.
  • Lean is stressing out managers by asking them to do more and taking nothing off their plate.
  • Nurses hate Lean because they aren’t being involved in changes.[...]"

Michel Baudin's comments:

In this post, Mark Graban explains how the leadership in Canadian hospitals is slapping the "Lean" label on ancient and counterproductive "cost-cutting" methods, and how the victims of these practices unfairly blame Lean.

This is definitely L.A.M.E., Mark's apt term for "Lean As Misguidedly Executed," and is found in Manufacturing as well as Health Care. Much of the article -- and of the discussion that follows -- is about what I call yoyo staffing: you hire more than you should in boom times, and lay off in recessions.

Of course, it isn't what Toyota did, and churning your work force in this fashion not only disrupts people's lives but is bad business. Hiring, training and firing repeatedly prevents your organization from accumulating the knowledge and skills it needs.

Mark makes the case that Lean should not be blamed for mistakes that have nothing to do with it. Other than raising consciousness, however, the post does not propose solutions to keep this from happening.

While there have been studies published on Toyota's approach to Human Resources (HR), I don't recall seeing much in the American Lean literature on topics like career planning for production operators.

In his comments, Bob Emiliani paints the current generation of leaders as "a lost cause," and places his hopes on the next. He seems to suggest that the solution is to wait out or fire the current, baby-boomer leadership and replace it with millennials. I don't buy it and, deep down, neither does Bob, because he ends by saying "While one always hopes the “next generation will do better”, it could turn out to be a false hope."

Like everything in HR, generational change has to be planned carefully. The people who rose to leadership positions presumably did so not just because of bad habits but because they also had something of value to offer. And the way the baton is passed is also a message to the incoming leaders: it tells them what to expect when their turn comes.


A Definition of Lean | Mike Rother

See on - lean manufacturing

Maybe it's time for a better definition of "Lean." Here's one for you to consider and build on.

Michel Baudin's comments:

The proposal is "Lean is the permanent struggle to flow value to one customer."

Permanent struggle is fine, but I prefer pursuit. It means the same thing but it is shorter and "pursuit of happiness" sounds better than "permanent struggle for happiness."

On the other hand, I have a problem with "flow value," which I see as the sort of vague abstraction that would prompt Mike Harrison to ask whether it comes in bottles. It is exactly what Dan Heath is warning against in the video included in the slideshare.

I also have a problem with the exclusive focus on customers, which I see as Business 101 rather than Lean. Lean includes many features, like heijunka, that are intended to make life easier for suppliers and are transparent to customers. Going Lean means looking after all the stakeholders of the business, not just its customers.

This is why I define it instead as the pursuit of concurrent improvement in all dimensions of manufacturing performance through projects that affect both the production shop floor and support activities. 

Yes, I know, it is specific to manufacturing, but that is not my problem.


Averages in Manufacturing Data

The first question we usually ask about lead times, inventory levels, critical dimensions, defective rates, or any other quantity that varies, is what it is "on the average." The second question is how much it varies, but we only ask it if we get a satisfactory answer to the first one, and we rarely do.

When asked for a lead time, people  usually give answers that are either evasive like "It depends," or weasel-worded like "Typically, three weeks." The beauty of a "typical value" is that no such technical term exists in data mining, statistics, or probability, and therefore the assertion that it is "three weeks" is immune to any confrontation with data. If the assertion had been that it was a mean or a median, you could have tested it, but, with "typical value," you can't.

For example, if the person had said "The median is three weeks," it would have had the precise meaning that 50% of the orders are delivered in less than 3 weeks, and that 50% take longer. If the 3-week figure is true, then the probability of the next 20 orders all taking longer is 0.5^{20}= 9.5\times 10^{-7}, or about 1 ppm. This means that, if you do observe a run of 20 orders with lead times above 3 weeks, you know the answer was wrong.

In Out of the Crisis, Deming was chiding journalists for their statistical illiteracy when, for example, they bemoaned the fact that "50% of the teachers performed beneath the median." In the US, today, the meaning of averages and medians is taught in Middle School, but the proper use of these tools does not seem to have been assimilated by adults.

One great feature of averages is that they add up: the average of the sum of two variables is the sum of their averages. If you take two operations performed in sequence in the route of a product, and consider the average time required to go through these operations by different units of product, then the average time to go through operations 1 and 2 is the sum of the average time through operation 1 and the average time through operation 2, as is obvious from the way an average is calculated. If you have n values X_{1},...,X_{n}

the average is just

\bar{X}= \frac{X_{1}+...+X_{n}}{n}

What is often forgotten is that most other statistics are not additive.

To obtain the median, first you need to sort the data so that  X_{\left(1\right)}\leq ... \leq X_{\left(n\right)}. For each point, the sequence number then tells you how many other points are under it, which you can express as a percentage and plot as in the following example:

Median graphic

Graphically, you see the median as the point on the x-axis where the curve crosses 50% on the y-axis. To calculate it, if n is odd, you take the middle value

\tilde{X}= X_{\left (\frac{n+1}{2}\right )}

 and, if n is even, you take the average of the two middle values, or

\tilde{X}= \frac{X_{\left (\frac{n}{2}\right )}+X_{\left (\frac{n}{2}+1\right )}}{2}

The median is not additive in general, and neither are all the other statistics based on rank, like the minimum, the maximum, quartiles, percentiles, or stanines.
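
These formulas, and the non-additivity of the median, can be checked in a few lines with Python's statistics module; the data values here are made up for illustration:

```python
from statistics import median

# Odd n: the median is the middle value of the sorted data.
odd = [7, 1, 4, 9, 2]          # sorted: 1, 2, 4, 7, 9 -> middle value is 4
assert median(odd) == 4

# Even n: the median is the average of the two middle values.
even = [7, 1, 4, 9, 2, 6]      # sorted: 1, 2, 4, 6, 7, 9 -> (4 + 6) / 2
assert median(even) == 5.0

# Medians are not additive: summing two variables unit by unit does not
# sum their medians.
x = [1, 2, 100]
y = [100, 2, 1]
sums = [a + b for a, b in zip(x, y)]   # [101, 4, 101]
assert median(sums) == 101
assert median(x) + median(y) == 4      # nowhere near 101
```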

An ERP system, for example, will add operation times along a route to plan production, but the individual operation times input to the system are not averages but worst-case values, chosen so that they can reliably be achieved. The system therefore calculates the lead time for the route as the sum of extreme values at each operation, and this math is wrong because extreme values are not additive. The worst-case value for the whole route is not the sum of the worst-case values of each operation, and the result is an absurdly long lead time.
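
A quick simulation shows why adding per-operation worst-case values inflates the lead time for the route; the lognormal operation times below are an assumption for illustration, not data from any ERP system:

```python
import random

random.seed(0)

# Simulated times for two successive operations on 10,000 units
# (lognormal, i.e. skewed like real operation times).
op1 = [random.lognormvariate(0, 1) for _ in range(10_000)]
op2 = [random.lognormvariate(0, 1) for _ in range(10_000)]
route = [a + b for a, b in zip(op1, op2)]  # each unit goes through both

def percentile_95(data):
    """95th percentile, as a simple 'reliably achievable' planning value."""
    s = sorted(data)
    return s[int(0.95 * len(s))]

# Adding per-operation worst cases overstates the worst case of the route,
# because the operations rarely hit their worst case on the same unit.
planned = percentile_95(op1) + percentile_95(op2)
actual = percentile_95(route)
assert planned > actual
```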

In project management, this is also the key difference between the traditional Critical Path Method (CPM) and Eli Goldratt's Critical Chain. In CPM, task durations set by the individuals in charge of each task are set so that they can be confident of completing them. They represent a perceived worst-case value for each task, which means that the duration for the whole critical path is the sum of the worst-case values for the tasks on it. In Critical Chain, each task duration is what it is actually expected to require, with a time buffer added at the end to absorb delays and take advantage of early completions.

That medians and extreme values are not additive is experienced, if not proven, by a simple simulation in Excel. Using the formula "LOGNORM.INV(RAND(),0,1)" will give you, in about a second, 5,000 instances of two highly skewed variables, X and Y, as well as their sum X+Y. On a logarithmic scale, their histograms look as follows:

lognormal histogram with sum

And the summary statistics show the Median, Minimum and Maximum for the sum are not the sums of the values for each term:

Simulation stats
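
For readers who prefer a script to a spreadsheet, here is the same experiment in Python; the lognormal draws match the parameters of the Excel formula:

```python
import random
from statistics import mean, median

random.seed(1)
n = 5_000
x = [random.lognormvariate(0, 1) for _ in range(n)]  # like LOGNORM.INV(RAND(),0,1)
y = [random.lognormvariate(0, 1) for _ in range(n)]
s = [a + b for a, b in zip(x, y)]

# The mean is additive (up to floating-point rounding)...
assert abs(mean(s) - (mean(x) + mean(y))) < 1e-6
# ...but the median, minimum, and maximum of the sum are not the sums of
# the medians, minimums, and maximums of the terms.
assert median(s) != median(x) + median(y)
assert min(s) > min(x) + min(y)   # the two minima land on different units
assert max(s) < max(x) + max(y)   # likewise for the maxima
```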

Averages are not only additive but have many more desirable properties, so why do we ever consider medians? There are real problems with averages, when taken carelessly:

    1. Averages are affected by extreme values. This is illustrated by the Bill Gates Walks Into a Bar story; here, we inserted him into a promotional picture of San Francisco's Terroir Bar. Attached to each patron other than Bill Gates is a modest yearly income, but his presence pushes the average yearly income above $100M, which is not a meaningful summary of the population. On the other hand, consider the median. Without Bill Gates, the middle person is Larry, and the median yearly income $46K. Add Bill Gates, and the median is now the average of Larry and Randy, or $48K. The median barely budged! While, in this story, Bill Gates is a genuine outlier, manufacturing data often have outliers that are the result of malfunctions, as when wrong measurements are recorded because a probe failed to touch the object it was measuring, the instrument was calibrated in the wrong system of units, or a human operator put a decimal point in the wrong place... Large differences between average and median are a telltale sign of this kind of phenomenon. Once the outliers are identified, assessed, and filtered, you can go back to using the average rather than the median.
    2. Averages are meaningless over heterogeneous populations. The statement that best explains this is "The average American has exactly one breast and one testicle." It says nothing useful about the American population. In manufacturing, when you consider, say, a number of units produced, you need to make sure you are not commingling 32-oz bottles with minuscule free samples.
    3. Averages are meaningless for multiplicative quantities. If you data is the sequence Y_{1}, ...,Y_{n} of yields of the n operations in a route, then the overall yield is Y= Y_{1}\times ...\times Y_{n}, and the plain average of the yields is irrelevant. Instead, you want the geometric mean \bar{Y}=\sqrt[n]{Y_{1}\times ...\times Y_{n}}.
      The same logic applies to the compounding of interest rates, and the plain average of rates over several years is irrelevant.
    4. Sometimes, averages do not converge when the sample size grows. It can happen even with a homogeneous population, it is not difficult to observe, and it is mind-boggling. Let us say your product is a rectangular plate. On each one you make, you measure the differences between the actual length and width and the specs, as in the following picture: Plate dimensions
      Assume then that, rather than the discrepancies in length and width, you are interested in the slope ΔW/ΔL and calculate its average over an increasing number of plates. You are then surprised to find that, no matter how many data points you add, the ratio keeps bouncing around instead of converging as the law of large numbers has led you to expect. So far, we have looked at the averages as just a formula applied to data. To go further, we must instead consider that they are estimators of the mean of an "underlying distribution" that we use as a model of the phenomenon at hand. Here, we assume that the lengths and widths of the plates are normally distributed around the specs. The slope ΔW/ΔL is then the ratio of two normal variables with 0 mean, and therefore follows the Cauchy distribution. This distribution has the nasty property of not having a mean, as a consequence of which the law of large numbers does not apply. But it has a median, which is 0.
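
The non-convergence in point 4 is easy to reproduce; a sketch with simulated plates whose length and width errors are standard normal (the sample sizes and seed are arbitrary):

```python
import random
from statistics import median

random.seed(42)
n = 100_000
# Deviations of length and width from spec, normally distributed around 0.
dl = [random.gauss(0, 1) for _ in range(n)]
dw = [random.gauss(0, 1) for _ in range(n)]
# Ratio of two centered normals: a Cauchy variable, which has no mean.
slopes = [w / l for w, l in zip(dw, dl)]

# Running averages over ever-larger samples keep bouncing around instead
# of converging -- print them to see; the law of large numbers needs a
# mean to exist, and here it doesn't.
checkpoints = [sum(slopes[:k]) / k for k in (1_000, 10_000, 100_000)]
print(checkpoints)

# The median, however, exists and is 0, and the sample median finds it.
assert abs(median(slopes)) < 0.05
```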
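
Point 3 is also quick to verify numerically; a sketch with made-up yields for a three-operation route:

```python
import math

# Hypothetical first-pass yields of three successive operations in a route.
yields = [0.95, 0.80, 0.90]

# The overall route yield is the product of the operation yields...
overall = math.prod(yields)              # 0.684

# ...so the representative per-operation yield is the geometric mean: the
# number which, repeated at every operation, gives the same overall yield.
geo_mean = overall ** (1 / len(yields))
assert abs(geo_mean ** len(yields) - overall) < 1e-12

# The plain average overstates it: cubing it does not recover the overall yield.
plain = sum(yields) / len(yields)
assert plain ** len(yields) != overall
```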

    The bottom line is that you should use averages whenever you can, because you can do more with them than with the alternatives, but you shouldn't use them blindly. Instead, you should do the following:

    1. Clean your data.
    2. Identify and filter outliers.
    3. Make sure that the data represents a sufficiently homogeneous population.
    4. Use geometric means for multiplicative data.
    5. Make sure that averaging makes sense from a probability standpoint.

    As Kaiser Fung would say, use your number sense.

Forthcoming book: The Deming Legacy

About two years ago, I started posting essays on this blog about Deming's 14 points and their current relevance. Now I am writing on Points 11.a and 12 through 14, which I have not covered yet, organizing the material, and editing it into an eBook entitled The Deming Legacy, which will be available shortly in PDF, iBook, and Kindle formats. If you are interested, please visit the site and let me know. Comments here are also welcome.

The posts on the topic to date are as follows:

  1. Create constancy of purpose toward improvement of product and service.
  2. Adopt the new philosophy.
  3. Cease dependence on inspection to achieve quality. 
  4. End the practice of awarding business on the basis of a price tag. 
  5. Improve constantly and forever the system of production and service.
  6. Institute training on the job.
  7. Institute leadership.
  8. Drive out fear.
  9. Break down barriers between departments. 
  10. Eliminate slogans and exhortations.
  11. b. Eliminate management by objectives.

The title is a ploy to convince Matt Damon to play Deming in the movie version.


Lean Handbags and Micro Failures | Mark Graban

See on - lean manufacturing

Blog post from Mark Graban at Lean Blog :

"I enjoy reading the magazine Inc. for my interests in startups and entrepreneurship. There are often examples and case studies that directly reference Lean thinking or just sound like Lean and Kaizen with another label..."


Michel Baudin's comments:

Stories of well-run businesses are always good reading, even if they are usually embellished. Starting the design of fashion accessories from a market price, or organizing to allow chefs in a restaurant chain to experiment with new dishes, however, just sounds like good management, not an example of "Lean Thinking."

I have never found much depth in the contrasting of "Margin = Price – Cost" with "Price = Cost + Margin," maybe because I have never worked in a cost-plus business. Commercial manufacturers usually do not have the power to set prices this way. Perhaps the Big Three US automakers had that power in the 1950s; Toyota didn't.

In Tracy Kidder's 1985 documentary book House, a Boston lawyer hired a local contractor to build a house in the suburbs. The contractor rigorously calculated the costs of the materials and labor, tacked on a 10% profit, and presented a bid with no wiggle room. It was not intended for negotiation, but the lawyer just had to wrangle some concession out of the contractor. The culture clash between the two makes great reading, but also throws light on how "cost-plus" works in practice.

The equation "Margin = Price - Cost" is based on the assumption that Price and Cost are characteristics of the same nature, both attached to each unit of product. It is true of Price: whenever a unit is sold -- in whatever form and however it is financed -- it has a unit price, and it is not ambiguous.

Unit cost, on the other hand, is the result of allocations among products and over time done in a myriad different ways, with different results. By shifting overhead around, managers make the products they like appear cheap, and the ones they want to kill appear expensive. Once the "expensive" products are terminated, the same overhead is spread among fewer survivors, thus making new ones unprofitable, and the death spiral ends only with closure of the factory.

Instead of the simplistic "Margin = Price - Cost" for each unit, a sound economic analysis of manufacturing considers the flows of revenues and expenses associated with making a product in given volumes over its life cycle, and sometimes a product family rather than an individual product, with, for example, some products given away as free samples to promote the sale of other products.


Is OEE a Useful Key Performance Indicator? | Jeffrey Liker

See on - lean manufacturing
"For manufacturing that is equipment-intensive, how the equipment works is often the main factor in productivity. Total Productive Maintenance (TPM) has become a buzzword in lean and a generally accepted metric is Overall Equipment Effectiveness (OEE). This is measured as the product of three factors:

  • OEE = Availability x Performance x Quality
  • Availability = run time/total time
  • Performance = total count of parts/target count (based on a standard)
  • Quality = good count/total count

Ignacio S. Gatell questions whether companies using OEE really understand it, can explain it clearly to their customers, and understand what it means to compare OEE as a KPI across plants. He questions whether even plant managers understand how it is calculated and what it means.

The only good argument for OEE is that at a macro-level in a plant it provides a high level picture of how your equipment is functioning."

Michel Baudin's insight:

About 15 years ago, a summer intern came to work at a client plant in aerospace machining. I thought a great project for him would have been to identify a common tooling package for machining centers that were grouped in a "Flexible Manufacturing System" (FMS). It was challenging, but it would have actually given the FMS the flexibility it was supposed to have. It was a real engineering project that would have improved performance.

Management, however, decided that a better use of his time was to collect data and calculate OEEs for another set of machines. It did keep the student busy all summer, but resulted in no change, and no improvement bragging rights for the student.

I have had a problem with OEE ever since. It is an overly aggregated and commonly gamed metric that you can only use by breaking it down into its constituent factors; you might as well bypass this step and go straight to the factors.

Among these factors, I find Availability to be most often confused with Uptime. The availability of a device is the probability that it works when you need it, and the total time in the denominator has to be the time you need it for. For example, if you work two shifts a day, the availability of a machine is not affected by your taking it down for maintenance on third shift. There have been cases of managers overproducing to increase run time and thereby boost the OEE of their machines...
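
To make the aggregation concrete, here is a sketch of the OEE arithmetic; the oee helper and its numbers are invented for illustration, and the point is that the aggregate hides which factor needs attention:

```python
def oee(run_time, scheduled_time, parts_made, target_parts, good_parts):
    """OEE = Availability x Performance x Quality.

    scheduled_time must be the time the machine is actually needed --
    e.g. two shifts, not 24 hours, if third shift is unscheduled.
    """
    availability = run_time / scheduled_time
    performance = parts_made / target_parts
    quality = good_parts / parts_made
    return availability * performance * quality, (availability, performance, quality)

# Two very different machines with the same OEE: one loses to downtime and
# scrap, the other to speed losses. The aggregate cannot tell them apart.
oee_a, factors_a = oee(run_time=14, scheduled_time=16,
                       parts_made=700, target_parts=700, good_parts=560)
oee_b, factors_b = oee(run_time=16, scheduled_time=16,
                       parts_made=490, target_parts=700, good_parts=490)
assert abs(oee_a - oee_b) < 1e-9  # both 0.70, for entirely different reasons
```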


Shortage of skills, not yet - but very soon - a wake up call (part 2) | Wiegand's Watch

This is a translation of the bulk of Bodo Wiegand's latest newsletter, about Lean in Germany, followed by my comments:

"In Part 1, we discussed the possibility of becoming more effective in your own work environment by stemming the flood of email and reducing the extent of meetings. In Part 2, we want to focus on how you can optimize cooperation between employees and departments.

In production, there are precise procedures and instructions on how a product is to be made. There, the processes are stable, documented, and visual. We have not considered this to be necessary in support departments. Everyone works as he sees fit, then delivers when he is ready and at the quality he is capable of.

Sorry - in administration, we produce nothing.

We don't! Or do we?

In any case, work is not done according to a plan or delivered just in time at a precisely defined quality. Don't we need to? We do! We need to gradually start to handle administrative processes like production processes - because we need more effectiveness and efficiency on our office floors to reduce skill shortages and remain competitive.

It is not about takt in administration but about flow and on-time delivery. Run time, interfaces, and flexibility are the principles. I can already hear the staff complain in Development or in Construction: "For us, no project is like any other - so you can't define processes, let alone standardize." And yet 70% to 80% of the activities are routine and repetitive, consisting of foolishly long meetings and secretarial or travel-agency work that is unrelated to project content.

Defining and standardizing the processes of development and construction saves employees valuable time, while proceeding with fixed rules and checkpoints prevents errors or detects them faster, improves the quality and timeliness of the work, and avoids interface problems, for example in making prototypes or starting up manufacturing.

I can already hear the complaints of managers in Human Resources, Information Technology, or Accounting: "We produce nothing - we can't optimize anything." The most beautiful expression I frequently hear from this faction is "Mr. Wiegand, without us, nothing runs here." And then, when I ask what products they make or what services they render, I usually see only blank stares.

Hello! Is hiring, challenging, and coaching employees not a service? Are indicators that show facts, or figures that support decisions, not defined products? Or implementing software, delivering training, and other support functions? Of course, these are products and services. Can we describe these products, deliver them more efficiently, standardize them, define quality requirements, and visualize their processes?

Yes, we can!

So what is the difference between the production of goods and the products in the so-called indirect areas?

None - except for the fact that the first are visible, tangible, and palpable, while the product of Administration is information: invisible, intangible, and open to interpretation. If it is possible, therefore, to make the information visible and to define it, then you can treat it like a product and make the processes more effective and efficient. And why do we not do it?

We had the same problem in production 20 to 30 years ago. Processes were previously under the responsibility of master craftsmen who delivered as they saw fit. We had to define the processes, specify interfaces, establish quality, formulate work orders, and convert from the functional organization of workshops and production areas to an organization along manufacturing processes.

I remember vividly how the Craftsmen, Workshop Supervisors, and Production Area Managers fought and defended their kingdoms. It was a long, hard struggle. Today, however, less than 10% of companies are still aligned functionally in production. They all fought to the end, against better judgment, against greater economic performance, and for their kingdoms.

This is what we face today every day on our office floors. The same arguments are repeated. As an accountant said, "If we move to a process-oriented organization, the specific know-how goes down the drain." By the way, the last major innovation in accounting -- breakeven analysis -- is more than half a century old. So what kind of know-how must be centrally held, promoted, and protected?

Do not get me wrong -- we need accounting to measure our success, but not in an ivory tower: on the spot, so that you know what you need to measure, can support the decision makers, and thus give guidelines to your trade (see also my article in the book The Accountant as In-House Consultant).

So we anchor the controls in the process, where needed, and not in a functional department. If we want to raise the potential in the indirect areas, we must not look at the individual functions, but at processes across functions, and optimize the functions themselves. Now you know why it is so hard to find support for Lean Administration. But, as 12 years of Lean Administration consulting have shown, it pays. Here are a few examples:

  • Today, 900 employees in Development and Administration are doing work that used to require 1,300.
  • Capital goods are shipped six months earlier.
  • A service center saves €17M.
  • A pharmaceutical company handles 20% more sales without adding employees.
  • A government office reduced processing time from three weeks to two days.

Now how is this done? It starts with process mapping, defining products, analyzing the task structure and the job structure, and then optimizing the value streams. Quite simple - or not?

Unfortunately, not quite that simple. You can make many mistakes. I have seen many process maps. Some were created from an IT perspective, others from the organization's point of view -- but why not from a customer perspective?

Others avoid analyzing the structure of the activity, usually with the argument "Not acceptable to the Works Council."

What a joke!

We have been implementing Lean Administration in companies for 12 years and have never had problems with the Works Councils over an activity structure analysis. Mostly, we were instead supported, with the motto: "Finally, something is happening in this area."

Often the products are not defined from a customer perspective. The optimized value streams are contradictory, watered down by compromise at the interfaces, and turned into overcomplex processes.

Why ?

Out of consideration for individuals and functions. Lean Administration projects rarely succeed from the inside out, but require external coaches to bring self-interests to light and put the process in the foreground.

You should, however, not be deterred by these difficulties. Especially with projects in Administration, the five success factors I so often stress are:

  • Planning
  • Leadership commitment
  • Holistic approach
  • Resolute implementation / change in mindset
  • Measurement

The potential is large and success easy to achieve. You and your colleagues just have to really want it and, of course, start properly."

Michel Baudin's comments:

As many discussions of the "Lean office" do, Wiegand's letter lumps together all activities other than production. Much of it is devoted to the standardization of office work, which he presents as essential to avoiding a skill shortage by increasing productivity. While a case can be made for the value of following documented procedures in transaction processing, like rental car issue and return, it is far-fetched for creative knowledge work like R&D.

In product development, it helps to have some discipline in managing the flow of projects through phases, with appropriate validation at various checkpoints, but there is little evidence that it is essential. The history of product development is replete with cases where all the procedures were in place but the products failed, and, on the contrary, of cases of product developers who broke the rules and succeeded.

Wiegand describes the transition from craft control to controlled, documented processes in production as a battle fought and won in the past 20 to 30 years. I view it instead as a struggle that started with the industrial revolution around 1750 and is still going on, the Lean approach being only the latest in a long list. And it does not involve standardizing everything. If you have machines with controls that are visually obvious and mistake-proof, you don't need instructions.

Another theme of Wiegand's letter is the change from organization by function, where employees are in departments focused on one operation, to organization by process, where they are in teams in charge of all the operations needed to generate a finished output. It is like the change from a machining job-shop with departments for turning, milling, heat treatment, grinding, etc. to a flow shop with lines or cells that machine blanks from start to finish.

Wiegand asserts that only 10% of companies still have functional organizations in production. It is a number I have a hard time believing. I don't believe it's true even in Japan. In fact, the functional, or job-shop, organization is not wrong for everything. Once you have done your Runner/Repeater/Stranger analysis, it is actually what you need for Strangers. And it is not always wrong in office work either. Product development at Toyota, for example, is done by functional departments.

I am also puzzled by his description of "break-even analysis" as the last great innovation in accounting. It does not strike me as particularly advanced. What about discounted cash flows, internal rates of return, activity-based costing, and other concepts that shine a light on different aspects of operations than just break-even points?

One last comment is that Wiegand mentions "optimization" six times and "improvement" never. One of my pet peeves is that, in Lean, you always improve but never optimize, because optimization is, by definition, the end of improvement. I have been assured, both in Germany and in France, that they mean "improvement" when they say "optimization," which raises the question of what word they use when they actually mean "optimization."

When One-Piece Flow Restricts Capacity

Philip Marris told me of the case of a machining cell in an auto parts plant where management was ready to buy more machines because it was "lacking capacity," but he was able to find a cheaper way to increase capacity by 17% in 15 minutes.

Unlike manual assembly cells, in which work can be balanced among stations, cells that involve machines always have one that is slower than all the others, and reallocating work among machines with different capabilities is not an option. In particular, almost all machining cells have a bottleneck, and the situation Philip described involved this bottleneck and the machine feeding it. The cell practiced one-piece flow. Therefore, if the feeder machine had worked perfectly, the timelines of the Feeder and the Bottleneck would have been as follows:


The Feeder would have started one piece at the beginning of each takt interval, and, since it is faster than the Bottleneck, it would have finished the piece before the end of the interval. The Feeder then would have waited for  the bottleneck to pick up the piece before starting the next one. The Bottleneck would have been working 100% of the time; the Feeder would not.

But what Philip discovered by observing operations was that the Feeder had microstoppages.  When the Feeder was hit by a microstoppage, the delay it caused passed to the bottleneck, which was prevented from working 100% of the time, as shown below:


This reduced the capacity of the entire cell. In the actual case, even with its microstoppages, the Feeder had enough capacity to feed the Bottleneck on the average, just not on a takt basis. The microstoppages caused the output of the Feeder to fluctuate and disrupt the operation of the Bottleneck.

To anyone trained in Lean, the only appropriate solution was to eliminate the microstoppages... But it was easier said than done. Sometimes, all it takes is slowing down the machine, or changing a maintenance policy from "clean for one minute" to "clean until it is clean." But it is not always that simple.

Microstoppages are often unreported because they are fixed on the fly by production operators. To understand microstoppages, you need to monitor the machine to observe when they occur and trace their causes. Eliminating them may require you to modify chutes, fixtures, jigs or dies, or even the basic process, and it can take time, but you need to do it if you want one-piece flow to work.

In the meantime, what do you do? Buying more equipment is an expensive solution, especially when you don't expect to need it once you are rid of the microstoppages. A cheaper countermeasure is to protect the supply of parts to the bottleneck against fluctuations by decoupling the two machines with a buffer of WIP. You can set the size of this buffer by trial and error,  knowing that it is not a long-term solution.
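
A toy simulation shows both the capacity loss and the effect of the decoupling buffer; the cycle times, microstoppage rate, and buffer sizes below are assumptions for illustration, not Philip's actual data:

```python
import random

def simulate(buffer_cap, pieces=20_000, seed=7):
    """Feeder (50 s cycle + random microstoppages) ahead of a 60 s Bottleneck.

    buffer_cap is the number of pieces that can sit between the machines;
    buffer_cap=1 is one-piece flow. Returns the cell's average seconds per
    piece, i.e. the pace set by the starved-or-not Bottleneck.
    """
    rng = random.Random(seed)
    feeder_free = 0.0            # when the Feeder can start its next piece
    start = [0.0] * pieces       # when the Bottleneck starts each piece
    for i in range(pieces):
        cycle = 50.0
        if rng.random() < 0.2:               # microstoppage on 20% of pieces
            cycle += rng.uniform(0.0, 60.0)  # up to a minute lost
        done = feeder_free + cycle           # piece ready for the Bottleneck
        prev = start[i - 1] + 60.0 if i else 0.0
        start[i] = max(done, prev)           # wait for the piece and for itself
        # The Feeder is blocked until a buffer slot frees up, i.e. until the
        # Bottleneck pulls the piece sitting buffer_cap positions back.
        freed = start[i - buffer_cap + 1] if i >= buffer_cap - 1 else 0.0
        feeder_free = max(done, freed)
    return (start[-1] + 60.0) / pieces       # average seconds per piece

# With one-piece flow, every microstoppage starves the Bottleneck; a small
# decoupling buffer absorbs the fluctuations and brings the cell back close
# to the Bottleneck's own 60 s cycle.
assert simulate(buffer_cap=5) < simulate(buffer_cap=1)
```

The buffer only helps because the Feeder, microstoppages included, averages less than 60 s per piece; if it did not, as the post notes, no buffer size would save the cell.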

Of course, manufacturing engineers understand that you cannot have one-piece flow with microstoppages. So why did they ignore their own wisdom? The most likely explanation is a demand from a corporate "Lean group" to implement one-piece flow everywhere and "damn the torpedoes!" These engineers had complied not because they thought it was a good idea, but because it was required to keep their jobs.

Technically, Philip sees this story as a case study in the addition of Theory of Constraints (TOC) thinking to Lean; I just see it as due consideration of equipment issues in cell design, as I was taught it more than 25 years ago. From a management standpoint, I see it as an example of the local consequences of half-baked corporate mandates.