# Working outside in rather than inside out | Bill Waddell

See on Scoop.it - lean manufacturing

"Perhaps one of the most inane – but very typical – aspects of the business process in manufacturers is the construction of the supply chain from the inside out.  Three times in the last week – count ‘em – three for three – I visited a manufacturing company with (1) problems delivering in the time frame customers want; (2) lots of inventory but rarely the right inventory; and (3) a supply chain constructed by their supply chain people based on some idea of how to construct a supply chain but not one constructed based on a delivery objective.

In other words, some factory guys got together at some point – probably with an accountant or two breathing down their necks and decided this is how we purchase and this is how we schedule production and that is the resulting lead time, so sales …. Go out and try to shove those lead times down customers’ throats, regardless of what customers want or need...."
See on www.idatix.com

# The Discovery of Lean | Narrated Prezi by Mark Warren

See on Scoop.it - lean manufacturing
Brief description of the origins of lean. Lean is an outcome of implementing Flow Principles + the TWI program.

This is a short version of a one-hour presentation I heard live a few months ago. Mark's take is the result of more than 30 years of practical experience in all sorts of plants around the world and more than a decade of intensive research of original documents in numerous archives in several countries.

To understand where concepts and techniques are useful in manufacturing today, we need to know who invented them and for what purpose. The historical perspective is not a luxury, and the explanations of this history must be accurate if it is to enlighten us.

In historical research, Mark is a pro; I am an amateur. John Hunter thinks I have a "library full of dusty tomes." In truth, I only have a few old books on manufacturing, half of them recommended by Mark.

See on prezi.com

# Is it a Bad Idea to Pay a Lean Consultant Based on a Percentage of Cost Savings? | Mark Graban

See on Scoop.it - lean manufacturing
Blog post at Lean Blog : The price paid for most management consulting work is based on either a daily rate or some variation of a flat-rate fee based on what is being delivered. Enterprise software pricing is also often fixed. In both cases, the client pays this with some expectation of benefits and even an “ROI” for the customer.[..]

I agree with Mark, and I am happy when clients report that they get ten times in benefits what our services cost. A daily fee for work done on site and a fixed fee for deliverables for offsite work are simple arrangements; paying a percentage of benefits, whether cost savings or revenue increases, is a complicated arrangement, conducive to misunderstandings and disagreements.

See on www.leanblog.org

# Renault: An international school of Lean Manufacturing opens at Flins | Automotive World

See on Scoop.it - lean manufacturing
"Jose-Vicente de Los Mozos, Executive Vice President, Manufacturing and Supply Chain, of the Renault Group, inaugurated the International School of Lean Manufacturing yesterday."

Michel Baudin's insight:

When I visited this plant in 1994, I never imagined that it would be the site of an international school of Lean 20 years later.

We were working at the time on Lean implementation with CIADEA, the Renault licensee in Argentina. It had originally been a subsidiary, was sold to local entrepreneur Manuel Antelo in 1992, and was repurchased by Renault in 1997.

At the time, my hosts in Flins thought that Lean was just a way to cut heads and that implementing 5S would cause production to drop.

Times change.

See on www.automotiveworld.com

# Is it Lean’s Fault or the Old Management System’s? | Mark Graban

See on Scoop.it - lean manufacturing
Blog post at Lean Blog :

"[...]The problem is the culture doesn’t change overnight. Leaders have years or decades of old habits (bad habits) that run counter to Lean thinking. They might be (might!) be trying to change, but people will still fall back into old habits, especially when under pressure.

I hear complaints (in recent cases) coming from different provinces in Canada that say things like:

- Lean is causing hospitals to be “de-skilled” by replacing nurses with aides.
- Lean drives a focus on cost and cost cutting, including layoffs or being understaffed.
- Lean is stressing out managers by asking them to do more and taking nothing off their plate.
- Nurses hate Lean because they aren’t being involved in changes[...]"

In this post, Mark Graban explains how the leadership in Canadian hospitals is slapping the "Lean" label on ancient and counterproductive "cost-cutting" methods, and how the victims of these practices unfairly blame Lean.

This is definitely L.A.M.E., Mark's apt term for "Lean As Misguidedly Executed," and is found in Manufacturing as well as Health Care. Much of the article -- and of the discussion that follows -- is about what I call yoyo staffing: you hire more than you should in boom times, and lay off in recessions.

Of course, it isn't what Toyota did, and churning your work force in this fashion not only disrupts people's lives but is bad business. Hiring, training and firing repeatedly prevents your organization from accumulating the knowledge and skills it needs.

Mark makes the case that Lean should not be blamed for mistakes that have nothing to do with it. Other than raising consciousness, however, the post does not propose solutions to keep this from happening.

While there have been studies published on Toyota's approach to Human Resources (HR), I don't recall seeing much in the American Lean literature on topics like career planning for production operators.

In his comments, Bob Emiliani paints the current generation of leaders as "a lost cause," and places his hopes on the next. He seems to suggest that the solution is to wait out or fire the current, baby-boomer leadership and replace it with millennials. I don't buy it and, deep down, neither does Bob, because he ends by saying "While one always hopes the “next generation will do better”, it could turn out to be a false hope."

Like everything in HR, generational change has to be planned carefully. The people who rose to leadership positions presumably did so not just because of bad habits but because they also had something of value to offer. And the way the baton is passed is also a message to the incoming leaders: it tells them what to expect when their turn comes.

See on www.leanblog.org

# A Definition of Lean | Mike Rother

See on Scoop.it - lean manufacturing

Maybe it's time for a better definition of "Lean." Here's one for you to consider and build on.

The proposal is "Lean is the permanent struggle to flow value to one customer."

Permanent struggle is fine, but I prefer pursuit. It means the same thing but it is shorter and "pursuit of happiness" sounds better than "permanent struggle for happiness."

On the other hand, I have a problem with "flow value," which I see as the sort of vague abstraction that would prompt Mike Harrison to ask whether it comes in bottles. It is exactly what Dan Heath warns against in the video included in the SlideShare.

I also have a problem with the exclusive focus on customers, which I see as Business 101 rather than Lean. Lean includes many features, like heijunka, that are intended to make life easier for suppliers and are transparent to customers. Going Lean means looking after all the stakeholders of the business, not just its customers.

This is why I define it instead as the pursuit of concurrent improvement in all dimensions of manufacturing performance through projects that affect both the production shop floor and support activities.

Yes, I know, it is specific to manufacturing, but that is not my problem.

See on www.slideshare.net

# Averages in Manufacturing Data

The first question we usually ask about lead times, inventory levels, critical dimensions, defective rates, or any other quantity that varies, is what it is "on the average." The second question is how much it varies, but we only ask it if we get a satisfactory answer to the first one, and we rarely do.

When asked for a lead time, people usually give answers that are either evasive, like "It depends," or weasel-worded, like "Typically, three weeks." The beauty of a "typical value" is that no such technical term exists in data mining, statistics, or probability, and therefore the assertion that it is "three weeks" is immune to any confrontation with data. If the assertion had been that it was a mean or a median, you could have tested it; with "typical value," you can't.

For example, if the person had said "The median is three weeks," it would have had the precise meaning that 50% of the orders are delivered in less than 3 weeks, and that 50% take longer. If the 3-week figure is true, then the probability of the next 20 orders all taking longer is $0.5^{20}\approx 0.95\,ppm$, or about one in a million. This means that, if you do observe a run of 20 orders with lead times above 3 weeks, you can be practically certain the answer was wrong.
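The arithmetic is easy to check; a minimal Python sketch, using the article's hypothetical 3-week median:

```python
# If 3 weeks is truly the median lead time, each order independently has
# probability 0.5 of taking longer. The chance that 20 consecutive orders
# all exceed the median is 0.5^20, i.e. under one in a million.
p_single = 0.5
n_orders = 20
p_run = p_single ** n_orders

print(f"P(20 orders all above median) = {p_run:.3g}")  # 9.54e-07
print(f"In parts per million: {p_run * 1e6:.2f} ppm")  # 0.95 ppm
```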

In Out of the Crisis, Deming was chiding journalists for their statistical illiteracy when, for example, they bemoaned the fact that "50% of the teachers performed beneath the median." In the US, today, the meaning of averages and medians is taught in Middle School, but the proper use of these tools does not seem to have been assimilated by adults.

One great feature of averages is that they add up: the average of the sum of two variables is the sum of their averages. If you take two operations performed in sequence in the route of a product, and consider the average time required to go through these operations by different units of product, then the average time to go through operations 1 and 2 is the sum of the average time through operation 1 and the average time through operation 2, as is obvious from the way an average is calculated. If you have n values $X_{1},...,X_{n}$

the average is just

$\bar{X}= \frac{X_{1}+...+X_{n}}{n}$
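This additivity is easy to verify numerically; a small sketch with made-up operation times for two operations in sequence:

```python
import random
import statistics

random.seed(0)
# Hypothetical times (minutes) for the same 1,000 units through two operations
op1 = [random.uniform(2, 8) for _ in range(1000)]
op2 = [random.uniform(1, 5) for _ in range(1000)]
total = [a + b for a, b in zip(op1, op2)]

# The average of the sum equals the sum of the averages (up to rounding)
print(statistics.mean(total))
print(statistics.mean(op1) + statistics.mean(op2))
```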

What is often forgotten is that most other statistics are not additive.

To obtain the median, first you need to sort the data so that  $X_{\left(1\right)}\leq ... \leq X_{\left(n\right)}$. For each point, the sequence number then tells you how many other points are under it, which you can express as a percentage and plot as in the following example:

Graphically, you see the median as the point on the x-axis where the curve crosses 50% on the y-axis. To calculate it, if n is odd, you take the middle value

$\tilde{X}= X_{\left (\frac{n+1}{2}\right )}$

and, if n is even, you take the average of the two middle values, or

$\tilde{X}= \frac{X_{\left (\frac{n}{2}\right )}+X_{\left (\frac{n}{2}+1\right )}}{2}$

The median is not generally additive, and neither are the other statistics based on rank, like the minimum, the maximum, quartiles, percentiles, or stanines.
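These formulas can be checked in a few lines of Python against the standard library's implementation; the sample data is made up:

```python
import statistics

def median(xs):
    """Median per the formulas above: sort, then take the middle value
    (n odd) or the average of the two middle values (n even)."""
    s = sorted(xs)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]  # X_((n+1)/2) with 1-based ranks
    return (s[n // 2 - 1] + s[n // 2]) / 2

print(median([5, 1, 3]))                # 3
print(median([5, 1, 3, 7]))             # (3 + 5) / 2 = 4.0
print(statistics.median([5, 1, 3, 7]))  # 4.0, agrees with the stdlib
```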

An ERP system, for example, will add operation times along a route to plan production, but the individual operation times input to the system are not averages but worst-case values, chosen so that they can reliably be achieved. The system therefore calculates the lead time for the route as the sum of extreme values at each operation, and this math is wrong because extreme values are not additive. The worst-case value for the whole route is not the sum of the worst-case values of each operation, and the result is an absurdly long lead time.
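The effect is easy to reproduce; a sketch with hypothetical, skewed operation times, using the 95th percentile as each operation's "worst case":

```python
import random

random.seed(1)
N = 20_000

# Hypothetical, skewed times (hours) for the 10 operations on a route
def route_times():
    return [random.lognormvariate(0, 0.5) for _ in range(10)]

samples = [route_times() for _ in range(N)]

def pctile(xs, q):
    """q-th percentile by rank on sorted data."""
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

# "Worst case" per operation: the 95th percentile of that operation's times
per_op_worst = [pctile([s[i] for s in samples], 0.95) for i in range(10)]
planned_lead_time = sum(per_op_worst)

# Actual worst case of the route: the 95th percentile of the route totals
actual_95th = pctile([sum(s) for s in samples], 0.95)

print(f"Sum of per-operation worst cases: {planned_lead_time:.1f} h")
print(f"95th percentile of route totals:  {actual_95th:.1f} h")
```

The sum of per-operation worst cases comes out well above the route's own 95th percentile, because the operations rarely all hit their worst case on the same unit.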

In project management, this is also the key difference between the traditional Critical Path Method (CPM) and Eli Goldratt's Critical Chain. In CPM, task durations set by the individuals in charge of each task are set so that they can be confident of completing them. They represent a perceived worst-case value for each task, which means that the duration for the whole critical path is the sum of the worst-case values for the tasks on it. In Critical Chain, each task duration is what it is actually expected to require, with a time buffer added at the end to absorb delays and take advantage of early completions.

That medians and extreme values are not additive is experienced, if not proven, by a simple simulation in Excel. The formula "LOGNORM.INV(RAND(),0,1)" will give you, in about a second, 5,000 instances of two highly skewed variables, X and Y, as well as their sum X+Y. On a logarithmic scale, their histograms look as follows:

And the summary statistics show the Median, Minimum and Maximum for the sum are not the sums of the values for each term:
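The same simulation can be run outside a spreadsheet; a Python sketch, with `random.lognormvariate(0, 1)` playing the role of `LOGNORM.INV(RAND(), 0, 1)`:

```python
import random
import statistics

random.seed(42)
N = 5_000

# Two highly skewed variables and their sum
X = [random.lognormvariate(0, 1) for _ in range(N)]
Y = [random.lognormvariate(0, 1) for _ in range(N)]
S = [x + y for x, y in zip(X, Y)]

# Rank-based statistics of the sum differ from the sums of the statistics
for name, stat in [("median", statistics.median), ("min", min), ("max", max)]:
    print(f"{name}: X+Y = {stat(S):.3f}, "
          f"sum of parts = {stat(X) + stat(Y):.3f}")

# The mean, by contrast, is additive up to floating-point rounding
print(statistics.mean(S), statistics.mean(X) + statistics.mean(Y))
```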

Averages are not only additive but have many more desirable properties, so why do we ever consider medians? There are real problems with averages, when taken carelessly:

1. Averages are affected by extreme values, as illustrated by the "Bill Gates Walks Into a Bar" story. Here we inserted him into a promotional picture of San Francisco's Terroir Bar: attached to each patron other than Bill Gates is a modest yearly income, but his presence pushes the average yearly income above \$100M, which is not a meaningful summary of the population. Now consider the median. Without Bill Gates, the middle person is Larry, and the median yearly income is \$46K. Add Bill Gates, and the median is the average of Larry and Randy, or \$48K: it barely budged. While, in this story, Bill Gates is a genuine outlier, manufacturing data often have outliers that are the result of malfunctions, as when wrong measurements are recorded because a probe failed to touch the object it was measuring, an instrument was calibrated in the wrong system of units, or a human operator put a decimal point in the wrong place... A large difference between average and median is a telltale sign of this kind of phenomenon. Once the outliers are identified, assessed, and filtered, you can go back to using the average rather than the median.
2. Averages are meaningless over heterogeneous populations. The statement that best explains this is "The average American has exactly one breast and one testicle." It says nothing useful about the American population. In manufacturing, when you consider, say, a number of units produced, you need to make sure you are not commingling 32-oz bottles with minuscule free samples.
3. Averages are meaningless for multiplicative quantities. If your data is the sequence $Y_{1}, ...,Y_{n}$ of yields of the n operations in a route, then the overall yield is $Y= Y_{1}\times ...\times Y_{n}$, and the plain average of the yields is irrelevant. Instead, you want the geometric mean $\bar{Y}=\sqrt[n]{Y_{1}\times ...\times Y_{n}}$.
The same logic applies to the compounding of interest rates, and the plain average of rates over several years is irrelevant.
4. Sometimes, averages do not converge when the sample size grows. This can happen even with a homogeneous population, it is not difficult to observe, and it is mind-boggling. Let us say your product is a rectangular plate. On each one you make, you measure the differences between its actual length and width and the specs, as in the following picture:
Assume then that, rather than the discrepancies in length and width, you are interested in the slope ΔW/ΔL and calculate its average over an increasing number of plates. You are then surprised to find that, no matter how many data points you add, the ratio keeps bouncing around instead of converging, as the law of large numbers has led you to expect. So far, we have looked at averages as just a formula applied to data. To go further, we must instead consider them as estimators of the mean of an "underlying distribution" that we use as a model of the phenomenon at hand. Here, we assume that the lengths and widths of the plates are normally distributed around the specs. The slope ΔW/ΔL is then the ratio of two normal variables with zero means, and therefore follows the Cauchy distribution. This distribution has the nasty property of not having a mean, as a consequence of which the law of large numbers does not apply. But it does have a median, which is 0.
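A few lines of Python reproduce this behavior; the normal discrepancies are made-up stand-ins for measured plates:

```python
import random
import statistics

random.seed(7)

# Made-up discrepancies: ΔL and ΔW normally distributed around the specs,
# so each slope ΔW/ΔL is a ratio of two zero-mean normals, i.e. a Cauchy variable
slopes = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(100_000)]

# The running average keeps jumping whenever a near-zero ΔL appears
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7}: running average = {statistics.mean(slopes[:n]):10.3f}")

# The median, by contrast, settles near 0
print(f"median: {statistics.median(slopes):.3f}")
```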

The bottom line is that you should use averages whenever you can, because you can do more with them than with the alternatives, but you shouldn't use them blindly. Instead, you should do the following:

1. Identify and filter outliers.
2. Make sure that the data represents a sufficiently homogeneous population.
3. Use geometric means for multiplicative data.
4. Make sure that averaging makes sense from a probability standpoint.

As Kaiser Fung would say, use your number sense.

# Forthcoming book: The Deming Legacy

About two years ago, I started posting essays on this blog about Deming's 14 points and their current relevance. Now I am writing on Points 11.a and 12 through 14, which I have not covered yet, organizing the material, and editing it into an eBook entitled The Deming Legacy, which will be available shortly in PDF, iBook, and Kindle formats. If you are interested, please visit the site and let me know. Comments here are also welcome.

The posts on the topic to date are as follows:

The title is a ploy to convince Matt Damon to play Deming in the movie version.

# Lean Handbags and Micro Failures | Mark Graban

See on Scoop.it - lean manufacturing

Blog post from Mark Graban at Lean Blog :

"I enjoy reading the magazine Inc. for my interests in startups and entrepreneurship. There are often examples and case studies that directly reference Lean thinking or just sound like Lean and Kaizen with another label..."

Well-run businesses always make good reading, even if their stories are usually embellished. Starting the design of fashion accessories from a market price, or organizing to allow chefs in a restaurant chain to experiment with new dishes, however, just sounds like good management, not examples of "Lean Thinking."

I have never found much depth in the contrasting of "Margin = Price – Cost" with "Price = Cost + Margin," maybe because I have never worked in a cost-plus business. Commercial manufacturers usually do not have the power to set prices this way. Perhaps the Big Three US automakers had that power in the 1950s; Toyota didn't.

In Tracy Kidder's 1985 documentary book House, a Boston lawyer hired a local contractor to build a house in the suburbs. The contractor rigorously calculated the costs of materials and labor, tacked on a 10% profit, and presented a bid with no wiggle room. It was not intended for negotiation, but the lawyer just had to wrangle some concession out of the contractor. The culture clash between the two makes great reading, but also throws light on how "cost-plus" works in practice.

The equation "Margin = Price - Cost" is based on the assumption that Price and Cost are characteristics of the same nature, both attached to each unit of product. It is true of Price: whenever a unit is sold -- in whatever form and however it is financed -- it has a unit price, and it is not ambiguous.

Unit cost, on the other hand, is the result of allocations among products and over time done in a myriad different ways, with different results. By shifting overhead around, managers make the products they like appear cheap, and the ones they want to kill appear expensive. Once the "expensive" products are terminated, the same overhead is spread among fewer survivors, thus making new ones unprofitable, and the death spiral ends only with closure of the factory.
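A toy example, with made-up numbers, shows the death-spiral mechanism:

```python
def unit_margin(price, variable_cost, overhead, total_units):
    """Apparent per-unit margin after spreading fixed overhead evenly per unit."""
    return price - variable_cost - overhead / total_units

overhead = 100_000  # fixed; it does not go away when a product is dropped

# With both products (8,000 units of A, 2,000 of B), overhead is $10/unit:
print(unit_margin(22, 10, overhead, 10_000))  # A:  2.0 -> looks profitable
print(unit_margin(25, 16, overhead, 10_000))  # B: -1.0 -> looks like a loser

# Yet B was actually contributing 2,000 x (25 - 16) = $18K toward overhead.
# Drop "unprofitable" B, and the same overhead now lands on A alone:
print(unit_margin(22, 10, overhead, 8_000))   # A: -0.5 -> now A looks bad too
```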

Instead of the simplistic "Margin = Price - Cost" for each unit, a sound economic analysis of manufacturing considers the flows of revenues and expenses associated with making a product in given volumes over its life cycle, and sometimes those of a product family rather than an individual product, with, for example, some products given away as free samples to promote the sale of others.

See on www.leanblog.org

# Is OEE a Useful Key Performance Indicator? | Jeffrey Liker

See on Scoop.it - lean manufacturing
"For manufacturing that is equipment-intensive, how the equipment works is often the main factor in productivity. Total Productive Maintenance (TPM) has become a buzzword in lean and a generally accepted metric is Overall Equipment Effectiveness (OEE). This is measured as the product of three factors:

• OEE = Availability x Performance x Quality
• Availability = run time/total time
• Performance = Total count of parts/target count (based on a standard)
• Quality = Good count/Total count

Ignacio S. Gatell questions whether companies using OEE really understand it, can explain it clearly to their customers, and understand what it means to compare OEE as a KPI across plants. He questions whether even plant managers understand how it is calculated and what it means.

The only good argument for OEE is that at a macro-level in a plant it provides a high level picture of how your equipment is functioning."

Michel Baudin's insight:

About 15 years ago, a summer intern came to work at a client plant in aerospace machining. I thought a great project for him would have been to identify a common tooling package for machining centers that were grouped in a "Flexible Manufacturing System" (FMS). It was challenging, but it would have actually given the FMS the flexibility it was supposed to have. It was a real engineering project that would have improved performance.

Management, however, decided that a better use of his time was to collect data and calculate OEEs for another set of machines. It did keep the student busy all summer, but resulted in no change, and no improvement bragging rights for the student.

I have had a problem with OEE ever since. It is an overly aggregated and commonly gamed metric that you can only use by breaking it down into its constituent factors; you might as well bypass this step and go straight to the factors.

Among these factors, I find Availability to be most often confused with Uptime. The availability of a device is the probability that it works when you need it, and the total time in the denominator has to be the time you need it for. For example, if you work two shifts a day, the availability of a machine is not affected by your taking it down for maintenance on third shift. There have been cases of managers overproducing to increase run time and thereby boost the OEE of their machines...
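A small sketch of the distinction, with hypothetical hours:

```python
def availability(downtime_during_needed_hours, needed_hours):
    """Probability the machine works when you need it: only downtime
    during the hours you actually need the machine counts against it."""
    return 1 - downtime_during_needed_hours / needed_hours

week_needed = 2 * 8 * 5  # two shifts of 8 h, 5 days = 80 h needed
down_in_shifts = 4       # 4 h of breakdowns during those shifts
down_third_shift = 6     # 6 h of planned maintenance on third shift

# Correct: third-shift maintenance does not reduce availability
print(availability(down_in_shifts, week_needed))  # 0.95

# Misleading: using calendar time (168 h) as the denominator mixes in
# hours when the machine was not needed at all
print(1 - (down_in_shifts + down_third_shift) / (24 * 7))
```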

See on www.industryweek.com