Jun 14 2011
Management Whack-a-Mole and the Value of Lean
On 5/24/2011, NWLEAN moderator Bill Kluck launched the richest, most vigorous debate in the 13-year history of that forum by asking members whether they were cost cutters or capacity enhancers. Over the following three weeks, 29 authors posted 68 spirited, yet courteous messages accessible from NWLEAN to anyone with a Yahoo! ID. The contributors included several well-known authors and some of my favorite sparring partners. In alphabetical order, they were Dan Barch, William Bowman, Robert Byrd, Abhijit Deshpande, Mark Graban, Jim Harrington, Jonathan Harrison, Rob Herhold, Blair Hogg, Gangadhar Joshi, Bill Kluck, Joachim Knuf, Ed Larmore, Paul Layton, Ted Mayeshiba, Jim McKechnie, Larry Miller, Joe Murli, John Nelson, Stephen P. O’Brien, Anthony Reardon, Tom Robinson, Sunita Sangra, Dale Savage, Patrick D. Smith, Tom Stogsdill, Mike Thelen, Chuck Woods, plus a few others who did not use their real names.
The message that started the thread was as follows:
“It seems more and more (especially in this recession!) that lean professionals fall into 2 camps:
– Cost cutters
– Capacity enhancers
There are some out there that are saying that you HAVE to cut costs, or you’re not getting lean.
There are others that say lean is about improving capacity, so you can enhance the customer experience.
Which are you, and why? Is this driven by the top of your organization, people who may not understand what lean is? Do we risk becoming cost cutters, or smoke-and-mirror guys?
Just askin’!
Bill”
Although I didn’t participate initially, it was clear to me that the effect of a successful Lean implementation was limited neither to cost cutting nor to capacity enhancement. Instead, I see Lean as the pursuit of concurrent improvement in all dimensions of manufacturing performance, but I had already shared my thoughts on this matter in Lean as the End of Management Whack-a-Mole. Management Whack-a-Mole is the game illustrated in Figure 1, in which managers boost performance in one dimension at the expense of the others, shifting focus every few months without ever achieving a genuine overall improvement.
Figure 1. Management Whack-a-Mole
Several of the early posts, however, prompted me to jump into the fray and respond on a variety of topics, initially as follows:
- People trained in Finance and Management Accounting have had their shot at running US Manufacturing from the 1950s to the 1970s, with outcomes that should make them modest about the value of their approaches.
- Companies that have grown through Lean include Toyota since 1950, Wiremold from 1991 to 2000, and many others that are not publicized. A successful Lean effort helps Sales in specific ways, as, for example, when more flexibility enables Manufacturing to accept a sample order for a new component from a customer, who then designs it into his own products and becomes a major OEM.
- Cost reductions are a by-product of Lean but not its primary purpose. Manufacturing is a competitive sport, and Lean is a strategy that makes you a stronger player, not just right now but in the long run as well. When you hire a top coach for a sports team, it is not to cut costs but to win games.
- ROI is just one ratio, invented at DuPont 100 years ago to report in a common form the activities of multiple business units. There is nothing magic about it, and, by itself, it certainly does not give a complete picture of the health of a business, as even finance people recognize.
This caused further exchanges with Robert Byrd, who wrote: “…The most basic concept is Profit = Price – Cost. The market dictates price, so we have the best leverage for maximizing profit through controlling our cost structure…” My response:
Is this always true? What about situations in which time to market is of the essence? I remember a client that was introducing a new frozen food product. On one of our visits they had us taste the R&D prototype, which was delicious. Three months later, they had a jury-rigged production line making 900 units/minute. The line left much to be desired, but we could not argue with their focus on getting the product to market. During all that time, their focus was not on getting it done cheaply but on getting it done at all. Then they could go back and improve the line.
He also wrote: “…Which is best accomplished through the elimination of waste and problem solving…” My response:
This is the key point. We have yet to encounter a manufacturing or service organization in which the details of how work is being done contain no improvement opportunity, and this is what most managers simply do not believe. They are trained to think in terms of “big pictures,” and look for cost-cutting opportunities like eliminating entire departments or reducing every department’s budget by 5%.
And finally: “If the aim is only to increase profit by reducing cost, without focus to grow the business, then the culture will never take root and only a short-term improvement will be realized. Or the company will never achieve its true potential.”
Your point does not fit in a 30-second sound bite or tag line. Good managers are able to monitor all aspects of performance at the same time, from growth to costs. But communication is tricky. For example, if you showcase “Profit = Price – Cost,” what behaviors do you expect to stimulate, other than cost cutting?
In his second response, he added: “However, what was their business objective once they focused on improvement? I would venture to guess that it was focused on cost reduction.” My response:
I don’t recall them mentioning cost reduction explicitly. When the line first started, about 20% of the units were defective, and that was a focus of concern. Of course, quality improvements reduce costs as a side effect, but they also have other effects: quality problems have an impact on customer perceptions of the product because of variability among the units that are not defective.
In their other production lines, improvement was driven by marketing concerns. This was in Europe, and, in spite of the existence of the EU, frozen food packages came in different sizes by country, in addition to having labels in different languages. Based in Italy, they pursued quick changeovers in package sizes in order to be able to export to Germany, France, the UK, etc.
Their Lean program fit within their business strategy, but it is a stretch to say that this strategy was centered on cost reduction.
Oct 14 2011
Revisiting Pareto
Revisiting Pareto is a paper about the application of the 80/20 law, also known as the law of the vital few and the trivial many, or the Pareto principle. It presents new ways of applying it to manufacturing operations and is scheduled for publication in Industrial Engineer Magazine in January 2012. As a preview, I am including here the following:
Abstract of the paper as scheduled for publication in 1/2012
The Pareto principle, so named by J.M. Juran in the 1940s, is also known as the 80/20 law, the law of the vital few and the trivial many, and ABC classification. It is a simple yet powerful idea, generally accepted but still underutilized in business. First recognized by Vilfredo Pareto in the distribution of farm land in Italy around 1900, it applies in various forms to city populations as counted in the 2010 census of the US, as well as to quality defects in manufacturing, the market demand for products, the consumption of components in production, and various metrics of item quantities in warehouses.
The key issues in making it useful are (1) to properly define quantities and categories, (2) to collect or retrieve the data needed, (3) to identify the actions to take as a result of its application and (4) to present the analysis and recommendations in a compelling way to decision makers.
In the classical Pareto analysis of quality problems, defect categories only partition the defectives if no unit has more than one type of defect. When multiple defects can be found on the same unit, eliminating the tallest bar may increase the height of its runner-up. With independent defect categories and an electronic spreadsheet, this is avoided by calculating the yield of every test in the sequence instead of simply counting occurrences, and multiplying these yields to obtain the yield of the sequence of tests. We can then plot a bar chart of the relative frequency of each defect in the population that is actually tested for it and, as a cumulative chart, the probability of observing at least one of the preceding defects.
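As a minimal sketch of this calculation, with made-up defect categories and counts, the per-test yields chain together as follows:

```python
# Hypothetical test sequence: (defect category, units tested, units failing).
# Each test only sees the units that reached it, so its defect rate is
# relative to the population actually tested, not to total production.
tests = [
    ("solder bridge", 10_000, 400),
    ("missing part",   9_600, 120),
    ("misalignment",   9_480, 250),
]

cumulative_yield = 1.0
for name, tested, failed in tests:
    y = (tested - failed) / tested   # yield of this individual test
    cumulative_yield *= y            # yield of the sequence so far
    print(f"{name}: defect rate {failed / tested:.2%}, "
          f"P(at least one defect so far): {1 - cumulative_yield:.2%}")
```

The bar chart then uses the per-test defect rates, and the cumulative chart plots one minus the running product of the yields.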
When analyzing warehouse content, there are usually too many different items to count physically, and we must rely on data in the plant’s databases. The working capital tied up by each item is usually the most accessible. But physical characteristics like volume occupied, weight or part count are often scattered in multiple databases and sometimes not recorded. The challenge then is to extract, clean, and integrate the relevant data from these different sources.
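A hedged sketch of such an integration, using pandas with invented item IDs and column names standing in for extracts from, say, an ERP system and a warehouse management system:

```python
import pandas as pd

# Hypothetical extracts from two plant systems; real ones would be pulled
# from the corresponding databases, with whatever names they actually use.
inventory = pd.DataFrame({
    "item_id":   ["A1", "B2", "C3"],
    "on_hand":   [1200, 300, 4500],
    "unit_cost": [4.50, 27.00, 0.80],
})
dimensions = pd.DataFrame({
    "item_id":        ["A1", "C3"],        # B2 has no recorded dimensions
    "unit_volume_m3": [0.002, 0.0005],
})

df = inventory.merge(dimensions, on="item_id", how="left")
df["working_capital"] = df["on_hand"] * df["unit_cost"]
df["volume_m3"] = df["on_hand"] * df["unit_volume_m3"]  # NaN where unrecorded
print(df.sort_values("working_capital", ascending=False))
```

The left join makes the gaps visible: items with no recorded physical data come out as NaN rather than silently disappearing from the analysis.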
In both supply chain and in-plant logistics, the design of the system should start with an analysis of part consumption by a production line in terms of frequency of use, another quantity that is not additive across items. In this application, the classical Pareto chart is replaced by the S-curve, which plots the number of shippable product units as a function of the frequency ranks of the components used, and thus provides a business basis for grouping them into A-, B- and C-items, also known as runners, repeaters and strangers.
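As an illustration under assumed data, the S-curve can be computed from bills of materials and product demand; all product names, component names, and quantities below are made up:

```python
# Hypothetical bills of materials: which components each product requires.
bom = {
    "P1": {"C1", "C2"},
    "P2": {"C1", "C3"},
    "P3": {"C1", "C2", "C4"},
}
demand = {"P1": 6000, "P2": 3500, "P3": 500}  # units shipped per period

# Frequency of use of a component = number of product units consuming it.
# It is not additive across items: two components can each appear in all units.
freq = {}
for prod, comps in bom.items():
    for c in comps:
        freq[c] = freq.get(c, 0) + demand[prod]
ranking = sorted(freq, key=freq.get, reverse=True)

# S-curve: holding only the k most frequently used components, how many
# product units could be built entirely from them?
for k in range(1, len(ranking) + 1):
    available = set(ranking[:k])
    shippable = sum(q for p, q in demand.items() if bom[p] <= available)
    print(f"Top {k} components -> {shippable} shippable units")
```

The knees of this curve are one business basis for drawing the boundaries between runners, repeaters, and strangers.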
The presentation of the results should be focused on communication, not decoration, and use graphics only to the extent that they serve this purpose.
Emergence of the Pareto distribution in a simulation
While the Pareto principle can be easily validated in many contexts, we are short on explanations as to why that is. We do not provide one, but we show how, in a simulation of a multiple-round roulette game in which each player’s winnings at each round determine his odds in the next one, the distribution of chips among the players eventually comes close to the 80/20 rule. This happens even though it is a pure game of chance: players start out with equal numbers of chips and have no way to influence the outcome.
Assume we use a roulette wheel to allocate chips among players in multiple rounds, with the following rules:
- Every player starts with the same number of chips.
- At each round, each player’s odds of winning are proportional to the number of chips he holds going into that round.
- The total number of chips is fixed: whatever a player wins comes out of the other players’ hands.
- A player left with no chips after a round is out of the game.
This is a pure game of chance: there is nothing players can do to influence the outcome, yet the players who do best in one round get the best odds in the next one. It is also a zero-sum game, in that whatever a player wins comes out of the other players’ hands. There is no safety net: a player with no chips after any round is out of the game. If you know the present state of the game (how many chips the player in each rank holds), then the results of future rounds are independent of the path by which you arrived at the present state. Technically, it is a random walk and a Markov process. The concept of the game is shown in Figure 1.
It could be viewed as representing a simplified Oklahoma Land Rush, where farmers of equal talent are initially given identical plots of land. Even in the first round, some plots will have higher crop yields than others. If the average crop is needed to survive, the farmers with a surplus sell it to the farmers with a shortfall in exchange for land and start the next season with land in proportion to the crop they had on the previous one. The players could also be functionally equivalent, similarly priced consumable products sold in the same channels, like toothpaste or soft drinks, and the chips would be the buyers of these products.
If we run this game through enough rounds, do we tend towards a distribution where 20% of the players hold 80% of the chips, or do all the chips end up in the hands of only one player? We simulated this game in software with 1 million chips to be divided among 1,000 players in 100,000 rounds, and the simulation takes about 2 minutes to run on a desktop PC.
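One way to simulate such a game is sketched below. It assumes each round redistributes all the chips in a single multinomial draw, with probabilities proportional to current holdings; the paper’s exact mechanics may differ:

```python
import numpy as np

# Sketch of the game under the multinomial-redistribution assumption.
# Zero-sum by construction: every draw reallocates the same total number
# of chips. Players with zero chips get zero probability and stay out.
rng = np.random.default_rng(42)
n_players, n_chips, n_rounds = 1_000, 1_000_000, 100_000

holdings = np.full(n_players, n_chips // n_players)
for _ in range(n_rounds):
    holdings = rng.multinomial(n_chips, holdings / holdings.sum())

# What share of the players holds 80% of the chips?
sorted_down = np.sort(holdings)[::-1]
top = np.searchsorted(np.cumsum(sorted_down), 0.8 * n_chips) + 1
print(f"Top {top / n_players:.1%} of players hold 80% of the chips")
```

Far fewer rounds suffice to see the distribution settle; running the full 100,000 mainly confirms its stability.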
The results show that the “wealth distribution” is stable beyond 10,000 rounds, with 23% of the players collectively holding 80% of the chips. Figure 2 shows the distribution after 100,000 rounds, which roughly satisfies the 80/20 law, but not recursively: applying 80/20 again to the bottom 80% of the players, who hold 20% of the chips, the top 36% of the players should account for 96% of the chips, but it actually takes the top 49% to do so.
Figure 3 shows other indicators of the simulation results every 10,000 rounds. The proportion of the top players holding 80% of all the chips oscillates between 21.5% and 25%, with a very slow downward trend. The number of chips held by the top 10 players also varies, but is essentially flat.
Figure 3. Indicators every 10,000 rounds
This pure game of chance, which at every round favors the winners of the previous round, does not rapidly concentrate all the chips in a few hands. Instead, it produces a distribution that is as close to Pareto’s as anything we can observe in demographics, quality, or production control. This suggests the following two conclusions:
The simulation has shown the limitations of the conclusions drawn from a Pareto chart:
- When a product is a Runner, by definition it deserves a dedicated production line. That is a direct consequence of its being at the top of the chart.
- It does not, however, imply that this product is “better” in any way than the other products. The simulation arrived at a near 80/20 distribution of the chips without assuming any difference in the players’ ability to acquire chips: the roulette wheel has no favorites, and the players can do nothing to influence the outcome.
By Michel Baudin • Technology • Tags: industrial engineering, Lean manufacturing, Management, Pareto