Mar 3 2011
A factory can always be improved
Based on an NWLEAN post entitled: Laws of Nature – Pareto efficiency and Pareto improvements, from 3/3/2011
In manufacturing, the Italian economist Vilfredo Pareto is mostly known for Pareto diagrams and the 80/20 law, but in economics he is also known for the unrelated concept of Pareto efficiency, or Pareto optimality, which is also relevant to Lean. A basic tenet of Lean is that a factory can always be improved, and that any level of performance you achieve is just the starting point for the next round of improvement. Perfection is something you never achieve but always pursue and, if you dig deep enough, you always find opportunities. This is the vocabulary you use when discussing the matter with fellow production people. If, however, you are taking college courses on the side, you might score more points with your instructor by saying, as an empirical law of nature, that a business system is never Pareto-efficient. It means the same thing, but our problem is that this way of thinking is taught neither in engineering nor in business school, and that few managers practice it.
A system is Pareto-efficient if you cannot improve any aspect of its performance without making something else worse. Managers who believe their factories to be Pareto-efficient think, for example, that you cannot improve quality without lengthening lead times and increasing costs. Yet improving quality while shortening lead times and cutting costs is exactly what Lean does. In fact, eliminating waste is synonymous with improving some dimensions of performance without degrading anything else, that is, with taking advantage of the lack of Pareto-efficiency in the plant.
When we say that a factory can always be improved, it is a postulate, an assumption you start from when you walk through the gates. The overwhelming empirical evidence is that, if you make that assumption, you find improvement opportunities. Obviously, if you don't make that assumption, you won't find any, because you won't be trying.
This is not a minor issue. Writing in the Harvard Business Review back in 1991 about Activity-Based Costing, Robert Kaplan stated that all the possible shop floor improvements had already been made over the previous 50 years. He was teaching his MBA students that factories were Pareto-efficient and that it was therefore pointless to try to improve them; they would do better to focus on financial engineering and to outsource production.
The idea that improving factories is futile and a distraction from more “strategic” pursuits dies hard, and it is expressed repeatedly in a variety of ways. The diminishing-returns argument is that, as you keep reaching for fruits that hang ever higher, the effort required becomes excessive relative to the benefits, but there are two things to consider:
- As you make improvements, you enhance not only performance but your own skills as well, so that some of what was out of reach before no longer is.
- Competition is constantly raising the bar. If your competitors keep improving and you don’t, you lose.
Another argument is that the focus on waste elimination discourages activities like R&D that do not have an immediate impact on sales. The improvement effort, however, isn’t about what we do but how we do it. Nobody in his right mind would call R&D waste, even on projects that fail. Waste in R&D comes in the form of researchers waiting for test equipment, sitting through badly organized meetings, or filling out administrative paperwork.
In manufacturing itself, some see the pursuit of improvement as a deterrent to investment in new technology. While it is clear that the improvement mindset does not lead to solving every problem by buying new machines, the practitioners of continuous improvement are in fact better informed, savvier buyers of new technology. On one side of the shop floor, you see a cell with old machines on which incremental improvements over several years have reduced staffing requirements from 5 operators to 1. On the other side of the aisle, you see a brand new, fully automatic line with a design that incorporates the lessons learned on the old one.
Others have argued that a society that pursues improvement will be slower to develop and adopt new, disruptive technology. But does the machinist improving a fixture deter the founder of the next Facebook? There is no connection. If the machinist were not making improvements, his creativity would most likely be untapped. And his improvement work does not siphon off the venture capital needed for disruptive technology.


Oct 14 2011
Revisiting Pareto
Revisiting Pareto is a paper about the application of the 80/20 law, also known as the law of the vital few and the trivial many, or the Pareto principle. It presents new ways of applying it to manufacturing operations and is scheduled for publication in Industrial Engineer Magazine in January 2012. As a preview, I am including the following:
Abstract of the paper as scheduled for publication in 1/2012
The Pareto principle, so named by J.M. Juran in the 1940s, is also known as the 80/20 law, the law of the vital few and the trivial many, and ABC classification. It is a simple yet powerful idea, generally accepted but still underutilized in business. First recognized by Vilfredo Pareto in the distribution of farm land in Italy around 1900, it applies in various forms to city populations as counted in the 2010 census of the US, as well as to quality defects in manufacturing, the market demand for products, the consumption of components in production, and various metrics of item quantities in warehouses.
The key issues in making it useful are (1) to properly define quantities and categories, (2) to collect or retrieve the data needed, (3) to identify the actions to take as a result of its application and (4) to present the analysis and recommendations in a compelling way to decision makers.
In the classical Pareto analysis of quality problems, defect categories only partition the defectives if no unit has more than one type of defect. When multiple defects can be found on the same unit, eliminating the tallest bar may increase the height of its runner-up. With independent defect categories and an electronic spreadsheet, this is avoided by calculating the yield of every test in the sequence instead of simply counting occurrences, and multiplying these yields to obtain the yield of a sequence of tests. We can then plot, as a bar chart, the relative frequency of each defect within the population actually tested for it and, as a cumulative chart, the probability of observing at least one of the preceding defects.
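The following Python sketch illustrates this calculation on hypothetical test data; the test names and counts are made up, and the only assumption is that we know how many units actually went through each test:

```python
import numpy as np

# Hypothetical data: units flow through four tests in sequence, and a unit
# failing one test may still be checked for the remaining defect types.
tests = ["solder bridge", "missing part", "misalignment", "cosmetic flaw"]
units_tested = np.array([10_000, 9_950, 9_900, 9_870])  # units reaching each test
defects_found = np.array([220, 130, 60, 25])

# Relative frequency of each defect in the population actually tested for it
frequency = defects_found / units_tested

# Yield of each test, and yield of the sequence up to and including it
test_yield = 1.0 - frequency
sequence_yield = np.cumprod(test_yield)

# Probability of observing at least one of the defects seen so far
p_at_least_one = 1.0 - sequence_yield

for name, f, p in zip(tests, frequency, p_at_least_one):
    print(f"{name:14s}  frequency: {f:6.2%}   P(at least one defect): {p:6.2%}")
```

Unlike raw occurrence counts, these frequencies do not change when an earlier defect category is eliminated, so the bars can be read independently.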
When analyzing warehouse content, there are usually too many different items to count physically, and we must rely on data in the plant’s databases. The working capital tied up by each item is usually the most accessible. But physical characteristics like volume occupied, weight or part count are often scattered in multiple databases and sometimes not recorded. The challenge then is to extract, clean, and integrate the relevant data from these different sources.
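As an illustration, here is a minimal sketch of the integration step using pandas, with made-up table layouts and field names; real plant systems will differ:

```python
import pandas as pd

# Made-up extracts from two separate systems: the ERP knows quantities and
# costs, a packaging database knows unit volumes (with gaps).
erp = pd.DataFrame({
    "item": ["A100", "A200", "B300", "C400"],
    "on_hand_qty": [1200, 300, 4500, 80],
    "unit_cost": [2.50, 45.00, 0.30, 310.00],
})
packaging = pd.DataFrame({
    "item": ["A100", "B300"],       # volume not recorded for every item
    "unit_volume_l": [0.8, 0.05],
})

stock = erp.merge(packaging, on="item", how="left")
stock["working_capital"] = stock["on_hand_qty"] * stock["unit_cost"]
stock["volume_l"] = stock["on_hand_qty"] * stock["unit_volume_l"]  # NaN when unknown

# Pareto view by working capital
stock = stock.sort_values("working_capital", ascending=False)
stock["cum_share"] = stock["working_capital"].cumsum() / stock["working_capital"].sum()
print(stock)
```

The left join makes the gaps visible as NaNs instead of silently dropping items, which is usually what you want when deciding whether the physical data is worth chasing down.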
In both supply chain and in-plant logistics, the design of the system should start with an analysis of part consumption by a production line in terms of frequency of use, another quantity that is not additive across items. In this application, the classical Pareto chart is replaced by the S-curve, which plots the number of shippable product units as a function of the frequency ranks of the components used, and thus provides a business basis for grouping them into A-, B- and C-items, also known as runners, repeaters and strangers.
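Here is a sketch of one way to build such an S-curve, on hypothetical demand figures and bills of materials; it assumes a product unit becomes shippable once every component in its bill of materials is covered:

```python
# Hypothetical demand per product and bills of materials
demand = {"P1": 500, "P2": 300, "P3": 150, "P4": 50}
bom = {"P1": ["C1", "C2"], "P2": ["C1", "C3"],
       "P3": ["C2", "C4"], "P4": ["C5", "C6"]}

# Frequency of use of a component = product units consuming it;
# note that these frequencies are not additive across components.
freq = {}
for product, components in bom.items():
    for c in components:
        freq[c] = freq.get(c, 0) + demand[product]

# Rank components from most to least frequently used
rank = {c: r for r, c in enumerate(
    sorted(freq, key=freq.get, reverse=True), start=1)}

# A product is shippable once its worst-ranked component is stocked
limiting_rank = {p: max(rank[c] for c in components)
                 for p, components in bom.items()}

# S-curve: cumulative shippable units vs. number of top-ranked components
shippable = [0] * (len(rank) + 1)
for p, r in limiting_rank.items():
    shippable[r] += demand[p]
s_curve = [sum(shippable[:k + 1]) for k in range(len(shippable))]
print(s_curve[1:])  # shippable units when stocking the top 1, 2, ... components
```

The knees of the resulting curve, rather than arbitrary percentage cutoffs, are then candidates for the boundaries between runners, repeaters, and strangers.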
The presentation of the results should be focused on communication, not decoration, and use graphics only to the extent that they serve this purpose.
Emergence of the Pareto distribution in a simulation
While the Pareto principle can be easily validated in many contexts, we are short on explanations as to why that is. We do not provide one, but we show how, in a simulation of a multiple-round roulette game in which each player’s winnings at each round determine his odds in the next one, the distribution of chips among the players eventually comes close to the 80/20 rule. This happens even though it is a pure game of chance: players start out with equal numbers of chips and have no way to influence the outcome.
Assume we use a roulette wheel to allocate chips among players in multiple rounds, with the following rules:
- All players start with the same number of chips.
- At each round, the wheel reallocates the chips, and each player's odds are proportional to his holdings at the end of the previous round.
- Whatever a player wins comes out of the other players' hands.
- A player left with no chips is out of the game.
This is a pure game of chance: there is nothing players can do to influence the outcome, and the players who do best in one round are set to do even better in the next one. It is also a zero-sum game, in that whatever a player wins comes out of the other players' hands. There is no safety net: a player with no chips after any round is out of the game. If you know the present state of the game (how many chips the player in each rank has), then the results of future rounds are independent of the path by which you arrived at that state. Technically, it is a random walk and a Markov process. The concept of the game is shown in Figure 1.
It could be viewed as representing a simplified Oklahoma Land Rush, in which farmers of equal talent are initially given identical plots of land. Even in the first round, some plots will have higher crop yields than others. If the average crop is needed to survive, the farmers with a surplus sell it to the farmers with a shortfall in exchange for land, and start the next season with land in proportion to the crop they harvested in the previous one. The players could also be functionally equivalent, similarly priced consumable products sold in the same channels, like toothpastes or soft drinks, with the chips as the buyers of these products.
If we run this game through enough rounds, do we tend towards a distribution where 20% of the players hold 80% of the chips, or do all the chips end up in the hands of only one player? We simulated this game in software with 1 million chips to be divided among 1,000 players in 100,000 rounds, and the simulation takes about 2 minutes to run on a desktop PC.
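The following Python sketch shows one way to implement such a game; it assumes the wheel reallocates all the chips at each round through a multinomial draw with odds proportional to each player's current holdings, which satisfies the rules above but may differ in detail from the implementation used for the figures. The round count is cut to 10,000, which, as noted below, is enough for the distribution to stabilize:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHIPS, N_PLAYERS, N_ROUNDS = 1_000_000, 1_000, 10_000

chips = np.full(N_PLAYERS, N_CHIPS // N_PLAYERS)

for _ in range(N_ROUNDS):
    # Each round, the wheel reallocates every chip; a player's odds are
    # proportional to current holdings, so a player at zero stays at zero.
    chips = rng.multinomial(N_CHIPS, chips / N_CHIPS)

# Fraction of the players needed to account for 80% of the chips
sorted_chips = np.sort(chips)[::-1]
cumulative = np.cumsum(sorted_chips)
top = np.searchsorted(cumulative, 0.8 * N_CHIPS) + 1
print(f"Top {top / N_PLAYERS:.1%} of players hold 80% of the chips")
print(f"Players still in the game: {(chips > 0).sum()}")
```

Because the multinomial draw conserves the total number of chips, the game is zero-sum by construction, and the state after each round depends only on the current holdings, making the Markov property explicit.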
The results show that the “wealth distribution” is stable beyond 10,000 rounds, with 23% of the players collectively holding 80% of the chips. Figure 2 shows the distribution after 100,000 rounds, which roughly satisfies the 80/20 law, but not recursively: it takes the top 49% of the players to account for 96% of the chips, instead of the 36% that a recursive application of the 80/20 law would predict.
Figure 3 shows other indicators of the simulation results every 10,000 rounds. The proportion of the top players holding 80% of all the chips oscillates between 21.5% and 25%, with a very slow downward trend. The number of chips held by the top 10 players also varies, but is essentially flat.
Figure 3. Indicators every 10,000 rounds
This pure game of chance, which at every round favors the winners of the previous round, does not rapidly concentrate all the chips in a few hands. Instead, it produces a distribution that is as close to Pareto’s as anything we can observe in demographics, quality, or production control. This suggests the following conclusions:
The simulation has shown the limitations of the conclusions drawn from a Pareto chart. When a product is a runner, by definition it deserves a dedicated production line. That is a direct consequence of its being at the top of the chart. It does not, however, imply that it is “better” in any way than the other products. The simulation arrived at a near-80/20 distribution of the chips without assuming any difference in the players’ ability to acquire chips: the roulette wheel has no favorites, and the players can do nothing to influence the outcome.
By Michel Baudin • Tags: industrial engineering, Lean manufacturing, Management, Pareto