Michel Baudin's Blog
Ideas from manufacturing operations

Mar 3 2011

A factory can always be improved

Based on an NWLEAN post entitled: Laws of Nature – Pareto efficiency and Pareto improvements, from 3/3/2011 

In manufacturing, Italian economist Vilfredo Pareto is mostly known for the Pareto diagrams and the 80/20 law, but  in economics, he is also known for the unrelated concept of Pareto efficiency, or Pareto optimality, which is also relevant to Lean. A basic tenet of Lean is that a factory can always be improved, and that, once you have achieved any level of performance, it is just the starting point for the next round of improvement. Perfection is something you never achieve but always pursue and, if you dig deep enough, you always find opportunities. This is the vocabulary you use when discussing the matter with fellow production people. If, however, you are taking college courses on the side, you might score more points with your instructor by saying, as an empirical law of nature, that a business system is never Pareto-efficient. It means the same thing, but our problem is that this way of thinking is taught neither in Engineering nor in Business school, and that few managers practice it.

A system is Pareto-efficient if you cannot improve any aspect of its performance without making something else worse. Managers who believe their factories to be Pareto-efficient think, for example, that you cannot improve quality without lengthening lead times and increasing costs; yet improving quality while shortening lead times and reducing costs is exactly what Lean does. In fact, eliminating waste is synonymous with improving some dimensions of performance without degrading anything else, which is to say, with taking advantage of the lack of Pareto-efficiency in the plant.
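
To make the definition concrete, here is a minimal sketch in Python, with made-up performance dimensions and numbers, of what it means for a change to be a Pareto improvement, and therefore proof that the system was not Pareto-efficient to begin with:

    # Minimal sketch of a Pareto-improvement check. The dimensions and numbers are
    # made up for illustration; higher is better on every dimension here.

    def is_pareto_improvement(before, after):
        """True if 'after' is at least as good as 'before' on every dimension
        and strictly better on at least one."""
        at_least_as_good = all(a >= b for a, b in zip(after, before))
        strictly_better = any(a > b for a, b in zip(after, before))
        return at_least_as_good and strictly_better

    # (first-pass yield %, on-time delivery %, cost score, where higher = cheaper)
    before = (95.0, 88.0, 70.0)
    after = (97.0, 91.0, 70.0)  # better quality and delivery, no worse on cost

    # Prints True: such a change is only possible if the system was not Pareto-efficient.
    print(is_pareto_improvement(before, after))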

When we say that a factory can always be improved, it is a postulate, an assumption you start from when you walk through the gates. The overwhelming empirical evidence is that, if you make that assumption, you find improvement opportunities. Obviously, if you don’t make that assumption, you won’t find any, because you won’t be trying.

This is not a minor issue. Writing in the Harvard Business Review back in 1991 about Activity-Based Costing, Robert Kaplan stated that all the possible shop floor improvements had already been made over the previous 50 years. He was teaching his MBA students that factories were Pareto-efficient and that it was therefore pointless to try and improve them. They would do better to focus on financial engineering and outsource production.

The idea that improving factories is futile and a distraction from more “strategic” pursuits dies hard. It is expressed repeatedly in a variety of ways. The diminishing-returns argument is that, as you keep reaching for fruit that hangs ever higher, the effort required starts being excessive with respect to the benefits, but there are two things to consider:

  • As you make improvements, you enhance not only performance but your own skills as well, so that some of what was out of reach before no longer is.
  • Competition is constantly raising the bar. If your competitors keep improving and you don’t, you lose.

Another argument is that the focus on waste elimination discourages activities like R&D that do not have an immediate impact on sales. The improvement effort, however, isn’t about what we do but how we do it. Nobody in his right mind would call R&D waste, even on projects that fail. Waste in R&D comes in the form of researchers waiting for test equipment, sitting through badly organized meetings, or filling out administrative paperwork.

In manufacturing itself, some see the pursuit of improvement as a deterrent to investment in new technology. While it is clear that the improvement mindset does not lead to solving every problem by buying new machines, the practitioners of continuous improvement are in fact better informed, savvier buyers of new technology. On one side of the shop floor, you see a cell with old machines on which incremental improvements over several years have reduced staffing requirements from 5 operators to 1. On the other side of the aisle, you see a brand new, fully automatic line with a design that incorporates the lessons learned on the old one.

Others have argued that a society that pursues improvement will be slower to develop and adopt new, disruptive technology. But does the machinist improving a fixture deter the founder of the next Facebook? There is no connection. If the machinist were not making improvements, his creativity would most likely be untapped. And his improvement work does not siphon off the venture capital needed for disruptive technology.


By Michel Baudin • Laws of nature • 14 • Tags: Autonomation, Continuous improvement, industrial engineering, jidoka, Line design, Manufacturing engineering

Feb 11 2011

Comparative advantage in the allocation of work among machines

Another NWLEAN post in response to Mike Thelen’s query on Laws of Nature, posted on 2/11/2011

On several occasions, I ran into the problem of allocating work among machines of different generations with overlapping capabilities. There were several products that could be processed to the same levels of quality in both the new and the old machines. The machines worked differently. For example, the old machines would process parts in batches while the new ones supported one-piece flow. But the resulting time per part was shorter on the new machines for all products. In other words, the new machines had a higher capacity for everything.

Given that the products were components going into the same assemblies, they were to be made in matching quantities per the assembly bill of materials and the demand was such that the plant had to make as many matching sets as possible. The question then is: how do you allocate the work among the machines?

When I first saw this problem, I thought it was unique but, in fact, it is quite common: many machine shops keep multiple generations of machines on their floors and make parts in matching sets for their customers. The solution that maximizes the total output is to apply the law of comparative advantage from classical economics. Adapted to this context, it says that the key is the ratio of performance between the old and the new machines on each product. For example, if the new machine can do product X 30% faster than the old machine and product Y ten times faster, then the old machine is said to have a comparative advantage on product X, and you should run as much as possible of product X on the old machine.
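
The allocation rule is easy to express in code. The following is a minimal sketch in Python, with made-up products, cycle times, quantities, and capacity: rank products by the new machine’s speed-up factor, load the old machine with the products on which its disadvantage is smallest, and send the remainder to the new machine.

    # Minimal sketch of allocation by comparative advantage between one old and one
    # new machine. Products, cycle times, quantities, and capacity are made up.

    times = {  # minutes per part; the new machine is faster on everything
        "X": {"old": 13.0, "new": 10.0},  # new machine only 30% faster
        "Y": {"old": 50.0, "new": 5.0},   # new machine ten times faster
    }
    demand = {"X": 60, "Y": 60}  # matching quantities per the bill of materials
    old_capacity = 480.0         # minutes available on the old machine per shift

    # The old machine has a comparative advantage where the new machine's
    # speed-up factor (old time / new time) is smallest.
    ranked = sorted(times, key=lambda p: times[p]["old"] / times[p]["new"])

    allocation = {p: {"old": 0, "new": 0} for p in times}
    for p in ranked:
        fit = int(old_capacity // times[p]["old"])  # parts the old machine can still do
        qty_old = min(demand[p], fit)
        allocation[p]["old"] = qty_old
        allocation[p]["new"] = demand[p] - qty_old  # remainder goes to the new machine
        old_capacity -= qty_old * times[p]["old"]

    print(allocation)  # {'X': {'old': 36, 'new': 24}, 'Y': {'old': 0, 'new': 60}}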

It is a bit surprising at first, but easy to apply. What is more surprising is that so few plants do. The logic that is actually most commonly used is to load up the new machine with as much work as possible, on the grounds that it has a high depreciation and needs to “earn its keep.” What many managers have a difficult time coming to terms with is that what you paid for a machine and when you paid it is irrelevant when allocating work, because it is in the past and nothing you do will change it. You produce today with the machines you have, and the only thing that matters is what they can do, now and in the future.

The law of comparative advantage is taught in economics, not in manufacturing or industrial engineering, and pertains to the benefits of free trade between countries, not to work allocation among machines. The similarity is not obvious. This law is attributed to David Ricardo, who published it in 1817, based on an analysis of the production of wine and cloth in England and Portugal. Trade was free because, at the time, Portugal was under British occupation. Both wine and cloth were cheaper to produce in Portugal, but wine was much cheaper and cloth only slightly cheaper. England therefore had a comparative advantage on cloth, and the total output of wine and cloth was maximized by specializing England on cloth and Portugal on wine. You transplant that reasoning to your machine shop by mapping countries to machines and costs to process times.

This simple approach works in a specific context. It is not general, but it is of value because that context occurs in reality. The literature on operations research is full of more complicated ways to arrive at solutions in different situations. I wrote about this in an article in the July 2006 issue of IE Magazine, entitled “Not-so-basic equipment: the pitfalls to avoid when allocating work among machines.” It used to be available online for free on the magazine’s web site; now you have to buy it on Amazon to download it.


By Michel Baudin • Laws of nature • 0 • Tags: industrial engineering, Lean manufacturing, Manufacturing engineering

Jan 21 2011

Learning or experience curves

The following is a revision of a posting on NWLEAN in January, 2011 in response to Mike Thelen’s call for “Laws of nature” in manufacturing.

Learning curves are often mentioned informally, as in “there is a learning curve on this tool,” just to say that it takes learning and practice to get proficient at it. There is, however, a formal version expressing costs as a function of cumulative production volume during the life of a manufactured product. T. P. Wright first introduced the learning curve concept in the US aircraft industry in 1936, applied to labor costs; Bruce Henderson later generalized it into the experience curve, which includes all costs, particularly those of purchased components.

The key idea is to look at cumulative volume. After all, how many units of a product you have made since you started is your experience, and it stands to reason that the more you have already made of a product, the easier and cheaper it becomes to build one more. The x-axis of the experience curve is defined clearly and easily. The y-axis, on the other hand, is the cost per unit of the product, one of those characteristics that are commonly discussed as if they were well-defined, intrinsic properties like weight and color. Unit costs really are a function of current production volume, and they contain allocations that can be calculated in different ways for shared resources and for resources used over time. The classic reference on the subject, Bruce Henderson’s Perspectives on Experience (1972), glosses over these difficulties and presents empirical evidence about prices rather than costs.

Given an unambiguous and meaningful definition of unit costs, it is reasonable to expect them to decline as experience in making the product accumulates. But what might be the shape of the cost decline curve? Engineers like to plot quantities and look for straight lines on various kinds of graph paper. Even before looking at empirical data, we can reflect on the logic of the most common types of models:

  1. In a plot of unit cost versus cumulative volume in regular, Cartesian coordinates, a straight line means a linear cost decline, which makes no sense because you would end up with negative costs for a sufficiently large volume.
  2. In a semi-logarithmic plot, a straight line would mean an exponential cost decline, which makes no sense either, because you could make an infinite volume at a finite cost.
  3. If you try a log-log plot, a straight line means an inverse-power cost decline, meaning, for example, that the unit cost drops by 20% every time the cumulative volume doubles. This approach has none of the above problems: it represents a smooth decline as long as production continues, slow enough that the cumulative cost keeps growing to infinity with the volume, as sketched below.
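
As a sketch, assuming an “80% curve” (a 20% drop in unit cost for each doubling of cumulative volume) and a made-up first-unit cost, the inverse-power model looks like this:

    # Minimal sketch of an inverse-power experience curve, with made-up numbers.
    # Unit cost of the n-th unit: c(n) = c1 * n**(-b), where 2**(-b) = 0.8 for an
    # "80% curve", i.e. a 20% cost drop per doubling of cumulative volume.
    import math

    c1 = 100.0           # assumed cost of the first unit
    b = -math.log2(0.8)  # ~0.322 for an 80% curve

    def unit_cost(n):
        return c1 * n ** (-b)

    for n in (1, 2, 4, 8, 16):
        print(n, round(unit_cost(n), 1))  # 100.0, 80.0, 64.0, 51.2, 41.0

    # On a log-log plot, log c(n) = log c1 - b*log n is a straight line of slope -b,
    # which is why the exponent can be estimated by fitting a line to logged data.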

I don’t know of any deeper theoretical justification for using inverse-power laws in learning or experience curves. Henderson investigated the prices of various industrial products. I remember in particular his analysis of the Ford Model T, which showed prices from 1908 to 1927 that were consistent with a fixed percentage drop in unit costs for each doubling of the cumulative volume. The prices followed an obvious straight line on a log-log plot, suggesting that the underlying costs did the same.

Today, you don’t hear much about experience curves in the car industry, but you do in electronics, where products have much shorter lives and this curve is a key factor in planning. When working in semiconductors, I remember a proposal from a Japanese electronics manufacturer that was designing one of our chips into a product. Out of curiosity, I plotted the declining prices they were offering to pay for increasing quantities on log-log scales, and found that they were perfectly aligned. There was no doubt that this was how they had come up with the numbers.

The slope of your own curve is a function of your improvement abilities. Your market share then determines where you are on the x-axis: the higher your market share, the faster your cumulative production volume grows. Being first lets you grab market share early; being farther along the curve than your competitors allows you to retain it.


By Michel Baudin • Laws of nature • 0


Jan 13 2011

Lead times, work sampling, and Little’s Law

On 1/11/2011, Michael Thelen asked in the NWLEAN forum about “laws of nature” as they related to Lean. This is based on one of my answers. 

Lead time is a key performance indicator of manufacturing operations, but how do you measure it? It is not a quantity that you can directly observe by walking through a plant. To measure it directly you need to retrieve start and finish timestamps from historical data, assuming they are available and accurate. Or you could put tracers on a sample of parts, which means that it would take you at least six weeks to measure a six-week lead time. In most plants, however, a quick and rough estimate is more useful than a precise one that takes extensive time and effort to achieve.

That is where work sampling and Little’s Law come in handy. The key idea of work sampling, which the Wikipedia article fails to make clear, is that it lets you infer a breakdown of each entity’s status over time from snapshots of the status of multiple identical entities. If, every time you go to the shop floor, you see 2 out of 10 operators walking the aisle, you infer that, on average, each operator spends 20% of the time walking the aisle.

There are obviously necessary conditions for such an inference to be valid. For example, you want to take snapshots during normal operations, not during startup or shutdown, and the group you are measuring must be homogeneous: if it is made up of two materials handlers and eight production operators, the 20% average is not interesting, even if it is accurate. Work sampling is usually described as applied to people, but the same logic is applicable to machines and to work pieces, and that is what makes it possible to infer lead times from snapshots of inventory and throughput rates.

On the shop floor, you can count parts, bins or pallets, and observe the pace at which they are being consumed. Let us assume we are in the context shown in Figure 1, and want to estimate how long we take to turn a blank into a finished good.


Figure 1. Context of Little’s Law

Little’s Law, then, says that, on average, in steady state, within one process or process segment,

Inventory = Lead Time × Throughput

The reason this is true is best explained graphically, as in Figure 2, in the simple case of constant throughput and lead time. The cumulative count of blanks in is a straight line going up over time, and so is the count of finished goods out, offset by the lead time. The vertical distance between the curves is the number of blanks that have come in but not yet made it out as products, and therefore represents the inventory. The slope of each curve is the throughput, which is clearly the ratio of the inventory to the lead time.

Figure 2. Little’s Law with constant throughput and lead time

What is interesting about Little’s Law is that it remains valid for the averages when the arrival rate of blanks and the departure rate of finished goods are both allowed to fluctuate randomly about their means. This is probably the best known and most useful general result of queueing theory.

Since we can count inventory and measure throughput, we can infer average lead times from just this data. One snapshot will not give you an accurate estimate, but taking a few snapshots of a production line is still considerably easier than researching history. The point is to get close to an answer quickly, rather than take much longer to be precisely accurate.
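
As a minimal sketch, with made-up inventory counts and a throughput figure assumed to come from production records, the estimate is a one-line application of Little’s Law:

    # Minimal sketch of estimating lead time with Little's Law from a few shop floor
    # snapshots. The counts and the throughput are made-up numbers for illustration.

    snapshots = [430, 470, 450]  # WIP counts (parts) from three walk-throughs
    throughput = 150.0           # parts per day, from production records

    avg_inventory = sum(snapshots) / len(snapshots)

    # Little's Law: Inventory = Lead Time x Throughput, so
    lead_time_days = avg_inventory / throughput

    print(f"Estimated lead time: {lead_time_days:.1f} days")  # 3.0 days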


By Michel Baudin • Laws of nature • 2 • Tags: Continuous improvement, industrial engineering, Lean manufacturing, Management, Manufacturing engineering, Manufacturing


Apr 22 2010

Lean is about simultaneously improving all dimensions of performance, including quality

This is in response to Mike Micklewright’s question on Why Is Quality So Rarely Central In Lean?:

“I see so many internal Lean “experts” using “Lean” as a means to increase efficiencies and productivity, and therefore, reduce costs.  They still do not see the connection to quality.  They see quality and the reduction of variation in significant product characteristics as something that is outside of the Lean scope and something that should be handled by the quality folks independently of the lean effort.  What a shame!  If you agree with this observation, why does this exist and what can we do to change this perception?”

Following is my response:

Quality not central to Lean? Says who? Lean is about simultaneously improving all dimensions of performance, including quality. Quality professionals frequently miss this, because what they learned primarily addresses process capability issues that are central only in high technology, where, if your process is mature, your product is obsolete. This is the context where statistical approaches like Six Sigma make a difference.

Modern machine tools, on the other hand, can easily hold required tolerances, and most quality problems are not due to lack of process capability. They are instead due to discrete failure of the equipment or human error. The main issue with discrete equipment failures is to detect them quickly so that they affect few parts and can be diagnosed before their trail is cold. With one-piece flow, defects are detected immediately instead of being buried in WIP, and this is why conversion from batch production to one-piece flow typically yields large improvements in quality.

The next step, which Dennis alluded to, is having machines stop as soon as they start producing defectives, but this still leaves human error, and that is addressed by mistake-proofing. Beyond these approaches, there is also management to prevent deterioration over time, and planned responses to potential new problems.

This is a hierarchy of approaches. Actual numbers vary, but, in orders of magnitude, statistical tools will get you from 30% defectives to 3%, one-piece flow to 0.3%, mistake-proofing to 15ppm, and I know of one case of a Toyota supplier achieving <1ppm on some parts.


By Michel Baudin • Management • 0 • Tags: Lean, Mistake-Proofing, One-piece flow, Poka-Yoke, Process capability, Quality, Six Sigma
