Jan 13 2011

## Lead times, work sampling, and Little’s Law

*On 1/11/2011, Michael Thelen asked in the NWLEAN forum about “laws of nature” as they related to Lean. This is based on one of my answers.*

Lead time is a key performance indicator of manufacturing operations, but how do you measure it? It is not a quantity that you can directly observe by walking through a plant. To measure it directly you need to retrieve start and finish timestamps from historical data, assuming they are available and accurate. Or you could put tracers on a sample of parts, which means that it would take you at least six weeks to measure a six-week lead time. In most plants, however, a quick and rough estimate is more useful than a precise one that takes extensive time and effort to achieve.

That is where work sampling and Little’s Law come in handy. The key idea of work sampling, which the Wikipedia article fails to make clear, is that it lets you infer a breakdown of each entity’s status *over time* from *snapshots* of the status of multiple identical entities. If, every time you go to the shop floor, you see 2 out of 10 operators walking the aisle, you infer that, on average, each operator spends 20% of the time walking the aisle.
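This inference can be checked with a small simulation. The sketch below is hypothetical: it assumes 10 operators who each, unknown to the observer, spend 20% of their time walking, and it shows that pooling many snapshots recovers that fraction.

```python
import random

random.seed(0)

TRUE_FRACTION = 0.2   # assumed: each operator walks the aisle 20% of the time
OPERATORS = 10
SNAPSHOTS = 200       # number of walk-throughs of the shop floor

# Each snapshot: count how many of the 10 operators happen to be walking.
walking_counts = [
    sum(random.random() < TRUE_FRACTION for _ in range(OPERATORS))
    for _ in range(SNAPSHOTS)
]

# Work-sampling estimate: total walking observations / total observations
estimate = sum(walking_counts) / (OPERATORS * SNAPSHOTS)
print(f"Estimated fraction of time walking: {estimate:.1%}")
```

With 2,000 pooled observations, the estimate lands within a percentage point or two of the true 20%, which is why a handful of walk-throughs beats weeks of time studies when rough numbers suffice.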

There are obviously necessary conditions for such an inference to be valid. For example, you want to take snapshots during normal operations, not during startup or shutdown, and the group you are measuring must be homogeneous: if it consists of two materials handlers and eight production operators, the 20% average is not interesting, even if it is accurate. Work sampling is usually described as applied to *people*, but the same logic is applicable to machines and to *work pieces*, and that is what makes it possible to infer lead times from snapshots of inventory and throughput rates.

On the shop floor, you can count parts, bins or pallets, and observe the pace at which they are being consumed. Let us assume we are in the context shown in Figure 1, and want to estimate how long we take to turn a blank into a finished good.

**Figure 1. Context of Little’s Law**

Little’s Law, then, says that on the average, in steady state, within one process or process segment,

Inventory = Lead time × Throughput

The reason this is true is best explained graphically, as in Figure 2, in the simple case of constant throughput and lead time. The cumulative count of blanks in is a straight line going up over time, and so is the count of finished goods out, offset by the lead time. The vertical distance between the curves is the number of blanks that have come in but not yet made it out as products, and represents therefore the inventory. The slope of each curve is the throughput, and it is clearly the ratio of the inventory to the lead time.

**Figure 2. Little’s Law with constant throughput and lead time**

What is interesting about Little’s Law is that it remains valid for averages when both the arrival rate of blanks and the departure rate of finished goods are allowed to fluctuate randomly about their means. This is probably the best known and most useful general result of queueing theory.

Since we can count inventory and measure throughput, we can infer average lead times from these data alone. One snapshot will not give you an accurate estimate, but taking a few snapshots of a production line is still considerably easier than researching history. The point is to get quickly within range of an answer that would take much longer to obtain accurately.
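The calculation itself is one division. The sketch below uses made-up snapshot numbers, just to show the mechanics: average the work-in-process counts and throughput rates from a few walk-throughs, then apply Little's Law.

```python
# Hypothetical data from three walk-throughs of a line:
# WIP counted in pieces, throughput observed in pieces/day.
snapshots = [
    {"wip": 1200, "throughput_per_day": 400},
    {"wip": 1350, "throughput_per_day": 420},
    {"wip": 1100, "throughput_per_day": 390},
]

avg_wip = sum(s["wip"] for s in snapshots) / len(snapshots)
avg_throughput = sum(s["throughput_per_day"] for s in snapshots) / len(snapshots)

# Little's Law: Inventory = Lead time x Throughput, so
lead_time_days = avg_wip / avg_throughput
print(f"Estimated average lead time: {lead_time_days:.1f} days")
```

Here the estimate comes out to about three days, without a single timestamp having been retrieved from history.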

Jan 21 2011

## Learning or experience curves

The following is a revision of a posting on NWLEAN in January 2011, in response to Mike Thelen’s call for “laws of nature” in manufacturing.

Learning curves are often mentioned informally, as in “there is a learning curve on this tool,” just to say that it takes learning and practice to get proficient at it. There is, however, a formal version expressing costs as a function of cumulative production volume during the life of a manufactured product. T. P. Wright first introduced the learning curve concept in the US aircraft industry in 1936, about labor costs; Bruce Henderson generalized it into the experience curve, to include all costs, particularly those of purchased components.

The key idea is to look at *cumulative* volume. After all, how many units of a product you have made since you started *is* your experience, and it stands to reason that the more you have already made of a product, the easier and cheaper it becomes to build one more. The x-axis of the experience curve is defined clearly and easily. The y-axis, on the other hand, is the cost per unit of the product, one of those characteristics that are commonly discussed as if they were well-defined, intrinsic properties like weight or color. Unit costs really are a function of current production volume, and contain allocations that can be calculated in different ways for shared resources and for resources used over time. The classic reference on the subject, Bruce Henderson’s *Perspectives on Experience* (1972), glosses over these difficulties and presents empirical evidence about *prices* rather than costs.

Assuming an unambiguous and meaningful definition of unit costs, it is reasonable to expect them to decline as experience in making the product accumulates. But what might be the shape of the cost decline curve? Engineers like to plot quantities and look for straight lines on various kinds of graph paper, and, even before looking at empirical data, we can reflect on the logic of the most common types of models.
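The model usually attributed to Wright is an inverse-power law: each doubling of cumulative volume multiplies unit cost by a fixed factor, the learning rate. A minimal sketch, with illustrative numbers (a $100 first unit on an 80% curve):

```python
import math

def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.8):
    """Inverse-power learning curve: cost of the nth unit, where each
    doubling of cumulative volume multiplies unit cost by the learning
    rate (0.8 = an "80% curve")."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return first_unit_cost * cumulative_units ** b

# On an 80% curve, unit 2 costs 80% of unit 1, unit 4 costs 64%, ...
for n in (1, 2, 4, 8):
    print(n, round(wright_unit_cost(100.0, n), 1))
```

Taking logarithms of both sides turns this power law into a straight line, which is why such data line up on log-log paper.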

I don’t know of any deeper theoretical justification for using inverse-power laws in learning or experience curves. Henderson investigated the prices of various industrial products. I remember in particular his analysis of the Ford Model T, which showed prices from 1908 to 1927 that were consistent with a fixed percentage drop in unit costs for each doubling of cumulative volume. The prices followed an obvious straight line on a log-log plot, suggesting that the underlying costs did the same.

Today, you don’t hear much about experience curves in the car industry, but you do in electronics, where products have much shorter lives and the curve is a key factor in planning. When I was working in semiconductors, I remember a proposal from a Japanese electronics manufacturer that was designing one of our chips into a product. Out of curiosity, I plotted the declining prices they were offering to pay for increasing quantities on log-log scales, and found that they were perfectly aligned. There was no doubt that this was how they had come up with the numbers.
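The same check is easy to reproduce. The sketch below uses fabricated price quotes, generated to lie on roughly an 85% curve, and fits a straight line to log(price) versus log(quantity); the slope converts back into the implied percentage drop per doubling of volume.

```python
import math

# Hypothetical price quotes: unit price offered at increasing quantities.
quantities = [1_000, 2_000, 5_000, 10_000, 20_000]
prices = [10.00, 8.50, 6.84, 5.81, 4.94]  # roughly an 85% curve

# Least-squares fit of log(price) against log(quantity)
xs = [math.log(q) for q in quantities]
ys = [math.log(p) for p in prices]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))

# A slope of b on log-log scales means each doubling multiplies price by 2^b.
learning_rate = 2 ** slope
print(f"Implied learning rate: {learning_rate:.0%} per doubling")
```

If quotes from a supplier or customer align this well, it is a strong hint that an experience-curve model, not a cost build-up, produced them.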

The slope of your own curve is a function of your improvement abilities. Your market share then determines where you are on the x-axis: the higher your market share, the faster your cumulative production volume grows. Being first lets you grab market share early; being farther along the curve than your competitors lets you retain it.

By Michel Baudin • Laws of nature •