Acceptance Sampling In The Age Of Low PPM Defectives

Today, some automotive parts manufacturers are able to deliver one million consecutive units without a single defective, and pondering quality management practices appropriate for this level of performance is not idle speculation. Of course, it is only achieved by outstanding suppliers using mature processes in mature industries. You cannot expect it during new product introduction or in high-technology industries where, if your processes are mature, your products are obsolete.

While still taught as part of the quality curriculum, acceptance sampling has been criticized by authors like W. E. Deming and is not part of the Lean approach to quality. For qualified items from suppliers you trust, you accept shipments with no inspection; for new items or suppliers you do not trust, you inspect 100% of incoming units until the situation improves. Let us examine both what the math tells us about this and possible management actions, with the help of 21st century IT.
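
What the math tells us can be made concrete with the operating characteristic of a sampling plan. The following is a minimal sketch, assuming a single-sampling plan of 125 units with acceptance on zero defectives found, of the probability of accepting a lot as a function of its true defective rate; at rates measured in parts per million, the sample almost never contains a defective, so the plan provides essentially no discrimination at this level of performance.

```python
from math import comb

def p_accept(p, n, c):
    """Probability that a single-sampling plan (sample n units, accept the lot
    if at most c defectives are found) accepts a lot with defective rate p,
    using the binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Assumed plan: sample 125 units, accept on zero defectives found
n, c = 125, 0
for p in (0.02, 0.001, 1e-6):  # 2%, 1,000 ppm, and 1 ppm defective
    print(f"defective rate {p:>8}: P(accept) = {p_accept(p, n, c):.6f}")
```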

Saturation In Manufacturing Versus Service

In Capacity Planning For 1st Responders, we considered the problem of dimensioning a group so that there is at least one member available when needed. Not all service groups, however, are expected to respond immediately to all customers. Most, from supermarket check stands and airport check-in counters to clinics for non-emergency health care, allow some amount of queueing, giving rise to the question of how long the queues become when the servers get busy.

Patients waiting in Emergency Room

At one point in his latest book, Andy and Me And The Hospital, Pascal Dennis writes that the average number of patients in an emergency room is inversely proportional to the availability of the doctors. The busier the doctors are, the more dramatic the effect. For example, if they go from being busy 98% of the time to 99%, their availability drops by half, from 2% to 1%, and the mean number of patients doubles. Conversely, any improvement in emergency room procedures that provides the same service while reducing the doctors’ utilization from 99% to 98% cuts the mean number of patients, and their mean waiting time, in half.
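
Here is a minimal sketch of the arithmetic, assuming, crudely, that the emergency room behaves like a single-server M/M/1 queue, in which the mean number in the system is utilization/(1 - utilization), roughly the reciprocal of availability when the doctors are nearly saturated:

```python
def mean_in_system(utilization):
    """Mean number in an M/M/1 system: L = rho / (1 - rho).
    For utilization close to 1, this is approximately 1 / availability."""
    return utilization / (1 - utilization)

for rho in (0.98, 0.99):
    availability = 1 - rho
    print(f"busy {rho:.0%}, available {availability:.0%}: "
          f"mean number in system = {mean_in_system(rho):.0f}")
```

Halving the availability from 2% to 1% doubles the mean number of patients, as described above.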

Variability, Randomness, And Uncertainty in Operations

This elaborates on the topics of randomness versus uncertainty that I briefly touched on in a prior post. Always skittish about using dreaded words like “probability” or “randomness,” writers on manufacturing or service operations, even Deming, prefer to use “variability” or “variation” for the way both demand and performance change over time, but these terms do not mean the same thing as randomness. For example, a hotel room that goes for $100/night in November through March and $200/night from April to October has a price that is variable but not random. The rates are published, and you know them ahead of time.

By contrast, to a passenger, the airfare from San Francisco to Chicago is not only variable but random. The airlines change tens of thousands of fares every day in ways you discover when you book a flight. Based on having flown this route four times in the past 12 months, however, you expect the fare to be in the range of $400 to $800, with $600 as the most likely. The information you have is not complete enough for you to know what the price will be but it does enable you to have a confidence interval for it.
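
For illustration only, here is a minimal sketch of turning a small sample into a confidence interval, with four hypothetical fares chosen to average $600; real fares are neither independent nor identically distributed, so this is only a caricature of the reasoning:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical fares from the four flights in the past 12 months
fares = [480, 560, 630, 730]

n = len(fares)
m = mean(fares)
se = stdev(fares) / sqrt(n)  # standard error of the mean
t = 3.182                    # t value for 95% confidence with 3 degrees of freedom

print(f"point estimate ${m:.0f}, 95% confidence interval "
      f"${m - t * se:.0f} to ${m + t * se:.0f}")
```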

How to Pick the Fastest Line at the Supermarket | New York Times [Debunk]

Inside a Whole Foods in Brooklyn (New York Times)

“[…] Choose a single line that leads to several cashiers

Not all lines are structured this way, but research has largely shown that this approach, known as a serpentine line, is the fastest. The person at the head of the line goes to the first available window in a system often seen at airports or banks. […]”

Sourced through the New York Times

Michel Baudin’s comments:

No! Research shows no such thing. The serpentine line does not reduce the customers’ mean time through the system. Little’s Law tells us that, in steady state, regardless of how the queue is organized:

$$\text{Mean time in system} = \frac{\text{Mean number of customers in system}}{\text{Mean throughput}}$$

A factory can always be improved

Based on an NWLEAN post entitled: Laws of Nature – Pareto efficiency and Pareto improvements, from 3/3/2011 

In manufacturing, Italian economist Vilfredo Pareto is mostly known for the Pareto diagrams and the 80/20 law, but in economics he is also known for the unrelated concept of Pareto efficiency, or Pareto optimality, which is also relevant to Lean. A basic tenet of Lean is that a factory can always be improved, and that, once you have achieved any level of performance, it is just the starting point for the next round of improvement. Perfection is something you never achieve but always pursue and, if you dig deep enough, you always find opportunities.

This is the vocabulary you use when discussing the matter with fellow production people. If, however, you are taking college courses on the side, you might score more points with your instructor by saying, as an empirical law of nature, that a business system is never Pareto-efficient. It means the same thing, but our problem is that this way of thinking is taught neither in Engineering nor in Business school, and that few managers practice it.

A system is Pareto-efficient if you cannot improve any aspect of its performance without making something else worse. Managers who believe their factories to be Pareto-efficient think, for example, that you cannot improve quality without lengthening lead times and increasing costs, when improving quality while shortening lead times and reducing costs is exactly what Lean does. In fact, eliminating waste is synonymous with improving some dimensions of performance without degrading anything else, that is, with taking advantage of the lack of Pareto-efficiency in the plant.

When we say that a factory can always be improved it is a postulate, an assumption you start from when you walk through the gates. The overwhelming empirical evidence is that, if you make that assumption, you find improvement opportunities. Obviously, if you don’t make that assumption, you won’t find any, because you won’t be trying.

This is not a minor issue. Writing in the Harvard Business Review back in 1991 about Activity-Based Costing, Robert Kaplan stated that all the possible shop floor improvements had already been made over the previous 50 years. He was teaching his MBA students that factories were Pareto-efficient and that it was therefore pointless to try and improve them. They would do better to focus on financial engineering and outsource production.

The idea that improving factories is futile and a distraction from more “strategic” pursuits dies hard. It is expressed repeatedly in a variety of ways. The diminishing-returns argument is that, as you keep reaching for fruit that hangs ever higher, the effort required starts being excessive with respect to the benefits, but there are two things to consider:

  • As you make improvements, you enhance not only performance but your own skills as well, so that some of what was out of reach before no longer is.
  • Competition is constantly raising the bar. If your competitors keep improving and you don’t, you lose.

Another argument is that the focus on waste elimination discourages activities like R&D that do not have an immediate impact on sales. The improvement effort, however,  isn’t about what we do but how we do it. Nobody in his right mind would call R&D waste, even on projects that fail. Waste in R&D comes in the form of researchers waiting for test equipment, sitting through badly organized meetings, or filling out administrative paperwork.

In manufacturing itself, some see the pursuit of improvement as a deterrent to investment in new technology. While it is clear that the improvement mindset does not lead to solving every problem by buying new machines,  the  practitioners of continuous improvement are in fact better informed, savvier buyers of new technology. On one side of the shop floor, you see a cell with old machines on which incremental improvements over several years have reduced staffing requirements from 5 operators to 1. On the other side of the aisle, you see a brand new, fully automatic line with a design that incorporates the lessons learned on the old one.

Others have argued that a society that pursues improvement will be slower to develop and adopt new, disruptive technology. But does the machinist improving a fixture deter the founder of the next Facebook? There is no connection. If the machinist were not making improvements, his creativity would most likely be untapped. And his improvement work does not siphon off the venture capital needed for disruptive technology.

Comparative advantage in the allocation of work among machines

Another NWLEAN post in response to Mike Thelen’s query on Laws of Nature, posted on 2/11/2011

On several occasions, I ran into the problem of allocating work among machines of different generations with overlapping capabilities. There were several products that could be processed to the same levels of quality in both the new and the old machines. The machines worked differently. For example, the old machines would process parts in batches while the new ones supported one-piece flow. But the resulting time per part was shorter on the new machines for all products. In other words, the new machines had a higher capacity for everything.

Given that the products were components going into the same assemblies, they were to be made in matching quantities per the assembly bill of materials and the demand was such that the plant had to make as many matching sets as possible. The question then is: how do you allocate the work among the machines?

When I first saw this problem, I thought it was unique but it is, in fact, quite common: many machine shops keep multiple generations of machines on their floors and make parts in matching sets for their customers. The solution that maximizes the total output is to apply the law of comparative advantage from classical economics. Adapted to this context, it says that the key is the ratio of performance between the old and the new machines on each product. For example, if the new machine can do product X 30% faster than the old machine and product Y ten times faster, then the old machine is said to have a comparative advantage on product X, and you should run as much as possible of product X on the old machine.
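
As a rough illustration, here is a minimal sketch with made-up cycle times, one X and one Y per matched set, and an assumed 480 minutes of availability per machine per day, formulated as a small linear program; the solver concentrates product X on the old machine and product Y on the new one, as the law of comparative advantage predicts.

```python
from scipy.optimize import linprog

# Assumed minutes per unit: the new machine is 30% faster on X, ten times faster on Y
old = {"X": 1.3, "Y": 10.0}
new = {"X": 1.0, "Y": 1.0}
minutes_available = 480.0  # assumed availability per machine and per day

# Decision variables: [x_old, x_new, y_old, y_new, sets]
c = [0, 0, 0, 0, -1]  # maximize matched sets = minimize -sets
A_ub = [
    [old["X"], 0, old["Y"], 0, 0],  # time used on the old machine
    [0, new["X"], 0, new["Y"], 0],  # time used on the new machine
    [-1, -1, 0, 0, 1],              # sets <= total output of X
    [0, 0, -1, -1, 1],              # sets <= total output of Y
]
b_ub = [minutes_available, minutes_available, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x_old, x_new, y_old, y_new, sets = res.x
print(f"X: {x_old:.0f} on old, {x_new:.0f} on new; "
      f"Y: {y_old:.0f} on old, {y_new:.0f} on new; {sets:.0f} matched sets")
```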

It is a bit surprising at first, but easy to apply. What is more surprising is that so few plants do. The logic that is actually most commonly used is to load up the new machine with as much work as possible, on the grounds that it has a high depreciation and needs to “earn its keep.” What many managers have a difficult time coming to terms with is that what you paid for a machine and when you paid it is irrelevant when allocating work, because it is in the past and nothing you do will change it. You produce today with the machines you have, and the only thing that matters is what they can do, now and in the future.

The law of comparative advantage is taught in economics, not manufacturing or industrial engineering, and pertains to the benefits of free trade between countries, not work allocation among machines. The similarity is not obvious. This law is attributed to David Ricardo, who published it in 1817, based on an analysis of the production of wine and cloth in England and Portugal. Trade was free because, at the time, Portugal was under British occupation. Both wine and cloth were cheaper to produce in Portugal, but wine was much cheaper and cloth only slightly cheaper. England therefore had a comparative advantage on cloth, and the total output of wine and cloth was maximized by specializing England in cloth and Portugal in wine. You transplant that reasoning to your machine shop by mapping the countries to machines and the costs to process times.

This simple approach works in a specific context. It is not general, but it is of value because that context occurs in reality. The literature on operations research is full of more complicated ways to arrive at solutions in different situations. I wrote an article about this in the July 2006 issue of IE Magazine, entitled “Not-so-basic equipment: the pitfalls to avoid when allocating work among machines.” It used to be available online for free on the magazine’s web site; now you have to buy it on Amazon to download it.

Learning or experience curves

The following is a revision of a posting on NWLEAN in January, 2011 in response to Mike Thelen’s call for “Laws of nature” in manufacturing.

Learning curves are often mentioned informally, as in “there is a learning curve on this tool,” just to say that it takes learning and practice to get proficient at it. There is, however, a formal version expressing costs as a function of cumulative production volume during the life of a manufactured product. T. P. Wright first introduced the learning curve concept in the US aircraft industry in 1936, applying it to labor costs; Bruce Henderson later generalized it into the experience curve, which includes all costs, particularly those of purchased components.

The key idea is to look at cumulative volume. After all, how many units of a product you have made since you started is your experience, and it stands to reason that the more you have already made of a product, the easier and cheaper it becomes for you to build one more. The x-axis of the experience curve is defined clearly and easily. The y-axis, on the other hand, is the cost per unit of the product, one of those characteristics that are commonly discussed as if they were well-defined, intrinsic properties like weight and color. Unit costs really are a function of current production volume, and contain allocations that can be calculated in different ways for shared resources and resources used over time. The classic reference on the subject, Bruce Henderson’s Perspectives on Experience (1972), glosses over these difficulties and presents empirical evidence about prices rather than costs.

Assuming an unambiguous and meaningful definition of unit costs, it is reasonable to assume that they would decline as experience in making the product accumulates. But what might be the shape of the cost decline curve?  Engineers like to plot quantities and look for straight lines on various kinds of graph paper. Even before looking at empirical data, we can reflect on the logic of the most common types of models:

  1. In a plot of unit cost versus cumulative volume in regular, Cartesian coordinates, a straight line means a linear cost decline, which makes no sense because you would end up with negative costs for a sufficiently large volume.
  2. In a semi-logarithmic plot, a straight line would mean an exponential cost decline, which makes no sense either, because you could make an infinite volume at a finite cost.
  3. If you try a log-log plot, a straight line means an inverse-power cost decline, meaning, for example, that the unit cost drops by 20% every time the cumulative volume doubles. This approach has none of the above problems: it represents a smooth decline as long as production continues, slow enough that the cumulative cost keeps growing to infinity with the volume, as sketched in the code below.
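
As an illustration of the third model, here is a minimal sketch, assuming an 80% experience curve and an arbitrary first-unit cost of 100, of how unit costs decline as cumulative volume grows:

```python
import numpy as np

def unit_cost(cum_volume, first_unit_cost=100.0, learning_rate=0.8):
    """Inverse-power experience curve: the unit cost drops by a fixed
    percentage (here 20%) every time cumulative volume doubles."""
    b = np.log2(learning_rate)  # exponent, about -0.32 for an 80% curve
    return first_unit_cost * cum_volume ** b

for n in (1, 2, 4, 8, 1000, 2000):
    print(f"cumulative volume {n:>5}: unit cost {unit_cost(n):6.1f}")
```

Every doubling of cumulative volume multiplies the unit cost by 0.8, yet the total cost of all units produced still grows without bound.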

I don’t know of any deeper theoretical justification for using inverse-power laws in learning or experience curves. Henderson investigated the prices of various industrial products. I remember in particular his analysis of the Ford Model T, which showed prices from 1908 to 1927 that were consistent with a fixed percentage drop in unit costs for each doubling of the cumulative volume. The prices followed an obvious straight line on a log-log plot, suggesting that the underlying costs followed a similar line below it.

Today, you don’t hear much about experience curves in the car industry, but you do in Electronics, where products have much shorter lives and this curve is a key factor in planning. When working in semiconductors, I remember a proposal from a Japanese electronics manufacturer that was designing one of our chips into a product. Out of curiosity, I plotted the declining prices they were offering to pay for increasing quantities on log-log scales, and found that they were perfectly aligned. There was no doubt that this was how they had come up with the numbers.
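
For readers who want to repeat that kind of check, here is a minimal sketch with made-up quantity and price pairs: fitting a straight line to the points in log-log coordinates recovers the implied learning rate.

```python
import numpy as np

# Hypothetical (cumulative quantity, quoted price) pairs, for illustration only
quantities = np.array([1_000, 5_000, 20_000, 100_000])
prices = np.array([10.00, 5.93, 3.79, 2.25])

# Fit a straight line in log2-log2 coordinates; the slope is log2(learning rate)
slope, intercept = np.polyfit(np.log2(quantities), np.log2(prices), 1)
learning_rate = 2 ** slope  # price multiplier per doubling of cumulative volume
print(f"fitted learning rate: {learning_rate:.2f} "
      f"(price drops {1 - learning_rate:.0%} per doubling of cumulative volume)")
```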

The slope of your own curve is a function of your improvement abilities. Your market share then determines where you are on the x-axis: the higher your market share, the faster your cumulative production volume grows. Being first lets you grab market share early; being farther along the curve than your competitors allows you to retain it.

Lead times, work sampling, and Little’s Law

On 1/11/2011, Michael Thelen asked in the NWLEAN forum about “laws of nature” as they related to Lean. This is based on one of my answers.

Lead time is a key performance indicator of manufacturing operations, but how do you measure it? It is not a quantity that you can directly observe by walking through a plant. To measure it directly you need to retrieve start and finish timestamps from historical data, assuming they are available and accurate. Or you could put tracers on a sample of parts, which means that it would take you at least six weeks to measure a six-week lead time. In most plants, however, a quick and rough estimate is more useful than a precise one that takes extensive time and effort to achieve.

That is where work sampling and Little’s Law come in handy. The key idea of work sampling, which the Wikipedia article fails to make clear, is that it lets you infer a breakdown of each entity’s status over time from snapshots of the status of multiple identical entities. If, every time you go to the shop floor, you see 2 out of 10 operators walking the aisle, you infer that, on the average, each operator spends 20% of the time walking the aisle.
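
Here is a minimal sketch of that inference, with made-up snapshot counts for a homogeneous group of 10 operators; the binomial margin of error treats the individual observations as independent, which is an approximation:

```python
from math import sqrt

# Snapshots of one group of 10 operators: number seen walking the aisle
# at each of 20 randomly timed tours of the floor (made-up counts)
group_size = 10
walking_counts = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 2, 1, 2, 2, 3, 2]

observations = group_size * len(walking_counts)
p_hat = sum(walking_counts) / observations  # estimated share of time spent walking
margin = 1.96 * sqrt(p_hat * (1 - p_hat) / observations)  # rough 95% binomial margin

print(f"estimated {p_hat:.0%} of operator time spent walking the aisle, +/- {margin:.0%}")
```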

There are obviously necessary conditions for such an inference to be valid. For example, you want to take snapshots during normal operations, not during startup or shutdown, and the group you are measuring must be homogeneous: if it consists of two materials handlers and eight production operators, the 20% average is not interesting, even if it is accurate. Work sampling is usually described as applied to people, but the same logic is applicable to machines and to work pieces, and that is what makes it possible to infer lead times from snapshots of inventory and throughput rates.

On the shop floor, you can count parts, bins or pallets, and observe the pace at which they are being consumed. Let us assume we are in the context shown in Figure 1, and want to estimate how long we take to turn a blank into a finished good.


Figure 1. Context of Little’s Law

Little’s Law, then, says that on the average,  in steady state, within one process or process segment,

Inventory = Lead time × Throughput

The reason this is true is best explained graphically, as in Figure 2, in the simple case of constant throughput and lead time. The cumulative count of blanks in is a straight line going up over time, and so is the count of finished goods out, offset by the lead time. The vertical distance between the curves is the number of blanks that have come in but not yet made it out as products, and therefore represents the inventory. The slope of each curve is the throughput, and it is the ratio of this inventory to the lead time.

Figure 2. Little’s Law with constant throughput and lead time
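
The argument of Figure 2 can be checked numerically. Here is a minimal sketch with an assumed constant throughput and lead time, showing that the vertical gap between the cumulative in and out counts stays equal to throughput times lead time:

```python
# With constant throughput and lead time, the cumulative in and out curves are
# parallel lines; their vertical gap, the inventory, equals throughput x lead time.
throughput = 30.0  # assumed blanks per hour
lead_time = 4.0    # assumed hours from blank to finished good

for t in (8, 16, 24):  # hours since the start of steady state
    blanks_in = throughput * t
    goods_out = throughput * (t - lead_time)
    inventory = blanks_in - goods_out
    print(f"t = {t:>2} h: in {blanks_in:5.0f}, out {goods_out:5.0f}, "
          f"inventory {inventory:4.0f} = {throughput:.0f} x {lead_time:.0f}")
```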

What is interesting about Little’s Law is that it remains valid for the averages when the arrival rate of blanks and the departure rate of finished goods are allowed to fluctuate randomly around their means. This is probably the best known and most useful general result of queueing theory.

Since we can count inventory and measure throughput, we can infer average lead times from just this data. One snapshot will not give you an accurate estimate, but taking a few snapshots of a production line to get a more accurate one is still considerably easier than researching history. The point is to get quickly to an approximate answer that would take much longer to obtain if you actually had to be accurate.
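
Putting the pieces together, here is a minimal sketch of the estimate, with made-up work-in-process counts and an assumed throughput for the process segment of Figure 1:

```python
from statistics import mean

# Made-up shop-floor snapshots: work-in-process counts (parts between the
# blank store and the finished-goods store) taken on a few walks through the line
wip_counts = [620, 580, 640, 600, 560]

# Observed throughput at the exit of the segment, in parts per hour
throughput = 75.0

# Little's Law: Lead time = Inventory / Throughput
lead_time_hours = mean(wip_counts) / throughput
print(f"estimated lead time: {lead_time_hours:.1f} hours "
      f"({lead_time_hours / 8:.1f} shifts of 8 hours)")
```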