Metrics in Lean – Part 3 – Equipment

The aggregate metric for equipment most often mentioned in the Lean literature is Overall Equipment Effectiveness (OEE). I first encountered it 15 years ago, when a client’s intern, who had been slated to help on a project I was coaching, was instead commandeered to spend the summer calculating the OEEs of machine tools. I argued that the project was a better opportunity than just taking measurements, both for the improvements at stake for the client and for the intern’s learning experience, but I lost. Looking closer at the OEE itself, I felt that it was difficult to calculate accurately, analyze, and act on. In other words, it does not meet the requirements listed in Part 1.

The OEE is usually defined as follows:

OEE = Availability × Performance × Quality

A perfect machine works whenever you need it, and is therefore available 100% of the time. It works at its nominal speed, so its performance is 100%, and it never makes a defective product, so its yield is 100%, and so is its OEE. The OEE of a real machine is intended to reflect the combination of its failures to live up to these ideals.
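As an illustration of how the three factors combine multiplicatively, here is a minimal sketch in Python; the function name and the sample numbers are made up for illustration, not taken from any particular plant.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors,
    each expressed as a fraction between 0 and 1."""
    return availability * performance * quality

# Example: available 81% of the planning period, running at 90% of nominal
# speed, with a 98% yield of good parts.
print(f"OEE = {oee(0.81, 0.90, 0.98):.0%}")  # OEE = 71%
```

Note how a machine that looks respectable on each individual factor still ends up with a mediocre-looking OEE, which is one reason the aggregate number is hard to interpret on its own.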

Availability

The first problem is the meaning of availability. When we say of any device that it is available, we mean that we can use it right now. For a production machine that does one operation at a time, it would mean that it is both up and not in use. The main reason for it to be unavailable is that it is busy, which really shouldn’t count against it, should it? In telecommunications, availability for a telephone switch is synonymous with the probability that it is up. This is because it is supposed to be up all the time, and to have the capacity to handle all incoming calls. In principle, it could be unavailable because of saturation, but the availability formula does not even consider it. It is only based on uptime and downtime,  or on time between failures and time to repair.

But a lathe doesn’t work like a telephone switch in at least two ways:

  1. It is rarely expected to work all the time: it may work two shifts a day, five days a week, and whether it is down the rest of the time has no effect on performance.
  2. If you have one work piece on a spindle, you can’t load another one at the same time, and the spindle is unavailable.

In the OEE context, we are not talking about the machine being available in the sense of being up and ready to take on a new task, but instead of the time available to a scheduler to assign it work in the course of a planning period, which may be a shift, a day, or whatever time interval is used for this factory.

If, in a 480-minute shift, a machine stops during a 30-minute break and has up to 60 minutes of unscheduled downtime and setups, then the planner can count on 480 - 30 - 60 = 390 minutes in which to schedule work, which yields a ratio of: Availability = 390/480 ≈ 81%.
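The same arithmetic in code, as a minimal sketch using the figures from the example above (the variable names are mine, for illustration):

```python
shift_minutes = 480        # planning period
break_minutes = 30         # scheduled break
lost_minutes = 60          # unscheduled downtime and setups

schedulable_minutes = shift_minutes - break_minutes - lost_minutes
availability = schedulable_minutes / shift_minutes
print(f"Availability = {schedulable_minutes}/{shift_minutes} = {availability:.0%}")
# Availability = 390/480 = 81%
```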

This assumes that the machine’s ability to do work is proportional to the time it is up. My first moped as a teenager was a relative’s hand-me-down that had been garaged for 7 years. It started fine when cold, but the spark plug started malfunctioning once it was warm, about 15 minutes later. It would stay up for 75% of a 20-minute ride, but that didn’t mean it completed 75% of the rides. It actually left me stranded about 100% of the time; it was unusable. Likewise, your link to a server may work 99% of the time while you upload a large file, yet break every time you try to save it. The formula makes it look as if it has 99% availability when in fact it is 0%.

There is also an issue with deducting setups from available time because, unlike breakdowns, setups are not just a matter of the technical performance of the machine but are directly affected by operating policies. The planner can influence the amount of time used for setups, reducing it by increasing the size of production runs or, if setup times vary with the pair of from- and to-products, by sequencing the runs so as to minimize the total setup time.

This is not to say that the formula is wrong but only that it commingles the effects of many causes and that its relevance is not universal. There may be better  ways to quantify availability depending on the characteristics of a machine and the work it is assigned. Companies that calculate OEEs often do not bother with such subtleties and simply equate availability with uptime.

Performance

Performance is a generic term with many different meanings. As a factor in the OEE, it  is the ratio of nominal to actual process time of the machine. If the machine actually takes two minutes to process a part when it is supposed to take only one, its performance is 50%. The times used are net of setups and don’t consider any quality issue, because quality is accounted for in the last factor. This factor is meant to account for microstoppages and reduced speeds, and it is a relevant and important equipment metric in its own right.
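A sketch of this ratio, using the two-minutes-versus-one example from the paragraph above:

```python
nominal_minutes_per_part = 1.0   # what the process standard says it should take
actual_minutes_per_part = 2.0    # what it actually takes, including microstoppages
                                 # and reduced-speed running

performance = nominal_minutes_per_part / actual_minutes_per_part
print(f"Performance = {performance:.0%}")  # Performance = 50%
```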

Quality

As discussed in Part 2, Quality is not a metric but a whole dimension of manufacturing performance with many relevant metrics. In the OEE, this factor is just the yield of the operation, meaning the ratio of good parts to total parts produced. It is not the First-Pass Yield, because reworked parts are still counted as good.
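The following sketch, with made-up counts, shows the difference: the OEE quality factor counts reworked parts as good, while the first-pass yield does not.

```python
total_parts = 1_000
scrapped = 20
reworked = 50   # eventually good, but only after rework

good_parts = total_parts - scrapped                 # reworked parts still count as good
quality_factor = good_parts / total_parts           # the yield used in the OEE
first_pass_yield = (total_parts - scrapped - reworked) / total_parts

print(f"OEE quality factor = {quality_factor:.0%}")   # 98%
print(f"First-pass yield   = {first_pass_yield:.0%}") # 93%
```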

Conclusions on the OEE

While the OEE summarizes metrics that are individually of interest, not much use can be made of it without unbundling it into its different factors. Since the meaning and the calculation methods for its factors vary across companies, it cannot be used for benchmarking. Within a company and even within a plant, it is not always obvious that raising the OEE of every machine enhances the performance of the plant as a whole.

In principle, it should. Who doesn’t want all machines to be always available, perform to standard, and make good parts? The problem is that, in practice, increasing the OEE is often confused with increasing utilization, and that there are machines for which  it is not a worthwhile goal. Such machines may be cheap auxiliaries to expensive ones, like a drill press following a large milling machine in a cell, or they may have been bought for their ability to take on a large variety of tasks on demand.

Unbundling the OEE into its component factors yields a more easily understandable  set of equipment metrics that is less likely to mislead management. While these metrics can be collected on each piece of equipment, management must then be wary of aggregating them over machines that are intended to be used differently.

Metrics in Lean – Part 2 – Quality

My previous post on this subject listed some requirements on metrics to be useful on the shop floor. Here, we take it one level further, first by establishing that shop floor metrics should be in the language of things rather than the language of money, and then by using the requirements to evaluate examples of quality metrics. In later posts, I will do the same for other dimensions of manufacturing performance.

The language of things and the language of money

The language of the shop floor is that of things, not money. It is a world where you speak in numbers of units made, time spent, or weight consumed, and it is several organizational layers removed from the boundary of the company where money changes hands. Metrics posted on the shop floor must therefore not be financial. This does not mean that financials should be hidden from shop floor personnel, just that they should not be the basis for the metrics these people review every day.

To be effective, the metrics used on the shop floor should be expressed in the language of things, and translated into the language of money by accountants for communication up the management chain. This is not to say that financial information should be hidden from shop floor people, only that it should be made available on request rather than broadcast or posted on bulletin boards.

Metrics based on financials are useful, but at a higher level of aggregation than a cell and for other audiences than teams of operators. In some areas, it is also difficult to arrive at financial metrics that accurately represent the business impact of activities and events occurring on the shop floor. Cost of Quality (COQ) metrics, for example, are limited to the direct costs of failure, appraisal, and repair, which do not even begin to account for the sales impact of the company’s reputation for quality. In the Quality Control Handbook, Frank Gryna explicitly excluded such items from the COQ as being too complex and too controversial. The problem is that what he did include is negligible with respect to what he excluded. Just ask Firestone about tread separation, or even Toyota about its 2010 recalls. As a result, COQ cannot be used to justify improving quality. Staying on the subject of quality, let us examine what metrics can be effective.

Metrics of Quality

Many companies use a scoring system, as illustrated in Figure 1, to measure outgoing quality. Starting from an agreed score for a perfect unit, points are deducted for every defect observed. The quality department publishes averages of these scores by shift, day, or month, looking for an upward trend.

Figure 1. A scoring system for cars

While such metrics are frequently a source of pride for those who develop them, they are unintelligible to the rest of the organization. Higher scores are understood to be better than lower scores, but the relation of the score to competitive position and the sources of its variations are difficult to interpret. In addition, the penalty assigned to each type of defect can become a matter of endless debate. Quality is more effectively measured by using multiple, simpler metrics, covering different subtopics, such as:

  1. Ratings by external agencies for consumer goods. The JD Power and Associates Initial Quality Surveys rate products based on the number of problems reported by buyers within the first three months of purchase, which reflect  manufacturing quality. Consumer Reports publishes reliability and maintainability ratings for cars that have been on the road for several years, which are more indicative of design quality.
  2. Counts of customer claims. For parts sold to OEMs, Quality Problem Reports (QPR) are the equivalent of traffic tickets issued by customers. They require failure analysis by the supplier’s quality department and a report to the customer in a prescribed format, such as Ford’s “8D” or Honda’s “5P.” The rate at which such claims are issued is clearly a metric of the customers’ perception of your product quality.
  3. Rejection rates. Defectives are weeded out at Receiving, at various points inside the manufacturing process, at final test, and at customers’ Receiving. Rejection rates are calculated at each of these points and fed back to the managers in charge. There may, however, be too many of these rates to be usable by top management.
  4. First-pass yield. The first-pass yield may be the most useful measure of in-process quality for top management. It is the percentage of the production volume that makes it through the entire process unscathed — that is, without failing any inspection or test, and without being waived, reworked, or scrapped (see Figure 2 and the sketch below it).

Figure 2. The First-Pass Yield
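A minimal sketch of the calculation, with hypothetical counts for a three-step process; a unit only counts toward the first-pass yield if it clears every step without being waived, reworked, or scrapped.

```python
units_started = 1_000

# Units that failed something for the first time at each step (waived, reworked,
# or scrapped there); hypothetical counts, with each unit counted only once.
first_failures = {"machining": 40, "assembly": 25, "final test": 15}

unscathed = units_started - sum(first_failures.values())
first_pass_yield = unscathed / units_started
print(f"First-pass yield = {first_pass_yield:.0%}")  # 92%
```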

Metrics in Lean – Part 1 – Requirements

Ever since childhood, we have all wanted to know what grades we get on our report cards, and what these grades mean in terms of how well we are doing. We want to be evaluated based on parameters that we understand and that we can affect by our efforts.

A key issue in manufacturing is consistency as we go from a shop to a department, to an entire plant, and to the company as a whole. We don’t want to use parameters in terms of which excellent local performance can aggregate to poor global performance. Once performance measures are selected, the next challenge is to use them as a basis for management decisions that are in the best interest of the company while being fair and nonthreatening to employees. In particular, actions taken to improve one aspect of performance must not degrade another. In addition to these issues, in a lean environment, we need to consider the impact of improvement projects, before and after they are carried out.

Measuring process compliance or results?

One possible approach to performance evaluation is to measure how closely our practice matches a standard of how things should be done. This is how you will be evaluated if you apply for the Malcolm Baldrige award or for ISO-900x certification. It matters little whether your outgoing quality is any good, as long as you follow the “right” processes. Iwao Kobayashi’s “20 Keys” approach follows the same logic. The keys have names like “Cleaning and organizing” or “Quick changeover,” and each key has 5 levels of achievement. By definition, a plant that is at level 5 in all 20 keys is excellent.

The advantage of process measures is that the corrective action for bad performance is always to bring the plant closer to compliance. But is it impossible for a plant to be at level 5 in all 20 keys and still chronically lose money? Don’t some of the keys matter more than others? The world would be simpler if a process existed such that compliance guaranteed excellence.

In fact, all the stakeholders in a factory care much more about the results it achieves than about the processes by which it achieves them. Most commonly used are the five dimensions of Quality, Cost, Delivery, Safety, and Morale. More generally, Harvard’s R. Kaplan has proposed a “balanced scorecard” to measure multiple aspects of business performance, as opposed to just manufacturing performance.

Requirements on metrics

Metrics should be focused on results rather than process compliance. The Malcolm Baldrige award criteria, ISO-900x, or Kobayashi’s “20 Keys to Workplace Improvement” promote performance measurement based on checklists of how close actual shop practices are to some norm. The problem with this approach is that it is possible to score high on any of these checklists and still go bankrupt. In other words, it’s not what you do that counts but what good it does. The key requirements for metrics are as follows:

  1. A good metric is immediately understandable. No training or even explanation is required to figure out what it means, and the number directly maps to reality, free of any manipulation. One type of common manipulation is to assume that one particular ratio cannot possibly be over 85%, and redefine 85% for this ratio as “100% performance.” While this makes performance look better, it also makes the number misleading and difficult to interpret.
  2. People see how they can affect the outcome. With a good metric, it is also easy to understand what kind of actions can affect the value of the measurement. A shop floor metric, for example, should not be a function of the price of oil on the world market, because there is nothing the operators can do to affect it. Their actions, on the other hand, can affect the number of labor-hours required per unit, or the rework rate.
  3. A better value for the metric always means better business performance for the company. One of the most difficult characteristics to guarantee is that a better value of a metric always translates to better business performance for the company. Equipment efficiency measures are notorious for failing in this area, because maximizing them often leads to overproduction and WIP accumulation.
  4. The input data of the metric should be easy to collect. Lead time statistics, for example, require entry and exit timestamps by unit of production. The difference between these times only gives you the lead time in calendar time, not in work time. To get lead times in work time, you then have to match the timestamps against the plant’s work calendar. Lead time information, however, can be inferred from WIP and WIP age data, which can be collected by direct observation of WIP on the shop floor. Metrics of WIP, therefore, contain essentially the same information but are easier to calculate.
  5. All metrics should have the appropriate sensitivity. If daily fluctuations are not what is of interest, then they need to be filtered out. A common method for doing this is to plot 5-day moving averages instead of individual values: the point plotted today is the average of the values observed over the last five days. Daily fluctuations are smoothed away, but weekly trends stand out, as in the sketch after this list.
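As an illustration of point 5, here is a minimal sketch of a trailing 5-day moving average applied to a made-up daily rework rate:

```python
def moving_average(values, window=5):
    """Trailing moving average: each point is the mean of the last `window`
    values (fewer at the start of the series)."""
    averages = []
    for i in range(len(values)):
        recent = values[max(0, i - window + 1): i + 1]
        averages.append(sum(recent) / len(recent))
    return averages

daily_rework_rate = [0.05, 0.08, 0.04, 0.09, 0.05, 0.07, 0.06]  # made-up values
print([round(x, 3) for x in moving_average(daily_rework_rate)])
```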

Is it true that you get what you measure?


The article Lean Manufacturing: Measuring To Get Results lists a number of useful metrics. It also opens with the saying, or cliché, that “you get what you measure.”

The implications are (1) that people will always do whatever it takes to maximize their metrics, and (2) that, if you put the right metrics in place, improvement will take care of itself. While I agree that we need good metrics, we should not overestimate their impact. Peer pressure and personal ethics, among other factors, drive most people more than their performance metrics. And even when employees do their utmost to maximize their scores, they often do not have the necessary skills, so performance targets will have no effect unless backed up by some form of training, coaching, and support.
Via ezinearticles.com

The staying power of bad metrics

A speaker I once heard on manufacturing metrics started with a quote from football coach Vince Lombardi: “If you’re not keeping score, you’re only practicing.” In a sport, your score or your rank is, by definition, the correct measure of success, and we assume too easily that this kind of thinking crosses over to every human endeavor, from national economies to plant performance or education. In this process, we begin using highly aggregated metrics as if they were physical measurements like mass or speed, and avert our eyes from how these sausages are made.

Following are a few egregious examples:

  • GDP. Gross Domestic Product (GDP), for example, is in the news every day. If you pollute and spend money to clean up your toxic waste, you contribute more to the GDP than if you produce cleanly. Because of this kind of absurdity, GDP as a metric has been criticized by many economists, including Joseph Stiglitz. In 2009, he even convinced French president Nicolas Sarkozy to seek alternatives. Yet, two years later, the same president is pushing to include in the country’s constitution a “Golden Rule” that caps budget deficits at a percentage of the same flawed GDP!
  • IQ. In the US, IQ is still widely treated as a measure of intelligence. On its face, the notion that human intelligence is reducible to a number is an insult to its subject. In fact, all an IQ measures is the ability to take an IQ test. Psychologists recognize this, but many school teachers and the public at large don’t. (See Stephen Jay Gould’s The Mismeasure of Man.)
  • Food calories. Calories are the most commonly used metric in nutrition. What this number actually represents is the heat generated by drying and burning a food item. But is digestion the same as combustion? Obviously not; fibers, for example, pass through the human body unchanged. The absurdity of assigning calories to fibers has not escaped one dieter, who questioned it on a Calorie Count forum and received, among other replies, the following:

Fiber calories are included in nutrition information, but only in some countries. In the US, it is legal to not put in fiber calories because they are not digestible. Therefore, they do not “count” as such. However, if you, like most people, tend to underestimate cals slightly, there is nothing wrong with including them to create a “buffer zone.”

In other words, it makes no sense but you should pretend it does.

Do we behave the same way in the manufacturing world? Yes. For example, many companies measure productivity in terms of Sales/Employee. There is an easy way to boost this metric: outsource all production, close all plants and become a trading company. It is not easy to find metrics for quality, cost, delivery, safety and morale that are meaningful and cannot be gamed, but it can be done. For overall company productivity, for example, you can use Value added/Employee, where

Value added = Sales – (Materials + Energy + Outsourced Services)

This is what Peter Drucker called Contributed Value. Value added/Employee is not a perfect metric, but at least it does not provide a perverse incentive to outsource, and the US Census Bureau publishes statistics on value added and employment by industry that are helpful for benchmarking.
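As a worked example with made-up figures, all for the same period and in the same currency:

```python
sales = 120_000_000
materials = 45_000_000
energy = 5_000_000
outsourced_services = 10_000_000
employees = 400

value_added = sales - (materials + energy + outsourced_services)
print(f"Value added = {value_added:,}")                            # 60,000,000
print(f"Value added / Employee = {value_added / employees:,.0f}")  # 150,000
```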

Following are a few conditions that a good metric must meet:

  1. A good metric is immediately understandable. No training or even explanation is required to figure out what it means, and the number directly maps to reality, free of any manipulation. One type of common manipulation is to assume that one particular ratio cannot possibly be over 85%, and redefine 85% for this ratio as “100% performance.” While this makes performance look better, it also makes the number misleading and difficult to interpret.
  2. People see how they can affect the outcome. With a good metric, it is also easy to understand what kind of actions can affect the value of the measurement. A shop floor metric, for example, should not be a function of the price of oil on the world market, because there is nothing the operators can do to affect it. Their actions, on the other hand, can affect the number of labor-hours required per unit, or the rework rate.
  3. A better value for the metric always means better business performance for the company. One of the most difficult characteristics to guarantee is that a better value of a metric always translates to better business performance for the company. Equipment efficiency measures are notorious for failing in this area, because maximizing them often leads to overproduction and WIP accumulation.
  4. The input data of the metric should be easy to collect. Lead time statistics, for example, require entry and exit timestamps by unit of production. The difference between these times only gives you the lead time in calendar time, not in work time. To get lead times in work time, you then have to match the timestamps against the plant’s work calendar. Lead time information, however, can be inferred from WIP and WIP age data, which can be collected by direct observation of WIP on the shop floor. Metrics of WIP, therefore, contain essentially the same information but are easier to calculate. (See Little’s Law, and the sketch after this list.)
  5. All metrics should have the appropriate sensitivity. If daily fluctuations are not what is of interest, then they need to be filtered out. A common method for doing this is to plot 5-day moving averages instead of individual values — that is, the point plotted today is the average of the values observed in the last five days. Daily fluctuations are smoothed away, but weekly trends stand out.
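Point 4 relies on Little's Law, by which average lead time can be inferred from average WIP and throughput. A minimal sketch with made-up numbers:

```python
# Little's Law: average WIP = throughput x average lead time,
# hence average lead time = average WIP / throughput.
average_wip = 1_200   # units counted on the shop floor
throughput = 400      # units completed per day

average_lead_time_days = average_wip / throughput
print(f"Average lead time = {average_lead_time_days:.1f} days")  # 3.0 days
```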

Peter Drucker sold corporate America on the idea that you can’t manage what you can’t measure, and this has led many managers to believe that employees would do whatever it takes to maximize their scores. Given flawed metrics, it is fortunate for the companies that these managers were wrong. If they had been right, all the companies that measure productivity in terms of Sales/Employee would actually have outsourced all production. They didn’t, because metrics are only one of many factors influencing behavior. Most employees, at all levels, will not maximize their metrics through actions they feel violate common sense or are inconsistent with their personal ethics.