My previous post on this subject listed some requirements on metrics to be useful on the shop floor. Here, we take it one level further, first by establishing that shop floor metrics should be in the language of things rather than the language of money, and then by using the requirements to evaluate examples of quality metrics. In later posts, I will do the same for other dimensions of manufacturing performance.
The language of things and the language of money
The language of the shop floor is that of things, not money. It is a world where you speak in numbers of units made, time spent, or weight consumed, and it is several organizational layers removed from the boundary of the company where money changes hands. To be effective, the metrics posted on the shop floor should therefore be expressed in the language of things, and translated into the language of money by accountants for communication up the management chain. This is not to say that financial information should be hidden from shop floor people, only that it should be made available on request rather than broadcast on bulletin boards, and that it should not be the basis for the metrics these people review every day.
Metrics based on financials are useful, but at a higher level of aggregation than a cell and for other audiences than teams of operators. In some areas, it is also difficult to arrive at financial metrics that accurately represent the business impact of activities and events occurring on the shop floor. Cost of Quality (COQ) metrics, for example, are limited to the direct costs of failure, appraisal, and repair, which do not even begin to account for the impact of the company’s reputation for quality on sales. In the Quality Control Handbook, Frank Gryna explicitly excluded such items from the COQ as too complex and too controversial. The problem is that what he did include is negligible next to what he excluded. Just ask Firestone about tread separation, or even Toyota about its 2010 recalls. As a result, COQ cannot be used to justify improving quality. Staying on the subject of quality, let us examine what metrics can be effective.
Metrics of Quality
Many companies use a scoring system, as illustrated in Figure 1, to measure outgoing quality. Starting from an agreed score for a perfect unit, points are deducted for every defect observed. The quality department publishes averages of these scores by shift, day, or month, looking for an upward trend.
FIGURE 1. A scoring system for cars
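Such a deduction-based scoring system is easy to sketch in code. The penalty table and point values below are hypothetical, chosen only to illustrate the mechanics; real schemes assign their own weights per defect type, which is precisely where the endless debates mentioned below tend to arise.

```python
# Hypothetical defect penalty table -- actual point values and defect
# categories vary by company and are a frequent source of debate.
PENALTIES = {"scratch": 1, "misaligned trim": 5, "paint run": 10, "leak": 50}

PERFECT_SCORE = 100  # agreed score for a defect-free unit

def quality_score(defects):
    """Deduct points from the perfect score for each observed defect."""
    return PERFECT_SCORE - sum(PENALTIES[d] for d in defects)

# One inspected car with three observed defects:
print(quality_score(["scratch", "scratch", "paint run"]))  # 100 - 1 - 1 - 10 = 88
```

The quality department would then average these scores by shift, day, or month, as described above.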
While such metrics are frequently a source of pride for those who develop them, they are unintelligible to the rest of the organization. Higher scores are understood to be better than lower scores, but the relation of the score to competitive position and the sources of its variations are difficult to interpret. In addition, the penalty assigned to each type of defect can become a matter of endless debate. Quality is more effectively measured by using multiple, simpler metrics, covering different subtopics, such as:
- Ratings by external agencies for consumer goods. The JD Power and Associates Initial Quality Surveys rate products based on the number of problems reported by buyers within the first three months of purchase, which reflect manufacturing quality. Consumer Reports publishes reliability and maintainability ratings for cars that have been on the road for several years, which are more indicative of design quality.
- Counts of customer claims. For parts sold to OEMs, Quality Problem Reports (QPR) are the equivalent of traffic tickets issued by customers. They require failure analysis by the supplier’s quality department and a report to the customer in a prescribed format, such as Ford’s “8D” or Honda’s “5P.” The rate at which such claims are issued is clearly a metric of the customers’ perception of your product quality.
- Rejection rates. Defectives are weeded out at Receiving, at various points inside the manufacturing process, at final test, and at customers’ Receiving. Rejection rates are calculated at each of these points and fed back to the managers in charge. There may, however, be too many of these rates to be usable by top management.
- First-pass yield. The first-pass yield may be the most useful measure of in-process quality for top management. It is the percentage of the production volume that makes it through the entire process unscathed — that is, without failing any inspection or test, and without being waived, reworked, or scrapped (See Figure 2).
Figure 2. The First-Pass Yield
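The first-pass yield calculation can be sketched as follows. The unit records and field names are hypothetical; the point is only that a unit counts toward the yield if it failed no inspection or test and was never waived, reworked, or scrapped.

```python
# Minimal first-pass yield sketch. Each unit's history records whether it
# failed any inspection or test, or was waived, reworked, or scrapped.

def first_pass_yield(units):
    """Fraction of units that went through the entire process unscathed."""
    clean = sum(
        1 for u in units
        if not (u["failed_test"] or u["waived"] or u["reworked"] or u["scrapped"])
    )
    return clean / len(units) if units else 0.0

# Hypothetical batch of four units: one failed and was reworked, one was waived.
batch = [
    {"failed_test": False, "waived": False, "reworked": False, "scrapped": False},
    {"failed_test": True,  "waived": False, "reworked": True,  "scrapped": False},
    {"failed_test": False, "waived": True,  "reworked": False, "scrapped": False},
    {"failed_test": False, "waived": False, "reworked": False, "scrapped": False},
]
print(f"First-pass yield: {first_pass_yield(batch):.0%}")  # 2 of 4 units pass clean
```

Note that a unit that is reworked and then passes final test still does not count: first-pass yield credits only what went through right the first time, which is what makes it a sharper measure than end-of-line rejection rates alone.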