Process capability

The literature on quality defines process capability as a metric that compares the variability of a process's output with tolerances. There are, in fact, two different concepts:

  1. The ability of a process under nominal conditions to consistently produce output that meets expectations.
  2. The means of assessing it.

Process Variability

A process running under normal conditions has the materials and tools it needs and skilled operators. There are no broken tools, leaks, blown fuses or cracked columns, or unqualified operators filling in for missing colleagues. Yet, under these perfect conditions, the process still doesn’t put out exactly identical units every time, and the process capability is a rating of this variability against specified expectations.

True and Substitute Characteristics

The expectations are true characteristics that are often not directly observable; the metrics, substitute characteristics that are. The taste of a cake is not a parameter you can measure; its diameter and sugar content are. If the diameter and sugar content are out of spec, the cake is no good.

On the other hand, that they are within specs does not guarantee that the cake tastes good. We want process capability in terms of true characteristics but can only observe it in terms of substitutes that work as one-way filters.

The nature of the expectations and the means of assessing whether the process meets them consistently vary with each case, and the usual metrics found in the literature — C_{p} and C_{p_{k}} — are only a simple, special case, well defined only for a single substitute characteristic that is a measured variable.

Examples

Where machines can hold tolerances that are ten times tighter than anything required for the product, process capability is not an issue. This is the case, for example, in the forging and machining of automotive parts. In these cases, the quality problems that do occur are not due to lack of process capability but to discrete events or human error.

Jet Engine Turbine Blades

Each turbine blade in a jet engine, on the other hand, must be balanced with the blade opposite it to prevent precession when rotating. To this day, the machining process that makes the blades cannot achieve this balance. It is not a capable process; the blades require hand finishing.


Competition forces high-technology manufacturers to produce in high volume before processes are capable. In semiconductors, in fact, if your processes are capable, your products are obsolete. You filter defectives, imperfectly, at the end of the process through extensive testing.

Mature Industries with Capability Issues

Processes that are used in high volume, yet not capable, also exist in mature industries:

  • Ceramics. Ceramics are made from raw materials pulled out of the ground, with characteristics that vary sufficiently from batch to batch to cause variations in shrinkage during kiln firing. In applications where these variations make a difference, manufacturers bin products into dimension classes after firing.
  • Sock knitting. The knitting process for socks is not precise enough to control sizes. Socks are binned by size after knitting.
  • Lead shot. The process for making lead shot involves pouring molten lead into a sieve at the top of a tower and collecting the hardened drops at the bottom. The drops come in various shapes and sizes, and the ones that don’t meet the specs for shot are recycled.
  • Detergent powders. Detergent base powders are made in a similar way, by spray-drying a slurry through a sieve and collecting the flakes at the bottom, with flakes of the wrong shape or size recycled.

Assessing Process Capability

A general definition of process capability cannot be more specific than the ability of a process under nominal conditions to consistently produce output that works as expected; for each actual process, however, it must be made specific, in terms of carefully selected substitute characteristics.

Fitting within one tolerance interval

The literature on quality only covers the simplest case, where the product has one critical dimension to be kept within tolerances. If you google “process capability,” all you get is pictures like the following, showing one measured variable of the product following a Gaussian distribution and its relationship with the tolerance interval, between the Lower Spec Limit (LSL) and the Upper Spec Limit (USL):

[Figure: Cp and Cpk]

It applies, for example, when you cut a bar into rods that must be within 300 ±0.01 mm in length. The length of the tolerance interval is therefore 0.02 mm. You measure the lengths of actual rods with a sufficiently accurate instrument and compare their distribution with the tolerance interval.

In the classical analysis, you represent the precision of the process by the standard deviation of the lengths, and its accuracy by their mean, both of which you estimate from a representative sample.


If a rod is slightly too long, you can grind it down to length; if too short, it’s scrap. The risks are asymmetric, and the process engineers are likely to aim for the upper end of the tolerance interval, where rework is possible. Let’s say the mean length on a sample of 100 rods is 300.005 mm, and the standard deviation 0.0033 mm. This makes the ±3σ limits 299.995 and 300.015 mm.

Cp and Cpk

Because the ±3σ range is 0.02 mm, the C_{p} index that represents only precision is 1. On the other hand, the C_{p_{k}} index that takes into account both precision and accuracy, is the dismal

\frac{(300.01-300.005)}{0.01} = 0.5.

If the rod lengths can be modeled as the sum of a constant and a white noise, then C_{p_{k}} ≥ 1 tells you that at least 99.7% of the rods are within the tolerance limits.
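
As a minimal sketch in Python, this is how the indices come out of a sample; the measurements below are simulated to match the figures of the example, since the original data is not given:

```python
import numpy as np

# Tolerance limits for the rod example: 300 ±0.01 mm
LSL, USL = 299.99, 300.01

# Stand-in for the sample of 100 measured rod lengths: simulated here,
# centered at 300.005 mm with standard deviation 0.0033 mm as above
rng = np.random.default_rng(0)
lengths = rng.normal(loc=300.005, scale=0.0033, size=100)

mu = lengths.mean()          # accuracy: the process mean
sigma = lengths.std(ddof=1)  # precision: the sample standard deviation

cp = (USL - LSL) / (6 * sigma)               # precision only
cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # precision and accuracy

print(f"Cp  = {cp:.2f}, Cpk = {cpk:.2f}")    # roughly 1 and 0.5
```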

Because they are dimensionless ratios, the capability indices apply to any circumstance where you have one measured variable and a tolerance interval. In principle, they enable you to communicate process capabilities, whether about rod length or sugar content in cakes. This is the rationale behind the reporting of process capabilities in terms of C_{p_{k}} in many companies, for processes that are more complex than cutting bars into rods.

Yield

Yield is an alternative and more general way of describing process capability. Even in the previous case of rod cutting, you can characterize it by the probability that a rod will meet tolerances, estimated on samples by the relative frequency of rods that actually do. In addition, not all observable quality characteristics are measured variables with tolerances. Many are, in fact, attributes, like the presence of scratches, bubbles, or runs on a painted surface, and any one of these defects makes the unit defective. It’s a go/no-go situation, characterized by the probability of being defect-free, which is estimated by the yield of the process. If the yield is 80%, then this probability is also 80%.

The reasoning behind the C_{p} and C_{p_{k}} above is not applicable here, as there is no tolerance interval. Control limits based on sample yields serve to validate improvements and detect accidental shifts; their purpose is not to keep the yield at a fixed level, given that the best yield is always 100%. The common model for the number of defectives in a sample is binomial which, for a given sample size, has a single parameter: the probability that a unit is defective.
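
As a sketch of how such limits can be set under the binomial model, with a fixed sample size and the usual normal-approximation 3σ limits:

```python
import math

def yield_control_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma control limits for the yields of samples of size n,
    under the binomial model with long-run yield p_bar."""
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

# Example: a process with an 80% yield, checked on samples of 200 units
lo, hi = yield_control_limits(0.80, 200)
print(f"Sample yields between {lo:.1%} and {hi:.1%} signal no shift")
```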

Multiple dimensions

The C_{p} and C_{p_{k}} do not generalize easily to cases with multiple variables. An automatic transmission case for cars has more than 2,000 critical dimensions. You can have a C_{p_{k}} for each one but no formula to convert all these C_{p_{k}}’s into a single C_{p_{k}} for the whole case. In 2012, Santos-Fernandez and Scagliarini proposed a multidimensional capability index called “MC_{pm}” but it is not a simple combination of the C_{p_{k}}’s of the individual dimensions.

The following figure shows the case of just 2 measured variables with positive correlation. For every unit produced, the tolerances tell you to keep the \left (V_1,V_2 \right ) values within the rectangle formed by the tolerance intervals \left [V_1 Lo, V_1 Hi \right ] and \left [V_2 Lo, V_2 Hi \right ]. If \left (V_1,V_2 \right ) is a 2-dimensional Gaussian, 99.7% of the actual \left (v_1,v_2 \right ) values will fall within an ellipse as shown, and the 2D capability index is based on the ratio of the area of the rectangle to that of the ellipse. This generalizes to n variables \left (V_1,..., V_n \right ), with a tolerance hypercube and an ellipsoid.

[Figure: Tolerance rectangle vs. distribution ellipse for two correlated variables]

As this looks like fitting an ellipsoidal peg into a hypercubic hole, it raises the question of whether it makes sense to model the space of acceptable values of \left (V_1,..., V_n \right ) as a hypercube when, in fact, the space of values corresponding to good end products can be any shape, which electrical engineers have taken to calling a shmoo.
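
To make the rectangle-versus-ellipse picture concrete, the following sketch estimates by simulation the fraction of a correlated 2D Gaussian that falls within a tolerance rectangle; all the numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tolerance rectangle [V1_lo, V1_hi] x [V2_lo, V2_hi]
V1_lo, V1_hi = 9.7, 10.3
V2_lo, V2_hi = 4.8, 5.2

# Hypothetical process: 2D Gaussian with correlation 0.75
mean = [10.0, 5.0]
cov = [[0.0100, 0.0060],
       [0.0060, 0.0064]]

v = rng.multivariate_normal(mean, cov, size=100_000)
in_spec = ((v[:, 0] >= V1_lo) & (v[:, 0] <= V1_hi) &
           (v[:, 1] >= V2_lo) & (v[:, 1] <= V2_hi))
print(f"Fraction within the tolerance rectangle: {in_spec.mean():.1%}")
```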

Yield Rollups

Yields, on the other hand, scale up in many cases. Many organizations use yield as a high-level metric of process capability. If defects of different types occur independently, like paint scratches and transmission leaks, yields multiply. If 99% of a car line’s production is free of paint scratches and 99.5% is free of transmission leaks, then 99\%\times 99.5\% = 98.5\% is free of both.

Through a route, the Rolled Throughput Yield is the product of the operation yields, as long as the defects at different operations are independent. Only then does a yield improvement at one operation affect it proportionately; otherwise, an improvement at one operation may cause a deterioration, or an improvement, at another as well.
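
A sketch of this rollup, valid only under the independence assumption:

```python
from math import prod

def rolled_throughput_yield(operation_yields: list[float]) -> float:
    """Product of operation yields. Valid only when defects at
    different operations occur independently."""
    return prod(operation_yields)

# The car example: paint scratches and transmission leaks
print(f"{rolled_throughput_yield([0.99, 0.995]):.1%}")  # 98.5%
```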

Decisions Based on Yield

On a route, the plain yield — the ratio of good to total units at final test — is a useful metric. It serves not only in production control but also in production line design. For example, if this yield is under 95%, you cannot practically replace defectives with new starts, even with short lead times. Instead, you overbuild systematically and maintain a buffer of finished units. With a yield of 97% through a process segment of 5 to 10 operations, one-piece flow is feasible, and it then boosts that yield further through rapid problem detection; with a yield under 90%, on the other hand, one-piece flow is impractical.
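
A sketch of the overbuilding arithmetic, assuming you inflate starts just enough to cover the expected losses to defectives:

```python
import math

def required_starts(good_units_needed: int, route_yield: float) -> int:
    """Units to start so that the expected output of good units
    at final test covers the requirement."""
    return math.ceil(good_units_needed / route_yield)

print(required_starts(1000, 0.95))  # 1053 starts for 1,000 good units
```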

First-Pass Yield

The First-Pass Yield is another common metric: the ratio of units that make it through the process without any rework to total units started. Different production organizations, however, may calculate it differently: varying interpretations of what counts as “rework” and pressure on managers to report high yields generate inconsistencies. Calculated consistently over time in a line, it is a useful tracker of improvements.
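
A sketch of the calculation; the contentious input is precisely what you count as rework:

```python
def first_pass_yield(units_started: int, units_reworked: int,
                     units_scrapped: int = 0) -> float:
    """Fraction of started units that make it through with no rework.
    Organizations differ on what they count in units_reworked."""
    return (units_started - units_reworked - units_scrapped) / units_started

print(f"{first_pass_yield(1000, units_reworked=80, units_scrapped=20):.1%}")  # 90.0%
```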

Interactions between characteristics

An integrated circuit, as opposed to a rod, is an open-faced sandwich with multiple layers. Each layer is a patterned thin film that interacts with other layers. Defects in the final circuit can be caused by complex interactions between layers: all layers can be within tolerance limits and free of defects, and the circuit may still not work. In this case, you do not improve overall yield by focusing on each operation. Instead, you apply data science to measurements and attributes collected on each unit, in-process and at final test.
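
As an illustration of this approach, here is a sketch on simulated data, not a recipe from the semiconductor literature: a classifier trained on unit-level, in-process measurements can pick up an interaction between layers that per-layer tolerance checks miss.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated in-process measurements for 5,000 units across 4 layers,
# each layer individually well within its tolerances
X = rng.normal(size=(5000, 4))

# Final-test failure driven by an interaction between layers 0 and 2,
# invisible to per-layer tolerance checks
y = ((X[:, 0] * X[:, 2]) > 1.0).astype(int)

# A tree ensemble picks up the interaction from unit-level data
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```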

Measurement System Analysis

You cannot take the accuracy and precision of measurement instruments or human observers for granted. This is well known to anyone who has “lost” 5 lbs by switching to a new bathroom scale, or observed differences of ±0.5 lbs between successive weighings on the same one.

Measurement System Analysis (MSA) is capability analysis of the measurement or observation system rather than the process itself. As such, it is meta-process capability analysis. When assessing process capability, measurement or observation errors compound with the variability of the process itself. Unless these errors are negligible with respect to the variation in the process, they can make the data unusable.

With measurements, the instrument adds its bias to the true value but fluctuations are not additive. If we represent the fluctuations due to the process by the standard deviation \sigma_{p} and those independently due to the instrument by \sigma_{m}, then the standard deviation for the total is not the sum of \sigma_{p} and \sigma_{m} but instead \sigma = \sqrt{\sigma_{p}^{2} +\sigma_{m}^{2}}.
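
A quick numerical check of this relationship, with made-up values for \sigma_{p} and \sigma_{m}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

sigma_p, sigma_m = 0.0033, 0.0010  # illustrative process and instrument sigmas

true_lengths = 300.005 + rng.normal(0, sigma_p, n)   # process variation
measured = true_lengths + rng.normal(0, sigma_m, n)  # + instrument noise

print(f"observed sigma:    {measured.std():.5f}")             # ~0.00345
print(f"sqrt(sp^2 + sm^2): {np.hypot(sigma_p, sigma_m):.5f}")  # ~0.00345
```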

The bias of an instrument can be corrected by calibrating it or correcting the data. Then Gauge Repeatability & Reproducibility (R&R) uses ANOVA to separate the fluctuations between units with the same operator from the variations between operators.
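
For a balanced, crossed study, in which every operator measures every part the same number of times, the variance components come out of the ANOVA mean squares. The following is a minimal sketch of that decomposition on simulated data, using the standard random-effects formulas (see Burdick, Borror & Montgomery in the references); it is not a full Gauge R&R report:

```python
import numpy as np

def gauge_rr(y: np.ndarray) -> dict:
    """ANOVA Gauge R&R for a balanced crossed study.
    y has shape (parts, operators, replicates); returns variance components."""
    p, o, n = y.shape
    grand = y.mean()
    part_means = y.mean(axis=(1, 2))
    oper_means = y.mean(axis=(0, 2))
    cell_means = y.mean(axis=2)

    ms_part = o * n * np.sum((part_means - grand) ** 2) / (p - 1)
    ms_oper = p * n * np.sum((oper_means - grand) ** 2) / (o - 1)
    inter = cell_means - part_means[:, None] - oper_means[None, :] + grand
    ms_po = n * np.sum(inter ** 2) / ((p - 1) * (o - 1))
    ms_e = np.sum((y - cell_means[:, :, None]) ** 2) / (p * o * (n - 1))

    var_po = max(0.0, (ms_po - ms_e) / n)            # part-operator interaction
    var_oper = max(0.0, (ms_oper - ms_po) / (p * n))  # between operators
    return {"repeatability": ms_e,                    # same operator, same part
            "reproducibility": var_oper + var_po,     # between operators
            "part_to_part": max(0.0, (ms_part - ms_po) / (o * n))}

# Example: 10 parts x 3 operators x 2 replicates, simulated
rng = np.random.default_rng(0)
y = (rng.normal(0, 1.0, (10, 1, 1))     # true part-to-part variation
     + rng.normal(0, 0.2, (1, 3, 1))    # operator biases
     + rng.normal(0, 0.3, (10, 3, 2)))  # repeatability noise
print(gauge_rr(y))
```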

Manual inspections for attributes are more subject to variations between operators than measurements are. The details and clarity of the specs make a difference, but there are factories with thick binders of specs for inspection procedures that do not contain the specifics of the defects to watch out for.

MSA is a recently introduced umbrella term for these activities, and it encompasses Gauge R&R.

Process Capability with Advanced Process Control (APC)

The C_{p} and C_{p_{k}} are derived from Statistical Process Control (SPC), as developed in the 1920s when automatic controls were in their infancy. Control theory did not emerge as an engineering discipline until World War II.  Sensors didn’t proliferate inside production machinery until the 21st century. SPC focuses on characteristics observed on workpieces after completing operations, not on the control of processes in progress.

In 2006, Gary May and Costas Spanos described the following architecture for process control in semiconductors:

[Figure: May & Spanos’s process control hierarchy (2006)]

The different layers have the following functions:

  • Supervisory Control ensures that each machine is running the right program for its current load. It also aggregates vital signs and issues alarms. In May & Spanos’s description, the Supervisory Controller compensates for variations observed at an operation by making adjustments to the next one.
  • Run-by-Run Control replaces or enhances SPC. It performs multivariate analysis of the history of runs through the operation to tweak it for the next run (see the sketch after this list).
  • Real-Time Control encompasses all the automatic controls inside the machine. They range from mass-flow controllers to cyber-physical systems that simulate the process and tweak it as needed.
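
One common way to implement run-by-run control is EWMA-based recipe adjustment. The following is a minimal sketch of that idea on a simulated drifting process; the target, gain, and noise levels are made-up numbers, not taken from May & Spanos:

```python
import random

# Minimal EWMA run-to-run controller: after each run, update an estimate
# of the offset between the output and the process model, and adjust the
# recipe for the next run to compensate.
TARGET = 100.0  # desired output (e.g., film thickness, arbitrary units)
GAIN = 1.0      # assumed process gain: output change per unit of recipe
LAMBDA = 0.3    # EWMA weight: how fast the controller tracks drift

random.seed(0)
offset_estimate = 0.0
recipe = TARGET / GAIN
drift = 0.0

for run in range(1, 11):
    drift += 0.5  # slow, unmodeled process drift
    output = GAIN * recipe + drift + random.gauss(0, 0.5)
    # EWMA update of the estimated offset
    offset_estimate = (LAMBDA * (output - GAIN * recipe)
                       + (1 - LAMBDA) * offset_estimate)
    recipe = (TARGET - offset_estimate) / GAIN  # compensate on the next run
    print(f"run {run:2d}: output {output:6.2f} -> next recipe {recipe:6.2f}")
```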

In 2021, a supplier of semiconductor production equipment offered Run-To-Run (R2R) control as part of Advanced Process Control (APC).

Yields work as measures of process capability in this context, where it is not clear how you would calculate C_{p} or C_{p_{k}}.

References

Santos-Fernandez, E. & Scagliarini, M. (2012) An R Package for Computing Multivariate Process Capability Indices, Journal of Statistical Software, Vol. 47, Issue 7, April 2012. http://www.jstatsoft.org/

May, G. & Spanos, C. (2006) Fundamentals of Semiconductor Manufacturing and Process Control, Wiley-Interscience, ISBN: 978-0-471-78406-7

Burdick, R., Borror, C., & Montgomery, D. (2005) Design and Analysis of Gauge R and R Studies: Making Decisions with Confidence Intervals in Random and Mixed ANOVA Models, American Statistical Association and the Society for Industrial and Applied Mathematics, ISBN: 0898715881

#cpk, #quality, #spc, #yield, #rolledthroughputyield, #firstpassyield, #manufacturing, #R2R