Jul 18 2022
The Most Basic Problem in Quality
Two groups of parts are supposed to be identical in quality: they have the same item number and are made to the same specs, whether at different times in the same production line, at the same time in different lines, or by different suppliers.
One group may be larger than the other, and both may contain defectives. Is the difference in fraction defective between the two groups a fluctuation, or does it have a cause you need to investigate? It's as basic a question as it gets, yet it's a real problem, with solutions that aren't quite as obvious as one might expect. We review several methods that have evolved over the years along with information technology.
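As a minimal sketch of the kind of comparison involved, here is a Fisher exact test on a 2×2 table of defective versus good counts, using scipy. The group sizes and defect counts are made up, and this is only one possible method, not necessarily one of those reviewed in the full post:

```python
# Sketch: is the difference in fraction defective between two groups a
# fluctuation or a signal? One classical approach is Fisher's exact test
# on the 2x2 table of defective/good counts per group.
from scipy.stats import fisher_exact

# Hypothetical data: group A has 12 defectives out of 1,500 parts,
# group B has 27 defectives out of 2,400 parts.
defectives = [12, 27]
good = [1500 - 12, 2400 - 27]

table = [[defectives[0], good[0]],
         [defectives[1], good[1]]]

odds_ratio, p_value = fisher_exact(table)
print(f"Fraction defective A: {defectives[0] / 1500:.4f}")
print(f"Fraction defective B: {defectives[1] / 2400:.4f}")
print(f"Fisher exact p-value: {p_value:.3f}")
# A large p-value is consistent with a mere fluctuation; a small one
# suggests a cause worth investigating.
```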
Oct 4 2022
Strange Statements About Probability Models | Don Wheeler | Quality Digest
In his latest column in Quality Digest, Don Wheeler wrote the following blanket statements, free of any caveat:
Source: Wheeler, D. (2022). "Converting Capabilities: What difference does the probability model make?" Quality Digest.
Michel Baudin's comments:
Not all models assume i.i.d. variables
Wheeler's first statement might have applied 100 years ago. Today, however, there are many probability models that are not based on the assumption that data are "observations from a set of random variables that are independent and identically distributed."
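One familiar example of such a model (offered here as an illustration, not necessarily one from the original list) is a first-order autoregressive, or AR(1), process, in which each observation depends on the one before it. A minimal simulation:

```python
# Sketch of one probability model that is NOT built on i.i.d. observations:
# an AR(1) process, where each observation depends on the previous one.
# Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
phi, mu, sigma = 0.8, 10.0, 1.0   # autocorrelation, level, noise scale
n = 1000

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    # Each point is pulled toward the previous one: the observations are
    # identically distributed in the long run but not independent.
    x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, sigma)

# The lag-1 autocorrelation is far from zero, which i.i.d. data would not show.
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"Estimated lag-1 autocorrelation: {lag1:.2f}")
```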
Probability Models Are Useful
In his second statement, Wheeler seems determined to deter engineers and managers from studying probability. If a prominent statistician tells them it serves no useful purpose, why bother? It is particularly odd when you consider that Wheeler's beloved XmR/Process Behavior charts use control limits based on a model of observations as the sum of a constant and Gaussian white noise.
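For reference, here is a sketch of the conventional XmR limit calculation on made-up individual values. The 2.66 and 3.268 factors are the standard constants for moving ranges of two consecutive points, and the 3-sigma logic behind them is where the implicit Gaussian white-noise model comes in:

```python
# Sketch: XmR / process behavior chart limits, computed the standard way.
# The 2.66 factor is 3/d2 with d2 = 1.128 for subgroups of size 2. Data
# are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
x = 50 + rng.normal(0, 2, size=40)   # hypothetical individual values

moving_range = np.abs(np.diff(x))     # |x[i] - x[i-1]|
x_bar = x.mean()
mr_bar = moving_range.mean()

upper_limit = x_bar + 2.66 * mr_bar   # natural process limits for X
lower_limit = x_bar - 2.66 * mr_bar
upper_range_limit = 3.268 * mr_bar    # limit for the mR chart itself

print(f"Natural process limits: [{lower_limit:.2f}, {upper_limit:.2f}]")
print(f"Upper range limit: {upper_range_limit:.2f}")
```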
Probability models have many useful purposes. They can keep you from pursuing special causes for mere fluctuations and help you find root causes of actual problems. They also help you plan your supply chain and dimension your production lines.
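As one illustration of the supply-chain use, a common textbook calculation sizes safety stock from an assumed Gaussian model of demand over the replenishment lead time; all the figures below are invented:

```python
# Sketch: safety-stock sizing under an assumed Gaussian model of demand
# during the lead time. Service level, demand figures, and lead time are
# hypothetical.
from scipy.stats import norm

daily_demand_mean = 120.0   # units/day
daily_demand_sd = 30.0      # units/day
lead_time_days = 5
service_level = 0.98        # target probability of not stocking out

z = norm.ppf(service_level)  # quantile of the standard normal model
safety_stock = z * daily_demand_sd * (lead_time_days ** 0.5)
reorder_point = daily_demand_mean * lead_time_days + safety_stock

print(f"Safety stock: {safety_stock:.0f} units")
print(f"Reorder point: {reorder_point:.0f} units")
```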
Histograms are Old-Hat; Use KDE Instead
As Wheeler also says, "Many people have been taught that the first step in the statistical inquisition of their data is to fit some probability model to the histogram." It's time to learn something new that takes advantage of IT developments since Karl Pearson invented the histogram in 1891.
Fitting models to a sample of 250 points based on a histogram is old hat. A small dataset today is more like 30,000 points, and you visualize its distribution with kernel density estimation (KDE), not histograms.
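A minimal sketch of what that looks like with scipy's gaussian_kde, on a simulated 30,000-point sample standing in for real measurements, overlaid on a histogram for comparison:

```python
# Sketch: visualizing the distribution of a larger dataset with a kernel
# density estimate (KDE) instead of relying on a histogram alone.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Hypothetical bimodal measurements, e.g., parts from two machines.
data = np.concatenate([rng.normal(10.0, 0.5, 15_000),
                       rng.normal(11.5, 0.3, 15_000)])

kde = gaussian_kde(data)                  # bandwidth chosen automatically
grid = np.linspace(data.min(), data.max(), 500)

plt.hist(data, bins=50, density=True, alpha=0.3, label="histogram")
plt.plot(grid, kde(grid), label="KDE")
plt.xlabel("measurement")
plt.ylabel("density")
plt.legend()
plt.show()
```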