Nov 11 2014
Dr. Deming: ‘Management Today Does Not Know What Its Job Is’ (Part 2) | IndustryWeek
“The usual procedure is that when anything happens, [we] suppose that somebody did it. Who did it? Pin a necklace on him. He’s our culprit. He’s the one who did it. That’s wrong, entirely wrong. Chances are good, almost overwhelming, that what happened, happened as a consequence of the system that he works in, not from his own efforts. In other words, performance cannot be measured. You only measure the combined effect of the system and his efforts. You cannot untangle the two. It is very important, I believe, that performance cannot be measured.”
Oct 11 2014
On 23 December 1924, a group of leading international businessmen gathered in Geneva […]. Present were top representatives from all the major lightbulb manufacturers, including Germany’s Osram, the Netherlands’ Philips, France’s Compagnie des Lampes, and the United States’ General Electric. […] the group founded the Phoebus cartel, a supervisory body that would carve up the worldwide incandescent lightbulb market, with each national and regional zone assigned its own manufacturers and production quotas. It was the first cartel in history to enjoy a truly global reach.
The cartel’s grip on the lightbulb market lasted only into the 1930s. Its far more enduring legacy was to engineer a shorter life span for the incandescent lightbulb. By early 1925, this became codified at 1,000 hours for a pear-shaped household bulb, a marked reduction from the 1,500 to 2,000 hours that had previously been common. Cartel members rationalized this approach as a trade-off: Their lightbulbs were of a higher quality, more efficient, and brighter burning than other bulbs. They also cost a lot more. Indeed, all evidence points to the cartel’s being motivated by profits and increased sales, not by what was best for the consumer. In carefully crafting a lightbulb with a relatively short life span, the cartel thus hatched the industrial strategy now known as planned obsolescence.
Early in my career, I worked with an older engineer who told me that his first professional experience had been in the reliability department of a large, US appliance maker, where his job was to change product designs to make them fail as soon as the warranties expired.
I had heard of such efforts before, but had found the accounts difficult to believe. How could companies spend money to deliberately lower the quality of their products? But this was the testimony of a man I trusted who had personally done it, and hated it.
It was malicious, and it was corporate hubris at its worst. It created opportunities for competitors, which they eventually took. When we were having this conversation, my colleague also told me that the manufacturer was no longer in business.
This article from IEEE substantiates another story of market dysfunction that I had heard of but was not sure was true: the manufacturers of incandescent light bulbs conspired to reduce the lives of the bulbs.
The article gives dates, names, and places. An organization called the Phoebus cartel was set up in Geneva in 1924 by the leading lightbulb manufacturers of the US, Germany, the Netherlands, France, and Japan, and it shortened bulb lives from the 1,500 to 2,000 hours then common down to 1,000 hours.
Now that the incandescent lightbulb itself is becoming obsolete, how do we prevent LED manufacturers from pulling the same stunt?
It should be noted also that designing products to fail quickly is only one form of planned obsolescence. A less nefarious one is simply introducing regular product updates that make today’s cool product lame tomorrow. iPhones last much longer than one year. An iPhone 3GS may still work today, particularly on its original operating system, but it has been made unattractive by five new product releases. In IT in general, you don’t have to play along, and you can save by buying last year’s products.
Aug 8 2014
“That was random!” is my younger son’s response to the many things I say that sound strange to him. My computer has Random Access Memory (RAM), meaning that access to any memory location is equally fast, as opposed to sequential access, as on a tape, where you have to pass through a sequence of locations to reach the one you want.
In this sense, a side-loading truck provides random access to its load, while a back-loading truck provides sequential access.
While these uses of “random” are common, they have nothing to do with probability or statistics, and that is no problem as long as the context is clear. In discussions of quality management or production control, on the other hand, randomness is tied to the application of models from probability and statistics, and misunderstanding it as a technical term leads to mistakes.
In factories, the only example I ever saw of Control Charts used as recommended in the literature was in a ceramics plant that was firing thin rectangular plates for use as electronic substrates in batches of 5,000 in a tunnel kiln. They took dimensional measurements on plates prior to firing, as a control on the stamping machine used to cut them, and they made adjustments to the machine settings if control limits were crossed. They did not measure every one of the 5,000 plates on a wagon. The operator explained to us that he took measurements on a “random sample.”
“And how do you take random samples?” I asked.
“Oh! I just pick here and there,” the operator said, pointing to a kiln wagon.
That was the end of the conversation. One of the first things I remember learning when studying statistics was that picking “here and there” did not generate a random sample. A random sample is one in which every unit in the population has an equal probability of being selected, and it doesn’t happen with humans acting arbitrarily.
A common human pattern, for example, is to refrain from picking two neighboring units in succession. A true random sampler does not know where the previous pick took place and selects the unit next to it with the same probability as any other. This is done by having a system select a location based on a random number generator, and direct the operator to it.
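As a minimal sketch of such a system, assuming the plates are laid out on the wagon in a grid (the 50 × 100 layout below is invented for illustration), a random number generator can pick positions with equal probability and direct the operator to them:

```python
import random

# Hypothetical wagon layout: 50 rows x 100 columns = 5,000 plates.
ROWS, COLS = 50, 100

def random_sample_locations(n):
    """Pick n distinct plate positions, each equally likely."""
    flat_indices = random.sample(range(ROWS * COLS), n)
    # Convert flat indices to (row, column) so the operator can be directed.
    return [(i // COLS, i % COLS) for i in flat_indices]

for row, col in random_sample_locations(5):
    print(f"Measure the plate at row {row + 1}, column {col + 1}")
```

Because the generator has no memory of where the previous pick landed, neighboring plates are selected with the same probability as any others, which is exactly what a human picking “here and there” fails to do.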
This meaning of the word “random” does not carry over to other uses even in probability theory. A mistake that is frequently encountered in discussions of quality is the idea that a random variable is one for which all values are equally likely. What makes a variable random is that probabilities can be attached to values or sets of values in some fashion; it does not have to be uniform. One value can have a 90% probability while all other values share the remaining 10%, and it is still a random variable.
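For illustration, here is a sketch of such a variable in Python, with one value carrying 90% of the probability. It is still a random variable, just far from uniform:

```python
import random
from collections import Counter

# A random variable need not be uniform: value 0 has probability 0.9,
# values 1 through 5 share the remaining 0.1 equally.
values = [0, 1, 2, 3, 4, 5]
weights = [0.90, 0.02, 0.02, 0.02, 0.02, 0.02]

draws = random.choices(values, weights=weights, k=10_000)
print(Counter(draws))  # value 0 dominates, but every value can occur
```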
When you say of a phenomenon that it is random, technically, it means that it is amenable to modeling using probability theory. Some real phenomena do not need it, because they are deterministic: you insert the key into the lock and it opens, or you turn on a kettle and you have boiling water. Based on your input, you know what the outcome will be. There is no need to consider multiple outcomes and assign them probabilities.
There are other phenomena that vary so much, or on which you know so little, that you can’t use probability theory. They are called by a variety of names; I use uncertain. Earthquakes, financial crises, or wars can be generically expected to happen but cannot be specifically predicted. You apply earthquake engineering to construction in Japan or California, but you don’t leave Fukushima or San Francisco based on a prediction that an earthquake will hit tomorrow, because no one knows how to make such a prediction.
Between the two extremes of deterministic and uncertain phenomena is the domain of randomness, where you can apply probabilistic models to estimate the most likely outcome, predict a range of outcomes, or detect when a system has shifted. It includes fluctuations in the critical dimensions of a product or in its daily demand.
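As a rough illustration of that last use, with all numbers invented, a probabilistic model of a process lets you set limits and flag when the system has shifted:

```python
import random

random.seed(0)

# Invented example: a rod-cutting process with nominal length 100.0 mm
# and standard deviation 0.2 mm, which shifts by +0.5 mm halfway through.
before = [random.gauss(100.0, 0.2) for _ in range(50)]
after = [random.gauss(100.5, 0.2) for _ in range(50)]

# Limits from the model of the unshifted process: mean +/- 3 sigma.
ucl, lcl = 100.0 + 3 * 0.2, 100.0 - 3 * 0.2

for i, x in enumerate(before + after, start=1):
    if not lcl <= x <= ucl:
        print(f"Sample {i}: {x:.2f} mm is outside [{lcl:.2f}, {ucl:.2f}]")
```

Before the shift, a point outside the limits is rare; after it, they pile up, and that is the signal to go look at the machine.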
The boundaries between the deterministic, random, and uncertain domains are fuzzy. Which perspective you apply to a particular phenomenon is a judgment call, and depends on your needs. According to Nate Silver, over the past 20 years, daily weather has transitioned from uncertain to random, and forecasters can now give you accurate probabilities that it will rain today. On the air, they overstate the probability of rain, because a wrong rain forecast elicits fewer viewer complaints than a wrong fair-weather forecast. In manufacturing, the length of a rod is deterministic from the assembler’s point of view but random from the perspective of an engineer trying to improve the capability of a cutting machine.
This categorization suggests that a phenomenon that is almost deterministic is, in some way, “less random” than one that is near uncertainty. But we need a metric of randomness to give meaning to an expression like “less random.” Shannon’s entropy does the job. It is not defined for every probabilistic model but, where you can calculate it, it works: it is zero for a deterministic phenomenon and rises to a maximum when all outcomes are equally likely. This brings us back to random sampling. We could more accurately call it “maximum randomness sampling” or “maximum entropy sampling,” but it would take too long.
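To make the metric concrete, here is a short sketch; the distributions are illustrative:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), with 0*log(0) taken as 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1.0]))                                 # deterministic: 0.0 bits
print(entropy([0.9, 0.02, 0.02, 0.02, 0.02, 0.02]))   # near-deterministic: ~0.70 bits
print(entropy([1 / 6] * 6))                           # uniform: maximum, log2(6) = ~2.585 bits
```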