# Variability, Randomness, And Uncertainty in Operations

This elaborates on the topics of randomness versus uncertainty that I briefly touched on in a prior post. Always skittish about using dreaded words like “probability” or “randomness,” writers on manufacturing or service operations, even Deming, prefer to use “variability” or “variation” for the way both demand and performance change over time, but these terms don’t mean the same thing. For example, a hotel room that goes for $100/night from November through March and $200/night from April to October has a price that is variable but not random. The rates are published, and you know them ahead of time.

By contrast, to a passenger, the airfare from San Francisco to Chicago is not only variable but random. The airlines change tens of thousands of fares every day in ways you discover when you book a flight. Based on having flown this route four times in the past 12 months, however, you expect the fare to be in the range of $400 to $800, with $600 as the most likely. The information you have is not complete enough for you to know what the price will be, but it does enable you to set a confidence interval for it.

Beyond randomness, events like the 9/11 attack in 2001, the financial crisis in 2008, the Fukushima earthquake in 2011, or a toy that is a sudden hit for the Christmas season create uncertainty, a higher level of variability than randomness. Such large-scale, unprecedented events give you no basis to say, on 9/12/2001, when airliners would fly again, in 2008 how low the stock market would go, in 2011 when factories in northeastern Japan would restart, or how many units of the popular toy you should make.

In Manufacturing, you encounter all three types of variability, each requiring different management approaches. In production planning, for example:

1. When the volume and mix of products to manufacture are known far in advance relative to your production lead time, you have low-volume/high-mix but deterministic demand. The demand for commercial aircraft is known 18 months ahead of delivery. If you supply this industry with a variety of components that you can buy parts for, build, and ship within 6 weeks, you still have to plan and schedule production, but your planners don’t need to worry about randomness or uncertainty.
2. When volume and mix fluctuate around constant levels or a predictable trend, you have a random demand. The amplitude of fluctuations in aggregate volume is smaller than for individual products. In this context, you can use many tools. You can, for example, manage a mixed-flow assembly line by operating it at a fixed takt time, revised periodically, using overtime to absorb fluctuations in aggregate volumes, heijunka to sequence the products within a shift, and kanbans to regulate the flow of routinely used components to the line.
3. As recent history shows, uncertain events occur that can double or halve your demand overnight. No business organization can have planned responses to all emergencies, but it must be prepared to respond when they strike. The resources needed in an emergency, which must be nurtured in normal times, include a multi-skilled, loyal, and motivated workforce, as well as a collaborative supply chain.
In many cases, you have to improvise a response; in some, vigilance can help you mitigate the impact of the event. Warned by weather data, Toyota’s logistics group in Chicago anticipated the Mississippi flood of 1993. They were shipping parts by intermodal trains to the NUMMI plant in California and, two days before the flood covered the tracks, they reserved all the available trucking in the area, which cost them daily the equivalent of 6 minutes of production at NUMMI. They were then able to reroute the shipments south of the flooded area.
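The heijunka sequencing mentioned in point 2 can be sketched with the classic goal-chasing heuristic: at each position in the sequence, pick the model whose cumulative count lags its target share the most. The demand figures below are hypothetical, for illustration only.

```python
# Goal-chasing heuristic for mixed-model (heijunka) sequencing.
# Demand per shift is a made-up example, not real data.
demand = {"A": 6, "B": 3, "C": 1}
total = sum(demand.values())

produced = {m: 0 for m in demand}
sequence = []
for k in range(1, total + 1):
    # Pick the model furthest behind its target share after k units.
    best = max(demand, key=lambda m: demand[m] * k / total - produced[m])
    produced[best] += 1
    sequence.append(best)

print("".join(sequence))  # a leveled mix rather than batches of A, then B, then C
```

The result spreads each model evenly through the shift, which is what lets the kanban loops for components see a smooth, rather than lumpy, consumption rate.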

The distinction between random and uncertain is related to the distinction between common and special causes introduced by Shewhart and Deming in the narrower context of quality control. In Deming’s red bead game, operators plunge a paddle into a bowl containing both white and red beads with the goal of retrieving a paddle load of white beads only, and most paddle loads are defective.

The problem has a common cause: the production system upstream from this operation is incapable of producing bowls without red beads. In Deming’s experiment, the managers assume it has a special cause: the operator is sloppy. They first try motivating the operator with slogans, then discipline, and eventually fire him. The proper response would have been (1) as an immediate countermeasure, filtering out the red beads before the operation, and (2) as a permanent solution, working with the source to improve the process so that it delivers batches of all white beads every time.

The imprecision — or randomness — of the process is summarized in terms of its capability, which sets limits on observable parameters of outgoing units. Observations outside of these limits indicate that, due to a special cause, to be identified, the capability model no longer matches reality. In the other cases discussed above, the cause is known: you felt the earthquake, or you heard on the news that war broke out… The only challenge you are facing is deciding how to respond.
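As an illustration of capability limits flagging special causes, here is a minimal sketch with made-up measurements: it sets limits at the historical mean plus or minus three standard deviations, and flags new observations outside them as candidates for a special cause.

```python
import statistics

# Historical measurements from a stable process (illustrative data only).
history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 10.3, 9.9]

mean = statistics.mean(history)
sigma = statistics.stdev(history)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # capability limits

# New observations: points outside the limits no longer fit the
# capability model and call for a special-cause investigation.
new_observations = [10.0, 9.9, 11.2]
special = [x for x in new_observations if not lcl <= x <= ucl]
print(special)
```

Points inside the limits reflect the common-cause randomness the process is known to exhibit; only the out-of-limits points warrant a search for an assignable cause.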

Deming made “knowledge of variation” one of the pillars of his “system of profound knowledge.” One key part of this knowledge is recognition of the different types of variability described above and mastery of the tools available to deal with each.

# Probability For Professionals

In a previous post, I pointed out that manufacturing professionals’ eyes glaze over when they hear the word “probability.” Even outside manufacturing, most professionals’ idea of probability is that, if you throw a die, you have one chance in six of getting an ace. Nearly 2,000 years ago, the emperor Claudius wrote a book on how to win at dice, but the field of inquiry has broadened since, producing results that affect business, technology, science, politics, and everyday life.

In the age of big data, all professionals would benefit from digging deeper and becoming, at least, savvy recipients of probabilistic arguments prepared by others. The analysts themselves need a deeper understanding than their audience. With the software available today in the broad categories of data science or machine learning, however, they don’t need to master 1,000 pages of math in order to apply probability theory, any more than you need to understand the mechanics of gearboxes to drive a car.

This wasn’t the case in earlier decades, when you needed to learn the math and implement it in your own code. Not only is that now unnecessary, but many new tools have been added to the kit. You still need to learn what the math doesn’t tell you: which tools to apply, when, and how, in order to solve your actual problems. It’s no longer about computing, but about figuring out what to compute and acting on the results.

Following are a few examples that illustrate these ideas, and pointers on concepts I have personally found most enlightening on this subject. There is more to come, if there is popular demand.

# If Talk Of Probability Makes Your Eyes Glaze Over…

Few terms cause manufacturing professionals’ eyes to glaze over like “probability.” They perceive it as a complicated theory without much relevance to their work. It is nowhere to be found in the Japanese literature on production systems and supply chains, or in the American literature on Lean. Among influential American thinkers on manufacturing, Deming was the only one to focus on it, albeit implicitly, when he made “Knowledge of Variation” one of the four components of his System of Profound Knowledge (SoPK).

# How to Pick the Fastest Line at the Supermarket | New York Times [Debunk]

“[…] Choose a single line that leads to several cashiers

Not all lines are structured this way, but research has largely shown that this approach, known as a serpentine line, is the fastest. The person at the head of the line goes to the first available window in a system often seen at airports or banks. […]”

Sourced through the New York Times

No! Research shows no such thing. The serpentine line does not reduce the customers’ mean time through the system. Little’s Law tells us that, in steady state, regardless of how the queue is organized:

$$\text{Mean time in system} = \frac{\text{Mean number of customers in system}}{\text{Mean arrival rate}}$$
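A quick simulation makes Little’s Law concrete. This is a minimal sketch of a single-server, first-come-first-served queue with assumed arrival and service rates; it checks numerically that the mean time in system equals the mean number in system divided by the throughput:

```python
import random

random.seed(1)
lam, mu, n = 0.8, 1.0, 100_000  # assumed arrival rate, service rate, customer count

# Generate Poisson arrivals, then serve customers first-come, first-served.
arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

departures, t_free = [], 0.0
for a in arrivals:
    start = max(a, t_free)  # wait if the server is still busy
    t_free = start + random.expovariate(mu)
    departures.append(t_free)

# Mean time in system, observed directly per customer.
W = sum(d - a for a, d in zip(arrivals, departures)) / n

# Time-average number in system, from an event sweep over the same window.
horizon = departures[-1]
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, count, last = 0.0, 0, 0.0
for time, delta in events:
    area += count * (time - last)
    count, last = count + delta, time
L = area / horizon          # mean number of customers in system
throughput = n / horizon    # rate of customers through the system

print(W, L / throughput)    # the two values agree, as Little's Law predicts
```

The law holds for any work-conserving queue discipline, which is why rearranging the lines into a serpentine cannot, by itself, change the mean time through the system; what it does change is the variance between customers.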

# The Value Of Surveys: A Debate With Joseph Paris

Joseph Paris and I debated this issue in the Operational Excellence group on LinkedIn, where he started a discussion by posting the following:

#### “Riddle me this…

If the Japanese way of management and their engagement with employees is supposedly the best, yielding the best results, why is there such a lack of trust among employees across the spectrum: employers, bosses, teams/colleagues? From Bloomberg and EY.

#### Japanese Workers Really Distrust Their Employers

Lifetime employment sounds like a great thing, but not if you hate where you work. That seems to be the plight of Japanese “salarymen” and “office ladies.” Only 22 percent of Japanese workers have “a great deal of trust” in their employers, which is way below the average of eight countries surveyed, according to a new report by EY, the global accounting and consulting firm formerly known as Ernst & Young. And it’s not just the companies: Those employees are no more trusting of their bosses or colleagues, the study found.

# Separating Human Work From Machine Work [Infographic]

Most of the work we do today involves interactions with machines. This is true not only in manufacturing but in many other business processes. The machinist works with machining centers, the pilot with an airplane, the surgeon with a laparoscopy robot, the engineer with a variety of computer systems… not to mention the automatic appliances that relieve us of household chores. In fact, I think that being good at working with machines is so essential that I wrote a book about it. For the short version, see the following A3/tabloid infographic. To enlarge it, click on the picture, and then on “View full size” in the bottom right-hand corner.

# “Smart” Part Numbers Strike Again: Wrong Part Shipped

I own two dishwashers in two homes, different models from the same brand, bought in the same store, and both on a service contract. For the first one, the model number is SHE55R56UC; for the second one, SHE65T55UC. Today, we needed help on the first one, but customer service shipped us parts for the second one, which the repair technician discovered when unpacking them.
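A few lines of code show how close these two “smart” part numbers are: they differ in only three of ten characters, which is how such coding schemes invite mix-ups.

```python
# Compare the two model numbers character by character.
a, b = "SHE55R56UC", "SHE65T55UC"
diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
print(len(diffs), diffs)  # only 3 positions differ out of 10
```

Because every digit carries meaning, similar products get nearly identical codes, and a one-character slip selects a valid but wrong part instead of failing outright.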

# Manufacturing Data Cleaning: Thankless But Necessary

Whether you manage operations with paper and pencil as in 1920 or use the state of the art in information technology (IT), you need clean data. If you don’t have it, you will suffer all sorts of dysfunctions. You will order materials you already have or don’t need, and be surprised by shortages. You will make delivery promises you can’t keep, and ship wrong or defective products. And you will have no idea what works and what doesn’t in your plant.

I have never seen a factory with perfect data, and perhaps none exists. Dirty data is the norm, not the exception, and the reason most factories are able to ship anything at all is that their people find ways to work around the defects in their data, from expediters who find parts that aren’t where the system thought they were, to engineers who work directly with production to make sure a technical change is implemented. Mei-chen Lo, of Kainan University in Taiwan, proposed a useful classification of the issues with data quality. What I would like to offer here are pointers on addressing them.
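Many data defects can be caught with routine validation before they drive planning. Here is a minimal sketch, with hypothetical part records and field names, that flags duplicates, missing key fields, and physically impossible quantities:

```python
# Hypothetical inventory records; the field names are assumptions for
# illustration, not a real system's schema.
records = [
    {"part": "SHE55R56UC", "qty": 12, "location": "A-03"},
    {"part": "SHE65T55UC", "qty": -4, "location": "B-11"},  # impossible qty
    {"part": "SHE55R56UC", "qty": 12, "location": "A-03"},  # duplicate
    {"part": "", "qty": 7, "location": "C-02"},             # missing key
]

issues = []
seen = set()
for i, r in enumerate(records):
    if not r["part"]:
        issues.append((i, "missing part number"))
    elif r["part"] in seen:
        issues.append((i, "duplicate record"))
    else:
        seen.add(r["part"])
    if r["qty"] < 0:
        issues.append((i, "negative quantity"))

print(issues)
```

Checks like these are cheap to run on every upload; the hard part, as the classification suggests, is tracing each flagged record back to the process that produced it.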

# Is Vendor Selection Really The First Step in ERP Implementation?

A free guide that you can download from ERP Focus makes vendor selection the first of an 11-step implementation process, while defining success is the last. In other words, they have you choose who you buy from before having a clear idea of what you are trying to accomplish.

It reminds me of a meeting at a client site where ERP implementation was about to begin. “This train has left the station,” I was told. The purpose of the meeting was to draw a “Value Stream Map” for the whole plant, in preparation for ERP, and the participants included managers from Manufacturing, Quality, Production Control, Maintenance, Purchasing, Sales, and Engineering.

# Industry 4.0: Without Stable Processes, Nothing Works | Wiegand’s Watch

This is a translation of Bodo Wiegand’s latest newsletter, about Lean in Germany, followed by my comments:

This week I was with a company that is on its way to implementing Industry 4.0. All the machines were networked. The manager could see from his desk which machines were running and which were not. All the data were collected centrally and also shown locally to the machine operator. The trend was easy to see: one third of the machines had a malfunction, and with an average OEE of 62%, the machines clearly do not always run.

“As long as we buy new machines, we have to live with this,” was his answer to my question.

But it was not only the newest machines: the older ones, too, need not be smeared with oil and dirt, even while generating chips. When asked, the maintenance technicians reported a firefighting factor above 75%; the chief knew it exactly: 76.6%. An OEE of 62% and 76.6% firefighting mean, in plain language, that this business has no stable processes.

But what drives intelligent managers to network their whole company, only to find that the processes are unstable? With some thought, they could have discovered this without networking and invested first in stabilizing the processes. Introducing Industry 4.0 on top of unstable processes will fail. The crucial question is: how do I stabilize the processes and avoid unplanned shutdowns?
