What Can We Learn from NIST’s Next Generation Manufacturing Studies?

When you hear anyone say “Studies show…”, you want to know who studied what and how, so that you can use what Kaiser Fung calls your number sense to decide what, if anything, can be learned.

Since 2008, “Next Generation Manufacturing Studies” have been conducted in the US by the following:

Results are available online up to 2011, and the 2013 study is underway. It is intended to evaluate “awareness, best practices, and achievements” in the following six areas:

  1. Customer-focused innovation.
  2. Engaged people/human-capital acquisition, development, and retention.
  3. Superior process/improvement focus.
  4. Supply-chain management and collaboration.
  5. Sustainability.
  6. Global engagement.

By what methods are these studies conducted? The following is p.27 from the 2011 National Executive Summary:

NGM Methodology

This study is therefore entirely based on questionnaires filled out by a small, self-selected sample of companies rating themselves on a scale of 1 to 5 on issues like “the importance of customer-focused innovation.” It involves no site visits or personal interviews. The responses were only “cleansed to ensure answers were plausible,” a statement that leaves much to the imagination.

This raises the following questions:

  • What can we learn from such a study?
  • Is this the best we can do with 21st-century data mining technology?


Mitigating “Mura,” or unevenness

The Japanese word Mura (ムラ or 斑) is the third member of the Muri, Muda, Mura axis of manufacturing evils. It means unevenness. In terms of volume of activity, if Muri refers to overburdening resources, Mura is overburdening some resources while others wait, or alternating over time between overburdening and underutilizing the same resources.

Unevenness, however, is not only about volumes, but about quality as well. Unevenness in products is even synonymous with bad quality. From production managers facing “unpredictable” environments to academics promoting genetic algorithms or other cures, everyone bemoans how variable, or uneven, manufacturing is. The litany of causes is endless. Following are a few points that I think may clarify the issues:

  1. Mura in space, Mura in time, and Mura in space and time
  2. Degrees of severity: Deterministic, random, and uncertain environments
  3. What is special about Manufacturing?
  4. Internal versus external causes of unevenness
  5. Most useful skills in dealing with Mura

Mura in space, Mura in time, and Mura in space and time

Mura in space is imbalance in the work loads or utilization among resources at the same time; Mura in time, variability in the work load of a resource over time. The two can be present in the same factory. You may notice a kitting team working feverishly while the next one is waiting but, two hours later, you find the roles reversed.
Mura is often symbolized by two trucks arriving in sequence with different loads. I tend to think of working with Mura as moving around a city built on hills. A city built on a plain is even and easy to cross, and is often planned with a grid of numbered streets, like Manhattan in New York, or Kyoto. A city built on hills is uneven and offers many obstacles. San Francisco is built on hills, but its planners have chosen to ignore the terrain and slap a grid of straight streets on it. It makes for great views and dramatic car chases, but its steep slopes challenge your engine, your suspension, and your parking skills. Most hilly cities, like Nagasaki, Japan, for example, instead have streets that follow contour lines and therefore meander. The path by car from point A to point B may be longer than a straight line, but it is a smooth ride.

Navigating the peaks and valleys of product demand is like driving in a hilly city. If you just go straight, you keep alternating between pressing the accelerator and the brake, but by hugging contour lines, you can reach your destination at a steady pace. This is what fighting Mura is about.

Degrees of severity: Deterministic, random, and uncertain environments

Some businesses are deterministic. They are “boring” and predictable. They have no variability. A manager once explained to me the electricity meter business in the market his plant was serving, as follows: “There are 20 million households with electricity meters in the country. Each meter lasts 20 years. Every year, I have to make 1 million.” The same products are made for many years, in stable quantities, and with mature processes that have no problem meeting tolerances. Of course, it only lasts until the advent of a disruptive technology, like smart meters.

This kind of environment is not common but it does exist. If you are in one, you should focus your improvement efforts on the opportunities it offers, and avoid tools that are overkill for it. For example, large, diversified companies that make a corporate decision to deploy the same planning and scheduling system in all their plants burden their simplest and most stable business units with unnecessary complexity.

Other businesses have variations that are best described as fluctuations around a smooth trend. If you make consumer goods, the demand every day is the result of decisions from a large number of independent agents and will vary in both aggregate volume and mix, but within ranges that can be predicted. In terms of quality characteristics, if you fire ceramics, they shrink, by factors that still vary, even though we have been using this process for thousands of years. This level of variability is very common. The best term to describe it is randomness, and there is a rich body of knowledge on ways to work with it, including the Kanban system to regulate fluctuating flows and techniques to adjust processes in order to obtain consistent results from inconsistent materials. In ceramics, for example, you make your parts from a slurry that is a moving average of batches of powders received from the supplier, in order to even out their characteristics.
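The slurry tactic is, at bottom, an averaging filter over incoming batches. A minimal sketch, with hypothetical values for some batch characteristic such as moisture content:

```python
import statistics

def blend_moving_average(batch_values, window=5):
    """Simulate drawing a slurry from a moving average of the last
    `window` powder batches, to even out incoming variability."""
    blended = []
    for i in range(len(batch_values)):
        recent = batch_values[max(0, i - window + 1): i + 1]
        blended.append(sum(recent) / len(recent))
    return blended

# Hypothetical incoming batches with an uneven characteristic
batches = [4.2, 5.1, 3.8, 6.0, 4.5, 5.6, 3.9, 4.8, 5.3, 4.1]
blended = blend_moving_average(batches)

# The blend varies less than the raw batches
print(statistics.pstdev(batches), statistics.pstdev(blended))
```

The wider the window, the smoother the blend, at the cost of carrying more material in the buffer.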

Contrast this with a toy manufacturer who cannot tell ahead of time which one or two products will be hits at Christmas, when most of the year’s sales occur. In process technology, there are similar differences in variability between mature, stable industries and high technology suffering from events like “yield crashes” during which a manufacturing organization “loses the recipe” for a product. Various terms are used to describe such situations, which, following Matheron, I call uncertainty. In such circumstances, the best you can expect from the techniques used to deal with randomness is to let you know that they no longer work. For example, dealerships can shield your plants from fluctuations in consumer purchases, where direct selling would let you find out sooner when demand drops for good or when consumer tastes change.

Calling our environment deterministic, random, or uncertain is always a judgment call. The deterministic electricity meter business turns uncertain with the advent of smart meters. If you view your environment as random, you expect fluctuations within a predictable range, and the signal of a shift into uncertainty manifests itself in changes beyond this range. You can use a variety of tests to detect that such a shift has occurred. Furthermore, with the possible exception of quantum physics, randomness is always in the eye of the beholder, and not intrinsic to a phenomenon.
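The simplest such test checks recent observations against control limits estimated from history. A sketch, assuming roughly normal fluctuations and using made-up numbers:

```python
def shifted_beyond_limits(history, recent, sigmas=3.0):
    """Flag a possible shift out of randomness: return any recent
    observation outside control limits estimated from history."""
    n = len(history)
    mean = sum(history) / n
    sd = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
    lo, hi = mean - sigmas * sd, mean + sigmas * sd
    return [x for x in recent if not lo <= x <= hi]

history = [100, 98, 103, 101, 97, 99, 102, 100, 101, 98]
print(shifted_beyond_limits(history, [99, 101, 100]))   # []
print(shifted_beyond_limits(history, [99, 131, 100]))   # [131]
```

An empty result means the fluctuations are still within their historical range; anything returned is a candidate signal that the environment has changed.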

What is special about Manufacturing?

Manufacturing is not the only kind of business to have high overhead; others include aviation or hospitals. In all such cases, companies must invest upfront in resources that pay off over time, and this is easiest to achieve with activities that place a balanced load on all resources and don’t vary over time — that is, without Mura.

Internal versus external causes of unevenness

Some unevenness comes from outside the organization, in many forms:

  • Fickle customers.
  • Seasonal variations in demand as in the toy industry.
  • Seasonal variations in supply, such as crop seasons in the food industry.
  • Changes in the macro-economy, such as a financial crisis.
  • Natural disasters, like earthquakes, tsunamis, and floods.
  • Raw materials with uneven characteristics, like ores or electronic waste for recycling.
  • Epidemics, as when 10% of your work force has the flu.
  • Unreliable suppliers.

You do not have the power to eliminate this kind of unevenness, but you can use countermeasures to mitigate its effects. On the other hand, you can and should eliminate unevenness that is self-inflicted. If you have not paid attention to balancing the work load of the various stations on your production lines, you are likely to have both overburdened and underutilized operators. Because of the different roles machines play, the workload can rarely be balanced across machines in a line, but the workloads of operators can be.

If you order materials from suppliers, for example by relying on an ERP system to issue orders by an algorithm for timing and quantities that you don’t understand, you may well cause alternations of feast and famine in your suppliers’ order books for materials that you, in fact, consume at a steady pace. This creates unevenness not only in your suppliers’ operations, but also in your internal logistics. In The Lean Turnaround, Art Byrne explains that, at Wiremold, he eliminated volume discounts and incentives for Sales to book the largest possible orders. Instead, he preferred a steady flow of small orders that smoothed the aggregate demand.
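The feast-and-famine effect of batching shows up in even a toy simulation: consumption is perfectly steady, yet a reorder-point rule with a fixed batch size turns it into a lumpy order stream for the supplier. The rule and numbers below are hypothetical:

```python
def batched_orders(daily_demand, batch_size):
    """Convert steady daily consumption into the lumpy order stream
    that a fixed-batch reordering rule sends to the supplier."""
    orders, on_hand = [], 0
    for d in daily_demand:
        on_hand -= d
        if on_hand < 0:           # stockout imminent: reorder a full batch
            on_hand += batch_size
            orders.append(batch_size)
        else:
            orders.append(0)
    return orders

demand = [10] * 20                # perfectly steady consumption
print(batched_orders(demand, 50)) # supplier sees spikes of 50, then silence
```

The supplier sees four days of 50 units and sixteen days of nothing, while the plant actually consumes 10 units every single day.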

Not all resources need to be treated the same way. You want resources that can be described as producers to be generating useful output all the time. Other resources, which we may call responders, must be available when needed, and this is a radically different objective. Unevenness is an enemy for producers, but, unless responders’ work loads provide enough slack, they are unable to respond. Firefighters fighting fires 100% of the time would be unavailable when a new fire breaks out, and the same logic applies to maintenance technicians and operators who work as floaters on a production line. And it applies to machines as well as people. In a machine shop, for example, machines that carry out the primary processes, like hobbing a gear or milling pockets in a slab, are producers, while devices used for secondary processes, like deburring or cleaning, are responders. This is often, but not always, related to the cost of the machines, with expensive machines as producers and cheap ones as responders. However, some of the most expensive equipment, like machining centers, may be bought for its flexibility more than for its capacity, in which case its primary role is to respond to orders for short runs or prototypes.
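The firefighter logic has a standard quantitative form: in a single-server queue with random arrivals and service times (M/M/1), the average wait for a responder grows without bound as utilization approaches 100%. A sketch of the textbook formula:

```python
def mm1_wait(utilization, service_time=1.0):
    """Average wait for a single responder in an M/M/1 queue:
    Wq = service_time * rho / (1 - rho), exploding as rho -> 1."""
    assert 0 <= utilization < 1
    return service_time * utilization / (1 - utilization)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{u:.0%} busy -> average wait {mm1_wait(u):.1f}x service time")
```

At 50% utilization a job waits one service time on average; at 99%, ninety-nine. That is why responders need slack that would be waste for producers.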

Most useful skills in dealing with Mura

Permanently uneven workloads among operators can be addressed by balancing, using Yamazumi charts for manual operations and work-combination charts for operations involving people and machines. If the unevenness pattern shifts or oscillates over time, then the workload itself needs to be smoothed, which is done by the various techniques known as heijunka.
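A Yamazumi chart is, at bottom, a comparison of each operator's work content against takt time. A minimal numeric version, with hypothetical station loads:

```python
def yamazumi(loads, takt):
    """Summarize operator loads against takt time, as a Yamazumi
    chart does graphically: ratios above 1 mean overburdening,
    ratios well below 1 mean waiting."""
    return {operator: round(seconds / takt, 2)
            for operator, seconds in loads.items()}

# Hypothetical work content per cycle (seconds), takt time = 60 s
loads = {"Op1": 55, "Op2": 72, "Op3": 38, "Op4": 60}
print(yamazumi(loads, takt=60))
# Op2 is over takt (overburdened); Op3 is far under (waiting)
```

Rebalancing would move about 12 seconds of work off Op2, most naturally onto Op3.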

Many organizations are not aware of Mura as a problem, and, when aware, are oblivious to patterns in the unevenness that can be used to mitigate or eliminate it. Management, for example, may be struggling to cope with occasional large orders and fail to notice that they arrive like clockwork every other Wednesday from the same customer. A modicum of data mining skills is needed to recognize such patterns in the records of plant activity.
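The “every other Wednesday” pattern is exactly the kind of regularity a first data-mining pass can surface. A sketch against a fabricated order history:

```python
from collections import Counter
from datetime import date, timedelta

def large_order_day_pattern(orders, threshold):
    """Count which weekday the large orders land on -- a first pass
    at spotting 'every other Wednesday' regularity."""
    days = Counter()
    for when, qty in orders:
        if qty >= threshold:
            days[when.strftime("%A")] += 1
    return days

# Fabricated history: a big order every 14 days amid small daily orders
start = date(2013, 1, 2)                      # a Wednesday
orders = [(start + timedelta(days=i), 500 if i % 14 == 0 else 40)
          for i in range(84)]
print(large_order_day_pattern(orders, threshold=400))
```

Seeing every large order land on the same weekday turns a “random” disruption into a schedulable event.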

A Lean Journey: Meet-up: Michel Baudin


Interview on Tim McMahon’s A Lean Journey (www.aleanjourney.com).

Data Mining in Manufacturing versus the Web

Data mining, in general, is the retrieval of information from data collected for a different purpose, such as using sales transaction histories to infer what products tend to be bought together. By contrast, design of experiments involves the collection of observations for the purpose of confirming or refuting hypotheses.

This perspective on data mining is consistent with the literature in expressing purpose, but most authors go further. They include in their definitions that data mining is done with computers, using large databases and specific analytical tools, which I think is too restrictive. The tools they list are the ones they have found useful in analyzing the behavior of millions of users of search engines or commerce websites, and they are not obviously applicable in other areas, such as manufacturing.

During World War II, British analysts used the serial numbers of captured or destroyed German tanks to estimate the numbers produced. Because serial numbers were not attached for this purpose, it was data mining. It used clever statistical models but, obviously, no computers.
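The estimator behind this analysis, known today as the German tank problem, is short enough to sketch; the serial numbers below are made up:

```python
def estimate_total(serials):
    """Minimum-variance unbiased estimate of a production total from
    a sample of serial numbers (the 'German tank problem'):
    N_hat = m * (1 + 1/k) - 1, where m is the largest serial
    observed and k the sample size."""
    k, m = len(serials), max(serials)
    return m * (1 + 1 / k) - 1

# Hypothetical captured serial numbers
print(round(estimate_total([61, 19, 56, 24, 16]), 1))   # -> 72.2
```

The intuition: the largest serial seen underestimates the total, so the estimator pads it by the average gap between sampled serials.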

Today, PhD-level data miners at Google, eBay, or Amazon sift through the page views and click-throughs of millions of users for clues to patterns they can use. The data is automatically captured, accurate, and arrives by the terabyte every day. This “big data” requires parallel processing on clusters of computers and lends itself to the most advanced analytical tools ever developed.

Compared to this fire hose of data, what manufacturing produces is a trickle. In a factory, the master data/technical specs, plans and schedules, status of operations and work in process, and the history of production over, say, 12 months, usually adds up to a few gigabytes. It doesn’t fit on one spreadsheet, but it often does on a memory stick. On the other hand, much of it is still manually generated and therefore contains errors, and it is often structured in ways that make it difficult to work with.

Even if manufacturing companies could hire the data miners away from their current jobs, their experience with e-commerce or web search would not have prepared them well for the different challenges of manufacturing data mining.

There is an opportunity for data mining to contribute to competitiveness in manufacturing, but the approach must start from the needs. It must not be an e-commerce cure in search of manufacturing diseases.

A management perspective on data quality

Prof. Mei-Chen Lo, of National University and Kainan University in Taiwan, worked with operations managers in two semiconductor companies to establish a list of 16 dimensions of data quality. Most are not parameters that can be measured, and should instead be considered questions to be asked about a company’s data. I learned this list from her at an IE conference in Kitakyushu in 2009, and found it useful by itself as a checklist for a thorough assessment of a current state. Her research is about methods for ranking the importance of these criteria.

They are grouped in four main categories:

  1. Intrinsic. Agreement of the data with reality.
  2. Context. Usability of the information in the data to support decisions or solve problems.
  3. Representation. The way the data is structured, or not.
  4. Accessibility. The ability to retrieve, analyze and protect the data.

Each category breaks further down as follows:

  1. Intrinsic quality
    • Accuracy. Accuracy is the most obvious issue, and is measurable. If the inventory data says that slot 2-3-2 contains two bins of screws, then can we be confident that, if we walk to aisle 2, column 3, level 2 in the warehouse, we will actually find two bins of screws?
    • Fact or judgement. That slot 2-3-2 contains two bins of screws is a statement of fact. Its accuracy is in principle independent of the observer. On the other hand, “Operator X does not get along with teammates” is a judgement made by a supervisor and cannot carry the same weight as a statement of fact.
    • Source credibility. Is the source of the data credible? Credibility problems may arise due to the following:
      • Lack of training. For example, measurements that are supposed to be taken on “random samples” of parts are not, because no one in the organization knows how to draw a random sample.
      • Mistake-prone collection methods. For example, manually collected measurements are affected by typing errors.
      • Conflicts of interest. Employees collecting data stand to be rewarded or punished depending on the values of the data. For example, forecasters are often rewarded for optimistic forecasts.
    • Believability of the content. Data can be unbelievable because it is valid news of extraordinary results, or because it is inaccurate. In either case, it warrants special attention.
  2. Context.
    • Relevance. Companies often collect data because they can, rather than because it is relevant. It is the corporate equivalent of looking for keys at night under the street light rather than next to the car. In the semiconductor industry, where this list of criteria was established, measurements are routinely taken after each step of the wafer process and plotted in control charts. This data is relatively easy to collect but of little relevance to the control and improvement of the wafer process as a whole. Most of the relevant data cannot be captured until the circuits can be tested at the end of the process.
    • Value added. Some of the data produced in a plant has a direct economic value. Aerospace or defense goods, for example, are delivered with documentation containing a record of their production process, and this data is part of the product. More generally, the data generated by commercial transactions, such as orders, invoices, shipping notices, or receipts, is at the heart of the company’s business activity. This is to be contrasted with data generated to satisfy internal needs, such as, for example, the number of employees trained in transaction processing on the ERP system.
    • Timeliness. Is the data available early enough to be actionable? A field-failure report that traces a product problem to a manufacturing process as it was 6 months ago is not timely if that process has been the object of two engineering changes since then.
    • Completeness. Measurements must be accompanied by all the data characterizing where, when, how and by whom they were collected and in what units they are expressed.
    • Sufficiency. Does the data cover all the parameters needed to support a decision or solve a problem?
  3. Representation
    • Interpretability. What inferences can you draw directly from the data? If the demand for an item has been rising 5%/month for the past 18 months, it is no stretch to infer that this trend will continue next month. On the other hand, if you are told that a machine has an Overall Equipment Effectiveness (OEE) of 35%, what can you deduce from it? The OEE is the product of three ratios: availability, yield, and actual over nominal speed. The 35% figure may tell you that there is a problem, but not where it is.
    • Ease of understanding. Management accounting exists for the purpose of supporting decision making by operations managers. Yet the reports provided to managers are often in a language they don’t understand. This does not have to be, and financial officers like Orrie Fiume have modified the vocabulary used in these reports to make them easier for actual managers to understand. The understandability of technical data can also be impaired when engineers use cryptic codes instead of plain language.
    • Conciseness. A table with 100 columns and 20,000 rows with 90% of its cells empty is a verbose representation of a sparse matrix. A concise representation would be a list of the row and column IDs with values.
    • Consistency. Consistency problems often arise as a result of mergers and acquisitions, when the different data models of the companies involved need to be mashed together.
  4. Accessibility
    • Convenience of access. Data that an end-user can retrieve directly through a graphic interface is conveniently accessible; data in paper folders on library shelves is not. Neither are databases in which each new query requires the development of a custom report by a specially trained programmer.
    • Usability. High-usability data comes, for example, as lists of property names and values that can easily be tabulated into spreadsheets or database tables, and, from that point on, selected, filtered, and summarized in a variety of informative ways. Low-usability data often comes as a string of characters that first needs to be split into fields, with characters 1 to 5 being one field, 6 to 12 another, etc., and the meaning of each substring then retrieved from a correspondence table, to find that “00at3” means “lime green.”
    • Security. Manufacturing data contain some of the company’s intellectual property, which must be protected not only from theft but from inadvertent alterations by unqualified employees. But effective security must also be provided efficiently, so that qualified, authorized employees are not slowed down by security procedures when accessing data.
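The point made above under Interpretability is easy to see in numbers: the same 35% OEE can hide completely different problems. The figures below are hypothetical:

```python
def oee(availability, speed_ratio, yield_ratio):
    """OEE is the product of three ratios: availability, actual over
    nominal speed, and yield. Only the breakdown locates the loss."""
    return availability * speed_ratio * yield_ratio

# Two machines with the same ~35% OEE but entirely different diseases
print(round(oee(0.50, 0.78, 0.90), 2))   # mostly downtime
print(round(oee(0.90, 0.43, 0.90), 2))   # mostly slow running
```

Both machines report 0.35, yet the first one needs maintenance attention and the second a look at why it runs at less than half its nominal speed.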

Prof. Mei-Chen Lo’s research on this topic was published as “The assessment of the information quality with the aid of multiple criteria analysis,” European Journal of Operational Research, Vol. 195, No. 3 (16 June 2009), pp. 850–856.