Where Have The Scatterplots Gone?

What passes for “business intelligence” (BI), as advertised by software vendors, is limited to basic, poorly designed charts that fail to show interactions between variables, even though scatterplots and elementary regression are taught to American middle schoolers and to shop floor operators participating in quality circles.

But the software suppliers seem to think it is beyond the cognitive ability of executives. Technically, scatterplots are not difficult to generate, and there are even techniques to visualize interactions among more than two variables, such as Trendalyzers or 3D scatterplots. And, of course, visualization is only the first step: you usually need other techniques before basing any decision on the data.
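As a minimal sketch of how little it takes, the following Python snippet draws a scatterplot with an elementary regression line. The file name and column names are placeholders for whatever data a BI tool already holds, not a reference to any particular product:

```python
# Minimal sketch: scatterplot of two process variables with a least-squares trend line.
# "machine_data.csv", "cycle_time", and "defect_rate" are hypothetical placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("machine_data.csv")      # e.g., one row per production run
x, y = df["cycle_time"], df["defect_rate"]

slope, intercept = np.polyfit(x, y, 1)    # elementary regression: first-degree fit
plt.scatter(x, y, alpha=0.5)
plt.plot(x, slope * x + intercept, color="red", label=f"slope = {slope:.3f}")
plt.xlabel("Cycle time")
plt.ylabel("Defect rate")
plt.legend()
plt.show()
```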


“Studies show…” or do they?

Various organizations put out studies that, for example, purport to “identify performances and practices in place among U.S. manufacturers.” The reports contain tables and charts, with narratives about “significant gaps” — without stating any level of significance — or “exponential growth” — as if there were no other kind. They borrow the vocabulary of statistics or data science, but don’t actually use the science; they just use the words to support sweeping statements about what manufacturers should do in the future.

At the bottom of the reports, there is usually a paragraph about the study methodology, explaining that the data was collected as answers to questionnaires mailed to manufacturers and made available online, with the incentive for recipients to participate being a free copy of the report. The participants are asked, for example, to rate “the importance of process improvement to their organization’s success over the next five years” on a scale of 1 to 5.

The results are a compilation of subjective answers from a self-selected sample. In marketing, this kind of survey makes sense. You throw out a questionnaire about a product or a service. The sheer proportion of respondents tells you something about the level of interest in what you are offering, and the responses may further tell you about popular features and shortcomings.

But it is not an effective approach to gauging the state of an industry. For this purpose, you need objective data, either on all companies involved or on a representative sample that you select. Government bodies like the Census Bureau or the Bureau of Labor Statistics collect useful global statistics, like value-added per employee or the ratio of indirect to direct labor by industry, but they are just a starting point.

Going beyond that is so difficult that I don’t know of any successful case. Any serious assessment of a company or factory requires visiting it, interviewing its leaders in person, and reviewing its data. That takes time, money, know-how, and a willing target. It means that the sample has to be small, and there is a clash between the objective of having a representative sample and the constraint of having a sample of the willing.

For these reasons, benchmarking is a more realistic approach, and I know of at least two successful benchmarking studies in manufacturing, both of which, I believe, were funded by the Sloan Foundation:

  • The first was the International Assembly Plant Study, conducted in the late 1980s about the car industry, whose findings were summarized in The Machine That Changed The World in 1990. The goal was not to identify the distribution of manufacturing practices worldwide but to compare the approaches followed in specific plants of specific companies, for the purpose of learning. Among other things, the use of the term “Lean” came out of this study.
  • The second is the Competitive Semiconductor Manufacturing Program, which started in the early 1990s with a benchmarking study of wafer fabrication facilities worldwide. It did not have the public impact of the car assembly plant study, but it did provide valuable information to industry participants.

The car study was conducted out of MIT; the semiconductor study, out of UC Berkeley. Leadership from prestigious academic organizations helped in convincing companies to participate and provided students to collect and analyze the data. Consulting firms might have had better expertise, but could not have been perceived as neutral with respect to the approaches used by the different participants.

The bottom line is that studies based on subjective answers from a self-selected sample are not worth the disk space you can download them onto.

The GM Toyota Rating Scale | Bill Waddell


“In a survey of suppliers on their working relationships with the six major U.S. auto makers – Toyota, Honda, Nissan, Ford, Chrysler and GM – GM scored the worst.  But of course they did.  They are GM and we can always count on such results from them. […] Toyota scored highest with a ranking of 318, followed by Honda at 295, Nissan at 273, Ford at 267, Chrysler at 245, with GM trotting along behind the rest with an embarrassing 244.”

 

Michel Baudin’s comments:

While I am not overly surprised at the outcome, I am concerned about the analysis method. The scores are weighted counts of subjective assessments, with people being asked to rate, for example, the “Supplier-Company overall working relationship” or “Suppliers’ opportunity to make acceptable returns over the long term.”

This is not exactly like the length of a rod after cutting or the sales of Model X last month. There is no objective yardstick, and two individuals might rate the same company behavior differently.

It is not overly difficult to think of more objective metrics, such as the “divorce rate” within a supplier network: at what rate do existing suppliers disappear from the network and others come in? The friction within a given supplier-customer relationship could be assessed from the number of incidents like the customer paying late or the supplier missing deliveries…
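As a rough sketch of the first of these metrics, year-over-year lists of active suppliers are enough to compute a network’s “divorce rate”; the supplier names and sets below are invented for illustration:

```python
# Hypothetical sketch: supplier-network "divorce rate" from year-over-year supplier lists.
# All names and sets below are invented for illustration.

def divorce_rate(previous_year: set, current_year: set) -> float:
    """Fraction of last year's suppliers no longer in the network this year."""
    departed = previous_year - current_year
    return len(departed) / len(previous_year)

suppliers_2012 = {"Acme Stamping", "Brava Plastics", "Carling Wire", "Dorne Castings"}
suppliers_2013 = {"Acme Stamping", "Carling Wire", "Dorne Castings", "Eiko Fasteners"}

print(f"Divorce rate: {divorce_rate(suppliers_2012, suppliers_2013):.0%}")  # 25%
```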

Such data is more challenging to collect, but supports more solid inferences than opinions.


Yet Another Post About Poka-Yoke

A week ago, José Roberto Rolim Nunes  started a new discussion on the TPS Principles and Practice discussion group on LinkedIn by asking “What is Poka-Yoke?” As of today, it has had 42 contributions, including several from Sid Joynson, Jerry O’Dwyer, and Peter Winton. Sid recounts personal communications from Shigeo Shingo, Jerry adds a semiconductor industry perspective, and Peter discusses Poka-Yoke with current production machinery versus what it was in Shingo’s days. Discussion groups often revisit the same topics, but with different participants exploring different details. On Poka-Yoke, I have posted the following before:

  1. Key details on Poka-Yoke/Mistake-Proofing (11/22/2011). A compilation of my inputs to a discussion of Poka-Yoke in the AME discussion group on LinkedIn. Several sections of this post are repeated here, with illustrations added.
  2. Poka-Yoke at Toyota: the Current State (3/10/2013). A review of Mikiharu Aoki’s 2013 book on the subject.

The featured image above is of my favorite Poka-Yoke. The press punches a hole in a fiberboard with no metal parts. At a later step, a metal bracket is mounted in this hole. One day, an operator mistakenly loaded into the press a board that already had a bracket in it, which put the press die out of commission for 36 hours. The device mounted in front of the press is a permanent magnet, which sucks up any board already containing a bracket and prevents it from going in. It was designed and implemented by Hormoz Mogarei’s team a few years ago.

The topics covered here are as follows:

Definition, applicability and boundaries

A Poka-Yoke is a device integrated into a manufacturing operation to prevent human error. As an approach to quality improvement, it is therefore relevant where human error is the main cause of defects, which presupposes that your process is capable and that discrete malfunctions, such as tool breakage, either stop the line automatically or are promptly detected thanks to one-piece flow.

Once you have a capable process and one-piece flow, human error percolates to the top of the Pareto chart of defect causes, and Poka-Yoke can take you to the next level. One feature of Poka-Yoke that is often missing in the literature is that the defect-prevention devices must not add labor.

The reason this is vital is that, otherwise, operators stop using them under pressure and they become ineffective. For example, validating picks by reading bar codes adds labor if an operator has to wave a reader, and is therefore not a Poka-Yoke. If the ID is read automatically while the part travels on a belt, then it is a Poka-Yoke.

Poka-Yoke and Inspections

A common misconception is that only devices that prevent defects from being created qualify as Poka-Yoke. Shigeo Shingo, the inventor of Poka-Yoke, disagreed. To him, “inspection” was not a dirty word. In ZQC, p. 92, he wrote: “Poka-Yoke systems involve carrying out 100% inspections and requiring immediate feedback and action when errors or defects occur.”

Although he does not say this explicitly, what I read between his lines is that, since Poka-Yoke exist to prevent human error, they do not prevent defects from being created by machines, but they ensure that these defects are detected and acted upon immediately when the workpiece is unloaded.

We should remember that Poka-Yoke is about the prevention of human error, not machine malfunction. Even in the best processes, machines will occasionally malfunction in ways that damage a workpiece, and preventing this from happening is outside the scope of Poka-Yoke.

On the other hand, it is a human mistake to let this defective part escape and proceed to the next operation, and a device that keeps this from happening performs mistake-proofing and is therefore a Poka-Yoke, as Shingo defined it. It should also be noted that Lean/TPS includes other forms of 100% inspections as well, such as go/no-go gauge checking on every part, or successive inspection, in which each operator on an assembly line starts by touching every component that was supposed to be installed at the previous station.

Nobody likes inspections, but, if they find defects, they are necessary. An inspection step that fails to uncover a single defect in a million consecutive units is probably unnecessary. When you find that 100% inspection is unnecessary, you don’t replace it with sample inspection, because drawing samples would disrupt the production routine. Instead, you go to first and last piece checking on a production run. And the next step is complete elimination of the inspection step.

An inspection system that just filters out bad parts is not a Poka-Yoke, because it does nothing to prevent the generation of more bad parts. If you have a process that is out of control and routinely produces 5% defectives or more, human error is not the problem, and Poka-Yoke is not the solution. You need to fix the process. The following is an example of a Poka-Yoke to detect empty packages coming out of a machine, from Mikiharu Aoki’s Poka-Yoke at Toyota: the Current State (pp. 168-169):

Poka-Yoke example from Mikiharu Aoki, with translated captions

It does not prevent the machine from putting out an empty box, but it prevents the empty box from going forward. This Poka-Yoke could have been built in the 1960s. Today, an engineer’s first instinct would be to use an electronic check-weigher and program it to trigger the andon as needed.

The mechanism in this example can be implemented by a shop floor team with mostly mechanical skills. An even simpler device that has been used to filter empty boxes on a conveyor is to blast air at them: the empties fly off, while the full ones are unaffected; it does not, however, trigger the andon.

Poka-Yoke and Statistics

On Poka-Yoke versus statistical methods, the following are two quotes, from Shigeo Shingo’s ZQC book and Mikel Harry’s Six Sigma, that I posted a few days ago in another thread on the TPS+1 subgroup:

Shigeo Shingo: “When I first heard about statistics in 1951, I firmly believed it to be the best technique around, and it took me 26 years to be completely free of its spell.” ZQC, p. 54

Mikel Harry: “We believe that statistical knowledge is to the information age what fossil fuel was to the industrial age. In fact, the future of industry depends on an understanding of statistics.” Six Sigma, p. 24

Shigeo Shingo’s and Mikel Harry’s perspectives seem diametrically opposed. Neither Shingo nor Harry, however, takes the trouble to specify the context of their remarks. Shingo’s world is primarily automotive; Harry’s, electronics. My take on this is that they are both right, in non-overlapping universes.

My first experience in manufacturing was in the semiconductor industry, where lack of process capability, at the level of the whole 500+ step wafer process, is the main cause of defects, and is fought by armies of yield enhancement engineers using statistical design of experiments. In this context, statistical tools are indispensable. Later, working in automotive, I found that this was not the case.

When I last looked at this, the semiconductor industry, along with pharmaceuticals, was the largest industrial user of statistical software products. The motivation in pharmaceuticals is compliance with legal mandates for drug approval; in semiconductors, it is internal technical needs.

Poka-Yoke and Usability Engineering

Usability engineering is also an approach that, while falling short of Poka-Yoke, goes a long way towards reducing human errors by leveraging intuition and pre-existing habits. The idea is to make the human interfaces of tools and machines so intuitive that people naturally tend to use them correctly. It does not mistake-proof processes, but it makes mistakes unlikely and reduces training needs. The following picture shows features of the alarm clock I used before switching to a smartphone app. It included both usability engineering and mistake-proofing.

Swiss Army alarm clock, combining usability engineering and mistake-proofing

Usability engineering is applied to the clock face. It is analog and consistent with the style developed over hundreds of years, which is now a cultural constraint, even for digital clocks. The hours are marked in bold numbers with high contrast and a consistent orientation. The hour and minute hands have markedly different lengths, and the hour hand has an arrow. These hands will not be confused, and they glow in the dark.

The alarm and clock time settings, on the other hand, are mistake-proofed. On traditional alarm clocks, you set the alarm time by turning a knob on the back while looking at the clock face in the front, and you often turn the wrong knob, changing the clock time instead. The mistake-proofing device here is a cover that you have to open to access the clock time knob. With the cover closed, you can only change the alarm time.

Mistake-proofing and usability engineering are two different disciplines, with overlapping goals. By making user interfaces intuitive, usability engineering makes mistakes unlikely but does not prevent them. Conversely, you could have a fully mistake-proof machine that would require you to study a manual in order to operate it.

In a car, for example, you could prevent starting the engine while in gear — which is mistake-proofing — and still make the controls unintelligible through confusing captions or unusual locations. In the Ford Edsel, you shifted gears by pressing buttons in the center of the steering wheel, which violated a cultural constraint on shifter location established over 70 years of car making.

The push-button shifter in the center of the Ford Edsel’s steering wheel

The reason you can rent a car from any brand and drive it off without opening a manual is usability engineering, not mistake-proofing. Usability engineering is widely used in consumer goods, which are bought by their end-users, but not in production machines, which aren’t. The principles of Lean equipment design include usability engineering, even though it is not explicitly referenced. Production machinery should have both; most of it currently has neither.

The best source on usability engineering is Don Norman’s The Design of Everyday Things. For its application to airliner cockpits, see Asaf Degani’s Taming HAL. The application of usability engineering to production machinery is discussed in Part I of Working with Machines (pp. 9-84). It is also discussed in an earlier post on Avoiding Lean Wallpaper. Here is an example of Don Norman’s recommendations for a stovetop design:

Natural mapping of knobs to burners

The relative positions of the knobs match the relative positions of the burners they control. It is not a Poka-Yoke, because it doesn’t prevent you from turning the wrong knob; in practice, however, it is the only layout in which nobody does.

Poka-Yoke in Production versus Household Appliances

Why is production machinery so lacking in mistake-proofing and usability engineering?  I think the suppliers are just catering to their customers, the people who select equipment and make purchasing decisions. In most companies, the operators are out of this loop, and their needs are not addressed. Household appliances are loaded with mistake-proofing and usability engineering features because they are bought by their end-users.

As a consequence, manufacturing Poka-Yoke are usually retrofits introduced as part of continuous improvement. To the extent that they are specific to the plant’s use of the machines, this is inevitable. Machine suppliers, however, can help by building in Poka-Yokes that apply to all uses of their equipment, and by making it easy for users to add custom Poka-Yokes.

Poka-Yoke, Jidoka, and Technology

Logically, Poka-Yoke belongs under the Jidoka column of the TPS house. There are only two columns; the other one is Just-In-Time, and Poka-Yoke is not part of Just-In-Time…

The engineers who program control systems to prevent the use of the wrong recipe in semiconductor wafer processing, or to prevent car engines from starting while in gear, are definitely mistake-proofing. What the literature has encouraged us to do, however, is to think of Poka-Yoke as small, cheap devices developed by the people who do the work as part of continuous improvement. I don’t see anything wrong with applying the term to the work of design engineers, as long as we don’t forget about the Poka-Yoke from the floor.

The Poka-Yoke concept is relevant in semiconductor manufacturing, even though process capability issues are dominant, because operator errors have catastrophic consequences. An operator who starts the wrong recipe on a diffusion furnace may lose an entire load of 200 wafers, worth about $250K. The prevention of such errors is mistake-proofing, but it is pursued through high-technology means by engineers who specialize in the computer systems that control process equipment. The following is an illustration of how this was accomplished in the 1990s:

Poka-Yoke recipe control in semiconductors

The control systems embedded in the production machines were themselves powerful computers, and communicated with outside computers using industry-specific protocols. Using commercially available packages, in-house computer engineers programmed an external computer to communicate with both the embedded controller on one side and the plant-wide Manufacturing Execution System (MES) on the other, to prevent the wrong process program from being executed.
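The logic at the heart of such a gate is simple, even if the plumbing around it is not. The following is a minimal sketch, not the actual 1990s implementation; the function names, data structures, and route table are hypothetical:

```python
# Hypothetical sketch of a recipe-check gate between a furnace controller and the MES.
# Names, data structures, and the route table are invented for illustration.

class RecipeMismatch(Exception):
    pass

def expected_recipe(route: dict, lot_id: str, step: str) -> str:
    """Look up the recipe the MES route calls for at this step of this lot."""
    return route[lot_id][step]

def authorize_start(route: dict, lot_id: str, step: str, selected_recipe: str) -> bool:
    """Allow the furnace to start only if the selected recipe matches the route."""
    required = expected_recipe(route, lot_id, step)
    if selected_recipe != required:
        raise RecipeMismatch(
            f"Lot {lot_id}, step {step}: selected '{selected_recipe}', "
            f"route requires '{required}'. Start blocked."
        )
    return True  # the external computer then downloads and starts the recipe on the tool

# Example
route = {"LOT-042": {"diffusion-3": "N-WELL-DRIVE-IN"}}
authorize_start(route, "LOT-042", "diffusion-3", "N-WELL-DRIVE-IN")  # OK
# authorize_start(route, "LOT-042", "diffusion-3", "FIELD-OX")       # raises RecipeMismatch
```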

The literature on Poka-Yoke gives the impression that Poka-Yoke devices are always low-tech, but that is because they use examples from the 1960s. Relying on size differences between products to make them trigger or not trigger switches is fine as long as these differences exist. If, however, you are building differently configured computers in the same cases, the outer dimensions are identical, but you can use RFID tags to achieve the same result.

Imagine a mixed-flow assembly line as in the following picture:

Mixed-flow assembly line

This picture focuses on one assembly station within a mixed-flow assembly line. The line makes a variety of differently configured products. At a supermarket just off the line, a water spider picks kits of parts for the work done at this station and sequences them on a gravity flow rack a few minutes before they are used.

The pick-to-light system itself is not a Poka-Yoke, because it does not physically prevent picking the wrong parts or the wrong quantities. Systems that pop up lids to make parts accessible only in some bins are closer, because they only make the right parts accessible, but they still don’t control the picking quantity. Pick-to-light systems are popular because they are cheaper than automatic dispensers, increase picker productivity, and reduce, if not eliminate, picking errors. But the pick-to-light controller has to know which bins to light up.

How do you go about making sure it does, and that the water spider delivers the kits in the proper sequence to the flow rack, knowing that you cannot rely on the outer dimensions of the product to pinpoint its configuration? The following is a possible solution based on RFID technology:

Mistake-proofing configuration with RFID

The entire bill of materials for the product configuration is loaded onto a high-capacity RFID tag attached to the fixture at the start of the line. Past that point, all the data is held locally, and the operation of the line is decoupled from the central information system of the plant. An RFID proximity reader is located a few stations upstream. Through it, the product sequence and pick lists are fed to the pick-to-light controller just early enough for the water spider to pick the kits in time for assembly.
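As a sketch of the data flow only (the tag payload format, station names, and function names below are all hypothetical), the reader event is turned into a pick list for the station downstream:

```python
# Hypothetical sketch: turn an RFID tag read into a pick list for one station.
# The tag payload, station names, and bin assignments are invented for illustration.
import json

def parse_tag(payload: bytes) -> dict:
    """The tag carries the serial number and the full bill of materials as JSON."""
    return json.loads(payload.decode("utf-8"))

def pick_list_for_station(bom: dict, station: str) -> dict:
    """Extract the parts and quantities installed at this station."""
    return {part: qty for part, (used_at, qty) in bom.items() if used_at == station}

# Example tag content for one unit on the line
payload = json.dumps({
    "serial": "A123",
    "bom": {
        "PSU-650W": ["station-7", 1],
        "RAM-16GB": ["station-7", 2],
        "SSD-1TB":  ["station-9", 1],
    },
}).encode("utf-8")

product = parse_tag(payload)
print(pick_list_for_station(product["bom"], "station-7"))
# {'PSU-650W': 1, 'RAM-16GB': 2} -> sent to the pick-to-light controller for this serial
```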

Poka-Yoke Implementation

When a Poka-Yoke is a simple device, as described in Shingo’s ZQC book, in Productivity Press’s big red book of 240 examples, in Hinckley’s more recent Make No Mistake (2001), or in Aoki’s Poka-Yoke at Toyota (2012), it is implemented differently from a larger-scale project. For these classical Poka-Yoke, the challenge is the idea, and it usually comes from a production operator or a technician.

The implementation cost is petty cash, and it takes a few hours to do, as part of continuous improvement. It does not require a formal economic justification or business case. The barrier is low: if you think the current setup is a mistake waiting to happen, and a $50 change to the operator work station will prevent it, you do it.

Preventing a diffusion furnace in semiconductors from running the wrong recipe is a different story. With the technology as it was when I was involved, you were talking about a $50K retrofit to the local control system, with integration to the overall plant system, so you had to look at the economics. In one case, there had been recipe mistakes about once a quarter, each resulting in the total loss of a load of wafers, worth at this stage about $250K, which worked out to about $1M/year. Investing $50K to save $1M/year sounds like a good investment but, for this kind of Poka-Yoke, you need to have that conversation, and it needs to be documented.
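The arithmetic behind that conversation is short enough to lay out explicitly, using the rough figures quoted above:

```python
# Payback arithmetic for the recipe-control retrofit, using the rough figures above.
incidents_per_year = 4          # about one recipe mistake per quarter
loss_per_incident = 250_000     # value of a lost furnace load at this stage, in dollars
retrofit_cost = 50_000          # one-time cost of the control-system retrofit, in dollars

annual_loss_avoided = incidents_per_year * loss_per_incident   # $1,000,000 per year
payback_months = retrofit_cost / annual_loss_avoided * 12      # 0.6 months
print(f"Annual loss avoided: ${annual_loss_avoided:,}; payback: {payback_months:.1f} months")
```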

Betting on Lean, or …. Analytics versus Empowerment | Bill Waddell


“Management is all about playing the odds. […]  In operations, calculate lot sizes, generate forecasts and set quality standards with enough data and increasingly sophisticated algorithms and statistical methods and you will increase the chances of coming close enough.  At least that is the theory, and the hope.

This is the basic premise of big data and ERP.  With point of sale scanning, RFID, smart phones and all of the other data collecting technologies increasingly in use, the data to feed the engines is more and more available.  The potential and the lure of the data driven, analytical approach to finding the center line and getting more decisions closer to correctness is growing.

The other approach is empowered people.  Recognizing that management cannot be involved in every one of the individual customer interactions and operational, situational, tiny decisions, those calls are left to the people on the spot.  They are expected to rely on their knowledge, understanding of company values and goals, and the information available to them in very real time to decide what to do.[…] The basic question is whether empowered people will get it right more often than big computer.”

Michel Baudin’s insight:

In this article, Bill Waddell presents the data-driven approach to management decision making as contradictory to people empowerment. I do not see these as mutually exclusive.

In 1993, there was a group within Toyota’s logistics organization in the US that, based on weather data, thought that the Mississippi might flood the railroad routes used to ship parts from the Midwest to the NUMMI plant in California. Four days before the flood, they reserved all the trucking available in the Chicago area, for the daily cost of 6 minutes of production at NUMMI. When the flood hit, they were able to ship the parts by truck around the flood zone, and NUMMI didn’t miss a beat.

This is what a good data scientist does.

In Numbersense, Kaiser Fung points out that data analysis isn’t just about the data, but also about the assumptions people make about it. As an example, he attributes the Republican polling fiasco of the 2012 election to a combination of flawed data collection and equally flawed modeling.

In other words, it’s not a computer that comes up with answers from data, but a human being, and the quality of these answers depends as much on the human analyst’s understanding of the underlying reality as it does on the ability to collect clicks from the web or transactions from point-of-sale systems.

Good data analysis does not require petabytes of data. In statistics, a small sample is 10 points; a large sample, 100 points. The difference matters because, with small samples, there are many convenient approximations that you cannot make. But 100 points is plenty for these approximations to work.

With millions of points, the tiniest wiggle in your data will show overwhelming significance in any statistical test, which means that these tests are not of much use in that context. To figure out what this tiny wiggle is telling you about reality, however, you still need to understand the world the data is coming from.
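A minimal simulation makes the point; the numbers are invented and only meant to show the effect of sample size:

```python
# Minimal simulation: a practically negligible shift becomes "overwhelmingly significant"
# once the sample is large enough. All figures are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=100.0, scale=10.0, size=1_000_000)  # e.g., a dimension in mm
shifted  = rng.normal(loc=100.1, scale=10.0, size=1_000_000)  # a 0.1% shift in the mean

t, p = stats.ttest_ind(baseline, shifted)
print(f"t = {t:.1f}, p = {p:.1e}")  # p is far below any conventional threshold,
                                    # yet the shift may be irrelevant in practice
```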

I don’t see an opposition between relying on people and relying on data, because, whether you realize it or not, you are never relying on data, only on people’s ability to make sense of it.
