Jul 16 2019
Updating the 7 tools of QC
A conversation with Franck Vermet about problem-solving tools for factory operators caused me to revisit the 7 tools of QC from 50 years ago and ponder how they should be updated with current data science.
Data Science for Operators, as a book, remains to be written. If you google this phrase today, what comes up is training courses offering to “change your career” by attending a “data science bootcamp.” TIBCO Spotfire has “Workflow Operators” but these are programs, not people.
So the following are tentative answers to questions that haven’t been asked before.
Contents
- A quality problem-solving toolbox for operators
- Ishikawa’s tools and their updates
- From Check Sheets to Event Loggers
- Stratification
- Pareto analysis
- From Histograms to Density Estimators
- From Scatterplots to Principal Components
- From Control Charts to IoT-based go/no-go checking and R2R Controls
- Pie charts, bar charts, line plots, radar charts, and others
- From Fishbone Diagrams to Mind Maps
- Other toolkits
- Implementation concerns
- Conclusions
A quality problem-solving toolbox for operators
In 1970, Kaoru Ishikawa identified the 7 tools of QC that he felt were sufficient for participants in QC circles and could practically be taught to them. Later, he made the (unverifiable) claim that this toolbox was sufficient to solve 90% of the quality problems in all industries.
QC circles still thrive in Asia and in Japanese transplants
50 years later, while QC circles still thrive in Asia, they were a fashion that came and went in the 1980s in American and European companies. In the US, production operators who master and use Ishikawa’s 7 tools are found only in Japanese transplants, like Toyota’s factories. In fact, Toyota alumnae Tracy Richardson from Kentucky and Babs Woodward from the UK confirmed that Ishikawa’s 7 tools were still taught to operators in 2010.
Even among engineers and managers, the level of data literacy implicit in the use of the 7 tools is uncommon, as reflected in the success of Mark Graban’s book Measures of Success, where he argues that managers should look at more than the last data point on performance.
Production operator involvement is still needed
The need for improvement in manufacturing operations, however, endures, and so does the need to involve production operators because:
- They know process details that no one else does.
- The professional staff does not have the bandwidth to solve all the problems.
Toolboxes provide structure and methods to small-group activity
In its day, Ishikawa’s toolbox was what made QC circles different from earlier attempts to organize small group activity. Instead of simply letting people loose on a problem, the toolbox gave them structure and methods.
This thinking has carried over to other types of projects. When a team starts a SMED project, it receives training on the SMED methodology; a cell project, on cell design; a kanban implementation, on the rules for using kanbans. In all these cases, as for QC circles, it is vital to put this training to use immediately. Adults have an efficient mental garbage collector that promptly flushes unused information from memory.
In the US, more sophisticated tools are taught in middle schools than are used in business
Even though many of the 7 tools of QC are taught in American middle schools, they are absent from business documents, where data presentation is limited to visualizations that are trivial, like pie charts, or confusing, like stacked bar charts.
How can we do today what Ishikawa’s toolbox managed to do 50 years ago? Data science and information technology have changed since 1970, and so have production operators. In advanced economies, manufacturing has shrunk as a share of the total workforce and the level of education of production operators has risen. In the 1950s, for example, Toyota recruited middle school graduates; today, high school graduates and above.
Ishikawa’s tools and their updates
As a start, let us examine Ishikawa’s tools, the purposes they served, and what their current equivalents would be. Where applicable, lists of available software tools are included, in particular, within the following environments:
- Excel, because everybody has it.
- Minitab, because it is the system that Six Sigma black belts train on.
- RapidMiner and KNIME, because they are used for analytics by leading Industry 4.0 companies.
- R, because it is what I use.
From Check Sheets to Event Loggers
Check sheets are a manual means of aggregating events by category that, once filled out, look like a horizontal bar chart. Anyone can understand a check sheet but it is slow, labor-intensive and error-prone, and it only produces a summary. The complete event log is lost. You record how many paint scratches you observed but not when they occurred, which might provide clues as to why they occurred.
The following example is from Dad Puo’s blog in Malaysia in 2017:
Will smartphone users fill out manual check sheets?
How do you teach this to a generation of operators who have grown up using smartphones? The best that can be said for collecting data in this fashion is that it makes you observe more closely than you otherwise would. Being on the lookout for items to check keeps your eyes focused on the operation and the product, but you really can’t justify using a technique based only on its side effects. The primary purpose of data collection is to collect data.
Collect event logs, then summarize
If you have an event log, with categories assigned to each event, then you can cross-tabulate it into event counts by category. It, therefore, makes sense to replace the check sheet with a form of event logger.
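As a minimal sketch, assume the event log has been retrieved as a CSV file with hypothetical column names (timestamp, category); the check-sheet summary then takes one line, and the timing information that a paper check sheet discards is preserved:

```r
# Minimal sketch: summarizing an event log retrieved as a CSV file.
# The file name and column names (timestamp, category) are hypothetical.
events <- read.csv("paint_defect_log.csv", stringsAsFactors = FALSE)
events$timestamp <- as.POSIXct(events$timestamp)

# The check-sheet summary: event counts by category, largest first.
sort(table(events$category), decreasing = TRUE)

# Because the full log is preserved, you can also see when the events occurred,
# for example as counts by category and hour of the day.
table(events$category, format(events$timestamp, "%H"))
```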
SCADA systems and MESs do record events, uploaded from PLCs or other types of controllers. Engineers and managers should know how to validate their accuracy and retrieve them in the form of tables. When engineers support improvement teams, they bring their laptops.
PLC suppliers now provide Human-Machine Interfaces (HMI) that perform functions traditionally handled by SCADA systems. In particular, they can be used to collect and send event logs to higher-level systems. The following picture is of a Siemens Simatic HMI:
If the skill of retrieving and summarizing event logs were passed on to technicians and operators, they could only use it if at least one connected computer were made part of the standard package available to each team, along with Creform, Post-It notes, etc.
Set up project-specific event logging
In addition to event logs that are already collected by existing systems, an improvement team may need to collect new ones. Systems that log sensor-detected events over the network exist but currently require high-level engineers to set up and debug. Today, this can be done as part of engineering support of improvement projects. In the long run, it should be simplified to the point that teams can set it up themselves.
Use No-Code Development Platforms (NCDP) for manual logging
Finally, there are — and probably always will be — events for which no sensor is available yet. These must be manually detected but in a way that requires less work than check sheets and produces logs that can be exploited electronically.
For this purpose, No-Code Development Platforms (NCDP) could be used. They allow a technician to configure data collection on a mobile device by clicking and labeling a few boxes.
Stratification
“Stratification” in the context of Ishikawa’s 7 tools refers to slicing and dicing the data based on a variety of criteria. It covers much of the data manipulations performed today in manufacturing with Excel, culminating in pivot tables.
Work with tables, not spreadsheets
The abuse of Excel for data management is well known and training project teams to use it more would make the situation worse. What they need is the ability to perform basic operations on tables.
In Excel, however, for historical reasons, you work instead with spreadsheets, which are large grids of rows and columns on top of which you overlay tables. This unnecessary complexity is the root cause of the many errors found in Excel worksheets, such as:
- Missing rows or columns when selecting a table by dragging a cursor over an array of grid cells.
- Color-coding grid cells without explaining the meaning of each color and needing add-ons to summarize the cells by color.
- Formulating calculations in terms of grid cells instead of variable names, which is error-prone and makes formulas unnecessarily difficult to read.
The tools that have been developed specifically to work with and share tables are relational databases, but most are not fit to be used by improvement project teams. In such a database, you manipulate tables without worrying about missing rows or columns, calculations are defined in queries in terms of column names, and all the information is stored explicitly in columns. You may then use colors when viewing tables to express information that is stored otherwise.
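As an illustration, the following is a hedged sketch, with hypothetical column names, of a calculation stated against a table rather than against grid cells; the formula reads in terms of variables, not cell addresses like =B2/C2:

```r
# Sketch of working with a table instead of a grid of cells.
# The table and its column names (line, good_units, total_units) are hypothetical.
production <- data.frame(
  line        = c("A", "B", "C"),
  good_units  = c(940, 880, 1010),
  total_units = c(1000, 950, 1060)
)

# The calculation refers to column names, not to cell addresses:
production$yield <- production$good_units / production$total_units
production

# The equivalent query in a relational database would read:
#   SELECT line, good_units / total_units AS yield FROM production;
```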
To pull, visualize, and summarize data from multiple sources, in the Microsoft world, you can use Power BI. Tableau software is an alternative applicable in other environments as well.
Pareto analysis
The purpose of Pareto analysis is to tell the vital few from the trivial many among categories, in order to identify the most promising projects. It does, however, have the two following flaws:
- In quality, the most common defect may be difficult to eliminate. It may, for example, require knowledge of the physics and chemistry of the process that is not present in the organization.
- When you stop final inspections or tests on a unit as soon as the first defect is found, you may find that the frequencies of the second and third most frequent defects rise when you eliminate the first.
The Pareto chart standard format is obsolete
In addition, the standard Pareto chart, as shown in the figure below, is graphically obsolete.
Juran introduced the Pareto diagram as a tool to rank defect categories, and it has been taught to generations of professionals and participants in small-group problem-solving activities. It is the way they expect it to be presented. If you consider the chart out of this context, however, you find that it violates many principles of good chart design. Following are a few objections:
A bar for each category is appropriate but a line for cumulative values is not. Lines should only be used where interpolation makes sense, and interpolation between categories does not. Furthermore, the use of vertical bars suggests that the x-axis represents time. Where it doesn’t, horizontal bars are preferred, as in the following figure:
Pareto analysis is a case where an ordered table actually outperforms the chart.
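As a minimal sketch with hypothetical defect counts, the ordered table is a sort and a cumulative sum:

```r
# Sketch of Pareto analysis as an ordered table (hypothetical defect counts).
defects <- data.frame(
  category = c("Scratch", "Dent", "Run", "Overspray", "Dirt"),
  count    = c(12, 45, 8, 20, 97),
  stringsAsFactors = FALSE
)

pareto_table <- defects[order(-defects$count), ]
pareto_table$cum_percent <-
  round(100 * cumsum(pareto_table$count) / sum(pareto_table$count), 1)
pareto_table
```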
Have only one y-axis per chart
Another objection to the traditional chart is that it has two y-axes with different scales. Once you are used to it, you know that the scale on the left is for frequencies and the one on the right for cumulative percentage.
On the other hand, if you see the chart for the first time, you may be confused. In fact, ggplot2, a software package that its developer Hadley Wickham describes as “elegant graphics for data analysis,” refuses to produce a chart with two y-axes having different scales. Wickham considers it to be such a bad design that he makes it impossible.
The closest you can come to it with ggplot2 is a stack of two charts with the same x-axis, each with its own y-axis. At the cost of taking up more space, it removes any ambiguity as to which axis applies to which data series.
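A minimal sketch of this stacked layout, reusing the pareto_table from the sketch above and assuming the patchwork package to assemble the two ggplot2 charts; the cumulative values are shown as points rather than a line, for the reasons given earlier:

```r
library(ggplot2)
library(patchwork)  # assumed here to stack the two charts; cowplot would also work

# Keep the categories in descending order of frequency on both charts.
pareto_table$category <- factor(pareto_table$category,
                                levels = pareto_table$category)

p_counts <- ggplot(pareto_table, aes(x = category, y = count)) +
  geom_col() +
  labs(x = NULL, y = "Count")

p_cum <- ggplot(pareto_table, aes(x = category, y = cum_percent)) +
  geom_point() +
  labs(x = NULL, y = "Cumulative %")

# Two charts, same x-axis, each with its own unambiguous y-axis.
p_counts / p_cum
```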
In conclusion, chart standards are not sacred. The standard Pareto chart may look “professional” but, from today’s perspective on data communications, it needs to be improved.
From Histograms to Density Estimators
As a way to show the distribution of a sample of a numeric variable among bins, the histogram is useful, but better tools have appeared since 1960. Originally, histograms were produced by hand; now they are most commonly generated with Excel.
The histogram of your data is a jagged skyline to which you mentally fit a smooth line. You may see one mode or two, a wide flat zone, skewness, a sharp peak, outliers, etc.
One of the shortcomings of histograms is the dependence on arbitrary bin sizes and the loss of information from reducing measurements to membership in bins. The second shortcoming is that fitting a probabilistic model is entirely up to you.
KDE is a richer and easier alternative to histograms
The alternative has the intimidating name of kernel density estimation (KDE), which directly translates the list of points into a smooth distribution function. I don’t see why this should be any more difficult to understand than a histogram as it provides a clearer view of the distribution.
The following figure explains how you do it: you replace each data point with a distribution function centered on it, and add them all up. It’s impossible manually but trivial with software.
The picture, from the Wikipedia article about KDE, uses Gaussians centered on each point. Of course, they can be scaled by varying σ. The software uses a default value that usually works but you can play with it, and you can use other distributions. The Gaussians have the unique characteristic that, as you increase σ, the peaks always come down and the valleys always go up.
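With R’s density function, named in the list below, the whole construction is one call; the following sketch also spells it out by hand on simulated data, to show that it really is just a sum of Gaussians:

```r
# KDE on simulated, bimodal data: the built-in way and the spelled-out way.
set.seed(1)
x <- c(rnorm(50, mean = 10), rnorm(30, mean = 14))

d <- density(x)            # Gaussian kernel, bandwidth chosen automatically
plot(d, main = "Kernel density estimate")

# The same estimate by hand: one Gaussian per data point, averaged.
grid  <- seq(min(x) - 3, max(x) + 3, length.out = 512)
sigma <- d$bw              # reuse the automatically chosen bandwidth
kde   <- sapply(grid, function(g) mean(dnorm(g, mean = x, sd = sigma)))
lines(grid, kde, lty = 2)  # overlay the hand-rolled estimate; the two curves match
```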
Common software packages do KDE
You can perform KDE with common software packages as follows:
- In Excel, use the NumXL add-in.
- Minitab, version 19, does not have a KDE function but there was an add-in called NMODE14 for version 14 from the Royal Society of Chemistry.
- In KNIME, there is a 3rd-party add-on called Knime_KDE_Node.
- In RapidMiner, this is done by executing an R script.
- In R, use the density function.
- In Python, use KernelDensity.
- In Matlab, use the ksdensity function.
- In Weka, use KernelEstimator.
- You can do it in SAS by selecting the kernel option when producing a histogram.
From Scatterplots to Principal Components
Scatterplots have not aged but they have vanished from routine business communications. In engineering and quality, they visualize relationships between two parameters of the same operation or product parameters at different operations.
Manufacturing applications include plotting time-to-repair versus time-between-failures on a machine that you run to failure, or order size versus time between orders from a given customer.
In the first case, you might want to determine whether a periodic maintenance plan would increase availability; in the second, whether an erratic ordering pattern masks a smooth consumption by the customer, who may be better served on a replenishment basis. For details, see Where have all the scatterplots gone?
Scatterplots are visualization only
Within the 7 tools of QC, the scatterplot is not accompanied by a discussion of correlation or regression. Presumably, the math involved would be beyond what is expected of operators. Consistent with this, common manipulations that are easy with software, like changing one or both axes to a logarithmic scale, might be omitted in an updated version.
Turn scatterplots into heat maps for large data sets
Suppose you have a dataset with 10,000 points or more, representing, for example, one day’s production of an auto part at a takt time of 5 seconds. A scatterplot of two parameters measured on every part would be a black blob. A more informative plot can be obtained by reducing the data to a grid of cells, counting the data points in each cell, and color-coding the cells, as in the following:
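A hedged sketch of such a binned plot with ggplot2, on simulated data standing in for a day of measurements:

```r
library(ggplot2)

# Simulated stand-in for ~17,000 parts measured on two parameters.
set.seed(2)
parts <- data.frame(
  diameter = rnorm(17000, mean = 25.00, sd = 0.02),
  length   = rnorm(17000, mean = 80.00, sd = 0.05)
)

# A plain scatterplot of this many points is a black blob; binning into a grid
# of cells and color-coding the count per cell shows where the density is.
ggplot(parts, aes(x = diameter, y = length)) +
  geom_bin2d(bins = 50) +
  labs(fill = "Parts per cell")
```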
Use dimensionality reduction to view more than 2 variables
What happens when you have more than 2 variables, like multiple critical dimensions measured on the same part? A cloud of points in more than 3 dimensions is impossible to visualize, so we look for other ways to show the data. One obvious method is to plot a matrix of scatterplots for every pair of variables. The following example is with three measurements of flower parts:
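Assuming the example uses the classic iris measurements that ship with R, such a matrix is a single call:

```r
# Scatterplot matrix of three flower measurements, assuming the iris data
# that ships with R; coloring the points by species is optional.
pairs(iris[, c("Sepal.Length", "Sepal.Width", "Petal.Length")],
      col = iris$Species)
```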
Obviously, it would be neither practical to produce nor easy to interpret with 100 variables instead of 3, and these pairwise projections do not give an overall view of the cloud of points.
One alternative is to project the data on the plane that contains as much of the information in the full cloud as possible. Generally, this is called “dimensionality reduction,” and the most common method for it is called Principal Component Analysis (PCA). The math for it was worked out in 1901, but it couldn’t be used without computers.
A basic PCA example explains the technique
Let’s see how it looks in the basic example provided in R, a list of data on 32 models of car, from Motor Trend magazine in 1974. For each car, you have 11 features. PCA gives you the following plane, containing 86% of the total variance in the 11-dimensional cloud:
It shows the projection of the cloud of points on this plane, and of the axes for the individual measurements. You can see, for example, that you get more miles per gallon as you move left, and more power, more cylinders, a higher displacement, and a higher weight on the right.
The points are color-coded to separate the major centers of the car industry at that time, Europe, Japan, and the US. The ellipses surround 2/3 of the points for each group.
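A minimal sketch of this analysis with base R; note that the region grouping and the ellipses in the figure require information and packages beyond what ships with mtcars, so they are only indicated in comments:

```r
# Sketch of PCA on the mtcars data that ships with R.
pca <- prcomp(mtcars, scale. = TRUE)  # scale.: give all 11 features equal weight

# Share of the total variance captured by the first two principal components.
summary(pca)$importance["Cumulative Proportion", 2]

# Projection of the 32 cars and of the 11 measurement axes on that plane.
biplot(pca, cex = 0.6)

# The color-coding by region in the figure would need a separate, hand-built
# vector such as region <- c("Japan", "US", "Europe", ...), which is not in
# mtcars, and the 2/3 ellipses a package such as ggbiplot.
```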
Drinking water quality data from the US government are a real example
A dataset of pharmaceutical content in drinking water, measured in nanograms/liter for >100 different drugs from 2013 to 2015, was recently published by the US Geological Survey. Its PCA looks as follows:
Even though the axis captions are not readable, you can see that all the axes point to the right, and therefore that the cleanest water is at the far left and the dirtiest at the far right. It also tells you that you can use the first principal component, PC1, as an overall index of contamination. You could further analyze this data by grouping the points by time of collection or geographical area.
Can PCA be used by production operators?
This technique has been used in business publications without mentioning its name to generate, for example, “maps” of the labor force based on parameters like years of education, income, years in current job, etc. These infographics show that PCA can be used to communicate with the general public.
Can it be used in quality problem solving by production operators? Scatterplots are the first step. Circle members who are proficient in using scatterplots may be able to make sense of PCA, if properly packaged.
Common software packages do PCA
You can perform PCA with common software packages as follows:
- In Excel, use the NumXL add-in.
- In Minitab select Stat > Multivariate > Principal Components.
- In KNIME, use the PCA node.
- In RapidMiner, use the PCA core function.
- In R, use the prcomp or the princomp function.
- In Matlab, use the pca function.
- In Weka, use the PrincipalComponents filter.
- In SAS, use the PCA procedure.
From Control Charts to IoT-based go/no-go checking and R2R Controls
A toolbox that lists Pareto charts and control charts side by side, as if they were of comparable complexity, is unbalanced, and presenting control charts as “simple” is misleading.
Control charts are sophisticated but obsolete
Control charts are, in fact, a sophisticated tool based on the statistical state of the art 100 years ago, cleverly working around the limitations of that era’s information technology. There are entire books just about this topic.
The most recent types, the cumulative sum control chart and the XmR chart, are from the 1940s. They were the best you could do with the ideas of that time, paper, pencils, slide rules, and books of tables.
These charts are supposed to show the evolution over time of one quality characteristic measured after the operation is complete and issue alarms when it shows evidence of a change with an assignable cause. There was, at the time, no way to capture and analyze the vital signs of the process itself during the operation.
The x-axis, on a control chart, does not represent time but the sequence of parts that went through an operation. Two consecutive points represent two lots, or workpieces, processed consecutively, regardless of whether the time between them was 1 minute or 1 day, which may make a difference if the operation uses materials with expiration times, like paints or glues.
Except as wall decorations for the benefit of outside auditors, they are not used today for two reasons:
- Mature processes have capabilities beyond the range where these charts are useful; for example, machine tools in automotive can hold tolerances ten times tighter than required.
- High-technology processes, like semiconductor wafer processing, have process capability issues that are beyond the range of what these tools can address.
In mature processes, move to IoT-connected go/no-go checking
In mechanical manufacturing, after-the-fact validation of operations is now mostly done through go/no-go gauges applied to 100% of the workpieces. The technical and operational aspects of go/no-go gauges can be taught as part of a toolset for operators, and their natural evolution is towards connection through the IoT.
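What “connection through the IoT” can mean in practice is that each check posts a timestamped pass/fail record to a collection point instead of adding a tally mark on paper. The following is a hedged sketch, with a hypothetical endpoint and payload, of what such a posting could look like:

```r
library(httr)  # for the HTTP POST; the endpoint below is hypothetical

# Sketch: report one go/no-go check result to a data collection service.
report_check <- function(part_id, gauge_id, passed,
                         endpoint = "http://example.local/api/checks") {
  POST(endpoint,
       body = list(part_id   = part_id,
                   gauge_id  = gauge_id,
                   passed    = passed,
                   timestamp = format(Sys.time(), "%Y-%m-%dT%H:%M:%S%z")),
       encode = "json")
}

# A connected gauge, or an operator's terminal, would call this once per part:
# report_check("PN-4711-0001", "GO-NOGO-07", passed = FALSE)
```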
In high technology processes, consider Run-to-Run controls
In semiconductor manufacturing, on the other hand, the descendants of the control charts are Run-to-Run (R2R) controls. R2R is, as of today, so little known that it doesn’t have a Wikipedia article. It is part of automatic process control and out of reach of operators.
This is how Applied Materials describes its offering in this space:
“SmartFactory R2R offers a patented model predictive control solution providing a multivariate, constrained, optimization-based process controller.”
R2R is described as follows in a 2004 paper:
The R2R control methodology combines response surface modeling, engineering process control, and statistical process control. The main objective of such control is to manipulate the recipe to maintain the process output of each run as close to the nominal target as possible.
[…]
A general control scheme is presented that can compensate for a variety of noise disturbances frequently encountered in semiconductor manufacturing. The controller can also compensate for various system dynamics, including autocorrelated responses, deterministic drifts, and varying process gains and offsets. Self-tuning controllers are developed to provide on-line parameter estimation and control.
A recursive least squares algorithm is normally used to provide on-line parameter estimation to the controller. This type of control strategy used in the proposed self-tuning controller applies the principle of minimizing total cost (in the form of an expected off-target and controllable factors adjustment) to obtain a recipe for the next run.
It should be noted that tweaking the process after every run is not the objective of control charts, even though they have been used this way.
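The quoted description covers a broad family of controllers, but the core idea can be illustrated with its simplest member, an EWMA run-to-run controller that re-estimates a drifting process offset after every run and adjusts the next recipe accordingly. This is a generic textbook sketch on a simulated process, not any vendor’s algorithm:

```r
# Generic sketch of an EWMA run-to-run controller on a simulated process.
# Assumed process model: output = offset + gain * recipe + noise, with drift.
set.seed(3)
target <- 100
gain   <- 2.0    # assumed process gain
lambda <- 0.3    # EWMA weight on the newest observation

offset_hat <- 0                            # running estimate of the offset
recipe     <- (target - offset_hat) / gain
outputs    <- numeric(50)

for (run in seq_along(outputs)) {
  true_offset  <- 5 + 0.3 * run            # simulated process drift
  outputs[run] <- true_offset + gain * recipe + rnorm(1, sd = 1)

  # Update the offset estimate from this run, then set the next run's recipe.
  offset_hat <- lambda * (outputs[run] - gain * recipe) + (1 - lambda) * offset_hat
  recipe     <- (target - offset_hat) / gain
}

plot(outputs, type = "b", xlab = "Run", ylab = "Process output")
abline(h = target, lty = 2)  # the outputs track the target despite the drift
```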
Pie charts, bar charts, line plots, radar charts, and others
This is a hodgepodge of descriptive statistical graphics that includes some of dubious value, like pie charts and stacked bar charts, and others, like radar charts, that may be a challenge for operators to understand.
Chart junk has degraded visualization
Since the 1970s, these charts have mostly been subject to the addition of graphic junk, like 3D bars or pie charts seen at an angle, which make the small amount of information they contain more difficult to retrieve.
For more on chart junk, how to avoid it and what to replace it with, see
Metrics in Lean – Chart junk in performance boards and presentations – Michel Baudin’s Blog
Use colors and collocate captions with their object
In addition to the chart type, there are many details in the production of charts in print or on a screen that enhance their readability. This includes the use of color to separate lines on the same chart and the placement of captions next to the data they describe rather than in a legend block to the side.
The following examples show the traditional format, as used in Thomas Piketty’s Capital in the 21st Century and an improved version:
Make charts easy to annotate
When you use charts in circle projects, as well as in daily management, you annotate them. A chart that stays pristine is one that no one pays any attention to. It is easy to annotate paper charts, but not so easy to capture electronically the notes made on paper.
On the other hand, you can annotate charts on screens if the screen is an interactive whiteboard. It looks like an ordinary whiteboard but it can display software-generated charts and allow viewers to use “pens” to annotate the screen.
From Fishbone Diagrams to Mind Maps
While brainstorming, teams build a fishbone diagram, or Ishikawa diagram, or cause-and-effect diagram, to identify possible causes of a quality problem. The causal relationships are not precisely defined, which is deliberate, because requiring them to be as specific as in, say, a fault tree would stop brainstorming.
Attractive though it may be in a classroom environment, the fishbone diagram is almost never used in projects. In quality problem solving, Honda, for example, uses Why-Why analysis instead, the first “Why” being the reason for the occurrence of a defect, and the second the reason it was not immediately detected. You get to ask why five times in each column but the result is not formatted as a fishbone diagram.
Fishbone diagrams are still used, though perhaps not as intended
Crispin Brown sent me the following recent example where he used it with a team, albeit for a purpose other than quality diagnosis.
Curtis Anderson, from Genentech, reported using the fishbone diagram as a way to make sure that a team does not omit a whole category of possible causes, like Materials.
You can use mind-mapping software to enhance fishbone diagrams
To free yourself from the limitations of a whiteboard or a physical wall, you can use tools like MindGenius to generate and navigate through a cause-and-effect structure of unlimited size, as in the following example:
It doesn’t look like a fishbone but it has the same structure. You expand a node by clicking + and collapse it with -. And you can attach text to any node.
Other toolkits
In the early 1980s, a decade after Ishikawa’s 7 tools of QC, the Japanese Union of Scientists and Engineers (JUSE) released a set of “New 7 tools of QC” (N7) for managers and the support staff, containing no statistical tool. The N7 list includes only the following 6 tools:
- Affinity charts
- Interrelationship diagrams
- Work breakdown trees
- Matrices
- Flow charts
- Process-Decision-Program-Charts (PDPC)
Other than affinity charts and matrices, these tools are all bubbles-and-arrows charts with different semantics. This effort may be the origin of the notorious X-matrices. Toyota does not use them, but many Lean consultants have been promoting them.
Some of these tools look borrowed from software engineering. The interrelationship diagrams, for example, look like simplified versions of the Entity-Relationship diagrams from database design, and the N7 flow charts look like a graphic technique for representing algorithms that was abandoned in the 1970s as programs became easier for humans to read.
A major difference between the N7 and the original 7 tools is that none of the N7 involves any numbers. Training operators but not the staff to work with numbers is odd.
Many other toolkits are available for Lean implementation but not specifically for operators. Productivity Press has had a “for operators” series with books on topics like SMED, cells, or Kanban. And there is, of course, TWI.
Implementation concerns
If we establish a list of 7 tools for today’s problem-solving teams, we also need to think through effective ways of deploying them where useful and providing teams with the requisite technical tools.
The two terms that best describe the practices of most companies regarding not just the 7 tools of QC but many others are sheep dipping and formalism.
Sheep dipping is ineffective training
“Sheep dipping” is immersing every sheep in the flock in the same chemical bath for disinfection. In manufacturing, it means putting the entire management or production department of a factory through two days of classroom training, regardless of whether they have a use for it, and certifying them on that basis.
Tools used just to humor outsiders don’t help solve problems
“Formalism” is what the use of the tools degenerates into when teams produce charts, drawings, and tables only to satisfy mandates from outside a team. In this case, the requirement is that they match a given standard. The following example is typical of this approach:
The better alternative is to introduce tools based on project needs. When a team takes on a challenge, the Continuous Improvement support group must identify what tools may help the team. It should then provide training in these tools just before the team needs them, using the current project for application examples. You brief a team for two hours on a tool, and then apply it right away.
Conclusions
Long derided as “toolheads,” Lean consultants have developed an anti-tool bias. Instead, they emphasize culture and leadership. Engaging employees in improvement without giving them the best tools available for their specific projects, however, is asking them to reinvent many wheels and setting them up for failure.
The value of tools, in principle, is obvious. The selection of appropriate tools for a project, on the other hand, is not so obvious, and must take into consideration both the available technology and the participants’ ability to use it.
#quality, #qc, #spc, #qccircles, #datascience, #industry40, #7toolsofqc
Frank Vermet
December 23, 2019 @ 6:09 am
@Michel I feel very honored to have triggered such an insightful post.
Indeed the operators’ need to solve problems should be much better fulfilled by today’s data crunching and visualizing possibilities.
Everything Should Be Made as Simple as Possible, But Not Simpler for data visualization (2 dims for 2 dims data, 3rd dim with color).
In an ideal world, operators have all they need to do their job including easy access to data to make decisions, take appropriate action to solve problems
I am usually quite happy with spreadsheet pivot tables, one of the reasons being it is on everybody’s computer. The fundamental issue is that it requires rigor and discipline: I often see people struggle to analyze data because they are playing with the analysis tool and don’t take the time to define what they are looking for. Teaching data discipline and its benefits can be done in an hour, then practice is required, and it has always been enough to cover at least 95% of my data processing needs.
The big difference between Ishikawa’s times and now is the storage and calculation power of computers. (Data) science has not changed that much.
What do operators need today?
Everyone has a smartphone and uses it mostly for interacting with each other (social media). In a professional environment, we need to interact with people and machines.
Machines talk, we need to be able to read their language (collect data thru IoT or smartphone notifications when human-detected), analyze ((Not so) Big Data) to find the needle in the haystack (origin of problem) and reply (decide and take action)
Dashboards / Visual management to interact remotely with other people (iObeya)
In terms of visualizing data, a good practice I have heard many times is ‘less ink is better’, which definitely rules out the 3D charts when visualizing 2 datasets. Basically, a rule of thumb is one dimension (X, Y, Z, surface area, color) for one dataset. X (horizontal axis) = 5 hat racks
Data analysis for the sake of (ppt) presentation is waste. The purpose of data analysis is action, therefore Pareto charts are usually waste.
Charts are a way of understanding visually the intensity of numbers (most people are visual thinkers)
Brainstorming is, in my opinion, an outdated way of thinking or at least needs to be used with caution: it is ‘anti-factual’ as it implies we stay in a room, away from the reality.
The statements of a brainstorming session need to be quickly validated (it starts with a guess, then gets stronger with an expert’s opinion, then is firmed up with fact-finding). The popular 5Ms of a fishbone diagram are just a checklist to help not forget about the 5 necessary and sufficient elements to produce (manpower, machinery, methods, materials, mother nature, i.e., the environment)
After all, it is only with daily field practice that the appropriate tools for a production system will emerge after deliberate practice and, agreed with Michel, sheep dipping during belts training only gives a theoretical overview of the tools.
When you have a hammer everything is a nail.
Adaptability will be the number one tool to absorb quickly new devices, start with why to know which one to use, curiosity to constantly become aware of the newly appearing solutions to pick the right one for MY WHY.
Michel Baudin
December 23, 2019 @ 6:33 am
Thanks for your thoughts. Pivot tables are indeed useful and, outside Excel, often known as Crosstabs. They actually work on tables, not spreadsheets, as the tables don’t need to be overlaid on a grid of cells. The grid of cells adds an unnecessary layer of complexity and mistake opportunities.
Data science has changed radically. It’s not just a matter of having more powerful engines to use the same tools as 40 years ago. There are tools that did exist 40 years ago but were inaccessible to the masses for lack of computing power, and there are many tools that have been developed since.
Finding a needle in a haystack is a search problem. Classical statistics will estimate the average number of needles per haystack, which isn’t what you’re after.
I think of brainstorming as a useful tool but also a skill that needs to be honed through practice. It will yield better results the 100th time you use it than the first time, at least if you do it frequently enough that teams don’t forget.
Your Lean Six Sigma Belt Program Is the Problem | Dan Markovitz | Industry Week - Michel Baudin's Blog
January 28, 2020 @ 12:59 pm
[…] from the 1960s, I have never seen them used as advertised anywhere. They are sorely in need of an update in every respect, from data acquisition to analysis and […]
Michel Baudin
March 9, 2020 @ 5:47 am
The 3/5/2020 issue of QualityDigest has an article on the seven basic tools of quality by Jody Muelaner with the following lead:
These claims about this ancient toolkit would need further clarification and justification. I cannot validate any of them based on my understanding of the tools or my factory experience. Let’s examine them one by one:
Ronald Kirby
December 26, 2020 @ 2:20 pm
Well, nice long school of questions on CC statistic’s.
Can be broken down into (2) simple sections (man) & (machine).
What comes 1st, it is not man or machine, it is raw materials of a grade quality.
Then comes man & machine. How ever True Lean 360 Deg. seeks CC data all stages of production friend or foe in the transition of all individual steps of the flow of information, be it man or machine.
Define man = supervisor (No), Team Leader = NO, Team member (Work transformation) = yes (verifiable with one other person) the smartest person that works at each machine of part transformation, with out this person in 1970 up until 2018/19 this person was indispensable. Do u know this person’s title and job function?
Ron Kirby – San
Michel Baudin
December 26, 2020 @ 4:38 pm
I would like to understand what you are saying but I can’t, starting with “CC statistics.”
Could you please clarify what you mean?
Ronald Kirby
December 27, 2020 @ 10:26 am
“CC” – “Cost Control”, In America and around the NON-LEAN World, a style of “Quality Control”, it takes many team members, operators and supervision/engineer’s to define multiple layers of errors to drill down to the root cause of the defect(s). Which is very costly and mostly inefficient.
Ever since the 80’s and early 90’s defects ran rampant at all levels of production from Raw Materials to the Finished Product. The charts and graphs on the very 1st picture in your “QCtrackingToyota1950’s”, shows a man with a “hat”. what is this man’s main job & title at Toyota.
Hint… he is not a fire fighter, as explained in detail to me by Nakao San in 1998.
Ron Kirby San
Michel Baudin
December 27, 2020 @ 11:04 am
I don’t see how this article is about “Cost Control Statistics.” All I am trying to do here is take the toolkit Ishikawa thought should be taught to QC circles back in 1970 and, one-by-one, examine how it could be updated in 2020.
Do you think Ishikawa’s 7 tools were “Cost Control Statistics”?
Process Control and Gaussians
March 5, 2024 @ 1:49 pm
[…] Updating the 7 tools of QC (2019) […]