Discipline And The Broken Windows Theory | Dumontis

"Over the last few years a lot has been written about Lean leadership. For instance about what the differences would be between Lean and traditional leadership. And what the characteristics are of a Lean leader. One of the aspects often missing, I feel, is "discipline". I have always told my managers that they weren't paid more because they would supposedly be more intelligent or because they studied for a longer period of time, but because I expected them to be the most disciplined in respecting standards. As without the manager's respect - also interestingly described in the "broken windows" theory - the organization as a whole will flout its own rules."

Source: www.dumontis.com

Michel Baudin's comments:

Is being disciplined in respecting standards truly the quality that justifies managerial pay? By this criterion, the Caine's Captain Queeg and the Bounty's Lt. Bligh were both excellent managers. Whatever happened to "plan, organize, control, and lead"?

Like the "Hawthorne effect" or "Maslow's hierarchy of needs," the broken windows theory is being accepted just because it sounds plausible, not because it is supported by experiments. Do clean walls and intact windows deter serious crime? Perhaps, but it has to be established, and the response of passers-by to flyers does not do the job.

See on Scoop.it - lean manufacturing

The Creative Benefits of Boredom | HBR Blog Network | David Burkus

"[...]a certain level of boredom might actually enhance the creative quality of our work [...]"

Source: blogs.hbr.org

Michel Baudin's comments:

It is one step away from claiming that boredom makes you creative, which would make no sense. The frustration of boredom may motivate you to use your creativity, but deliberately boring people in order to make them creative is not something I would recommend.

I think that creativity is innate, but much more widely spread than most managers and engineers believe. The example in the article is about sales;  I am more familiar with manufacturing, where most jobs are repetitive, tedious, and boring.

The jobs are also tiring, but most production operators will tell you that they don't mind the tiredness as much as the slowness of the clock. Boredom is their number one enemy, and participation in improvement activities is a welcome relief from it, as well as an opportunity to be creative.

People who are bored by repetitive tasks go "on automatic." Their hands keep executing the sequence of tasks with accuracy and precision, while their minds wander off to, perhaps, the lake where they fish on weekends. While on automatic, you don't think about improvements.

Changes in the routine, whether deliberate or accidental, refocus their minds on the workplace. These include product changes, spec changes, rotation between work stations, or any breakdown like defects in the product, component shortages, or machine stoppages. During these changes, your mind is focused on responding as you were trained to and on avoiding mistakes. If you think of better ways to do the work, the ideas go on the back burner while you attend to immediate needs.

Depending on the management culture, operators may or may not be willing to share these ideas. They may be afraid of humiliation by a tactless manager, or they may fear that improving their job puts it in jeopardy.

To put the operators' creativity to use, you have to organize for it, and it can't happen while the line is running. This is why continuous improvement requires structures, procedures, and leadership.

See on Scoop.it - lean manufacturing

VSM Pitfall: unnecessary process | Chris Hohmann

Value Stream Mapping (VSM) is probably the main analysis tool and the most used in the lean toolbox. Easy to understand and handle, VSM is the starting point of improvement workshops and kaizen eve...

Source: hohmannchris.wordpress.com

Michel Baudin's comments:

Thoughtful comments, as usual from Chris Hohmann.

However, we need to go further and question the wisdom of reducing Lean implementation to Value-Stream Mapping and kaizen events when neither tool is central to the Toyota Production System.

"Value-Stream Mapping," which is really materials and information flow mapping, is a minor tool at Toyota, used only with suppliers who have delivery problems. And "kaizen events" don't exist at Toyota.

See on Scoop.it - lean manufacturing

Are Part Numbers Too Smart for Their Own Good? | ENGINEERING.com

[...] technology experts are warning that the use of such descriptive part numbers is not necessarily so “smart,” and that they could drag down productivity in today’s fast-changing manufacturing environments. A smarter tactic, they assert, is to employ auto-generated “insignificant” or “non-intelligent” part numbers and let information about the part reside in a database. [...]

Source: www.engineering.com

See on Scoop.it - lean manufacturing

Michel Baudin's comments:
For details on the reasons to get rid of so-called "smart" part numbers, see Why "Smart" part numbers should be replaced with keys and property lists.
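To make the contrast concrete, here is a minimal sketch of the alternative, assuming a small SQLite table whose name and fields are made up for illustration: the part number is a meaningless, auto-generated key, and everything a "smart" number would try to encode lives in the database as ordinary, queryable properties.

```python
import sqlite3

# Hypothetical parts table: the part number is an auto-generated surrogate key
# that encodes nothing about the part itself.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE parts (
        part_id     INTEGER PRIMARY KEY AUTOINCREMENT,  -- "non-intelligent" number
        name        TEXT,
        material    TEXT,
        diameter_mm REAL,
        supplier    TEXT
    )
""")

# The properties a "smart" part number would try to encode are stored as plain
# fields instead, so they can change without renumbering the part.
cur = conn.execute(
    "INSERT INTO parts (name, material, diameter_mm, supplier) VALUES (?, ?, ?, ?)",
    ("hex bolt", "stainless steel", 8.0, "Acme Fasteners"),
)
conn.commit()

print("Auto-generated part number:", cur.lastrowid)
print(conn.execute("SELECT * FROM parts WHERE part_id = ?", (cur.lastrowid,)).fetchone())
```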

Gauss with bell shape banknote

The bell curve: "Normal" or "Gaussian"?

Most discussions of statistical quality refer to the "Normal distribution," but "Normal" is a loaded word. If we talk about the "Normal distribution," it implies that all other distributions are, in some way, abnormal. The "Normal distribution" is also called "Gaussian," after the discoverer of many of its properties, and I prefer it as a more neutral term. Before Germany adopted the Euro, its last 10-Mark note featured the bell curve next to Gauss's face.

The Gaussian distribution is widely used, and abused, because its math is simple, well known, and wonderful. Here are a few of its remarkable properties:

  1. It applies to a broad class of measurement errors. John Herschel arrived at the Gaussian distribution for measurement errors in the positions of bodies in the sky simply from the fact that the errors in x and y should be independent, and that the probability of a given error should depend only on the distance from the true point.
  2. It is stable. If you add Gaussian variables, or take any linear combination of them, the result is also Gaussian.
  3. Many sums of variables converge to it. The Central Limit Theorem (CLT) says that, if you add variables that are independent, identically distributed, and whose distribution has a mean and a standard deviation, their sum converges towards a Gaussian. This makes it an attractive model, for example, for order quantities of a product coming independently from a large number of customers; a quick simulation of this convergence follows the list.
  4. It solves the equation of diffusion. The concentration of, say, a dye introduced into clear water through a pinpoint is a Gaussian that spreads over time. You can experience it in your kitchen: fill a white plate with about 1/8 in. of water, and drop the smallest amount of grenadine syrup you can in the center. After a few seconds, the syrup forms a cloud in the water that looks very much like a two-dimensional Gaussian bell shape for concentration. And in fact it is, because the Gaussian density function solves the diffusion equation, with a standard deviation that rises with time. It also happens in gases, but too quickly to observe in your kitchen, and in solids, but too slowly.
  5. It solves the equation of heat transfer by conduction. Likewise, when heat spreads by conduction from a point source in a solid, the temperature profile is Gaussian... The equation is the same as for diffusion.
  6. Unique filter. A time-series of raw data -- for temperatures, order quantities, stock prices,... -- usually has fluctuations that you want to smooth out in order to bring to light the trends or cycles you are looking for. A common way of doing this is replacing each point with a moving average of its neighbors, taken over windows of varying lengths, often with weights that decrease with distance, so that a point from 30 minutes in the past counts for less than the point from 1 second ago. And you would like to set these weights so that, whenever you enlarge the window, the peaks in your signal are eroded and the valleys fill up. A surprising and relatively recent discovery (1986) is that the only weighting function that does this is the Gaussian bell curve, with its standard deviation as the scale parameter.
  7. Own transform. This is mostly of interest to mathematicians, but the Gaussian bell curve is its own Fourier transform, which drastically simplifies calculations.
  8. ...
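As promised in property 3, here is a quick sketch, in plain Python with arbitrary sample sizes, that sums independent draws from a strongly skewed exponential distribution. As the number of terms grows, the fraction of sums falling within one standard deviation of the mean approaches the Gaussian value of about 68.3%.

```python
import random
import statistics

random.seed(0)

def clt_demo(n_terms, n_samples=10_000):
    """Sum n_terms independent exponential variables (a very skewed
    distribution) and check how Gaussian the sums look."""
    sums = [sum(random.expovariate(1.0) for _ in range(n_terms))
            for _ in range(n_samples)]
    mean = statistics.mean(sums)
    sd = statistics.pstdev(sums)
    # For a Gaussian, about 68.3% of the values fall within one
    # standard deviation of the mean.
    within_1sd = sum(abs(s - mean) <= sd for s in sums) / n_samples
    print(f"{n_terms:3d} terms: mean={mean:6.2f}  sd={sd:5.2f}  within 1 sd: {within_1sd:.1%}")

for n in (1, 2, 10, 50):
    clt_demo(n)
```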

For all these reasons, the Gaussian distribution deserves attention, but it doesn't mean that there aren't other models that do too. For example, when you pool the output of independent series of events, like failures of different types on a machine, you tend towards a Poisson process, characterized by independent numbers of events in disjoint time intervals and a constant occurrence rate over time. It is also quite useful, but it doesn't command the same level of attention as the Gaussian.
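A minimal sketch of this pooling effect, with made-up failure rates: each failure type is simulated as a roughly periodic stream, which is clearly not Poisson on its own, yet the counts of pooled events in disjoint, equal intervals come close to the Poisson distribution, with a variance-to-mean ratio near 1.

```python
import math
import random
import statistics

random.seed(1)

def failure_times(mean_gap, jitter, horizon):
    """One failure type: roughly periodic events with some jitter."""
    t, times = 0.0, []
    while True:
        t += random.uniform(mean_gap - jitter, mean_gap + jitter)
        if t >= horizon:
            return times
        times.append(t)

HORIZON, INTERVAL = 200_000.0, 20.0

# Pool many sparse, independent failure streams (rates are arbitrary).
pooled = []
for k in range(40):
    pooled.extend(failure_times(mean_gap=500 + 20 * k, jitter=100, horizon=HORIZON))

# Count pooled events in disjoint intervals of equal length.
counts = [0] * int(HORIZON / INTERVAL)
for t in pooled:
    counts[int(t / INTERVAL)] += 1

mean = statistics.mean(counts)
print(f"mean per interval: {mean:.2f}   variance/mean: {statistics.pvariance(counts) / mean:.2f}")
print("events  observed  Poisson")
for k in range(5):
    poisson = math.exp(-mean) * mean ** k / math.factorial(k)
    print(f"{k:6d}  {counts.count(k) / len(counts):8.3f}  {poisson:7.3f}")
```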

The most egregious misuse of the Gaussian distribution is in the rank-and-yank approach to human resources, which forces bosses to rate their subordinates "on a curve." Measuring several dimensions of people's performance and examining their distributions might make sense, but mandating that grades be "normally distributed" is absurd.

Purpose and Etiquette of On-Line Discussions

In the Lean Six Sigma Worldwide discussion group on LinkedIn, Steven Borris asked about the purpose of on-line discussions, whether they should stick precisely to the topic they were started on, and how disagreements between participants should be expressed or handled. As a participant in a variety of professional forums for the past 16 years, I have come to think of an online discussion as a conference that is always in session, in which the posting etiquette should be the same as at conferences.

Contributors should think of readers first. LinkedIn members read discussions for enlightenment, not entertainment. This isn't Facebook. Readers browse a discussion based on its subject, and that is what they expect to be covered. Like the title of a book, the name of a discussion announces what it is about. Readers are drawn to it by the need for information on that topic and have a legitimate expectation that the posts will be about it. If participants disappoint them, they go away upset at having been misled. For this reason, discussions should stick to their subject, and group moderators or managers should make sure they do, with interesting digressions spawning new discussions.

Professional readers are also turned off by personal attacks and posts that question other posters' motives. The participants need to "play nice" with each other, but a discussion where they all express the exact same ideas would not be informative and would be dull. The contributors to the discussions I participate in often have decades of experience that have shaped their perspectives on the topics, differently based on the industries and companies they have worked for. They are not on the same wavelength.

Often, however, apparent disagreements disappear when the context is properly set. For example, in his 1999 book on Six Sigma,  Mikel Harry wrote that the future of all business depends on an understanding of statistics; Shigeo Shingo, on the other hand, had no use for this discipline and wrote in ZQC that it took him 26 years to become free of its spell.

That sounds like a clear-cut disagreement. Mikel Harry developed Six Sigma at Motorola in the 1980s; Shigeo Shingo was a consultant and trainer primarily in the Japanese auto industry from 1945 to the 1980s, too early for discussion groups. Harry and Shingo worked in different industries with different needs at different times. With proper context setting, they can both be right. Posts that start with "In my experience..." and support topical conclusions with an account of what that experience was go a long way towards setting that context.

Toyota Cutting the Fabled Andon Cord, Symbol of Toyota Way | Automotive News

Toyota is retiring the fabled “andon cord,” the emergency cable strung above assembly lines that came to symbolize the built-in quality of the Toyota Way and was widely copied through the auto industry and beyond.

Source: www.autonews.com

 

Michel Baudin's comments:
The point of having a cord rather than buttons was that the cord could be pulled from anywhere along the line, whereas buttons require you to be where they are. It is the same reason many buses have cords for passengers to request stops rather than buttons.

Toyota's rationale for moving to buttons, according to the article, is the desire to clear the overhead space. Another advantage, not stated in the article, is that the alarm from a button is more location-specific than from a cord.

Another reason to use a cord was that you didn't have to change it when you rearranged the line, whereas relocating buttons required rewiring. But wireless button technology has made this a moot point.

See on Scoop.it - lean manufacturing

The meaning(s) of "random"

"That was random!" is my younger son's response to the many things I say that sound strange to him, and my computer has Random Access Memory (RAM), meaning that access to all memory locations is equally fast, as opposed to sequential access, as on a tape, where you have to go through a sequence of locations to reach the one you want.

In this sense, a side-loading truck provides random access to its load, while a back-loading truck provides sequential access.

While these uses of "random" are common, they have nothing to do with probability or statistics, and it's no problem as long as the context is clear. In discussions of quality management or production control, on the other hand, randomness is connected with the application of models from probability and statistics, and misunderstanding its technical meaning leads to mistakes.

From the AMS blog (2012)

In factories, the only example I ever saw of Control Charts used as recommended in the literature was in a ceramics plant that was firing thin rectangular plates for use as electronic substrates in batches of 5,000 in a tunnel kiln. They took dimensional measurements on plates prior to firing, as a control on the stamping machine used to cut them, and they made adjustments to the machine settings if control limits were crossed. They did not measure every one of the 5,000 plates on a wagon. The operator explained to us that he took measurements on a "random sample."

"And how do you take random samples?" I asked.

"Oh! I just pick here and there," the operator said, pointing to a kiln wagon.

That was the end of the conversation. One of the first things I remember learning when studying statistics was that picking "here and there" did not generate a random sample. A random sample is one in which every unit in the population has an equal probability of being selected, and it doesn't happen with humans acting arbitrarily.

A common human pattern, for example, is to refrain from picking two neighboring units in succession. A true random sampler does not know where the previous pick took place and selects the unit next to it with the same probability as any other. This is done by having a system select a location based on a random number generator, and direct the operator to it.
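A minimal sketch of what such a system could do, assuming for illustration that the plates sit on the kiln wagon in a 50 × 100 grid and that 30 of the 5,000 positions are to be measured; the layout and sample size are made up.

```python
import random

ROWS, COLS = 50, 100   # assumed layout of the 5,000 plates on a kiln wagon
SAMPLE_SIZE = 30       # assumed number of plates to measure

# Every position has the same probability of being selected, and picking one
# position does not make its neighbors any less likely to be picked.
positions = [(row, col) for row in range(ROWS) for col in range(COLS)]
for row, col in sorted(random.sample(positions, SAMPLE_SIZE)):
    print(f"Measure the plate in row {row + 1}, column {col + 1}")
```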

This meaning of the word "random" does not carry over to other uses, even in probability theory. A mistake frequently encountered in discussions of quality is the idea that a random variable is one for which all values are equally likely. What makes a variable random is that probabilities can be attached to its values, or to sets of values, in some fashion; the distribution does not have to be uniform. One value can have a 90% probability while all the other values share the remaining 10%, and it is still a random variable.
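A small sketch of such a variable, with made-up values and probabilities: the value 0 carries 90% of the probability and five other values share the remaining 10%, yet draws from it are perfectly legitimate random draws.

```python
import random
from collections import Counter

random.seed(0)

values  = [0, 1, 2, 3, 4, 5]
weights = [0.90, 0.02, 0.02, 0.02, 0.02, 0.02]  # far from uniform, still random

draws = random.choices(values, weights=weights, k=10_000)
print(Counter(draws))   # roughly 9,000 zeros, about 200 of each other value
```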

When you say of a phenomenon that it is random, technically, it means that it is amenable to modeling using probability theory. Some real phenomena do not need it, because they are deterministic:  you insert the key into the lock and it opens, or you turn on a kettle and you have boiling water. Based on your input, you know what the outcome will be. There is no need to consider multiple outcomes and assign them probabilities.

There are other phenomena that vary so much, or on which you know so little, that you can't use probability theory. They are called by a variety of names; I use uncertain.  Earthquakes, financial crises, or wars can be generically expected to happen but cannot be specifically predicted. You apply earthquake engineering to construction in Japan or California, but you don't leave Fukushima or San Francisco based on a prediction that an earthquake will hit tomorrow, because no one knows how to make such a prediction.

Between the two extremes of deterministic and uncertain phenomena is the domain of randomness, where you can apply probabilistic models to estimate the most likely outcome, predict a range of outcomes, or detect when a system has shifted. It includes fluctuations in the critical dimensions of a product or in its daily demand.

The boundaries between the deterministic, random, and uncertain domains are fuzzy. Which perspective you apply to a particular phenomenon is a judgement call, and depends on your needs. According to Nate Silver, over the past 20 years, daily weather has transitioned from uncertain to random, and forecasters can now give you accurate probabilities that it will rain today. On the air, they overstate the probability of rain, because a wrong rain forecast elicits fewer viewer complaints than a wrong fair-weather forecast. In manufacturing, the length of a rod is deterministic from the assembler's point of view but random from the perspective of an engineer trying to improve the capability of a cutting machine.

Rods for assemblers vs. engineers

Claude Shannon

This categorization suggests that a phenomenon that is almost deterministic is, in some way, "less random" than one that is near uncertainty. But we need a metric of randomness to give a meaning to an expression like "less random." Shannon's entropy does the job. It is not defined for every probabilistic model but, where you can calculate it, it works. It is zero for a deterministic phenomenon, and rises to a maximum where all outcomes are equally likely. This brings us back to random sampling. We could more accurately call it "maximum randomness sampling" or "maximum entropy sampling," but it would take too long.
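A minimal sketch of this metric, computing Shannon's entropy for three distributions over the same six outcomes: a deterministic one, a skewed one, and a uniform one. The probabilities are made up for illustration.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: 0 for a deterministic outcome,
    maximal when all outcomes are equally likely."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

distributions = {
    "deterministic": [1.0, 0, 0, 0, 0, 0],
    "skewed":        [0.90, 0.02, 0.02, 0.02, 0.02, 0.02],
    "uniform":       [1 / 6] * 6,
}
for name, dist in distributions.items():
    print(f"{name:13s} entropy = {entropy(dist):.3f} bits")
```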

How to Really See What is Going On in Your Workplace | IndustryWeek | Jamie Flinchbaugh

"How managers can use the four levels of observation to really see what is going on in their workplace:

      1. Stories and anecdotes.
      2. Data and graphs.
      3. Pictures and diagrams.
      4. Direct observation."

Source: www.industryweek.com

Michel Baudin's comments:
Deep down, I believe I agree with Jamie Flinchbaugh on observation, but I am puzzled by the way he phrases it. He describes stories and anecdotes as "the most abstract level of observation." I see them as a means of persuasion, not observation, and concrete, not abstract.

I don't see data as necessarily dependent on assumptions. What assumptions are there behind, say, the number of boxes of Cereal Z you sold last month? It is just a fact. While photographs are a form of data, graphs and diagrams are ways of analyzing data and presenting results, which is also downstream from observation.

For the analysis of a plant, I see three main sources of input:

  1. Direct observation of the operations.
  2. Interviews with key members of the organization.
  3. The organization's data.

The Lean literature justifiably emphasizes direct observation. You go to where the work is being done, and then apply various mental techniques to help you notice relevant characteristics. You may even gather data in the form of photographs and videos for future analysis.

But it cannot be your only source. You also need to know what the managers' ambitions are for the organization, what they have tried in order to realize them, and what obstacles they feel they have encountered. Their perceptions may or may not agree with what you see with your own eyes, but you need to know what they are.

Finally, any business activity leaves a data trail that should not be ignored, including product and process definitions, current status, history, and plans for the near and distant future. All of this also needs to be reviewed and checked against direct observation and human perceptions.

It's when you present your conclusions and recommendations that you use stories, graphs, diagrams, pictures, and videos to get your point across.

See on Scoop.it - lean manufacturing

"Studies show..." or do they?

Various organizations put out studies that, for example, purport to "identify performances and practices in place among U.S. manufacturers." The reports contain tables and charts, with narratives about "significant gaps" -- without stating any level of significance -- or "exponential growth" -- as if there were no other kind. They borrow the vocabulary of statistics or data science, but don't actually use the science; they just use the words to support sweeping statements about what manufacturers should do for the future.

At the bottom of the reports, there usually is a paragraph about the study methodology, explaining that the data was collected as answers to questionnaires mailed to manufacturers and made available on line, with the incentive for recipients to participate being a free copy of the report. The participants are asked, for example, to rate "the importance of process improvement to their organization's success over the next five years" on a scale of 1 to 5.

The results are a compilation of subjective answers from a self-selected sample. In marketing, this kind of survey makes sense. You throw out a questionnaire about a product or a service. The sheer proportion of respondents gives you information about the level of interest in what you are offering, and the responses may further tell you about popular features and shortcomings.

But it is not an effective approach to gauge the state of an industry. For this purpose, you need objective data, either on all companies involved or on a representative sample that you select. Government bodies like the Census Bureau or the Bureau of Labor Statistics collect useful global statistics, like value added per employee or the ratio of indirect to direct labor by industry, but they are just a starting point.

Going beyond is so difficult that I don't know of any successful case. Any serious assessment of a company or factory requires visiting it, interviewing its leaders in person, and reviewing its data. It takes time, money, know-how, and a willing target. It means that the sample has to be small, but there is a clash between the objective of having a representative sample and the constraint of having a sample of the willing.

For these reasons, benchmarking is a more realistic approach, and I know of at least two successful benchmarking studies in manufacturing, both of which, I believe, were funded by the Sloan Foundation:

  • The first was the International Assembly Plant Study, conducted in the late 1980s about the car industry, whose findings were summarized in The Machine That Changed The World in 1990. The goal was not to identify the distribution of manufacturing practices worldwide but to compare the approaches followed in specific plants of specific companies, for the purpose of learning. Among other things, the use of the term "Lean" came out of this study.
  • The second is the Competitive Semiconductor Manufacturing Program, which started in the early 1990s with a benchmarking study of wafer fabrication facilities worldwide. It did not have the public impact of the car assembly plant study, but it did provide valuable information to industry participants.

The car study was conducted out of MIT; the semiconductor study, out of UC Berkeley. Leadership from prestigious academic organizations helped in convincing companies to participate and provided students to collect and analyze the data. Consulting firms might have had better expertise, but could not have been perceived as neutral with respect to the approaches used by the different participants.

The bottom line is that studies based on subjective answers from a self-selected sample are not worth the disk space you can download them onto.