Digital Transformation vs. Lean Transformation | Bob Emiliani

“Corporate investment is increasingly shifting from machinery and employees to robots and software. Why? Because CEOs think digital transformation will be a source of competitive advantage. And it is a transformation that they think they can execute more rapidly compared to Lean transformation. CEOs also think that automation and artificial intelligence will take on greater roles, while the work of employees will take on less significance over time. They think technology is becoming more valuable than employees.”

Sourced through Bob Emiliani’s blog

Michel Baudin‘s comments: “Digital transformation” is a quaint way of describing the growing pervasiveness of software in business, with its infrastructure of computers, computer-controlled devices, and networks. Digital is normally opposed to analog, as in music CDs versus vinyl LPs. The early work on industrial automation was based on analog mechanical, fluidic, or electronic control systems, and its “digital transformation” happened decades ago with the advent of numerically controlled (CNC) machine tools and programmable logic controllers (PLCs). This is not what Bob is talking about, but I am not sure what he is talking about.

Continue reading

Industry 4.0 – Revolution or Evolution | Bodo Wiegand | Wiegand’s Watch


Bodo Wiegand heads the Lean Management Institute, which is the German affiliate of the Lean Enterprise Institute. In his latest newsletter, on Wiegand’s Watch, he explains how he feels manufacturers should respond to the German government’s Industry 4.0 initiative.

Continue reading

Excel Hell – An Insider’s Report | Chad Smith | LinkedIn Pulse

From a Gustave Doré print

“95% of companies report that they are using spreadsheets to augment their ERP system for planning. I asked a good friend that I have known for 20 years to share his experiences with the proliferation of work-arounds and ad-hoc planning “solutions” that we tend to see in most companies that run MRP. My friend cannot specifically name the products his company makes because the market is dominated globally by only two players (he works for one of them). The sales of this company are between $100M – $500M (US) annually. Read about his experiences and let me know if you can relate.”

Sourced through LinkedIn Pulse

Michel Baudin‘s comments:

The issues listed by Chad Smith’s friend are not specific to Excel. His company’s MRP or ERP system does not meet the functional needs of the Planning Department, and its members supplement it by crunching data extracts from it on their personal systems, in their own ways. The manager does not control what formulas are used, and does not know how diligent each member is at keeping the data up to date. The planners happen to be using Excel, but these problems would not be solved by replacing Excel with any other single-user tool: the planners should all work on the same data, not on individually ordered extracts of inconsistent vintage, and the planning logic should be shared, not buried in private spreadsheets.

Continue reading

How Does This All Play Out?

It is a seemingly simple question, but one that is not asked as often as it should be. It challenges managers to consider the responses of other stakeholders and to think beyond immediate consequences. It checks their “bias for action,” and makes them pause to think more than one move ahead.

If you outsource an item, for example, will the new supplier eventually morph into a competitor? What know-how might you lose? How will it affect employee morale? Are you putting your quality reputation at risk?  The question is an invitation to work through multiple scenarios of responses by your suppliers, your work force, and your customers, reaching into the future.

Continue reading

Manufacturing’s Digital Revolution | Travis Hessman | Industry Week

GE’s Jamie Miller

“The once distant and isolated worlds of OT and IT – of physical production and the software that drives it – has been on a steady, inevitable collision course for over a decade.  Today, with the help of sensors, powerful analytics, and the Internet of Things, those two sides of the manufacturing world are finally ready to merge. The result will be nothing short of a full-scale manufacturing revolution.”


Sourced through Industry Week

Michel Baudin‘s comments:

“OT,” as an acronym, is new to me. In this context, it stands for Operational Technology, and it differs from IT in that, instead of putting out words and pictures on screens for humans to read, it issues instructions to physical devices, like automatic machines, robots, or Automatic Guided Vehicles (AGVs). “OT” in this sense is so recent that Google doesn’t know it, and expands it as Occupational Therapy.

In her keynote presentation at the IndustryWeek Manufacturing & Technology Conference and Expo in Rosemont, IL, on May 4, GE’s Jamie Miller asserted that the OT/IT merger and the data-rich world of the Industrial Internet were the key drivers of changes in manufacturing for the next few years. But the obstacles to this merger, or even convergence, have been non-technical for decades. While the Industrial Internet of Things (IIoT) may be a real breakthrough, its absence was not the reason OT and IT have remained apart.

Continue reading

Is Choosing a Consultant Truly The Second Step in ERP Implementation?

According to the previously cited guide from ERP Focus, choosing an implementation consultant is the second step of ERP implementation, right after selecting a vendor. In the consulting business, being certified as an implementer by a leading ERP vendor is known as a license to print money. Even vendors of ERP products acknowledge that their customers spend more to implement the software than to buy it, and that much of this cost goes into consulting fees. The following are a few thoughts about the process of ERP implementation and the roles played by consultants, contractors, and the in-house IT team. Continue reading

Is Vendor Selection Really The First Step in ERP Implementation?

A free guide that you can download from ERP Focus makes vendor selection the first of an 11-step implementation process, while defining success is the last.  In other words, they have you choose who you buy from before having a clear idea of what you are trying to accomplish.

It reminds me of a meeting at a client site where ERP implementation was about to begin. “This train has left the station,” I was told. The purpose of the meeting was to draw a “Value Stream Map” for the whole plant, in preparation for ERP, and the participants included managers from Manufacturing, Quality, Production Control, Maintenance, Purchasing, Sales, and Engineering.

Continue reading

“I’ve had results with Lean but Corporate pushes ERP. Any advice?” | LEI | Michael Ballé

Question:  “I’m the head of a business unit and have had visible results with lean. Yet, my corporate colleagues refuse to acknowledge this and want to force their ERP and purchasing practices on my division. This is very frustrating – any advice?”

Answer: “I certainly understand (and share) your frustration and, unfortunately, I don’t really have useful advice[…] No easy answers”

Sourced through Scoop.it from: www.lean.org

Michel Baudin‘s comments:

Ballé then follows up the non-advice with a 1,079-word essay where, among other developments, he equates the use of ERP with colonialism, leading to the conclusion that there are no easy answers.

Let us assume that the question is from a real manager in a real situation, in a position to make choices with real consequences for his or her career as well as for the company. It deserves an answer.

Continue reading

Averages in Manufacturing Data

The first question we usually ask about lead times, inventory levels, critical dimensions, defective rates, or any other quantity that varies, is what it is “on the average.” The second question is how much it varies, but we only ask it if we get a satisfactory answer to the first one, and we rarely do.

When asked for a lead time, people  usually give answers that are either evasive like “It depends,” or weasel-worded like “Typically, three weeks.” The beauty of a “typical value” is that no such technical term exists in data mining, statistics, or probability, and therefore the assertion that it is “three weeks” is immune to any confrontation with data. If the assertion had been that it was a mean or a median, you could have tested it, but, with “typical value,” you can’t.

For example, if the person had said “The median is three weeks,” it would have had the precise meaning that 50% of the orders are delivered in less than 3 weeks, and that 50% take longer. If the 3-week figure is true, then the probability of the next 20 orders all taking longer is 0.5^{20}\approx 0.95\,ppm, less than one in a million. This means that, if you do observe a run of 20 orders with lead times above 3 weeks, you know the answer was wrong.
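This confrontation with data is straightforward to automate. Here is a minimal sketch, in Python, of the corresponding sign test, with hypothetical lead times in days:

```python
from math import comb

def sign_test_p_value(lead_times, claimed_median):
    """One-sided sign test: probability of seeing at least this many
    lead times above the claimed median, if the claim were true.
    Ties with the claimed median would normally be discarded first."""
    n = len(lead_times)
    above = sum(1 for t in lead_times if t > claimed_median)
    return sum(comb(n, k) for k in range(above, n + 1)) / 2**n

# 20 consecutive orders, all longer than the claimed 3 weeks (21 days):
print(sign_test_p_value([25] * 20, 21))  # 0.5**20, about 9.5e-07
```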

In Out of the Crisis, Deming was chiding journalists for their statistical illiteracy when, for example, they bemoaned the fact that “50% of the teachers performed beneath the median.” In the US, today, the meaning of averages and medians is taught in Middle School, but the proper use of these tools does not seem to have been assimilated by adults.

One great feature of averages is that they add up: the average of the sum of two variables is the sum of their averages. Take two operations performed in sequence in the route of a product, and consider the average time different units of product take to go through them: the average time to go through operations 1 and 2 is the sum of the average time through operation 1 and the average time through operation 2, as is obvious from the way an average is calculated. If you have n values X_{1},...,X_{n}

the average is just

\bar{X}= \frac{X_{1}+...+X_{n}}{n}

What is often forgotten is that most other statistics are not additive.

To obtain the median, you first need to sort the data so that X_{\left(1\right)}\leq ... \leq X_{\left(n\right)}. The rank of each point then tells you how many points are at or below it, which you can express as a percentage and plot as in the following example:

Median graphic

Graphically, you see the median as the point on the x-axis where the curve crosses 50% on the y-axis. To calculate it, if n is odd, you take the middle value

\tilde{X}= X_{\left (\frac{n+1}{2}\right )}

 and, if n is even, you take the average of the two middle values, or

\tilde{X}= \frac{X_{\left (\frac{n}{2}\right )}+X_{\left (\frac{n}{2}+1\right )}}{2}

The median is not generally additive, and neither are the other statistics based on rank, like the minimum, the maximum, quartiles, percentiles, or stanines.
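In code, this rank-based calculation might look like the following minimal Python sketch; the standard library’s statistics.median does the same thing:

```python
def median(values):
    """Sort, then take the middle value (n odd) or the
    average of the two middle values (n even)."""
    x = sorted(values)
    n = len(x)
    if n % 2 == 1:
        return x[n // 2]                     # X_((n+1)/2), with 1-based ranks
    return (x[n // 2 - 1] + x[n // 2]) / 2   # average of X_(n/2) and X_(n/2+1)

print(median([5, 1, 3]))      # 3
print(median([5, 1, 3, 10]))  # 4.0
```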

An ERP system, for example, will add operation times along a route to plan production, but the individual operation times input to the system are not averages but worst-case values, chosen so that they can reliably be achieved. The system therefore calculates the lead time for the route as the sum of extreme values at each operation, and this math is wrong because extreme values are not additive. The worst-case value for the whole route is not the sum of the worst-case values of each operation, and the result is an absurdly long lead time.
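A small simulation makes the point concrete; this sketch assumes, purely for illustration, lognormal operation times on a 10-operation route, and compares the sum of per-operation 95th percentiles (the kind of values fed into an ERP system as reliably achievable) with the 95th percentile of the actual route totals:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ops, n_units = 10, 100_000
# Hypothetical operation times, one row per unit of product, one column per operation
times = rng.lognormal(0.0, 1.0, size=(n_units, n_ops))

per_op_95th = np.percentile(times, 95, axis=0)     # "reliable" time per operation
route_95th = np.percentile(times.sum(axis=1), 95)  # actual 95th percentile of totals

print(f"Sum of per-operation 95th percentiles: {per_op_95th.sum():.1f}")
print(f"95th percentile of route totals:       {route_95th:.1f}")
# The sum of worst cases (about 52) far exceeds the worst case of the sum
# (about 29): adding extreme values inflates the planned lead time.
```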

In project management, this is also the key difference between the traditional Critical Path Method (CPM) and Eli Goldratt’s Critical Chain. In CPM, the individuals in charge of each task set durations that they are confident they can meet. These are perceived worst-case values, which means that the duration of the whole critical path is the sum of the worst-case values of the tasks on it. In Critical Chain, each task duration is what the task is actually expected to require, with a time buffer added at the end of the project to absorb delays and take advantage of early completions.

That medians and extreme values are not additive is experienced, if not proven, by a simple simulation in Excel. The formula “LOGNORM.INV(RAND(),0,1)” will give you, in about a second, 5,000 instances of two highly skewed variables, X and Y, as well as their sum X+Y. On a logarithmic scale, their histograms look as follows:

lognormal histogram with sum

And the summary statistics show that the median, minimum, and maximum of the sum are not the sums of the corresponding values for X and Y:

Simulation stats
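For readers who prefer scripts to spreadsheets, a Python equivalent of the LOGNORM.INV(RAND(),0,1) experiment takes a few lines:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(0.0, 1.0, 5000)  # same distribution as LOGNORM.INV(RAND(),0,1)
y = rng.lognormal(0.0, 1.0, 5000)

for name, v in (("X", x), ("Y", y), ("X+Y", x + y)):
    print(f"{name:>4}: mean={v.mean():.3f}  median={np.median(v):.3f}  "
          f"min={v.min():.3f}  max={v.max():.3f}")
# The mean of X+Y is exactly the sum of the means, by linearity;
# the median, minimum, and maximum of X+Y are not the sums of those of X and Y.
```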

Averages are not only additive but have many more desirable properties, so why do we ever consider medians? Because there are real problems with averages when they are taken carelessly:

      1. Averages are affected by extreme values, as illustrated by the “Bill Gates Walks Into a Bar” story. Here, we inserted him into a promotional picture of San Francisco’s Terroir Bar. Attached to each patron other than Bill Gates is a modest yearly income, but his presence pushes the average yearly income above $100M, which is not a meaningful summary of the population. On the other hand, consider the median. Without Bill Gates, the middle person is Larry, and the median yearly income $46K. Add Bill Gates, and the median is now the average of Larry and Randy, or $48K. The median barely budged! While, in this story, Bill Gates is a genuine outlier, manufacturing data often contains outliers that are the result of malfunctions, as when wrong measurements are recorded because a probe fails to touch the object it is measuring, an instrument is calibrated in the wrong system of units, or a human operator puts a decimal point in the wrong place… Large differences between average and median are a telltale sign of this kind of phenomenon. Once the outliers are identified, assessed, and filtered, you can go back to using the average rather than the median.
      2. Averages are meaningless over heterogeneous populations. The statement that best explains this is “The average American has exactly one breast and one testicle.” It says nothing useful about the American population. In manufacturing, when you consider, say, a number of units produced, you need to make sure you are not commingling 32-oz bottles with minuscule free samples.
      3. Averages are meaningless for multiplicative quantities. If your data is the sequence Y_{1}, ...,Y_{n} of yields of the n operations in a route, then the overall yield is Y= Y_{1}\times ...\times Y_{n}, and the plain average of the yields is irrelevant. Instead, you want the geometric mean \bar{Y}=\sqrt[n]{Y_{1}\times ...\times Y_{n}} (see the first sketch after this list).
        The same logic applies to the compounding of interest rates, and the plain average of rates over several years is irrelevant.
      4. Sometimes, averages do not converge as the sample size grows. This can happen even with a homogeneous population, it is not difficult to observe, and it is mind-boggling. Let us say your product is a rectangular plate. On each one you make, you measure the differences between the actual length and width and the specs, as in the following picture:
        Plate dimensions
        Assume then that, rather than the discrepancies in length and width, you are interested in the slope ΔW/ΔL and calculate its average over an increasing number of plates. You are then surprised to find that, no matter how many data points you add, the ratio keeps bouncing around instead of converging as the law of large numbers has led you to expect. So far, we have looked at averages as just a formula applied to data. To go further, we must instead consider them as estimators of the mean of an “underlying distribution” that we use as a model of the phenomenon at hand. Here, we assume that the lengths and widths of the plates are normally distributed around the specs. The slope ΔW/ΔL is then the ratio of two normal variables with 0 mean, and therefore follows the Cauchy distribution. This distribution has the nasty property of not having a mean, as a consequence of which the law of large numbers does not apply. But it has a median, which is 0. The second sketch after this list reproduces this behavior.
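For point 3, a quick check on hypothetical yields shows why the geometric mean is the right summary for multiplicative data:

```python
import math

yields = [0.98, 0.95, 0.99, 0.90]        # hypothetical per-operation yields
overall = math.prod(yields)              # yield through the whole route: 0.8295
geo_mean = overall ** (1 / len(yields))  # geometric mean: 0.9544
arith_mean = sum(yields) / len(yields)   # plain average: 0.9550

print(f"{overall:.4f}  {geo_mean**4:.4f}  {arith_mean**4:.4f}")
# 0.8295  0.8295  0.8318 -> the geometric mean, raised to the number of
# operations, recovers the overall yield; the plain average does not.
```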
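And for point 4, the bouncing average is easy to reproduce. A sketch with simulated plates, assuming normally distributed discrepancies around the specs:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
dL = rng.normal(0.0, 1.0, n)  # length discrepancies, zero-mean normal
dW = rng.normal(0.0, 1.0, n)  # width discrepancies, zero-mean normal
slope = dW / dL               # ratio of two zero-mean normals: Cauchy-distributed

running_mean = np.cumsum(slope) / np.arange(1, n + 1)
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"running mean after {k:>9,} plates: {running_mean[k - 1]:+.3f}")
# The running mean never settles, because the Cauchy distribution has no mean.
# The median, by contrast, behaves: it converges to 0.
print(f"median of the slopes: {np.median(slope):+.4f}")
```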

The bottom line is that you should use averages whenever you can, because you can do more with them than with the alternatives, but you shouldn’t use them blindly. Instead, you should do the following:

      1. Clean your data.
      2. Identify and filter outliers.
      3. Make sure that the data represents a sufficiently homogeneous population.
      4. Use geometric means for multiplicative data.
      5. Make sure that averaging makes sense from a probability standpoint.

As Kaiser Fung would say, use your number sense.

Is the Kanban system to ensure availability of materials or to reduce inventory?

Pranay Nikam, from VCT Consulting India, asked the following question:

“I have designed and implemented the Kanban System at various type of industries. The challenge I face now is not that of explaining people how the system is designed or how it works. But rather clearing the misconception/misunderstandings key industry people have about Kanban.

My understanding of a Kanban System is ‘A Consumption based replenishment system’ with Multiple Re-Order Point (multiple Bins) as opposed to the traditional Two Bin System. In simpler words you keep enough stock to cover for the total lead time and add a buffer for demand variation and supply failures. And keep replenishing the stock as and when you consume. The replenishment can be through fresh production or withdrawal from Warehouses or procurement from supplier.

Prime objective of the Kanban System is material availability to enable High Mix and low volume production; ultimately to support production levelling instead of running huge batches.

However, some Lean Consultants propagate Kanban as an inventory reduction tool and nothing more than a material scheduling software that can be configured in any ERP Systems.

I would be happy to receive your comments on the two different perspectives.”

The Kanban system has many variants, discussed in Chapters 10 to 13 of Lean Logistics. All these variants, however, have the following characteristics in common:

  1. They implicitly assume that the demand for an item in the immediate future will match the recent past. This is a naive forecast, but one that is hard to beat on intervals that are negligible with respect to what Charlie Fine calls the clockspeed of the business. And the fluctuations are smoothed by leveling/heijunka.
  2. They use some form of tokens to signal demand. Whether these tokens are cards or electronic messages, they can be detached from bins and parts and processed separately, in ways that are not possible, for example, in the two-bin system.
  3. There is a fixed number of tokens in circulation for each item, which is a key control mechanism for the supply of this item.
  4. The protocols for handling these tokens provide unambiguous directions on what should be done. No human judgement call is required to decide which item to move or produce. There are variations where that is not the case, like the French Kanban, which, for this reason, I don’t consider genuine.

The Kanban system is not just a multiple-bin system, because bins are not used as pull signals. A Kanban is pulled from a bin when you start withdrawing parts from it, which you could not do if the bin itself were the signal. If the signals are cards, you can organize them in post-office slots or on boards, which you also couldn’t do with bins. And, of course, you can do much more with electronic signals, which does not necessarily mean you should.
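To make the token mechanics concrete, here is a minimal sketch, not from any of the book’s variants, just an illustration of the fixed-card-count logic for one item: a card is detached when a full bin is taken, circulates as a replenishment signal, and returns attached to a full bin, so the card count caps total inventory, on hand and on order:

```python
from collections import deque

BIN_SIZE = 10   # parts per bin (hypothetical)
NUM_CARDS = 5   # fixed number of Kanban cards in circulation for this item

full_bins = deque([BIN_SIZE] * NUM_CARDS)  # bins on the shelf, card attached
detached_cards = deque()                   # detached cards = replenishment signals

def consume_bin():
    """Withdraw a full bin; its card is detached and becomes a signal."""
    if not full_bins:
        raise RuntimeError("Shortage: no full bin available")
    full_bins.popleft()
    detached_cards.append("kanban")

def replenish_bin():
    """A detached card authorizes replenishing exactly one bin, no more."""
    if detached_cards:
        detached_cards.popleft()
        full_bins.append(BIN_SIZE)

for day in range(30):
    consume_bin()
    replenish_bin()
    # Invariant: full bins + detached cards == NUM_CARDS at all times,
    # which is what caps the inventory of this item.
    assert len(full_bins) + len(detached_cards) == NUM_CARDS
```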

Your description of Kanban omits the goal of keeping inventory as low as you can without causing shortages, and experimenting with the numbers of Kanbans in circulation to test where the limit is, which makes it a tool to drive improvement.

Kanbans work for items consumed in quantities that have small fluctuations around a mean, which means medium-volume/medium-mix rather than low-volume/high-mix production. You use other methods for different demand patterns, like reorder point for bulk supplies, consignment for standard nuts, bolts, and washers, or just-in-sequence for option-specific large items… In low-volume/high-mix production, you have many items that you cannot afford to keep around and only order from your supplier when you have an order from your customer; this is not the way the Kanban system works.

You can do many things with ERP systems but, historically, they have been more effective in managing purchase orders with suppliers than in directing shop floor operations. If you have an ERP system with accurate, detailed data about your shop floor, you can, in principle, apply any algorithm you want to produce a schedule. Most ERP systems, however, do not even have structures in their databases to model the behavior of production equipment at a sufficient level of detail, and are not capable of producing actionable schedules. They print recommendations, and the final decision on the work that is actually done is a judgement call by the supervisor, or sometimes even the operator. Within its range of applicability, the Kanban system avoids this with simple rules, by focusing on what is actually observable and controllable at the local level.

So, I suppose the answer to your question is that the Kanban system’s immediate purpose in daily operations is to ensure the availability of materials while reducing inventory, with the longer-term purpose of driving improvement. Pursuing either of these goals at the expense of the other would be easier, but not helpful to the business.