Lean Strategy | Bob Emiliani


“Fifteen years ago, Art Byrne suggested to me that the title of our book about The Wiremold Company’s Lean transformation should be Lean Strategy. I resisted that suggestion because I did not view Lean as a strategy, despite Art’s firm view that Lean is a strategy. Who was right, me or Art?”

Sourced through Lean Leadership

Michel Baudin's comments:

Strategy is originally a military term for the plans about where you deploy armies and fleets and for what purposes. It is supplemented by tactics, the methods used in the field to engage the enemy. It is easy to think of it as cascading down, where what is tactics to the general is strategy to the colonel, and so on down to the grunt, who only has tactics. To the CEO, Lean is not a strategy but a tactic; to the VP of Manufacturing, on the other hand, it is a strategy.

For details in this blog, see last year’s About Strategy, Tactics, and Lean.

About Strategy, Tactics, and Lean

Carl von Clausewitz, writer on military strategy and tactics

Originally “the art of the general,” strategy is about which armies or fleets you deploy where and for what purpose. It goes hand in hand with tactics, which is the way each unit then engages the enemy. Always fond of military metaphors, business people have chosen to use the term “strategy” for their plans and decisions on products or services, markets, promotion methods, technology, organization, and financing. To Harvard Business School’s Michael Porter, “the essence of [business] strategy is choosing what not to do.”


Guidelines for Fast Lean Transformation | M. Zinser & D. Ryeson | HBR Blog

See on Scoop.it – lean manufacturing
One of the most common mistakes that companies make when embarking on a lean program is trying to do too much at once. These “boil-the-ocean” initiatives are long, costly and often end up stalling under the weight of their own…


Michel Baudin's insight:

Scoop.it just brought my attention to this 2½-year-old article by BCG consultants Michael Zinser and David Ryeson. Their key point is that a successful Lean implementation must start with a small number of well-chosen pilot projects, and I agree.

I do, however, part company with them on two other issues. First, they only speak the language of money, relentlessly bringing up costs, savings, payoffs, metrics, and incentives. I understand that this language is familiar and attractive to top management.

The article only cites examples of improvements that have a direct economic impact, but there are many aspects of Lean for which the relationship is indirect. Scoring a goal in tonight’s game has a direct impact on performance; building a championship team doesn’t.

Which brings me to my second disagreement with the authors: there is no consideration in their article of the need to develop the organization’s technical and managerial skills. They are just assumed to be there.

Lean is about developing a team that is able to compete at the highest level in your industry. If you already have such a team, you are probably not looking to implement Lean. If you don’t have it, you can’t start projects as if you did. Instead, you have to focus on projects that your team can do today and that will start it on its way. The biggest payoff and the practically possible do not always match.

This perspective is missing in their guidelines.

See on blogs.hbr.org

Manufacturing can thrive in countries with expensive labor

The news may lead you to believe that almost everything is manufactured in China, and that what little is left is made in Mexico, India, or Brazil. The numbers about the world economy, however, tell a different story. Most of the manufacturing in the world still takes place in advanced economies, where labor is expensive and where manufacturing employs a diminishing share of the labor force.

What the data says

The Curious Cat Economics blog contains several enlightening charts about the state of manufacturing in the world. Figure 1, for example, reveals that, as of 2010, three of the four largest manufacturing economies — the US, Japan and Germany — were high-wage countries, with China as the lone exception.

Figure 1. Top countries by manufacturing output

I was first puzzled by what was meant by “Output” on the y-axis. It is in fact manufacturing’s contribution to the Gross Domestic Product (GDP). GDP is not a perfect metric, but it is supposed to be the value added of the entire economy, or its Sales minus External Inputs, the external inputs being purchases of materials, energy, and outsourced services. In other words:

\text{Value Added} = \text{Sales} - \left( \text{Materials} + \text{Energy} + \text{Outsourced Services} \right)

Out of this Value Added, companies have to pay for people, facilities and equipment, and taxes. This metric has the merit of being widely available, in part because it is used as the basis for a tax in many countries. It is meaningful, but not a perfect measure of activity, and much confusion is created by the many uses of the term “value added” to mean other things.
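To make the formula concrete with made-up numbers: a plant with $10 million in sales that buys $3 million of materials, $0.5 million of energy, and $1.5 million of outsourced services has a value added of 10 − (3 + 0.5 + 1.5) = 5, or $5 million, out of which it must cover people, facilities and equipment, and taxes.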

Using data from the US Bureau of Labor Statistics (BLS), Figure 2 shows how these countries rank in terms of labor costs, with the special case of China shown in Figure 3. Labor costs in both figures include wages and benefits; they are total costs to the employer.

Figure 2. Labor Costs by Countries in 2010

The only cheap labor country among the top four is China, and it is not shown in Figure 2 because the BLS does not have data on China past 2008. Figure 3, however, shows wages in China rising rapidly, and they have continued to do so since 2008. According to The Economist Intelligence Unit, average labor costs in China in 2010 were $1.84/hr.

Figure 3. Labor Costs in China between 2003 and 2008

Furthermore, as you can see in Figure 4, also from the Curious Cat Economics blog, manufacturing’s share of the GDP is holding steady in the USA, Japan, and Germany.

Figure 4. Manufacturing as a share of GDP

Manufacturing remains a major component of the advanced economies, but this is compatible with a decline in manufacturing employment. As China built the largest manufacturing sector in the world up from nothing in 35 years, some activity has clearly migrated there from Europe, the US, Canada, and Japan, but the shift is nowhere near as massive as it is perceived to be. What has created this perception is that, in the G7 countries, manufacturing employment has decreased much more than manufacturing’s share of GDP. The high labor costs are coupled with high productivity, and it takes fewer people to produce the same value added.

In the US, manufacturing employment dropped from a high of 40% of the labor force in 1960 to about 11% today. In Japan, Nippon Steel’s Yawata Steel Works in Kitakyushu produces as much steel today as it did 45 years ago, with a work force that has gone from about 40,000 in the 1960s to 3,000 today. An auto parts plant in China will be swarming with people; its counterpart in Japan will have one operator taking care of 20 machines.

To take a closer look, I retrieved the data in Figure 5 from a variety of sources. All the numbers about GDP are from the World Bank; most labor costs, from the US Bureau of Labor Statistics (BLS). The grand total is $2 trillion short of the total world GDP for 2011 due to the exclusion of countries that don’t provide data on manufacturing. The labor costs for China and India come respectively from the British magazine The Economist and from an Indian e-zine. Although these numbers are from respected sources, we should not forget that the quality of economic statistics, while perfect nowhere, is much better in rich democracies than in poor countries or authoritarian regimes. The collection of complete, accurate data sets is expensive, and you need a free press to keep rulers from manipulating this sensitive data, in particular growth rates.

Figure 5. GDP, Manufacturing GDP and Labor Costs

China is at the top of this ranking, followed closely by the US and, for the most part, by other members of the G7 group. The remaining countries in the table are leading emerging economies often referred to as “BRIC” for Brazil, Russia, India, and China. Figure 6 summarizes the manufacturing GDP data in Figure 5 by country group.

Figure 6. Manufacturing GDP by Country Group

The BRIC countries together represent 2.7 billion people; the G7, about 725 million. The BRIC country labor costs range from $1.71/hr in India to $10/hr in Brazil;  the G7, from $29/hr in the UK to $43/hr in Germany. Yet the BRIC countries account for substantially less of the world’s manufacturing GDP than the G7, and, among the BRIC, China by itself accounts for two thirds.

Within the Asian context, according to the IMF, China’s labor costs in 2010 were already 3rd in Asia, behind Malaysia and Thailand, as shown in Figure 7:

Figure 7. Labor Costs in Asia

The IMF expects China to rise to second place by 2015, as follows:

“However, it is expected that China’s next five year plan will see mechanisms put in place to double the country’s minimum wage by 2015. That will raise the Chinese figure to $3,000 plus welfare of 50 percent, assuming the latter payments remain the same. This provides a total minimum salary overhead of $4,500. In reality, most salaries will be far higher. That will make China’s average labor cost second only to Malaysia and significantly more expensive than any other Asian country.”

How industrial economies emerge

The industrial revolution in Europe and the US was not an event like the American and French revolutions but instead a development that took more than a century to unfold. The emerging economies of today do not have this leisure; they need to catch up in a few decades. The first one to do it successfully was Japan, starting in the 1860s with the Meiji era. Late 20th-century success stories include South Korea, Taiwan, Hong Kong, Singapore, Malaysia, and finally China. These countries all followed different paths, but they share the characteristic that they relied on cheap labor to get started and that, as their manufacturing sectors matured, their labor costs rose and ceased to be the primary motivation for locating factories on their soil.

The challenges facing manufacturing companies in every country evolve over time. During the cheap labor phase, they are handicapped by the work force’s lack of skills, which is usually compounded by a rickety infrastructure of transportation, energy, and communications, the lack of local suppliers for materials and services, and a weak legal system. Foreign companies face the additional challenges of training and managing a new work force in a different language, possibly across many time zones, and of shipping the output to the target market.

Labor remains cheap only if the country fails to develop as hoped, in which case the above-mentioned challenges persist. If the country succeeds, they are resolved, but labor is no longer cheap, and manufacturers there must compete both with cheaper-labor counterparts in newer economies and with expensive labor in older ones. Vietnam’s labor today costs one third of China’s, while China’s is one fifth of Russia’s.

Systematically searching for the cheapest labor implies walking away every few years from investments, not only in plant and equipment but in the development of a skilled workforce, an organization adapted both to a local culture and to the corporate culture of a multinational, and a local supply chain. It can only pay in industries where none of these issues matter.

In most industries, building a factory is a commitment to the area, region, and country for decades, and you make that decision for a variety of reasons, labor cost being only one of many factors. You may believe in the country’s potential as a market for your goods, want a hedge against exchange rate fluctuations, or want to defuse protectionist tendencies. In the car industry, this is why Toyota built plants in the US and Volkswagen in China.

To survive and thrive in advanced economies, manufacturers focus on products where labor costs matter least. Following are a few of the factors that play a part:

  1. Lean has sufficiently increased productivity.
  2. Not only are people used effectively but the level of automation is high, as is the case with products ranging from cars to detergents.
  3. The products have a high value added, like luxury watches from Switzerland.
  4. There is a high technological barrier to entry in the industry.
  5. The products are customized, which makes geographical proximity to customers an advantage.

No advantage, however, lasts forever. Labor costs go up, and technological barriers fall. The German camera industry was a leader until the late 1960s, with brands like Rollei, Leitz, Zeiss, or Edixa. It relied on the knowledge of optical calculations accumulated by generations of engineers over a century. By the late 1960s, however, computers were powerful enough for Japanese competitors like Nikon, Canon, or Minolta to replicate these calculations with software and take over the market. Forty years later, German cameras are collectibles, but the German optics industry has not disappeared. It has shifted to high-end professional equipment for such applications as photolithography, and is still present in such mass markets as eyeglass lenses.

Conclusions

Unless you are at the bottom or at the top of the scale, you always have competitors with lower wages and with higher wages. The US is a low-wage competitor to Denmark but a high-wage competitor to Mexico, which itself is a high-wage competitor to China. Now Vietnam is emerging as a low-wage competitor to China. Against high-wage competitors, you compete on price; against low-wage competitors, through superior quality and higher productivity.

In countries that develop successfully, wages rise, and competing through cheap labor is only a temporary opportunity. In recent years, Chinese wages have been rising at a rate in excess of 13%/year. At this rate, in 9 years, wages in the Pearl River Delta will be $12/hour, or higher than in Korea in 2003. The decision to contract out the manufacturing of a toy for the next Christmas season can be based on current wages, but the decision to build a car assembly plant should not be, because it will remain in operation for decades, and the decision makers must anticipate what wages and the work force will be 10 or 15 years ahead.
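As a sanity check on this arithmetic: compounding 13%/year over 9 years roughly triples wages,

1.13^{9} \approx 3.0

which is consistent with the $12/hour projection if Pearl River Delta wages were around $4/hour at the time of writing (an inference from the numbers above, not a figure the sources cite).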

Metrics in Lean – Part 5 – Lead times and inventory

As in the article on Nike, lead time reduction is often touted as Lean’s greatest achievement. Improvements in productivity, quality, and new product introduction time make it into the first paragraph, but lead times get the headline. Lead time metrics are popular, but there are many different lead times of interest, and they are not easy to define, measure, or interpret. Inventory is easier to measure and, under stable conditions, Little’s Law provides some information about average lead times, while details on lead time distributions can be inferred from inventory age statistics. In addition, inventory metrics are useful in their own right, to support improvements in storage and retrieval, materials handling, and supply chain management.

What do we call a lead time?

In its most general form, the lead time of an object through a system is the interval between the time it enters the system and the time it leaves it. The objects can be material, like manufacturing work pieces, or data, like a passport application, or a combination of both, like a pull signal or a customer order for a manufactured product, which starts as data and ends as materials accompanied by delivery documents. The system is any part of your business that you can draw a boundary around and monitor objects going in and coming out.

Order fulfillment lead time is, in principle, well defined as the interval between the placement of the order and receipt of the goods by the customer. The objects are orders, and the system comprises your own company and its distribution network. There is no ambiguity as to the time the order is placed when a consumer confirms the checkout for an on-line cart, nor is there about the time of delivery when it is recorded by the delivery service. On the other hand, business-to-business transactions frequently do not have that clarity, particularly on large, long-term orders. If a customer places an order for 12 monthly deliveries, strictly speaking, the order fulfillment lead time is one year, which is not terribly useful. Then you have to identify a trigger point to start the clock for each delivery. If you use Kanbans or other types of pull signals, they can serve this purpose.

Inside the company, if production work is organized in jobs or work orders, you can measure the time between release by production control and completion, and that gives you an internal manufacturing lead time. If you produce the same item every day one piece at a time, you can record the times through a production line by serial number. But the existence of scrap and rework makes this a bit more complicated. The parts that do not make it out of the line tie up capacity and slow down the others, and the parts that are reworked undergo extra processing, adding to the lead time and increasing its variability. When calculating lead times for a process, however, you should only consider the units that make it out as good product.

An assembly process involves multiple flows merging. It is somewhat like a river basin, and there is often no objective criterion for deciding which of two merging rivers is the main stream and which one the tributary. Usually, the smaller river is designated as the tributary, but there are exceptions. By this criterion, for example, the river crossing Paris should be called the Yonne rather than the Seine, because, as French kids learn in primary school, where they merge upstream from Paris, the Yonne is the larger of the two (See Figure 1).

Figure 1. A tributary larger than the mainstream

Likewise, in assembly, working upstream from the end, you have to decide which flow is the main line and which are feeder lines. It is a simple call when you are mounting a side rear-view mirror on a car, but less obvious when you are mating an engine and transmission assembly with a painted body.

Measuring lead times directly

Tracing the lead time of completed units through multiple operations requires a history database with timestamps for all the relevant boundary crossings. This is only available if there is a tracking system collecting this data. If the data collection is manual, it often occurs at the end of each shift, meaning that we know in which shift the event occurred but not at what time within that shift, as shown in Figure 2. To measure lead times in weeks, it is accurate enough; in hours, it isn’t.

Figure 2. Operators recording production activity at the end of the shift
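Where timestamped tracking data does exist, computing a lead time is a matter of pairing entry and exit events by serial number. Here is a minimal Python sketch under that assumption; the record structure and boundary names are hypothetical, not from any particular tracking system:

    from datetime import datetime

    # Hypothetical tracking records: (serial number, boundary crossed, timestamp)
    events = [
        ("SN001", "line_in",  datetime(2012, 3, 1, 8, 5)),
        ("SN001", "line_out", datetime(2012, 3, 2, 14, 40)),
        ("SN002", "line_in",  datetime(2012, 3, 1, 9, 10)),
        ("SN002", "line_out", datetime(2012, 3, 3, 10, 0)),
    ]

    entries, lead_times = {}, {}
    for serial, boundary, timestamp in events:
        if boundary == "line_in":
            entries[serial] = timestamp
        elif boundary == "line_out" and serial in entries:
            # Lead time = interval between entering and leaving the system
            lead_times[serial] = timestamp - entries[serial]

    for serial, lead_time in lead_times.items():
        print(serial, lead_time)  # e.g. SN001 1 day, 6:35:00

With end-of-shift manual recording, the timestamps above would only be accurate to the shift, which is exactly why this approach works for lead times measured in weeks but not in hours.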

The direct measurement of lead times is also problematic with rapidly evolving, high-technology processes that have manufacturing lead times in months. If a part goes through 500 operations in 4 months, its actual lead time will commingle data about current conditions at the last operation with four-month-old data about the first one. Since then, three additional machines may have been brought on line, two engineering changes to the process may have taken place, and volume may have doubled, all of which makes the old data useless. It would be more useful to have a snapshot of the lead time under current conditions, with the understanding that it is an abstraction because, as the process keeps evolving, no actual part will ever make it from beginning to end under these exact conditions. To get such a snapshot, we need to measure lead times for individual operations, which raises the question of how we can infer lead times for an entire process from operation lead times.

Average lead times add up, extreme values don’t

When we have lead times for operations performed in sequence, we want to add them up like the times between stations on a train line, to get a lead time for the entire process. For each object flowing through, it always works: the time it needs to go through operations 1 and 2 is the sum of its times through Operation 1 and Operation 2. When we look at populations of objects flowing through, it is a different story. The averages still add up by simple arithmetic. The problem is that the average is usually not what we are interested in. When accepting customer orders, we want to make promises we are sure to keep, which means that our quotes must be based not on lead time averages but on upper bounds, so that, in the worst-case scenario, we can still deliver on time. We need to be careful, however, because extreme values are not additive. The worst-case scenario for going through operations 1 and 2 is not the sum of the worst-case scenario through Operation 1 and the worst-case scenario through Operation 2.

That it is wrong to add the worst-case times is easiest to see when considering two operations in sequence in a flow line, when variability in the first operation causes you to maintain a buffer between the two. If one part takes especially long through the first operation, then the buffer will be empty by the time it reaches the second, and its time through it will be short, so it makes no sense to add the longest possible times for both operations. If it takes you an unusually long time to go through passport control at an airport, your checked luggage will be waiting for you at the carousel and you won’t have to wait for it. In other words, the times through both operations are not independent.

A job shop is more like a farmers’ market (see Figure 3). At each operation, each part waits in line with other parts arriving by different paths, like customers in queues at different stalls in the market. Then the times through each operation are independent, and the extreme values for a sequence of operations can be calculated simply, but not by addition. This is because, for independent random variables, it is the squares of the standard deviations that are additive, not the standard deviations themselves. If operations 1 and 2 have independent lead times with standard deviations $\sigma_1$ and $\sigma_2$, the standard deviation for both in sequence is $\sqrt{\sigma_1^2+\sigma_2^2}$. If the first operation takes 8±2 hours and the second one 5±1 hours, the sequence of the two will take 13±2.2 hours, and not 13±3 hours as would be obtained by just adding the extreme values. It is like the hypotenuse of a right triangle versus the sum of its two other sides. Of course, the more operation “standard lead times” you add in this fashion, the worse the lead time inflation. For details on this phenomenon, see Measuring Delivery Performance: A Case Study from the Semiconductor Industry, by J. Michael Harrison et al., in Measures for Manufacturing Excellence, pp. 309-351.

Figure 3. A farmers’ market and a machining job shop
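To make the quadrature rule concrete, here is a minimal Python sketch using the 8±2 and 5±1 hour figures from the example above:

    import math

    # Independent operation lead times: (mean, standard deviation), in hours
    operations = [(8.0, 2.0), (5.0, 1.0)]

    total_mean = sum(mean for mean, _ in operations)
    # For independent operations, variances add, not standard deviations
    total_sigma = math.sqrt(sum(sigma ** 2 for _, sigma in operations))

    print(f"{total_mean:.0f} ± {total_sigma:.1f} hours")  # 13 ± 2.2, not 13 ± 3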

Interpretation and use

Process lead times look like task durations in a project, and it is tempting to load them into a program like Microsoft Project, treat operations like tasks with finish-to-start constraints, and use the project planning and management tools to perform calculations on the production process. Unless you are building a one-of-a-kind prototype or a rarely ordered product, however, manufacturing a product is not a project but an activity involving flow. As a consequence, order fulfillment lead times are usually much shorter than process lead times. You can order a configured-to-order computer on-line and get it delivered within 3 to 5 days, but the processor in it takes months to make. When a manufacturer explains that the business is purely “make-to-order,” it doesn’t usually mean starting by digging for iron ore to make a car. The game is to decide where in the process to start and how to have just the materials you need when you need them, in order to fill customer orders promptly without hoarding inventory.

Lean manufacturers achieve short lead times indirectly by doing the following:

  1. Engineering production operations for stability and integration into one-piece flow lines. This is never achieved 100% but is always pursued.
  2. Designating your products as runners, repeaters, or strangers, and laying out production lines and logistics differently for each category.
  3. In mixed-flow lines, applying SMED to reduce changeover times.
  4. Applying leveled-sequencing (heijunka) as needed in scheduling production lines.
  5. Using a pull system to manage both in-plant and supply-chain logistics.

In an existing factory, the challenge of reducing lead times is often mistakenly perceived as involving only production control and supply chain management, in actions limited to production planning, production scheduling, and materials procurement. Because materials in the factory spend so little of their time being worked on, improving production lines is viewed at best as secondary, and at worst as a waste of time, because “production has already been optimized.” In reality, it is nothing of the kind, and one key reason materials wait so long is dysfunctional production. Improve the productivity and flexibility of manufacturing operations, lay out your lines to make it easiest to do what you do most often, and you will see the waiting times melt away, creating the opportunity to use more sophisticated methods in support of production. This perspective is a key difference between Lean Manufacturing and the theory of constraints or the approaches proposed in the academic literature on operations management, such as Factory Physics.

Theoretical versus actual lead time

In analyzing lead times, we separate the time the object spends waiting from the time it is being worked on, making progress towards completion. This serves two purposes:

  1. Establishing the lower limit of the lead time under current process conditions: the fastest an object can move through the system is if it never waits.
  2. Understanding the ratio of working to waiting, and making it a target for improvement.

The dual timelines at the bottom of a Value Stream Map bear lead time and process time data. The sum of these process time data is often called theoretical lead time or theoretical cycle time, after which actual performance is often described as “We’re running at five times theoretical…” How exactly the theoretical lead time is calculated is usually not specified.

What I recommend to calculate a meaningful theoretical lead time for a product is a thought experiment based on the following assumptions:

  1. The plant has no work to do, except making one piece of the product.
  2. The following is ready and available for this one piece:
    • Materials.
    • Equipment.
    • Jigs, fixtures, and tools.
    • Data, like process programs or build manifests.
    • Operators.
  3. Transportation between operations is instantaneous.
  4. There is no inspection or testing, except where documented results are part of the product, as is common in aerospace or defense.

Under these conditions, the theoretical lead time is what it would take to make the unit from start to finish. These assumptions have the following consequences:

  1. Since we assume the equipment is ready, no setup time is involved.
  2. The process time through an operation involving a machine includes loading and unloading.
  3. If a machine processes a load of parts simultaneously, the processing time for a single part is the same as for a load. If an oven cures 100 parts simultaneously in two hours, it still takes two hours to cure just one part.

On the other hand, there are cases for which our assumptions still leave some ambiguity. Take, for example, a moving assembly line with 50 stations operating at a takt time of 1 minute. If we treat it as one single operation, our product unit will take 50 minutes to cross it from the first station to the last. On the other hand, to make just one part, the line does not have to move at a constant pace. The amount of assembly work at each station has to be under 1 minute, and the part can be transferred to the next station as soon as this work is done, with the result that it takes less than 50 minutes to go through the whole line. You can make an argument for both methods, and the assumptions are not sufficiently specific to make you choose one over the other. What is important here is that the choice be explicit and documented.
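Here is a minimal Python sketch of the two conventions for this hypothetical 50-station line, with made-up station work contents:

    import random

    random.seed(1)
    TAKT = 1.0        # minutes
    N_STATIONS = 50

    # Hypothetical work content per station, each under the takt time
    work_content = [random.uniform(0.7, 0.95) for _ in range(N_STATIONS)]

    # Convention 1: the line moves at takt; one unit takes stations x takt to cross
    paced = N_STATIONS * TAKT

    # Convention 2: with the plant otherwise empty, the lone unit advances as
    # soon as each station finishes, so only the work contents accumulate
    unconstrained = sum(work_content)

    print(f"Paced line:    {paced:.1f} min")
    print(f"Unconstrained: {unconstrained:.1f} min")  # strictly less than 50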

The difference between the actual and theoretical lead times can then be viewed as gold in the mine, to be extracted by improvements in all aspects of operations except the actual processes. If you find a way to mill a part twice as fast, you change the theoretical lead time itself. Because the theoretical lead time is usually a small fraction of the actual lead time, say, 5 hours versus 2 months, managers often assume that it makes no sense to focus on finding ways to reduce these 5 hours to 4, and that they should instead focus on the time the materials spend waiting. But, as said above, the two are not independent. Faster processing melts away the queues, and reducing the theoretical lead time by 20% may reduce the actual lead time by 50%.

“Days of inventory” and Little’s Law

Inventory levels are often expressed in terms of days of coverage. 200 units in stock, consumed at the rate of 10 units/day, will last 20 days. Therefore, 200 units is equivalent to 20 days of inventory, and this is what the average lead time for one unit will be. This is the method most commonly used to assign durations to “non-value added activities” on Value Stream Maps.

We should not forget, however, that the validity of this number is contingent on consumption. If it doubles, the same number of parts represents 10 days instead of 20. If consumption drops to zero, then the 200 parts will cover the needs forever.

When, on the basis of today’s stock on hand and today’s throughput, a manager declares that it is “20 days of inventory,” it really means one of the two following assertions:

  1. If we keep producing at the exact same rate, the current stock will be used up in 20 days, which is simple arithmetic.
  2. If the production rate and available stock fluctuate around the current levels, the item’s lead time from receiving into the warehouse through production will fluctuate around 20 days, by Little’s Law.

In either of these interpretations, we have an “instantaneous” lead time that is an abstraction, in the sense that no actual part may take 20 days to go through this process, just as a car going 60 mph this second will not necessarily cover 60 miles in the next hour. In the case of a car, we all understand it is just a speedometer reading; for days of inventory, it is easy to draw conclusions from the number that go beyond what it actually supports.
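A minimal sketch of this arithmetic, using the numbers from the example above; the function simply restates Little’s Law (average lead time = average inventory / average throughput):

    def days_of_inventory(units_on_hand: float, units_per_day: float) -> float:
        """Little's Law: average lead time = average inventory / throughput."""
        if units_per_day <= 0:
            raise ValueError("days of inventory is undefined without consumption")
        return units_on_hand / units_per_day

    print(days_of_inventory(200, 10))  # 20.0 days at the current rate
    print(days_of_inventory(200, 20))  # 10.0 days if consumption doubles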

Inventory, throughput, and average lead times

As we have seen, lead times are difficult to measure directly, because doing so requires you to maintain and retrieve complete histories for units or batches of units. Inventory is easier to measure, because you only need to retrieve data about the present. First, the inventory database is much smaller than the production history databases. Second, because inventory data are used constantly to plan, schedule, and execute production, they are readily accessible and their accuracy is maintained. For similar reasons, throughput data are also easier to access than history, and more accurate. As a result, with all the caveats on assumptions and range of applicability, Little’s Law is the easiest way to infer average lead times.

Inventory age analysis and lead time distribution

In some cases, inventory data lets us infer more than just average lead times. Often, the inventory database contains the date and time of arrival into the warehouse by unit, bin, or pallet. If it cannot be retrieved from the database, it is often available directly from the attached paperwork in the warehouse. Then, for a relevant set of items, we can plot a histogram of the ages of the parts in the warehouse, which, as a snapshot of its state, may look like Figure 4.

Figure 4. Inventory age snapshot for one item

If there is “always at least 5 days of inventory,” then we can expect no part to leave the warehouse until it is at least 5 days old, and seek an explanation for the short bar at age 3 days. The bar to the right shows outliers, parts that have been passed over in retrieval for being too hard to reach, or possibly have gone through a special quality procedure. In any case, they are an anomaly that needs investigating.

If the warehouse operations are stable in the sense that there is a lead time distribution, then, if we set aside obvious outliers and take the averages of multiple snapshots taken at different times of the day, the week or the month as needed to smooth out the spikes associated with truck deliveries, the chart should converge to a pattern like that of Figure 5.

Figure 5. Average of multiple snapshots with outliers removed

If a unit is 9 days old in the warehouse, it means that its time in the warehouse will be at least 9 days. The drop between the columns for 9 and for 10 days then represents the parts that stay at least 9 days but less than 10. In other words, in proportion to the whole, it gives the probability that a part will be pulled on its 10th day in the warehouse. Therefore, by differences, the age distribution gives us the complete distribution of the lead times, as shown in Figure 6.

Figure 6. Lead time distribution inferred from inventory age
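A minimal Python sketch of this differencing logic, on a made-up averaged age histogram with outliers already removed:

    # Hypothetical averaged snapshot: index = age in days, value = parts of that age
    age_counts = [40, 40, 40, 40, 40, 38, 33, 25, 15, 6, 0]

    total = age_counts[0]  # with the last bar at zero, the drops sum to the first bar
    # The drop between ages d and d+1 counts parts staying at least d days
    # but less than d+1, i.e. pulled on their (d+1)-th day in the warehouse
    lead_time_distribution = [
        (age_counts[d] - age_counts[d + 1]) / total
        for d in range(len(age_counts) - 1)
    ]

    for day, probability in enumerate(lead_time_distribution, start=1):
        if probability > 0:
            print(f"P(pulled on day {day}) = {probability:.3f}")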

Admittedly, this approach cannot always be used. Where it can, it gives us detailed information about lead times at a fraction of the cost of measuring them directly. Even where it cannot be used, snapshots of inventory age still provide valuable information, much like demographers’ population pyramids, as in Figure 7.

Figure 7. Example of population pyramids

Inventory metrics

To accountants, “resource consumption” is synonymous with cost. As discussed in Part 2, for posting on the shop floor, we need metrics that express performance in the language of things. Depending on circumstances, such substitutes may include the amount of work in process used to sustain production, as a measure of the effectiveness of production management and engineering. When it goes down, it is both a one-time reduction in working capital and a reduction in recurring holding costs. The unit of measure of WIP can be set locally in each work area.

Many companies measure inventory in terms of its dollar value, of the time it would take to consume it, or of its turnover frequency. In doing so, they combine measures of the inventory itself with other parameters, such as the values assigned to inventory by Accounting and an assumed throughput rate. These are legitimate derivative metrics and of interest to management, but when you stand next to a rack on the shop floor, you see pallets and bins, not money, days of supply, or turns. The raw inventory data consists of quantities on hand over time, and these should also be used as the basis for simple metrics in the language of things, such as the following:

  • Number of pallets, bins and units on hand by item. This is what Operations has to work with, regardless of item cost.
  • Number of partial containers in store. The presence of “partials” in store is a sign of a mismatch between batch sizes in quantities received and consumed.
  • Floor space occupied by inventory. This is of interest because freed up space can be used to increase production.
  • Accuracy of inventory data. This is usually measured by the percentage of items for which database records agree with reality, as observed through cycle counting.

As discussed above, inventory is easier to measure than lead time, and much about lead time can be inferred from inventory status, using tools like Little’s Law or age analysis. But this is not the systematic application of formulas to numbers: we need to be careful about underlying assumptions and the extent to which the data supports our conclusions.

Metrics in Lean – Part 4 – Gaming and how to prevent it

As massively practiced today, Management-by-Objectives (MBO) boils down to management imposing numerical targets on a few half-baked metrics, cascading this approach down the organization, and giving individuals a strong incentive to spin their numbers. It is a caricature of the process Peter Drucker recommended almost 60 years ago, and he deserves no more of the blame for it than Toyota does for what passes as Lean in most companies that claim to implement it.

A non-manufacturing example of decadent MBO is the French police under former president Sarkozy, which was tasked by the government to decrease the crime rate by 3%/year while increasing the proportion of solved cases. According to the French press, this was achieved by gaming the numbers. The journalists first latched on to a reported yearly decrease in identity theft, which seemed unlikely. They found that police stations routinely refused to register complaints about identity theft on the grounds that the victims were the banks and not the individuals whose identities were stolen. A retired officer also explained how crimes were systematically downgraded with, for example, an attempted break-in recorded as the less severe “vandalism.”

The fastest way the police had found to boost the rate of case solutions was to focus on violations detected through their own actions, such as undocumented aliens found through identity checks. The solution rate for such crimes is 100%, because they are simultaneously discovered and solved. The challenge is to generate just enough of such cases to boost the solution rate without increasing the overall crime rate… To achieve this result, packs of police officers stalked train stations in search of offenders, as reported both by cops who felt this was not what they had joined up to do, and innocent citizens who complained about being harassed for their ethnicity.

In organizations affected by this kind of gaming, members work to make numbers look good rather than fulfill their missions. It is a widely held belief that you get what you measure and that people will always work to improve their performance metrics, but this is a simplistic view of human nature. This behavior does not come naturally. On their own, schoolteachers focus on educating children, not boosting test scores, and production operators on making parts they can take pride in. It takes heavy-handed management to turn conscientious professionals into metrics-obsessed gamers, in the form, for example, of daily meetings focused entirely on the numbers, backed up by matching human resource policies on retention, promotion, raises, and bonuses.

But enough about police work. Let us return to Manufacturing, and list a few of the most common ways of gaming metrics in our environment:

  1. Taking advantage of bad metrics. As discussed in The Staying Power of Bad Metrics, many metrics commonly used in manufacturing are poorly defined, providing gaming opportunities, such as outsourcing in order to increase sales per employee.
  2. Stealing from the future. In sports, nothing is more dramatic than a game won by points scored in the last seconds. The bell rings right after the ball spirals into the basket and the Cinderella team wins the championship. In business, the end of an accounting period is the end of a game, and, as it approaches, sales scrambles to close last-minute deals and manufacturing to ship a few more orders. This is what Eli Goldratt called the “hockey stick effect.” Of course, this is done by moving up activities that would otherwise have taken place a few days later, during the beginning of the next accounting period. As a consequence, the beginning of the period is almost quiescent. Not much is going on, but it will be made up at the end…
  3. Redefining 100%. Many ratios, by definition, top out at 100%. A machine cannot run 25 hours/day, and a manufacturing process cannot produce more good parts than the total it makes. This is why ratios like equipment uptime and first-pass yield top out at 100%. Any result under 100%, however, invites questions on how it could be improved. A common way to fob off the questioners is to decree, for example, that a particular machine could not possibly be up more than 85% of the time, and redefine the scale so that 85% uptime is 100% performance. For production rates in manual operations, the ratio of an operator’s output to a work standard is often used instead of process times or piece rates. Such ratios have the advantage of being comparable across operations, and are not bounded in either direction. But their relevance depends on a work standard, and, when everybody in a shop performs at 140% of standard, chances are that the standards are engineered for this purpose.
  4. Leveraging ambiguity. Terms like availability, cycle time, or value added are used with different meanings in different organizations, creating many opportunities to game the metrics. If the product’s market share in the first quarter went from 1% to 2%, it doubled, but, if it went back to 1% in the second quarter, it went down by 1%.

Why do people who, in other parts of their lives, may be model citizens, engage in such behaviors, ranging from spinning to cheating? One answer is that, with what MBO has degenerated into in many companies, management is co-opting metrics gamers into its ranks. It is not that gaming is human nature, but instead that you are actively weeding out those who don’t engage in it. Changing such habits in an organization is obviously not easy.

Assume, for example, that your goal is to be competitive by having a skilled work force, and that your analysis shows that it requires employees to stay for entire careers so that what they learn at the company stays in the company. You then apply a number of different methods to make it happen:

  • Communications. You make sure that all employees know what you are doing.
  • Career planning. You have human resources develop a plan with all employees so that each one knows what he or she can aspire to by staying with the company.
  • Organized professional development. You organize formal training, on-the-job training, and continuous improvement to provide opportunities for employees to develop the skills they need to execute their plans.
  • Job enrichment. You redesign the jobs themselves to make more effective use of each employee’s talents.

If employees appreciate their jobs and have long-term career perspectives within the company, few of them should quit or make excuses not to come to work today, and the results should be visible in lower employee turnover rates and absenteeism.

The metrics are there to validate the approaches taken to reach the goal, but the goal is not to improve the metrics. It is a subtle difference. If you have the flu, you have a fever, but your goal is to heal, not just to bring down the fever. Once you are healed, your fever will be gone, and the decrease in your temperature is therefore a relevant indicator of your healing process, but it is not the healing process. If bringing down the fever were the goal, you could “game” your temperature and bring it down without healing. This distinction existed in Drucker’s original writings about MBO, but got lost in implementation.

So, what can you do to prevent metrics gaming? Let us examine four strategies:

  1. Review the metrics themselves. Use the requirements listed in my first post on this subject. You may not be able to completely game-proof your metrics, but at least you can make sure that they make sense for your business and are not trivially easy to game.
  2. Decouple the metrics from immediate rewards. Piece rates used to be the most common form of payment for production work, but have almost entirely vanished in advanced manufacturing economies, and been replaced by hourly wages. Performance expectations are attached, but there is no direct link to the amount produced in a given hour of a given day. There are many reasons for this evolution:
    • The pace of work is often set by machines or by a moving line, rather than by the individual.
    • The best performance for the plant is not necessarily achieved by every individual maximizing output at all times.
    • More is expected of all individuals than just putting out work pieces, including training or participating in improvement activities.

    One consequence of this decoupling is that time studies are easier and more accurate than in a piece rate environment. The same logic applies in management: the more direct the link between metrics and individual evaluations, the more intense the gaming. Don’t make the metrics the key to promotions or to prizes representing a substantial part of a manager’s compensation. Use them only as indicators to inform discussions on plans and strategies.

  3. Increase the measurement frequency. The longer the reporting period, the more opportunities it offers for gaming the metrics by stealing from the future, and the more pronounced the hockey stick effect. Conversely, you can reduce it by measuring more often, and eliminate it by monitoring continuously, as is done, for example, by the electronic production monitors that keep a running tally of planned versus actual production in a line during a shift. Periods exist in accounting because of the limitations of data processing technology at the time the accounting methods were developed. In the days of J.P. Morgan, closing the books was a major effort that a company could only undertake every so often. In 2012, there is no technical impediment to the “anytime close,” but the publication of periodic reports continues by force of habit. Metrics in the language of things as well as the language of money can be monitored continuously.
  4. Have third parties calculate the metrics. In principle, counting chips should be done more accurately by agents with no stake in where they may fall. In practice, it is not only expensive but does not always produce the desired result. It is the approach used in Management Accounting. A plant’s accounting manager, or comptroller, is not chosen by the plant manager; he or she reports directly to corporate finance and has no motivation to humor the plant manager. This is a double-edged sword because, with neutrality, comes a distance from the object of the measurement that may cause misunderstandings, and Management Accounting leaders like Robert Kaplan, Orrie Fiume, or Brian Maskell have been struggling with the challenge of providing relevant, actionable information to managers for the past 30 years. Outside of Accounting, for metrics in the language of things, the closest you can come to having a 3rd party produce the measurements is to have a computer system do it, based on automatic data acquisition. There is no opportunity for gaming, but the issues of relevance are as acute as in Management Accounting.

Manufacturing: Offshore Out; Lean In

See on Scoop.it – Cellular manufacturing

When companies implement lean manufacturing correctly, they achieve necessary cost improvements and increase productivity while maintaining the operational control that only on-shore, in-sourced production offers.

See on d2pnews.com

Improvement in a silo

In a discussion he recently started in the PEX Network discussion group on LinkedIn, Adi Gaskell asked whether process improvement worked in a silo. Most participants said no, but Steven Borris said yes, and I agree with him. Following is what I added:

I agree with Steven, and will even go further: your first pilot projects when you start Lean implementation have the best chance of success if they are contained within a department. The more departments, silos, or fiefdoms you involve, the more difficult you make it, and the less likely to succeed.

The scope does not have to include a complete process from raw materials to finished goods. It does not even have to be at the end or the beginning of the process. All it has to be is a process segment with a technical potential for improvement that is achievable with available skills, and enthusiastic local management.

There is a simple criterion to establish whether such a local project improves the plant as a whole: does it move its target operations in the direction of takt-driven production? If it does, and only if it does, the order-of-magnitude improvements you get locally translate to nibbling percentages globally. For example, the local WIP drops by 90%, and that makes the global WIP drop by 4%.

Only once you have a few successful within-silo projects under your belt do you have the support in the organization and the skills base to take on cross-silo or silo-eliminating projects.

The Lean Body of Knowledge

Efforts at implementing Lean have become pervasive in manufacturing, branching out from the automotive industry to electronics, aerospace, and even food and cosmetics, not to mention efforts to adapt it to construction, health care, or services. As a consequence, the knowledge of Lean, proficiency in its tools, and skills in its implementation are highly marketable in many industries.

There is, however, no consensus on a body of knowledge (BOK) for education in the field, and my review of existing BOKs and university courses confirms it. A consensus is elusive because Lean emerged as the accumulation of point solutions developed at Toyota over time, rather than as the implementation of a coherent strategy.

As Takahiro Fujimoto explains, there was no individual thinker whose theories started the company down this path. Decades later, we are left with the task of reverse-engineering underlying principles from actual plant practices. Those who have attempted it have produced inconsistent results because they have gone at it like the six blind men with the elephant: their personal backgrounds, mostly in business school education, management, or even psychology, allowed them to see different slivers of the Toyota system but not the whole, giving, in particular, short shrift to its engineering dimension.

In the following paragraphs, first I explain what I think the Lean BOK should be. Then I review five programs offered in the US by universities and professional societies and highlight where they differ.

My view of the Lean BOK

A well-rounded program for manufacturing professionals would provide Lean skills to all the professionals involved in designing and operating manufacturing plants. Organizations that are successful at Lean do not rely on one department to “do Lean” for everybody else. Instead, Lean is part of everybody’s job. There are basics that everybody needs to know, and then there are different subsets of skills that are useful depending on where you work in the plant.

Beyond the common background, the knowledge should be organized around functions performed by people. In this way of thinking, Visual Management, for example, would not be a stand-alone subject, because factories don’t have “visibility managers.” On the other hand, plants have assembly lines, machining or fabrication shops, shipping and receiving departments all in need of visual management. As a consequence, visual management is part of the training of professionals in assembly, machining, fabrication, logistics, quality, maintenance, etc. And each one only needs to know visual management as it is relevant to his or her position.

Over time, Lean should migrate into the mainstream of manufacturing and industrial engineering, and lose its separate identity, both in industrial practice and in engineering and management education. This has been the fate of successful innovations in manufacturing in the past. For example, the “American system of manufacture” to which we owe interchangeable parts is now only a subject for historians. It is not the object of a standard or certification, and nobody explicitly undertakes to implement it. That is because its components — engineering drawings, tolerances, allowances, routings, special-purpose machines, etc. — have all become an integral part of how we make things. Likewise, in Japan, TQC is no longer a topic; its most useful components fused into the manufacturing culture 30 years ago. This is what must happen to Lean in the next 30 years.

Lean proficiency should be built around manufacturing functions, not Lean tools. From foundation to superstructure, we see the following hierarchy — originally defined by Crispin Vincenti-Brown — and structure the body of knowledge accordingly:

  1. Manufacturing and industrial engineering of production lines is the foundation, covering every aspect of the physical transformation of materials and components into finished goods. This is about the design and operation of production lines using different technologies and working at different paces.
  2. Logistics and production control build on top of this foundation, covering both the physical distribution and the information processing required to make materials available to production and deliver finished goods.
  3. Organization and people covers both what an implementer needs to know in order to lead the Lean transformation of an organization, and what it takes to manage it once it is underway. The first part is about Lean project and program management; the second, about the alignment of operator team structures to the production lines, continuous improvement and skills development, and support from production control, quality assurance, maintenance, engineering, and HR.
  4. Metrics and accountability. This is about appropriate metrics for quality, cost, delivery, safety, and morale. In routine operations, this also means collecting the data needed, computing the metrics, and communicating the results in a way that provides useful feedback. On projects, this means estimating improvements. In both cases, metrics in the language of things need to be translated into the language of money for top management.

A hypothetical participant who would master all of the above would understand both the philosophy and the tools of Lean, their range of applicability, and their implementation methods. He or she would possess the following skills:

  1. How to read a plant, assess its performance potential, set strategic directions, and start it moving in these directions. This entails the following:
    • Characterizing the demand the plant is expected to respond to.
    • Mapping its current, ideal, and future value streams and processes, and detecting waste.
    • Assessing its technical and human capabilities.
    • Setting strategic directions for improvement.
    • Identifying appropriate improvement projects for current conditions and skill levels.
  2. How to generate or evaluate micro-level designs for takt-driven production lines or cells in assembly, fabrication, or machining by focusing on flows of materials and movements of people. The tools include spreadsheet calculations with Yamazumi and work-combination charts, jidoka, board game simulations, full-size mockups, and software simulations as needed.
  3. How to generate or evaluate macro-level designs for plants and supply chains, involving the organization of:
    • Internal and external logistics.
    • Milk runs.
    • Water spiders.
    • Heijunka and Kanbans.
    • Lean inventory management.
  4. How to apply the right tools for quality improvement, addressing:
    • Process capability issues with statistical methods/Six Sigma.
    • Early detection and resolution of problems through one-piece flow and systematic problem-solving.
    • Human-error prevention through poka-yoke/mistake-proofing.
    • Planned responses to common problems through Change Point Management (CPM), embedded tests and other tools of JKK.
  5. How to organize people to execute and support takt-driven production, and in particular:
    • Set up a system of small teams, with team and group leaders, to carry out daily production as well as continuous improvement activities.
    • Set up a Lean daily management system with performance boards and management follow-up routines.
    • Generate and maintain a system of posted standard work instructions.
    • Apply Training-Within-Industry (TWI).
    • Set up and dimension appropriately a support structure for logistics/production control, maintenance, quality assurance, engineering, human resources, supply chain management and customer service.
  6. How to manage the Lean transformation of a plant from pilot projects to full deployment.
  7. How to select and deploy relevant metrics to monitor manufacturing performance and estimate the impact of improvement projects both in the language of things and in the language of money.
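As promised under item 2, here is a minimal sketch of the arithmetic behind a takt calculation and a Yamazumi chart. The demand figures and task times are hypothetical, and a real study would also account for variability, machine time, and walking:

```python
# Minimal sketch of the arithmetic behind a Yamazumi chart:
# compute takt time from demand, then stack manual task times
# against it to estimate the number of operators needed.
# All demand figures and task times are hypothetical.
import math

shift_seconds = 7.5 * 3600               # net available time per shift
demand_per_shift = 450                   # units the line must produce
takt = shift_seconds / demand_per_shift  # seconds available per unit

# Manual task times (seconds/unit); each must be shorter than takt
task_times = [12.0, 18.5, 9.0, 22.0, 14.5, 7.0]
total_work = sum(task_times)

# Theoretical minimum crew: total work content divided by takt
min_operators = math.ceil(total_work / takt)

# Greedy stacking of tasks into operator "bars," in sequence,
# never letting one operator's bar exceed the takt line
bars = [[]]
load = 0.0
for t in task_times:
    if load + t > takt:
        bars.append([])
        load = 0.0
    bars[-1].append(t)
    load += t

print(f"Takt: {takt:.0f} s/unit, total manual work: {total_work:.1f} s/unit")
print(f"Lower bound: {min_operators} operators; greedy stacking needs {len(bars)}")
for i, bar in enumerate(bars, 1):
    print(f"  Operator {i}: {sum(bar):.1f} s of {takt:.0f} s")
```

The greedy stacking mirrors the way bars are built on a physical Yamazumi board; a real design would also rebalance task boundaries rather than treat them as fixed.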
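Likewise for item 3, here is a sketch of the classic kanban card-count formula, with hypothetical parameters; conventions vary from plant to plant, so treat it as one common form rather than the definitive rule:

```python
# Minimal sketch of the classic kanban card-count formula:
#   N = D * L * (1 + alpha) / C, rounded up
# where D = demand per unit time, L = replenishment lead time,
# alpha = safety factor, and C = container quantity.
# All parameters are hypothetical.
import math

def kanban_cards(demand_per_hour, lead_time_hours, container_qty,
                 safety_factor=0.10):
    """Cards needed to cover demand over one replenishment loop, plus safety."""
    coverage = demand_per_hour * lead_time_hours * (1 + safety_factor)
    return math.ceil(coverage / container_qty)

# Example: 120 parts/hour, a 2.5-hour replenishment loop, 40 parts/container
print(kanban_cards(120, 2.5, 40))  # -> 9 cards
```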

This BOK is dauntingly large, and new wrinkles are added daily. Fortunately, you don’t need to master all of it in order to be effective.

Review of existing BOKs

I took a look at a few of the existing training programs offered by various institutions, for the purpose of identifying the underlying BOKs. Table 1 shows the list. My comments follow.

Table 1: A few Lean training programs in the US
University of Kentucky Lean Systems Certification
University of Michigan Lean Manufacturing Training
SME Lean Certification
University of Dayton Get Lean
Auburn University Lean Certificate Series

The University of Kentucky program

The University of Kentucky’s program includes Core Courses — a train-the-trainer program — and Specialty Courses — for professionals outside of production operations. Some of the specialty courses are targeted at functions within the organization; others are about tools. The core courses alone add up to three one-week training sessions, while each specialty course is typically a one- or two-day workshop.

From the University’s web site, however, I cannot tell when, or if, participants ever learn how to design a machining cell or an assembly line, or how to reduce setup times. In the core courses, it’s great to talk about mindsets, culture, and transformational leadership, but where is the engineering red meat?

The specialty courses address planning, improvement methods, logistics, supplier development, and other unquestionably important topics, but offer nothing about manufacturing or industrial engineering.

The University of Michigan program

The University of Michigan has a program of two one-week sessions with a three-week gap between them. This program does cover cell design, materials handling, factory layout, and even rapid plant assessment, which are certainly relevant engineering topics, but I didn’t see anything about the design of lines that are not cells, autonomation, or the Lean approach to quality. There is a module about integrating Six Sigma with Lean, but there is a lot to Lean Quality that has nothing to do with Six Sigma, such as mistake-proofing.

There is also some coverage of logistics, organization, and accountability, but not as much as in the University of Kentucky program.

The SME

The SME has published a document entitled Lean Certification Body of Knowledge, in which the major headers are:

  1. Cultural Enablers
  2. Continuous Process Improvement
  3. Consistent Lean Enterprise Culture
  4. Business Results

Organization and People issues are treated in 1. and 3. The first two line items under Cultural Enablers are “Respect for the individual” and “Humility.” I am not sure how you can teach this or test for it, particularly humility. They are followed by techniques that have to do with implementation. The topics in 3. have more to do with management once Lean is started, but the document doesn’t say so in so many words.

All engineering and logistics content is lumped under Continuous Process Improvement, which is clearly a misnomer because many of the Lean techniques in these areas are radical innovations that have nothing to do with continuous improvement. Inside this section, the choice of topics and their structure is surprising. For example, the only method of data collection considered is the check sheet, and it ranks as high in the hierarchy of topics as poka-yoke or one-piece flow.

As the name suggests, Business Results covers metrics and accountability.

The weight of the different areas varies with the level of certification. At the Bronze level, for example, Continuous Process Improvement counts for 60%; at the Gold level, only for 15%.

The University of Dayton

I have ties with this institution from having taught courses there for many years, and I am still listed among their Experts. But I am not involved with their Get Lean Certification program. It is an 8-to-10-day curriculum with a 5-day core on the following topics:

  • Introduction to the Lean Tools
  • How to Develop New Metrics in a Lean Culture
  • Human Error Reduction: Root Cause Analysis
  • Fundamentals of Negotiation
  • Strengthening Your Business Services using LEAN Tools
  • Managing Projects in a LEAN or Six Sigma Environment
  • Managing an Efficient Supply Chain

The choice of topics may seem odd. For example, you might wonder what Fundamentals of Negotiation is doing in a Lean training program, or why Root Cause Analysis only appears under Human Error Reduction. What about root cause analysis of process capability problems?

Auburn University

Of all the Lean programs reviewed here, Auburn University’s probably has the deepest roots, through the influence of JT Black, whose passion for Lean goes back to the late 1970s.

The list of subjects they cover is as follows:

  • Principles of Lean
  • Value Stream Mapping
  • 5S
  • Total Productive Maintenance (TPM)
  • Quick Changeover
  • Pull / Kanban / Cellular Flow
  • Sustaining Continuous Improvement
  • Lean Office
  • Lean Accounting
  • Rapid Improvement Event
  • Problem Solving

If anything, this program has too much of the red meat that is lacking in some of the others. It could, without harm, emphasize Logistics and Management a bit more.

Conclusion: no consensus

Even when considering the programs solely on the basis of their published syllabi, it is clear that their graduates will have received vastly different instruction, and that the designers of these programs have no common view of what the Lean Body of Knowledge is.

Jidoka versus automation

Toyota’s jidoka isn’t just about stopping production when something goes wrong. It is an automation strategy that works because it is incremental and centered on human-machine interactions. It is essential to the strength of manufacturing in high-wage economies and should command more attention than it has so far among Lean implementers.

The most striking characteristic of automation in manufacturing is that, while making progress, it has consistently fallen short of expectations. In Player Piano, Kurt Vonnegut articulated the 1950s vision of automated factories: integrated machines produce everything while their former operators are unemployed and the managers spend their time playing silly team-building games at offsite meetings. Sixty years on, the most consistently implemented part of Vonnegut’s vision is the silly team-building games…

Nippon Steel’s Yawata Steel Works in Kitakyushu, Japan, produces as much today with 3,000 employees as it did with 40,000 in 1964, and this transition was accomplished without generating massive unemployment. There are other such limited areas of automation success, like the welding and painting of car bodies. When manufacturing jobs are lost today, it is almost never to automation and almost always to cheaper human competition elsewhere. In the words of an experienced operator in a plant making household goods in the US, “When I joined 25 years ago, I expected these jobs to be automated soon, but we’re still doing them the same way.”

What is holding up automation today is not technology but the lack of consideration for people. There are entire books on automation without a paragraph on what people’s roles should be. Of course, a fully automatic, “lights-out” factory has nobody working inside, so why bother? There are at least two reasons. First, even an automatic plant needs people to program its processes, tell it what work to do, maintain it, monitor its operations, and respond to emergencies. Second, successful automation is incremental and cannot be developed without the help of the people working in the plants throughout the migration.

Enter autonomation, or jidoka, which is sometimes also called “automation with a human touch” but really should be called human-centered automation. Instead of systems of machines and controls, it is about human-machine interactions. In the classical House of Lean model, the two pillars holding up the roof are Just-In-Time and Autonomation, or Jidoka. Figure 1 is lifted from the introduction to Working with Machines, and shows what happens when the jidoka pillar is ignored.

Figure 1. Just-in-Time and Jidoka

More and more, the Lean literature in English uses the Japanese word jidoka rather than autonomation, but with its scope reduced to the idea of stopping production whenever anything goes wrong, and the concept is tucked away under the umbrella of Quality Management.
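Even this narrow reading has content, though. As a minimal sketch of what it amounts to in control logic: read_measurement, stop_machine, and raise_andon below are hypothetical stand-ins for whatever sensor and machine interfaces a real station exposes.

```python
# Minimal sketch of the narrow, stop-on-abnormality reading of jidoka:
# the machine checks its own work and halts rather than producing defects.
# read_measurement(), stop_machine(), and raise_andon() are hypothetical
# stand-ins for a real station's sensor and control interfaces.

LOWER, UPPER = 9.95, 10.05  # tolerance band for the machined feature, in mm

def run_station(read_measurement, stop_machine, raise_andon):
    """Run cycles until a measurement falls outside the tolerance band."""
    while True:
        value = read_measurement()  # the machine checks its own work each cycle
        if not (LOWER <= value <= UPPER):
            stop_machine()          # never pass a known defect downstream
            raise_andon(f"Out of tolerance: {value:.3f} mm")
            break                   # hold the station until a human responds
```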

Toyota’s jidoka is a tricky term, because it is an untranslatable pun. Originally, the Japanese word for automation is jidoka (自動化), literally meaning “transformation into something that moves by itself.” What Toyota did is add the human radical 人 to the character 動 for “move,” turning it into the character 働 for “work,” which is still pronounced “do” but changes the meaning to “transformation into something that works by itself.” It’s automation with the human radical added, but it is still automation, with all the technical issues the term implies.

The discussion of automation in the first draft of Working with Machines started with the following historical background, which was edited out, like the chapter on locomotives and typewriters, on the grounds that it contained no actionable recommendations. In this blog, I can let you be the judge of its value.

From tea-serving wind-up dolls to autonomation

The word automation was first used by Ford Manufacturing Vice President Delmar Harder in 1947 for devices transferring materials between operations. He set as a target a payback period of at most one year from labor savings, which meant in practice that each device could cost no more than 15% above an operator’s average yearly wages and had to eliminate at least one operator. While this kind of economic analysis is still used, from the perspective of Toyota’s system, Ford’s focus on materials handling was putting the integration cart before the unit operation horse. Toyota’s approach focuses on individual operations first, and only then addresses movements of parts between them. In 1952, John Diebold broadened the meaning of automation to what has become the common usage, and painted a picture of the near future that was consistent with Kurt Vonnegut’s.

At that time, automatic feedback control was perceived to be the key enabling technology for automation, to be applied to ever larger and more complex systems. It was not a new concept, having been applied since 1788 in the centrifugal governor regulating the speed of a steam engine (see Figure 2).

Figure 2. James Watt’s 1788 centrifugal governor

Applying electronics to feedback control in World War II had made it possible, for example, to move a tank’s gun turret to a target angle just by turning a knob. Postwar progress in the theory and application of feedback control caused many contemporary thinkers, like Norbert Wiener, both to see in the concept a philosophical depth that is truly not there and to underestimate what else would need to be done to achieve automation. Of course, if you cannot tell a machine to take a simple step and expect it to be executed accurately and precisely, then not much else matters. Once you can, however, you are still faced with the problem of sequencing these steps to get a manufacturing job done.
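As a minimal sketch of the idea behind the turret example, here is a proportional feedback loop in which the actuator is driven by the gap between the knob’s setpoint and the measured angle; the gain, time step, and simplified dynamics are hypothetical:

```python
# Minimal sketch of closed-loop proportional control: the controller
# repeatedly compares the commanded angle with the measured angle and
# actuates in proportion to the error. Gain and time step are hypothetical.

setpoint = 45.0  # target angle commanded by the knob, in degrees
angle = 0.0      # measured turret angle
gain = 0.8       # proportional gain
dt = 0.1         # control-loop period, in seconds

for _ in range(50):                # 5 seconds of simulated control
    error = setpoint - angle       # compare command with measurement
    angle += gain * error * dt     # actuate in proportion to the error
print(f"Angle after 5 s: {angle:.1f} degrees")  # converges toward 45.0
```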

While automatic feedback control was historically central to the development of automatic systems, it is not at center stage in manufacturing automation today. With sufficiently stable processes, open-loop systems work fine, or feedback control is buried deep inside such off-the-shelf components as mass flow controllers, thermostats, or humidity controllers. Manufacturing engineers are occasionally aware of it in the form of variable-speed drives or adaptive control for machine tools, but other issues dominate.

Fixed-sequence and even logic programming also have a history as long as that of feedback control, and are by no means easier to achieve. Figure 3 shows two examples of 18th century automata moved by gears, levers, and cams through sequences that are elaborate but fixed.

Figure 3. 18th century automata from France and Japan

These concepts found their way into practical applications in manufacturing as early as 1784, with Oliver Evans’s continuous flour mill, which integrated five water-powered machines through bucket elevators, conveyors, and chutes (see Figure 4). The same kind of thinking later led to James Bonsack’s cigarette-making machine in 1881, to the kind of automatic systems that have dominated high-volume processing, bottling, and cartoning plants for 100 years, and to the transfer lines that have been used in automotive machining since World War II.

Figure 4. Oliver Evans’ continuous flour mill (1784)

Fixed-sequence automation works, but only in dedicated lines for products with takt times under 1 second, where the investment is justifiable and flexibility unnecessary. Rube Goldberg machines parody this type of automation.

Figure 5. Winner of the 2008 Penn State Rube Goldberg machine contest

Automation with flexibility is of course a different goal, and one that has been pursued almost as long, through programmable machines. The earliest example used in production is the Jacquard loom from 1801, shown in Figure 6. It is also considered a precursor to the computer, but it was not possible to make a wide variety of machines programmable until the actual computer was not only invented but made sufficiently small, cheap, and easy to use, which didn’t occur until decades after Vonnegut and Diebold were writing.

Figure 6. Jacquard loom from museum in Manchester, UK

By the mid-1980s, the needed technology existed, but the vision of automation remained unfulfilled. In fact, more technology was available than the human beings on the shop floor, in engineering, and in management knew what to do with. As discussed in the post on Opinels and Swiss knives, the computer was a game changer. In manufacturing, this was not widely recognized when it became true, and it still is not today.

Writing in 1952, John Diebold saw nothing wrong with the way manufacturing was done in the best US plants, nor did he have any reason to, as the entire world was looking at the US as a model for management in general and manufacturing in particular. In the 1980s, however, when GM invested $40B in factory automation, it was automating processes that were no longer competitive and, by automating them, making them more difficult to improve.

Whether the automation pioneers’ vision will ever come true is in question. So far, every time one obstacle has been overcome, another one has taken its place. Once feedback control issues were resolved came the challenge of machine programming. Next is the need to have a manufacturing concept that is worth automating, as opposed to an obsolete approach to flow and unit processes. And finally, the human-machine interface issues discussed above must be addressed.

21st-century manufacturers do not make automation their overall strategy; they treat it as a tool. If, for example, one operator in a cell is busy only 20% of the time, a targeted automation retrofit to one of the machines in the cell may be the key to eliminating this 20% and pulling the operator out of the cell.
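As a minimal sketch of the arithmetic behind such a retrofit decision, in the spirit of Harder’s one-year payback rule and with all figures hypothetical:

```python
# Minimal sketch of retrofit payback arithmetic: automating the task that
# keeps an operator 20% busy in the cell frees that operator entirely.
# All figures are hypothetical.

retrofit_cost = 28_000   # $ to automate the task
loaded_wage = 52_000     # $ per operator-year, fully loaded
operators_freed = 1      # the retrofit pulls one operator out of the cell

annual_savings = operators_freed * loaded_wage
payback_years = retrofit_cost / annual_savings
print(f"Payback: {payback_years:.2f} years")  # 0.54, within a one-year rule
```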