Michel Baudin's Blog
Ideas from manufacturing operations
Taxis waiting for people and people waiting for taxis

Oct 27 2011

Waiting for each other

We have all seen the absurd situation in the featured picture above: a line of customers waiting for taxis while, on the other side of a barrier, a line of taxis waits for customers. This particular instance is from The Hopeful Traveler blog. The cabs are from London, but the same scene could have been shot in many other major world cities.

I am sure we have all encountered similar situations in other circumstances, which may or may not be easy to resolve. One particular case where it should be easy is the restaurant buffet. Figure 1 shows a typical scene in buffet restaurants, with a line of people waiting to get food all on one side of the table, while food sits waiting and accessible on the opposite side.

Figure 1. A typical buffet

I think the fundamental mistake is the assumption that a buffet is like an assembly line, providing sequential access to dishes. This means that you cannot get to the Aloo Gobi until the person in front of you is done with the Tandoori. The ideal buffet would instead provide random access, meaning that each customer would have immediate access to every dish at all times. While that ideal may not be feasible, you can get much closer to it than with the linear buffet. The following picture shows an alternative, non-sequential organization of the buffet in circular islands.

Figure 2. A buffet island at the Holiday Inn in Visalia, CA
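
To put rough numbers on the difference between sequential and random access, here is a minimal sketch with made-up figures and deliberately stylized assumptions: every customer takes every dish, and each dish takes the same time per customer. Under those assumptions, the single-file line behaves like a flow line with no passing, while the island behaves like an open shop in which customers may visit the dishes in any order.

# Stylized comparison; the numbers and assumptions are mine, not measurements from any buffet.

def single_file_minutes(n_customers: int, n_dishes: int, t: float) -> float:
    """Fixed order, no passing: the last customer finishes at (n + d - 1) * t."""
    return (n_customers + n_dishes - 1) * t

def island_minutes(n_customers: int, n_dishes: int, t: float) -> float:
    """Any order (an open-shop schedule): with equal times, a rotating
    assignment of customers to dishes reaches max(n, d) * t."""
    return max(n_customers, n_dishes) * t

n, d, t = 20, 8, 0.5  # 20 customers, 8 dishes, half a minute per dish
print(f"Single-file line: {single_file_minutes(n, d, t):.1f} min")  # 13.5 min
print(f"Island layout:    {island_minutes(n, d, t):.1f} min")       # 10.0 min

In this simplified model the gap is only (n_dishes - 1) * t, because everyone is assumed to want every dish; the larger practical gain of the island is that a customer who skips a dish, or lingers over one, no longer blocks everyone behind.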

The limitation of this concept is that replenishment by waiters can interfere with customers. To avoid this, you would want dishes to be replenished from inside the circle while customers help themselves on the outside, as in the following sketch:

Figure 3. A buffet island with replenishment from inside

One problem with the circular buffet island, however, is its lack of modularity. You can add or remove whole islands, but you cannot expand or shrink an island, which you can do with straight tables arranged in a U-shape, as in Figure 4.

Figure 4. Buffet island with straight tables

This buffet island may superficially look like a manufacturing cell, but it is radically different. Its purpose is random access to food as opposed to sequential processing of work pieces, and the materials do not flow around the cell but from the inside out.

Such are the thoughts going through my mind while munching on the Naan at Darbar.

By Michel Baudin • Technology • 3 • Tags: Buffet, industrial engineering, Lean manufacturing, Restaurant

Mining operations

Oct 26 2011

Data Mining in Manufacturing versus the Web

Data mining, in general, is the retrieval of information from data collected for a different purpose, such as using sales transaction histories to infer what products tend to be bought together. By contrast, design of experiments involves the collection of observations for the purpose of confirming or refuting hypotheses.

This perspective is consistent with the literature on the purpose of data mining, but most authors go further: they include in their definitions that data mining is done with computers, on large databases, and with specific analytical tools, which I think is too restrictive. The tools they list are the ones they have found useful in analyzing the behavior of millions of users of search engines or e-commerce websites, and they are not obviously applicable in other areas, such as manufacturing.

During World War II, British analysts used the serial numbers of captured or destroyed German tanks to estimate the numbers produced. Because serial numbers were not attached for this purpose, it was data mining. It used clever statistical models but, obviously, no computers.
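
For readers curious about the arithmetic, here is a minimal sketch of the classic estimator associated with this problem; the serial numbers below are made up for illustration. If serials run from 1 to N and you observe k of them with maximum m, the usual minimum-variance unbiased estimate of N is m + m/k - 1.

def estimate_total_produced(serials: list[int]) -> float:
    """Estimate total production from observed serial numbers, assuming
    they run 1..N and the sample is drawn without bias."""
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

print(estimate_total_produced([19, 40, 68, 134, 210]))  # -> 251.0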

Today, PhD-level data miners at Google, eBay, or Amazon sift through the page views and click-throughs of millions of users for clues to patterns they can use. The data is collected automatically, is accurate, and arrives by the terabyte every day. This “big data” requires parallel processing on clusters of computers and lends itself to the most advanced analytical tools ever developed.

Compared to this fire hose of data, what manufacturing produces is a trickle. In a factory, the master data/technical specs, plans and schedules, status of operations and work in process, and the history of production over, say, 12 months, usually add up to a few gigabytes. It doesn’t fit on one spreadsheet, but it often does on a memory stick. On the other hand, much of it is still manually generated and therefore contains errors, and it is often structured in ways that make it difficult to work with.

Even if manufacturing companies could hire the data miners away from their current jobs, their experience with e-commerce or web search would not have prepared them well for the different challenges of manufacturing data mining.

There is an opportunity for data mining to contribute to competitiveness in manufacturing, but the approach must start from the needs. It must not be an e-commerce cure in search of manufacturing diseases.

By Michel Baudin • Technology • 0 • Tags: Data mining, Lean manufacturing, Manufacturing

Steam locomotive and typewriter

Oct 25 2011

The steam locomotive and the typewriter

The first draft of my book Working with Machines contained a chapter that was a post-mortem on two obsolete machines. It was cut on the grounds that, unlike all the other chapters, it was not actionable for the reader.

Its abstract is as follows:

The steam locomotive and the typewriter are icons of the industrial age, and their parallel histories show different aspects of the human experience of working with machines. The steam locomotive is fondly remembered; the typewriter, all but forgotten except for the QWERTY keyboard. The steam engine participated in the development of every industrial economy, but the typewriter played no major role in Japan. The typewriter did not demonstrably improve the productivity or quality of office output, but was adopted only because of its image of modernity.

Locomotive driver was a prestigious position for a manual laborer, but typist never was. Compared to electrics and diesels, the steam locomotive had a cab that was exposed to the elements and to the heat of the firebox, making it uncomfortable, difficult to operate, and dangerous. Yet engineers and firemen preferred it to the tedium and loneliness of modern locomotives. Automatic machines that require human attention only when they malfunction are also found in airplanes and in manufacturing plants, challenging the job designer to keep the operator alert and efficiently used.

As the typewriter prints one keystroke at a time, typists were always busy with a single machine and determined both its productivity and output quality. Typists worked in comfortable places, but under pressure, and faced the long-term hazards of sedentary work. The typewriter’s main legacy is that a society can make a long-term investment in machines whose tangible benefits do not obviously exceed their costs.

Click here for a pdf file of the entire chapter.

By Michel Baudin • History • 1 • Tags: History of technology, industrial engineering, Lean manufacturing, Manufacturing engineering

Manual data collection at end of shift

Oct 24 2011

A management perspective on data quality

Prof. Mei-Chen Lo, of National University and Kainan University in Taiwan, worked with operations managers in two semiconductor companies to establish a list of 16 dimensions of data quality. Most are not parameters that can be measured; they are better treated as questions to ask about a company’s data. I learned of this list from her at an IE conference in Kitakyushu in 2009 and have found it useful by itself as a checklist for a thorough assessment of a current state. Her research is about methods for ranking the importance of these criteria.

They are grouped into four main categories:

  1. Intrinsic. Agreement of the data with reality.
  2. Context. Usability of the information in the data to support decisions or solve problems.
  3. Representation. The way the data is structured, or not.
  4. Accessibility. The ability to retrieve, analyze and protect the data.

Each category breaks further down as follows:

  1. Intrinsic quality

    • Accuracy. Accuracy is the most obvious issue and is measurable. If the inventory data says that slot 2-3-2 contains two bins of screws, then can we be confident that, if we walk to aisle 2, column 3, level 2 in the warehouse, we will actually find two bins of screws?
    • Fact or judgment. That slot 2-3-2 contains two bins of screws is a statement of fact. Its accuracy is in principle independent of the observer. On the other hand, “Operator X does not get along with teammates” is a subjective judgment and cannot carry the same weight as a statement of fact.
    • Source credibility. Is the source of the data credible? Credibility problems may arise due to the following:
      • Lack of training. For example, measurements that are supposed to be taken on “random samples” of parts are not, because no one in the organization knows how to draw a random sample.
      • Mistake-prone collection methods. For example, manually collected measurements are affected by typing errors.
      • Conflicts of interest. Employees collecting data stand to be rewarded or punished depending on the values of the data. For example, forecasters are often rewarded for optimistic forecasts.
    • Believability of the content. Data can be unbelievable either because it is valid news of extraordinary results or because it is inaccurate. In either case, it warrants special attention.
  2. Context

    • Relevance. Companies often collect data because they can, rather than because it is relevant. It is the corporate equivalent of looking for keys at night under the street light rather than next to the car. The semiconductor industry, which established this list of criteria, routinely takes measurements after each step of the wafer process and plots them on control charts. This data is relatively easy to collect but of little relevance to the control and improvement of the wafer process as a whole. The engineers cannot capture most of the relevant data until the circuits undergo tests at the end of the process.
    • Value-added. Some of the data produced in a plant has direct economic value. Aerospace and defense goods, for example, include documentation of their production process as part of the product. More generally, the data from commercial transactions, such as orders, invoices, shipping notices, or receipts, is at the heart of the company’s business activity. By contrast, other data is generated only to satisfy internal needs, such as the number of employees trained in transaction processing on the ERP system.
    • Timeliness. Is the data available early enough to be actionable? A field failure report that traces back to a manufacturing process as it was six months ago is not timely if that process has undergone two engineering changes since then.
    • Completeness. Measurements must come with units of measure and all the data describing who collected them, where, when, and how.
    • Sufficiency. Does the data cover all the parameters needed to support a decision or solve a problem?
  3. Representation

    • Interpretability. What inferences can you draw directly from the data? If the demand for an item has been rising 5%/month for the past 18 months, it is no stretch to infer that this trend will continue next month. On the other hand, if a chart tells you that a machine has an Overall Equipment Effectiveness (OEE) of 35%, what can you deduce from it? The OEE is the product of three ratios: availability, yield, and actual over nominal speed. The 35% figure may tell you that there is a problem, but not where it is (a numeric sketch follows this list).
    • Ease of understanding. Management accounting exists to support decision making by operations managers, yet the reports provided to managers are often in a language they do not understand. This does not have to be the case: financial officers like Orrie Fiume have modified the vocabulary used in these reports to make them easier for operating managers to understand. Likewise, engineers who use cryptic codes instead of plain language make technical data harder to understand.
    • Conciseness. A table with 100 columns and 20,000 rows with 90% of its cells empty is a verbose representation of a sparse matrix. A concise representation would be a list of the row and column IDs of the nonempty cells, with their values.
    • Consistency. Consistency problems often arise as a result of mergers and acquisitions, when companies mash together their different data models.
  4. Accessibility

    • Convenience of access. Data that an end-user can retrieve directly through a graphic interface is conveniently accessible; data in paper folders on library shelves is not. Neither are databases in which each new query requires the development of a custom report by a specially trained programmer.
    • Usability. High-usability data, for example, takes the form of lists of property names and values that you can easily tabulate into spreadsheets or database tables. From that point on, you can select, filter, and summarize it in a variety of informative ways. Low-usability data often comes as a string of characters that you first need to parse, with characters 1 to 5 being one field, 6 to 12 another, etc. Then you need to look up the meaning of each of these substrings in a correspondence table, to find that '00at3' means “lime green.”
    • Security. Manufacturing data contains some of the company’s intellectual property, which you need to protect not only from theft but also from inadvertent alteration by unqualified employees. But security must be implemented so that its procedures do not slow down qualified, authorized employees who need access to the data.
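
To illustrate the OEE point above, here is a minimal numeric sketch with made-up figures: two machines can both report an OEE of roughly 35% for entirely different reasons, which is why the aggregate number alone does not locate the problem.

# Made-up numbers for illustration; not data from any actual machine.

def oee(availability: float, speed_ratio: float, first_pass_yield: float) -> float:
    """OEE = availability x (actual/nominal speed) x yield."""
    return availability * speed_ratio * first_pass_yield

machine_a = oee(0.50, 0.95, 0.74)  # chronic downtime, otherwise healthy
machine_b = oee(0.90, 0.55, 0.71)  # runs far below nominal speed

print(f"Machine A OEE: {machine_a:.0%}")  # 35%
print(f"Machine B OEE: {machine_b:.0%}")  # 35%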

Prof. Mei-Chen Lo’s research on this topic was published as “The assessment of the information quality with the aid of multiple criteria analysis,” European Journal of Operational Research, Volume 195, Issue 3, 16 June 2009, Pages 850-856.

By Michel Baudin • Management • 4 • Tags: Data mining, Information systems, Information technology, IT, Lean manufacturing, Manufacturing, Quality

Dashboard assembly stations

Oct 23 2011

Why production matters

On LinkedIn Lean Business Process group, Ralph Bartelmann asked the following:

Is it really worthwhile to squeeze the last cent out of production? In many environments, production costs represent a minor part of the overall product cost. Following Pareto reasoning, it seems more reasonable to work on other parts of the value stream, like supplier development, product design, etc. What is your opinion and experience? What are the real challenges?

Following is my answer:

It’s not about what production costs but about what it does for the business. Improving production is about making it faster, better, safer, less tedious… and cheaper. It needs to be faster to make you more responsive, better so that production does not introduce defects that harm your reputation, and safer and less tedious so that you can retain your workforce and grow its skills.

If you improve on all these fronts, guess what? Your costs go down, and not only in production but in other parts of the value stream too, because they are not independent of production. For example, there is no point in trying to develop just-in-sequence suppliers unless you practice leveled-sequencing (a.k.a. Heijunka) in your assembly line.

A manufacturing company ignoring production is an army ignoring combat on the grounds that more money is spent moving soldiers and keeping them supplied.

By Michel Baudin • Management • 0 • Tags: industrial engineering, Lean manufacturing, Manufacturing engineering

Oct 22 2011

This is a poll for managers and engineers

This is a poll for managers and engineers working in factories: how do you use manufacturing data? Please answer on http://linkd.in/ogOsOv, and comment if none of the choices fit you.

By Michel Baudin • Polls • 3 • Tags: Data mining, Information systems, Information technology, IT, Lean manufacturing
