“Studies show…” or do they?

Various organizations put out studies that, for example, purport to “identify performances and practices in place among U.S. manufacturers.” The reports contain tables and charts, with narratives about “significant gaps” (without stating any level of significance) or “exponential growth” (as if there were no other kind). They borrow the vocabulary of statistics and data science, but they don’t actually use the science; they just use the words to support sweeping statements about what manufacturers should do in the future.

At the bottom of such reports, there is usually a paragraph about the study methodology, explaining that the data was collected through questionnaires mailed to manufacturers and made available online, with a free copy of the report as the incentive for recipients to participate. The participants are asked, for example, to rate “the importance of process improvement to their organization’s success over the next five years” on a scale of 1 to 5.

The results are a compilation of subjective answers from a self-selected sample. In marketing, this kind of survey makes sense. You send out a questionnaire about a product or a service. The response rate alone tells you something about the level of interest in what you are offering, and the answers may further tell you about popular features and shortcomings.

But it is not an effective way to gauge the state of an industry. For that purpose, you need objective data, either on all the companies involved or on a representative sample that you select. Government bodies like the Census Bureau or the Bureau of Labor Statistics collect useful aggregate statistics, like value added per employee or the ratio of indirect to direct labor by industry, but they are only a starting point.
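To see why self-selection matters, here is a minimal sketch, with made-up numbers, of a 1-to-5 rating question in which the likelihood of answering rises with how much the respondent already cares about the topic. The response model and all the figures are assumptions for illustration, not data from any actual study:

```python
# Hypothetical illustration of self-selection bias in a mailed survey.
# Assumption: a manufacturer rating a topic "5" is five times as likely
# to return the questionnaire as one rating it "1".

import random
import statistics

random.seed(0)

# True ratings, uniformly spread from 1 to 5 across the population.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(100_000)]

def responds(rating: int) -> bool:
    # Probability of responding grows with the rating (assumed model).
    return random.random() < rating / 10

respondents = [r for r in population if responds(r)]

print(f"True population mean: {statistics.mean(population):.2f}")   # about 3.0
print(f"Self-selected mean:   {statistics.mean(respondents):.2f}")  # about 3.7
print(f"Response rate:        {len(respondents) / len(population):.0%}")
```

The respondents’ average overstates the population average simply because the indifferent recipients threw the questionnaire away, and nothing in the published report corrects for it.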

Going beyond those statistics is so difficult that I don’t know of any successful case. Any serious assessment of a company or factory requires visiting it, interviewing its leaders in person, and reviewing its data. That takes time, money, know-how, and a willing target. It means that the sample has to be small, and it creates a clash between the objective of having a representative sample and the constraint of working only with willing participants.

For these reasons, benchmarking is a more realistic approach, and I know of at least two successful benchmarking studies in manufacturing, both of which, I believe, were funded by the Sloan Foundation:

  • The first was the International Assembly Plant Study, conducted in the late 1980s on the car industry, whose findings were summarized in The Machine That Changed the World in 1990. The goal was not to identify the distribution of manufacturing practices worldwide but to compare the approaches followed in specific plants of specific companies, for the purpose of learning. Among other things, the term “Lean” came out of this study.
  • The second is the Competitive Semiconductor Manufacturing Program, which started in the early 1990s with a benchmarking study of wafer fabrication facilities worldwide. It did not have the public impact of the car assembly plant study, but it did provide valuable information to industry participants.

The car study was conducted out of MIT; the semiconductor study, out of UC Berkeley. Leadership by prestigious academic institutions helped convince companies to participate and provided students to collect and analyze the data. Consulting firms might have had better expertise, but they could not have been perceived as neutral with respect to the approaches used by the different participants.

The bottom line is that studies based on subjective answers from a self-selected sample are not worth the disk space you can download them onto.