On 23 December 1924, a group of leading international businessmen gathered in Geneva […]. Present were top representatives from all the major lightbulb manufacturers, including Germany’s Osram, the Netherlands’ Philips, France’s Compagnie des Lampes, and the United States’ General Electric. […] the group founded the Phoebus cartel, a supervisory body that would carve up the worldwide incandescent lightbulb market, with each national and regional zone assigned its own manufacturers and production quotas. It was the first cartel in history to enjoy a truly global reach.
The cartel’s grip on the lightbulb market lasted only into the 1930s. Its far more enduring legacy was to engineer a shorter life span for the incandescent lightbulb. By early 1925, this became codified at 1,000 hours for a pear-shaped household bulb, a marked reduction from the 1,500 to 2,000 hours that had previously been common. Cartel members rationalized this approach as a trade-off: Their lightbulbs were of a higher quality, more efficient, and brighter burning than other bulbs. They also cost a lot more. Indeed, all evidence points to the cartel’s being motivated by profits and increased sales, not by what was best for the consumer. In carefully crafting a lightbulb with a relatively short life span, the cartel thus hatched the industrial strategy now known as planned obsolescence.
Early in my career, I worked with an older engineer who told me that his first professional experience had been in the reliability department of a large, US appliance maker, where his job was to change product designs to make them fail as soon as the warranties expired.
I had heard of such efforts before, but had found the accounts difficult to believe. How could companies spend money to deliberately lower the quality of their products? But this was the testimony of a man I trusted who had personally done it, and hated it.
It was malicious, and it was corporate hubris at its worst. It created opportunities for competitors, which they eventually took. When we were having this conversation, my colleague also told me that the manufacturer was no longer in business.
This article from IEEE substantiates another story of market dysfunction that I had heard of but was not sure was true: the manufacturers of incandescent light bulbs conspired to reduce the lives of the bulbs.
The article gives dates, names, and places. An organization called the Phoebus cartel was set up in Geneva in 1924 by the leading lightbulb manufacturers in the US, Germany, the Netherlands, France, and Japan for the purpose of shortening bulb lives from 1,500 to 2,000 hours down to 1,000 hours.
Now that the incandescent lightbulb itself is becoming obsolete, how do we prevent LED manufacturers from pulling the same stunt?
It should be noted also that designing products to fail quickly is only one form of planned obsolescence. A less nefarious one is simply introducing regular product updates to make today’s cool product lame tomorrow. iPhones last much longer than one year. An iPhone 3 may still work today, particularly on its original operating system, but has been made unattractive by five new product releases. In IT in general, you don’t have to play along and can save by buying last year’s products.
“That was random!” is my younger son’s response to the many things I say that sound strange to him. My computer, meanwhile, has Random Access Memory (RAM), meaning that access to all memory locations is equally fast, as opposed to sequential access, as on a tape, where you have to go through a sequence of locations to reach the one you want.
In this sense, a side-loading truck provides random access to its load, while a back-loading truck provides sequential access.
While these uses of random are common, they have nothing to do with probability or statistics, and that’s no problem as long as the context is clear. In discussions of quality management or production control, on the other hand, randomness is connected with the application of models from probability and statistics, and failing to understand it as a technical term leads to mistakes.
In factories, the only example I ever saw of Control Charts used as recommended in the literature was in a ceramics plant that was firing thin rectangular plates for use as electronic substrates in batches of 5,000 in a tunnel kiln. They took dimensional measurements on plates prior to firing, as a control on the stamping machine used to cut them, and they made adjustments to the machine settings if control limits were crossed. They did not measure every one of the 5,000 plates on a wagon. The operator explained to us that he took measurements on a “random sample.”
“And how do you take random samples?” I asked.
“Oh! I just pick here and there,” the operator said, pointing to a kiln wagon.
That was the end of the conversation. One of the first things I remember learning when studying statistics was that picking “here and there” did not generate a random sample. A random sample is one in which every unit in the population has an equal probability of being selected, and it doesn’t happen with humans acting arbitrarily.
A common human pattern, for example, is to refrain from picking two neighboring units in succession. A true random sampler does not know where the previous pick took place and selects the unit next to it with the same probability as any other. This is done by having a system select a location based on a random number generator, and direct the operator to it.
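A minimal sketch of such a system, assuming the plates are addressed by row and column on the kiln wagon (the wagon layout and sample size here are illustrative assumptions, not details from the plant):

```python
import random

def random_sample_locations(rows, cols, sample_size, seed=None):
    """Select plate locations so that every position on the wagon
    has an equal probability of being chosen -- unlike a human
    picking 'here and there'."""
    rng = random.Random(seed)
    all_locations = [(r, c) for r in range(rows) for c in range(cols)]
    # sample() draws without replacement, uniformly over all positions,
    # so two neighboring plates can perfectly well both be selected
    return rng.sample(all_locations, sample_size)

# Direct the operator to, say, 20 plates on a 50 x 100 wagon
picks = random_sample_locations(rows=50, cols=100, sample_size=20, seed=42)
```

The system then displays the selected coordinates to the operator, who measures exactly those plates and no others.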
This meaning of the word “random” does not carry over to other uses even in probability theory. A mistake that is frequently encountered in discussions of quality is the idea that a random variable is one for which all values are equally likely. What makes a variable random is that probabilities can be attached to values or sets of values in some fashion; it does not have to be uniform. One value can have a 90% probability while all other values share the remaining 10%, and it is still a random variable.
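The 90%/10% case above can be simulated directly; the values and weights below are just an illustration of the paragraph, not data from any real process:

```python
import random

rng = random.Random(0)
values = ["pass", "minor defect", "major defect"]
weights = [0.90, 0.07, 0.03]  # non-uniform, yet still a random variable

draws = rng.choices(values, weights=weights, k=10_000)
share_pass = draws.count("pass") / len(draws)
# share_pass comes out close to 0.90, not to 1/3:
# a random variable need not make all values equally likely
```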
When you say of a phenomenon that it is random, technically, it means that it is amenable to modeling using probability theory. Some real phenomena do not need it, because they are deterministic: you insert the key into the lock and it opens, or you turn on a kettle and you have boiling water. Based on your input, you know what the outcome will be. There is no need to consider multiple outcomes and assign them probabilities.
There are other phenomena that vary so much, or on which you know so little, that you can’t use probability theory. They are called by a variety of names; I use uncertain. Earthquakes, financial crises, or wars can be generically expected to happen but cannot be specifically predicted. You apply earthquake engineering to construction in Japan or California, but you don’t leave Fukushima or San Francisco based on a prediction that an earthquake will hit tomorrow, because no one knows how to make such a prediction.
Between the two extremes of deterministic and uncertain phenomena is the domain of randomness, where you can apply probabilistic models to estimate the most likely outcome, predict a range of outcomes, or detect when a system has shifted. It includes fluctuations in the critical dimensions of a product or in its daily demand.
The boundaries between the deterministic, random and uncertain domains are fuzzy. Which perspective you apply to a particular phenomenon is a judgement call, and depends on your needs. According to Nate Silver, over the past 20 years, daily weather has transitioned from uncertain to random, and forecasters could give you accurate probabilities that it will rain today. On the air, they overstate the probability of rain, because a wrong rain forecast elicits fewer viewer complaints than a wrong fair weather forecast. In manufacturing, the length of a rod is deterministic from the assembler’s point of view but random from the perspective of an engineer trying to improve the capability of a cutting machine.
This categorization suggests that a phenomenon that is almost deterministic is, in some way, “less random” than one that is near uncertainty. But we need a metric of randomness to give a meaning to an expression like “less random.” Shannon’s entropy does the job. It is not defined for every probabilistic model but, where you can calculate it, it works. It is zero for a deterministic phenomenon, and rises to a maximum where all outcomes are equally likely. This brings us back to random sampling. We could more accurately call it “maximum randomness sampling” or “maximum entropy sampling,” but it would take too long.
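The behavior of entropy across these cases is easy to verify numerically; the distributions below are the ones discussed above (one certain outcome, one value at 90% with the rest sharing 10%, and a uniform distribution):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits; zero-probability terms contribute
    nothing by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

deterministic = [1.0]                 # one certain outcome
skewed = [0.9] + [0.1 / 9] * 9        # one value at 90%, nine share 10%
uniform = [0.1] * 10                  # all ten outcomes equally likely

shannon_entropy(deterministic)  # 0.0 bits
shannon_entropy(uniform)        # log2(10), about 3.32 bits: the maximum
# shannon_entropy(skewed) falls strictly between the two
```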
Blog post at Lean Blog: “Today is the start of the 2014 World Cup, which means much of the world will be talking about goals. I’m not really a soccer, I mean football, fan but I’m all for goals. In the Lean management system, we generally have five high-level goals. These were the goals taught to us in the auto industry, where I started my career, and they apply in healthcare.”
As I learned it, it was “Quality, Cost, Delivery, Safety, and Morale” (QCDSM) rather than SQDCM. I am not sure the order matters that much. The rationale for grouping Quality, Cost, and Delivery is that they matter to customers, while Safety and Morale are internal issues of your organization, visible to customers only to the extent that they affect the other three.
They are actually dimensions of performance rather than goals. “Safety,” by itself, is not a goal; operating the safest plants in your industry is a goal. In management as taught in school, if you set this goal, you have to be able to assess how far you are from it and to tell when you have reached it. It means translating this goal into objectives that are quantified in metrics.
In this spirit, you decide to track, say, the number of consecutive days without lost time accidents, and the game begins. First, minor cuts and bruises, or repetitive stress, don’t count because they don’t result in the victims taking time off. Then, when a sleeve snagged by a machine pulls an operator’s hand into molten aluminum, the victim is blamed for hurting the plant’s performance.
Similar stories can be told about Quality, Cost, Delivery and Morale, and the recent scandal in the US Veterans’ Administration hospitals shows how far managers will go to fix their metrics.
To avoid this, you need to reduce metrics to their proper role of providing information and possibly generating alarms. In health care, you may measure patients’ temperature to detect an outbreak of fever, but you don’t measure doctors by their ability to keep the temperature of their patients under 102°F, with sanctions if they fail.
Likewise, on a production shop floor, the occurrence of incidents is a signal that you need to act. Then you improve safety by eliminating risks like oil on the floor, frayed cables, sharp corners on machines, unmarked transportation aisles, or inappropriate motions in operator jobs. You don’t make the workplace safer by just rating managers based on metrics.
In summary, I don’t see anything wrong with SQDCM as a list. It covers all the dimensions of performance that you need to worry about in manufacturing operations, as well as many service operations. Mark uses it in health care, but it appears equally relevant in, say, car rental or restaurants. I don’t see it as universal, in that I don’t think it is sufficient in, for example, research and development.
And, in practice, focusing on SQDCM easily degenerates into a metrics game.
See on www.leanblog.org
Blog post at Lean Blog: “[…] I’ve been in healthcare for 8.5 years now, but at the start of my career, I was an entry-level industrial engineer at the GM Powertrain Livonia Engine plant from June 1995 to May 1997. This plant was in my hometown, Livonia, Michigan and was located exactly 1.3 miles from the house where I grew up. The factory opened in 1971, two years before I was born. The factory closed in 2010 due to the GM bankruptcy and sits empty today as part of the ‘rust belt’ […]”
About a decade before Mark, I spent time implementing scheduling systems in GM plants, and my memories, while not great, are less gloomy than Mark’s. My main project was at the GM aluminum foundry in Bedford, IN which is still open today, unlike the Livonia plant where Mark worked.
I remember being impressed by the depth of automotive and manufacturing knowledge of the GM engineers and managers; I also remember them as unable to implement any of their ideas, because it was dangerous to be perceived as someone who makes waves. They had no need for the scheduling system, but it was a corporate decision to deploy it in 150 plants, and they just had to get along.
The company culture was dysfunctional — particularly in quality, safety, and improvement — but the plant was in a small town where the employees all knew each other and worked to make a go of it as best they could. And, they are still around.
I have since experienced a radically different quality culture in another car company. The quality manager in a parts plant once noticed that defectives had been shipped to final assembly. The parts had been machined so well that they didn’t leak at final test even though they were missing a gasket.
The quality manager — who told me the story — felt that he had to do whatever it took to prevent the cars being shipped with the defective parts. What it did take was driving two hours to the assembly plant at night, locating the finished cars with the defective parts in the shipping yard, and removing their keys.
See on www.leanblog.org
“This webcast shows an interesting interview with Steve Jobs when he was with NeXT computer. He discusses quality, business and the experience of working with Dr. Juran at NeXT computer. The video is likely from around 1991.”
The interview starts slowly, with Jobs collecting his thoughts before speaking, and it was not supposed to be about Juran. Jobs is the one who brings up Juran in response to a question about quality.
At first, he reverently calls him “Dr. Juran” — Juran was not a PhD — and then, affectionately, “Joe Juran.” Steve Jobs as the respectful disciple is something I had not seen before. What was he so impressed with? Here are a few things I picked up in the video:
- While focused on quality, Juran did not see it as more than it was. It is about making good products and services; it is not a philosophy of life.
- For all his accomplishments, Juran remained simple. He treated everybody alike, and answered every question put to him as if it were the most important in the world.
- Juran was “driven by his heart” to share what he had learned and found out in decades of work.
Towards the end of the video, the 30-year-old Jobs sounds more and more as if he were setting a role model for himself. But Juran lived to be 103; Jobs died at 56, only three years after Juran, and did not get the chance.
“Every time you set a standard, you create possibilities for deviations, and the need to respond,” David Meier said. The setting was a presentation on problem-solving at a corporate in-house conference. It struck me as a concise statement of why managers should issue standards only where clearly and unquestionably useful.
Unnecessary standards were discussed in an earlier post, but the topic is worth revisiting in greater detail, first because there are so many of them in companies, making work life more complex than it has to be, and second, to elaborate on their impact on the organization.
Standards are rules set for others to follow, and rarely welcome, particularly when these others are thereby required to change behaviors they consider rational and appropriate, and sometimes more advanced than the standard. If you don’t enforce your unnecessary standard, you hurt the credibility of all your standards, including the useful ones; if you do, you turn into Captain Queeg, as he “tried to run the ship properly, by the book.”
The reception desk in the lobby of the building where the conference took place had a holder full of pens for visitors to sign in. No two pens were alike. A Captain Queeg would have had none of that; he would have mandated a model of pen and a color of ink, and set up regular audits to monitor compliance.
The example David gave was of office 5S zealots in Germany who had marked a location for a computer mouse on an individual’s desk and written him up for not complying. Last year, Mark Graban had posted a similar example of “5S” at a desk in the UK, shown on the right.
It reminded me of my experience of working in a Japanese office in the 1980s. It was a room with about 30 gray metal desks arranged in facing rows without any partitions. Everywhere else I have worked, each desk had its own supplies of staplers, staple-removers, scissors, glue sticks, etc., but it was not so in that office. These were shared resources, stored in a transparent plastic chest in the center of the room, with a label for each compartment.
This arrangement sounds right out of a Staples commercial, but that was the way it was. What struck me about it, however, was that the sharing created the need for labeled locations and for the discipline to return the items to assigned locations after use. This approach might make sense in offices used in hotelling mode. Everywhere else, however, each office worker has a dedicated desk that comes with a set of tools, that the employee organizes as he or she sees fit.
In the 21st century, the tidiness of desks does not have much to do with the performance of an office. What really makes a difference is the way information is organized on its computer network, and that is not visible when you walk through the office. But effective ways to do this are a whole other discussion. In factories, 5S in the offices is sometimes justified “to show solidarity with the shop floor.” It has been suggested to me that a better way to show solidarity might be to make the offices as hot, smelly and grimy as the shop floor.
Sometimes, the consulting companies that guide 5S implementation in client offices do not practice it in their own. In one particular firm, as consultants were in the field most of the time, they had no permanent desk, and grabbed an available one when they happened to be in town. With such a system, you might have expected the rooms to be numbered, and to have a magnetic board at the entrance with a token for each consultant present, to mark on a map of the facility where he or she could be found, but the managers felt that such a board would have been “ugly.” They never told me why they didn’t number the rooms. To locate a consultant you had to call his or her cell phone, and then follow instructions like “go left at the top of the stairs and it’s the third door to the right.”
Besides the size of steel balls at the end of motorcycle brake handles and company email addresses listed in my earlier post, there are many other things that are better off not standardized, and prominent ones include the analytical tools used in problem-solving.
The institutions of the quality profession in the US still promote as a standard the 80-year-old tools of SPC, as if the art of collecting and analyzing data had not evolved since 1930. These tools are obsolete, but it would be a mistake to replace them with another set as a new standard.
There should be no standard. The professionals who work on improving quality should learn to use a broad variety of tools, and should be free to use whichever they think may help in their current circumstances. And they should always be on the lookout for new approaches and new technology.
Likewise, Value Stream Mapping (VSM) has, in the past 15 years, been elevated in the US to the position of a standard that all Lean practitioners are mandated to use. The need to map flows of materials and information in a manufacturing supply chain is often real, but there is no need for it to be done in a specific format with a specific set of symbols.
In fact, what happens in both situations is that formal compliance with the standard rapidly becomes the only criterion against which the outputs of quality or flow improvement teams are reviewed. The tools, their appropriate use, and their range of applicability are understood neither by the teams mandated to use them nor by the auditors who make sure they do, and the result is wallpaper.
I see three types of what Hela Hassine calls “actions”:
- For some, you can do a complete discounted cash flow analysis before implementing. Cellularizing a job-shop falls into this category.
- For others, you cannot calculate the benefits ahead of time, but you can measure them afterwards. When you improve quality, first you can’t tell ahead of time by how much it will actually improve, and second, you can’t tell how much good this improvement will do to your business. After you have improved quality, you know by how much, and you can also measure the market impact of the improved quality, which is its dominant benefit. There is no way you can justify quality improvement ahead of time through cost-of-quality analysis.
- For the rest, the benefits are too diffuse to be measurable. 5S falls into this category.
This has obvious consequences for implementation sequencing that are often overlooked. Projects that lend themselves to a-priori justification are easiest to sell to management, and success in such projects gives you the credibility you need to undertake others with less tangible benefits. In other words, you are better off starting with cells than with 5S.
What I propose to call an “Orbit Chart” is rarely used in manufacturing today, and I think it should be. This chart tracks the path followed by a system or an object in a plane where the coordinates are two of its characteristics. In earlier applications, these were geographical coordinates; in current ones, they may be the numerator and denominator of a financial ratio like Return On Net Assets (RONA), metrics of productivity and quality, or physical characteristics, like the depth and diameter of a drilled hole.
Most data visualizations in manufacturing are limited to simple charts that show how one parameter varies, by category in bar graphs or continuously as a function of time in line plots. Charts that show more dimensions require more effort both to generate and to read, but reveal information that you would not otherwise find.
For details, click on the following:
- What is an “orbit chart”?
- Examples of orbit charts
- Minard’s Russia campaign chart
- Unplanned versus planned downtime in nuclear power plants
- GINI index versus GDP in Brazil and the US, 1980-2011
- Populations of predators and preys
- Orbit charts for spacecraft
- Parker Hannifin’s “North-by-Northwest” chart
- Recovery from crisis at Toyota and GM
- Janet Yellen’s orbit chart of inflation versus unemployment
- Lora Cecere’s orbit charts of inventory turns versus operating margins for Colgate, P&G, Walmart, and Target
When you have an object moving in a plane, you can separately plot each coordinate against time, but the juxtaposition of the two plots would not show you the path followed in two dimensions. When you want to study a possible relationship between two parameters from a table of values where both are measured on the same objects, you usually start by generating a scatter plot, from which you try to infer some form of correlation between the two. When you do this, however, you lose the sequence information. When you consider two neighboring points on your scatter plot, you have no way to tell whether they are measurements on two units of product made consecutively or with many others in between.
On an orbit chart, you connect the points that succeed each other in the sequence, and label each point with its sequence number. As a result, what you plot is no longer a cloud of points, but a path followed by your object in the plane of your two parameters. On a road trip, the sequence of your locations at the end of each day are not independent: where you are tonight is where you were last night plus today’s increment. In machining with a tool that wears out, its condition after the 50th workpiece is what it was after the 49th, plus the effect of the 50th. The orbit chart is a visualization tool for this kind of phenomenon.
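The only thing an orbit chart adds to a scatter plot is this ordering information. A minimal sketch, with made-up measurements, of how the path segments and sequence labels are built before handing them to any plotting library:

```python
def orbit_path(points):
    """Turn an ordered list of (x, y) measurements into the segments
    and sequence labels of an orbit chart. A scatter plot would keep
    the points but discard their order."""
    segments = list(zip(points, points[1:]))  # connect consecutive points
    labels = {i + 1: p for i, p in enumerate(points)}
    return segments, labels

# Hypothetical (productivity, first-pass yield) readings, in sequence
readings = [(40, 0.91), (42, 0.89), (45, 0.93), (44, 0.95)]
segments, labels = orbit_path(readings)
# segments[0] joins reading 1 to reading 2: ((40, 0.91), (42, 0.89))
```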
Let us assume that you are plotting the quality performance of a production line, as represented by its first-pass yield, against its productivity in terms of units/operator/shift. If you are practicing management whack-a-mole, you improve quality at the expense of productivity, by adding inspections and rework, or productivity at the expense of quality, by pressuring operators to cut corners. In this case, you can expect the orbit of your production line in the Quality versus Productivity plane to be a cycle, looking somewhat like a figure eight, and showing no real improvement.
On the other hand, if you are practicing Lean, quality and productivity improve together, resulting in a chart that does not loop.
Plotting an orbit, of course, is not always meaningful. In the following example, we show a scatterplot of two parameters measured on a sequence of independent events. The scatterplot lends itself to correlation and regression analysis of the two parameters, but tracing the path of the values in the sequence of points as on the right is meaningless.
The examples below are not from manufacturing but from military history, nuclear power plant maintenance, macro-economics, and ecosystems analysis. 25 years ago, radar charts were used in magazines like Britain’s The Economist, to compare parameters like inflation, unemployment and growth in multiple countries, but were unknown in factories, where they now are commonplace. Such may be the fate of orbit charts as well. The caveat is that, like radar charts, they are richer and more sophisticated than the usual charts you find on performance boards, and may be difficult for operators to relate to. As a result, they may be more useful as analytical tools for engineers and managers than as communication tools on the shop floor. The only way to find out is to try.
Perhaps the best known example of an orbit chart is Minard’s map showing the path of Napoleon’s army in 1812 Russia, on the offensive in brown, and retreating in black, with the thickness of the line showing the size of the army. It is annotated with dates, and with a temperature chart below. It was drawn in 1869, and Edward Tufte brought it to the attention of readers in The Visual Display of Quantitative Information, as the most eloquent summary ever written about this disastrous campaign, and the mother of all infographics. Please click on the picture if you would like to enlarge it and read the text, knowing that it is in French. Tufte calls it a “narrative of space and time.” I prefer to call it an “orbit chart” because (1) it is a shorter name and (2) the x and y coordinates do not necessarily represent space, nor is the index of the points always time. In manufacturing, for example, x and y could be quality characteristics, indexed by serial numbers rather than time.
I used the orbit chart below on p. 282 of Working with Machines, to compare the maintenance performance of the Japanese and French nuclear industries in the 1980s:
Jean-Pierre Mercier, from the French electrical utility EDF, published this chart to compare the evolution of nuclear reactor downtime in Japan and France. Each node on each of the orbits indicates performance in a given year, in terms of planned unavailability on the y-axis, and unplanned unavailability on the x-axis, and the sum of the two gives the total unavailability, so that diagonal lines indicate a constant total value. The orbits enable us to track year-by-year progress, and improvement is marked on the chart by movement towards the origin. The Japanese and the French orbits both show improvement over the years but are so different that they do not even intersect, which raises the question of why. What did the Japanese and French industries do so differently that it produced such radically different results? Once the chart prompts you to ask the question, it is easily answered:
- France has one national utility company, with reactors of just two designs all made by the same supplier. This company reduced downtime by redesigning components or subsystems that failed in one reactor, and retrofitting the changes to all sites of the same design.
- Japan has nine private utilities and reactors with many different designs, which made the French approach impossible. Instead, they overdid preventive maintenance in the beginning, and gradually improved it, eventually achieving almost the same performance as the French utility.
This was back in 1990. Fast forward to 2013, and I am reading a new book by Alberto Cairo, called The Functional Art, about the design of information graphics and visualization. On p. XIX of the Introduction, I find the following chart of the orbit of Brazil’s economy in terms of GDP on the x-axis and the GINI index of inequality on the y-axis through five presidential administrations. As explained on Wikipedia, the GINI index is a cleverly defined ratio, which is 0 if every member of a society has an equal share of its wealth, and 100 if it is all in the hands of a single individual. Worldwide, among the countries for which data is available, Sweden’s GINI index of 23 is the lowest, while South Africa and Lesotho have the highest, at 62. The US has gone from a low of 38 in the late 1940s to 47.7 in 2010. The GDP is a better known metric, and is here shown as evaluated in constant US$ by institutions external to Brazil, like the World Bank and the IMF.
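The definition above can be computed directly from a list of incomes; this is a sketch of the mean-absolute-difference form of the index, scaled to 0-100 as on the chart (official statistics agencies use more elaborate estimators on grouped data, so this is illustrative only):

```python
def gini_index(incomes):
    """Gini index on a 0-100 scale: 0 when all shares are equal,
    approaching 100 when one individual holds everything."""
    n = len(incomes)
    mean = sum(incomes) / n
    # mean absolute difference over all ordered pairs of individuals
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return 100 * mad / (2 * mean)

gini_index([1, 1, 1, 1])   # 0.0: perfect equality
gini_index([0, 0, 0, 1])   # 75.0: one of four people holds all the wealth
```

Note that with a finite sample of n individuals the maximum attainable value is 100 × (n − 1)/n, which is why the second example yields 75 rather than 100.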
Assuming the underlying Brazilian economic statistics are credible, this chart tells quite a story, from the peak of inequality with low growth during hyperinflation under Sarney to sustained growth with steady reductions in inequality under Lula, the only pause in growth coming with the financial crisis of 2008. Would we see anything similar from plotting the same chart for the US economy? I tried, and the result is as follows: What does this chart tell us? The first obvious conclusions are:
- Regardless of economic circumstances or the political affiliation of the president, inequality has steadily increased in the US for over 30 years.
- We must always be wary of highly aggregated numbers. The US census bureau warns us that the formula for calculating the GINI index was changed for 1993, and that before-and-after comparisons are therefore not meaningful. On the face of the charts, it appears that even the high point of 47.7 for the US is lower than the low point of 53.8 for Brazil, but we would have to assume that the numbers are calculated the same way, which is doubtful. The formula was changed in the US; it may be different in Brazil and, unbeknownst to us, it may have been changed as well along the way.
- We can see, at the Bush(43)/Obama juncture, that the crisis of 2008 had a bigger impact on the US than on Brazil.
- While still roughly one sixth the size of the US economy, Brazil’s is growing faster. Back in 1980, it was less than one tenth.
For a long time, orbit charts have been used in population dynamics, with the x-axis being a prey population and the y-axis a predator population. Orbit charts can represent both theoretical models and actual data, when these are available. The following example, from a course taught at Portland State University, shows a simulation starting at the bottom right-hand side with a large prey population and few predators. This stimulates population growth for the predators, which depletes the prey population. This leads to a food shortage for the predators, causing their population to collapse, which in turn gives the prey the opportunity to multiply again…
Eventually, the two populations spiral down not to a stable point, but to a repeating loop called a limit cycle.
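Such predator-prey cycles are usually modeled with the Lotka-Volterra equations; a minimal Euler-integration sketch follows, where all coefficients and starting populations are illustrative assumptions, not values from the Portland State course:

```python
def lotka_volterra(prey0, pred0, steps=5000, dt=0.01,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Simulate prey x and predator y populations:
       dx/dt =  a*x - b*x*y   (prey grow, get eaten)
       dy/dt = -c*y + d*x*y   (predators starve, feed on prey)"""
    xs, ys = [prey0], [pred0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + dt * (a * x - b * x * y))
        ys.append(y + dt * (-c * y + d * x * y))
    return xs, ys

# Start bottom-right: many prey, few predators
prey, predators = lotka_volterra(prey0=40, pred0=5)
# Plotting predators against prey, point by point, traces the closed loop
```

Plotting `predators` against `prey` rather than each against time is exactly the scatter-to-orbit switch discussed above.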
Parker Hannifin is a diversified manufacturing company that included the following orbit chart in its 2012 annual report:
Given the type of publication, the axes are unlabeled. Internally, this chart is also generated by division and monitored by General Managers. The straight black line represents constant RONA (Return on Net Assets), and the desired movement is upward and orthogonal to it, hence the nickname of “North-by-Northwest chart” that managers have given it. When it was first introduced a decade ago, it was not immediately understood, but it has since taken root in the organization. I would connect the dots and annotate it as follows:
We can also see on this chart that, while the RONA improvements of 2010 and 2011 involved movement along both axes, in 2012, it was only a reduction in Net Assets/Sales, which is no doubt meaningful to someone familiar with the company’s operations.
The term “orbit,” of course, comes from astronomy, and I found a great example of an orbit chart, drawn by NASA’s George Resteck, of the paths of the Pioneer 10 and 11 probes sent to explore the solar system in the early 1970s. As we can see on the chart, Pioneer 10 “only” managed to fly by Jupiter; Pioneer 11 flew by both Jupiter and Saturn, but crossed the other planets’ orbits far from where they were.
Only Voyager 2, launched five years later, managed to get close to all four giant planets: Jupiter, Saturn, Uranus, and Neptune. The closest I could find to a similar chart for Voyagers 1 and 2 is as follows:
It is the same general idea, but with less detail, particularly on planet positions over time, and less information on scale. You can also find animations of their paths, but Resteck’s chart for Pioneer 10 and 11 still gives you the most information at a glance.
The following orbit charts of Toyota’s and GM’s profitability as a function of the number of vehicles produced, each in recovery from a crisis, were included in a previous post:
The first shows GM through the growth of the twenties and the Great Depression; the second, Toyota through its 2001-2008 boom, followed by the financial crisis, the mass recalls of 2010, and the Fukushima earthquake and Thailand floods of 2011. Together, they also show how the economics of the auto industry have changed in 80 years. In good times, today’s mature automobile industry yields profit margins that are barely a third of what they used to be, on volumes that are many times higher. In the worst year of the Great Depression, 1932, GM made only 28% as many vehicles as in 1929. If the worst of the current crisis was in 2009-2010, Toyota’s drop in volume, while similar in absolute terms to GM’s in the Great Depression, was much smaller in relative terms, at barely 15% off the 2008 peak.
The Toyota chart further shows three distinct periods:
- From 2001 to 2004, profit margins and volume rose together, suggesting that Toyota was enjoying some form of economies of scale.
- From 2004 to 2008, volume kept rising rapidly, but profit margins were flat. Toyota was criticized during that time for pursuing faster growth than it could manage.
- From 2008, the dominant effect is the financial crisis and recovery, with the 2010 recalls further reducing volume.
David J (Joe) Armstrong pointed out to me an article from the New York Times on 10/9/2013 about Janet Yellen’s analysis of inflation and unemployment in the US since the Great Depression, using orbit charts and animations. The overall chart is as follows:
See the article for animations and explanations of the different segments.
Lora Cecere’s orbit charts of inventory turns versus operating margins for Colgate, P&G, Walmart, and Target
Read the article for Lora Cecere’s analysis of the significance of these charts. My first question about them is whether these companies compute the plotted parameters in the same way. Only if we can be sure that they do can we venture conclusions from the charts. No units are given on the operating-margin axis; given that it is a ratio, I assume “0.06” means 6%.
The key point is that, in 12 years, none of these companies moved towards the best scenario of a high number of inventory turns and a high operating margin. In the case of Walmart versus Target, they seem to have pursued “orthogonal” strategies, with Walmart moving up and down in inventory turns without any large gains in margin, while Target moved back and forth in margin without major changes in inventory turns.
I think there are many factors that these charts don’t show, such as the impact of the competitive environment and the financial crisis. While neither manufacturers like P&G and Colgate nor retailers like Walmart or Target have shown much progress, the impact of inventory on their operations is quite different. Overall, the manufacturing companies buy materials and generate their margins from selling finished goods; they may have some items on consignment, but only a minority.
By contrast, if I understand how very large retailers work, their income does not come from markups on products but from the float in accounts payable. They pay suppliers a month or two after collecting revenues from selling to consumers, which adds up to a very large float, and the companies’ profits come from the income generated by this float in financial markets. In essence, their inventory is on consignment and their holding costs are nothing like the manufacturers’. They still have the costs of operating warehouses, but their own money is not tied up in materials paid upfront. On the other hand, their margins are sensitive to the health of the financial markets.
No matter how great orbit charts may be, not many people in manufacturing will use them unless they are easy to generate with Excel. Generating the orbit itself is not a problem. With Excel 2007 or 2010, all you have to do is, under the Insert tab, select Scatter and click one of the two options for Scatter with lines and markers. Then you can use the various formatting options to refine the axes, gridlines, etc. Most of the charts in this post were generated this way.
As you can see in the following example, Excel does not mind the curve looping and spiraling:
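If you are not tied to Excel, the same connected scatter takes only a few lines in Python with matplotlib. The data below are made up for illustration, standing in for, say, inventory turns versus operating margin:

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt

# Made-up (x, y) pairs, one per year.
x = [4.1, 4.5, 5.0, 4.8, 4.3, 4.6, 5.2]
y = [0.05, 0.055, 0.06, 0.052, 0.045, 0.05, 0.058]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o")      # lines plus markers give the orbit
ax.set_xlabel("Inventory turns")
ax.set_ylabel("Operating margin")
fig.savefig("orbit.png")
```

As in Excel, the looping and spiraling of the curve needs no special handling; the points are simply connected in the order they appear in the data.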
Labeling the points on the charts is trickier. If there are few enough, you can manually add text boxes on the chart, which is what I did, but it would not work for thousands of points. In fact, for any large number of points, your only options are:
- Labeling every n-th point.
- Making labels pop up next to a point when you hover on it or click on it.
The labeling option in Excel charts will display the numeric values of x and y, or the name of the data series, next to each point, which tells you nothing you don’t already know. What you want is to label each point with its index in the data table so that, when the index is a time, you know when each point was generated. Excel won’t do it, but Microsoft Support provides a macro with which you can, with the result as follows:
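Outside Excel, no macro is needed: labeling every n-th point with its index is one loop in matplotlib. The coordinates and year range below are invented for the sketch:

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt

years = list(range(2001, 2013))                   # hypothetical index column
x = [i + 0.3 * ((-1) ** i) for i in range(12)]    # made-up coordinates
y = [0.04 + 0.002 * i for i in range(12)]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o")
n = 2                                             # label every n-th point
for i in range(0, len(years), n):
    ax.annotate(str(years[i]), (x[i], y[i]),
                xytext=(5, 5), textcoords="offset points")
fig.savefig("orbit_labeled.png")
```

Setting n to 1 labels every point; for thousands of points, a larger n keeps the chart readable.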
In Alberto Cairo’s chart, the successive presidencies of Brazil were marked by different colors. In Excel, you could achieve this effect by having a separate data series for each presidency, which you could then color as you wish.
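In matplotlib, the equivalent of Excel’s separate data series is one plot call per period. The period names and points below are invented; note the one-point overlap between consecutive periods so that the colored segments connect:

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical orbit, split into periods the way Alberto Cairo
# colors presidencies: one plotted series per period.
points = [(1, 2), (2, 3), (3, 3), (4, 2), (5, 4), (6, 5)]
periods = {"Period A": points[0:3],
           "Period B": points[2:6]}   # share one point so segments join

fig, ax = plt.subplots()
for name, pts in periods.items():
    xs, ys = zip(*pts)
    ax.plot(xs, ys, marker="o", label=name)   # each series gets its own color
ax.legend()
fig.savefig("orbit_periods.png")
```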
Visualizing two dimensions of the evolution over time of a group of machines or the output of a production line is an improvement over plotting just one. But what if, instead of two, you have fifty or even ten characteristics of interest?
You cannot see a point with 10 coordinates, but you can use dimensionality reduction techniques to work around this problem. Principal Component Analysis, for example, projects these multidimensional points onto a plane so that the projections contain most of the variability of the full multidimensional cloud. Linda E. Kavraki provides the following illustration of the concept:
The coordinates on this plane are two uncorrelated linear combinations of the original coordinates, called the first and second principal components. You can then plot the orbit of your population in this plane. Technically, it is straightforward, because you will easily find software packages that perform Principal Component Analysis: Minitab does it, and so does the XLSTAT add-in to Excel.
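As a sketch of the idea rather than of any particular package’s menus, the principal components can also be computed directly with NumPy’s singular value decomposition. The 10-dimensional time series below is synthetic, built to drift slowly so that its projection traces an orbit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series: 200 observations of 10 characteristics,
# drifting slowly over time, plus measurement noise.
t = np.linspace(0, 4 * np.pi, 200)
drift = np.column_stack([np.sin(t + k) for k in range(10)])
X = drift + 0.1 * rng.standard_normal((200, 10))

# Principal Component Analysis by singular value decomposition.
Xc = X - X.mean(axis=0)            # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T             # projection onto first two components

# Fraction of total variability captured by each component.
explained = s**2 / np.sum(s**2)
# scores[:, 0] against scores[:, 1], connected in time order, is the orbit.
```

Checking the explained-variance fractions before trusting the chart is good practice: if the first two components capture only a small share of the variability, the planar orbit hides most of what is going on.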
The challenge is making sense of the orbit chart. When you just plot the projection of your cloud of points onto the first two principal components, you may notice a small clump of points off to the side and identify them as outliers. But, when the points are generated over time, following the orbit may not tell you much because the coordinates are linear combinations of the original coordinates with no obvious meaning. The first principal component could well be three times the length of the ship minus half the captain’s age.
“…It’s brand image,” he explained. “There is tremendous value to the perception of trust—customers don’t want to worry about the products they buy; they want them to be trouble-free. We can offer them that….
We all use the same equipment to make tires, so we know it’s not the equipment that makes the difference. It’s the interface between the equipment, the material, and the person—the training and the qualification of the person—that makes the difference.”
I see two key statements in this article, both quoted above:
- The first is an acknowledgement that the company’s reputation for quality is its crown jewel: priceless, worth any burden to nurture and protect, and making the classical “cost of quality” calculations based on the direct costs of failures, appraisals, and repairs irrelevant.
- The second is that the key is the way people work with machines. Selleck does not reference jidoka, but his thinking is in line with it and, unlike the bulk of the American literature on Lean, puts the spotlight on production engineering.
See the full article on www.industryweek.com.