“Management is all about playing the odds. […] In operations, calculate lot sizes, generate forecasts and set quality standards with enough data and increasingly sophisticated algorithms and statistical methods and you will increase the chances of coming close enough. At least that is the theory, and the hope.
This is the basic premise of big data and ERP. With point of sale scanning, RFID, smart phones and all of the other data collecting technologies increasingly in use, the data to feed the engines is more and more available. The potential and the lure of the data driven, analytical approach to finding the center line and getting more decisions closer to correctness is growing.
The other approach is empowered people. Recognizing that management cannot be involved in every one of the individual customer interactions and operational, situational, tiny decisions, those calls are left to the people on the spot. They are expected to rely on their knowledge, understanding of company values and goals, and the information available to them in very real time to decide what to do.[…] The basic question is whether empowered people will get it right more often than big computer.”
In this article, Bill Waddell presents the data-driven approach to management decision making as incompatible with people empowerment. I do not see the two as mutually exclusive.
In 1993, there was a group within Toyota’s logistics organization in the US that, based on weather data, thought that the Mississippi might flood the railroad routes used to ship parts from the Midwest to the NUMMI plant in California. Four days before the flood, they reserved all the trucking available in the Chicago area, for the daily cost of 6 minutes of production at NUMMI. When the flood hit, they were able to ship the parts by truck around the flood zone, and NUMMI didn’t miss a beat.
This is what a good data scientist does.
In Numbersense, Kaiser Fung points out that data analysis is not just about the data but also about the assumptions people make about it. As an example, he cites the Republican polling fiasco of the 2012 election, which he attributes to a combination of flawed data collection and equally flawed modeling.
In other words, it’s not a computer that comes up with answers from data, but a human being, and the quality of these answers depends as much on the human analyst’s understanding of the underlying reality as it does on the ability to collect clicks from the web or transactions from point-of-sale systems.
Good data analysis does not require petabytes of data. In statistics, a small sample is 10 points; a large sample, 100 points. The difference matters because, with small samples, you cannot use many convenient approximations; with 100 points, they work well enough.
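As a hypothetical illustration of why the small-sample/large-sample distinction matters (the numbers and simulation are my own, not from the article), the sketch below checks how often the naive normal-approximation confidence interval, mean ± 1.96 standard errors, actually covers the true mean. With 10 points it falls noticeably short of the nominal 95%; with 100 points it comes close.

```python
import math
import random
import statistics

random.seed(42)

def coverage(n, trials=10_000, true_mean=0.0):
    """Fraction of trials in which the naive 95% CI
    (mean +/- 1.96 * s / sqrt(n)) covers the true mean."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
        m = statistics.fmean(sample)
        half_width = 1.96 * statistics.stdev(sample) / math.sqrt(n)
        if abs(m - true_mean) <= half_width:
            hits += 1
    return hits / trials

cov10 = coverage(10)    # noticeably below 0.95: the normal
                        # approximation is not safe at n = 10
cov100 = coverage(100)  # close to the nominal 0.95
print(cov10, cov100)
```

At n = 10 the correct multiplier would be the t-distribution's 2.262, not the normal's 1.96, which is exactly the kind of approximation the small-sample case forbids.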
With millions of points, the tiniest wiggle in your data will show overwhelming significance in any statistical test, which means that these tests are of little use in that context. To figure out what a tiny wiggle is telling you about reality, however, you still need to understand the world the data comes from.
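A minimal sketch of this point, with invented numbers: a mean shift of 0.01 on a noise scale of 1.0 is practically invisible, yet across a million points it produces a z statistic far beyond any conventional significance threshold.

```python
import math
import random
import statistics

random.seed(0)

# Invented example: a "tiny wiggle" -- a mean shift of 0.01 on a
# noise scale of 1.0 -- observed across a million data points.
n = 1_000_000
data = [random.gauss(0.01, 1.0) for _ in range(n)]

mean = statistics.fmean(data)
z = mean / (statistics.stdev(data) / math.sqrt(n))

print(f"sample mean ~ {mean:.3f}")  # negligible in practical terms
print(f"z statistic ~ {z:.1f}")     # overwhelmingly "significant" anyway
```

The test says the wiggle is real; only someone who understands the underlying process can say whether it matters.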
I don’t see an opposition between relying on people and relying on data, because, whether you realize it or not, you are never relying on data, only on people’s ability to make sense of it.
See on www.idatix.com