Jul 16 2021
Sales Forecasts – Part 1. Evaluation
When sizing a new factory or production line, or when setting work hours for the next three months, most manufacturers have no choice but to rely on sales forecasts as a basis for decisions.
But how far can you trust sales forecasts? To find out, you use a training set of data to fit a model, and a testing set of actual data observed over the time horizon of interest, following the end of the training period. The training set may, for example, cover five years of data about product sales up to June 30, 2021, and the testing set the actual sales in July 2021.
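The split described above can be sketched as follows. The dates, the daily frequency, and the synthetic sales numbers are assumptions for illustration, not data from the post:

```python
# Hypothetical illustration of the train/test split described above:
# five years of daily sales history up to June 30, 2021, with the
# actual sales of July 2021 held out as the testing set.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2016-07-01", "2021-07-31", freq="D")
sales = pd.Series(rng.poisson(lam=100, size=len(dates)), index=dates)

cutoff = pd.Timestamp("2021-06-30")
training = sales[sales.index <= cutoff]  # fit the forecasting model on this
testing = sales[sales.index > cutoff]    # evaluate its forecasts on this
```

The cutoff date is the only real design choice here: everything up to and including it is history the model may see, everything after it is reserved for judging the forecasts.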
The forecasters’ first concern is to establish how well a method works on the testing set, so that decision makers can rely on it for the future. For this, they need metrics that reflect end results and that end-users of forecasts can understand. You cannot assume that these end-users are up to speed on, or interested in, forecasting technology.
Forecasters also need to compare the performance of different algorithms and to monitor the progress of an algorithm as it “learns,” and only they need to understand the metrics they use for this purpose.
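As a concrete example of metrics that end-users can read directly, here is a minimal sketch of two common point-forecast scores: mean absolute error, in units sold, and mean absolute percentage error. The numbers are made up for illustration, and these are standard metrics, not ones the post specifically endorses:

```python
# Two point-forecast metrics stated in terms end-users understand:
# MAE is "how many units off, on average"; MAPE is the same idea
# expressed as a percentage of actual sales.
actual = [120, 95, 130, 110]     # observed monthly sales (made up)
forecast = [110, 100, 125, 120]  # point forecasts for the same months

mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
mape = 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)
```

A metric like MAPE travels well in management discussions; the scale-free, statistically subtler metrics forecasters use to compare algorithms against each other need only be understood by the forecasters themselves.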
Jul 30 2021
Sales Forecasts – Part 2. More About Evaluation
The lively response to last week’s post on this topic prompted me to dig deeper. First, I take a shot at clarifying the distinction between point forecasts and probability forecasts. Second, I present the idea behind the accuracy metric for probability forecasts that Stefan de Kok recommends as an alternative to the WSPL. Finally, I summarize a few points raised in discussions on LinkedIn and in this blog.
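For readers unfamiliar with the WSPL mentioned above: it is built on the pinball (quantile) loss, the standard way to score a probability forecast expressed as quantiles. The sketch below shows the pinball loss itself, with made-up numbers; it is the baseline being compared against, not de Kok's recommended alternative:

```python
# Pinball (quantile) loss: scores one quantile of a probability forecast.
# An over-forecast and an under-forecast are penalized asymmetrically,
# in proportion to the quantile level tau.
def pinball_loss(actual, predicted_quantile, tau):
    """Score one quantile forecast; tau is the quantile level, e.g. 0.9."""
    if actual >= predicted_quantile:
        return tau * (actual - predicted_quantile)
    return (1 - tau) * (predicted_quantile - actual)

# A 90th-percentile forecast of 150 units against actual sales of 130:
loss = pinball_loss(130, 150, 0.9)  # shortfall penalized at weight 1 - 0.9
```

Averaging this loss over many quantile levels, and weighting and scaling the result, is what turns it into a single accuracy number for an entire predicted distribution rather than for one point forecast.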
All of this is about evaluating forecasts. We still need methods to generate them. There are many well-known, published methods for point forecasts but not for probability forecasts, particularly for sales. This is a topic for another post.
By Michel Baudin • Tools • Tags: Probability Forecast, Sales forecast