
A Critical Look at Measuring and Calculating Forecast Bias


I cannot discuss forecast bias without mentioning MAPE, but since I have written about that topic in the past, this post will concentrate on forecast bias and the forecast bias formula.

What Is Forecast Bias?

Forecast bias is a consistent tendency to either over-forecast (the forecast exceeds the actual) or under-forecast (the forecast falls short of the actual), producing a systematic forecast error.

There are many reasons why such bias exists, including systemic ones, as discussed in a prior forecasting bias discussion. Some core reasons for a forecast bias include:

  1. Optimism bias: I have seen this primarily with sales teams, which tend to have an abundance of confidence in their ability to sell and therefore inflate the forecast.
  2. Sandbagging bias: This is the reverse of the above. I have seen it where well-meaning executives created a system of bonuses for exceeding the forecast, which in turn created a culture of sandbagging.
  3. Anecdote bias: I have seen many instances where, regardless of what the data is telling them, client personnel are wary of it because of a terrible event that happened in the past and has become part of the company folklore. Their forecast is therefore biased by anecdotes.
  4. Recent data bias: This is probably true for all processes where humans are involved: more recent occurrences weigh heavier in our minds. In forecasting, this can create an overreaction to the latest events.
  5. Silly bias: In a study conducted by Amos Tversky and Daniel Kahneman, respondents were shown a number and then asked to estimate the percentage of African countries in the United Nations. On average, the estimates went up when respondents were shown a bigger number and down when they were shown a smaller one. This makes me think a forecast could be swayed by irrelevant numbers seen just before forecasting. For example, what if the forecaster checked the temperature on a hot day? Does that high number skew the forecast higher? What if they dialed a phone number made up of large digits right before forecasting?

How To Calculate Forecast Bias

A quick word on improving forecast accuracy in the presence of bias: once bias has been identified, correcting the forecast error is quite simple. Adjust the forecast in question by the appropriate amount in the appropriate direction, i.e., increase it in the case of under-forecast bias and decrease it in the case of over-forecast bias.
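To make the correction concrete, here is a minimal Python sketch, assuming the bias is measured as the mean historical error (forecast minus actual); the function names and the numbers are illustrative, not taken from any particular planning system.

```python
# Minimal sketch: measure bias as the mean historical error and
# shift future forecasts by that amount in the opposite direction.

def measure_bias(forecasts, actuals):
    """Mean error (forecast - actual); positive indicates over-forecast."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def correct_forecasts(future_forecasts, bias):
    """Decrease forecasts for an over-forecast bias (positive),
    increase them for an under-forecast bias (negative)."""
    return [f - bias for f in future_forecasts]

history_forecast = [110, 120, 105, 115]
history_actual = [100, 108, 101, 107]

bias = measure_bias(history_forecast, history_actual)  # 8.5: over-forecast
print(correct_forecasts([112, 118], bias))             # [103.5, 109.5]
```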

Rick Glover on LinkedIn described his calculation of BIAS this way: Calculate the BIAS at the lowest level (for example, by product, by location) as follows:

  • BIAS = Historical Forecast Units (Two-months frozen) minus Actual Demand Units.
  • If the forecast is greater than actual demand, then the bias is positive (indicating an over-forecast). The inverse, of course, results in a negative bias (indicating an under-forecast).
  • On an aggregate level, per group or category, the positive and negative values net out, revealing the overall bias (see the sketch after this list).
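Here is a minimal Python sketch of that netting logic, assuming simple (group, forecast, actual) records; the record layout and the numbers are illustrative, not Rick Glover's actual implementation.

```python
# Sketch: BIAS at the lowest level is frozen forecast units minus actual
# demand units; per group, the +/- values net out into the overall bias.
from collections import defaultdict

records = [
    # (group, frozen forecast units, actual demand units)
    ("Category A", 100, 90),   # +10: over-forecast
    ("Category A", 80, 95),    # -15: under-forecast
    ("Category B", 50, 40),    # +10: over-forecast
]

group_bias = defaultdict(int)
for group, forecast, actual in records:
    group_bias[group] += forecast - actual  # positive = over-forecast

print(dict(group_bias))  # {'Category A': -5, 'Category B': 10}
```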

Another common metric used to measure forecast bias is the tracking signal. On LinkedIn, I asked John Ballantyne how he calculates this metric. Here was his response (which I have paraphrased slightly):

  • The “Tracking Signal” quantifies “Bias” in a forecast. No product can be planned from a severely biased forecast. Tracking Signal is the gateway test for evaluating forecast accuracy. The tracking signal in each period is calculated as follows:

Tracking Signal (per period) = (Actual Demand − Forecast) / MAD

where MAD is the mean absolute deviation of the forecast errors over the window.

  • Once this is calculated for each period, the numbers are added to compute the overall tracking signal. A forecast history entirely void of bias will return a value of zero; with 12 observations, the worst possible result would be either +12 (under-forecast) or -12 (over-forecast). A forecast history returning a value greater than 4.5 or less than -4.5 would be considered out of control (see the sketch below).
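A minimal Python sketch of this calculation, using the standard MAD-based tracking signal consistent with the description above; the ±4.5 control limit comes from the text, while the demand series is illustrative.

```python
# Tracking signal: each period's error (actual - forecast) is divided by
# the MAD (mean absolute deviation of the errors), and the per-period
# values are added. Errors all in one direction give +/-n over n periods.

def tracking_signal(forecasts, actuals, limit=4.5):
    errors = [a - f for f, a in zip(forecasts, actuals)]
    mad = sum(abs(e) for e in errors) / len(errors)
    if mad == 0:
        return 0.0, False  # perfect forecast history: no bias
    total = sum(e / mad for e in errors)  # positive = under-forecast
    return total, abs(total) > limit      # (signal, out of control?)

forecasts = [100] * 12
actuals = [103, 99, 105, 104, 102, 106, 101, 107, 98, 104, 103, 105]
total, flag = tracking_signal(forecasts, actuals)
print(round(total, 2), flag)  # 10.33 True: biased towards under-forecast
```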

At Arkieva, we use the Normalized Forecast Metric to measure the bias. The formula is very simple.

Normalized Forecast Metric = (Forecast − Actual) / (Forecast + Actual)

computed for each period.

As can be seen, this metric stays between -1 and 1, with 0 indicating the absence of bias. Consistent negative values indicate a tendency to under-forecast, whereas consistent positive values indicate a tendency to over-forecast. Over a 12-period window, if the summed values exceed 2, we consider the forecast biased towards over-forecasting; likewise, if the summed values are less than -2, we consider the forecast biased towards under-forecasting.
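Here is a minimal Python sketch of this normalized metric over a 12-period window, using the formula above and the ±2 thresholds from the text; the demand figures are illustrative.

```python
# Normalized metric per period: (forecast - actual) / (forecast + actual),
# bounded by [-1, 1]. Over 12 periods, a sum above +2 flags over-forecast
# bias and a sum below -2 flags under-forecast bias.

def normalized_bias(forecasts, actuals, threshold=2.0):
    values = [(f - a) / (f + a) for f, a in zip(forecasts, actuals)
              if f + a > 0]  # skip periods with neither forecast nor demand
    total = sum(values)
    if total > threshold:
        verdict = "over-forecast bias"
    elif total < -threshold:
        verdict = "under-forecast bias"
    else:
        verdict = "no consistent bias"
    return total, verdict

forecasts = [200, 190, 210, 180, 220, 206, 196, 214, 186, 200, 190, 210]
actuals = [f // 2 for f in forecasts]  # forecasts run at double the demand
total, verdict = normalized_bias(forecasts, actuals)
print(round(total, 2), verdict)  # 4.0 over-forecast bias
```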

A forecasting process with a bias will eventually go off the rails unless steps are taken to correct course from time to time. A better course of action is to measure and then correct for the bias routinely, irrespective of which formula one decides to use.

Good supply chain planners are very aware of these biases and use techniques such as triangulation to prevent them. Eliminating bias can be a good and simple step in the long journey to an excellent supply chain.


