Forecastability Improves with Altitude
Problem Statement
How do you decide which level to forecast at during the short-term operational horizon (Sales & Operations Execution, or S&OE)?
You can choose to go bottom-up (e.g. Product / Customer / Week or Product / Location / Week):
This approach is good for seeing granular patterns (a product going viral, a large customer reducing demand).
However, it usually guarantees high noise or high intermittency in the sales history time series.
Most statistical forecasting methods struggle to produce high-quality forecasts at such granular levels.
The granular time series are relatively unforecastable and significantly increase demand forecast error.
Any review / judgment process for Demand Planners to adjust the statistical forecast at this granular SKU/Location level is simply not scalable, given the thousands of such combinations typically assigned to each Planner.
This approach usually burns out Demand Planners, given the lethal combination of high error and high effort, cycle after cycle.
You can also choose to go top-down (e.g. Product Family / Customer Region / Quarter):
Most Demand Planning Practitioners are aware that aggregating a time series up (going from Product to Product Group / Family level; from Customer to Customer Group / Region level; from Weekly to Monthly / Quarterly buckets) reduces noise and makes the series more forecastable; the short sketch after this list illustrates the effect. This is good when the granular data is simply too intermittent / noisy to forecast.
You do run the risk of missing rich trends that are only visible at lower levels, e.g. Product or Customer (a specific Product going viral, a specific large Customer buying progressively lower volumes). This loss of signal through aggregation hurts forecast quality.
Any benefit from forecasting at aggregated levels can be quickly lost when disaggregating the signal back to the S&OE execution level (SKU/Location), which is where the Supply Planners need to be: after all, we don't ship Product Families to groups of locations monthly; we ship individual Products to individual locations weekly, and hence need to make sourcing decisions at that granular level.
Any manual method to disaggregate the top-down demand forecast to granular levels is fraught with high effort and high error. The disaggregation factors keep changing and are a huge pain to maintain. Such manual approaches promptly wash out any benefit of the more forecastable aggregate-level forecast, as they re-introduce error at the granular level being fed to the Supply Planners.
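A minimal sketch in Python, on synthetic data, of the noise-cancellation effect referenced above (the Poisson demand assumption and all numbers are illustrative): the coefficient of variation, a rough proxy for unforecastability, falls sharply when intermittent Product/Location series are summed.

```python
# Toy demonstration (synthetic data) that noise partially cancels under
# aggregation: the coefficient of variation (CoV = std / mean) drops
# sharply when granular series are summed up the hierarchy.
import numpy as np

rng = np.random.default_rng(42)

# 50 Product/Location series, 104 weeks of intermittent weekly demand each.
granular = rng.poisson(lam=2.0, size=(50, 104))

granular_cov = np.mean(granular.std(axis=1) / granular.mean(axis=1))
aggregate_cov = granular.sum(axis=0).std() / granular.sum(axis=0).mean()

print(f"average granular CoV: {granular_cov:.2f}")   # ~0.71 (very noisy)
print(f"aggregate CoV:        {aggregate_cov:.2f}")  # ~0.10 (much smoother)
```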
As volatility increases all around us, it is increasingly critical, and simultaneously near impossible, to get the forecast level right through manual effort. This is a tough problem to solve manually, and solving it wrong (forecasting at too high or too low a level) means high forecast error and hence a high cost of forecast error.
The result is continued high error at granular levels, often hidden from executives by computing and reporting error at much more aggregated levels (e.g., Product Family / Quarter) or even by outright capping error at 100%. The supply chain costs of 100% error and 1000% error are poles apart, but capping at 100% treats both as having the same impact. Noise cancels out at aggregated levels, and the stable (more forecastable) aggregate time series provides a false sense of well-being ('our forecast accuracy is already 83%, we are fine...').
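A tiny numeric illustration (hypothetical numbers) of why capping matters:

```python
# Hypothetical numbers: two SKUs with very different forecast misses.
actual = 100
forecast_bad, forecast_awful = 200, 1100

err_bad   = abs(forecast_bad - actual) / actual    # 1.0  -> 100% error
err_awful = abs(forecast_awful - actual) / actual  # 10.0 -> 1000% error

# Capping reports both as 100%, even though the second miss drives roughly
# ten times the excess stock and obsolescence cost of the first.
capped = [min(e, 1.0) for e in (err_bad, err_awful)]
print(capped)  # [1.0, 1.0]
```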
Few are willing to compute and report the cost of forecast error, or to explain why, if they indeed do such an excellent job of forecasting demand, their supply chain continues to be plagued by excess stock, expediting costs, and customer service failures.
So, if top-down is lazy and bottom-up is back-breaking, how do we move forward?
How do we reduce the high risk & cost of forecast error (high carrying cost for stock, excess & obsolescence costs, expediting costs, lost sales & customer satisfaction risk, planner burnout risk, etc.)?
Solution Statement
Can AI help with this problem?
Yes, it can, with two advanced capabilities working together: 1. Forecast Level Optimization and 2. Ensemble Forecasting.
Forecast Level Optimization:
Instead of getting locked into a specific forecast level (which could be too high or too low), we let the forecasting software evaluate statistical forecast quality across all levels of hierarchical aggregation (Product/Location, Product, Product Group/Location, Product Group/Location Group) as well as time aggregation (weekly, monthly, quarterly).
We then disaggregate all these combinations of aggregate forecasts down to the level that matters to the Supply Chain in the short term (Product/Location/Week) intelligently, with ML-generated disaggregation factors validated by computing historical error on the training portion of the historical dataset.
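A minimal sketch, in Python with illustrative column names, of the disaggregation step: here the factors are simple historical volume shares rather than the ML-generated factors described above, but the mechanics of pushing an aggregate forecast back down to Product/Location/Week are the same.

```python
# Disaggregate a Product-level forecast to Product/Location/Week using
# historical share-of-volume factors (a simple stand-in for ML-generated
# factors; column names and numbers are illustrative).
import pandas as pd

history = pd.DataFrame({
    "product":  ["P1"] * 8,
    "location": ["L1", "L2"] * 4,
    "week":     [1, 1, 2, 2, 3, 3, 4, 4],
    "units":    [30, 10, 25, 15, 35, 5, 30, 10],
})

loc_totals  = history.groupby(["product", "location"])["units"].sum()
prod_totals = history.groupby("product")["units"].sum()
share = loc_totals.div(prod_totals, level="product")   # L1: 0.75, L2: 0.25

aggregate_forecast = {"P1": 160}   # Product-level forecast, 4-week horizon
weeks = 4

weekly_granular = share * aggregate_forecast["P1"] / weeks
print(weekly_granular)   # P1/L1 -> 30 units/week, P1/L2 -> 10 units/week
```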
Ensemble Forecasting:
We generate a range of forecasts across all combinations of a specific forecast model, a specific forecast level, and a specific time aggregation.
Rather than choosing any one of the multiple forecasts available to us, we develop an optimally blended ensemble forecast by taking different proportions of all the various forecasts (machine as well as human) to produce the least overall error.
An ensemble forecast typically outperforms any single forecast at a specific level, significantly reducing both the cost of forecast error and Demand Planner effort.
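A minimal sketch of the blending step, assuming a held-out validation window and using non-negative least squares (the toy data and the NNLS-plus-normalize scheme are illustrative assumptions, not the production method):

```python
# Fit non-negative blend weights for three candidate forecasts against
# validation-window actuals, then combine them into one ensemble forecast.
import numpy as np
from scipy.optimize import nnls

actuals = np.array([100.0, 120.0, 90.0, 110.0, 105.0])  # validation window
candidates = np.column_stack([
    [ 95.0, 130.0,  80.0, 115.0, 100.0],  # stat model, Product/Location/Week
    [110.0, 115.0,  95.0, 100.0, 110.0],  # disaggregated Product/Month model
    [120.0, 140.0, 110.0, 130.0, 125.0],  # human forecast (biased high)
])

weights, _ = nnls(candidates, actuals)  # least-squares fit, weights >= 0
weights /= weights.sum()                # normalize (a simplification)

ensemble = candidates @ weights
print("weights:", weights.round(2))
print("ensemble MAE:", np.mean(np.abs(ensemble - actuals)).round(1))
```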
How do Forecast Level Optimization and Ensemble Forecasting work together?
Your demand planning software should come with a wide library of Forecast Models including advanced Machine Learning / Deep Learning models (AdaBoost, Random Forest, Gradient Boosting, Neural Networks, etc.).
It should enable an automated and comprehensive evaluation of all the combinations: all the forecast models x all the forecast hierarchy levels x all the time aggregation levels.
It should then dynamically identify the weight to be given to each combination as it produces the ensemble (optimal) forecast.
It should test these weights by checking model performance on the Validation Phase of the historical dataset, to ensure there was no 'overfitting' during the Training Phase.
It should be able to report these weights to provide a 'glass box' with full explainability to Demand Planners. This helps them understand how effective each machine or human forecast is in each part of the product portfolio at each forecast lag.
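A minimal skeleton of that evaluation loop, with hypothetical model, level, and bucket names and a stubbed scoring function (inverse-error weighting stands in for the optimizer the text describes):

```python
# Score every forecast model x hierarchy level x time bucket on a held-out
# validation window, convert scores to ensemble weights, and report them
# as a 'glass-box' table. Names and the stub scorer are illustrative.
from itertools import product

MODELS  = ["naive", "exp_smoothing", "gradient_boosting"]
LEVELS  = ["product/location", "product", "product_group/location"]
BUCKETS = ["weekly", "monthly", "quarterly"]

def validation_error(model, level, bucket):
    """Stub: real software would fit on the training window and measure
    error on the validation window after disaggregating each combination
    back to Product/Location/Week. A deterministic fake keeps this runnable."""
    return 1.0 + (len(model) * len(level) * len(bucket)) % 7

scores = {combo: validation_error(*combo)
          for combo in product(MODELS, LEVELS, BUCKETS)}

# Inverse-error weighting: lower validation error -> higher ensemble weight.
inverse = {combo: 1.0 / err for combo, err in scores.items()}
total = sum(inverse.values())
weights = {combo: w / total for combo, w in inverse.items()}

# Glass-box report: which combination carries how much weight.
for (model, level, bucket), w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{w:6.1%}  {model:18s} @ {level:24s} / {bucket}")
```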
For example, if there is a clear trend at a customer / region level, your demand planning software should pick it up and give higher weight to those customer / region level forecasts for the relevant part of the product portfolio. If the Product / Customer level series is too intermittent for most customers except a few large ones, the software should catch that too: it can forecast only the large customers at Product / Customer level while pooling the rest into a Product / 'all other customers' bucket, as sketched below.
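A minimal sketch of that customer bucketing, with illustrative column names and a hypothetical 10% volume-share threshold:

```python
# Keep large customers as their own forecast series; pool the long tail
# into a single, far less intermittent 'all other customers' series.
import pandas as pd

sales = pd.DataFrame({
    "customer": ["BigCo", "MegaMart", "c1", "c2", "c3", "c4"],
    "units":    [5000, 3000, 120, 80, 60, 40],
})

share = sales.set_index("customer")["units"] / sales["units"].sum()
THRESHOLD = 0.10   # hypothetical cutoff for a 'large' customer

sales["forecast_group"] = sales["customer"].where(
    sales["customer"].map(share) >= THRESHOLD, other="all other customers"
)
print(sales.groupby("forecast_group")["units"].sum())
# BigCo 5000, MegaMart 3000, all other customers 300
```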
Benefit Statement
So, how does this help the Business?
This ability to dynamically assess forecastability across all possible combinations of forecast model, forecast level, and time bucket produces a much higher-quality demand signal in a highly autonomous fashion.
This autonomous and optimal forecasting approach can reduce forecast error by 10% or more and cost of forecast error by 5% or more.
This results in significant productivity gains for the Demand Planners, whose role shifts mostly to providing market intelligence and planned demand-shaping activities on top of the system-generated Baseline Forecast.
We can help you understand the value to your business through a rapid Proof of Value assessment with your data. Send us an email at info@vyan.ai