Lean Forecasting through Override Value Prediction

A typical demand planning process starts with statistical forecasting, followed by demand planning. The demand planning step typically uses the statistical forecast as a baseline, then lets the sales team, marketing team, and demand planners adjust it as needed. The idea is that humans have forward-looking business knowledge (planned promotions, pricing changes in response to competitive actions, new products, new customers, supply chain disruptions, new regulations, etc.) that the backward-looking statistical forecasting engine missed, and hence this market intelligence needs to be reflected in the statement of consensus demand.

Given that these overrides take serious human effort to review and adjust the statistical forecast, the ROI of such efforts needs to be quantified to ensure the juice is worth the squeeze. As demand planners override the statistical forecast up or down, by a little or a lot, it becomes important to understand the direction and magnitude of such overrides, as well as override density (which parts of the product/location or product/customer portfolio are getting the most overrides). We are not referring to the process of demand planners tweaking forecast model parameters to produce a different statistical forecast, but to the act of simply overriding a best-fit statistical forecast in a way that cannot be explained back to the forecast model so that it can produce a better statistical forecast next cycle (assuming such overrides help reduce error).
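The direction, magnitude, and density metrics described above are simple to compute from an override log. A minimal sketch, where the table layout, column names, and numbers are all hypothetical:

```python
import pandas as pd

# Hypothetical override log: one row per product/location planning cycle,
# with the baseline statistical forecast and the planner-adjusted consensus.
overrides = pd.DataFrame({
    "product":   ["A", "A", "B", "B", "C"],
    "location":  ["N1", "N1", "N1", "N2", "N2"],
    "stat_fcst": [100, 120, 80, 60, 200],
    "consensus": [130, 110, 80, 90, 150],
})

# Override magnitude (absolute and relative) and direction.
overrides["override"] = overrides["consensus"] - overrides["stat_fcst"]
overrides["override_pct"] = overrides["override"] / overrides["stat_fcst"]
overrides["direction"] = overrides["override"].apply(
    lambda x: "up" if x > 0 else ("down" if x < 0 else "none"))

# Override density: share of product/location rows that were touched at all.
density = (overrides["override"] != 0).mean()
print(overrides[["product", "location", "direction", "override_pct"]])
print(f"override density: {density:.0%}")
```

Grouping `override_pct` by product family or location would then show which parts of the portfolio absorb the most planner effort.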

Override direction is important for quantifying planner-introduced bias: planners overriding upward increase bias whenever actual sales do not exceed the statistical forecast. Overrides may also succeed regardless of direction (up or down); in that case, it is important to correlate override direction and size in the context of the product portfolio. Where correlations are sustained (large positive overrides usually hurt; large negative overrides usually help), it becomes useful to alert demand planners in real time as they override the statistical forecast. The key is to differentiate sustained patterns from random variation. This is where machine learning helps: both to forecast the value added by a specific override and to attach a confidence rating to that prediction. For example, we might be 80% confident that a large positive override will hurt for stable sections of the product portfolio, but only 20% confident for a large positive override on a recently launched product (in case it is going viral).
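The core idea can be illustrated without a full ML pipeline: score each (portfolio segment, override type) combination by its historical hit rate, and refuse to call anything a pattern until there is enough history. This is a deliberately simplified stand-in for a real model with calibrated probabilities; the segments, buckets, and outcomes below are all hypothetical:

```python
import pandas as pd

# Hypothetical history of past overrides: which portfolio segment they hit,
# their direction/size bucket, and whether they ended up reducing error.
history = pd.DataFrame({
    "segment": ["stable"] * 6 + ["new_product"] * 4,
    "bucket":  ["large_up"] * 10,
    "helped":  [0, 0, 0, 0, 1, 0,   1, 0, 1, 1],
})

# Empirical value-add per (segment, bucket): hit rate and sample size.
stats = (history.groupby(["segment", "bucket"])["helped"]
                .agg(hit_rate="mean", n="count"))

def predict_override_value(segment, bucket, min_samples=5):
    """Return (likely_to_help, confidence) for a proposed override.

    Confidence is the empirical hit rate (or miss rate); segments with
    too little history return (None, 0.0) -- a sustained pattern cannot
    be distinguished from random variation yet."""
    try:
        row = stats.loc[(segment, bucket)]
    except KeyError:
        return None, 0.0
    if row["n"] < min_samples:
        return None, 0.0
    likely_to_help = bool(row["hit_rate"] >= 0.5)
    confidence = float(max(row["hit_rate"], 1 - row["hit_rate"]))
    return likely_to_help, confidence

helps, conf = predict_override_value("stable", "large_up")
print(helps, round(conf, 2))  # stable segment: large ups have mostly failed
```

In this toy history, a large positive override on a stable item is flagged as likely to hurt with high confidence, while the recently launched product has too few observations to trigger an alert, mirroring the 80%-vs-20% example above.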

Usually, large overrides that consistently fail point to toxic practices: inflating the forecast to please the finance team or executives; hiding bad news as long as possible in a shoot-the-messenger culture; guaranteeing product availability to support a tender bid, no matter how low the winning odds; or simply getting a large capital expense approved to hoard resources just in case. These are all statements of what we would like the demand to be, as opposed to what we believe the demand most likely will be in the absence of proactive mitigating action.

It's also important to understand overrides from a lag perspective: how far out in the future do planners make such overrides, and how many chances do they get to revise their judgment before a hard execution action is triggered in response to the consensus demand — raw material procurement, hiring direct labor, production of sub-assemblies, or, worst case, the final product. For a product with a 4-month cumulative lead time, lag-4 error and bias (4-cycle-ahead forecast vs. actuals) are far more critical than a lag-5/6/7 forecast. Forecast error is usually measured mostly at lag 1 (this month's forecast for next month), which misses the nuance that most of the costs may already have been incurred if raw materials have a 3-month lead time and represent the majority of the cost of goods sold (relative to production costs and final-product transportation costs).
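Lag-based error measurement only requires keeping a snapshot of each cycle's forecast rather than overwriting it. A minimal sketch with hypothetical numbers, where lag is the distance (in cycles) between when a forecast was made and the month it targets:

```python
import pandas as pd

# Hypothetical forecast snapshots: in planning cycle `cycle` we forecasted
# demand for `target_month`; lag = target_month - cycle.
snaps = pd.DataFrame({
    "cycle":        [1, 1, 1, 1, 2, 2, 2, 3, 3, 4],
    "target_month": [2, 3, 4, 5, 3, 4, 5, 4, 5, 5],
    "forecast":     [100, 110, 120, 130, 105, 118, 125, 115, 122, 128],
})
actuals = pd.Series({2: 95, 3: 100, 4: 110, 5: 120}, name="actual")

snaps["lag"] = snaps["target_month"] - snaps["cycle"]
snaps = snaps.join(actuals, on="target_month")
snaps["abs_pct_err"] = (snaps["forecast"] - snaps["actual"]).abs() / snaps["actual"]
snaps["bias"] = (snaps["forecast"] - snaps["actual"]) / snaps["actual"]

# For a 4-month cumulative lead time, the lag-4 row is the one that matters:
by_lag = snaps.groupby("lag")[["abs_pct_err", "bias"]].mean()
print(by_lag)
```

In this toy data, lag-4 error is noticeably worse than lag-1 error — exactly the gap that a lag-1-only error report would hide.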

Ideally, the statistical forecast is built on a forward-looking, driver-based model that factors in downstream signals (customer inventory, etc.), demand-shaping drivers (pricing, promotions, new products, etc.), and macroeconomic factors (interest rates, consumer confidence index, exchange rates, etc.). This forward-looking statistical forecasting (or demand modeling) approach removes the core argument that overrides to a backward-looking traditional statistical forecast are needed in a volatile business environment. We recommend both driver-based forecasting and override value prediction as best practices to eliminate overrides that are likely to increase error/bias with high probability. Humans should focus on providing their inputs as driver forecasts (e.g., forecasted interest rates, planned promotions, future pricing) rather than directly editing the driver-based statistical forecast without a proper explanation of why such an override could not be reflected in the underlying demand-driver forecasts/plans.
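In its simplest form, a driver-based model is just a regression of demand on the drivers, and planner input becomes a change to the driver values rather than to the output. A toy linear sketch — the drivers, coefficients, and demand figures are all hypothetical:

```python
import numpy as np

# Hypothetical history: drivers are [price index, promo flag, consumer
# confidence index], one row per period.
X_hist = np.array([
    [1.00, 0, 100],
    [0.95, 1, 102],
    [1.05, 0,  98],
    [0.90, 1, 105],
    [1.00, 0, 101],
])
y_hist = np.array([1000, 1180, 940, 1260, 1020])

# Fit a linear demand model (with intercept) by least squares.
A = np.column_stack([X_hist, np.ones(len(X_hist))])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# Planners supply *driver* forecasts -- a planned price cut plus a promotion
# and an expected confidence reading -- instead of overriding the output:
x_future = np.array([0.92, 1, 103, 1.0])
forecast = x_future @ coef
print(f"{forecast:.0f}")
```

Because the human input lives in `x_future`, the reason for any change in the forecast is explainable (and reusable next cycle) by construction.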

Vyan enables real-time override value predictions and alerts to flag (and potentially block) value-hurting overrides, thus reducing both planner effort and forecast error/bias. Post-game reporting on forecast error is far less useful: the damage is already done, and there are no structured guardrails to help planners avoid wasteful overrides in the future. To learn more, reach out to the vyan.ai team for a live demonstration or a proof of value with your data.
