Choosing the Optimal Forecasting Method to Minimize Planning Errors: Q&A on SAP APO DP Model Selection Tools

March 05, 2015

Which forecast method can deliver the greatest accuracy for demand planning? Are you currently using the optimal model and planning level to meet forecast requirements?

SCM 2015 speaker Alistair Thornton of Deloitte Consulting took readers' questions on SAP APO demand planning options for model selection - including the underutilized Forecast Level Optimizer tool (sometimes referred to as "FLO") - to improve forecasting and planning.

View the replay below along with the edited transcript.



James Ciccone, SCM 2015 Conference Producer:

Thank you to everyone for joining us today for a chat on SAP APO tools for forecast model selection. 

Joining us today to take your questions is Alistair Thornton. Alistair is a Senior Manager at Deloitte Consulting and a speaker at our upcoming SCM 2015 conference in Las Vegas.

Alistair will be presenting this session on model selection tools: “A better ‘best fit’: A guide to leveraging underutilized SAP APO demand planning functionality to optimize statistical forecast accuracy”.

Alistair, thank you for joining us today to take some questions. 

Alistair Thornton, Senior Manager, Deloitte Consulting: Hello everyone and thanks for joining this chat.

James Ciccone: Thanks again for joining us, Alistair! We’ll let you get started on those questions that are already waiting for you…


Comment From Milton Gallegos

My question is about time granularity for historical data for two groups of finished products. For externally procured finished products, the planned delivery time is about 7 days. For finished products produced in-house, the in-house production time is about 30 days.

Alistair Thornton: It is certainly possible to have different granularities in the time dimension in APO. I have seen implementations using days, weeks, calendar months, and fiscal periods.

One thing to consider when choosing the time granularity is how that might affect the ability of the algorithms to detect patterns such as seasonality.
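To make the seasonality point concrete, here is a minimal Python sketch (an illustration, not SAP's implementation) of an autocorrelation check at the seasonal lag. With three years of history, monthly buckets give 36 points to test a 12-period season, while weekly buckets would need lag 52 and far more history per cycle, so the choice of granularity directly affects what the algorithm can detect:

```python
import numpy as np

def seasonal_autocorrelation(history, season_length):
    """Autocorrelation of the demand history at the seasonal lag.
    A value well above zero suggests a detectable seasonal pattern.
    The formula and usage here are illustrative, not SAP's test."""
    x = np.asarray(history, dtype=float)
    if len(x) < 2 * season_length:
        return None  # too little history to test at this granularity
    x = x - x.mean()
    denom = (x ** 2).sum()
    if denom == 0:
        return 0.0  # flat history: no pattern to detect
    return float((x[season_length:] * x[:-season_length]).sum() / denom)

# Three years of monthly buckets: a 12-period season is testable.
monthly = [100, 90, 80, 70, 80, 90, 110, 130, 150, 140, 120, 110] * 3
print(seasonal_autocorrelation(monthly, season_length=12))  # clearly positive
print(seasonal_autocorrelation(monthly, season_length=52))  # None: not enough data
```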


Comment From test

Hello, as I understand it, APO only offers basic models such as ARIMA for forecasting. Is there an option to include PAL or even R to use any models like multivariate regressions, SVM, Neural Nets, etc.?

Alistair Thornton: This is mostly correct. APO does not support ARIMA, however, and it does support multivariate regression. There is also a way to write your own forecasting algorithm and have APO use it - if you wanted to use the more advanced methods you refer to, that would be an option.


Comment From george morstadt

What are your thoughts on APO's MAPE calculation with history that includes zeros?

Alistair Thornton: As part of the project in which we implemented the Forecast Level Optimizer, some work was done by our technical team to walk through the program logic for the standard MAPE calculations. I didn't do this myself, so I don't have all the details, but there were a few things we felt weren't as good as they could be.
My recommendation, if using any form of 'best fit' approach in APO, is to write your own error measure algorithm.

Another item to look at is the flag in the forecasting profile called 'Without leading zeroes'. This tells the forecasting algorithm to adjust the start of the horizon to the first non-zero period, and I would assume (but haven't tested) that this would also cause the MAPE score to ignore those periods.
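For the flavor of such a custom error measure, here is a minimal Python sketch of a MAPE variant that drops leading zeros (mirroring the effect of that flag) and skips any remaining zero periods instead of dividing by them. It is an illustrative stand-in, not SAP's standard MAPE logic:

```python
import numpy as np

def mape_ignoring_zeros(actuals, forecasts, skip_leading_zeros=True):
    """MAPE variant that sidesteps division by zero.
    Plain MAPE is undefined wherever actuals are zero; this sketch
    trims leading zeros and skips remaining zero periods instead.
    An illustrative custom measure, not SAP's standard logic."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    if skip_leading_zeros:
        nonzero = np.nonzero(a)[0]
        if len(nonzero) == 0:
            return None  # all-zero history: error is undefined
        a, f = a[nonzero[0]:], f[nonzero[0]:]
    mask = a != 0
    if not mask.any():
        return None
    return float(np.mean(np.abs(a[mask] - f[mask]) / a[mask]) * 100)

print(mape_ignoring_zeros([0, 0, 100, 0, 120], [10, 10, 90, 15, 110]))
```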


Comment From Erik

If APO suggests that the great majority of the products fit best with a constant model, should anything be taken from that regarding expectations of how "forecast-able" the product line is on a monthly basis?

Alistair Thornton: I would say that a “best in class” approach to statistical forecasting includes a step for demand segmentation. This would analyze the data and determine which parts of the total data set really can be forecasted.

It's also possible that the choice of the constant model for a large section of the products is due more to the error measure being used than the inherent demand patterns.

There is a relatively new feature in APO called “ABC/XYZ Analysis,” which can perform demand segmentation.


Comment From Subho

We do demand segmentation outside of APO. So now we can do it in APO - that's great. Please tell us more about ABC/XYZ analysis.

Alistair Thornton: The ABC / XYZ analysis looks at a set of data, and plots the volume against the variability of the shipment history for each one. This tells you whether a given product is high or low volume, and high or low variability.

It's also possible to have the ABC/XYZ tool assign a forecast profile based on this analysis, although I'm not personally convinced that I would use this feature - I don't see a strong reason to assume that all low volume / low variability products should use the same forecast profile.

What I have seen done is to use the ABC / XYZ functionality to perform the calculations, and then use the results of that to update navigational attributes in the APO DP database - this involves some custom enhancements, but enables you to select data in APO planning books based on the segmentation.


Comment From Mark

If one uses APO for an ABC/XYZ analysis, what CV should be used for X, Y, and Z?

Alistair Thornton: Although the feature is called 'ABC / XYZ', which implies three levels of volume (low / medium / high) and three levels of variability (l / m / h), you don't have to stick to three. My client tried to keep this simple, and only used two levels (low / high) in each. We used a coefficient of variation of 1 to determine the split between high and low variability.
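As an illustration of that two-by-two scheme, here is a minimal Python sketch that classifies one product by volume and by coefficient of variation, using the CV cutoff of 1 mentioned above; the volume cutoff is a hypothetical, business-chosen parameter:

```python
import numpy as np

def classify_product(history, volume_cutoff, cv_cutoff=1.0):
    """Two-level volume/variability segmentation of one product.
    Mirrors the two-by-two scheme described above: volume split at
    a business-chosen cutoff, variability split at a coefficient
    of variation of 1. Cutoffs are illustrative assumptions."""
    x = np.asarray(history, dtype=float)
    mean = x.mean()
    cv = x.std() / mean if mean > 0 else float("inf")
    volume = "high" if x.sum() >= volume_cutoff else "low"
    variability = "high" if cv >= cv_cutoff else "low"
    return volume, variability, round(cv, 2)

# A steady runner vs. a lumpy low-volume item
print(classify_product([100, 110, 95, 105, 98, 102], volume_cutoff=500))
print(classify_product([0, 40, 0, 0, 60, 0], volume_cutoff=500))
```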


Comment From Subho

In ABC analysis, can I define breakpoints, or is it automated in APO?

Alistair Thornton: You determine the breakpoints between the levels (i.e. what constitutes high vs low).


Comment From Subho

I have a volatile demand pattern. What should my approach be to selecting models for better forecasting of future demand?

Alistair Thornton: I think the first thing to look at is how 'forecastable' this set of products is (i.e. use segmentation to determine this). For products which are very volatile and high volume, I would recommend involving other functions in your business to provide forecasts based on customer and market intelligence they have gathered.

Assuming that you do want to produce a statistical forecast for these products (and this would be common), then there are many ways to determine which profiles to use. These include use of automated approaches like the FLO or Automodel in SAP, which perform tests to determine the 'best' forecast. Or you could do a more manual analysis periodically to test various algorithms, and then 'fix' them for a period of 6 months or so.
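For readers new to 'best fit' logic, here is a conceptual Python sketch of what any such procedure does: hold out recent history, score candidate models with an error measure, and keep the winner. The two candidate models are toy stand-ins, not SAP's Automodel catalog:

```python
import numpy as np

def constant_model(train, horizon):
    """Forecast the training mean for every future period."""
    return np.full(horizon, train.mean())

def trend_model(train, horizon):
    """Extend a fitted straight line into the future periods."""
    t = np.arange(len(train))
    slope, intercept = np.polyfit(t, train, 1)
    future = np.arange(len(train), len(train) + horizon)
    return intercept + slope * future

def pick_best_fit(history, holdout=6):
    """Conceptual 'best fit': score candidates on held-out periods
    and keep the one with the lowest mean absolute error."""
    x = np.asarray(history, dtype=float)
    train, test = x[:-holdout], x[-holdout:]
    candidates = {"constant": constant_model, "trend": trend_model}
    scores = {name: np.abs(model(train, holdout) - test).mean()
              for name, model in candidates.items()}
    return min(scores, key=scores.get), scores

best, scores = pick_best_fit([50, 55, 61, 64, 70, 76, 79, 85, 90, 95, 99, 106])
print(best, scores)  # the trending series should favor the trend model
```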


Comment From Guest

When your business has multiple selection IDs and aggregation levels, and you want to test whether they are correct and/or the statistical strategy for each one is "the best," what is a good systematic way to do that efficiently?

Alistair Thornton: This is handled by the Forecast Level Optimizer. In the session [at SCM 2015], I'll show you a screenshot of one of the process chains we set up to run the forecasting in FLO. We split this up into groups of products which we expected to share similar demand patterns. The FLO can make use of parallelization, so our approach was to give it as many parallel analyses to perform as the server could handle.


Comment From george morstadt

I see in the OSS note for the FLO that it can work on determining the forecast level as well as model selection. What was your experience with having the forecast level be variable?

Alistair Thornton: Yes, this is really the key feature of the FLO compared to other "best fit" approaches in APO. In practice, allowing a lot of levels to be analyzed does increase the runtime, so it works best if you can narrow down the options before the FLO runs them.


Comment From Jeff

To get the best results from a model, at what level of granularity should the aggregation level be in terms of the presence of volume? For example, SKU proliferation can create great intermittency in volume if the forecast is run at that level.

Alistair Thornton: I would say that this is the key feature of the Forecast Level Optimizer - its ability to grind through multiple levels of aggregation and then tell you which level gave the best results.
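Here is a conceptual Python sketch of that idea: compare the holdout error of forecasting two intermittent SKUs individually against forecasting their smoother aggregate and disaggregating by historical share. It illustrates the principle only, not the FLO's actual algorithm:

```python
import numpy as np

def compare_levels(sku_histories, holdout=3):
    """Forecast each SKU directly with a naive last-value model,
    then forecast the aggregate and split it by training share,
    and report the holdout error of each level. A conceptual
    illustration in the spirit of the FLO, not its algorithm."""
    skus = {k: np.asarray(v, dtype=float) for k, v in sku_histories.items()}
    total = sum(skus.values())

    # Level 1: forecast each intermittent SKU series on its own.
    sku_err = np.mean([np.abs(v[:-holdout][-1] - v[-holdout:]).mean()
                       for v in skus.values()])

    # Level 2: forecast the smoother aggregate, split by training share.
    agg_forecast = total[:-holdout][-1]
    train_total = total[:-holdout].sum()
    agg_err = np.mean(
        [np.abs(agg_forecast * v[:-holdout].sum() / train_total
                - v[-holdout:]).mean() for v in skus.values()])
    return {"sku_level": sku_err, "aggregate_level": agg_err}

# Two SKUs that alternate: noisy alone, stable when combined.
histories = {"A": [9, 0, 12, 0, 10, 0, 11, 0],
             "B": [0, 11, 0, 9, 0, 12, 0, 10]}
print(compare_levels(histories))  # the aggregate level scores better here
```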


Comment From Ian B

Do you trust the outlier correction methodology within Univariate forecasting, or would you recommend writing a macro to try and cleanse historical sales in planning books prior to running a stat forecast?

Alistair Thornton: In my experience this is a choice of approach. Some companies want to spend time manually cleansing history, and if you can do this I think it's the best approach, though it obviously requires work. Automatic outlier control is an easy option to enable and will catch significant outliers, so if you can't spend the time to do this manually (and many companies can't), then I would definitely look at enabling it.
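As I understand it, automatic outlier control builds a tolerance lane around the ex-post forecast and pulls points back inside it. Here is a deliberately simplified Python sketch of the same idea using a lane of the mean plus or minus a sigma factor; treat it as an illustration, not the SAP algorithm:

```python
import numpy as np

def cleanse_outliers(history, sigma_factor=3.0):
    """Simplified sigma-rule cleanse: points outside the lane of
    mean +/- sigma_factor * std are pulled back to the lane edge.
    APO's outlier control builds its lane around the ex-post
    forecast instead; this is a stand-in for the concept only."""
    x = np.asarray(history, dtype=float)
    lo = x.mean() - sigma_factor * x.std()
    hi = x.mean() + sigma_factor * x.std()
    return np.clip(x, lo, hi)

history = [100, 104, 98, 101, 400, 99, 103]  # promo spike at period 5
print(cleanse_outliers(history, sigma_factor=1.5))
```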


Comment From Erik

How many CVCs are too many CVCs?

Alistair Thornton: This is very specific to your particular implementation and hardware. The number of CVCs in itself is only one factor impacting the overall performance of APO. I have heard of APO DP systems with more than 5 million CVCs.


Comment From Subho

How do you see seasonal models performing in APO compared to something like SPSS? Do you suggest doing the statistical forecasting outside of APO?

Alistair Thornton: I'm not familiar with SPSS, but I could understand some companies wanting to produce the statistical forecast outside of APO and then interface it in. This often involves a conversation with the company IT group if they have a strategy to use SAP software where possible.


Comment From Milton Gallegos

How can the system help the demand planner analyze a product's stockouts and then correct the historical sales before running the forecast?

Alistair Thornton: There's nothing standard in APO that would do exactly this. But a simple thing to start with would be to write a macro that looks for periods of "empty" shipment history preceded by non-zero periods and creates an alert. The planner could then launch the planning book directly from the alert list to decide whether to cleanse that period.
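That macro rule is simple enough to sketch. Here it is in Python (the minimum-demand parameter is a hypothetical knob); a planning-book macro would implement the same comparison and raise the alert:

```python
def flag_possible_stockouts(history, min_prior_demand=1):
    """Flag zero periods that follow non-zero demand: candidate
    stock-out periods a planner should review before forecasting.
    Returns the period indexes that would trigger an alert."""
    flags = []
    for i in range(1, len(history)):
        if history[i] == 0 and history[i - 1] >= min_prior_demand:
            flags.append(i)
    return flags

shipments = [120, 115, 0, 0, 130, 125, 118]
print(flag_possible_stockouts(shipments))  # [2]: the first zero after demand
```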


Comment From Subho

How can I use the auto model in the most efficient way?

Alistair Thornton: I would say that using a custom error measure is one of the most important pieces of using auto models.


Comment From george morstadt

How do you create planner understanding when you have variable forecast levels in action? How does the planner know what level a product is forecast at? How do they know whether they are looking at the actual forecast level or an aggregation or disaggregation of the actual forecast?

Alistair Thornton: In fact, it is difficult to tell by looking at the data in the planning book. The forecasting methods in APO produce reports, stored as logs in the system, which can be read if you want to know the exact levels and parameters used. I think the use of more advanced methods works best when there is a small group of dedicated and well-trained forecasting users who will look at these details and make the necessary "tweaks" to produce a good statistical forecast.

But I think it's also true that the majority of users in APO don't need to know how the forecast was produced; they just need to use it.


Alistair Thornton: We're at the end of our time, so just wanted to say thanks to everyone for joining and providing the questions.

James Ciccone: Thanks to everyone who joined us today!

For more on demand forecasting and supply planning, join us at SCM 2015 this spring - March 30-April 1 in Las Vegas. Alistair Thornton will be presenting his session on forecast selection tools as part of our dedicated Demand Planning and S&OP track in Vegas.

For those who asked for a more general introduction to APO and its forecasting capabilities, here is a great resource from Alistair's APO jumpstart day presentation at a SAPinsider SCM conference a few years back.

Finally, a big thank you to Alistair Thornton of Deloitte for taking the time for these questions. We’re looking forward to seeing you in Las Vegas at this year's SCM conference in just a few weeks!

