866.P4D.INFO | Plan4Demand.com | Info@plan4demand.com
Post Go-Live's Come & Gone... Now What?
All software implementations suffer from post go-live issues that arise out of the trade-offs between cost and budget, time constraints and benefits, and of course human interaction. There is no "one size fits all" design, so tailoring your optimization plan to address common pain points becomes critical to lasting success.
What Attributes can be Optimized to gain the most benefit?
Through his experience in over 100 JDA implementations, John George has identified several key attributes whose optimization can provide the most payback for the effort. Learn from his experiences!
This session will provide several pragmatic optimization tips from both technical
& functional perspectives, including:
• Establishing Proper Thresholds on key Forecasting metrics
• Reacting to Period by Period accuracy issues
• Aligning DFUs to the right Forecasting Algorithm
• Cleansing Demand Signals
• Fine-tuning Batch Processes
• Smoothing In/Outbound Bottlenecks
• Tuning Service Run Environment (SRE)
For more information about Plan4Demand, visit www.plan4demand.com
Contact the event organizer, Jaime Reints, for more information on the topic:
Jaime.Reints@plan4demand.com or 412.733.5011
Check out this webinar on-demand at
http://www.plan4demand.com/Video-Tips-to-Optimize-JDAs-Demand-Planning-Module
1. DEMAND PLANNING LEADERSHIP EXCHANGE
PRESENTS:
The web event will begin momentarily
with your host:
Featuring Guest Commentators:
October 16th, 2012 | Plan4Demand
2. Goals for the Session
Putting “Optimization” in context
Business and Technology objectives
Business Optimization considerations
Technical Optimization considerations
The Bottom line
Q&A/Closing
3. Goal:
Examine ways to optimize and get the best out of a JDA Demand
toolset installation post go-live
Objectives:
Putting optimization in context
Business process considerations for optimization
Technical considerations for optimization
The bottom line
Key Takeaways
4.
In a typical optimization problem
The goal is to find the values of controllable factors determining the
behavior of a system (e.g. a physical production process, an investment
scheme) that maximize productivity and/or minimize waste
Here we will look at a given Scenario
Your business has purchased, installed, and implemented JDA’s Demand
Planning Module (and may also have included either or both of the Demand
Classification and Demand Decomposition options)
Time has passed since the original implementation
There is a wish list from the business on things not completed when the
project went live
Senior executives in the company are not seeing the benefits they thought
they would see
5.
Set appropriate thresholds on key forecasting metrics to find exceptions
Wide enough to capture critical exceptions but narrow enough that you can
actually review them all within the planning cycle
Re-evaluate those thresholds periodically
Review Exception DFUs each planning cycle
Chart Aggregate Accuracy/Error
You should use the sum of absolute error: sum(abs(hist - fcst)) / sum(hist)
Not just the net error: sum(hist - fcst) / sum(hist)
You don’t want the over- and under-forecasting to cancel each other out;
you want to know how far off you actually were
6.
Don't overreact to poor accuracy for one period for a single DFU
It may be noise or timing of actual demand
Review ALL DFUs at least once annually
Bring them up in the demand workbench, and look at them
Make sure the model is appropriate
Make sure the history is complete and accurate
Make sure you like the forecast and that it isn’t biased high or low
Do a portion of the total DFUs each period
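The "portion each period" rotation can be sketched in a few lines; the DFU IDs, the batch function, and the 12-period cycle here are illustrative assumptions, not JDA functionality:

```python
# Sketch: rotate through all DFUs so each one is reviewed once per year.
# Assumes 12 monthly planning periods; the DFU IDs are made up.
def review_batch(dfu_ids, period, periods_per_year=12):
    """Return the slice of DFUs due for review in the given 1-based period."""
    batch_size = -(-len(dfu_ids) // periods_per_year)  # ceiling division
    start = (period - 1) * batch_size
    return dfu_ids[start:start + batch_size]

dfus = [f"DFU{i:03d}" for i in range(1, 101)]  # 100 illustrative DFUs
print(review_batch(dfus, 1))   # first 9 DFUs are reviewed in period 1
```

Over the 12 periods every DFU comes up exactly once, which keeps the per-cycle review workload small and predictable.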
7.
Bias is more critical than accuracy on a single DFU
Consistently over-forecasting by 20% is more damaging than over-forecasting 30% one
month, then under-forecasting 30% the next…
Alternating over- and under-forecasting:
Period     Hist   Fcst   Error      Abs Error   Pct Error   Abs Pct Error
Period 1    500    650   (150.00)     150        -30.00%       30.00%
Period 2    650    455    195.00      195         30.00%       30.00%
Period 3    550    715   (165.00)     165        -30.00%       30.00%
Total      1700   1820   (120.00)     510         -7.06%       30.00%

Consistent over-forecasting:
Period     Hist   Fcst   Error      Abs Error   Pct Error   Abs Pct Error
Period 1    500    600   (100.00)     100        -20.00%       20.00%
Period 2    520    650   (130.00)     130        -25.00%       25.00%
Period 3    550    605    (55.00)      55        -10.00%       10.00%
Total      1570   1855   (285.00)     285        -18.15%       18.15%

• In the first example, a period of over-forecasting is followed by a period of
under-forecasting; in total, the DFU was off by 120 units over three periods
for a Forecast Error of 7.06%
• In the second example, the DFU was consistently over-forecasted every period;
in total, the DFU was off by 285 units over three periods for a Forecast Error
of 18.15%
• Although error on a period-by-period basis was worse in the first example,
the net error was better over time
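The two tables can be reproduced with a few lines of code; this is a sketch, not JDA functionality, using sum-of-absolute error over the horizon as the accuracy measure:

```python
# Sketch: net vs. absolute error for the two forecast patterns above.
def summarize(hist, fcst):
    net = sum(h - f for h, f in zip(hist, fcst)) / sum(hist)
    abs_err = sum(abs(h - f) for h, f in zip(hist, fcst)) / sum(hist)
    return net, abs_err

# Alternating over/under forecasting
alt_net, alt_abs = summarize([500, 650, 550], [650, 455, 715])
# Consistent over-forecasting
bias_net, bias_abs = summarize([500, 520, 550], [600, 650, 605])

print(f"Alternating: net {alt_net:.2%}, absolute {alt_abs:.2%}")   # -7.06% / 30.00%
print(f"Consistent:  net {bias_net:.2%}, absolute {bias_abs:.2%}") # -18.15% / 18.15%
```

Note how the net figure makes the alternating pattern look better, while the absolute figure exposes that it missed by 30% every single period.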
8.
Make sure the Forecasting Algorithm is appropriate for the DFU's historical pattern
Demand Classification can help significantly
Each Forecasting Algorithm has a number of parameters (or levers) that you can use to affect the forecast
Within a DFU model, the only way to get the “best” statistical model is through trial & error
Moving each parameter in micro-increments and finding the values that work best in
conjunction with the other parameters
If you take that to the next step and do it for each DFU, using each forecasting algorithm, that is millions
of potential combinations per DFU!
Demand Classification can do those computations for you - setting the “best mix” of
parameters for each DFU - for each model - letting you see which works best for you
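As a tiny illustration of that trial-and-error loop, here is a sketch using simple exponential smoothing as a stand-in for a JDA algorithm's levers; the history values are invented:

```python
# Sketch: micro-increment "trial & error" tuning of one parameter for one DFU.
def ses_abs_error(history, alpha):
    """Total one-step-ahead absolute error of simple exponential smoothing."""
    level, error = history[0], 0.0
    for actual in history[1:]:
        error += abs(actual - level)          # the forecast for this step is `level`
        level = alpha * actual + (1 - alpha) * level
    return error

history = [100, 120, 110, 130, 125, 140, 135, 150]
# Search the smoothing parameter in increments of 0.01
best_alpha = min((a / 100 for a in range(1, 100)),
                 key=lambda a: ses_abs_error(history, a))
print(f"best alpha: {best_alpha:.2f}")
```

One parameter on one DFU is already a 99-point search; multiply by several parameters, several algorithms, and thousands of DFUs, and automating the search (which is what Demand Classification does) is the only practical option.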
9.
The “Best" Statistical Model does not necessarily generate the “Best” Forecast
The numbers don’t always tell the whole story …
There is both Art AND Science involved in Demand Planning
The Science is in the pure number being forecasted by the statistical model
The Art comes in with analyzing the model and making sure it is the best “forecast”
Is there market intelligence that can be incorporated?
- such as a competitor’s product launch or a new marketing campaign
Are you expecting some cannibalization?
Is your product affected by the current economic climate?
10.
A Process by which Sales History data is split into a Time-Series component
and a Marketing/Special Event component
Time Series
Trend
Seasonality
Marketing Component
Promotions: TV Ads, Catalogs, Displays, Coupons
Price Fluctuations
Cross-product effects (halo/cannibalization)
Market Conditions (micro and macro)
Special/Unusual Event
11.
12.
There are so many things that go into generating a Good Forecast
When choosing between similar forecasts, the simpler algorithm (e.g. Fourier
or moving average) is often the best choice
Many of the more advanced models, such as Lewandowski and Holt-Winters, can
be very sensitive to even the slightest changes in historical patterns and tuning
parameters
A simpler model automatically factors out a lot of the noise because it
is less complicated
You can create simple spreadsheets to calculate a Fourier model or moving
average and explain what is happening to all of your stakeholders
(… More on this topic & FVA-Forecast Value Add analysis in a moment)
There are definitely times when a more advanced model is more
appropriate but it doesn’t have to be a one-size-fits-all approach
13. What approach do you use to measure
how well you are Forecasting over time?
Answer on the right hand side of your screen
A. We use MAPE at an appropriate Lag
B. We use a combination of techniques
C. We compare to a naïve model offline
D. We use business intelligence reports
E. I don’t know!
14.
Studied 60,000 forecasts at four supply chain companies
75% of statistical forecasts were manually adjusted
Large adjustments tended to be Beneficial
Small adjustments did not significantly improve accuracy and sometimes
made the forecast Worse
Downward adjustments were more likely to improve the forecast than
upward adjustments
Source: “Good and Bad Judgment in Forecasting.”
Fildes and Goodwin, Foresight, Fall 2007
15.
FVA is defined as the change in a forecasting performance metric (whatever metric
you happen to be using, such as MAPE, forecast accuracy or bias) that can be
attributed to each particular step and participant in your forecasting process
FVA analysis also compares both the statistical forecast and the analyst forecast to
what’s called a naïve forecast
In FVA analysis, you would compare the analyst’s override to the statistically
generated forecast to determine if the override makes the forecast better
In this case, the naïve model was able to achieve MAPE of 25%
• The statistical forecast added value by reducing MAPE five
percentage points to 20%
• However, the analyst override actually made the forecast worse,
increasing MAPE to 30%
• The override’s FVA was five percentage points less than the naïve
model’s FVA, and was 10 percentage points less than the
statistical forecast’s FVA
Source: Michael Gilliland, SAS - Chicago APICS 2011
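The FVA arithmetic in this example is simple enough to verify directly (the MAPE figures are the ones cited in the session):

```python
# Sketch: FVA accounting for the naïve / statistical / override chain above.
naive_mape = 0.25   # naïve model benchmark
stat_mape  = 0.20   # statistical forecast
final_mape = 0.30   # after the analyst override

fva_stat     = naive_mape - stat_mape    # value added by the statistical model
fva_override = stat_mape - final_mape    # value added (or destroyed) by the override

print(f"Statistical FVA: {fva_stat:+.0%}")      # +5% vs. the naïve model
print(f"Override FVA:    {fva_override:+.0%}")  # -10% vs. the statistical forecast
```

A negative FVA at any step means that step is consuming effort while making the forecast worse, which is exactly what the analyst override does here.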
16.
Comparing your Forecast with a Naïve Model
The “Random Walk”, also called the “no-change”
model, uses your last-known actual value as the
future forecast
For example: if you sold 12 units last week, your forecast is
12. If you sell 10 this week, your new forecast becomes 10
The “Seasonal Random Walk” uses something such
as the same period from a year ago as your
forecast for this year
For example: if last year you sold 50 units in June and 70
units in July, your forecast for June and July of this year
would also be 50 and 70
A “Moving Average” is also suitable to use as your
naïve model, because it’s also simple to compute and
takes minimal effort.
The duration of the moving average is up to you - A full
year of data (12 months or 52 weeks) has the advantage
of smoothing out any seasonality
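Each of the three benchmarks can be written in a line or two; the sales series here is invented for illustration:

```python
# Sketch of the three naïve benchmark models described above.
def random_walk(history):
    """No-change model: the last actual becomes the forecast."""
    return history[-1]

def seasonal_random_walk(history, season_length=12):
    """The same period one season (e.g. a year) ago becomes the forecast."""
    return history[-season_length]

def moving_average(history, window=12):
    """Mean of the last `window` periods; a full year smooths out seasonality."""
    recent = history[-window:]
    return sum(recent) / len(recent)

sales = [50, 70, 65, 60, 80, 75, 55, 60, 70, 85, 90, 100, 52, 72]
print(random_walk(sales))           # 72 (last actual)
print(seasonal_random_walk(sales))  # 65 (12 periods back)
print(moving_average(sales))        # mean of the last 12 periods
```

If your statistical forecast cannot beat models this simple, the effort spent on it is not adding value.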
17. How would you classify your level of interest in
the technology behind the JDA applications?
Answer on the right hand side of your screen
A. Purely business functional that’s IT’s job –
High Level “stuff” only
B. Some business some technical –
Mid-level “stuff” only
C. I have a Technical background –
Give me those details!!
*Answer will affect how technically in-depth we go for the rest of the presentation
18.
In the database management systems developed by the Oracle Corporation, the
System Global Area (SGA) forms the part of the RAM shared by all the processes belonging
to a single Oracle database instance
The SGA contains all information necessary for the instance operation
In general, the SGA consists of the following:
Dictionary Cache: information about data dictionary objects, such as accounts, data files, segments,
extents, tables and privileges
Redo Log Buffer: holds change records that the database has not yet written to the online
redo log files
Buffer Cache or "database buffer cache": holds copies of data blocks read from data files
Shared Pool: the cache of parsed, commonly used SQL statements
(the dictionary cache above is itself part of the shared pool)
Java Pool: memory used for Java code and data within the database's JVM
Large Pool: optional area for large allocations (and, with shared server, the User Global Area (UGA))
19.
The Program Global Area (PGA) memory area of an Oracle instance contains
data and control information for Oracle's server processes
The size and content of the PGA depend on the Oracle server options installed
The PGA consists of the following components:
• Stack Space: the memory that holds the session's variables, arrays, etc.
• Session Information: unless using the multithreaded server, the instance stores its
session information in the PGA (in a multithreaded server, this goes in the SGA)
• Private SQL Area: an area which holds information such as bind variables and runtime buffers
• Sort Area: an area in the PGA which holds information on sorts, hash joins, etc.
Source: Oracle Corp 2012
20.
An SGA sized too small can cause disk thrashing, as data blocks are repeatedly read
from disk instead of remaining in memory
Oracle has gotten better about SGA management with newer memory
management and the all-encompassing MEMORY_TARGET parameter, which can
dynamically set the PGA and SGA memory sizes
However, an incorrect MEMORY_TARGET or override values can cause the PGA and/or
the SGA to be too small
The DB_CACHE_SIZE should be sized to fit the largest amount of data needed to satisfy
a given query
This can be determined by monitoring the SGA
21.
In computer data storage, data striping is the technique of segmenting logically
sequential data (i.e. a file) so that sequential segments are stored on different
physical storage devices
Striping is useful when a processing device requests data more quickly than a single
storage device can supply it. By spreading segments across multiple devices, multiple
segments can be accessed concurrently
This provides more data access throughput, which avoids leaving the processor idly
waiting for data accesses
Striping is used across:
disk drives in RAID storage
network interfaces in grid-oriented storage
RAM in some systems
Source: Microsoft 2011
22.
Over the course of several JDA implementations we have observed that
I/O bottlenecks can also occur in systems where the data is not properly
striped across disks
This was more of a problem in the past when array caches were smaller and
volume management software was less efficient
However, an incorrect implementation can still cause hotspots where disk
thrashing can occur, because you may end up with multiple parallel processes
attempting to pull data from the same disk
Batch-process-related tuning:
Optimizer statistics can be a problem with JDA
The PROCESS% tables are particularly sensitive; depending on the volume of data
being processed, they benefit from either having NULL statistics or stored statistics
based on the maximum size of the table
23.
Adding hints and adjusting values for the given process in the sre_node_config_props table
JDA allows the option of adding Oracle hints to this table, which are added to the dynamic SQL when it is
created. For example, we have seen Calc Model in particular take advantage of these options. This is very
dependent on the type of data you have
Adjusting the number of nodes
This is highly dependent on the hardware (primarily the amount of memory available),
the JDA process you are having problems with (not all of them even take advantage of the SRE),
and the amount of data you are processing (we have seen as few as 3 and as many as 20 nodes
being optimal for a given process)
From version 7.6 onwards, a grid status monitor capability is available
Adjusting the Java heap size of the nodes within the pool
With the same caveats as above, with sizes as low as 32 MB and up to 1,536 MB being
appropriate based on which processes are assigned to a given node pool
Adjusting the number of node pools
Most clients will want to have a node pool with a small heap size (e.g. 32 MB) set up for JDA’s
daemon (continuous) processes (e.g. CalcModelUI) and any stored-procedure processes
24. What would you say was the biggest technical issue
you’ve experienced with the application?
Answer on the right hand side of your screen
A. Batch related problems
B. Systems availability (e.g. Uptime)
C. None at all! It works just fine
D. Slow data refreshes on-screen
E. Other
25.
Flexible Editor (FE) Pages and Searches need to have a periodic review
Work with the functional team to identify the Search and FE pages they
leverage in their current business process
Adjust FE pages and searches to remove tables or search conditions that are no
longer needed
Capture the SQL and review it to determine if it is in need of optimization
FE pages can perform poorly after data volumes change or after
upgrades.
For example:
FE page for DFUMap table viewing - the page was constructed with MAP as the
primary table within the FE page and the DFUMap table as secondary
A previous version of the application and Oracle handled this fine. With the newer
version, this caused excessive full table scans, resulting in undesirable
performance and wait times
A change to make DFUMap the primary table restored acceptable performance
26.
End-user and batch latency can be improved with defined
housekeeping
Stale or obsolete DFU views should be identified for deletion based on
the following criteria:
DFUView has no history records, including zero-value records, in the last 3 years
DFU has no Histfcst records for 3 years
Records in FCSTDRAFT or FCST do not exist or = 0
UDC - Lifecycle indicator value is not set to indicate a New Item
UDC - Dependent Demand Flag <> Yes
A SQL script will be written to identify and mark DFUView for deletion
To avoid performance issues, it is advised to remove records from certain child tables
explicitly before deleting records from master tables
Script Name: jda_del_dfu.sql
27.
Tables to consider for record deletion prior to removing
DFUView records
(it is recommended to delete records explicitly from its child tables)
Dfumap
Fcstdraft
Fcstperfstatic
Fittedhist
Histfcst
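As a rough sketch of the child-first deletion order (this is not JDA's actual jda_del_dfu.sql; the DFU key columns and the DELETE_SW marker flag are assumed names for illustration), a script generator might look like:

```python
# Sketch: generate DELETEs for the child tables above before the DFUVIEW parent.
# Key columns (dmdunit, dmdgroup, loc) and the DELETE_SW flag are assumed names.
CHILD_TABLES = ["DFUMAP", "FCSTDRAFT", "FCSTPERFSTATIC", "FITTEDHIST", "HISTFCST"]

def deletion_statements(parent="DFUVIEW", flag="DELETE_SW"):
    """Child tables first, parent last, to avoid orphans and constraint errors."""
    keys = "dmdunit, dmdgroup, loc"
    stmts = [
        f"DELETE FROM {t} WHERE ({keys}) IN "
        f"(SELECT {keys} FROM {parent} WHERE {flag} = 1)"
        for t in CHILD_TABLES
    ]
    stmts.append(f"DELETE FROM {parent} WHERE {flag} = 1")
    return stmts

for stmt in deletion_statements():
    print(stmt)
```

Deleting explicitly from the child tables first, in manageable batches, keeps the final DFUView delete fast and avoids the performance issues noted above.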
28.
Version 7.7 introduced
the option of moving from vertical storage to a compressed format
29. Unrealistic expectations
When a project team estimates the time it will take to complete the implementation, they
tend to skip important activities such as organizational change management or process
design
Not adequately accounting for business-oriented activities
While it is relatively straightforward for a software vendor to estimate how long it will take to
install and set up software, it is much more difficult to predict activities that are not directly related
to the software, such as defining business processes and making decisions about how the software
should be configured
Lack of resources
There are many companies that agree to a project plan with their vendor, only to find
that the two parties have differing expectations of how many and what types of people
will support the project
Software customization
Project teams will inevitably find at least a handful of functionality gaps that they would like to
address by changing the software
It is important to prioritize and to limit the amount of customization to help contain costs
30. Initiate a Post Go-Live assessment
Preferably within 2-4 full planning cycles of Go-Live
This can be done externally to prevent “we put it in“ bias
IT Systems DO have a product lifecycle too
While it may not be as rapid as the cell phone market,
very few people are still using the very first iPhone model!
Optimizing can turn a disappointing initial ROI
into a better one!
31. Business Process Optimization
Don’t create a statistical forecast when you don’t need one
Get the balance right between science and art
Good analytics make prioritizing effort “fact” based
Over analyzing causes paralysis and wasted effort
Setting exceptions tolerance and “alerts” to the level your team can cope with is
critical
Technical Optimization
System uptime is key: if users can’t get on, they can’t work
Understanding JDA’s integration with Oracle is important for someone within the
business, even if technical support is outsourced
The more you customize, the harder it is to support
Slow FE page response can be due to badly structured queries; this can be improved
by greater education on “data selection” functionality
32.
For more information about Plan4Demand or Optimizing JDA
Contact: Jaime.Reints@Plan4Demand.com