
High Volume Service Parts Management

Performance and Scalability of the SAP Solution for


Service Parts Management with IBM System p5, AIX
and DB2 9

A document from the joint IBM/SAP collaboration projects for solution and technology
enablement.

SAP Solution for Service Parts Management (SPM)
Including Service Parts Planning, Fulfillment and Warehousing
DB2 9 for Linux, UNIX and Windows
IBM DS8300 Storage
IBM System p

High Volume Service Parts Management

SAP Solution for Service Parts Management with


IBM System p, AIX and DB2 9
- Proof of Performance and Scalability -
IBM SAP International Competence Center
Walldorf, Germany

IBM SAP Solutions Center, PSSC


Montpellier, France

DB2 /SAP Centre of Excellence,


SAP DB2 Development/Porting Center,
IBM Lab Böblingen, Germany

SAP Performance, Data Management and Scalability


SAP AG, Walldorf, Germany

SAP Business Unit Service and Asset Management


SAP AG, Walldorf, Germany

SAP Center of Expertise Logistics


SAP AG, Walldorf, Germany

e-business solutions Technical Sales Support for SAP,


IBM, Hamburg, Germany

SAP Center of Excellence (CoE), SAP AG, Walldorf, Germany

Version 2.1
August 2007

1 Preface ......................................................................................................................................8
1.1 Document Scope ..............................................................................................................8
1.2 Special Notices .................................................................................................................8
1.3 Authors of this Document ................................................................................................8
1.4 With gratitude and acknowledgement of our sponsors ....................................................8
1.5 Project Team ....................................................................................................................9
2 Introduction ..............................................................................................................................9
3 Test Scope and Goals .............................................................................................................10
3.1 Target KPIs (Key Performance Indicators)....................................................................11
3.2 Results ............................................................................................................................12
4 Executive Summary ...............................................................................................................13
5 SAP Solution for Service Parts Management ........................................................................14
5.1 SAP Solution for Service Parts Management - Solution Overview...............................14
5.2 Service Parts Management: Architecture Overview ......................................................14
5.3 Logical Infrastructure Mapping and Methodology ........................................................16
6 Service Parts Planning............................................................................................................18
6.1 Planning Service Manager (PSM)..................................................................................19
6.2 Test Landscape for Service Parts Planning ....................................................................22
6.3 SPP Data Model in Test Landscape ...............................................................................23
6.3.1 Master data .............................................................................................................23
6.3.2 Demand Data..........................................................................................................24
7 Service Parts Planning Test Scenarios ...................................................................................25
7.1 Stocking/Destocking ......................................................................................................25
7.1.1 Business Process Description.................................................................................25
7.1.2 How was it tested? .................................................................................................26
7.1.3 Results and Scalability ...........................................................................................27
7.2 Forecasting .....................................................................................................................30
7.2.1 Business Process Description.................................................................................30
7.2.2 How was it tested?..................................................................................................31
7.2.3 Test Results and Scalability ...................................................................................32
7.2.4 Recommendations ..................................................................................................36
7.3 Economic Order Quantity and Safety Stock Calculation...............................................37
7.3.1 Business Process Description.................................................................................37
7.3.2 How was it tested?..................................................................................................37
7.3.3 Input .......................................................................................................................37
7.3.4 Output.....................................................................................................................37
7.3.5 Test set up...............................................................................................................37
7.3.6 Results and Scalability ...........................................................................................38
7.4 Distribution Requirements Processing (DRP)................................................................40
7.4.1 Business Process Description.................................................................................40
7.4.2 How was it tested?..................................................................................................40
7.4.3 Results and Scalability ...........................................................................................41
7.4.4 Recommendations ..................................................................................................44
7.5 Deployment ....................................................................................................................45
7.5.1 Business Process Description.................................................................................45
7.5.2 How was it tested?..................................................................................................46
SAP SOLUTION FOR SPM
PERFORMANCE AND SCALABILITY

7.5.3 Results and Scalability ...........................................................................................46


7.6 Service Parts Planning – General Recommendations ....................................................49
7.6.1 Application Recommendations ..............................................................................49
7.6.2 Infrastructure Recommendations ...........................................................................53
7.6.3 Data Base Recommendations.................................................................................53
8 Service Parts Fulfillment – Order Management.....................................................................59
8.1 Business Process Description.........................................................................................59
8.1.1 Global Availability Check (gATP) ........................................................................62
8.1.2 Pricing ....................................................................................................................63
8.1.3 Credit Check...........................................................................................................64
8.1.4 Global Trade Service (GTS) ..................................................................................64
8.2 Performance Test............................................................................................................65
8.2.1 Master Data Setup ..................................................................................................65
8.2.2 Test Execution........................................................................................................68
8.2.3 Test Landscape for Service Parts Fulfillment ........................................................69
8.3 Results and Scalability ...................................................................................................70
8.3.1 Results Summary....................................................................................................70
8.3.2 Test Case 1: Varying number of Line Items per Order..........................................70
8.3.3 Test Case 2: Varying the Degree of Parallelisation ...............................................72
8.3.4 Detailed Analysis of High Load Run .....................................................................74
8.3.5 qRFC Queues .........................................................................................................79
8.4 Recommendations ..........................................................................................................80
8.4.1 Application Recommendations ..............................................................................80
8.4.2 Infrastructure Recommendations ...........................................................................81
8.4.3 Database Recommendations ..................................................................................81
8.5 Service Parts Fulfillment – Customer Stress Test ..........................................................83
8.6 Combined Load and the Shared Processor Pool ............................................................86
8.6.1 General Recommendations for Service Parts Management Landscape.................87
9 Service Parts Warehousing.....................................................................................................91
9.1 Business Process Description for Outbound Processing with Warehouse Management
in SCM .......................................................................................................................................91
9.1.1 Purpose ...................................................................................................................91
9.1.2 Process Flow ..........................................................................................................92
9.2 How was it tested? ........................................................................................................94
9.3 Results and Scalability ...................................................................................................94
9.4 Recommendations ..........................................................................................................99
9.4.1 Application Settings ...............................................................................................99
9.4.2 Database Settings .................................................................................................100
10 Benefits of Advanced Power Virtualization.....................................................................101
11 Database Technology .......................................................................................................102
11.1.1 Properties of DB2 for Linux, UNIX and Windows .............................................102
11.1.2 DB2 Configuration used in the project ................................................................106
12 Hardware Infrastructure ...................................................................................................110
12.1 Attributes of the IBM POWER5 Server.......................................................................110
12.2 POWER5 SAP Configuration ......................................................................................112
12.3 Storage Technology......................................................................................................113

13 Appendix: .........................................................................................................................115
14 Copyrights and Trademarks .............................................................................................116


1 Preface
1.1 Document Scope
This joint IBM/SAP document describes a performance project for the management of high
volume service parts in the automotive industry using the SAP solution for Service Parts
Management – edition 2005. The tests simulate the high volume requirements of a large
automotive OEM. This series of tests was conducted as an enablement project, the goal of
which is to ensure the quality, performance and stability of the solution under high load, and
to demonstrate scalability on a high-performance infrastructure. The purpose of such
enablement tests is also to ensure the quality of the product and its integration with the
supporting infrastructure. The tests cover three separate scenarios: Service Parts Planning,
Service Parts Fulfillment, and Service Parts Warehousing.

This document covers the customizing choices for the solution and describes the scenarios tested.
It covers the infrastructure basis, how the SAP components were configured on the infrastructure
and the reasons for the design. Since the Service Parts Planning scenarios are much more
database-centric than those of the traditional SCM APO, the document also covers the database
approach, design and tuning recommendations as implemented on the IBM DB2 database.
This document was written to support sizing efforts and implementation best practices, with the
expectation that this information can benefit other teams designing, implementing, or
restructuring the business processes for the high volume parts business.

1.2 Special Notices


Copyright© IBM Corporation, 2007 All Rights Reserved.
All trademarks or registered trademarks mentioned herein are the property of their respective
holders.

1.3 Authors of this Document


o Tanja Baeck, Solution Manager – BU Service and Asset Management, SAP AG
o Dr. Hubertus Oswald, Performance Architect, SAP AG
o Carol Davis, Senior pSeries Technical Support - ISICC, IBM
o Brigitte Blaeser, Senior Developer, IBM SAP DB2 Development/Porting Center, IBM
o Brian Carter, Solution Manager, SAP Labs, LLC – BU Service and Asset Management
o Marc-Stefan Tauchert, e-business solutions Technical Sales Support for SAP, IBM
o Walter Orb, Senior pSeries Technical Support - ISICC, IBM
o Peter Jaeger, Senior Technical Consultant, CoE Performance Lab, SAP AG

1.4 With gratitude and acknowledgement of our sponsors


Dieter Haesslein – Head of Solution Management BU Service and Asset Management - SAP
Robert Reuben – System p Technical Sales Support Manager, NE and SW IOT, IBM
Rainer Staib – Manager SAP DB2 Center of Expertise, Boeblingen Lab, IBM
Pierre Sabloniere - System Storage TSM Technical Enablement Leader, IBM


Dr. Antonio Palacin – Director IBM SAP International Competence Centre, IBM
Dr. Ulrich Marquard – Senior Vice President Performance, Data Management and Scalability,
SAP

1.5 Project Team


Role /Area responsible Person responsible Company
Project Lead IBM Carol Davis IBM Germany
Performance and Benchmark Expert Walter Orb IBM Germany
Technology – Database DB2 Brigitte Blaeser IBM Germany
Technology – Database DB2 Waldemar Gaida IBM Germany
Technology – AIX SAP Performance Marc-Stephan Tauchert IBM Germany
Technology – AIX Joergen Berg IBM France
Project Lead SAP Tanja Baeck SAP AG
Warehouse Management Brian Carter SAP Labs, LLC
Performance Lab Lead Peter Jaeger SAP AG
Performance Expert, Data Preparation Hans-Peter Seitz SAP AG
Performance Expert Michael Stafenk SAP AG
Performance Architect Hubertus Oswald SAP AG
Project Steering and Management Gordon Watson IBM France
Project Steering and Management Eric Cicchiello IBM France
Performance Expert Marco Vieth SAP AG
Performance Expert Nikolai Sauerwald SAP AG
Technology – LiveCache Grzegorz Posyniak SAP AG
Warehouse Management Andreas Daum SAP AG
Development

2 Introduction
The SPM Performance Project was successfully carried out from December 2006 to March 2007
as a joint project between SAP and IBM. This project grew out of an earlier project in which IBM
Walldorf provided infrastructure and support for the functionality testing and low-end scalability
of the SPM solution. The high-end scalability tests were the logical next step to ensure upward
scalability and overall performance under very high load, as a number of interested customers
were expected to fall into this category. These tests also provided IBM the opportunity to
demonstrate the strengths of its hardware infrastructure and the versatility of the IBM DB2
database to meet the requirements of the high-load scenario for SPM. SPM is currently unique
amongst SAP Supply Chain scenarios in its database demand, as functionality normally located
in the liveCache is implemented, for SPM, using the database.

This project represents the successful collaboration between SAP and IBM. SAP is represented
by the SAP BU Service and Asset Management, the Center of Excellence, and several of the
development groups. The IBM team included the IBM SAP International Competence Center, the
IBM SAP DB2 CoE and SAP DB2 Development/Porting Centre, all in Germany, and the IBM


SAP Solution Center in France. Many different organizations thus contributed their expertise to
the success of this enablement PoC.

SAP and IBM Germany provided the specific skills around the application, business processes
and performance. The IBM PSSC Montpellier, home of the SAP Solution Center, traditionally
supports major customer stress tests and high-end benchmark requirements. They provided the
high-end landscape and infrastructure expertise.

The goals of the project were to meet the high-load expectations for three differing scenarios and
to study the behavior of the load under varying conditions, deriving recommendations for
implementation and tuning best practices. A goal for IBM was to ensure that the IBM
infrastructure, comprising System p, IBM System Storage, and DB2 UDB, put its stake in the
ground as the first proven landscape for this solution.

The scope covered a number of extremely complex scenarios: Service Parts Planning in
SCM, the order management process of Service Parts Fulfillment, which spans an integrated
landscape of SCM, ERP, and CRM, and the extended warehouse solution in SCM. These were
the first high-end scalability tests done for the SPM solution. Indeed, the tests for the service
parts warehousing solution required the scripting of a totally new benchmark driver suite to
simulate the high-load online user devices.

The scenarios used in these tests were based on customer feedback and implemented to simulate
a real customer situation, and the data volume used was derived from actual current customer
requirements.

3 Test Scope and Goals


The goals and objectives of the performance test were to perform integrated high volume tests
for the SAP solution for Service Parts Management (edition 2005) to:

1. Get realistic, proven benchmarks for the performance critical SPM processes
2. Verify sizing recommendation
3. Optimize application and hardware settings
4. Identify software and hardware bottlenecks


The scope of the performance test was to perform high volume tests for:

1. Service Parts Planning


o Stocking/Destocking
o Demand Forecasting
o EOQ
o DRP
o Deployment

2. Service Parts Fulfillment – Order Management

3. Service Parts Warehousing


o Complex Outbound Processing

3.1 Target KPIs (Key Performance Indicators)

For the performance tests, an implementation of the SAP solution for Service Parts Management,
based on the requirements of three large automotive customers, was simulated. The resulting
fictitious company “OP Parts Inc.” provides the challenges for our proof of scalability and
performance:
• Service Parts Planning: Have the weekly parts planning volume (2 million part-locations)
processed on Sundays – within 24 hours
• Service Parts Fulfillment: Have 150,000 sales order lines created in an hour (5 order
lines/order)
• Service Parts Warehousing: Process 10,000 – 12,000 order lines per hour (2 – 3 times
typical warehouse volumes for large distribution centers)
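For configuring a load-test driver against these targets, the KPIs can be restated as sustained per-second rates. The following is a minimal sketch; the helper name and the Python form are illustrative only and not part of the SAP solution:

```python
# Restating the "OP Parts Inc." targets as sustained per-second rates.
# The helper name is illustrative; the volumes are the KPI figures above.

def target_rate_per_second(volume, window_hours):
    """Throughput (items/second) implied by a volume/time-window KPI."""
    return volume / (window_hours * 3600)

# Planning: 2,000,000 part-locations within a 24-hour window
planning_rate = target_rate_per_second(2_000_000, 24)   # ~23.1 part-locations/s

# Fulfillment: 150,000 sales order lines per hour
fulfillment_rate = target_rate_per_second(150_000, 1)   # ~41.7 lines/s

# Warehousing: 12,000 order lines per hour (upper end of the target)
warehousing_rate = target_rate_per_second(12_000, 1)    # ~3.3 lines/s
```

Seen this way, the fulfillment target is the most demanding sustained rate, while planning trades rate for a very large total volume inside its batch window.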

The implementation of the solution should support the strategic goals of “OP Parts Inc.” to be
the best-in-class automotive OEM:
• Revenue growth through increased customer retention and brand loyalty
• Competitive differentiation through greater organizational agility and adaptability
• Superior visibility into parts operations across the entire service parts supply chain
o Better collaboration with suppliers, partners, customers, and employees.

The volumes selected for these tests focus on the combined high end requirements of several
large automotive manufacturing companies, represented by a proxy company referred to as “OP
Parts Inc.”. By satisfying the requirements of OP Parts Inc., the solution proves it can address the
expectations of the most demanding companies. This message is also emphasized by the
integration of an actual “Customer Test Case”, which was carried out in parallel within this
enablement project, for one of the companies merged into the OP Parts Inc. requirements. This
customer case became critical during the enablement project and diverted the focus of the project
for the time it took to verify this specific requirement. This interruption underlines the timeliness
of this enablement project plan, and the applicability of the test scenario to real-life needs.


3.2 Results
All benchmark goals were met and the performance results exceeded expectations, especially
for Service Parts Planning.

Service Parts Planning


Service Parts Planning (Runtime in minutes)

[Bar chart: goal vs. achievement runtimes for Stocking, Forecasting, EOQ, DRP and
Deployment, plotted on a 0 – 350 minute scale]

Graph 3.2-1: Achievement vs. Target in overall runtimes

Instead of the targeted 24 hours, the whole planning run can be processed within 5 hours and 16
minutes.

                 Goal      Achievement
Stocking                   20 min
Forecasting                124 min
EOQ and SS                 43 min
DRP                        56 min
Deployment                 73 min
Total            24 hrs    5:16 hrs (316 min)

Graph 3.2-2: Achievement vs. Target for overall planning day
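As a quick consistency check, the phase runtimes in the table can be summed to confirm the reported 316-minute total for the planning day. The sketch below is pure arithmetic; the dict is illustrative, not an SAP data structure:

```python
# Summing the phase runtimes from the table above to confirm the
# reported 316-minute total for the whole planning day.

phase_runtimes_min = {
    "Stocking": 20,
    "Forecasting": 124,
    "EOQ and SS": 43,
    "DRP": 56,
    "Deployment": 73,
}

total_min = sum(phase_runtimes_min.values())
hours, minutes = divmod(total_min, 60)
print(f"{total_min} min = {hours}:{minutes:02d} h")  # prints "316 min = 5:16 h"
```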

Service Parts Fulfillment


The requirement of creating 150,000 order lines (30,000 orders with 5 line items each) within an
hour was achieved: 286,200 order lines, or 57,240 sales orders with 5 order lines each, were
created within an hour.

                      Goal          Achievement
150,000 order lines   60 min        28.9 min
Total                 150,000/hr    311,965/hr¹

Graph 3.2-3: Achievement vs. Target – service parts fulfillment

¹ These results are taken from order fulfillment using 40 parallel processes.
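The hourly figure can be cross-checked against the measured elapsed time with a small arithmetic sketch (pure Python, no SAP API involved; the ~0.2% gap is consistent with 28.9 being rounded to one decimal):

```python
# Cross-checking the fulfillment result: 150,000 order lines created in
# 28.9 minutes extrapolate to roughly 311,400 lines/hour, within ~0.2%
# of the reported 311,965/hour figure from the table above.

lines_created = 150_000
elapsed_min = 28.9
reported_rate = 311_965  # lines/hour, from the table above

hourly_rate = lines_created / elapsed_min * 60
deviation = abs(hourly_rate - reported_rate) / reported_rate  # ~0.0018
```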


Service Parts Warehousing


The team was able to successfully process 2 to 3 times a typical distribution center’s outbound
volume, in terms of lines per hour, through the extended warehouse management module. By
processing nearly 14,000 lines per hour at only 34,000 SAPS, the team demonstrated the ability
of the extended warehouse management to easily handle one or more typical large scale
distribution centers.

             Goal      Achievement
Lines/hr     12,000    13,729
Steps/sec    23.3      26.7

Graph 3.2-4: Achievement vs. Target – service parts warehousing
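The two warehousing KPIs can be related to estimate the dialog steps consumed per order line. Note that the roughly-seven-steps-per-line ratio is derived here and is not stated in the source document:

```python
# Relating the two warehousing KPIs: 26.7 dialog steps/second against
# 13,729 order lines/hour implies roughly 7 dialog steps per order line.
# This ratio is a derived approximation, not a figure from the report.

lines_per_hour = 13_729
steps_per_second = 26.7

steps_per_hour = steps_per_second * 3600          # ≈ 96,120 steps/hour
steps_per_line = steps_per_hour / lines_per_hour  # ~7.0
```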

4 Executive Summary
The performance project described in this document simulates an implementation of the SAP
solution for Service Parts Management (edition 2005) at a large automotive customer, including:
• SAP SCM – Service Parts Planning
• SAP CRM – Service Parts Fulfillment
• SAP SCM – Service Parts Warehousing with SAP Extended Warehouse Management
• SAP ERP – Service Parts Management Execution

The performance project was executed to prove the high performance and high scalability of the
solution for Service Parts Management and to be able to provide more reliable sizing
recommendations. The solution was developed together with Caterpillar Logistics Services Inc.
and Ford Motor Company to manage high-volume service parts operations in complex multi-tier
networks.

For SAP this was a technical proof of concept, a proof of strength and scalability of the solution.
For IBM it was a demonstration of IBM technology, using new features and functionality to
provide the most responsive, most flexible, and most scalable solution, taking advantage of both
System p virtualization and DB2 data compression.

The test results demonstrate the combined strength of the SAP solution and the IBM
infrastructure which together achieved the current service parts management requirements of
several of the largest automotive customers. Within the scope of this project, a special request
from a major German automotive manufacturer with a critical load requirement for order
fulfillment was also addressed. This additional high volume test proved that the results of the
enablement project were immediately applicable to real life, adding additional validity to the
proof of concept. The results of this customer test are also covered in this document.


5 SAP Solution for Service Parts Management


5.1 SAP Solution for Service Parts Management - Solution Overview

• Service Parts Planning includes the latest forecasting and inventory planning models,
which provide dramatic improvements in service levels and reductions in inventory costs.
The solution is designed for the unique and demanding needs of service parts.
• Service Parts Procurement efficiently manages all purchasing activities and increases
collaboration with all the suppliers.
• Service Parts Warehousing improves parts workflow and processes in parts distribution
centers. The capabilities support collaborative processes between multiple warehouses,
suppliers, customers, logistics service providers and other business partners.
• Service Parts Fulfillment supports the ultimate goal of a parts organization to fill service
parts orders for customers and dealers. It includes the complete order processing and
order fulfillment functionality.
• Service Parts Transportation manages all inbound and outbound transportation
shipments.

Unique Attributes
• Complete - enables all aspects of planning, procurement, warehouse management,
fulfillment, transportation, collaboration and analytics
• Scalable - supports small to very large service parts operations
• Optimal - combines scientific and practical approaches
• Global - provides worldwide accessibility and visibility
• Adaptive - adapts to a dynamic, constantly changing environment

5.2 Service Parts Management: Architecture Overview

The functions for the service parts management processes were developed on the system
landscape containing SAP ERP 2005, SAP CRM 2005, SAP SCM 2005 (including SAP APO -
Advanced Planner and Optimizer, SAP SNC -Supply Network Collaboration and SAP EWM -
Extended Warehouse Management) and SAP Netweaver (including SAP XI Exchange
Infrastructure and SAP BI Business Information Warehouse).

Functions and scenarios of the SAP solution for Service Parts Management are highly integrated,
and some of the business scenarios run across components, e.g. for a sales order scenario:
• Order entry in CRM
• Availability check in SCM and credit check in the ERP system
• Deliveries processing in ERP


• Outbound processing in SCM – Extended Warehouse Management

The following graphic provides a high-level overview of the interfaces and components of the
SAP solution for Service Parts Management.

Service Parts Management - Architecture Overview:

[Architecture diagram: SAP CRM 2005 (Sales, Marketing, Billing, Service; CRM Middleware;
Internet Pricing Configurator; GTS 7.0 with Compliance Management and Customs
Management) and SAP SCM 2005 (SAP APO with Global ATP and Parts Planning; SAP SNC
(ICH); SAP Extended Warehouse Management; SCM Basis with Master Data), both on SAP
NetWeaver 2004s, integrated with SAP ERP 2005 (Materials Management, Finance/Controlling,
Logistics Execution, EH&S, Master Data), Exchange Infrastructure 7.0 and Business Warehouse
7.0 via RFC, EDI/IDoc, XML adapters and the Core Interface]

Diagram 5.2-1: SPM Architecture Overview


5.3 Logical Infrastructure Mapping and Methodology

Diagram 5.3-1: SPM Landscape Architecture Overview

The Infrastructure Landscape Overview


The above graphic depicts the infrastructure used for the test landscape. A single IBM System p5
595 was used to house the three systems comprising the SAP landscape. The p5-595 is
configurable into multiple logical partitions; each then represents a physical server. The p5-595
supports processor virtualization or shared processors. The shared processor pool (described in
more detail later in this document) allows the same physical processors to be used by multiple
LPARs. The benefit is that the physical processors do not have to be separated into groups of
dedicated resources, but can be used to address the combined load of several LPARs, reacting in
millisecond time slices to the shifting load peaks. Each system was implemented as a 2-tier
configuration (all components on a single server), each within a separate logical partition.
The LPARs themselves were SPLPARs (shared processor LPARs). The 3 SPLPARs all share the
same 64 CPUs in the shared processor pool. The landscape was implemented in this way to allow
for the greatest flexibility. As the load shifts over this complex integrated SAP landscape, the
processing capacity of the machine can follow it. This is a benefit in production systems and not
just in a test landscape. However, in this test landscape this was very useful as the focus of the
tests moved between planning (SCM focus), order fulfillment (integrated scenario) and EWM
(SCM focus) according to day/night schedules. Planning is primarily batch, whereas the
integrated scenario was run during the day to allow performance metrics to be gathered on all
three systems, and the EWM is an online scenario requiring simulated users.
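The effect of the shared pool described above can be illustrated with a deliberately simplified allocation model. This is not the POWER Hypervisor's actual scheduling algorithm; entitlements and uncapped weights are ignored, and the LPAR names and demand figures are hypothetical:

```python
# Deliberately simplified model of an uncapped shared processor pool:
# each SPLPAR receives its momentary demand as long as the pool can
# cover the combined demand; otherwise all demands are scaled down
# proportionally. Entitlements and weights are ignored for brevity.

POOL_CPUS = 64  # size of the shared pool in this landscape

def allocate(demands, pool=POOL_CPUS):
    """Return per-LPAR CPU grants for one scheduling interval."""
    total = sum(demands.values())
    factor = min(1.0, pool / total) if total else 1.0
    return {lpar: d * factor for lpar, d in demands.items()}

# Night: the planning batch load is concentrated on the SCM system,
# which can therefore draw on nearly the whole 64-CPU pool.
night = allocate({"SCM": 58.0, "ERP": 3.0, "CRM": 2.0})

# Day: the integrated fulfillment scenario spreads the same pool
# across all three systems as the load shifts between them.
day = allocate({"SCM": 24.0, "ERP": 22.0, "CRM": 18.0})
```

With dedicated partitions, the night-time SCM batch would be capped at its fixed share; with the shared pool, the same 64 CPUs follow the load between the three systems.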


Methodology
The methodology used for the hardware sizing was “big box”. In this method the focus is on the
application and possible application level bottlenecks in scaling. For this methodology, the
objective is to have the hardware exceed the application capacity to avoid any hardware
bottlenecks. In support of this, the memory sizes for the LPARs are large and each of the LPARs
can access up to 64 physical CPUs. Normally after a “big box” test, the system is reduced or
“shrink wrapped” to ensure that the predicted resource requirements are sufficient. In the case of
this enablement project, this was not possible. The project was diverted in mid-stream to satisfy a
very real customer requirement, performing a very high volume of service parts order fulfillment
within an extremely short time window: the critical requirements of a major German automobile
manufacturer. Although this was an additional proof of scalability, it did not allow time for the
final “shrink wrap”. The result is that the system resources exceed the requirement, and the
sizing figures, primarily for memory, are a bit less precise.

This “real life” stress test and the results are described as well in a later chapter.

Although all three SAP systems are resident within a single p5-595 server, the communication
was done via external gigabit Ethernet rather than virtual Ethernet via the system Hypervisor
(via internal memory). This was done intentionally, as this approach makes the results of the
integrated solution more applicable to landscapes spanning more than one physical server. This
flexibility proved itself in the “real-life” stress test effort, where it was necessary to add an
additional server to the landscape.

A single storage server was used to house all three DB2 databases. This was done intentionally,
as shared storage servers are typical in production landscapes. The storage layout is described in
detail in the infrastructure section.


6 Service Parts Planning


Service Parts Planning delivers integrated planning capabilities to the service parts supply chain.
While it considers the specific aspects of the after-market and provides extended capabilities for
handling the large parts volumes present in an after-market supply chain, it also delivers a tight
integration of the end-to-end planning process with areas, such as Supply Network Collaboration,
Procurement, Warehousing, and Order Fulfillment.

Service Parts Planning enables the service parts network to forecast parts demand, derive optimal
stocking levels for each location, plan parts replenishment, and distribute the parts within the
network. Service Parts Planning addresses specific needs such as slow- and fast-moving parts
demand forecasting, parts life cycle planning, interchangeability/supersession of parts, and
inventory planning for multiple hierarchies.

The following chart provides an overview of the Service Parts Planning processes:
[Diagram: Service Parts Planning processes in three phases — Tactical Planning (Capture Demand
History, Manage Demand History, Forecasting, EOQ & Safety Stock, Stocking & De-Stocking,
Surplus & Obsolescence Planning), Operative Planning (DRP, Procurement Approval, Deployment,
Inventory Balancing, Stock Transfer) and Execution (Monitoring, Procurement Execution,
Stock Transfer Execution).]

Diagram 5.3-1: Process Flow Overview

Within the performance test we concentrated on the most performance-critical processes
(shown in orange in the diagram):

 Stocking / Destocking
 Forecasting
 EOQ/Safety Stock
 DRP
 Deployment


6.1 Planning Service Manager (PSM)

Most of the planning steps for Service Parts Planning are done in the background using the
Planning Service Manager (PSM).

The Planning Service Manager (PSM) is a tool that you use to execute automated planning tasks.
You can group various planning tasks, such as forecast, EOQ, or DRP in a planning profile.
When you execute a planning profile, all planning tasks defined are executed, and the results are
saved.
The PSM carries out planning tasks using planning services. The forecast service, the EOQ, and
the DRP are all examples of planning services. Using data managers, the planning services access
master data, transaction data (such as orders, time series, or stock), alerts, and all other kinds of
data. The storage service saves new and changed data to the database and clears the data manager
buffers.
Diagram 6.1-1: Overview Planning Service Manager (PSM):

[Diagram: the PSM executes planning services via a service shell that handles the communication
between the service and the PSM; through the PSM interface and data managers, the services
access selections, alerts, master data and transactional data (time series, orders, inventory);
each basic (reusable) service has its own service customizing.]

The system calls up the PSM with a planning profile. In the planning profile, you define which
planning steps are executed in which process blocks. The PSM calls up the planning services in
sequence (for example, first the forecast service, then the EOQ calculation), as defined in the
planning profile. The storage service calls up the data managers that are responsible for reading
and buffering time series, orders, and master data, for example. In the selection, you define which
planning objects you want the system to consider when executing the various planning services.


A planning profile consists of header data and one or more process blocks. The header data
contains administrative information for the planning profile, such as who created or changed the
planning profile, and when. The process blocks follow the header data. A selection, a planning
version, a process profile, and an optional trigger group are defined in every process block.

Diagram 6.1-2: Planning Profile

The PSM creates packages of planning objects for every process block from the selection and
planning version defined. The PSM creates packages using the package creation method defined
in the process profile. The packages can be processed in sequence or in parallel; parallel
processing spreads the packages over several work processes and helps improve performance.
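The two processing modes can be sketched as follows. This is an illustrative Python sketch, not
the actual ABAP implementation of the PSM; the package, service, and storage-service objects are
assumptions made for the example.

```python
# Hypothetical sketch of sequential vs. parallel package processing as
# described for the PSM; names and structures are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def run_services(package, services, storage_service):
    """Run every planning service on one package, then persist the results."""
    for service in services:
        service(package)
    storage_service(package)  # save results, clear data manager buffers

def process_sequential(packages, services, storage_service):
    # One package after the other, as in the upper half of Diagram 6.1-3.
    for package in packages:
        run_services(package, services, storage_service)

def process_parallel(packages, services, storage_service, degree=8):
    # Each worker handles one package end-to-end, mirroring the parallel
    # dialog work processes used in the tests.
    with ThreadPoolExecutor(max_workers=degree) as pool:
        for package in packages:
            pool.submit(run_services, package, services, storage_service)
```

With the same total work, the parallel variant finishes earlier as long as the packages are
independent, which is the property the PSM package creation is designed to ensure.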


Diagram 6.1-3: Process Packages:

[Diagram: in sequential processing, Package 1 through Package n are processed one after the
other, each running Service 1 … Service n followed by the storage service; in parallel
processing, the same per-package service chains run concurrently.]

Diagram 6.1-4 describes the program flow logic of PSM:

Diagram 6.1-4: Logic Flow

[Diagram: instantiate the PSM; process the planning profile by looping over its process blocks;
build the packages; loop over the packages and, within each package, loop over the services and
execute them; after the last service, execute the storage service.]

6.2 Test Landscape for Service Parts Planning

Layout and Configuration of the SAP System Components: SCM


Diagram 6.2-1: SAP Landscape Implementation Overview for Service Parts Planning

[Diagram: the DMQ (SCM 5.0 SP09) system — central instance DVEBMGS00 (dialog 20, batch 10,
update2 5, enqueue 1, spool 1) plus application servers D01–D04, each with 32 dialog and 2
update processes; DB2 9.1 with a 40–70 GB database cache; liveCache 7.5 (MAXCPU 64) with a
48–96 GB cache.]

LPAR: 64 VCPUs, 120 GB: 64 PHYC in Shared Processor Pool

The SPP scenario is done completely in the SCM system. The landscape for these tests was
realized in a single LPAR using the shared processor pool. This LPAR was configured to have
access to all CPUs in the system (64 VCPU to 64 Physical CPUs). The system resources were
dedicated to this test: there was no concurrent load from other systems on the hardware.

SCM
The SCM system is configured with 5 application servers: the central instance and 4 additional
application servers for planning load. The application servers D01-D04 have 32 dialog processes
each, for a total of 128 dialog processes. The service parts planning uses RFC distribution over
dialog processes to parallelise the load. A batch process is used to distribute the planning or DRP
workload across multiple dialog processes in 1-n SAP dialog instances.
An RFC login group is used to determine the target app-servers and the load balancing of the
work packets across the application servers. This test uses “round robin” load distribution. The
snapshot below shows the settings for the “parallel_generators” login group used for load
distribution over the DMQ application servers.

The liveCache memory cache size varied between 48 and 96 GB. The 96 GB cache was used
only for the high-end scalability test at 80 to 100 parallel processes. MAXCPU was set to allow
the liveCache to use all 64 physical processors in the shared processor pool. This setting simply
avoids any limitation from the point of view of the application configuration. The actual CPU
utilization is controlled by the level of parallelisation of the load, and at the OS level. It is
normally good practice to allow the liveCache to use the full range of the processor resources so
that it can cover peaks in the load distribution on both the application servers and the liveCache.
While a liveCache task is active, the calling application process is waiting, so in a sense they
alternate the load, and the processing resources, from one to the other.


6.3 SPP Data Model in Test Landscape


6.3.1 Master data
Location products: We created 5 million location products using standard BAPIs. We created the
materials ZSPP_000000 - ZSPP_499999 in 10 locations (PLSPB1, PLSPB1, PLSPG1,
PLSPG2, PLSPP1, PLSPU1, PLSPU2, PLSPU3, PLSPU4 and PLSPU5).

Bill of Distribution (BOD): One of the specifics of Service Parts Planning is the Bill of
Distribution (BOD). A Bill of Distribution is a hierarchically structured network of locations
predefining the distribution routes for parts within Service Parts Planning. The BOD specifies
how a part is further distributed within the enterprise following inward delivery from the supplier
before being passed on to the customer. Because the distribution routes are firmly predefined by
the BOD, you can carry out spare parts planning without a time-consuming source determination
process. BODs are product-specific. The BOD is a hierarchy whose individual nodes consist of
locations. The BOD has a maximum number of levels that is predefined by the underlying BOD
hierarchy structure.

6.3.1.1 Entry Location


The node of the uppermost level of a BOD is the entry location. The product arrives from
the supplier at this point.

6.3.1.2 Child Location and Parent Location


The product is distributed from the entry location to the locations of the next lower level.
Subordinate locations are termed child locations, superordinate locations parent locations.
The locations of middle levels are always simultaneously parent and child locations.
The locations of the lowest level of a BOD are child locations only, not parent locations.
From here, products are not distributed further but passed on directly to customers
(Customer facing locations).

The BOD used for the performance test consists of 10 locations on three levels:

Diagram 6.3-1: Bill of Distribution Structure in Service Parts Planning Test


6.3.2 Demand Data


The sales history was created using a report. The report generates history data for all 10
locations and writes the data to flat files. The flat files are uploaded via a standard InfoSource;
we used the InfoSource ZSPP_CSV_LOAD for uploading the data. We created 100
InfoPackages in order to accelerate the upload: using the different InfoPackages we could upload
the historical data in parallel. In total we uploaded 960 million data records to create the sales
history, simulating a sales history for 5 million location products over the last 3 years.

Diagram 6.3-2: Overview of InfoPackages used for historical data loading


7 Service Parts Planning Test Scenarios


7.1 Stocking/Destocking
7.1.1 Business Process Description

One key characteristic of the service parts business is the high number of parts and warehouse
locations. In order to reduce inventory and warehouse costs, the decision where to stock a part
is taken considering the part's demand and the costs. The optimal stocking points of a product
within the service parts supply chain are recorded in the authorized stocking list (ASL) and
determined by considering the characteristics of the product, its demand, and the supply chain
structure.

The following business process runs in SAP SCM:


1. Perform stocking decisions
In this process step, currently non-stocked locations within the service parts supply chain
are analyzed to determine whether a product should be stocked there. Influencing factors
are the demand, the number of order items, procurement cost, or the parts classification.
Based on these criteria, a matrix is set up to define when the product is to be stocked. This
stocking matrix can be set up differently for each location.

2. Perform destocking decisions


In this process step, currently stocked locations within the service parts supply chain are
analyzed to determine whether a product should be taken off the stocking list. Influencing
factors are the demand, the number of order items, procurement cost, or the parts
classification. Based on these criteria, a matrix is set up to define when the product is to be
destocked. This destocking matrix can be set up differently for each location.
After a destocking decision, the location is no longer planning relevant for this part.

Result
The result of this process is a changed replenishment indicator and is used in all planning
processes as those usually consider only the planning relevant locations for processing.
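The stocking and destocking decisions described above can be pictured as a matrix lookup. The
sketch below is purely illustrative: the classifications, thresholds, and function names are
invented for the example and are not SAP defaults.

```python
# Illustrative stocking matrix: thresholds and classes are invented,
# not SAP defaults. A location product is stocked when both its demand
# and its number of order items reach the thresholds for its class.
STOCKING_MATRIX = {
    # classification: (min_demand, min_order_items)
    "A": (10, 2),
    "B": (25, 5),
    "C": (50, 10),
}

def stocking_decision(classification, demand, order_items):
    """Return True if the product should be stocked at this location."""
    min_demand, min_items = STOCKING_MATRIX[classification]
    return demand >= min_demand and order_items >= min_items
```

In the real service the matrix can be set up differently per location, and the outcome is
written back as the replenishment indicator used by the subsequent planning services.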


7.1.2 How was it tested?


The following Planning Services were used:
No.  Planning Service                   Name                  Comment
1    SPP: Stocking Decision Service     SPP_INVP_STOCKING     Perform stocking decision
2    SPP: Destocking Decision Service   SPP_INVP_DESTOCKING   Perform destocking decision

Input
• Master data as described in the chapter Master Data (location product, BOD)
• Decision table for stocking and destocking (/SAPAPO/SPPINVPDEC)
• Demand data (as described in the chapter Demand Data)

Output
Update of replenishment indicator for each location product
Execution of the stocking service leads to the replenishment indicator being changed to
Stocked if there is enough historical demand (including current demand).
Execution of the destocking service leads to the replenishment indicator being changed to Non-
stocked if there is not enough historical demand.

Master Data
For the test, 500.000 products in 10 locations were created; in total 5 million location products
were available for the planning tests.
The BOD displayed within the chapter Master Data was used for the planning runs.

Transactional Data
Sales history for 3 years for 5 million location products was created and loaded into BW. Nearly
1 billion records were loaded into the relevant InfoCube (9ADemand).


7.1.3 Results and Scalability


[Graph: physical CPU consumption (0–64 CPUs) over time for the run “Stocking 2.000.000
Locations/Products 80 Parallel”.]

Graph 7.1-1: Total physical processors consumed (Ref: run31 – stocking of 2 million product/locations)

The preceding graph, depicting the physical CPU utilization of this run, shows a relatively
constant high-load phase. The average during the high-load phase was 35.89 physical CPUs, with a
peak of 42.45. With 80-way parallelisation and a possible physical CPU utilization of 64 CPUs,
the parallelisation factor achieved is 66%.
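The parallelisation factor quoted here and in the later scenarios follows directly from the peak
consumption and the pool size; a minimal sketch of the calculation:

```python
# Parallelisation factor as used in this report: peak physical CPUs
# consumed divided by the physical CPUs available in the shared pool.
def parallelisation_factor(peak_physical_cpus, available_cpus):
    return peak_physical_cpus / available_cpus

# Stocking run: 42.45 of 64 CPUs at peak
print(round(parallelisation_factor(42.45, 64) * 100))  # prints 66
```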

Chart 7.1-1 shows the component CPU requirements in relation to each other. This shows a 4:1
relationship of application server requirements to database in the Stocking/De-stocking scenario.

[Chart: relative CPU utilization over components — SAP 78.90%, DB 19.27%, LC 1.83%.]

Chart 7.1-1: Relative CPU utilization by SAP Component

Graph 7.1-2 below shows the ratio of the CPU utilization distributed over the SCM components
over the entire run. In this graph all the SAP application server instances and the CI are grouped
together to show the percentage of CPU resources used for SAP computation versus the portion
utilized by the database and the liveCache.


[Graph: logical CPU usage (%) over time for the stocking run, broken down into System,
DMQ_SAP, DMQ_LC and DMQ_db2.]

Graph 7.1-2: %CPU utilized, broken down by Application server and database

Graph 7.1-3 below shows the load balancing over the 4 SAP application server instances in this
LPAR, and their relative CPU ratio with respect to the total CPU consumption. In this case the CI
is shown separately: DMQ_SAP is the central instance.

[Graph: logical CPU usage (%) over time for the stocking run, broken down into System,
DMQ_SAP, DMQ_LC, DMQ_db2 and the application servers DMQ_D01–DMQ_D04.]

Graph 7.1-3: %CPU utilized, distributed over individual SAP Component

This CPU distribution pattern shows the load distribution over the individual application servers
and the DB. Peak utilization (Graph 7.1-1 ) is 42.45 physical CPUs using 80 parallel processes.

[Graph: CPU utilization on host is03d3 over time, split into User%, Sys%, Wait% and Idle%.]

Graph 7.1-4: %CPU by utilization categories


Graph 7.1-4 shows the utilization categories. In this graph the %system utilization for this
scenario is relatively high. This often indicates an application scenario with heavy system call
overhead, such as spinlocking. Considering the relatively low scalability ratio (66%), this would
normally indicate a serialization point within the application which does not allow it to scale
beyond a certain degree of parallelism, likely already exceeded with 80 parallel jobs. Better
throughput for this scenario might have been achieved with a parallelisation of 60. As this
scenario is not considered to be on the critical path, and as the process completed well within
the timeframe allotted to it in the total planning run, the optimal break-even point was not
further investigated.


7.2 Forecasting
7.2.1 Business Process Description
As Service Parts Planning is typically a make-to-stock or procure-to-stock scenario, various
planning runs use the forecasting result as a basis, e.g. inventory planning with EOQ and Safety
Stock Calculation, or DRP.

Based on historical demand data, a forecast for future demand is created for all stocking locations
of a part. A number of forecast models for constant, trend, seasonal, seasonal trend, intermittent,
and phase-out demand patterns are provided.
These are the forecasting models most frequently used within the service parts business:
o First-order exponential smoothing (constant) FOES
 The demand forecast of the recent period and the actual demand of that
period are combined to create the forecast of the next and future periods.
o Moving average (constant) MAVG
 The average demand of a given number of historical periods is used as
demand forecast.
o Second-order exponential smoothing (trend) SOES
 The demand forecast consists of two equations. The first equation
corresponds to that of first-order exponential smoothing. In the second
equation, the values calculated in the first equation are used as initial
values and are smoothed again. The result of both equations is used to
calculate the future trend.
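The three models above can be sketched in their textbook forms. This is an illustrative sketch:
the smoothing parameters and the simple initialization are assumptions for the example, not SAP
defaults, and the SOES variant shown is the classic double-smoothing (Brown) formulation
matching the description above.

```python
# Textbook forms of the three forecast models named above; alpha values
# and initialization are assumptions, not SAP defaults.
def first_order_exp_smoothing(history, alpha=0.3):
    """FOES: blend the previous forecast with the actual demand of that period."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

def moving_average(history, periods=12):
    """MAVG: average demand of the most recent historical periods."""
    window = history[-periods:]
    return sum(window) / len(window)

def second_order_exp_smoothing(history, alpha=0.3):
    """SOES: smooth the first-order series again and project the trend."""
    s1 = s2 = history[0]
    for actual in history[1:]:
        s1 = alpha * actual + (1 - alpha) * s1   # first equation (as in FOES)
        s2 = alpha * s1 + (1 - alpha) * s2       # second equation: smooth again
    level = 2 * s1 - s2
    trend = (alpha / (1 - alpha)) * (s1 - s2)
    return level + trend  # forecast one period ahead
```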

Forecasts are created initially for each location and location aggregate in the BOD. The forecast
at the top level is then disaggregated level by level, based on the proportions of the location
forecasts at the next lower level. The user can choose whether to use the initial forecast for a
location or the disaggregated forecast as the final forecast that is used by subsequent planning
processes.
As part of the forecasting process, a standard deviation is calculated based on the historical
forecast and the actual demand. This standard deviation is projected into the future.
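The proportional disaggregation across one BOD level can be sketched as follows; the data
shapes and location names are illustrative assumptions, not the actual SCM interfaces.

```python
# Sketch of proportional forecast disaggregation across one BOD level:
# the parent's forecast is split over the children in proportion to the
# children's own initial forecasts.
def disaggregate(parent_forecast, child_forecasts):
    total = sum(child_forecasts.values())
    if total == 0:
        # No child demand at all: nothing to distribute.
        return {loc: 0.0 for loc in child_forecasts}
    return {loc: parent_forecast * f / total
            for loc, f in child_forecasts.items()}
```

Repeating this level by level, from the entry location down to the customer-facing locations,
yields the disaggregated forecasts described above.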

Result
The result of parts forecasting is a forecast for each planning relevant product location
combination consisting of six time series:
1. Demand forecast
2. Demand standard deviation
3. Number of order item forecast
4. Number of order item standard deviation


5. Demand per order item forecast
6. Demand per order item standard deviation

The result is used in the inventory planning process. In addition, the demand forecast is the basis
for determining projected stock levels in the supply planning and the distribution planning
process.

7.2.2 How was it tested?

The following Planning Services were used:


No.  Planning Service                  Name                      Comment
1    SPP: Forecast Service             SPP_FCS_SERVICE           Generates the forecast
2    SPP: Forecast Disaggregation      SPP_FCST_DISAGGREGATION   Disaggregates the forecasting results
3    SPP: Forecast Service (StdDev)    SPP_FCS_SERVICE_MSE       Calculates the standard deviation
4    SPP: Forecast Approval            SPP_FCST_RELEASE          Approves and releases the forecast to DRP

Input
• Master data (location product, BOD etc.)
• Historical demand data
• Forecast models

Output
• Demand forecast data

Test set up
• Buckets: History 36 months = 156 weeks (buckets) + Future 12 months = 52 weeks
(buckets)
• The three main forecast models were used (each for one third of the products):
o First-order exponential smoothing (constant)
o Moving average (constant)
o Second-order exponential smoothing (trend)


7.2.3 Test Results and Scalability


This diagram shows the relative runtimes for the two volumes: 500K and 2 million
product/locations. The volume was increased by a factor of 4, and the runtime increased by a
factor of 3.98, i.e. the runtime scales nearly linearly with the data volume.
[Chart: runtime of forecasting for different volumes — runtime in seconds (0–8.000) for
500.000 versus 2 million location products.]

Graph 7.2-1: Relative Runtimes for different data volumes

Application Behavior
During processing of one package, the service first reads data from the database and liveCache.
In the second step the data is processed, and in the third step the results are written back to
the database and liveCache. At the beginning, when all processes start in parallel, we observed
contention in the database and liveCache when writing the data to the persistence layer. After a
while this contention decreases, as the parallel workload becomes more evenly distributed and the
buffer pool quality of the database increases.

System Behavior during a 2 Million Product/Location Run

The physical CPU utilization average is 50.87; the peak is 55.98.

Graph 7.2-2: CPU Utilization in Physical CPUs: Ref: Run26 Volume of 2 Million, parallelisation of 80
[Graph: physical CPU consumption (0–64 CPUs) over time for the run “Forecasting 2.000.000
Locations/Products 80 Parallel”.]

Graph 7.2-3: CPU Utilization in categories: Ref Run26 Volume of 2 Million, Parallelisation of 80

[Graph: logical CPU usage (%) over time for the forecasting run, split into User%, Sys%,
Wait% and Idle%.]

These two graphs on CPU utilization show the total physical CPU utilized (Graph 7.2-2) and the
distribution of CPU utilization over user (production throughput), system (OS overhead) and IO
wait (cycles lost due to IO) (Graph 7.2-3). The picture is of an excellent production-focused
load with little overhead in either IO or system. The total physical CPU utilization shows an
oscillation between high load (massive parallelism) and high-medium load (lightly restricted
parallelism). The utilization distribution by category shows that the ratio of production to
overhead remains quite constant. This indicates that the short periods of restricted parallelism
are due to a necessary synchronization point within the application rather than an infrastructure
bottleneck. The parallelisation level is 87%, showing that the application scales out very well.
Graph 7.2-4: IO Behavior – Kilobytes/Second

[Graph: fibre-channel throughput in kB/s (0–160.000) over time for the forecasting run — read
and write rates for adapters fcs0 and fcs3.]

The graph above (Graph 7.2-4) shows the IO behavior of the database during the run. The picture
is relatively constant and the cumulative IO rates are not high in relation to the IO capacity
available (combined throughput capacity of 500MB/sec).


[Graphs: two timelines of SCM physical CPU utilization (of 64 CPUs) — the 2 million
location/products run above, and the shorter 500K run below.]
Graph 7.2-5: Comparison of runtime patterns in CPU utilization – increased volume

This diagram compares the behavior of the two runs: one with a volume of 500K
location/products and one with a volume of 2 million location/products. The longer run is
monitored at a 5-minute interval, whereas the short run is recorded at 1-minute intervals.
Nevertheless it is possible to see the repeating pattern of the 500K run in the 2 million run.
This indicates constant behavior over time and volume, with no degradation over time. (The
recording tool stops and restarts at midnight, which explains the break in the pattern at this
time.)
Graph 7.2-6: Comparison in CPU Utilization over the runs (17 and 26):

Comparison of 2 runs with varying volumes:

Run      MaxPhycpu   AvePhycpu
2 Mil    55.98       50.87
500K     59.48       50.58

The two runs, with different volumes but the same level of parallelisation, show similar average
physical CPU utilization. The shorter run does show more erratic behavior, having a lower
average and a higher peak, which is likely the effect of the job control over the shorter run.
The longer run has more consistency between the start-up and the phase-down of the run.


Graph 7.2-7: Distribution over WLM components

[Graph: logical CPU usage (%) over time for the forecasting run, broken down by WLM class —
System, DMQ_SAP, DMQ_LC, DMQ_db2 and DMQ_D01–DMQ_D04.]

CPU utilization by component (Graph 7.2-7) shows a good load distribution over the application
servers and a constant behavior throughout the run. 2

[Chart: relative CPU utilization (%) — SCM-Apps 84, SCM-DB2 13, SCM-LC 3.]

Chart 7.2-1: Distribution over SAP components

Chart 7.2-1 shows the breakdown of CPU resource utilization by SAP component. This shows an
approximate 6.5:1 application server to database ratio. The liveCache share is negligible in
this scenario.

2
The measurement of CPU utilization by component is done using AIX workload manager. It represents the
percentage of the component used of the total CPU utilized. It does not represent the physical CPU utilization. These
measurements are primarily used to observe load balancing and relative ratio for sizing.


7.2.4 Recommendations
The recommendation is to restrict the number of key figures that are written to the tracking
tables (see SAP Note 1042636).
For the standard process in SPP you need the following key figures:
• FORECAST_EN
• FORECAST_PC
• FORECAST_PP
• FORECAST_DIS_EN
• FORECAST_DIS_PC
• FORECAST_DIS_PP

As an infrastructure recommendation, the liveCache should be implemented to use direct and
concurrent IO (DIO/CIO). In AIX, this is done by using filesystem mount options for the
filesystems containing the liveCache data and log volumes.


7.3 Economic Order Quantity and Safety Stock Calculation


7.3.1 Business Process Description

As the forecast is only an estimate of future demand, deviations between actual demand and
supply must be compensated by safety stock. For the economic order quantity determination, the
costs of a purchase order, e.g. for order administration and handling in the warehouse, are
considered, as well as the inventory holding costs.
With Economic Order Quantity and Safety Stock Calculation, the safety stock and the economic
order quantity are optimized simultaneously, using the forecast demand and its standard
deviation to determine the amount of safety stock to be kept at each stocking point in the supply
chain. This enables you to handle demand and supply uncertainty according to a target service
level. Service levels are determined dynamically and differentiated based on demand, demand
frequency, product classification, or cost of a product at the given location.
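For orientation, the two quantities can be sketched with their classic textbook formulas. Note
this is only an illustration: the SAP service optimizes both values simultaneously against a
target service level, which this standalone sketch does not attempt; all parameter names are
assumptions for the example.

```python
# Classic textbook formulas; the SAP service optimizes both values
# simultaneously against a target service level, unlike this sketch.
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Wilson EOQ: balances ordering cost against inventory holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def safety_stock(service_level_z, demand_std_dev, lead_time_periods):
    """Buffer against forecast error over the replenishment lead time."""
    return service_level_z * demand_std_dev * math.sqrt(lead_time_periods)
```

For example, with an annual demand of 1000 units, an order cost of 50 and a holding cost of 4
per unit, the EOQ is about 158 units; a higher holding cost pushes the order quantity down,
while higher demand or order cost pushes it up.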

Result
The result of this process is the determination of the safety stock and the economic order quantity
for each planning relevant product location and is used in the supply and distribution planning
processes.

7.3.2 How was it tested?


The following Planning Service was used:

No.  Planning Service       Name                 Comment
1    SPP: EOQ/SFT Service   SPP_EOQSFT_SERVICE   Calculates the Economic Order Quantity and the Safety Stock

7.3.3 Input
• Master data (location product, BOD etc.)
• Demand data
• Definition of target service levels for the calculation on global and on location level

7.3.4 Output
The following key figures are determined for each location product:
• Economic Order Quantity (EOQ)
• Safety Stock

7.3.5 Test set up


• 52 weekly buckets
• Lead time settings in the service profile


• Rounding parameters

7.3.6 Results and Scalability

Graph 7.3-1: Physical CPU utilization over time (EOQ SCM, 2,000,000 location products, 80 parallel): 49.91 physical CPUs on average, 51.49 at peak (ref: Run29, 2 Mio)
Graph 7.3-2: Relative CPU distribution over utilization categories (User%, Sys%, Wait%, Idle% as percentage of used CPU) over time (EOQ SCM, 2,000,000 location products, 80 parallel; ref: Run29, 2 Mio)

CPU distribution over system and user: kernel overhead is around 6%, and IO waits are hardly
evident. Both graphs show very stable behavior in both utilization and utilization ratio.
The parallelisation achieved is 80% (peak vs. maximum physical CPU utilization).3

3
The parallelisation ratio is derived by the following equation: (peak physical CPUs used) / (number of available
physical CPUs), i.e. the peak number actually used vs. the maximum which could have been used. This gives a
feeling for the percentage of the application which can run in parallel; there is always a certain percentage which
must run serially. In this case there were 80 processes and 64 CPUs, so there was no physical reason why the full
capacity could not have been utilized. It is a question of the application's level of parallelisation.


[Chart: EOQ SCM, 2,000,000 location products, 80 parallel; CPU utilization per WLM component (System, DMQ_SAP, DMQ_LC, DMQ_db2, DMQ_D01 to DMQ_D04) over time]

Graph 7.3-3: CPU distribution over WLM component, each Application server as separate WLM components

The WLM overview for load balancing, Graph 7.3-3, shows a consistent behavior. DMQ_SAP is
the central instance which is very lightly loaded, as is the liveCache in this scenario. The four
major application servers show good load balancing and the database does not show ramp-up
overhead but maintains a constant pattern throughout the run. This is indicative of good buffer
quality.

Graph 7.3-4: Physical CPU distribution over system components (SAP = all application servers, DB includes client and server): SAP 85.71%, DB 13.53%, liveCache 0.75%

The results show a 6.3 to 1 ratio of application server to database requirement.
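The ratio follows directly from the component shares in Graph 7.3-4:

```python
# Percent of physical CPU per component, taken from Graph 7.3-4
sap_share, db_share, lc_share = 85.71, 13.53, 0.75

# Application server to database ratio
print(round(sap_share / db_share, 1))  # 6.3
```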


7.4 Distribution Requirements Processing (DRP)


7.4.1 Business Process Description
With the Bill of Distribution concept, demand for the whole network is sourced at the entry point.
With Distribution Requirements Processing (DRP), the rounded net requirements are determined
throughout the service parts supply chain. These requirements are aggregated along the
hierarchical supply chain structure into supply proposals, which are covered either by supply of
remanufactured parts or by purchase requisitions or schedules to one or more suppliers.
The distribution requirements planning calculation considers full interchangeability to use up
existing inventory of a predecessor product, minimum net demand for slow-moving items,
schedule adjustments for seasonal demands and inventory build-up, as well as supplier schedule
stability rules.

For the determination of the net requirements the demands (forecast, safety stock, confirmed
stock transfer demand and fixed demand), the inventory, the confirmed receipts, the lead times,
the days of supply and the pack stages are taken into account.

Aggregation of demand up the BOD:

[Diagram: demand from the child locations (e.g. Helsinki, Oslo, Frankfurt, Dallas) is aggregated up the BOD to the entry location Montreal]

Result
The results of this process are valid and released supply schedules or purchase orders with
expected supplier ship dates.

7.4.2 How was it tested?


The following planning service was used:

No.  Planning Service   Name      Comment
1    SPP: DRP Service   SPP_DRP   DRP incl. Approval


Input
• Demand, supply and stock data
• Master data (location product, BOD, scheduling agreements, transportation lanes)
Output
• External release schedules
• Stock transfer requisitions
Test set up
• Planned buckets for one year (220 days = 220 buckets)
• Procurement: 100% via scheduling agreement
• Stability Rules: 100% of parts have stability rules activated
• Sales orders: 10% of parts have sales orders which are considered by DRP Approval

7.4.3 Results and Scalability


Graph 7.4-1: Runtime of DRP (in seconds) for different volumes: 500,000 vs. 2 million location products (ref: Run27 2M, Run13 500K)

These two graphs (Graph 7.4-1 and Graph 7.4-2) show the relative effect of the master process
on the total runtime. There is a certain overhead resulting from the packet initiation and run
management which is unrelated to the volume processed.
The first graph shows the relative runtime over two volumes of data. The second graph shows
the relative percentage of the runtime influenced by the master process.

Graph 7.4-2: Influence of the master process: runtime of the master process as a percentage of overall runtime (up to about 7%) for 500,000 and 2 million location products


Graph 7.4-3: Ramp-up of DRP Run27 (2 million location products, 80 parallel processes): response time per package (ms) over the sequence of roughly 450 packages

Graph 7.4-4: IO behavior during ramp-up (DRP, 2,000,000 location products, 80 parallel): fibre-channel read/write throughput (fcs0, fcs3) in KB/s over time

The graphs above show the effect of the first 80 packages being spawned to dialog processes as a
result of the 80-way parallelisation. After the first 80, the response time per package drops
significantly. This appears to be the result of both a synchronization point in the application at
startup and read bursts against the database. This burst of activity lasts only for the distribution
of the first 80 packets, which is a matter of seconds.
The response times for the packets continue to improve over time even after the first burst is
over. The IO graph shows the same trend: the read IO rate continues to decrease as the
necessary data is loaded into the database buffers. As the buffer hit ratio improves and less data
must be read, the response time per packet drops.


[Chart: DRP, 2,000,000 location products, 80 parallel; physical CPU utilization over time]

Graph 7.4-5: SCM - Physical CPU utilization during the run: MaxPhy = 56.83 for peak

[Chart: DRP, 2,000,000 location products, 80 parallel; User%, Sys%, Wait%, Idle% over time]

Graph 7.4-6: SCM CPU distribution over categories.

Parallelisation reached 89% for the DRP. The load remains extremely stable with little kernel and
IO wait overhead.


[Chart: DRP, 2,000,000 location products, 80 parallel; CPU utilization per WLM component over time]

Graph 7.4-7: SCM - Physical CPU utilization distributed over WLM components. Each app-server is a separate
WLM component.

Graph 7.4-7, utilization by individual component, shows a balanced load distribution over the four
application servers, as well as constant liveCache utilization. There are no peaks or holes in the
load that would indicate any type of infrastructure bottleneck. In earlier scenarios we saw
disruptive peaks in the liveCache caused by liveCache log writing; here these are not evident,
as the liveCache log buffer was increased to avoid them.

Chart 7.4-1: SCM – Relative physical CPU distribution over system components (SAP includes all application servers): SAP 87.88%, DB 7.58%, liveCache 4.55%

Chart 7.4-1 shows an 11:1 ratio of application servers to database, and 19:1 of application servers
to liveCache.

DRP mainly utilizes the application servers for calculating the DRP matrix. The database and
liveCache are used to store the results in the persistence layer.

7.4.4 Recommendations
See general recommendations.


7.5 Deployment
7.5.1 Business Process Description
Deployment is concerned with the distribution of parts along the BOD. While DRP calculates
the requirements along the BOD to determine the procurement quantity at the entry location,
deployment distributes material throughout the network along the BOD from the entry location.
Stock transfer orders are created from each parent location to its child locations.

Within deployment we distinguish between two scenarios:

• Push deployment is a deployment decision that is triggered by the receipt of parts at the
parent location. With push deployment the parts are not stored at the entry location but sent
directly to the child locations. Push deployment is intended for fast-moving parts and has the
advantage that it leads to a faster replenishment of parts within the BOD.
• Pull deployment is triggered as soon as a child location has an actual net demand over
lead time. Pull deployment is performed by regular deployment planning runs and is triggered
by a need at the child location.

Distribution of parts along the BOD:

[Diagram: parts flow from the entry location Montreal down through the BOD to the child locations]

Within the performance test we only considered the pull deployment:


Pull deployment is triggered based on a material need of a subordinate location in the supply
chain. It determines a prioritized fair share distribution among all subordinate locations of the
same level, but only creates stock transfer requisitions to the triggering locations. Pull
deployment uses the current inventory situation within the supply chain network as the basis for
decision making.


Results
The results of this process are stock transfer orders for material movements within the supply
chain network.

7.5.2 How was it tested?

The following planning service was used:

No.  Planning Service          Name       Comment
1    SPP: Deployment Service   SPP_DEPL   Deployment

Input
• Demand, supply and stock data
• Master data (location product, BOD, scheduling agreements, transportation lanes)
Output
• Stock transfer requisitions
Test set up
• Pull deployment

7.5.3 Results and Scalability

Graph 7.5-1: Runtime of Deployment (in seconds) for different volumes: 500,000 vs. 2 million location products


Graph 7.5-2: Deployment SCM, 2 million location products, 80 parallel: physical CPU utilization over time, 54.17 physical CPUs at peak (ref: Run 28)

The peak physical utilization was 54.17 CPUs. This is a parallelisation factor of 84.6%.

Graph 7.5-3: Deployment, 2 million location products, 80 parallel: proportional CPU distribution by category (User%, Sys%, Wait%, Idle%) over time (ref: Run 28)

Graph 7.5-4: Deployment, 2 million location products, 80 parallel: proportional CPU distribution by WLM class over time (ref: Run 28)


Graph 7.5-2 shows extremely stable system behavior. The physical CPU remains at a nearly
constant level of 54 physical CPUs after a very brief ramp-up. The utilization by category
(Graph 7.5-3) shows no increase in overhead but rather constant, high user (production)
utilization. The distribution over WLM groups representing the individual SAP system
components (Graph 7.5-4) shows a constant load distribution across the SAP components. The
DB shows a peak in utilization during the ramp-up but is otherwise very consistent. There is no
indication of a change in processing behavior over time.


7.6 Service Parts Planning – General Recommendations


7.6.1 Application Recommendations
Graph 7.6-1: Combination of Planning - Sequence of Runs vs. Combined Run

We recommend combining as many planning services as possible into one planning profile. The
combination of planning services in one planning profile uses the buffer mechanism for
master and transactional data and minimizes the accesses to the persistence layer.

Graph 7.6-2: Parallelisation

Run     Parallelism   Physical CPUs   % of p595
Run23   40            33.83           52.8%
Run22   60            43.03           67.2%
Run18   80            53.10           83.0%
Run21   100           55.75           87.4%

We only saw a small improvement when increasing the number of parallel jobs from 80 to 100.
When running 100 parallel processes, 55.75 physical CPUs of the available 64 were utilized
representing a total system utilization of 87%. Although the throughput was still scaling up, the
ratio of CPU costs to throughput is best at 80 parallel.
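The diminishing return can be read off Graph 7.6-2 directly; a small sketch of the arithmetic:

```python
# Parallel jobs -> physical CPUs used, values from Graph 7.6-2
runs = {40: 33.83, 60: 43.03, 80: 53.10, 100: 55.75}
available = 64.0

for jobs, cpus in sorted(runs.items()):
    print(f"{jobs} parallel: {cpus / available:.1%} of the p595")

# Going from 80 to 100 parallel jobs (25% more processes) added
# only about 5% more physical CPU consumption:
extra = (runs[100] - runs[80]) / runs[80]
print(f"{extra:.1%}")
```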


[Chart: Forecast & DRP, 500,000 location products, blocksize 100; average and peak physical CPUs over parallelism 40 to 100, with a trend line on the average]

Graph 7.6-3: Physical CPU utilization over scale-up

Graph 7.6-3 depicts the physical CPU consumption as the parallel processing is scaled up. This is
shown in both peak and average CPU consumption, with a trend line on the average
consumption.

[Chart: Forecast & DRP, 500,000 location products; User%, Sys%, Wait%, Idle% over parallelism 40 to 100]

Graph 7.6-4: CPU distribution by category

Graph 7.6-4 shows the CPU distribution by category as the load scales up. This shows the
relationship of system and user CPU remains constant which is a positive sign in scaling. An
increase in system CPU could indicate a bottleneck in increasing parallelism. This run is using
only 56 of the available 64 physical processors, so there is not yet a CPU constraint. Parallelism
factor is 87.5% at 80 processes.


[Chart: Forecast & DRP, 500,000 location products; percent of utilized CPU per WLM component over parallelism 40 to 100]

Graph 7.6-5: CPU distribution by component

This graph shows the distribution of the actual CPU used over the different components of the
system. It shows a good load distribution over the 4 application servers used to drive the load
(DMQ_D01 to DMQ_D04). The picture remains stable across all 4 scale points.

Blocksize for DRP and FCST

Graph 7.6-6: Throughput (location products/h) of the combined FCST+DRP run with varying blocksize (ref: Runs 24, 18, 19, 20)

We tested a combined 'Forecasting + DRP' run with different block sizes and found that a
blocksize of 100 gives the best throughput with our setup.


Graph 7.6-7: CPU utilization and IO behavior with increasing blocksize (Forecast & DRP, 500,000 location products, 80 parallel)

These two graphs show the CPU utilization and the IO rates as the blocksize of the packets is
increased. In the CPU utilization, a trend is depicted in which the average CPU utilization
decreases from blocksize 100 onwards while the peak CPU increases. This is a move from a
constant to a burst-type or "peaky" workload. The IO rates also decrease as the blocksize
increases; this is positive between sizes 50 and 100, but the larger blocksizes restrict the
parallelism of the runs.

Graph 7.6-8: Relative CPU distribution over SAP components per system (percent of utilized CPU) for blocksizes 50, 100, 200 and 300 (Forecast & DRP, 500,000 location products, 80 parallel)

Graph 7.6-8 shows the changes in the component utilization as the blocksize increases. The
trend in the larger blocksizes is a general reduction of throughput in the application servers (seen
as less CPU consumed in DMQ01-DMQ04). The database (DMQ_db2) works hardest for the
small blocksize, which makes sense as more IO is done for the small blocksize.


7.6.2 Infrastructure Recommendations


Using multiple application servers with load balancing is beneficial in this type of distributed
load, where a batch process spins off dialog processes for block processing. Ideally the application
servers used for the batch planning runs are dedicated and not shared with any online activity.
This allows for very high loading of the application servers. Separate application servers
should be used for the CI and for online activity if it is concurrently active. This does not mean
that CPU capacity must be reserved: using the shared processor pool, priorities can be imposed
for LPARs, and within an LPAR, priorities for application instances can be imposed using
Workload Management (WLM). WLM was used in these tests for monitoring purposes only,
although the WLM classes can also be used to control resource distribution.

7.6.3 Data Base Recommendations


Prefer single-partition database layout
Although most of the tests ran on a multi-partition system with four database partitions, we
recommend setting up the SCM system for Service Parts Planning as a single-partition system as
shown in the figure below.

The system was set up with multiple database partitions because the InfoCube containing the
historical planning data was very large. Its fact tables were distributed over the four database
partitions. However, the tests showed that the workload for accessing the InfoCube is very small
compared to the actual planning workload running solely on database partition 0 and mainly
accessing the time series management tables /SCA/TK1SPW, /SCA/ST1SPW, /SCA/IS1SPW
and /SCA/KT1SPW.
Furthermore, the queries executed against the historical planning data InfoCube only processed a
relatively small number of rows. Such queries do not benefit much from a multi-partition layout
which is most beneficial in environments where large amounts of data are processed in parallel
on the database partitions.
Therefore it is not required to configure multiple partitions.

Diagram 7.6-1: Configuration with one database partition


Use MDC for the historical planning data InfoCube


The queries executed against the historical planning data InfoCube in the SPP test scenarios have
strong restrictions on the material and the location dimension, as shown in the following sample:
SELECT
CHAR(RIGHT(DIGITS("DT"."SID_0CALWEEK"), 000006), 000006) AS "0CALWEEK" ,
"S7"."/BI0/9APRODUCT" AS "9APRODUCT" ,
"S8"."/BI0/9ASTOCKING" AS "9ASTOCKING" ,
"S9"."/BI0/9AVCP_ST" AS "9AVCP_ST" ,
"S10"."UNIT" AS "0BASE_UOM" ,
SUM ( "F"."/BI0/9AFI_QTY_W" ) AS "9AFI_QTY_W" ,
SUM ( "F"."/BI0/9AFI_EN_W" ) AS "9AFI_EN_W"
FROM
"/BI0/9AEDEMAND" "F" JOIN
"/BI0/9ADDEMAND1" "D1" ON "F" . "KEY_9ADEMAND1" = "D1" . "DIMID" JOIN
"/BI0/9ASPRODUCT" "S7" ON "D1" . "SID_9APRODUCT" = "S7" . "SID" JOIN
"/BI0/9ADDEMAND2" "D2" ON "F" . "KEY_9ADEMAND2" = "D2" . "DIMID" JOIN
"/BI0/9ASSTOCKING" "S8" ON "D2" . "SID_9ASTOCKING" = "S8" . "SID" JOIN
"/BI0/9ADDEMAND7" "D7" ON "F" . "KEY_9ADEMAND7" = "D7" . "DIMID" JOIN
"/BI0/9ASVCP_ST" "S9" ON "D7" . "SID_9AVCP_ST" = "S9" . "SID" JOIN
"/BI0/9ADDEMANDU" "DU" ON "F" . "KEY_9ADEMANDU" = "DU" . "DIMID" JOIN
"/BI0/SUNIT" "S10" ON "DU" . "SID_0BASE_UOM" = "S10" . "SID" JOIN
"/BI0/9ADDEMANDP" "DP" ON "F" . "KEY_9ADEMANDP" = "DP" . "DIMID" JOIN
"/BI0/9ADDEMAND3" "D3" ON "F" . "KEY_9ADEMAND3" = "D3" . "DIMID" JOIN
"/BI0/9AXDEM_CAT" "X2" ON "D3" . "SID_9ADEM_CAT" = "X2" . "SID" JOIN
"/BI0/9ASFCSTABLE" "S11" ON "X2" . "S__9AFCSTABLE" = "S11" . "SID" JOIN
"/BI0/9ADDEMANDT" "DT" ON "F" . "KEY_9ADEMANDT" = "DT" . "DIMID"
WHERE
"DT"."SID_0CALWEEK" BETWEEN 200409 AND 200708 AND
"DP"."SID_0CHNGID" = 0 AND
"DP"."SID_0RECORDTP" = 0 AND
"DP"."SID_0REQUID" <= 2000000250 AND
"S11"."/BI0/9AFCSTABLE" = 'X' AND
"S7"."/BI0/9APRODUCT" IN
( 'ZSPP_010477' ,'ZSPP_003007' ,'ZSPP_021149' ,
'ZSPP_046752' ,'ZSPP_027018' ,'ZSPP_005141' ,
'ZSPP_017414' ,'ZSPP_014746' ,'ZSPP_008876' ,
'ZSPP_004074' ) AND
"S8"."/BI0/9ASTOCKING" IN ( 'PLSPU2' ,'PLSPU1' ) AND
"S9"."/BI0/9AVCP_ST" = 'X' AND
"X2"."OBJVERS" = 'A'
GROUP BY
"DT"."SID_0CALWEEK" ,"S7"."/BI0/9APRODUCT" ,"S8"."/BI0/9ASTOCKING" ,
"S9"."/BI0/9AVCP_ST" ,"S10"."UNIT"

Performance of these queries may be improved with multi-dimensional clustering (MDC) on
material and location. If there are not enough records for each material and location combination
to fill the MDC blocks, it is still beneficial to define MDC on material only. For the historical
planning data used in this project, MDC on the material dimension was the preferred solution
because MDC on both material and location would have increased the space consumption.
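Whether the cells are dense enough can be estimated before reclustering. A rough sketch of that check; the row counts and page capacity below are hypothetical illustrations, not measurements from this project:

```python
def mdc_cell_fills_block(total_rows: int, distinct_cells: int,
                         rows_per_page: int, extent_size_pages: int) -> bool:
    """True if the average MDC cell holds at least one full block
    (a block is one extent); sparse cells waste allocated space."""
    rows_per_cell = total_rows / distinct_cells
    return rows_per_cell >= rows_per_page * extent_size_pages

# Hypothetical: 220M fact rows, 2M distinct (material, location) cells,
# ~60 rows per page, extent size 2 as recommended below
print(mdc_cell_fills_block(220_000_000, 2_000_000, 60, 2))  # too sparse
# Clustering on material only yields far fewer, denser cells:
print(mdc_cell_fills_block(220_000_000, 50_000, 60, 2))
```

The same idea underlies the RSRV space-consumption check mentioned below, which works on the actual data instead of estimates.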


When using MDC, to reduce space consumption you should make sure that the InfoCube is
created in a tablespace that has been defined with an extent size of 2. The space consumption of
InfoProviders using MDC can be checked in BI transaction RSRV with the test "Check Space
consumption of MDC InfoProviders in DB2 f. Linux, UNIX and Windows", which is located in
the folder All Elementary Tests - Database. You can convert an existing InfoProvider to MDC
with the Reclustering function (invoked from the context menu of the InfoProvider). Before
converting the InfoProvider, Reclustering offers to check the space consumption with the
selected MDC dimensions and the data currently contained in the InfoProvider.

For information on how to define MDC for InfoCubes see "SAP NetWeaver Business
Intelligence 7.0 and Higher — Administration Tasks: IBM DB2 Universal Database for UNIX
and Windows" on http://service.sap.com.

Applying deep compression


With deep compression, the size of table data objects can be reduced considerably. The largest
table in the test setup was the F fact table of the historical planning data InfoCube which had a
size of about 159 GB (data without indexes). In the original setup it was spread over four
database partitions, uncompressed and did not use MDC. We
• Moved it to database partition 0
• Configured it with MDC on the material dimension and
• Compressed it.
Compression reduced the size from 159 GB to less than 46 GB, which is about 29% of its
original size. The modified layout also reduced query execution time considerably. Most queries
were executed in less than a second compared to more than two seconds in the multi-partition
environment without MDC and deep compression.
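The saving can be verified from the sizes measured for the F fact table (Graph 7.6-9):

```python
uncompressed_gb = 159.00  # F fact table before compression
compressed_gb = 45.24     # after compression, with MDC on material

remaining = compressed_gb / uncompressed_gb
print(f"{remaining:.1%} of the original size")
print(f"{1 - remaining:.1%} disk space saved")
```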
Graph 7.6-9: Compression disk space savings for the 9ADEMAND InfoCube F fact table: 159.00 GB uncompressed vs. 45.24 GB compressed


Graph 7.6-10: Compression disk space savings for the 9ADEMAND InfoCube E fact table: 3.20 GB uncompressed vs. 0.94 GB compressed

Deep compression also reduces the size of the time series tables considerably.

Graph 7.6-11: Compression data disk space savings for the time series tables /SCA/ST1SPW (78,879,858 rows), /SCA/IS1SPW (221,413,909 rows), /SCA/TK1SPW (103,759,716 rows) and /SCA/KT1SPW (157,759,716 rows): uncompressed sizes between 13.52 GB and 54.53 GB are reduced to between 3.44 GB and 10.51 GB

The following two charts show the runtime and database request time of the FCST+DRP jobs
with uncompressed InfoCube and Time Series tables and compressed InfoCube and Time Series
tables. The bufferpool size for the uncompressed runs was 40 GB over four database partitions.
The bufferpool size for the compressed runs was 35 GB on one database partition.


Graph 7.6-12: Job runtimes with and without compression: FCST+DRP 500.000/100/40 and 500.000/100/60 uncompressed vs. 500.000/100/60 compressed; runtimes range from 5,220 seconds down to 2,858 seconds, with the compressed runs the fastest

Graph 7.6-13: DB request time (ms) with and without compression: FCST+DRP 500.000/100/40 and 500.000/100/60 uncompressed vs. 500.000/100/60 compressed

The following graphs show a reduction in IO of the compressed FCST+DRP 500.000/100/60
run compared to the uncompressed run.


Graph 7.6-14: IO (read/write throughput in KB/s per fibre-channel and SCSI adapter) for the uncompressed FCST+DRP 500.000/100/60 run

Graph 7.6-15: IO for the compressed FCST+DRP 500.000/100/60 run

When using compression, we recommend reorganizing tables with heavy update activity from
time to time to maintain optimal space consumption.

Buffer Pool Configuration


Most of the tests ran with a large bufferpool of 70 GB (60 GB on database partition 0 and 3.3 GB
on each of the database partitions 1-3). The test runs 17-20 and 22-24 for FCST+DRP ran with a
bufferpool size of 40 GB (30 GB on database partition 0 and 3.3 GB on each of the database
partitions 1-3). The tests were run with the maximum memory available to optimally use the
hardware resources; due to the tight time frame there was no opportunity to scale back.

Because of the high bufferpool hit ratio of 99.5% we conclude that a bufferpool size of 30-40 GB
is sufficient for SPP; with deep compression this could be reduced further.
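The hit ratio quoted above follows the usual DB2 bufferpool formula; a sketch with hypothetical snapshot counter values:

```python
def bufferpool_hit_ratio(logical_reads: int, physical_reads: int) -> float:
    """(logical reads - physical reads) / logical reads, in percent."""
    return (logical_reads - physical_reads) / logical_reads * 100

# Hypothetical counter values (e.g. data logical vs. physical reads from
# a DB2 snapshot): 200M logical reads, of which 1M went to disk
print(round(bufferpool_hit_ratio(200_000_000, 1_000_000), 1))  # 99.5
```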


8 Service Parts Fulfillment – Order Management

8.1 Business Process Description


Sales order processing is the most performance-critical process in the aftermarket business, as a
high number of orders needs to be processed in a limited time frame.

Typically, sales orders can be split into two different types. One is replenishment ordering for the
dealers to refill the onsite stocks. Here each order typically contains a large number of line items
and large quantities per item. These orders are planned and have a delivery window within the
next days.
The other type can be described as rush orders (overnight/emergency/vehicle-off-road), which
occur sporadically due to breakdowns of assets. Here the parts need to be delivered overnight to
the dealers. Typically these orders contain only a few line items with low quantities. The
placement of the orders normally happens in a small timeframe in the afternoon, as the dealers
want to meet the cut-off times set by the OEM or the logistics service provider. Within this time
frame most of the daily overall order volume is entered into the system. As the OEM guarantees
certain delivery times, the sales orders need to be processed within a dedicated time slot so that,
for example, the rush orders can be sent out the same day.

The scenario of selling a part including delivering and invoicing goes across four systems: SAP
CRM for sales and invoicing, SAP SCM – APO for global Available to Promise (gATP) check,
SAP ERP for delivery processing and SAP SCM – Extended Warehouse Management (EWM)
for goods issue processing.

Diagram 8.1-1 below presents an overview of the complete sales process, from order creation to
the invoice processing:


[Diagram: end-to-end order fulfilment flow across SAP CRM (sales order creation, credit and
compliance checks, invoicing), SAP ERP (unchecked/checked delivery, goods issue posting,
billing and delivery status updates), SAP SCM - Global ATP (availability check, customer
requirements) and SAP SCM - EWM (warehouse request, wave pick management and release,
picking location determination, warehouse orders, picking, packing, loading, goods issue)]
* Either an unchecked delivery or directly a delivery is created (depending on customizing)

Diagram 8.1-1: Process flow overview – order fulfilment

In the performance test the focus was on the process steps with the orange background – order
management.

Order management includes the creation and the processing of sales orders and the triggering of
subsequent logistics processing in SAP ERP. A sales order is a customer’s binding request to an
enterprise to deliver a specific quantity of parts. A sales organization accepts the sales order, and
thus becomes responsible for fulfilling the sales contract.

Order Management with the SAP solution for Service Parts Management enables the customer to
trigger the delivery creation directly from the SAP Customer Relationship Management (SAP
CRM) system. Due to the system configuration the sales order is not replicated to SAP ERP (like
in a normal CRM ERP linkage), but rather the system immediately creates either unchecked or
checked deliveries in SAP ERP. Data such as the requested delivery date and quantity is
transferred from the sales order to the deliveries in SAP ERP.
Avoiding data replication allows the creation of a large number of sales orders and
the triggering of logistics delivery processes in a very short time frame.

For the creation of a sales order the following information needs to be entered:
o Business partner (sold-to-party)
o Product and quantity


The following functions and checks are performed by the system during the sales order creation:
o Global Available To Promise (gATP) check
o Route determination and delivery scheduling
o Pricing
o Credit check
o Compliance checks – export control

A very important aspect of the Order Fulfillment Process is the communication between the
different systems. The following figure shows detailed steps in the scope as well as the
communication between the involved systems.
[Diagram: sales order creation steps - the user enters business partner, product and quantity in
SAP CRM; (1) gATP check in SAP SCM (availability, route determination, order line scheduling)
returns confirmed quantities and creates a customer requirement; price conditions are determined;
on saving the order, (2) a credit check is performed in SAP ERP with (5) an accounting update and
(6) a credit limit update back to CRM; export compliance checks run; (3) an unchecked delivery is
created in SAP ERP and (4) the customer requirement is updated in SAP SCM]

Diagram 8.1-2: Processing steps and inter-system communication

The following table shows a high level overview of the different communication steps.
Step   Usage                        Source   Destination   RFC Type
1      ATP Check                    CRM      APO           sRFC
2      Credit Check                 CRM      ERP           sRFC
3      Create unchecked delivery    CRM      ERP           qRFC
4      Create Customer Requirement  CRM      SCM           qRFC
5      Update Accounting            CRM      ERP           qRFC
6      Update Credit Limit          ERP      CRM           qRFC
Graph 8.1-1: Processing steps and communication type


Application View including Communication

Diagram 8.1-3: Overview of inter-system communication

8.1.1 Global Availability Check (gATP)


The global Availability Check (gATP check) is used to check whether a product can be
confirmed as available in a sales order, based on the fact that enough stock is available or can be
produced or purchased on time. The product is reserved in the required quantity, and the ATP
requirements are transferred to production or purchasing.

When a sales order is created and a product is entered with the requested delivery date, SAP
CRM transfers data to the SAP SCM - ATP system, where the availability check is performed.
The results of the availability check are returned to the sales order in SAP CRM. If the ATP
system cannot confirm anything for a particular item, the quantity of the confirmation schedule
line is set to 0.

The following basic methods of availability check used in SAP APO are supported in CRM
Enterprise:
o Product availability check
o Rules-based ATP check
Options include:
o Product substitution
o Location substitution
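To make the substitution behavior concrete, here is a deliberately simplified, hypothetical sketch of a rules-based availability check with product and location substitution. All product/location names, the stock table and the substitution rules are invented for illustration; the real gATP engine in SAP APO additionally handles scheduling, routes and allocations:

```python
# Toy rules-based ATP check with product and location substitution (illustrative only).
stock = {("P100", "SPU1"): 40, ("P100", "SPU2"): 100, ("P101", "SPU1"): 25}
product_subs = {"P100": ["P101"]}    # allowed product substitutions
location_subs = {"SPU1": ["SPU2"]}   # allowed location substitutions

def atp_check(product, location, qty):
    """Return a list of (product, location, confirmed_qty) confirmations."""
    confirmations, remaining = [], qty
    candidates = [(product, location)]
    candidates += [(product, l) for l in location_subs.get(location, [])]
    candidates += [(p, location) for p in product_subs.get(product, [])]
    for p, l in candidates:
        if remaining == 0:
            break
        available = stock.get((p, l), 0)
        confirmed = min(available, remaining)
        if confirmed:
            stock[(p, l)] = available - confirmed  # reserve the quantity
            confirmations.append((p, l, confirmed))
            remaining -= confirmed
    # Any remaining, unconfirmed quantity would go to Backorder Processing (BOP).
    return confirmations

print(atp_check("P100", "SPU1", 60))  # [('P100', 'SPU1', 40), ('P100', 'SPU2', 20)]
```

Note how the location substitution creates an additional confirmation line, mirroring the statement above that substitution creates additional order line items in CRM.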


After the availability check, details for each item are transferred back to CRM Enterprise, such
as:
o Delivering country (for tax reasons) and region (U.S.)
o Confirmed quantities
o Confirmation dates
o Globally unique identifier
o Item category usage

System Set Up for the Performance Test


The gATP check for the order management scenario was set up in such a way that parts are
fulfilled at the customer-facing location, through location substitution (partly at a second
location and partly at a third location) and through product substitution.

Only a very limited number of parts cannot be fulfilled; these need to be further processed with
Backorder Processing (BOP).
Note that additional order line items are created in CRM due to product and location substitution.
Route Determination and Delivery Scheduling
Route determination and delivery scheduling is used to check whether the products can be
delivered on time. For this purpose, route determination and delivery scheduling are carried out
for the items in a sales order. To plan shipments in line with the requested delivery date, it creates
shipment proposals, and determines the goods issue and loading dates. These functions are
carried out in SAP Advanced Planning and Optimization – Global Available to Promise (SAP
APO- ATP).
Route Determination and Delivery Scheduling is performed using the dynamic route
determination functionality.

8.1.2 Pricing
Pricing is a function that the system uses to determine pricing information when creating a sales
order. For example, the system automatically determines the gross price and the discounts and
surcharges that are relevant for a specific customer or a specific product at a particular date in
accordance with the valid conditions.
The system uses the condition technique for pricing in order to automatically determine the
relevant pricing information from condition records for a business transaction. The system carries
out pricing in CRM via the IPC (Internet Pricing and Configurator).

Taxes are also calculated automatically in CRM via the Transaction Tax Engine (TTE), which is
part of the IPC. The Transaction Tax Engine (TTE) determines and calculates tax for the
following scenarios:
Country/Tax Community    Tax Scenario
EU and other countries   VAT, Invoice Verification
CA                       External Tax Calculation
CA                       Internal Jurisdiction Code
US                       External Tax Calculation
US                       Internal Jurisdiction Code
Table 8.1-1: Tax Calculation Scenarios


System Set Up for the Performance Test


The sales order price is derived from the list price, considering customer discounts (fixed and
relative) and part-specific discounts.
Taxes are calculated differently for each country the parts are exported to.
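As an illustration only, a drastically simplified version of such a discount calculation might look like the sketch below. The function name, parameters and numbers are invented; the IPC condition technique is far more general (condition records, access sequences, condition types):

```python
# Minimal sketch of list price minus fixed/relative customer discounts and a
# part-specific discount (illustrative only, not the IPC pricing procedure).
def net_price(list_price, cust_pct_discount=0.0, cust_fixed_discount=0.0,
              part_pct_discount=0.0):
    price = list_price
    price -= price * cust_pct_discount   # relative customer discount
    price -= cust_fixed_discount         # fixed customer discount
    price -= price * part_pct_discount   # part-specific discount
    return round(price, 2)

print(net_price(100.0, cust_pct_discount=0.10, cust_fixed_discount=5.0,
                part_pct_discount=0.02))  # 83.3
```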

8.1.3 Credit Check


Credit Management minimizes financial risks for your organization. Credit Management carries
out automatic credit checks in sales order processing, and it forwards documents that are blocked
for credit reasons directly to the person responsible to be checked.
The credit limit check carries out a check against the credit limit and the outstanding claims in
SAP ERP – Financials. CRM calls up the automatic credit check and sends the information on
open values to SAP ERP, along with data on the credit group, sales organization, currency, payer
and other business partners. SAP ERP uses this information to determine the credit group, credit
control area, credit management account of the payer, and the account's risk category and checks
the payer's credit standing.

System Set Up for the Performance Test


The credit check for the order management scenario is performed in the ERP system.
Compliance Checks – Export Control

SAP Compliance Management supports international export trade compliance issues in these
areas:
● Sanctioned Party List Screening
● Embargo check
Legal Control – Export License Management

SAP Compliance Management helps you rationalize your extended logistics chain and automate
the complicated processes involved with international trade compliance, a primary
prerequisite for successful international trading activities. This minimizes the risk of penalties
and fines, maximizes your company's return on selling initiatives, and increases your overall
competitiveness by improving customer satisfaction.


8.1.4 Global Trade Service (GTS)


Sanctioned Party List Screening
Sanctioned party list screening ensures that trading partners are not designated as restricted by
any particular agencies.

Embargo check
An embargo is a government prohibition against the shipment of particular products to a
particular country for economic or political reasons. SAP Global Trade Services (SAP GTS)
provides an embargo service that automatically checks whether any 'critical' business
partners or countries are involved in the sales order.

Legal Control – Export License Management


Legal Control allows you to maintain your product master data in accordance with the applicable
export laws. It allocates licenses for specific products, determines if a license is required for a
particular transaction, and performs the appropriate assignment.

Orders to embargo countries, with blocked business partners or with parts without a valid export
license are blocked and need to be further processed in the CRM system.

System Set Up for the Performance Test


One third of the overall orders are export orders. For the order management scenario, the
compliance checks are set up in the following way:
o Embargo check: one country is blocked
o Sanctioned Party List Screening: several customers per country are blocked
o Legal control for critical parts (10%)

8.2 Performance Test


8.2.1 Master Data Setup

The master data setup is based on the SPM IDES environment. Additional master data objects are
created by eCATT; these eCATTs are used by our Quality Management to set up an
environment for our internal tests.
The IDES landscape has a complex enterprise structure because different business scenarios are
covered, e.g. cross-company stock transfer orders, inter-company stock transfer orders, a global
company with currency USD and local companies with local currencies, and global trade.
The data distribution is a complicated process; we distinguish 3 different sequences:
1. Customer sequence
2. Vendor sequence
3. Material sequence
The master data is distributed between ERP, CRM, APO and EWM. The following diagrams
show the data flow.


Diagram 8.2-1: Structure


[Diagram: IDES enterprise structure - sales organizations, purchasing organizations, credit
control areas, controlling areas and charts of accounts (CANA, INT) over company codes SPU1,
SPG1, SPB1 and SPP1; plants SPU1-SPU5, SPG1/SPG2, SPB1/SPB2, SPP1 and SP99 with
storage locations (ROD, AFS, NWM, POD); one ERP warehouse (lean WM) per plant (SU1-SU3,
SG1/SG2, SB1/SB2, SP1, SL1); corresponding EWM warehouses SPU1-SPU5, SPG1/SPG2,
SPB1/SPB2, SPP1; the plants map to APO locations SPU1-SPU4, SPG1/SPG2, SPB1/SPB2, SPP1]

Diagram 8.2-2: Master data distribution – Customers sequence


[Diagram: customer master data flow - (1) create business partner in CRM 5.0 (Tx BP) with roles
General, Sold-to-Party (including sales organization data), Ship-to-Party, Invoice Recipient and
Payer, and maintain billing data; (2) BP distribution to ERP 5.0 via middleware/BDoc and to
APO 5.0 and EWM 5.0 via ALE/IDoc; (3) enhance customer in ERP (Tx XD01): create company
code data, account group KUNA (Customer), set customer flag; (4) customer distribution via CIF
(Tx CFM1/CFM2); (5) automatic creation of location master data in APO and EWM
(Tx /SAPAPO/LOC3), location type 1010 (Customer)]


Diagram 8.2-3: Master data distribution – Vendors sequence


[Diagram: vendor master data flow - (1) create business partner in CRM 5.0 (Tx BP) with roles
General and Vendor; (2) BP distribution to ERP 5.0 via middleware/BDoc and to APO 5.0 and
EWM 5.0 via ALE/IDoc; (3) enhance vendor in ERP (Tx XD01): create company code and
purchasing organization data, account group 0001 (Vendor); (4) vendor distribution via CIF
(Tx CFM1/CFM2); (5) automatic creation of location master data in APO and EWM
(Tx /SAPAPO/LOC3), location type 1011 (Vendor)]

Diagram 8.2-4: Master data distribution – Material sequence


[Diagram: material master data flow - (1) material master distribution from ERP 5.0 to CRM 5.0
via middleware/BDoc, triggering (2) automatic product creation in CRM (Tx COMMPR01, e.g.
view sales and distribution); material master distribution via CIF (Tx CFM1/CFM2) to APO 5.0
and EWM 5.0, triggering (3) automatic creation of the product master (Tx /SAPAPO/MAT1,
views Global Data and Location)]


8.2.2 Test Execution


The test was performed using 1.000 different products and 100 customers in six different
countries.

The high number of sales orders was created with a report using standard BAPIs; we did not
create the sales orders through any user interface. A master report created the packages and
scheduled up to 80 batch jobs which ran in parallel. Each batch job created its sales orders
serially without any waiting time.

The number of open sales orders in SCM increased over several runs. This had an impact on the
ATP response time. After a certain number of runs we reset the SCM system; it was not possible
to reset the SCM system after each test run because of time constraints in the project. We
observed a small increase in ATP response time with an increasing number of open sales orders.
At some point we had up to 1 million open sales orders in the SCM system.
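The package splitting done by the master report can be sketched as follows. This is illustrative only; the actual report and the BAPIs it calls are SAP-internal, and the function name here is invented:

```python
# Split a total order volume into near-equal packages for N parallel batch jobs.
def build_packages(total_orders: int, num_jobs: int):
    base, rest = divmod(total_orders, num_jobs)
    # The first `rest` jobs get one extra order so nothing is lost to rounding.
    return [base + (1 if i < rest else 0) for i in range(num_jobs)]

# Example sized like run 190: 28800 orders over 60 parallel jobs.
packages = build_packages(total_orders=28800, num_jobs=60)
print(len(packages), packages[0])  # 60 480
```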


8.2.3 Test Landscape for Service Parts Fulfillment

Layout and Configuration of the SAP System Components

Diagram 8.2-5: Overview of SAP system implementation and landscape

The diagram above shows the configuration of the 3 SAP systems of the integrated landscape.
Each system is housed in a separate LPAR (see logical system landscape).

JVM activity was distributed over multiple dedicated application servers with good results. The
experience from this project indicated that 10 VMCs per application server brought the best
performance. For larger test scenarios, additional application server instances were added to
support the scale-up of the application load.


8.3 Results and Scalability

8.3.1 Results Summary

Table 8.3-1: Results per Order configuration – decreasing order lines per order
Configuration                 Order lines / hour         Orders / hour
Orders with 100 order lines   352.300 order lines/hour     3.520 orders
Orders with 10 order lines    320.560 order lines/hour    32.056 orders
Orders with 5 order lines     286.200 order lines/hour    57.240 orders
Orders with 1 order line      111.373 order lines/hour   111.373 orders

These results were achieved using 40 parallel processes. Repeating the 5 order lines/order
configuration with 60 parallel processes brought the best results: 311.965 order lines/hour
(62.393 orders/hour).

8.3.2 Test Case 1: Varying number of Line Items per Order

One of the interesting performance characteristics is the throughput as a function of the number of
line items per order. If the total number of line items is constant, fewer line items per order
means more orders and therefore more system overhead per line item.
The test case comprises runs 214 to 220 with a parallelisation of 40 batch jobs, each job
creating 480 orders (240 orders in run 220). The line items (LI) vary from 1 LI/order to 10
LI/order:

Table 8.3-2: Results per Run – ordered by increasing order-lines per order

Run      Num.   Orders/   Line Items/   Line Items   Avg. Job      Throughput
Number   Jobs   Job       Order         created      Runtime (s)   (Line Items/min)
214      40     480       1             22888        740           1856
215      40     480       3             69240        1069          3885
217      40     480       5             114946       1446          4771
218      40     480       8             184000       2049          5388
220      40     240       10            115000       1291          5343
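The throughput column of Table 8.3-2 can be re-derived from the raw columns; the sketch below reproduces it to within a line item or two per minute (small deviations come from the rounding of the published runtimes):

```python
# Throughput (line items/min) = line items created / avg. job runtime (s) * 60.
runs = {  # run number: (line_items_created, avg_job_runtime_s)
    214: (22888, 740),
    215: (69240, 1069),
    217: (114946, 1446),
    218: (184000, 2049),
    220: (115000, 1291),
}
for run, (items, runtime_s) in runs.items():
    print(run, round(items / runtime_s * 60))
```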

For orders with fewer than 8 line items per order, the overhead of the order header becomes more
noticeable. Beyond 8 line items per order, the header overhead is negligible and the throughput
in line items per unit of time is constant.[4] The maximum number of line items per order we
tested was 100, in another test series with 60 batch jobs.
The following graph shows the runtime per line item (LI). Between 1 and 8 LIs, the runtime per
line item is higher due to the order header overhead:

[4] The slight reduction of throughput (0.8%) from run 218 to 220 is due to variance (measurement and system behavior).
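As a back-of-the-envelope model, the numbers above are consistent with a fixed per-order header cost plus a constant per-line-item cost. The two-point fit below is our own estimate derived from the published runtimes, not a figure from the test report:

```python
# Two-point estimate of order-header overhead vs. per-line-item cost, using the
# avg. job runtimes of runs 214 (1 LI/order) and 218 (8 LI/order), 480 orders/job.
# The linear model t(n) = header + n * per_item is an assumption.
t1 = 740 / 480    # seconds per order with 1 line item
t8 = 2049 / 480   # seconds per order with 8 line items
per_item = (t8 - t1) / (8 - 1)
header = t1 - per_item
print(f"per line item: {per_item:.2f}s, header overhead: {header:.2f}s")
# Cross-check against run 217 (5 LI/order, measured 1446/480 = 3.01 s/order):
print(f"predicted for 5 LI: {header + 5 * per_item:.2f}s")
```

The prediction for 5 line items per order lands within a few percent of the measured value, supporting the header-overhead interpretation above.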


[Chart: runtime per line item (ms) for 1, 3, 5, 8 and 10 line items per order]

Graph 8.3-1: Runtime per order-line with increasing order-lines per order

The following two graphs show the physical CPU utilization over the three LPARs for each of
the runs mentioned above. The unit is physical CPUs utilized in the shared processor pool.
The first graph shows the average utilization, the second the peak utilization.
The CPU utilization below shows no resource bottleneck.

Graph 8.3-2: Max CPU Utilization in physical CPUs per SAP system in the integrated landscape5

[Bar chart 'Physical CPU Usage Average': average physical CPU usage per test run (214, 215,
217, 218, 220), stacked by system - is03d1(DSZ), is03d2(DND), is03d3(DMQ) - in a 64-CPU pool]

[5] Note: DMQ=SCM, DND=ECC, DSZ=CRM – 3 systems running in a 64-CPU shared processor pool. This graph shows the total of peak utilization per run.

71
SAP SOLUTION FOR SPM
PERFORMANCE AND SCALABILITY

[Bar chart 'Physical CPU Usage Max': peak physical CPU usage per test run (214, 215, 217, 218,
220), stacked by system - is03d1(DSZ), is03d2(DND), is03d3(DMQ)]

Graph 8.3-3: Average CPU Utilization in physical CPUs per SAP system in the integrated landscape6

8.3.3 Test Case 2: Varying the Degree of Parallelisation

In the second test case we compare the throughput of the runs depending on the number of
parallel jobs (runs 190-198). The number of line items per order is constant: 5 LI/order. The next
table and diagram show average performance results of multiple runs:

Graph 8.3-4: Throughput benefits of increasing parallelism


Runs      Num.   Orders/   Line Items/   Line Items   Avg. Job      Throughput         Runtime/
          Jobs   Job       Order         created      Runtime (s)   (Line Items/min)   Line Item (ms)
196-198   20     480       5             57369        1313          2623               458
194-195   40     480       5             114243       1560          4393               546
190-192   60     480       5             170875       1972          5199               693
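From these figures one can estimate how far each configuration is from linear scaling relative to the 20-job baseline. This is our own derived metric, not one reported by the test team:

```python
# Scaling efficiency of the 5-LI/order runs: measured throughput divided by the
# throughput that perfectly linear scaling from the 20-job baseline would give.
baseline_jobs, baseline_tp = 20, 2623  # line items / min
for jobs, tp in [(40, 4393), (60, 5199)]:
    ideal = baseline_tp * jobs / baseline_jobs
    print(f"{jobs} jobs: {tp / ideal:.0%} of linear scaling")
```

This yields roughly 84% of linear scaling at 40 parallel jobs and 66% at 60, consistent with the rising runtime per line item shown in the table.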

Graph 8.3-5: Comparison of throughput for varying levels of parallelism

[Chart: throughput in line items per minute for 20, 40 and 60 parallel jobs]

[6] Note: DMQ=SCM, DND=ECC, DSZ=CRM – 3 systems running in a 64-CPU shared processor pool. This graph shows the total of the 3 systems at average CPU utilization over the different runs.


Avg Physical CPU Utilisation of the high load phase


Run Number is03d1(DSZ) is03d2(DND) is03d3(DMQ)
196 14.3 4.4 11.7
197 14.0 4.4 12.3
198 14.0 4.8 12.6
194 23.8 10.5 22.0
195 24.2 9.6 21.4
190 24.8 9.9 23.2
191 24.1 10.2 23.7
192 23.1 9.5 22.1

Table 8.3-3: Physical CPU utilization per integrated system per run

[Bar chart: average physical CPU utilization during the high-load phase per run (196-198: 20
parallel, 194-195: 40 parallel, 190-192: 60 parallel), stacked by system - is03d1(DSZ),
is03d2(DND), is03d3(DMQ)]

Graph 8.3-6: Physical CPU utilization per integrated system per run with increasing parallelism

Note: DMQ=SCM, DND=ECC, DSZ=CRM

The graph above shows the CPU distribution over the three participating systems for each test
run; the number of parallel processes per run is documented in the table above. The pattern of
CPU distribution clearly shows that the ratio between the systems remains quite stable, which is
useful for sizing across the landscape.


8.3.4 Detailed Analysis of High Load Run

Deep Dive evaluation of Order Fulfillment


This section takes a close look at the behavior of the integrated solution by examining one
representative run. The run has the following characteristics:
• 60 parallel processes / 5 order lines per order
• 170.000 order lines created in CRM (144.000 order lines transferred to ERP and APO)
• Runtime: 28.02.2007 19:01-19:32; queue processing: 19:32-19:37

Graph 8.3-7: Total physical CPU utilization over 64 CPU shared pool, depicted by system. (Ref: run190)

Graph 8.3-7 shows the total physical CPU consumption broken down by integrated system.
The total utilization of all three systems peaked at 100% of the 64-way p5-595. The following
graphs look at the individual systems: the physical CPU utilized by each system is broken down
further into percentages per component, grouping all application servers together to show the
distribution between application servers and database.

Graph 8.3-8: CRM CPU utilization distribution by component


[Chart: CRM logical CPU usage over the run, split between DSZ_SAP (application servers) and
DSZ_DB2 (database)]

The CRM system maintains a consistent behavior pattern; it is the system driving the scenario.


Graph 8.3-9: ECC CPU utilization distribution by component

[Chart: ECC logical CPU usage over the run (144.000 locations/products, 60 parallel), split
between DND_SAP (application servers) and DND_DB2 (database)]

The ECC system is primarily processing qRFC queues and the erratic load behavior is a result of
the qRFC polling intervals.

Graph 8.3-10: SCM CPU utilization distribution by component


[Chart: SCM logical CPU usage over the run, split between DMQ_SAP (application servers),
DMQ_LC (liveCache) and DMQ_DB2 (database)]

The SCM system is processing ATP checks which also drive the liveCache and database.


[Chart: network IO in KB/s per system - is03d1(CRM), is03d2(ECC), is03d3(SCM) - stacked to
show the total; an annotation marks the end of SCM processing]

Graph 8.3-11: Network IO – kilobytes/sec – by system and totaled for all systems

Graph 8.3-11 represents the network communication between the systems of the integrated
solution. In this graph we can see where SCM finishes, and ECC continues communication with
CRM, completing the order process and queue processing. Each system is shown individually
with systems stacked in the graph to give the total communications throughput over all systems.

[Chart: disk IO in KB/s per system - is03d1(CRM), is03d2(ECC), is03d3(SCM) - stacked to show
the total]

Graph 8.3-12: Total SAN Traffic (read+write) depicted per system (LPAR) in KBs per second.

Each system in the integrated landscape has its own fiber channel resources but the storage server
is shared. Therefore it is interesting to see the combined IO requirements of the landscape during
this processing scenario.


The following graphs show the results collected from the statistical records (transaction STAD)
for the batch jobs which create the sales orders in CRM. We aggregated the statistical records for
all batch jobs. The different key figures have the following meaning:
o ABAP time: the CPU time on the application servers,
o DB time: the response time of the database measured on the application server,
o VMC time: elapsed time spent in the Virtual Machine Container (VMC),
o RFC time: elapsed time of the various RFC calls (internal RFCs e.g. CRM middleware
and external RFCs e.g. ATP checks),
o Enq time: elapsed time waiting for the request sent to Enqueue server.

Chart-Series 8-1: Processing response time by component and system.

[Pie charts (Chart 8.3-3): response time distribution of the order-creation batch jobs by
component - CRM (ABAP, DB, VMC, RFC, Enq time), SCM (ABAP, DB, liveCache, Roll, Enq
time) and ERP (ABAP, DB, RFC, Roll, Enq time)]


The following table and graph show the minimum, average and peak CPU utilization. Note that
the sum of the maximum utilizations of all systems exceeds the total number of available
physical CPUs. Here the system landscape benefits from the shared processor pool, because
during the test only one system was at its peak at any given time. If each system were sized
individually, the overall requirement would be 76 CPUs; using the shared processor pool saves
11 CPUs. There is more information on processor sharing in the chapter on virtualization.

Physical CPU Utilization per System


System Min Avg Max
is03d1(DSZ) 19.1 24.8 28.2
is03d2(DND) 0.3 9.9 15.6
is03d3(DMQ) 12.8 23.2 31.4
SUM 32.2 57.9 75.2
Table 8.3-4: Physical CPU utilization per system
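The savings from the shared processor pool follow directly from the table above: dedicated sizing must cover each system's own peak, while the shared pool only has to cover the peak of the combined load, because the peaks do not coincide. A small sketch of that arithmetic:

```python
# Per-system peak utilization from Table 8.3-4 vs. the 64-CPU shared pool that
# was actually the limit during the combined peak.
peaks = {"is03d1(DSZ)": 28.2, "is03d2(DND)": 15.6, "is03d3(DMQ)": 31.4}
dedicated_sizing = sum(peaks.values())   # sizing each LPAR for its own peak
observed_combined_peak = 64.0            # the whole shared pool
print(f"saved roughly {dedicated_sizing - observed_combined_peak:.0f} CPUs")
```

With rounding up to whole CPUs per system, this matches the roughly 11 CPUs saved that the text quotes.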

Graph 8.3-13: Physical CPU utilization – Fulfillment 60 Parallel

[Bar chart: average and maximum physical CPU usage per system - is03d1(DSZ), is03d2(DND),
is03d3(DMQ) - for the 60-parallel fulfillment run]

Graph 8.3-13 shows the CPU distribution over the three participating systems at both peak and
average utilization for a Fulfillment run with a volume of 144,000 Location/Products, running 60
parallel processes.


[Pie chart: total CPU distribution across the SAP components of the integrated landscape -
SCM SAP, SCM LC, SCM DB2, CRM SAP, CRM DB2, ECC SAP, ECC DB2]

Chart 8.3-4: Total CPU distribution over SAP components of the integrated landscape

[Pie charts: CPU utilization ratio within each individual system - SCM (SAP / liveCache / DB2),
ECC (SAP / DB2), CRM (SAP / DB2)]
These pie charts show the CPU utilization ratio within the individual systems: CRM shows
almost 16:1 application server to DB, SCM a 2.6:1 ratio, and ECC 8:1.

Chart-Series 8-2: CPU over SAP components within individual systems

8.3.5 qRFC Queues

The following table shows performance characteristics of the qRFC queues. Queues with a low
number of entries are not included, because they do not have any impact on the throughput.


The numbers are from run 190 (60 batch * 480 orders * 5 line items = 28800 orders with 144000
line items).

Table 8.3-5: performance characteristics of the qRFC queues


Queue Name   Usage                                  Source   Destination   Duration   Number
CFTG         remove temporary quantity assignments  CRM      APO           32 min     28800
CFSLS        customer requirements                  CRM      APO           32 min     28800
CSA_ORDER    CRM middleware (inbound)               -        -             40 min     28800
CSA_CMDOC    CRM middleware (inbound)               -        -             40 min     28800
R3AU_ORDER   create unchecked delivery in ERP       CRM      ERP           40 min     144000
R3AU_CMDOC   credit limit                           CRM      ERP           40 min     28800
R3AD_CMDOC   credit limit                           ERP      CRM           45 min     28800

Note: R3AU_ORDER queue is created per CRM line item.
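From the table, the average drain rate of each queue can be derived with a simple division; the sketch below shows the arithmetic on the published figures:

```python
# Average qRFC queue processing rate = entries / duration (entries per minute).
queues = {  # name: (entries, duration_min)
    "CFTG": (28800, 32),
    "CFSLS": (28800, 32),
    "CSA_ORDER": (28800, 40),
    "R3AU_ORDER": (144000, 40),  # one entry per CRM line item
    "R3AD_CMDOC": (28800, 45),
}
for name, (entries, minutes) in queues.items():
    print(f"{name}: {entries / minutes:.0f} entries/min")
```

R3AU_ORDER stands out at 3.600 entries per minute because, as noted above, it carries one entry per CRM line item rather than per order.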

8.4 Recommendations
8.4.1 Application Recommendations

VMC Configuration
For the VMC (Virtual Machine Container), enough swap space must be available at the operating
system level. A rule of thumb to estimate the additional swap space: (number of work processes
+ 1) * (application Java heap + VM Java heap) + shared pool size (see SAP Note 854170).

Diagram 8.4-1: virtual machine container memory


[Diagram: VMC memory layout - each Java VM in the Virtual Machine Pool has a VM Java Heap
(vmcj/max_vm_heap_MB) and an Application Java Heap (vmcj/maxJavaHeap, default 64 MB);
the Shared Pool (vmcj/option/ps) holds user sessions, engine shared data and cache; the number
of VMs in the pool is roughly the number of work processes (rdisp/wp_no_*); one VM serves as
template]

Example for calculation:


• 10 work processes
• 200 MB application Java heap (recommended for IPC, depends on Knowledge Base size)
• 64 MB VM Java heap


• 512 MB shared pool size


Required additional swap space is: (10 + 1) * (200 MB + 64 MB) + 512 MB = 3416 MB
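This rule of thumb can be captured in a few lines (a sketch for illustration; the function name is ours):

```python
# Rule-of-thumb estimate of the additional swap space needed for the VMC
# (see SAP note 854170): (work processes + 1) * (application Java heap
# + VM Java heap) + shared pool size.
def vmc_swap_mb(work_processes, app_heap_mb, vm_heap_mb, shared_pool_mb):
    return (work_processes + 1) * (app_heap_mb + vm_heap_mb) + shared_pool_mb

# The example above: 10 work processes, 200 MB application heap,
# 64 MB VM heap and a 512 MB shared pool give 3416 MB.
required_mb = vmc_swap_mb(10, 200, 64, 512)
```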

One possibility to improve the performance of VMC processing is to increase the number of
instantiated VM containers via transaction RZ10.
The parameter rdisp/min_jvm indicates the minimum number of Java VMs that are available for
HTTP (WEB) and RFC/RMI (REM) processing. The possible values are WEB=[0-n], REM=[0-m];
the default values are WEB=1, REM=1.
In the current standard scenarios that use the VMC, no direct HTTP communication with the VM
container takes place, so only the REM value is of importance.

8.4.2 Infrastructure Recommendations

Configuration of liveCache data volumes


The size and the number of data volumes are important for system performance. The number of
volumes determines the scaling of the I/O: depending on the number of configured volumes,
liveCache configures threads that are responsible for the I/O.
We recommend the following rule of thumb: use the square root of the database size in GB,
rounded up, to determine the number of MaxDB data volumes (see note 820824).
For example:

Size     Number of volumes
10 GB    4 volumes
50 GB    8 volumes
100 GB   10 volumes
200 GB   15 volumes
500 GB   23 volumes
1 TB     32 volumes

Graph 8.4-1: rule of thumb for liveCache volumes
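The rule of thumb can be sketched as follows (illustrative; the function name is ours):

```python
# Rule of thumb from SAP note 820824: the number of liveCache (MaxDB)
# data volumes is the square root of the database size in GB, rounded up.
import math

def livecache_volumes(size_gb):
    return math.ceil(math.sqrt(size_gb))
```

This reproduces the table above, e.g. 10 GB -> 4 volumes, 500 GB -> 23 volumes, 1 TB (1024 GB) -> 32 volumes.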

8.4.3 Database Recommendations

Self Tuning Memory Manager (STMM)


In general, the performance of the ERP and CRM databases that were running with STMM
enabled was good. For test purposes, the parameter DATABASE_MEMORY was set to
AUTOMATIC. With this setting, the DB2 memory tuner determines the overall memory
requirements for the database and increases or decreases the amount of memory allocated for the
database shared memory depending on the current requirements. Especially the CRM system
started with a very small memory footprint and rapidly adapted its memory usage to the
increasing workload.


Graph 8.4-2: memory utilization for DBs using self-tuning


Database Minimum memory consumption Maximum memory consumption
CRM 3 GB 20 GB
ERP 12 GB 17 GB

The bufferpool size varied between 5.5 GB and 10 GB for the ERP system and between 120 MB
and 11 GB for the CRM system.

Running with STMM causes some CPU overhead. Depending on your workload, STMM will try
to reallocate memory at intervals of 30 seconds up to 10 minutes; for workloads with less
constant memory profiles, the memory tuner will tune memory more frequently. If your workload
and memory profile are relatively constant, you can run with STMM enabled until your system is
tuned and STMM only makes minor changes to the memory configuration, then turn it off and
continue to run with the latest values set by STMM.

Specific recommendations for the ERP system


• Consider applying SAP OSS note 522550 to the tables VBDATA and DDLOG. This will
speed up access to these tables.

Specific recommendations for the CRM system


• Consider applying SAP OSS note 522550 to the CRM middleware tables SMW3_BDOC1
and SMW3_BDOC2 and to VBDATA and DDLOG. This will speed up access to these
tables.
• Occasionally apply SAP OSS note 206439 to clean up the CRM middleware tables. After
deleting the content as described in this note, you should also reorganize these tables at
database level.
• Apply SAP OSS note 893023 to turn on buffering for the table SAPLIKEY. This table is
frequently accessed during RFC processing; performance will improve with buffering.

Recommendations for the SCM system


• Consider applying SAP OSS note 522550 to the tables VBDATA and DDLOG. This will
speed up access to these tables.
• Apply the code changes in SAP OSS note 975484 to reduce locking on table
/SAPAPO/TBHEAD.


8.5 Service Parts Fulfillment – Customer Stress Test

We ran a special test case in order to prove a specific customer requirement. The customer
requirement was to create 31,150 orders with a total of 75,000 order line items in 15 minutes.

The distribution of order line items was:

Number of Orders   Number of Order Line Items
20,000             1
10,000             3
1,000              10
150                100
Total: 31,150      75,000

Graph 8.5-1: order line items per order

So on average each order had 2.4 line items. The special challenge of this customer requirement
was the large number of orders with a single order line item, which leads to a much larger amount
of order header processing compared to the other tested scenarios, as well as a much higher
communication load between the three integrated systems.
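The order-mix arithmetic can be verified in a few lines:

```python
# The customer's order mix: line items per order -> number of orders.
mix = {1: 20000, 3: 10000, 10: 1000, 100: 150}

orders = sum(mix.values())                      # 31150 orders
lines = sum(li * n for li, n in mix.items())    # 75000 order line items
avg_lines_per_order = lines / orders            # ~2.4 line items per order
```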

The capacity of a single 64-way p5-595 was not enough to achieve this requirement. Therefore
we had to temporarily add a second 64-way p595 to the system landscape:

[Diagram: hardware infrastructure for the customer stress test – two 64-way p5-595 servers (YLI065 and MILLAU) hosting the SCM (DMQ), ERP (DND) and CRM (DSZ) systems on AIX 5.3 TL05, attached via 2 Gb/s fibre channel to a DS4800 storage unit with 4 expansion units and interconnected via 100 Mb full duplex Ethernet]

Diagram 8.5-1: Overview of hardware infrastructure for Customer Stress Test


To minimize the amount of time required to add the second system to the landscape, we
configured it as a single LPAR and initially moved the complete CRM system over to the new
LPAR. However, initial tests showed that this did not free up enough capacity on the first machine,
so we moved the application server instances of the ERP system to the new p595 as well. Again
to save some time, those application server instances were co-hosted in the same LPAR running
the CRM system. The final system layout looked like this:

Table 8.5-1: SAP configuration – component per LPAR

HW      System Name  LPAR   System Role
p595-1  DMQ          lpar1  SCM database + application instances
p595-1  DND          lpar2  ERP database instance
p595-2  DSZ          lpar1  CRM database + application instances
p595-2  DND          lpar1  ERP application instance
This does not represent implementation best practice but was necessitated by the time limit of the
project.

With this setup we were able to run 100 order entry jobs in parallel and achieved the target of
running the customer-specific distribution in just under 15 minutes.

The job run times were:

• Avg: 792 sec
• Min: 680 sec
• Max: 890 sec (14 min 50 sec)
Graph 8.5-2: Physical CPU utilization per LPAR
[Line chart "Total CPU Consumption 26.02.2007": physical CPU usage of the DMQ, DND-DB and DSZ+DND-AS LPARs from 11:31 to 11:49, y-axis 0–120 physical CPUs]

Graph 8.5-2 shows the combined physical CPU consumption of all three systems over the period
of the test. The peak usage was about 97 physical processors shortly after the start of the scenario


and then averaged out at about 90 physical processors. During that time the ERP database used
about 2 physical processors, and the rest was distributed almost equally between the SCM system
and the combined CRM plus ERP application server system. Due to the limited time available for
this scenario, we missed a small problem with the setup of the monitoring tools and are
unfortunately not able to break down the combined workload of the CRM and ERP application
server system into its individual components. It can be assumed that the load ratios observed in the
other test scenarios apply to this test scenario as well.

Some key performance results of this challenging customer scenario were:

Table 8.5-2: Summary of KPI requirements for customer scenario


Performance figure                         Result
Number of orders/hour                      126000
Number of order line items/hour            303400
Average number of order line items/order   2.4
Global ATP requests/hour                   303400
Processed qRFC queues/hour                 660000
IPC requests/hour                          658000
GTS checks/hour                            126000
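The hourly rates follow from scaling the processed volumes to one hour using the slowest job's run time – a quick consistency check (our arithmetic, not part of the original measurements):

```python
# Consistency check: 31150 orders / 75000 line items were processed in at
# most 890 s (the slowest of the 100 parallel jobs), which scales to the
# hourly rates reported in the KPI table.
orders, line_items = 31150, 75000
max_runtime_s = 890

orders_per_hour = orders * 3600 / max_runtime_s      # 126000 orders/hour
lines_per_hour = line_items * 3600 / max_runtime_s   # ~303400 line items/hour
```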


8.6 Combined Load and the Shared Processor Pool

The following graphs depict the Order Fulfillment Scenario for all three systems on a single
timeline.
They were created using data from a tool called LPARSTAT, which supports 1-second intervals,
and show a section of the fulfillment load. With 1-second intervals it is possible to observe more
closely the load behavior that is handled by the shared processor pool. In this picture, two of the
systems are running in the shared processor pool of the one 64-way p5-595: DMQ and DND.
DSZ, the CRM system, was moved to an additional server to cover the additional CPU
requirements of the customer stress test. These snapshots at 1-second intervals show quite a
different picture from the CPU utilization graphs in the preceding chapters, where the intervals
were averaged over several minutes. The Hypervisor scheduler works in 10 ms time slices, so
these charts make it easier to see how the virtualization layer is able to interleave the disparate
requirements of several concurrent systems, covering the peak loads of each.

Graph 8.6-1: Physical CPU consumption per system in 1 sec snapshots - stacked
[Stacked area chart "Virtualization – System landscape": physical CPUs utilized by DSZ-CRM, DMQ-SCM and DND-ECC, y-axis 0–140]
Graph 8.6-1 shows the stacked CPU consumption – the total requirement was for 120 physical
CPUs to cover the combined peaks for all three systems.

Graph 8.6-2: Physical CPU consumption per system in 1 sec snapshots -unstacked
[Area chart "Virtualization – System landscape": physical CPUs utilized by DND-ECC, DMQ-SCM and DSZ-CRM individually, not stacked, y-axis 0–60]


Graph 8.6-2 shows the same view as Graph 8.6-1, except that the CPU consumption is not stacked.
This gives a feeling for the interleaving possibilities.

Using the same LPARSTAT output, Graph 8.6-3 shows the combined load of the SCM and ECC
systems as they ran in the shared processor pool. These are not stacked; the green
"combinedLoad" series depicts the total peak requirement during each 1-second interval. As we can
see, the combined load peaks often appear to exceed the physical machine capacity (64
CPUs). This is possible because the actual peaks occur in 10-millisecond time slices within the 1-
second summary captured by LPARSTAT. This picture emphasizes the flexibility and benefits of
the shared pool.
Graph 8.6-3: Physical CPU consumption per system, and combined on single p595 CEC –not stacked
[Line chart "Virtualization – System landscape": physical CPUs utilized by DND-ECC and DMQ-SCM plus their combined load ("combinedLoad"), not stacked, y-axis 0–80]

8.6.1 General Recommendations for Service Parts Management Landscape

Virtual Processors – as many as necessary, no more than needed.


All virtual processors compete for physical processor resources. The System p hypervisor and
AIX support an advanced feature called processor folding, which optimizes the dispatching of
virtual processors. With processor folding enabled (recommended), unused virtual processors in
an LPAR are deactivated over time (one per second). This reduces the effect of over-allocating
virtual processors, depending on the load behavior. It can have a short impact on performance in
the case of load bursts, where a number of threads become runnable simultaneously after a
period of reduced utilization (less than 80%). The dormant virtual processors are reactivated, by
default one per second, while the high load continues. AIX provides the tuning parameter
"vpm_xvcpus", which can be used to tune the number of dormant virtual processors that are
reactivated on a burst of activity, in case the load is known well enough for such fine tuning to be
desirable.
Virtual processors are the access paths from the LPAR to the physical CPUs. There should be
enough virtual processors in the LPAR to allow the LPAR to reach the maximum physical CPU
capacity you wish to allow this LPAR to use. If the LPAR is uncapped and its load can consume
up to 16 CPUs but only 8 virtual processors are defined, it can only access the equivalent of 8

physical CPUs. Therefore there need to be enough virtual processors to allow for peak
processing, but no more, in order to optimize the hypervisor dispatching to physical CPUs.

Processing Units or Entitlement – High to online, lower to batch.


The entitlement is reserved capacity. Reserved capacity is immediately available to the owner
LPAR, but can be shared by others if not required by the owner. This setting is a powerful means
of controlling resource allocation priority. By setting the entitlement high on those LPARs which
support response-time sensitive loads, these LPARs can compete effectively with larger batch
loads which may be running with much higher resource requirements.

Weighting: High to online, low to batch.


The weighting of an LPAR determines its allocation of the resources which are available for
sharing (not in use by entitlement owner). By setting a higher priority for LPARs supporting an
online or response time sensitive load, these LPARs will automatically get a quicker response to
resource requirements. Using weighting with entitlement, it is possible to protect the response
times of a “bursty” online load against a heavy batch load and still share the CPU resources
effectively.

Configuration Example
Using the example of the load documented in the graphs above, these would be the starting
recommendations:

OS Settings:
• CPU folding enabled (vpm_xvcpus=0)
• Memory affinity disabled (memory_affinity=0)
Disabling memory affinity results in a better use of the memory pools in the virtual environment.

Virtualization Settings:
The following is a proposal for the virtualization settings and policies for ECC and SCM sharing
the processor pool. It is based on the utilization monitoring graphs above, and assumptions about
the load profiles for these two systems.

ERP
The assumption is that ERP will also be handling online activity and is not dedicated to the order
fulfillment scenario. The load profile is burst oriented. The objective is to support response-time
sensitive processing.
• Entitlement: 22
The ERP peak is currently at 22, and as this is only covering the qRFC requirements, it
will be given 22 as its entitlement to start with.
• Virtual processors: 30
ERP is currently using 22 physical CPUs. Expecting that it will also have online
requirements, we give it the option to expand to 30 CPUs. Having done this, we would
ensure that the qRFC configuration is set up to limit the parallel queue processing so that it
cannot consume all of the CPU capacity for queue handling. We wish to use these
additional virtual processors to cover online peaks.


• Weighting: 192
The weighting is set relatively high to ensure priority for this LPAR in the distribution of
shared CPU resources.
• Type: Uncapped
This LPAR is able to exceed its entitlement of resources.

SCM
The assumption is that this system is primarily batch. The objective is to allow it to use all free
resources in the shared pool which are not needed by higher-priority systems.
• Entitlement: 30
The average, as seen in the graph above, is about 40 physical CPUs; the minimum was
20. We start by giving it an entitlement of 30 physical CPUs, as we do not want to
reserve too much of the shared resource, and this is an aggressive workload.
• Virtual processors: 60
In the 1-second snapshots above we see this LPAR peaking at about 52 CPUs. Within
each snapshot there were likely higher bursts of activity. We set the maximum CPU
utilization of the LPAR to 60 physical CPUs by allocating it 60 virtual processors.
This allows it to use double its actual entitlement.
• Weighting: 64
The weighting is set relatively low. It can only access CPU above its entitlement if other
priority systems do not need this capacity.
• Type: Uncapped

Proposal for Combining All Three


CRM – The CRM system, depicted in Graph 8.6-1 and Graph 8.6-2, is not running in the
same processor pool. Nevertheless, if we were to consider placing it into the shared pool with the
other systems, based on the utilization from these charts, we might set the initial SPLPAR settings
based on the following observations:
1) the load is very erratic – very burst oriented
2) the load peaks at 55 physical CPUs
3) the CRM load drives this scenario
We would want this load to be able to achieve high peaks quickly and get the resources it requires
to drive the rest of the load at its best throughput, but not to impact the ERP online load. To add this
LPAR into the shared pool, the other system SPLPARs would also need to be modified, as the
entitlement is a resource reservation and the total of the entitlements cannot exceed the size of the
pool. Reworking the values to add all three into the pool could look like the table below.

System  Peak Physc  Entitlement  Virtual Processors  Weighting  Profile
CRM     53          21           60                  128        Batch, driving system, online
SCM     55          21           64                  64         Batch, driving system, online
ECC     22          22           30                  192        qRFC dialog, online
Total   130         64           150

Graph 8.6-4: Example of a shared pool configuration
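Because the entitlement is a hard reservation, the entitlements of all SPLPARs in a pool must fit into the pool – a small sketch of that check against the values above:

```python
# Proposed shared-pool settings from Graph 8.6-4:
# system -> (entitlement, virtual processors, weighting)
config = {
    "CRM": (21, 60, 128),
    "SCM": (21, 64, 64),
    "ECC": (22, 30, 192),
}
pool_size = 64  # one 64-way p5-595 CEC

# Entitlement is reserved capacity, so the sum must not exceed the pool size.
total_entitlement = sum(ent for ent, vp, weight in config.values())
fits = total_entitlement <= pool_size   # 21 + 21 + 22 = 64 -> fits exactly
```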


• Entitlement: The entitlement is shared over the three systems. The ECC entitlement is
larger in relation to its peak physical CPU consumption; here we try to ensure rapid access
to resources in support of online response times.
• Virtual processors: The number of virtual processors is related to the peak physical
CPU requirements. All allow a high upper limit.
• Weighting: The relative weighting is set such that ERP has high priority, CRM
medium priority, and SCM low priority. The expectation is that during planning runs,
SCM could flood the system if it were not for its lower priority. During gATP
requests driven by the CRM, the CRM must wait for a response, so the resources it uses
are relinquished and used by SCM. The background qRFC activity will be
processed at low priority on the SCM side and at higher priority on the ERP side.

This chapter attempts to show the logic behind an initial virtualization configuration using the
order fulfillment scenario. An integrated virtualization scenario will have to be monitored and tuned
like any other integrated system. The benefit of virtualization and dynamic LPAR features is that
these tuning activities can be done dynamically, without interruption to the workload.


9 Service Parts Warehousing


9.1 Business Process Description for Outbound Processing with
Warehouse Management in SCM
9.1.1 Purpose
The outbound business processes of SAP Extended Warehouse Management are designed for
optimization of the overall process steps for picking, packing, and preparing goods for shipment
from the warehouse. Outbound processing starts with the creation of an outbound delivery that
typically has been created based on reference documents such as sales orders or stock transport
orders. The outbound delivery serves as the basis for the subsequent process steps.
A typical outbound business process in Extended Warehouse Management may comprise
the following activities:
• Notification of goods to be supplied from a warehouse to a customer (for which the outbound
delivery serves as the reference document)
• Picking
• Packing
• Loading
• Goods issue
• Finalizing the delivery process

In the tested scenario, the stock is picked from the source bin into a container (pick handling
unit). The container is then moved to a packing station, at which point all of the containers for
the order are over-packed into a larger shipping container (pack handling unit). This shipping
container is then moved to the staging area, loaded onto a trailer, and finally the delivery is
posted for goods issue at the time of departure of the trailer. This process is very typical in a
service parts environment, where small orders with just a few lines (or less) are common.
Because there is processing overhead associated with each order, the number of order lines on
each order is kept consistently small for the performance test in order to test under the most
processing intensive conditions (simulating a near “worst case” condition for the warehouse).


9.1.2 Process Flow

Diagram 9.1-1: Flow chart for warehouse outbound business process.


Diagram 9.1-2: Graphical representation of warehouse outbound business process.

Warehouse Business Processes


As documented in the flow chart above, the outbound business process includes the warehousing
processes of picking, packing, staging, loading, and goods issue. In the picking process, the
items are picked from various pick locations in the warehouse (as determined by the picking
strategies assigned to the product) and placed into a picking handling unit. In this case, two
items are picked from one activity area into one handling unit, and one item is picked from
another activity area. The items are picked into separate handling units, and both handling units
are independently transferred by the respective picker to the packing station.
At the packing station, the operator combines each of the picking handling units with the picking
handling units from other areas for the same order into an “overpack” or “nested” handling unit.
The nested handling unit is then moved by the packer to the outbound area of the packing station
where it will be picked up by an equipment operator and moved to the staging area. From the
staging area, the nested handling unit is loaded onto the trailer. As the final step, the trailer is
posted for goods issue from the outbound door.
The entire business process is facilitated via warehouse orders which are comprised of warehouse
tasks. The warehouse tasks indicate the individual movements for products from one location to
another, and the warehouse orders group together the tasks into a unit of work which can be
performed by a single operator, according to the configured warehouse order creation rules.
The outbound business process is similar to the other business processes in the warehouse in that
they all use warehouse tasks and related warehouse orders. The warehouse tasks and
related warehouse orders are created based on the determination of the relevant warehouse order


creation rules and according to the configuration defined in the process oriented storage control.
For this reason, other business processes in the warehouse would have similar performance
characteristics as the outbound processes (though perhaps with fewer steps and thus better
performance in terms of lines per hour). For the sake of repeatability of the business process and
the usability of the process within a defined script, only the outbound business process was used
in performing the mass test for performance and scalability.

9.2 How was it tested?


In order to test the performance and scalability of the warehouse operations, a single repeatable
process was designed and repeated in multiple sessions simultaneously. The simultaneous
processing was facilitated using operating-system-level scripts which emulated the sessions and
performed the prescribed steps in the respective order for each assigned document. The documents
were created ahead of time and replicated to the EWM server using the standard qRFCs for
replication. The script started with the initial processing steps within EWM, namely task creation
for the picking tasks. The script read the assigned document numbers from a flat file and created
the relevant sessions, with 3 documents being assigned to each session and processed in
succession in order to spread the load across a time period of several minutes. This allowed
sufficient processing time for accurately measuring the results. The test was repeated several
times with various numbers of simultaneous processors. At the start of each performance run, the
number of simultaneous processors was ramped up, with a new group of processors starting every
few seconds until all processors were logged in and processing transactions. The results were then
captured during the time period when all processors were logged in and processing. Finally the
results were analyzed, with emphasis placed on CPU utilization compared to lines per hour and
response times per transaction code – common measures of performance for warehouse IT
operations.
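The document-assignment logic of the driver script might be sketched as follows (a hypothetical reconstruction; the actual scripts ran at operating system level, and the document numbers are illustrative):

```python
# Hypothetical sketch of the test harness: pre-created delivery document
# numbers are read from a flat file and dealt out to emulated sessions,
# three documents per session, to be processed in succession.
def assign_documents(doc_numbers, docs_per_session=3):
    """Chunk the document number list into per-session work lists."""
    return [doc_numbers[i:i + docs_per_session]
            for i in range(0, len(doc_numbers), docs_per_session)]

# nine illustrative document numbers -> three emulated sessions
docs = [f"80000{n:03d}" for n in range(9)]
sessions = assign_documents(docs)
```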

9.3 Results and Scalability


Graph 9.3-1: Lines per hour versus percent of CPU utilization

CPU %   Lines/hour
4.5     2288.1
9.7     4576.3
13.6    6864.4
17.6    9152.5
22.8    11440.7
28.3    13728.8
56.9    16016.9
73.4    18305.1


In Graph 9.3-1, the results in terms of the number of lines per hour versus CPU utilization are
displayed. We note that for the data point corresponding to 300 simultaneous processors, only
28% of the overall CPU is used. At this point, over 13,000 lines per hour are being processed by
the system.

CPU %   Steps/sec
4.5     4.4
9.7     8.9
13.6    13.3
17.6    17.8
22.8    22.2
28.3    26.7
56.9    31.1
73.4    35.6

Graph 9.3-2: Steps per second versus percent CPU utilization

Graph 9.3-2 shows similar data expressed as steps per second, where each step is an interaction
with the system in which some data is entered and a function is executed by pressing Enter or
another function key. Viewing the data in this way allows easier comparison across multiple
warehouse processes, since other processes use a different number of steps to perform their
function.
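Combining the two tables, the steps-per-second figures divided by the lines-per-second figures come out at roughly seven interaction steps per order line for this outbound process – a derived observation from our own cross-check, not a figure stated in the measurements:

```python
# Cross-check of the two throughput tables: steps/sec divided by lines/sec
# yields the (roughly constant) number of interaction steps per order line.
lines_per_hour = [2288.1, 4576.3, 6864.4, 9152.5, 11440.7, 13728.8, 16016.9, 18305.1]
steps_per_sec = [4.4, 8.9, 13.3, 17.8, 22.2, 26.7, 31.1, 35.6]

steps_per_line = [s / (l / 3600.0) for s, l in zip(steps_per_sec, lines_per_hour)]
# each measurement implies about 7 steps per order line
```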

CPU %   Avg response time (ms)
4.5     200
9.7     204
17.6    205
22.8    218
28.3    223
56.9    403
73.4    660

Graph 9.3-3: Mean response time of transactions versus number of simultaneous processors.


Graph 9.3-3 indicates the mean response time of the individual steps. The system maintains a
fast mean response time (less than 0.25 seconds per step) even while the number of simultaneous
processors is ramped up.

[Chart "Physical CPU Usage": average and maximum physical CPU count, with a linear trend of the average, for 50 to 300 simultaneous processes]

Graph 9.3-4: Physical CPU usage versus number of simultaneous processors.

Graph 9.3-4 indicates the number of physical processors being used on average and at peak load,
as the number of simultaneous processes is scaled up. The trend-line shows a nearly linear
scalability to 300 simultaneous processes.

[Stacked bar chart "CPU Utilization in Categories": User%, Sys%, Wait% and Idle% for 50 to 300 simultaneous processes; User% grows from 15.4% at 50 processes to 86.9% at 300, Sys% from 1.2% to 8.6%]

Graph 9.3-5: Logical CPU usage versus number of simultaneous processors.

Graph 9.3-5 shows the CPU utilization split into categories. With an increase in load, there is
an increase in kernel CPU utilization (Sys%), which normally indicates application
synchronization or OS-level overhead (system calls, process swapping).


[Bar chart "SysRatio": ratio of system CPU to user CPU (%Sys_vs_User) for 50 to 300 simultaneous processes, y-axis 0–30]

Graph 9.3-6: Ratio of overhead (System CPU %) increase in relation to production load (user CPU utilization).

In Graph 9.3-6, the kernel utilization is put into the context of the total load. It shows an increase
in the ratio of kernel to user CPU from 7% to 10% while the load was scaled up by a
factor of 6. This indicates a slight trend toward increasing overhead, but is still a healthy
utilization ratio.

[Bar chart "Fiber-Channel I/O Max": maximum fibre channel read and write throughput in kB for 50 to 300 simultaneous processes, y-axis 0–3500 kB]

Graph 9.3-7: Maximum fiber channel input/output utilization versus number of simultaneous processors.

Graph 9.3-7 shows the maximum fibre channel input/output utilization versus the number of
simultaneous processes, representing the system load throughout. There is a deviation in the
pattern between 50 and 100 processes which indicates a higher read volume and is probably the
result of a DB restart. Otherwise, the pattern indicates that each step of 50 additional processes
increases the I/O requirements by about 500 kB/sec.


[Stacked bar chart "WLM-Memory Avg": average memory utilization in % over System, DMQ_SAP and DMQ_db2 for 50 to 300 simultaneous processes]

Graph 9.3-8: Ratio of memory utilization over components

Graph 9.3-8 shows the memory utilization ratio over the system components. As memory
allocation is relatively static in an SAP environment, consisting of application server and
database memory pools, there is not much increase in memory as a result of load scaling. Using the
AIX memory management method (ES/SHM) in the SAP application servers, the full shared
memory requirement of the high load does not have to be pre-allocated. This memory method
allows the user contexts to grow on demand and unused storage to be released back to the
system; this portion of the memory can grow and shrink according to the load demands.
In a 2-tier system, as used for these tests, the majority of the memory is utilized by the database,
and this remains quite static. The oscillation in application server memory is very small in
comparison and not really visible in the total picture.

[Stacked bar chart "WLM-CPU Max": peak CPU utilization in % over System, DMQ_SAP and DMQ_db2 for 50 to 300 simultaneous processes; at 300 processes about 89% SAP application server and 8% DB2]

Graph 9.3-9: CPU utilization distribution over components at peak utilization.


Graph 9.3-9 shows the distribution of the CPU utilization over the components of the system,
i.e. the relative utilization between the database and the SAP application servers. For a 3-tier
system, this ratio determines the physical landscape. The trend shows an 11:1 ratio – 11
application server CPUs for each DB CPU used in this scenario.

9.4 Recommendations
9.4.1 Application Settings
In the SAP SCM configuration and technical settings, the following recommendations are made
to improve the performance of processing in Extended Warehouse Management. These
recommendations were followed during the performance testing in order to achieve the
results documented above.

The latest relevant and available support package should always be applied to the system to
maximize performance. A note search should be performed to find any performance-related notes
which may have been released after the latest support package. In our test,
SCM service pack 4 was applied to the system and several performance notes from later support
packages were also applied individually.

All irrelevant or extraneous logs, outputs, workflows, or follow-on actions should be deactivated
at the earliest possible time in the business process, eliminating the overhead of evaluating the
relevance for the activity. In our test, all outputs and logs and any irrelevant post-processing
activities were deactivated. In addition, change documents of the delivery were deactivated using
the standard customizing, eliminating extraneous data flow between SCM and ERP and
extraneous processing cycles in SCM.

In the test, certain configuration changes were made to the warehouse in order to ensure single
threaded processing which would normally be ensured by the physical process – for example, at
the packing station or at the staging area. These changes should not be necessary in a customer
system, but were required here due to the testing methodology.

Number range buffers for warehouse documents (e.g. warehouse order, warehouse task) were
increased from 10 to 100 in order to reduce the system load caused by buffer exchanges. Note
that this is considered a modification to the number range object parameters and should only be
undertaken if deemed necessary for performance reasons. In most cases, where warehouse orders
are created by wave release functions, this sort of change would not be necessary.
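The effect of the buffer size on the number of buffer exchanges is simple to estimate (a sketch; the document volume below is a hypothetical figure, not taken from the test):

```python
import math

def buffer_exchanges(documents, buffer_size):
    """Number-range buffer refills needed to assign `documents` numbers."""
    return math.ceil(documents / buffer_size)

docs = 100_000  # hypothetical number of warehouse tasks created per day
print(buffer_exchanges(docs, 10), buffer_exchanges(docs, 100))  # -> 10000 1000
```

Increasing the buffer from 10 to 100 thus cuts the number of exchanges by a factor of ten for the same document volume.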

Due to the high degree of variability of potential business processes in the warehouse,
customers should note that process variation in Extended Warehouse Management strongly
affects system resource consumption. The business processes should therefore be carefully
mapped before undertaking any sizing analysis, including use of the SAP Quicksizer (available
via the SAP Service Marketplace). In addition, for businesses with very large volumes (those
near to or exceeding the volumes demonstrated here), a more thorough analysis of system
performance requirements should be undertaken before final hardware setup and configuration
decisions are made.

9.4.2 Database Settings


This application issues frequent commits of small transactions. Therefore, logging should be
configured for optimal performance to support this:
• Data and the log files should be stored on different file systems, not only for security
reasons but also for best performance.
• It may be beneficial to set the value of the database configuration parameter mincommit to
a value greater than 1. This parameter controls commit grouping and allows you to delay
the writing of log records to disk until a minimum number of commits have been
performed. This will result in more efficient logging file I/O as it will occur less
frequently and write more log records each time it does occur. Grouping of commits will
only occur when the value of this parameter is greater than one and when the number of
applications connected to the database is greater than or equal to the value of this
parameter. When commit grouping is being performed, application commit requests could
be held until either one second has elapsed or the number of commit requests equals the
value of this parameter. If you increase mincommit, you might also need to increase the
database configuration parameter logbufsz to avoid having a full log buffer force a write
during these transaction intensive periods. In this case, logbufsz should be equal to
mincommit * (log space used, on average, by a transaction).
If you experience long commit times for highly parallel workloads you may want to tune
this parameter by increments of 1.
• The database configuration parameter softmax influences the number of logs that need to
be recovered after a database crash and determines the frequency of soft check points.
Increasing the value of this parameter will trigger the page cleaners less often and reduce
the number of soft check points which reduces the overhead of logging. The
recommended initial value of this parameter according to SAP OSS note 899322 is 300.
Consider increasing this value to reduce logging overhead if you can accept a longer
recovery time after a database crash. For example, setting the value of this parameter to
1000 reduces the number of soft check points and the number of times the page cleaners
are triggered but requires 10 instead of 3 log files to be recovered after a database crash.
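The two rules of thumb above can be expressed directly (a sketch; the 16KB-per-transaction figure is an assumed example, not a measurement from this test):

```python
import math

def logbufsz_pages(mincommit, avg_tx_log_bytes, page_bytes=4096):
    """logbufsz (in 4KB pages) = mincommit * average log space per transaction."""
    return math.ceil(mincommit * avg_tx_log_bytes / page_bytes)

def logs_to_recover(softmax):
    """Approximate log files to replay after a crash: SOFTMAX is expressed
    as a percentage of one log file."""
    return softmax // 100

print(logbufsz_pages(4, 16 * 1024))                 # mincommit=4, ~16KB/tx -> 16 pages
print(logs_to_recover(300), logs_to_recover(1000))  # -> 3 10
```

The second function reproduces the trade-off described above: SOFTMAX 300 means about 3 log files to recover, SOFTMAX 1000 about 10.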

The Extended Warehouse Management application frequently accesses data that is stored in tables
containing LONG VARCHAR, BLOB and DBCLOB fields. To optimize access to this data,
consider storing these fields in separate tablespaces with file system caching enabled. In SAP
installations, by default, 2 tablespaces are associated with each data class: a data and an index
tablespace. LONG and LOB data is stored in the data tablespace. You can specify an additional
tablespace for LONG/LOB data in the table storage parameters section of file DDLDB6.TPL
when installing the SAP system.

10 Benefits of Advanced Power Virtualization


System landscapes with massive peak demands occurring at different times are ideal candidates
for exploiting the Advanced Power Virtualization features of the IBM System p platform. For
example, the Service Parts Planning scenario of the Service Parts Management solution would
typically run during weekends, possibly at night, while the Service Parts Fulfillment scenarios
would run during the day. System landscapes using shared processor pools only need to be
designed for the combined peak load; capacity does not have to be reserved for each individual
peak load. Single systems can automatically exploit the total capacity available in the shared
processor pool without requiring any configuration changes.

For example, in the largest test scenario for Service Parts Planning, the average peak load
was about 55 processors, with occasional peaks reaching up to 60 processors. In the largest
scenario for Service Parts Fulfillment, the peaks of the individual components were about 25
processors for SCM, 10 processors for ERP, and another 25 processors for the CRM system. The
combined total is again about 60 processors.

To satisfy the same peak demands for both scenarios with dedicated systems, one would have to
allocate 60 processors for SCM and another 35 processors for ERP plus CRM, for a total of 95
processors. Hence the exploitation of the shared processor pool feature allows for processor
savings of almost 40% for this specific scenario. Another advantage is that even with the lower
total number of processors, the ERP and CRM systems could easily grab more than their planned
peak resources of 10 and 25 processors, respectively (should they need it), without any
configuration changes.
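The savings figure can be verified directly from the numbers above:

```python
dedicated = 60 + 25 + 10  # SCM, CRM and ERP each sized for its own peak
shared = 60               # pool sized for the combined peak only
savings = 1 - shared / dedicated
print(f"{savings:.0%}")   # -> 37%, i.e. almost 40%
```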

A complete customer system landscape would also include a number of additional supporting
systems for development, test, quality assurance, training, etc. Taking these systems into
consideration, as well as the additional demand for failover systems for the production
landscape, one can conclude that the savings potential for processor resources is typically even
larger than the roughly 40% shown above.

11 Database Technology
11.1.1 Properties of DB2 for Linux, UNIX and Windows

High End Scalability

DB2 provides high-end scalability through two classes of parallelism:

• Intra-partition parallelism can be used on any server machine with more than one CPU. With
intra-partition parallelism, DB2 can start subagents that each work on a subset of the data.
This kind of parallelism requires no administrative overhead.

• Inter-partition parallelism is available with the Database Partitioning Feature (DPF).

DPF can be used in environments with several database server machines, or database partitions
can be created logically on large SMP machines. A database partition is a part of a database that
consists of its own data, indexes, configuration files, and transaction logs. Tables can be
distributed over several database partitions. Even though the database is partitioned, it appears to
be a single database for users and applications. Queries are processed in parallel in each database
partition, as well as maintenance operations like table reorganization and index creation and data
loads. For backup and restore, once the first database partition (the catalog partition) is processed,
the other database partitions can be backed up and restored in parallel.

DPF is supported for SAP NetWeaver BI and all applications based on SAP NetWeaver BI. This
includes SAP SCM.

Ease of use and configuration

Automatic Storage Management

Automatic Storage Management was introduced in DB2 UDB V8.2.3 for single-partition
databases. In DB2 9, it is also available for multi-partition databases.

With automatic storage management, DB2 can manage its storage space by itself. Tablespace
containers are either created in the home directory of the DB2 instance owner, on a path or drive
specified during database creation or on multiple storage paths specified either during database
creation or added later. Instead of the complete database, single tablespaces can also be created as
automatic storage tablespaces. The containers are then created in the storage paths available for
the database.

Automatic storage management simplifies space management considerably. Database
administrators only have to ensure that there is enough space in the storage paths defined for the
database or add additional ones. When tablespaces get full, containers are extended automatically
as long as there is space available in the storage paths.

SAP NetWeaver 2004s systems except SAP NetWeaver BI are installed with automatic storage
by default. Beginning with SAP NetWeaver BI 2004s SR2, SAP NetWeaver BI systems on DB2
9 are also installed with automatic storage management by default.

Automatic memory management

DB2 9 contains the self-tuning memory manager (STMM). With STMM, database memory
parameters, like size of the sort area, buffer pools, lock list and package cache, are adapted
automatically depending on the workload.

STMM works in two modes which are distinguished by the setting for the database configuration
parameter DATABASE_MEMORY. This parameter specifies the total amount of shared memory
available to the database.

In the first mode, DATABASE_MEMORY is set to a numerical value or to COMPUTE.
COMPUTE means that DATABASE_MEMORY is calculated by DB2 at database activation
time. If the database configuration parameter SELF_TUNING_MEM is set to ON and at least
two memory consumers are set to AUTOMATIC, STMM starts to work by balancing the overall
available memory resources between the consumers which are set to AUTOMATIC.

In the second mode of STMM, DATABASE_MEMORY itself is set to AUTOMATIC. In
this mode, STMM also takes memory from the operating system if it needs it and if it is available.
Memory can also be given back to the operating system if no longer needed. This mode is only
available on the Windows and AIX platforms.
In new SAP DB2 9 installations, STMM is switched on by default for all memory consumers. For
more information, see SAP OSS notes 899322 (special section about STMM) and 976285.

Automated Table Maintenance

DB2 9 offers automatic statistics collection and table reorganization.

Deep Compression

Deep compression uses a compression technique with a static dictionary to compress the data
rows of a table. The dictionary has to be created after a representative sample of table data is
available. It is created by either an offline table reorganization which also compresses the existing
table data or via the DB2 INSPECT command which only creates the dictionary without
compressing the existing data. In this process, recurring patterns in the table data are identified
and replaced by shorter symbol strings. The patterns can span multiple columns. The dictionary is
stored in the table header and loaded into memory when the table is accessed.

With deep compression, only table data is compressed; indexes and LONG and LOB data are not.
The data is compressed on disk and in the bufferpool. Log records for compressed data contain
the data in compressed format. This has the following advantages:
• Disk storage can be saved. Tests with customer data have shown that for SAP data
compression ratios up to 80% can be achieved. In some customer cases, SAP NetWeaver
BI databases could be reduced to 50% of the consumed disk space overall. This reduces
the TCO considerably.
• The bufferpool hit ratio is increased because the data is stored compressed in the
bufferpool.
• The IO data transfer can be potentially reduced (for example for range queries and
prefetching)
• Log records are shorter (except for some kinds of updates)
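The storage effect of a given compression ratio is straightforward to estimate (a sketch; the sizes below are illustrative, not from the test system):

```python
def compressed_size(original_gb, saved_pct):
    """Size after compression; saved_pct is the space saved in percent."""
    return original_gb * (100 - saved_pct) / 100

print(compressed_size(1000, 80))  # a 1000 GB table at 80% savings -> 200.0 GB
print(compressed_size(2000, 50))  # a 2 TB database halved -> 1000.0 GB
```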

Deep compression is released for all SAP applications. SAP OSS note 980067 contains general
recommendations and tools for using deep compression in SAP applications. Additional support
for managing compression for InfoCubes, DataStore objects and PSA in SAP NetWeaver BI and
SAP Business Information Warehouse is available (see SAP OSS notes 926919 and 906765).

Multi-dimensional Clustering (MDC)

Multi-dimensional clustering (MDC) was especially designed for data warehousing applications
to improve the performance of queries that have restrictions on multiple columns.

MDC provides a means for sorting the data rows in a table along multiple dimensions. The
clustering is always guaranteed. The following figure compares MDC to standard row-based
indexes:

[Figure: comparison of a standard row-based clustering index on column Region with an MDC
table clustered on the two dimensions Region and Year]

Diagram 9.4-1: MDC overview

With standard row-based indexes one index can be defined as the clustering index. In the
example above, this is the index on column Region. Queries that restrict on Region benefit from
sequential I/O while queries that restrict on another column usually require random I/O. With
MDC, the data can be sorted along multiple columns, like Region and Year. In the example
above, both queries that restrict on Region and queries that restrict on Year benefit from
sequential I/O.

Each unique combination of MDC dimension values forms a logical cell, which can physically
consist of one or more blocks of pages. The size of a block equals the extent size of the
tablespace in which the table is located, so that block boundaries line up with extent boundaries.
This is illustrated in the following figure:

create table ... organize by (Year, Region)

[Figure: sample table with columns Year, Region, Customer, Revenue and Overhead; each unique
(Year, Region) value pair forms a logical MDC cell occupying one or more blocks, and system-
generated block indexes on Year, on Region, and on the compound dimensions point to those
blocks]

Diagram 9.4-2: MDC dimension values

MDC introduces indexes that are block-based. These indexes are much smaller than regular
record-based indexes. Thus block indexes consume a lot less disk space and can be scanned
faster. With block indexes, the blocks or extents allocated for the MDC cells are indexed.

MDC also provides support for fast data roll-in and roll-out operations in data warehouse
applications:
• When a large amount of data is inserted into an MDC cell the blocks allocated are locked
instead of every single row. This feature is called BLOCKLOCKING and can be enabled
with the ALTER TABLE statement for MDC tables.
• When data is deleted along one or more MDC dimensions, only the data pages of the
MDC blocks are marked as deleted instead of every single row.
Furthermore, the maintenance of the block indexes on the MDC dimensions is much more
efficient than the maintenance of standard row-based indexes. For both data roll-in and roll-out,
this reduces index maintenance time considerably.
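The size advantage of block indexes can be approximated: a row index needs one entry per row, while a block index (in the fully clustered best case) needs only one entry per block. The densities below are assumed purely for illustration:

```python
import math

def index_entries(num_rows, rows_per_page, pages_per_extent):
    """Return (row_index_entries, block_index_entries); one block = one extent."""
    rows_per_block = rows_per_page * pages_per_extent
    return num_rows, math.ceil(num_rows / rows_per_block)

# assumed: 1M rows, 100 rows per page, 16 pages per extent
rows, blocks = index_entries(1_000_000, 100, 16)
print(rows, blocks)  # -> 1000000 625
```

Under these assumptions the block index holds 625 entries instead of one million, which is why block indexes are much smaller and faster to scan and maintain.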

SAP supports MDC for PSA, InfoCubes, and DataStore objects in SAP NetWeaver 2004s BI. A
backport with limited functionality to SAP NetWeaver 2004 BI and earlier SAP Business
Warehouse releases is available in SAP OSS note 942909.

DB2 “optimized for SAP”

DB2 is developed jointly with SAP to help customers ease configuration, enhance performance,
and increase availability of their SAP solutions running on DB2. The joint effort involves closely
linked development teams at the SAP headquarters in Walldorf, Germany, and the IBM
laboratories in Böblingen, Germany; Toronto, Canada; and Silicon Valley, USA.

Features like automatic storage management, STMM, Deep Compression, removal of tablespace
size limits and extensions to MDC and the SQL optimizer were designed and developed in close
cooperation with SAP to optimally support SAP applications.

DB2 provides a simple way of configuring DB2 for SAP workloads: setting the DB2 registry
variable DB2_WORKLOAD=SAP automatically enables a number of other DB2 registry settings
recommended for running SAP applications.

SAP is also an extensive in-house user of DB2. More than 800 SAP development systems run
with DB2, among them the most important ones for SAP NetWeaver development. SAP IT
adopted DB2 for business systems in 2002. The productive ERP, CRM and HR systems of SAP
run on DB2.

11.1.2 DB2 Configuration used in the project

11.1.2.1 Configuration of the SCM System

The SCM test system was set up with DB2 9 FixPak 1. It was installed with a uniform page size
of 16KB and the system catalog tablespace managed by automatic storage. For each of the other
tablespaces, seven DMS file containers were created on one large file system that was striped
over 10 disks. For logging, a separate file system striped over 4 disks was used.

The system was set up with a SAP NetWeaver BI layout with four database partitions. It was
designed that way because the SPP processes read historical planning data from an InfoCube
which contained about 1 billion rows. The fact tables of this InfoCube were distributed over the
four database partitions. The system was configured with a bufferpool of 28 GB on database
partition 0 and 4GB on partitions 1 to 3 (see Diagram 9.4-3). For some tests, the size of the
bufferpool on database partition 0 was increased to 60 GB.
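With the uniform 16KB page size of this installation, the bufferpool sizes above translate into page counts and a memory total as follows:

```python
PAGE_KB = 16  # uniform page size of this installation

def pages_for(gb, page_kb=PAGE_KB):
    """Bufferpool pages needed for a target size given in GB."""
    return gb * 1024 * 1024 // page_kb

total_gb = 28 + 3 * 4  # partition 0 plus partitions 1 to 3
print(total_gb, pages_for(28), pages_for(4))  # -> 40 1835008 262144
```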

Diagram 9.4-3: Configuration with 4 database partitions

The system was configured with standard parameter settings according to SAP note 899322. The
following configuration parameters were adjusted:

Database manager configuration


Adjustments were made to the number of agents as described in SAP note 899322. The sort heap
threshold was increased to about 8 GB. The monitor heap was increased to 220 MB (56128 4KB
pages).

Database configuration
Adjustments were made to the configuration parameters that control locking, logging, sort
memory, application group memory and certain other parameters, as shown in the table below:

Parameter           New value                                       Remark

LOGBUFSZ            24 MB (6144 4KB pages) on partition 0           Log buffer
                    8 MB (2048 4KB pages) on partitions 1-3

LOCKLIST            625 MB (160000 4KB pages) on partition 0        Lock list
                    156 MB (40000 4KB pages) on partitions 1-3

CATALOGCACHE_SZ     20 MB (5120 4KB pages)                          Catalog cache

PCKCACHESZ          80 MB (20480 4KB pages)                         Package cache

SORTHEAP            128 MB (32768 4KB pages)                        Sort heap

APPGROUP_MEM_SZ     1.5 GB (384000 4KB pages) on partition 0        Application group
                    500 MB (128000 4KB pages) on partitions 1-3     memory

Graph 9.4-1: Database configuration parameters

The number of primary log files was increased to 80.
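The page counts in the table above can be cross-checked with a small conversion (DB2 memory parameters are given in 4KB pages):

```python
def pages_to_mb(pages, page_kb=4):
    """Convert a DB2 page count to megabytes."""
    return pages * page_kb / 1024

print(pages_to_mb(6144))    # LOGBUFSZ on partition 0 -> 24.0 MB
print(pages_to_mb(160000))  # LOCKLIST on partition 0 -> 625.0 MB
print(pages_to_mb(56128))   # monitor heap -> 219.25 MB, quoted as ~220 MB
```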

DB2 registry variable


In addition to DB2_WORKLOAD=SAP, DB2_PARALLEL_IO='*' was set, which enables
parallel IO for the tablespace containers.

This configuration of the SCM system was used for the Service Parts Planning tests described in
chapter “SPM Test Scenarios” of this document. During the tests it was observed that the
workload accessing the historical planning data InfoCube that was distributed over the four
database partitions was very low compared to the actual planning workload running solely on
database partition 0 and mainly accessing the time series management tables. Furthermore, the
queries executed on the InfoCube processed a rather small amount of rows. In this case,
configuring multiple database partitions is not necessary. Therefore we recommend a single-
partition database layout for Service Parts Planning, as described in section “Data Base
Recommendations” in the “SPM Test Scenarios” chapter.

Some additional SPP Forecasting and DRP tests were made with the InfoCube located completely
on database partition 0 and the other database partitions no longer used. Although, due to the
tight timeframe, not all SPP tests could be repeated, this proved to be the better configuration for
this setup. By applying MDC and DB2 9 deep compression to the InfoCube, the queries executed
against it were even faster than with the original 4-database-partition configuration. This layout
is shown below:
[Figure: single-database-partition layout; all tables, including the SAP Basis tables, master data
tables, SPP tables, and the 9ADEMAND fact and dimension tables, reside on database partition 0
(the administration partition), while partitions 1 to 3 are unused]

Diagram 9.4-4: Configuration with one database partition

11.1.2.2 Configuration of the CRM and ERP Systems


Like the SCM database, the ERP and CRM databases were set up with a uniform page size of
16KB, automatic storage for the system catalog tablespace, DMS tablespaces with seven
containers for each of the other tablespaces, and standard DB2 parameter settings according to
SAP OSS note 899322.

The CRM and ERP databases were configured with STMM during all of the tests, including the
customer stress test.

The following configuration parameters were adjusted:

Database manager configuration


Adjustments were made to the number of agents as described in SAP note 899322.

Database configuration
The log buffer was increased to 16 MB (4096 4KB pages) and the number of primary log files
was increased to 80. The catalog cache size was set to 5120 4KB pages on the ERP system.

DB2 registry variable


In addition to DB2_WORKLOAD=SAP, DB2_PARALLEL_IO='*' was set, which enables
parallel IO for the tablespace containers.

The memory used by the 2 databases varied between 3 GB and 20 GB for the CRM system and
between 12 GB and 17 GB for the ERP system.

STMM configured the memory as follows for the CRM system:

LOCKLIST          15 – 125 MB
SHEAPTHRES_SHR    250 – 780 MB
SORTHEAP          50 – 156 MB
PCKCACHESZ        32 MB – 2 GB
Bufferpool        120 MB – 11 GB

STMM configured the memory as follows for the ERP system:

LOCKLIST          15 – 86 MB
SHEAPTHRES_SHR    250 – 780 MB
SORTHEAP          50 – 156 MB
PCKCACHESZ        1.5 GB – 2 GB

Graph 9.4-2: STMM Memory configuration

12 Hardware Infrastructure

Diagram 9.4-1: Overview of landscape hardware for the integrated Service Parts Solution tests

12.1 Attributes of the IBM POWER5 Server


64-bit POWER5+ & POWER5 processing power
The P5+ (2.3 GHz) & P5 (1.9 GHz) chips are two-way simultaneous multithreaded dual-core
chips, designed to maximize the utilization of the computing power. They also include dynamic
resource balancing to ensure that each thread receives its fair share of system resources. These
cutting-edge processors each appear as a four-way symmetric multiprocessor to the application
layer: each hardware thread appears as a logical processor.
Advanced Scalability
Built on IBM’s advanced MCM (multi-chip module) technology, the p5-595 is designed to scale
up to 64 processing cores in a single system. These 8-core MCMs place the processors extremely
close together to enable faster movement of data and increase reliability. The IBM p5-595 comes
with 8GB of DDR2 memory and can scale up to 2TB.
Consolidate with virtualization and partitioning
The IBM System p5-595 utilizes logical partitioning (LPAR) technology with IBM’s
Virtualization Engine™ to support the consolidation of multiple UNIX and Linux workloads.
The p5-595 offers advanced consolidation technologies such as Dynamic Logical Partitioning,
Micro-Partitioning™ and Virtual I/O Server to maximize the resource utilization of the p5-595
system.

Micro-partitioning allows the consolidation of multiple independent AIX 5L and Linux
workloads with finely tuned performance. Micro-partitioning is based on the concept of a shared
processor pool. The individual LPARs access the CPU resources in the shared processor pool via
“virtual CPUs” (VCPUs). Virtual servers can have a resource entitlement as small as 1/10th of a
processor, and in increments as small as 1/100th of a processor. By use of entitlements and LPAR
weightings, an effective priority system can be implemented to control the CPU resource
distribution. Physical CPU allocation is done at a micro-second level allowing an extremely
quick response to shifting load requirements.
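A simple consistency check for such a configuration is that the guaranteed entitlements must fit into the physical pool (the entitlement values below are hypothetical, not the ones used in this landscape):

```python
def pool_ok(entitlements, pool_cpus):
    """True if the guaranteed LPAR entitlements fit into the shared pool."""
    return sum(entitlements) <= pool_cpus

# hypothetical entitlements, in the 1/100-processor granularity described above
lpars = [10.0, 20.0, 25.0, 0.5]
print(pool_ok(lpars, 64))  # -> True; uncapped LPARs may still use spare capacity
```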

System p Virtualization

In this document, virtualization is used to refer to the System p micro-partitioning or the shared
processor pool functionality. This functionality implements a level of virtualization above the
physical processors, and allows LPARs to share the actual physical processor resources. In SAP
landscapes, this functionality is expected to improve the efficiency of resource utilization. The
graphs below depict the consolidation of 4 individual SAP load profiles, with different peak
requirements and peak times, into a single resource pool.

[Figure: 24-hour processor utilization profiles of four SAP workloads (mySAP ERP, Web
Services, batch, SAP BI) consolidated into one shared processor pool, together with the overall
server utilization in percent]

Diagram 12.1-1: Combined load in shared processor pool

The idle resources inherent when hardware is sized to manage the peak requirements for
individual SAP systems can now be utilized to cover combined peak periods where the peaks do
not occur simultaneously. The distribution is on millisecond time slices, so even concurrently
active workloads benefit from resource sharing.
The shared processor pool provides mechanisms to allow the shared resources to be distributed
according to policy. It is possible to restrain the resource consumption of a partition, for example,
by “capping” it. Capping basically sets a hard limit for the LPAR. Uncapped partitions are
guaranteed a minimum entitlement, and then according to their priority, are able to use resources
far in excess of their entitlement.

Diagram 12.1-2: Virtual CPUs for processor sharing

[Figure: LPAR1, LPAR2 and LPAR3 each own several virtual CPUs (VCPUs); the Hypervisor
schedules the VCPUs onto the physical CPUs of the shared processor pool]

Micro partitions, or shared pool LPARs, see virtual CPUs rather than physical CPUs. Virtual
CPUs are scheduled by the Hypervisor much as processes are scheduled by the OS. Each VCPU
is given a processing time-slice according to its entitlement. One VCPU can utilize up to the
capacity of one physical CPU in the pool. An LPAR can have as many VCPUs as there are
physical CPUs in the shared pool. For further information:
http://www.redbooks.ibm.com/redbooks/SG247463/wwhelp/wwhimpl/java/html/wwhelp.htm

For this series of tests, processor virtualization was used to accommodate CPU resource sharing
between 3 SAP systems in the integrated Landscape.

12.2 POWER5 SAP Configuration


The hardware landscape for this series of tests consisted of a single IBM System p model p5-595.
The p595 supports logical partitioning and hardware virtualization, both of which were used to
implement the logical system landscape.

Diagram 12.2-1: Logical System Landscape p595

The diagram above shows the implementation of the landscape on the p5-595. One LPAR, q1i,
was used for administration, backup/restore, and performance monitoring.
The remaining three LPARs each house one complete system of the landscape. All LPARs are
running in virtualization mode with no priority policy established. With these settings it is
possible for any of the SAP systems to consume the full 64 CPU shared processor pool.
The system has 256GB of memory, which has been fully allocated across the 4 partitions. DMQ,
the SCM system, is given the lion’s share of the memory as it also houses the liveCache with a
cache size of 6000000 - 12000000 8K pages. The cache size of the liveCache was set extremely
large simply to avoid any bottlenecks in data management in this component. This memory
implementation is not based on an actual sizing and is expected to look significantly different in
real life.

12.3 Storage Technology

Storage Configuration for SPP and Fulfillment Scenarios

Features of the DS4800


 Scalability up to 67TB / 112TB
 Up to 35,000 IO/sec OLTP
 Dynamic configuration capabilities
 Seamless model and data migration
 Advanced Copy Functions (incl. Mirroring)
 High-speed Open Systems performance

Figure 12-1: DS4800 - Storage Server used for SPM Planning and Order Fulfillment

Storage Configuration for SPM Project

ECC, CRM: one fibre channel connection to each controller -> one active path to disks
SCM: two fibre channel connections to each controller -> two active paths to disks
The fibre channel connections used were 2 Gigabit.
6 EXP710 expansion drawers with 16 disks each

12 x RAID5 arrays (4+1) for data volume groups, 2 arrays on each drawer
Arrays were shared by each system; on each array:
1 x 70 GB LUN for ECC
1 x 70 GB LUN for CRM
1 x 70 GB LUN for SCM

2 x RAID10 arrays (4+4):
1) SCM db2 log_dir
2) SCM liveCache log

4 x RAID1 arrays (1+1):
1) CRM db2 log_dir
2) ECC db2 log_dir
3) (Not used)
4) (Not used)
The rest of the disks were used for hot spares and the backup filesystem LUN.

On each system we had one scalable volume group (32MB pp size) with the 12 RAID5 LUNs
for the data filesystem and a separate volume group for the log files.
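The usable capacity per array type can be sketched as follows (the 73 GB drive size is an assumption; the text only states the LUN sizes carved from each array):

```python
def raid5_usable(disks, disk_gb):
    """Usable GB of a RAID5 array: one disk's worth of capacity goes to parity."""
    return (disks - 1) * disk_gb

def raid10_usable(disks, disk_gb):
    """Usable GB of a RAID10 array: half the disks hold mirror copies."""
    return disks // 2 * disk_gb

DISK_GB = 73  # assumed drive size, not stated in the text
print(raid5_usable(5, DISK_GB))   # 4+1 array -> 292 GB (fits the 3 x 70 GB LUNs)
print(raid10_usable(8, DISK_GB))  # 4+4 array -> 292 GB
```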

Storage Configuration for Warehouse Management

Figure 12-2: DS8100 - Storage Server used for Warehouse Management

Features of the DS8100


 POWER5+ server technology based, dual-clustered design
 Designed for highest availability and performance
 Highly efficient, patented cache algorithm (ARC)
 Additional abstraction layers for disk virtualization
 Storage System LPARs
 Linear scalability up to 192TB / 320TB*
 Up to 120,000 IO/sec OLTP
 Industry-leading copy and mirroring capabilities, compatible with
DS6000, ESS

In the final stages of the enablement tests, it was necessary to change storage systems. This was
a result of the rules regarding the use of loaner equipment and not due to any technical necessity.
The new storage system was a more powerful server, and therefore care was taken not to take
advantage of the added capacity in any way that would distort the data comparison. Although the
full integrated scenario was moved to the DS8300, the majority of the testing for the first 2
scenarios was already complete, and only the Warehouse tests were focused on this storage
server.

12 RAID5 ranks were used for the SCM system for Warehouse Management:
1 x 70 GB LUN for DND
1 x 70 GB LUN for DSZ
1 x 200 GB LUN for DMQ

The fibre channel connectivity:
2 adapters for ECC and CRM
4 adapters for SCM

The difference is that with SDDPCM using MPIO, there are 2 active paths (or 4) to the disks
instead of only 1 active path with the DS4000 RDAC driver.

13 Appendix:

Software Stack
• AIX
The following AIX version and OS parameters were used for all three systems.
AIX 5.3 ML 5
OS Parameters:
maxclient% = 90
maxperm% = 90
minperm% = 1
memory_affinity = 0
minfree = 960
maxfree = 1088
strict_maxclient = 1
strict_maxperm = 0
lru_file_repage = 0

• SAP

14 Copyrights and Trademarks


© IBM Corporation 1994-2005. All rights reserved. References in this document to IBM products or services do not
imply that IBM intends to make them available in every country.

The following terms are registered trademarks of International Business Machines Corporation in the United States
and/or other countries: AIX, AIX/L, AIX/L(logo), DB2, e(logo)server, IBM, IBM(logo), System p5, System/390,
z/OS, zSeries.

The following terms are trademarks of International Business Machines Corporation in the United States and/or other
countries: Advanced Micro-Partitioning, AIX/L(logo), AIX 5L, DB2 Universal Database, eServer, i5/OS, IBM
Virtualization Engine, Micro-Partitioning, iSeries, POWER, POWER4, POWER4+, POWER5, POWER5+,
POWER6.

A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SAP, the SAP logo, and SAP R/3 are trademarks or registered trademarks of SAP AG in Germany and many other
countries.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Other company, product or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement
material, or other publicly available sources and does not constitute an endorsement of such products by IBM.
Sources for non-IBM list prices and performance numbers are taken from publicly available information, including
vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm
the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the
capability of non-IBM products should be addressed to the supplier of those products.

More about SAP trademarks at: http://www.sap.com/company/legal/copyright/trademark.asp

ISICC-Press CTB-2007-2.1

IBM SAP International Competence Center, Walldorf
