
No. 12  ORACLE – SAP NEWS  2003

ABB shows HP Tru64™ UNIX and Oracle9i® RAC for SAP – reducing your TCO
Real Application Clusters are ready for business-critical SAP® applications

ABB is a leader in power and automation technologies that enable utility and industry customers to improve performance while lowering environmental impact. The ABB Group of companies operates in more than 100 countries and employs around 146,000 people. ABB is a recognized global leader in all of its core areas of competence: power technologies (products, services and solutions for power transmission and distribution), oil, gas and petrochemicals (oil and gas drilling onshore and offshore, offshore production installations, refineries and petrochemical plants), automation technologies (products, systems, software and services for the automation and optimization of industrial, commercial and utility operations) and financial services (financing, sales support, risk management services and insurance for internal and external customers). To support these varied lines of business, ABB has approximately 20 SAP productive systems. These systems are accessed by some 10,000 users worldwide. Storage Area Networks (SAN) and standby databases are used to achieve the desired levels of availability.

More for less
ABB was keen to increase manageability, availability and scalability while at the same time cutting costs by scaling out its SAP deployment. A SAN coupled with clustered servers offers various advantages but also has certain drawbacks. Failover clusters do not enable load balancing, and database recovery takes several minutes or more in the event of server failure. Additionally, like many other companies, ABB found it difficult to justify the cost of standby servers. All going well, these powerful, expensive machines may sit idle for months on end without providing any additional processing power or capacity.
cont. page 2

Oracle for SAP® is safe, reliable and scalable.

Content:
ABB shows HP Tru64™ and Oracle9i RAC for SAP 1
Oracle9i RAC Network Technology 2
Oracle9i Online Data Reorganization 4
Delta Consulting 8
Oracle Collaboration Suite 9
Oracle Internet Directory 9.2 for SAP 10
Diagnosing Performance Bottlenecks using Statspack 11
Pressing the “famous button”: SAP on Oracle at Kodak 14
Oracle RAC and Fujitsu Siemens 15
Oracle9i Release 2 for Intel Itanium 2 Systems 15
IT – Energy for Halliburton 16
The SAP Special Interest Group (SIG) at IOUG 17
Successful migration from DB2 to Oracle 18
Effectiveness of Oracle9i Table Compression in SAP BW 19
SAP Server Consolidation – flexibility with low TCO 25
SBS and Oracle Real Application Clusters 26
mySAP Solutions Running Oracle9i RAC and Linux 27
Oracle for SAP – Release Matrix 28

Editorial

Dear SAP customer,

The latest Oracle for SAP Technology Update, published by the Oracle/SAP Global Technology Center, will help you better understand Oracle technology used by SAP, Oracle’s service offerings, Oracle/SAP release combinations, real life customer stories and more:
• Successful implementation of Oracle Real Application Clusters Technology for SAP at ABB
• SAP customer story: database migration from DB2 to Oracle
• Oracle9i Online Reorganisation, Oracle Internet Directory for SAP
• Cost savings by using Oracle9i table compression for SAP BW

Oracle9i Real Application Clusters provides both scalability and availability as a single, easy to manage database product. And that means decreasing the Total Cost of Ownership. SAP is committed to developing, porting and migrating their applications to function optimally with Oracle database and Real Application Clusters technologies.

The 15 year Oracle/SAP technology partnership delivers great value to mutual customers by providing the most proven, stable, reliable and scalable database technology platform for SAP systems running on Windows, Unix and Linux. “SAP on Oracle” means customer satisfaction and proof of a functioning technology partnership for more than 39,000 joint mySAP, SAP R/3 and BW installations worldwide. However, “SAP on Oracle” also stands for global cooperation in database reselling – development and support services in Walldorf and Redwood Shores to be able to supply shared customers with an optimized and customized combination of products.

Oracle’s innovative database technologies (e.g. milestones like Row Level Locking, Partitioning, Real Application Clusters) are an important contribution in enabling SAP applications to be embedded in a world-wide accepted and supported DB technology platform.

Please don’t hesitate to contact us, eMail: saponoracle_de@oracle.com
For more information please see also www.oracle.com/newsletters/sap

Sincerely
Gerhard Kuppler
Director Corporate SAP Account
Oracle Corporation
from page 1

The answer
The answer presented itself in the form of Oracle9i Real Application Clusters. Real Application Clusters exploit modern hardware and software clustering technologies to provide the most flexible cluster database platform. Instead of having idle standby systems, Real Application Clusters harness the processing power of all servers in the cluster.

Key to cluster scalability, HP memory channel acts as the cluster interconnect between the Real Application Cluster nodes. It leverages high bandwidth and low latencies for fast communication and efficient messaging.

Tru64™ UNIX TruCluster™ Server software from HP is one of the enabling technologies for Oracle9i Real Application Clusters. The Cluster File System, a key component of HP’s Tru64 UNIX TruCluster, enables all files to be simultaneously accessed by all clustered servers. This vastly simplifies management of clustered storage. HP TruCluster Server software allows system administrators to manage clustered systems as easily as a single server. That’s because only TruCluster Server software presents a single system image where the operating system environment is shared across all servers in the cluster.

First customer to run SAP on Oracle9i Real Application Clusters
Keen to see how Real Application Clusters integrate with their SAP systems and fit into their overall SAP hosting strategy, ABB decided to put Oracle9i Real Application Clusters to the test. In close consultation with Hewlett-Packard, Oracle and SAP, they organized a proof-of-concept pilot project running SAP R/3 4.6C across HP Tru64 UNIX 5.1A clusters on an HP AlphaServer™ platform.

Phased approach
On July 24, HP supplied ABB with a pilot platform for the tests. ABB decided in favor of a phased test roadmap in order to measure the exact impact of Real Application Clusters. The first phase of the project involved copying the entire database (which ran under Oracle 8.1.7) to the test system provided by HP. This was followed by a series of baseline tests to assess performance before the upgrade. ABB then upgraded the database to Oracle9i and repeated the baseline tests. The final phase involved upgrading to Oracle9i Real Application Clusters – a straightforward procedure involving the simple addition of a second node. This was again followed by thorough functional tests that included all management and monitoring tools. The results were impressively conclusive.

Proof-of-concept
Although ABB did not manage to push the HP platform to its performance limits, the parallelization of jobs did not result in any degradation of SAP performance. With minimum additions to the hardware infrastructure (one additional HP AlphaServer was required) and no adjustments to the SAP system, ABB succeeded in creating a highly available, high-performance infrastructure with almost linear scalability at a fraction of the cost of standard clusters. This was recently confirmed by an SAP benchmark proving almost linear scalability across up to four nodes. Particularly in challenging times, the cost efficiency of scaling out and redeploying existing resources presents the company with a valuable competitive advantage. In fact, the project was so successful that ABB is now the first customer live with SAP R/3 on HP Tru64 UNIX with Oracle9i Real Application Clusters.

Oracle9i RAC Network Technology
The unbreakable approach for deploying Oracle9i™ Real Application Clusters: Network Appliance™ Filer Technology Helps Unleash the Power of Oracle® Technology

Executive Overview
The demands being placed on data centers are increasing. High availability is becoming a must-have requirement for companies that cannot afford system downtime. Global businesses need to serve customers around the clock and often rely on IT solutions to buy, sell, and service those customers. This business pressure can be costly when a system is not available, whether through planning or not. In response, many companies are turning to the Oracle9i Real Application Clusters (RAC) solution, which enables them to run Oracle databases on clusters of servers, and thereby achieve higher levels of system availability and flexibility. To get the full benefit of the Oracle technology, however, companies need storage that “matches” the powerful capabilities offered by clustered servers and databases.

Like Oracle9i RAC, NetApp® storage technologies enhance the high availability, manageability, and scalability of an IT environment. In addition, research shows that NetApp’s unique architecture and innovative approach to network-centric storage offer a significantly lower total cost of ownership, compared to traditional storage solutions. Overall, NetApp solutions give IT departments using Oracle9i RAC a cost-effective way to have a total clustered solution that supports the high availability and flexibility needed in a rapidly changing business environment.

Storage: Completing the Picture
Over the last decade or so, the computing power of most platforms has doubled every 18 months, the storage capacity has doubled every 12 months, and the network speeds have doubled every 8 to 9 months. This leads to a very important convergence in the three most important aspects of a corporate IT infrastructure. Storage, which was dependent on the bandwidth of the network, can now leverage the network to deliver and store information without causing as much as a hiccup in the infrastructure.

[Figure 2: Trends in computing power, storage capacity and network speed – network speed doubling every 8-9 months, storage capacity every 12 months, computing power every 18 months]

Corporations are now in a position to leverage this convergence to deliver the scalability and availability demands placed on mission-critical applications. Oracle9i RAC brings new levels of availability, performance, and scalability to the data center. But, to realize the full benefit of the technology, companies need to take an end-to-end view of data management that looks beyond the database and server, and considers data storage technology, as well. They need an approach to data storage that supports and enables the clustered database by providing the same levels of availability, performance, and scalability – and which does so cost-effectively.

Just such an approach can be found in network-centric storage technology from NetApp. The NetApp architecture separates storage from the server, makes use of existing open-standard networking infrastructure, and puts it on appliances, such as its NetApp filers. These filers are dedicated storage systems that handle a focused set of tasks, rather than the range of interrelated functions handled by the operating systems of servers. Such specialization has made it possible to optimize NetApp filers to handle storage tasks with speed and efficiency, and to build storage specific intelligence and tools into the system. In essence, the NetApp storage technology supports the Oracle9i RAC architecture by simplifying the storage layer and providing high availability, horizontal scaling, and manageability – the same qualities delivered by the Oracle solution.

[Figure: Oracle9i RAC Server (Node 1) and Oracle9i RAC Server (Node 2) accessing a single database]

NetApp Filers
NetApp provides several technologies that help companies align their storage layer with the Oracle9i RAC solution. The NetApp family of filers encompasses a wide range of appliances, from high-end devices designed for the corporate data center and e-business applications to systems designed for small and medium businesses and machines for departmental use. Because they are separate appliances, NetApp filers can be deployed in a true plug-and-play fashion, and brought online in a matter of minutes. They are essentially building blocks: capacity can be added to a system on-the-fly in small increments without any downtime, and NetApp filers can scale from 50GB to multiple terabytes. According to a recent study from INPUT, 200GB can be added to a NetApp filer in about 10 minutes, as opposed to the 4 hours it typically takes with storage area network (SAN) solutions. Overall, companies can start small with NetApp, buying only the capacity they need, and then expand storage to stay in step with the business. NetApp filers provide greater than 99.99% data availability. They incorporate many redundant hardware features and built-in RAID that protects against downtime resulting from disk failures – if a disk fails, automatic reconstruction takes place on a hot spare disk. Several high-availability models incorporate an architecture that enables two filers to work in an active/active Clustered Failover configuration.

The Underlying Software
NetApp filers rely on two key building blocks: the Data ONTAP™ operating system and the patented NetApp Write Anywhere File Layout (WAFL®) file system.

Data ONTAP supports a wide range of protocols, including NFS, CIFS, SCSI, HTTP, FTP, and the Direct Access File System (DAFS) used by Oracle9i. It allows the simultaneous use of multiple protocols, thus providing a truly uniform storage infrastructure that can be accessed by UNIX®, NT, and Web-based servers and clients.

The WAFL file system enables dynamically expandable data storage and supports data integrity with block-level checksum capability. It incorporates the functions of file system, volume manager, and RAID subsystems, so that there is a high level of integration between those three layers. Unlike traditional file systems, it is not initialized: It can use any space that is available to it on disk, and determine which disks to use based on the actual geometry of the array. Administrators can modify the amount of space available to the system on-the-fly.

The WAFL file system does not overwrite existing blocks of data – instead, it always writes a new block. When a new block is created, the WAFL file system updates the “pointers” that indicate the location of that block – that is, it essentially takes a freeze-frame Snapshot™ of the pointers at a specific point in time. It then makes the frozen versions of the file system available via special subdirectories that appear in the current, or active, file system. Up to 31 of these versions can be maintained concurrently. Several of these can be viewed simultaneously, which means that many data management tasks can be performed while the system is in use – even if users or application servers are heavily accessing and updating the data. NetApp has created several data management tools based on the WAFL file system’s freeze frame capability, including:

• Snapshot – enables backups and the recovery of accidentally damaged or deleted data without disruption of service.
• SnapRestore® – allows any system to revert back to a specified data volume for instant file-system recovery, in minutes, without going to tape.
• SnapMirror® – provides remote mirroring at high speeds over a LAN or WAN. The asynchronous mirroring can be used for disaster recovery, replication, backup, or testing on a non-production system.

Oracle9i Online Data Reorganization

REASONS AND NON-REASONS FOR REORGANIZATION
The Right Fit for Oracle9i RAC The database administrator will consider reorga-
nization in order to reduce or avoid segment-
Together, NetApp’s network-centric architecture, level, block-level or row-level fragmentation. The
dedicated filer appliances, and sophisticated opera- minimal reorganization goal is to reduce existing
ting and file systems “complete the picture” for fragmentation as much as possible by creating a
Oracle9i RAC installations. The NetApp approach new copy of an object while leaving its storage
complements the Oracle solution by extending characteristics unchanged. This is mostly true for
the benefits of the clustered solution into the sto- index rebuilds. The maximum goal is to avoid
rage layer. As does Oracle9i RAC, the NetApp future fragmentation by changing the object’s
approach provides: SUMMARY storage characteristics. This is true for most table
• High Availability. According to a study from Requirements have changed. Only a few years reorganizations.
INPUT that examined Oracle production environ- ago, it was acceptable for most customers that An important rule is, that the administrator should
ments at 63 enterprises, NetApp technology reorganizing an index or a table caused the (base) aim to reduce fragmentation, because it hurts data-
provides data availability of 99.99+% – one of table to be locked and that no DML operation base server performance, but only in measure as it
the highest rates in the industry. (insert, update, delete) was allowed until the hurts database server performance.
• Manageability. The simplicity of NetApp archi- reorganization was completed. In many of today's
tecture and its suite of database management mission-critical environments continuous data- Segment-Level Fragmentation
tools save time for DBAs and system admini- base availability has become one of the admini-
The term “segment-level fragmentation” means
strators. strator's top goals.
that a data segment (table) or an index segment
This is why the Oracle database server has chan-
• Scalability. NetApp filers can be scaled up in (index) consists of too many extents. The obvious
ged as well. Both Oracle8i and Oracle9i have
small increments to handle the largest enterprise question is: How many are too many?
introduced features that help reduce the frequen-
workloads – giving IT departments a cost-effec-
cy and minimize the impact of data reorganizati- In Oracle versions before 7.3 the number of
tive, “pay-as-you go” capability.
ons: extents that could make up one single segment
• Affordability. NetApp’s storage technologies was limited. The value depended on the database
• New features such as locally managed tablespaces
offer a 70% lower total cost of ownership (TCO), block size, so it was not exactly one static limit,
(Oracle8i) and support for multiple block sizes
compared with competitive storage solutions but once the block size was chosen (during data-
(Oracle9i) help avoid table or index fragmenta-
from HP, EMC, and Hitachi Data Systems, base creation), the corresponding limit was effec-
tion and chained rows – two main reasons for
according to the INPUT study. NetApp’s techno- tive. If the limit was reached, one extent more
reorganization.
logies demonstrated a lower TCO in three areas: was too many. In that case the segment had to be
acquisition costs for hardware and software, • Oracle8i introduced online index reorganization,
meaning that users can continue to update and reorganized using export and import in order to
operational costs in terms of people and time, decrease the number of extents and allow for
and business costs due to downtime. query the base table while an index is being
reorganized. The reorganization can be further growth.
accomplished either in place (online index Version 7.3 has removed this limitation. Begin-
Conclusion
defragmentation) or by creating a new copy of ning with Oracle 7.3, it is allowed to specify
Oracle9i RAC provides efficient, reliable, secure the index (online index rebuild). Oracle8i also MAXEXTENTS=UNLIMITED in the storage
data management for high-end applications such introduced limited online table reorganization clause of SQL statements that create or modify
as high-volume online transaction processing capabilities. These were limited, because they segments. Nevertheless, some customers decided
(OLTP environments), query-intensive data ware- were restricted to index-organized tables (IOTs). to leave the limit in place. There is nothing wrong
houses, and demanding Internet applications. with such a decision as long as customers simply
• Oracle9i further extended Oracle’s online capa-
Advanced options like Oracle9i RAC provide un- want an additional control mechanism and are
bilities with a new feature called online data
limited scalability and high availability for any not affected by the single extent myth.
redefinition. This functionality supports all types
packaged or custom application by exploiting
of tables. It can be used to perform a simple
clustered configurations with the simplicity and In a white paper called Myths and Folklore about
online table reorganization. However, it is much
ease of use of a single system image. Oracle9i RAC Oracle Performance Tuning Gaja Krishna Vaidyanatha
more powerful, because it allows administrators
allows access to a single database from multiple (Quest Software Inc.) calls the opinion that “the
to change the structure and the storage
nodes of a cluster system configuration to insulate optimum number of extents for every object is
characteristics of a table online.
application and database users from hardware and one” the single extent myth. In contrast to this
software failures, while providing performance • Oracle Services for SAP have recently proposed myth, he correctly states that “having 1,000
that scales with the hardware environment. solutions for some special cases. Of particular extents for an object by itself does not pose any
NetApp delivers solutions for today’s IT challen- interest is a strategy that uses Oracle partitio- performance problems, so long as the extents are
ges. Deploying Oracle9i RAC on NetApp tech- ning in order to reduce the amount of reorgani- sized as a multiple of
nology helps customers overcome many of chal- zation needed after SAP archiving runs. (DB_FILE_MULTI-BLOCK_READ_COUNT
lenges facing IT today most notably, the growing Oracle’s online maintenance capabilities improve * DB_BLOCK_SIZE)” and that “periodic reorga-
volumes of data and the ability to manage and data availability, query performance, response ni-zation efforts to get objects back to 1 extent
keep it safe and secure. Deploying Oracle9i RAC time and disk space utilization, all of which are create significant free space fragmentation pro-
on NetApp storage lowers your overall TCO. important in a mission-critical environment. As blems” due to the “various extent sizes within a
NetApp storage solutions provide simple, fast, several white papers that are circulating among tablespace”. Considering these points, a reasona-
centrally managed data protection and optimized Oracle/SAP customers take no notice of these ble space management strategy would be to defi-
performance for Oracle9i RAC installations, functionalities and solutions, this article is inten- ne very few standard extent sizes (ideally in diffe-
(allowing Oracle database administrators to per- ded to provide a detailed overview of the current rent tablespaces) and use them according to seg-
4 form complex tasks quickly). options. ment size and growth.
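To make the strategy just described more concrete, the following sketch shows how a handful of standard extent sizes can be set up with locally managed tablespaces, which are discussed in the next section. The tablespace names, file paths and sizes are invented for this illustration and are not SAP recommendations.

-- A small set of standard (uniform) extent sizes, one per tablespace;
-- names, paths and sizes are examples only.
CREATE TABLESPACE psap_small
  DATAFILE '/oracle/C11/sapdata1/small_1/small.data1' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE psap_large
  DATAFILE '/oracle/C11/sapdata2/large_1/large.data1' SIZE 10000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

-- Alternatively, let Oracle choose between a few system-defined sizes:
CREATE TABLESPACE psap_auto
  DATAFILE '/oracle/C11/sapdata3/auto_1/auto.data1' SIZE 2000M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Segments are then placed in one of these tablespaces according to their current size and expected growth.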
This is exactly what locally managed tablespaces fragmentation for tables. However, in special
offer. Traditional or dictionary managed tablespa- cases it can happen that after several cycles of
ces use data dictionary tables to log allocations insert and delete operations data is stored less
and deallocations of extents. Locally managed efficiently than in the beginning. This situation
tablespaces use bitmaps to keep track of the free is discussed in more detail in the last section.
or used status of blocks in Oracle data files. Due The number one reason for block-level fragmen-
to the simplified internal algorithms, locally tation in Oracle/SAP systems is archiving. How-
managed tablespaces offer better performance ever, this does not mean that each archive run
than dictionary managed tablespaces. In addition, should be followed by a reorganization of all seg-
administrators can choose between two extremely ments involved. The following rules should be
simple and efficient extent size management stra- considered:
tegies: UNIFORM means that all extents within
one single tablespace are created with the same size, • As archiving is basically a mass delete, it will
AUTOALLOCATE means that extent sizes are inevitably lead to block-level fragmentation
determined by the system which chooses between within index segments. So the indexes on the
few standard sizes according to segment size and tables involved in the archive operation should Selective archiving can cause data to be stored less Picture 2b:
growth. There is no maximum number of extents be analyzed carefully. Reorganization of indexes efficiently, so reorganization of the affected tables, Effects of first archive
for locally managed tablespaces. with many logically deleted rows is the most combined with additional measures designed to run on the deallocation
important and most efficient measure to ensure prevent the negative impact of selective archiving and reallocation
With the introduction of locally managed table- a good performance of the system after archiving. in the future, might be useful (see last section of
spaces, segment-level fragmentation is no issue Oracle has recognized this and consequently in- this article).
anymore. That's why SAP has adopted this tech- troduced online index reorganization as the first
nology and recommends using it for Oracle/SAP of all online reorganization functionalities (see Row-Level Fragmentation
systems (see OSS notes 387946 and 409376). below). “Row-level fragmentation” or “row chaining”
• Table reorganizations should be performed cau- occurs, if a row does not fit completely within one
Block-Level Fragmentation tiously, they can easily turn out to be unnecessary single Oracle block and two or more pieces of it
The term “block-level fragmentation” indicates as well as detrimental. For normal, periodic archi- are stored in two or more blocks. There are two
that space is wasted within data or index blocks, ving runs, as a rule, table reorganization is not possible reasons for this behavior:
because after numerous inserts and deletes unused necessary, because the space freed by archiving • The size of the record is such that it could fit
space can not (or not efficiently) be reused for new will be refilled between this and the next archive within one single block. If, nevertheless, the
inserts. The details are different for indexes and run (see picture 2a). Reorganization that decreases record is split and stored in multiple blocks, it
tables:
• If records are deleted from a table, the corre-
sponding index entries are not always deleted as
well. Especially if a delete operation involves
many records (mass deletes), the corresponding
index entries are only marked as deleted (logi-
cally), but do still exist (physically) and consu-
me space that cannot be reused. This behavior
was chosen, because it avoids the overhead of
frequently rebalancing the index tree structure. Picture 2a:
However, due to the “islands of wasted space” Effects of
the index information is stored inefficiently, regular archiving
because it is spread across more Oracle blocks on the deallocation
than it should be. This, in turn, leads to addi- and reallocation
tional I/O requests, increased memory con-
sumption and ultimately to performance degra-
dation. Eventually, the administrator will have the number of extents allocated will result in is because of an inappropriate value of the stor-
to reorganize the index in order to improve additional extent allocation operations when the age parameter PCTFREE. As SAP has provided
block density and utilization (see picture 1). segment is growing again. appropriate storage parameter configurations,
this is a rather rare problem in Oracle/SAP
systems.
• The size of the record is such that it cannot pos-
sibly fit within one single block. This does not
cause performance problems. Nevertheless,
Oracle9i allows multiple block sizes within one
database. So the administrator can create a
tablespace with blocks bigger than the standard
block size and move tables containing long
records to that tablespace.
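As an illustration of this second case, the steps could look as follows; the parameter value, tablespace, file and table names are invented, and a standard block size of 8K is assumed.

-- A buffer cache for the non-standard block size must exist first.
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Create a tablespace with 16K blocks for tables containing long records.
CREATE TABLESPACE psap_longrow
  DATAFILE '/oracle/C11/sapdata4/longrow_1/longrow.data1' SIZE 1000M
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- Move an affected table into the new tablespace. Note that this form of
-- MOVE is not an online operation for heap-organized tables (see the
-- section on online data redefinition below for the online alternative).
ALTER TABLE ztab_with_long_rows MOVE TABLESPACE psap_longrow;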
Picture 1:
Block-level ONLINE INDEX REORGANIZATION
fragmentation in As the previous section has shown, the main rea-
index segments son for reorganization in Oracle/SAP systems is
block-level fragmentation in index segments that
• Delete operations from the table itself are There are only two cases where table reorganiza- occurs after archiving. Accordingly, when Oracle8i
always physical deletes. The space previously tions should be considered. The very first archive introduced online reorganization, the first step
used by the deleted records is freed immediately run probably frees much more space than will be was support for online index reorganization. This
and is available for new inserts unless that is refilled until the next regular run, so reorganization comes in two flavors: online index rebuild, which
prevented by the storage parameter PCTUSED. could be used to shrink the segments involved (see creates a copy of the index to be reorganized, and
So, strictly speaking, there is no block-level picture 2b). online index defragmentation, which is an in-place
reorganization. Both methods improve database Online Table Move and Coalesce for
availability by providing users full access to data IOTs
in the base table during an index reorganization.
Oracle8i and Oracle9i allow index-
organized tables (IOTs) to be reorga-
Online Index Rebuild nized while users are accessing and
Online index rebuild creates a copy of the index to updating the data in the table. As
be reorganized. No table or row locks are requi- IOTs are basically indexes that con-
red for that operation, so users can continue to tain the complete table data, it proba-
update and query the base table while the index bly comes as no surprise that the
is being created. If changes to the base table during administrator can choose between the
the rebuild process require changes to the index two options that are available for
as well, these changes are recorded in a journal indexes: online table move creates a new
table the contents of which are merged into the copy of the IOT to be reorganized,
new index at the completion of the operation (see Picture 4a: Online index rebuild using SAPDBA whereas online table coalesce is an in-
pictures 3a and 3b). place reorganization.

Online table move is initiated via the


SQL command ALTER TABLE
<iot_name> MOVE ON-LINE. It
can be used to create a new copy of
the IOT with or without modification
of the storage characteristics. Changes
to the table during the operation are
recorded in a journal table and mer-
ged with the table at the completion
of the operation.

Online table coalesce is initiated via the


SQL command ALTER TABLE
Picture 4b: Online index rebuild using SAPDBA <iot_name> COALESCE. No ONLI-
NE clause is needed, because the
stored on disk as long as the reorganization ope- coalesce operation is an online operation by
Picture 3a: Online index rebuild (step 1: index copy, journal table) ration has not yet completed. default. It locks a few blocks at a time, and quick-
ly frees them as soon as the coalesce on the blocks
Online Index Defragmentation is completed. The operation reorganizes the
Online index defragmentation, or index coalescing, search tree structure, reduces block-fragmentati-
is similar to the online index rebuild operation. on and thus frees up space, resulting in improved
Both reorganize an index and improve space utiliza- storage utilization and query performance.
tion and query performance. The major difference
is that online index defragmentation is an in-place Both operations can be performed on a complete
reorganization and does not require additional IOT or on a single partition in a partitioned IOT.
storage space, so it is to be preferred, if storage
space for rebuilding the index is limited. As enhancements to these Oracle8i features,
Oracle9i added online reorganization support for
The operation is initiated via the SQL command IOT secondary indexes (i.e. indexes other than
ALTER INDEX <index_name> COALESCE. As the primary key index) and additional index
online index rebuild, online index defragmentati- types. Oracle9i added support for reverse key,
on supports parallel processing and partitioning. functional and key compressed index types.

ONLINE TABLE REORGANIZATION Online Data Redefinition


Oracle supports two basic types of tables. The From the perspective of this article, what is mis-
Picture 3b: Online index rebuild (step 2: switch to new index) “classic” type is called heap-organized (or simply heap) sing is an online reorganization mechanism for
table. A heap-organized table is just an unordered heap-organized tables. Online data redefinition
set of records. If a search tree for fast access of provides this missing mechanism, however it is
The operation is initiated via the SQL command individual records is required, the user will have
ALTER INDEX <index_name> REBUILD much more. As the name suggests, it allows
to create an index in addition to the table. An administrators to redefine the structure and stor-
ONLINE. It can be used to create a new copy index-organized table (IOT) combines table data
with the same storage characteristics as the old age characteristics of tables online. This includes:
and search tree within one single segment. Index-
one, but it also supports modifications like chan- organized tables were introduced with Oracle8i • converting a non-partitioned table to a parti-
ges of the extent size or a relocation of the index, (see Oracle for SAP Technology Update, vol. 10, tioned table and vice versa
i.e. its move to an other tablespace. These online p. 12). • switching a heap-organized table to an index-
operations also support parallel processing and organized table and vice versa
can act on some or all of the partitions of a parti- Although index-organized tables are less com- • dropping non-primary key columns
tioned index. mon, they are discussed first, because online table
Starting with version 6.10, online index rebuild move and reorganization, a reorganization mecha- • adding new columns
can also be initiated using SAPDBA (see pictures nism for index-organized tables only, is supported • adding or removing parallel support
4a and 4b). since Oracle8i. Online data redefinition, a reorgani- • modifying storage parameters or moving the
The option to modify storage characteristics is the zation mechanism for all table types (including table to a new tablespace.
most important advantage of online index reorgani- heap-organized tables), was introduced with
zation over online index fragmentation. The Oracle9i. Performing one of these operations en- It is not likely that SAP customers will make use
obvious disadvantage is that more disk space is sures that users continue to receive both optimal of all these options. However, it can be seen easily
6 needed, as the original and the new copy have to be performance and continuous data availability. that online data redefinition is a very powerful
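For reference, the online reorganization operations discussed in this section are initiated with statements along the following lines. Index, table and tablespace names are placeholders; in an SAP system the SAPDBA support described above can be used instead of typing the commands manually.

-- Online index rebuild: builds a new copy of the index; the base table
-- remains fully available. Storage characteristics may be changed.
ALTER INDEX my_index REBUILD ONLINE;
ALTER INDEX my_index REBUILD TABLESPACE psap_new ONLINE;

-- Online index defragmentation (coalesce): in place, needs no extra space.
ALTER INDEX my_index COALESCE;

-- Index-organized tables: online move (new copy) or coalesce (in place).
ALTER TABLE my_iot MOVE ONLINE;
ALTER TABLE my_iot COALESCE;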
TWO FOR SPEED AND STABILITY

The HP and SAP alliance keeps your business above water -


no matter how far or how fast you are moving.
Over 30,000 joint installations reflect our shared
commitment to customer success.

SAP is the world’s leading provider of e-business software solutions. HP is the leading provider of
comprehensive services and robust infrastructures for one-stop e-business solutions.

www.hp.com/go/sap
mechanism designed for online maintenance of positions of the order, two or more I/O requests because a large number of partitions can be created
all table types and that, for this mechanism, online are necessary (instead of one), two or more blocks in advance.
reorganization of heap tables is one of the easy need to be stored in memory (instead of one), and
jobs. this will end up in performance degradation (see
picture 5b).
Prior to Oracle9i table redefinition was only pos-
sible using (1) export/import, which meant the
table was offline during the process, or (2) the
ALTER TABLE MOVE command which locked
DML during the operation (no ONLINE option
available for heap-organized tables). Neither of
these methods is suitable for large OLTP tables,
as the downtime can be considerable. To solve
this problem, Oracle9i has introduced online table Picture 5b:
redefinition using the DBMS_REDEFINITION Selective archiving
package. does not free blocks completely,
so a group of related records
The process is similar to online rebuilds of inde- later on must be inserted into
xes in that the original table is left online while a several blocks.
new copy of the table is built. The original table
is accessible by all read and write operations

9i
during the redefinition process. The results of An efficient way to solve this issue can be based
DML operations are stored in a temporary table
for interim updates. Once the new table is com-
on Oracle partitioning. Partitioning is particu-
larly interesting for tables that have a direct or
Delta Consulting
plete, the interim updates are merged into it and indirect time key. Examples are VBRK, VBRP
the names of the original and the new table are and VBFA. The indirect time key of these tables
swapped in the data dictionary. This step requires consists of the VBELN field (or fields associated
a DML lock, however the switch process is very with it). Since the entries in these fields are incre-
brief and is independent of the size of the table or mented sequentially with a sequential time, they
the complexity of the redefinition. Once the can be used as a range partitioning key. Delta Consulting is an SAP-focused consultancy
switch is completed, all DML is processed against committed to providing innovative yet practical
the new table. The idea is to create a separate partition for each solutions that combine real-world business ex-
specific realm of VBELN numbers, so each parti- pertise with world-class e-business technologies
ADVANTAGES OF PARTITIONING tion contains exactly one of these realms (e.g. a designed to extend and enhance the performance
month). If, then, an archiving run is performed, of SAP.
As already stated in the section on block-level frag-
data is deleted from the affected partition(s) only,
mentation, no table reorganizations are necessary
whereas all other partitions are not affected at all. Our consultants average over eight years of SAP-
after regular, periodic archiving runs, because the
In this case, half-empty blocks are not an issue, specific experience that spans all SAP solutions
space freed will be refilled afterwards. However,
because, due to the partitioning key, no new ins- and business disciplines such as supply chain
this statement presupposes that the selection of
erts are directed into that partition. Also, a reor- optimization, finance and cost accounting, and
the records to be archived is purely time-based
ganization of one partition or few partitions is merger and acquisition accounting. In addition,
(e.g. all records created within a period of 3 or 6
much easier and faster than a reorganization of a we provide services that address the functional,
months). If such a criterion is chosen, it is very
whole table. technical and infrastructure requirements of an
likely that Oracle blocks are freed completely.
When, later on, a new group of related records Since partitions are dealt with as independent SAP solution.
(such as several positions of an order) are to be storage objects within Oracle, these partitions can
inserted, they can be inserted together into one be deleted or merged very simply after the archi- Founded in 1998 by a small group of former SAP
single block (see picture 5a). ving runs. If, in addition, the individual partitions executives, Delta draws upon knowledge gained
from involvement in over 200 SAP implementa-
tions. As a National Implementation Partner,
Accelerated SAP Partner and member of the
mySAP marketplace, SAP lies at the core of
Delta’s solutions and services.

As a member of the Oracle PartnerNetwork


(OPN), Delta Consulting is able to deliver Oracle
technology as part of a world-class SAP e-business
Picture 5a: solution for its customers in all industries, while
Time-based archiving frees also capitalizing on Oracle’s expanded offerings
blocks completely, so a group in the CPG and financial services industries.
of related records later on
can be inserted into one For additional information regarding Delta
single block Consulting, please visit their website at www.go-
delta.com or contact Jack Tomb, VP of Business
Development at 610-558-1730.
Using additional archiving criteria (such as plants are created in separate files or tablespaces, the
or cost centers) will probably result in a large disk space can be released at file system level as
number of data blocks only being partially emp- well.
tied. Groups of new records will then frequently An initial reorganization (online data redefiniti-
not fit within one single block, so they need to be on) is required to convert the non-partitioned
spread across two or more blocks. This is, of cour- table to a partitioned table. Also, a new partition
se, not block-level fragmentation in the strict needs to be available for each new time period

8 sense; nevertheless, these records are stored ineffi-


ciently, because if later on a user needs to read all
(e.g. month), because the partitioning key conti-
nually changes. This is not an issue, however, because a large number of partitions can be created in advance.
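As a rough sketch only (the interim table name, partition names and boundary values are invented here, and a real conversion in an SAP system must follow the relevant SAP and Oracle notes), such an initial conversion with the DBMS_REDEFINITION package could look like this:

-- 1. Create an empty interim table with the desired range partitioning.
CREATE TABLE sapr3.vbrp_part
  PARTITION BY RANGE (vbeln)
   (PARTITION p_old    VALUES LESS THAN ('0090000000'),
    PARTITION p_2003_a VALUES LESS THAN ('0090500000'),
    PARTITION p_rest   VALUES LESS THAN (MAXVALUE))
  AS SELECT * FROM sapr3.vbrp WHERE 1 = 0;

-- 2. Verify that the table can be redefined online (requires a primary key)
--    and start the redefinition; the existing data is copied while the
--    original table stays available for reads and writes.
EXECUTE DBMS_REDEFINITION.CAN_REDEF_TABLE('SAPR3', 'VBRP');
EXECUTE DBMS_REDEFINITION.START_REDEF_TABLE('SAPR3', 'VBRP', 'VBRP_PART');

-- 3. Recreate indexes, constraints and grants on the interim table here,
--    then resynchronize the changes made by ongoing DML.
EXECUTE DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SAPR3', 'VBRP', 'VBRP_PART');

-- 4. Complete the redefinition; the two tables are swapped in the data
--    dictionary during a very brief DML lock, independent of table size.
EXECUTE DBMS_REDEFINITION.FINISH_REDEF_TABLE('SAPR3', 'VBRP', 'VBRP_PART');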
Oracle Collaboration
Voicemail & Fax Wireless and Voice Access
Oracle Voicemail provides true unified messaging Oracle Collaboration Suite gives your mobile
by storing all messages – including email, voice- employees full access to all of their corporate

Suite mail, and fax – in the same Oracle database. This


eliminates the need to synchronize message stores
and the chore of administering multiple stores
information anywhere, from any device. Its com-
plete set of capabilities lets users access email,
calendar, tasks, files, and corporate directories via
Communication, content management, and colla- that each contain different message types. Oracle their voice, PDAs, Web-enabled mobile phones,
boration – via email, voicemail, file sharing, and Collaboration Suite users can access and manage and pagers. Oracle Collaboration Suite will alert
Web conferencing – have become critical to every all messages from the interface of their choice, your employees of important events and emails. It
organization. Unfortunately, technology has not including a Web browser, phone, PDA, and fax. lets you define where you are right now (in other
kept up with needs. More and more customers are words, your context), so that alerts get routed to
suffering from poor reliability and security, unm- Meeting your desktop, mobile phone, or other device.
anageable information storage, inadequate inte-
With the addition of the new Oracle iMeeting
gration of voice and data systems, and skyrocketing Oracle Collaboration Suite is built on one infra-
technology in Release Two, online collaboration
costs. structure, managed by the same tools, and fully
activities such as co-browsing, Web conferences,
Oracle Collaboration Suite, Release Two, addresses integrated. Oracle Collaboration Suite gives your
voice streaming, meeting playback and even
these issues with an Internet architecture that enterprise a way out of the cycle of data fragmen-
casual communications such as instant messaging,
tightly integrates communications and informa- tation and systems integration.
can be consolidated and centralized. The iMeeting
tion, and incorporates intelligence to deliver the
functionality in Oracle Collaboration Suite acts as
right information when and where it’s needed, via For more information please see
a foundation for integrating enterprise content,
email, voicemail, calendaring, file sharing, search, http://www.oracle.com/collaboration
commerce and comprehensive business flows and
and wireless access. In addition, Release Two fea-
allows users to capture and act on information
tures real-time communication including Web
before, during and after online meetings.
conferencing, white boarding, and instant messa-
ging all built on a single scalable collaboration
platform for Web conferencing, as well as improved Calendar
group productivity, new messaging capabilities, Oracle Calendar provides calendaring, scheduling
and further reduction of total cost of ownership. and personal information management (PIM)
Add in the Unbreakable Oracle9i infrastructure, capabilities through desktop clients, the Web and
and Oracle Collaboration Suite Release Two is any mobile device. The scalable calendar architec-
uniquely positioned to meet the communication ture allows companies to utilize sophisticated
and collaboration requirements of modern enter- group calendars and resource scheduling across an
prises. entire enterprise.

Oracle Collaboration Suite includes several well- Files


established and successful Oracle technologies, Oracle Files can replace dozens or hundreds of file
each addressing a different aspect of communica- servers in an enterprise with one scalable, reliable
tions, content, or context. In fact, Oracle Colla- file server for everyone to use. Built for large-scale
boration Suite consists of the next generation of collaboration, Oracle Files makes your file system
technologies that Oracle Corporation itself uses more manageable for both your data center and
for its own business. More importantly, however, your users. Users also know exactly where they
customers are already using these solutions success- should be storing, sharing, and collaborating on
fully today. documents. Self-service management features let
users create workspaces to secure, author, and
Email publish content among them, and they can use all
Oracle Email is the most cost effective, reliable their favorite productivity tools and network
and secure messaging system in the marketplace. protocol servers.
Save on administration, hardware and software by
consolidating distributed email systems into a Search
single enterprise mail store. At the same time, Oracle Ultra Search provides an enterprise Web
enjoy reliable messaging that withstands machi- search engine that empowers employees to locate
ne and site failures. Use any desktop client to valuable information in your organization’s intranet
access the industry standard mail servers. or extranet. Oracle Ultra Search gathers and indexes
Take advantage of superior virus protection and all documents including Web sites, databases,
eradication capabilities. Access all your messages files, mailing lists, portals and a variety of user
anytime, anywhere using any device. defined sources including applications. 9
Oracle Internet Directory 9.2 for SAP

SAP announces the certification of Oracle Internet Directory 9.2 (OID) as a component of a security solution within the mySAP technology interface (directory interface for user management via LDAP).

The support of the Lightweight Directory Access Protocol (LDAP) enables SAP to use external directories to centrally administer the data of several SAP or third party systems. These central directories can contain information such as user data, security details, details for system resources and configuration parameters.

With this certification, enquiries such as “Can OID be used with SAP?” or “Does OID support SAP?” can be answered positively. What is important is that the integration of OID and SAP AS 6.1 is an SAP and not an ORACLE functionality. Apart from the integration of MS Active Directory, mySAP represents another widespread platform that works together with OID.
the way which may not be related to the primary
Diagnosing Performance Bottlenecks
Statspack
bottleneck; in this case, note the data, but
ignore it, as it will not help in significantly
reducing response time.
Using Statspack • Begin gathering additional data. Look at:
• the Wait Events and Background Wait Events
sections for the average wait time for the high-
est ranking events (this column is identified
by the heading Ave wait(ms)). This data can
Introducing Statspack Oracle performance data. The summary page is sometimes provide insight into the scale of
the wait. If it is relevant to do so, also cross-
Statspack is a performance data gathering tool broken down into these areas (in order of impor-
check the event times with any other appli-
which first shipped with Oracle8i release 8.1.6. tance): cable Statspack or OS data. For example, if
Statspack gathers data from the memory-resident
• Top 5 Wait Events the events are IO related, is the Oracle data
v$ views, and stores that data in Oracle tables for
consistent with the OS read times, or does
later analysis. Although similar to its predecessor • Load Profile
the OS data indicate the disks containing the
(BSTAT/ESTAT), Statspack simplifies performance • Instance Efficiency datafiles are overly busy?
diagnosis by presenting the performance data in a
manner which is effective for pinpointing bott- The remaining sections of the Statspack instance • the Load Profile and Instance Efficiency sections
lenecks. report are used to gather additional data. The on page 1, specifically at any statistics or
high-load SQL sections are always scanned irre- ratios which are related to the top wait
THE IMPORTANCE OF BASELINES AND spective of the problem, as they provide insight events. Is there a single consistent picture?
into the SQL executing at the time the problem If not, note other potential issues to investi-
STATISTICS gate while looking at the top events, but
One of the biggest challenges for performance occurred, and the application in general.
If a level 5 or above snapshot is taken, the SQL don’t be redirected away from the top wait
engineers is determining what changed in the events. Scan the other statistics. Are there
system to cause a satisfactory application to start report (sprepsql.sql) which is new in Oracle9i,
any statistics in the Load Profile which are
having performance problems. The list of possibi- provides in-depth information for a single SQL unusually high for this site, or any ratios in
lities in a modern complex system is extensive. statement. The SQL report includes all of the SQL the Instance Efficiency section which are atypi-
Historical performance data is crucial in elimina- statistics for a particular hash value, the complete cal for this site?
ting as many variables as possible. This means text of the SQL statement. Additionally, if a level 6
that you should collect operating system, database, • Drill-down for additional data to the appro-
snapshot or above was taken the SQL execution priate section in the Statspack report.
and application statistics starting from the
system’s testing phase onwards, or at least from plans are also included. This report is frequently The relevant sections to examine are indicated
the first day an application is rolled out into pro- used to tune problems identified as local to a spe- by the top wait event. For example, if the top
duction. This applies even if the performance is cific program. events are IO related, look at the SQL ordered by
unsatisfactory. As the application stabilises and the This remainder of this paper focuses on how to Reads, and the Tablespace IO Stats, and File IO
performance characteristics are better understood, Stats sections. Is the data in these sections con-
use the instance report (spreport.sql).
a set of statistics become the baseline for future sistent with the wait events? What other infor-
reference. These statistics can be used to correlate mation does the drill-down data provide (que-
Statspack Strategy stions to ask include: Are the number of times
against a day when performance is not satisfactory,
and can assist in quantifying subsequent impro- • Use the Top 5 Wait Events on page 1 to identify a resource was used high or low? Are there any
vements made. They are also essential for future the events with most wait time by percentage1. related resources which when pieced together
capacity and growth planning. form a pattern?)
These events are preventing the majority of
Oracle statistics are queried from the v$ views • Also note that it is vital to examine the SQL
server processes from being productive, and so
using a snapshot method such as Statspack. sections of the Statspack report, to identify
are likely2 the bottleneck. Check whether the
Statistics which should be gathered include: what the application was requesting of the
top events are related. Are the events consi-
• Wait events instance which caused this performance regres-
stent with any OS statistics?
• SQL statistics and SQL Plans sion. The SQL sections also sometimes identify
There may be one event which greatly
• Overall systems statistics (shared pool, buffer tunable high-load SQL, or SQL statements
outranks the others, in which case this should
cache, resource such as latches, locks, file IO) which are avoidable.
be considered the bottleneck: focus on this
event. Alternatively, there may be a set of rela- • Significantly less important to scan through
Using Statspack to Identify Bottlenecks ted events which again indicate one primary are the Library Cache Activity and Dictionary
area of contention. A third possibility is there Cache Stats sections, although they may provi-
Statspack provides a simple way of collecting
is a set of disjointed events which may rank de some insight.
Oracle performance data and identifying bottle-
closely for the greatest wait time. In this case, • In some situations, there may not be sufficient
necks. The reports pre-compute many useful
you may want to look at each one in turn. data within the Statspack report, which will
statistics, and eliminate misleading statistics. Ignore events in the Top 5 listing which do not necessitate gathering additional statistics
There are two Statspack reports. comprise a significant portion of the wait time. manually.
The first, spreport.sql, is an overall instance per- • Considerations while gathering additional data.
formance report. The first page of this report con- The purpose of gathering additional data is to 1 Idle events are omitted from this list. Also, if timed_statistics
tains an instance performance summary, which help build up an understanding of the charac- is true, the events are ordered by the amount of time each event
concentrates a complete view of instance health. teristics of the instance and to identify the was waited for; this ordering gives the best indication of where
application code executing, at the time the most of the time was lost, and therefore where the biggest bene-
Subsequent pages include sections which report
fits can be gained. If timed_statistics is false, the order is by the
detailed statistics on the various tuning areas. problem occurred. Gathering additional data
number of waits. Oracle recommends setting timed_statistics
The instance performance report is used when usually requires skipping backwards and for- to true for best performance diagnosis.
investigating an instance-wide performance pro- wards through the report to check statistics 2 Note that in a healthy, well performing system, the top wait
which may be of interest. You can gather some
blem. events are usually IO related. This is an example of a case where
additional data up-front, and while drilling the statistics alone do not accurately indicate whether there is a
The instance performance summary page always down. The data gathered may portray a consis- problem, which is why the most important indicator of perfor-
indicates the biggest bottleneck to investigate,
and hence the place to start when examining
tent picture of one bottleneck, or a series of
bottlenecks. You will find interesting data along
mance is user perception. 11
By this stage, candidate problems and contended-for resources have been identified (with the highest priority issues dictated by the top-5 wait events). This is when the data should be analyzed. Consider whether there is sufficient data to build a sound theory for the cause and resolution of the problem. Use the Oracle9i Performance Guide and Reference as a resource to assist in identifying the cause, and the resolution of the contention. The manual includes detailed descriptions of how to:
• diagnose causes and solutions of Wait Events
• optimally configure and use Oracle to avoid the problems discovered
Even while following the strategy outlined above to analyze the problem, it is still possible to fall into the traps outlined in Performance Tuning Wisdom.

Performance Tuning Wisdom
Below is a list of traps, and some wisdom which may help you find a faster, or more accurate, diagnosis.
EXAMPLE OF USING STATSPACK REPORT


The following example comes from a scalability benchmark for a new OLTP application. Excerpts of the Statspack report have been included, rather than
the complete report. The requirement was to identify how the application could be modified to better support a greater concurrent user load. The instance
was running Oracle8i release 8.1.7, and the Statspack snapshot duration was 50 minutes.
The intent of using a specific Statspack report is not to identify how to fix these specific problems, rather to provide an example of the technique described above.
IDENTIFY THE LARGEST WAIT EVENTS, BY TIME, STARTING IN THE TOP-5

Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                         Wait     % Total
Event                                 Waits          Time (cs)     Wt Time
-------------------------------- ------------ ---------------- -----------
enqueue                               482,210        1,333,260       36.53
latch free                          1,000,676          985,646       27.01
buffer busy waits                     736,524          745,857       20.44
log file sync                         849,791          418,009       11.45
log file parallel write               533,563          132,524        3.63

Observations:
In this case, the most significant wait events are distributed over enqueue, latch free, and buffer busy waits, with enqueue being the most prominent. There is no obvious correlation between these events, which implies there are probably three separate problems to investigate. The most important wait event to investigate is enqueue.
• Problem 1: enqueue: Drill down to the Enqueue activity data.
Although the primary focus should be on the biggest bottleneck, it is sometimes necessary to investigate the next most important issues, while the solution for the first bottleneck is being addressed. These are:
• Problem 2: latch free: Which latch? Why used? Check the Latch Activity and Latch Misses sections.
• Problem 3: buffer busy waits: Which files and buffers? Why? Check the Buffer Waits section and the Tablespace IO and File IO sections.
Before jumping to the drill-down sections, gather additional background data which will provide context for the problem, and details on the application.

Additional Information - Wait Events Detail Pages


Check the Wait Events and Background Wait Events detail sections for the average time waited (Avg wait (ms)) for the top events. Identify the magnitude of
the wait, and if possible, compare Oracle statistics to any relevant OS data (e.g. check whether the average read time for the event db file sequential read,
and the OS read time for the disk containing that datafile are consistent, and as expected).
Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
------------------------------------------ ----------- -------------- ---------- --------
enqueue 482,210 50 1,333,260 28 0.8
latch free 1,000,676 751,197 985,646 10 1.6
buffer busy waits 736,524 3,545 745,857 10 1.2
log file sync 849,791 13 418,009 5 1.4
log file parallel write 533,563 0 132,524 2 0.9
SQL*Net break/reset to clien 535,407 0 20,415 0 0.9
db file sequential read 22,125 0 6,330 3 0.0
...
Observations:
• None of the wait times are unusual, and there are currently no relevant Oracle or OS statistics to compare against.
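When relevant OS data is available, the same cross-check can also be made outside of Statspack against the cumulative wait interface, before comparing the averages with OS tools such as iostat or sar. A minimal sketch:

  -- AVERAGE_WAIT is reported in centiseconds in Oracle8i/9i
  SELECT event, total_waits, time_waited, average_wait
    FROM v$system_event
   WHERE event IN ('db file sequential read', 'db file scattered read');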

Additional Information - Load Profile


Examine other statistics on the summary page, beginning with the load profile.
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 1,316,849.03 6,469.71
Logical reads: 16,868.21 82.87
Block changes: 5,961.36 29.29
Physical reads: 7.51 0.04
Physical writes: 1,044.74 5.13
User calls: 8,432.99 41.43
Parses: 1,952.99 9.60
Hard parses: 0.01 0.00
Sorts: 1.44 0.01
Logons: 0.05 0.00
Executes: 1,954.97 9.60
Transactions: 203.54
% Blocks changed per Read: 35.34 Recursive Call %: 25.90
Rollback per transaction %: 9.55 Rows per Sort: 137.38
Observations:
• This system is generating a lot of redo (1 MB/s), with 35% of all blocks read being updated.
• Comparing the number of Physical reads per second to the number of Physical writes per second shows the physical read to physical write ratio is very low (1:49).
Typical OLTP systems have a read-to-write ratio of 10:1 or 5:1 – this ratio (at 1:49) is quite unusual.
• This system is quite busy, with 8,432 User calls per second.
• The total parse rate (Parses per second) seems to be high, with the Hard parse rate very low, which implies the majority of the parses are soft parses.
The high parse rate may tie in with the latch free event, if the latch contended for is the library cache latch, however no assumptions should be made.

Additional Information – Instance Efficiency

Instance Efficiency Percentages (Target 100%)


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 98.56 Redo NoWait %: 100.00
Buffer Hit %: 99.96 In-memory Sort %: 99.84
Library Hit %: 99.99 Soft Parse %: 100.00
Execute to Parse %: 0.10 Latch Hit %: 99.37
Parse CPU to Parse Elapsd %: 58.19 % Non-Parse CPU: 99.84

Shared Pool Statistics Begin End


------ ------
Memory Usage %: 28.80 29.04
% SQL with executions>1: 75.91 76.03
% Memory for SQL w/exec>1: 83.65 84.09

...

Observations:
• The 100% soft parse ratio indicates the system is not hard-parsing. However the system is soft parsing a lot, rather than only re-binding and re-
executing the same cursors, as the Execute to Parse % is very low. Also, the CPU time used for parsing is only 58% of the total elapsed parse time
(see Parse CPU to Parse Elapsd). This may also imply some resource contention during parsing (possibly related to the latch free event?).
• There seems to be a lot of unused memory in the shared pool (only 29% is used). If there is insufficient memory allocated to other areas of the database
(or OS), this memory could be redeployed.
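If the application cannot be changed to keep cursors open and simply re-execute them, the session cursor cache at least makes these repeated soft parses cheaper. A hedged sketch (the value 100 is illustrative only):

  ALTER SESSION SET session_cached_cursors = 100;

  -- 'session cursor cache hits' should then grow relative to 'parse count (total)'
  SELECT name, value
    FROM v$sysstat
   WHERE name IN ('session cursor cache hits', 'parse count (total)', 'parse count (hard)');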

Additional Information – SQL Sections

It is always a good idea to glance through the SQL sections of the Statspack report. This often provides insight into the bottleneck at hand, and may also
yield other (less urgent) tuning opportunities.

SQL ordered by Gets for DB: XXX Instance: XXX Snaps: 46 -48
-> End Buffer Gets Threshold: 10000

Buffer Gets Executions Gets per Exec % Total Hash Value


--------------- ------------ ---------------- --------- ---------------
8,367,163 766,718 10.9 16.4 38491801
INSERT INTO EMPLOYEES VALUES(:1,:2,:3,:4,:5,:6)

3,695,306 798,317 4.6 7.2 1836999810


SELECT DEPARTMENT FROM PAYROLL WHERE COST_CENTER = :"SYS_B_00"
AND FNO = :"SYS_B_01" AND ORG_ID != :"SYS_B_02"

...

2,951,100 65,580 45.0 5.8 2714675196


select file#, block# from fet$ where ts#=:1 and file#=:2

Observations:

• The majority of the SQL statements were well tuned, and do not require many logical or physical reads.
• Most of the activity was INSERT or SELECT, with significantly fewer updates.
• The recursive SQL against fet$ (the data dictionary Free ExTent table) implies there is dynamic space allocation. This statement has been executed 65,000 times
during the report interval of 50.48 minutes, which is, on average, 21 times per second! This can easily be avoided by using locally managed tablespaces.
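Locally managed tablespaces keep free-space information in bitmaps inside the datafiles rather than in the fet$/uet$ dictionary tables, so extent allocation no longer generates this kind of recursive SQL. A minimal sketch (tablespace name, file name and sizes are illustrative only):

  CREATE TABLESPACE data_lmt
    DATAFILE '/oracle/DB1/sapdata1/data_lmt.data1' SIZE 2000M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

  -- Segments that allocate extents frequently can then be moved, for example:
  ALTER TABLE employees MOVE TABLESPACE data_lmt;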

3 For definitions of hard parse and soft parse, please see the Oracle9i Performance Guide and Reference.

SQL ordered by Executions for DB: XXX Instance: XXX Snaps: 46 -48
-> End Executions Threshold: 100

Executions Rows Processed Rows per Exec Hash Value


------------ ---------------- ---------------- ------------
766,718 536,313 0.7 38491801
INSERT INTO EMPLOYEES VALUES(:1,:2,:3,:4,:5,:6)

275,327 275,305 1.0 3906762535


SELECT MANAG_MRD_ID_SEQ.NextVal FROM dual

275,327 275,305 1.0 341630813


INSERT INTO MANAG_MRD_IDENTITY VALUES(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:1
1,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:2
7,:28,:29,:30)

258,595 258,566 1.0 3303454348


SELECT PROD_ID_SEQ.NextVal FROM dual

258,595 258,561 1.0 4222973129


INSERT INTO PROD_IDENTITY VALUES ( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,
:11,:12,:13,:14,:15,:16,:17,:18,:19,:20 )

914 914 1.0 1425443843


update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
order$=:6,cache=:7,highwater=:8,audit$=:9 where obj#=:1

Observations:
• There was nothing of any interest in the SQL ordered by Reads section, which implies the application is well tuned to avoid unnecessary IO.
• A significant proportion (30%) of the INSERTs into the EMPLOYEES table are failing, which is evident from comparing the number of Rows Processed to
the Executions.
• Many of the INSERTs are executed the same number of times as the SELECT from a similarly named sequence number. This implies the sequence number
is used as a column value in the insert. Possibly a key value?
• The update of seq$ (this is the SEQuence number data dictionary table) is performed 18 times per minute, which is once every 3 seconds. Unless this
data coincides with the SQ (SeQuence number) enqueue being contended for, this is not the largest bottleneck. However, for additional efficiency it may
be useful to increase the cache size for frequently modified sequence numbers.
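Increasing the cache is a one-line change; since seq$ is updated only when the cache is exhausted, a larger cache cuts the update rate proportionally. A sketch using one of the sequences from the report above (the cache value is illustrative only):

  -- With CACHE 1000, seq$ is updated once per 1000 values instead of once per 20 (the default)
  ALTER SEQUENCE manag_mrd_id_seq CACHE 1000;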

SAP on Oracle at Kodak
Pressing the "famous button"

"You press the button, we do the rest". With this slogan, George Eastman made picture taking popular for hundreds of thousands of amateurs and professionals around the globe. Eastman recognized the potential of the world market for amateur photographers. The genius of his invention was that it took a complicated and cumbersome process and made it easy to use and accessible to nearly everyone.

By 1900, distribution outlets had been established in England, France, Germany, Italy and other countries outside the U.S. Today, Kodak products and services are marketed by subsidiary companies to people in more than 150 countries.

Being part of the digital age
The combination of traditional photography with digital technology is the major step in producing systems that bring new levels of utility and fun to the taking and use of images.

Kodak therefore manages an IT infrastructure with over 30 Oracle databases in conjunction with the SAP applications, and hundreds of non-SAP Oracle databases. It was Kodak's experience with the safe, reliable and scalable Oracle corporate database standard that led Kodak to select the Oracle database under SAP.

The company's ERP efforts and SAP implementation activities were launched in 1996, and the initial go-live took place in September 1997. The system supports 10,000 named users and 2,200 concurrent users at a time. Currently Oracle 8.1.7 is in use, anticipating an Oracle 9.2 upgrade as soon as SAP completes the certification process. The overall size of the SAP R/3 Oracle database is around 2 terabytes, with an additional 700 gigabytes for the production SAP BW database.

SAP R/3 3.1I is in use and will be upgraded to R/3E. Further upgrades are planned for SAP BW from 2.1c to 3.0b and SAP B2B from 3.0 to 3.1. Each flavor of SAP (OLTP, BW, HR, CRM, B2B) consists of multiple environments.

All UNIX servers used in this environment are Sun Solaris systems. The Oracle Business Warehouse (BW) database and central instances are installed on an E6500 system with ten 400 MHz CPUs, where two servers represent a high availability cluster. Another two E3500 systems operate as application servers.

The key to solving business problems
"Kodak's global business relies upon the availability of the central SAP environment. Oracle's availability and reliability has been key in meeting our business and technical objectives", says Karen Schuh, IT architect in Kodak's Architecture Strategy and Planning group.

Oracle's hot standby capability has been the key to solving business problems at Kodak. Initially, hot standby was used as the cornerstone of a disaster recovery implementation to meet the business need for less than 8-hour availability in the event of a declared disaster. Over time, hot standby was extended to include graceful failover, and this capability was used to support a move of Kodak's production data center and to facilitate major changes to the storage architecture.

Hardware Platforms
Database: E10K, 64 x 400 MHz CPUs; two servers in a high availability cluster
Central Instance: E4500, 8 x 400 MHz CPUs; two servers in a high availability cluster
Application: Mix of E3000, E3500, and V880 servers
The entire configuration, except for high availability, is duplicated at the disaster recovery site.
Oracle and Fujitsu Siemens
A unique combination of state-of-the-art technologies

Oracle and Fujitsu Siemens Computers evaluate the use of Oracle9i RAC in cooperation with FlexFrame for mySAP

Operation of complex mySAP landscapes becomes possible on a simple, clear and flexible IT infrastructure with FlexFrame for mySAP – including significant TCO savings. This innovative development by SAP, Network Appliance and Fujitsu Siemens Computers revolutionizes the introduction and operation of complex mySAP environments.

What is behind FlexFrame for mySAP?
It is the combination of several coordinated innovative components which makes this new solution so special. These include, for example, standardized server modules, blade servers, the virtualization of the SAP software, snapshot functions of the NetApp filer and the built-in configuration-free high availability. All this guarantees rapid scaling and expansion whilst in operation, and 24/7 service availability. In addition there are the functions contained in the control nodes for back-up, clustering, monitoring, and remote access for the entire mySAP landscape.

A combination of Oracle9i RAC and FlexFrame enables further significant benefits:
• Maximum availability of the SAP systems
• Increased flexibility for operation and expansion
• Simplification of administration, in particular in the back-up/restore area

FlexFrame is particularly interesting for cost-conscious customers whose requirements are frequently and rapidly changing in their existing mySAP environment. This new offer is also ideal for consolidation projects and for customers who would like to change from R/3 to mySAP. The aims of FlexFrame are clearly defined: businesses can achieve flexible and uninterrupted operation and "SAP software services on demand" with this new business-critical computing solution. Major advantages are maximum security, scalability, high availability and central control and monitoring of the system. Significant benefits can be achieved through Oracle9i RAC in cooperation with FlexFrame for mySAP.

Therefore Oracle and Fujitsu Siemens Computers have started a joint evaluation project to test and optimize the cooperation of blade servers, Oracle9i RAC and Network Appliance filers within the framework of FlexFrame for mySAP. Oracle9i RAC enables the operation of several parallel database nodes to enhance availability and performance. The MCOD concept (Multiple Components in One Database), also in combination with OLTP and OLAP (e.g. R/3 and Business Warehouse), can only be fully used with Oracle9i RAC. In particular, the dynamic use of servers as DB or application servers as required can be enabled by using FlexFrame for mySAP in order to efficiently employ available IT resources.

For more information contact
Fujitsu Siemens Computers GmbH
Email: ccsap@fujitsu-siemens.com
Phone: +49 (0) 6227 73 1800
Fax: +49 (0) 6227 73 1801
http://www.fujitsu-siemens.com/sap

Oracle9i Release2 for Intel Itanium2 Systems

The Itanium 2 processor is the second generation of a 64-bit processor by Intel. The 64-bit processor architecture is ideal for running Oracle9i databases and mySAP business applications. These areas require as much information as possible to be administered in a computer's RAM. The more data that can be held in RAM, the higher the throughput of transactions in a SAP system with the Oracle9i database. The Itanium 2 processor's 64-bit architecture allows an unlimited amount of data from mySAP applications and Oracle9i databases to be administered in main memory. To date, the 32-bit architecture was only able to hold up to 4 gigabytes in RAM.

Oracle was quick to realize the advantages of 64-bit architectures and correspondingly optimized its database architecture for 64-bit systems. Oracle9i Release 2 is now available on all 64-bit platforms of the various manufacturers. It comes as no surprise that the first 64-bit version of SAP R/3 in 1999 was realized with an Oracle database. Oracle's technological lead in 64-bit technology is now being continued by providing Oracle9i for Linux64, HP-UX and Windows .NET Server on Intel Itanium 2 hardware.

The close cooperation between SAP's and Oracle's development departments allows SAP applications with Oracle9i to be tested on Itanium 2 systems at a very early stage. To date, successful tests have been conducted for SAP R/3 4.6C with Oracle9i Release 2 on Itanium 2 systems under the operating systems Linux64, Windows .NET Server and HP-UX. According to current planning, mySAP products with Oracle9i Release 2 on Linux64, HP-UX and Windows .NET Server for Intel Itanium 2 systems will be available in summer 2003.

"According to IDC, the only growing operating systems are Linux and Windows, holding a market share of about 70% in the enterprise. The trend is clearly moving to standards-based infrastructures, where Intel provides the building blocks. The Itanium 2 architecture, scaling well over 8, 16, 32 and more processors, provides world-class performance in recent TPC-C and SAP SD benchmarks for non-clustered 4-way machines using Oracle databases, by far outperforming competitive hardware architectures with over 80,000 tpmC at a cost of under $5/tpmC. Server consolidation doesn't necessarily mean big boxes. With Intel's 32-bit environment in combination with Oracle9i RAC, enterprises can consolidate their IT infrastructure and adapt the hardware as needed. Most of the benefits of server consolidation are achieved through centralization and data consolidation, meaning storing data at a central place. Oracle9i RAC on Linux Xeon MP servers is a very good architectural concept to achieve this."
Werner Schueler, Alliance Manager Intel Europe
IT – Energy for Halliburton
Managing Massive Volumes in SAP

Founded in 1919, Halliburton is one of the world's largest providers of products and services to the petroleum and energy industries. Halliburton employs 85,000 people in more than 100 countries working in two major operating groups. Halliburton's energy services group (ESG) offers a broad array of products and services to the upstream oil and gas industry, while the KBR group provides engineering and construction services to the downstream energy industry.

In 1996, Halliburton initiated a global ERP implementation project. Aiming at resolving Y2K system issues and standardizing business processes across the organization, Halliburton selected SAP as its ERP system.

During 1998 and 1999 Halliburton rolled out the complete suite of SAP R/3 modules and new streamlined business processes to its Energy Services Group (ESG) and to a subset of the KBR engineering and construction group. The SAP system and the new business processes were stabilized during the next two years, and in November 2001 the system was upgraded to SAP 4.6C. In 2002 SAP was rolled out to several additional business units that had been acquired while the initial deployment was underway. At the same time Halliburton initiated an Enterprise Buyer Professional (EBP) e-procurement pilot to further streamline its purchasing processes.

The SAP Operating Environment
The resulting system is one of the largest SAP deployments in the world operating on a single Oracle database. Halliburton has approximately 14,500 SAP users worldwide, and that number is expected to reach about 17,000 during the first quarter of 2003. On average, Halliburton's SAP system handles 70,000,000 transactions per month, 120,000 background jobs per month, and 3 to 3.5 million dialog steps per day. These numbers continue to increase as the business continues to grow. To meet the high-volume database performance requirements of their SAP environment, Halliburton chose Oracle as their database provider.

"At the time Halliburton chose Oracle as our database, it was the only safe choice on the market to meet our requirements," said Mike Perroni, Director of Halliburton's ERP Center of Expertise. "Six years later, we still see it as being the only safe choice for our large environment."

A 3 terabyte Oracle database supports the primary production instance of SAP R/3. The production database, which is growing at a rate of 60 GB per month, is expected to reach well over 4 terabytes in size by the end of 2003. Halliburton also utilizes Oracle to support a 350 gigabyte implementation of SAP's Business Information Warehouse (BW) module. In addition to the R/3 and BW production instances there is also a read-only reporting instance of R/3 – a nightly copy of the production database – and a number of other test and development instances used to support the production environment.

Despite the incredible demands placed on its capabilities, SAP system performance is measured at just above a half-second internal response time on average, with peaks during the monthly close at less than one second internal response time. The Oracle database technology is integral to ensuring this level of performance.

The SAP Operating Environment at a glance:
• 3 Terabyte SAP Oracle Database
• 14,500 SAP Users, Growing to 17,000
• 3 – 3.5 Million SAP Dialog Steps per Day
• 70,000,000 SAP Transactions per Month
• 120,000 Background Jobs per Month
• 2 HP Data Center Locations in Toronto, Canada (24x7)
• 30 Sun Unix Servers (294 CPUs), including 2 SunFire 15k servers
• 14 NT Servers
• 63 Terabytes (4 EMC Symmetrix Frames)
• 4.5 Terabytes Sun Storage

The Future of Halliburton, SAP, and Oracle
Challenges facing Halliburton in the future include further simplifying business processes, improving efficiencies, and lowering costs. The company has started to introduce some of the mySAP components to meet these additional business challenges, but naturally, this is making the system landscape even more complex.

To buffer this complexity and reduce operating costs, Halliburton is investigating Oracle's RAC technology for potential use in its SAP R/3 and data warehouse environments. RAC could be used as a means to lower cost while improving availability of these environments. Additionally, using Oracle's database replication features to replace EMC's SRDF data replication facility could offer better partitioning for segmenting transactions in Oracle's data warehouse environment.

"We are excited about Oracle's new technologies such as RAC, which we believe can lower our total cost of ownership while meeting our growing demands", says Mike Perroni, Director of Halliburton's ERP Center of Expertise.
The SAP Special Interest Group (SIG) at IOUG

The IOUG sponsors Special Interest Groups (SIGs) designed to assist members with specific Oracle products and products that are tightly integrated with the Oracle products they use. The goal of the IOUG SIGs is to offer technical information and peer-to-peer collaboration in order to facilitate the effective implementation of products. This is achieved with technical resources, tips, links, and list serves.

The SAP SIG specifically provides a forum of open discussion and education on Oracle-related issues associated with SAP running on an Oracle database. During the 1998 IOUG conference a group of about 40 attendees started as a roundtable to surface issues and solutions to common problems. The group has grown every year since and officially became an IOUG SIG in 1999.

IOUG SAP SIG Mission
The mission of the SAP SIG is to provide a forum for open discussion, education and networking to meet the challenges of implementing and maintaining Oracle SAP R/3 environments. In addition, the SAP SIG facilitates raising issues and providing enhancement suggestions to both SAP AG and Oracle. The benefits of SAP SIG membership include providing profile and survey information, offering open forums for discussion and providing networking opportunities among members.

Membership
Membership in the SAP SIG comes with your IOUG membership.

SAP SIG Events
The main events for the SAP SIG are held at the yearly IOUG conference. Each year there is a business meeting and information exchange, a SAP SIG survey and report of the previous year's survey, a vendor forum panel discussion with representatives from both SAP and Oracle, lunch time opportunities for informal discussion, and formal technical sessions on various SAP/Oracle issues.

The SAP SIG also sponsors quarterly technology forums. The forums are one-hour dial-in calls and each forum focuses on a specific topic of interest to the SIG. Developers from both SAP and Oracle participate with product updates and are available to answer questions.

Web Site
The SAP SIG web site can be accessed from the IOUG web site: www.ioug.org.

For more information please contact
Thomas Stickler
e-mail: thomas.stickler@oracle.com
Fax: 610-408 4815 (USA)

SURVEY 2003 – International Oracle Users Group (IOUG), Americas

Position: DBA / Manager / Developer / Basis / Other ________________________________
Database: Oracle / Other _______________________
R/3 Version: –––––––––––

Please complete a copy of the survey and send via fax to the attention of Thomas Stickler: 610-408 4815 (USA).
You can also complete this survey on-line by visiting our website www.ioug.org and going to the SAP SIG page.

This is the fourth iteration of our survey on issues that surround Oracle and SAP installations. Using the 5-point scale provided, please rate each item on the following list with respect to its overall importance within your organization. Use the numbers between 1 and 5 as many times as you like. The anchors for the scale appear below. The results will be made available to the membership of the IOUG Oracle on SAP Special Interest Group.
(1 = not important at all; 2 = slightly important; 3 = moderately important; 4 = very important; 5 = extremely important)

Importance   Issue
____ Impact on ERP/DB as Internet solutions proliferate in the business environment
____ Deciding on a data warehousing strategy and making it work
____ Planning and predicting growth of the database to assure no down time
____ Identifying performance bottlenecks in the database
____ Backing up the system off line in a 24-7 environment
____ Retaining trained staff over time
____ Restructuring the tablespaces when transporting tablespaces between instances
____ Providing the ERP/DB infrastructure to support Enterprise Application Integration (EAI)
____ Tuning the SAP database: can't use regular Oracle methods
____ Really understanding how Oracle and SAP work together
____ Gathering and using Oracle metrics to analyze capacity and troubleshoot problems
____ Tuning ABAP SQL
____ Cross-training staff so all skills have a back-up person
____ Archiving to and/or interfacing with data warehouses
____ Developing a strategy to handle database re-organizations
____ Integration of Oracle Enterprise Manager with SAPDBA and other SAP utilities
____ Integration tools which assist getting SAP data into Oracle data warehouses

What other issues or challenges (that are not on this list) do you face in using SAP in an Oracle Database environment?

Of all the issues presented here, which would the IOUG SAP Special Interest Group be able to help with most?

Please complete the survey and send via fax to Thomas Stickler: 610-408 4815.
You can also complete this survey on-line by visiting our website www.ioug.org and going to the SAP SIG page.
Successful migration from DB2 to Oracle in just four days – reducing TCO for continuous availability

Stefan Reitinger, SAP platform migration project manager.

Every day is newsstand day – which makes downtime for the commercial R/3 SAP system not something Europe's biggest media retailer would wish to contemplate. That is why the short migration time was a major success factor in the changeover from a DB2 to an Oracle database. Oracle's sound migration advice and experience contributed greatly to bringing this challenging project to its successful conclusion in a mere four days.

Kiosk AG operates 1,320 retail outlets, serving more than a million customers daily.

The figures speak for themselves: Every day more than a million customers shop at Kiosk AG's 1,300 retail outlets in German- and Italian-speaking Switzerland for virtually everything the heart desires while traveling and in-between times. A plethora of 6,000 food and non-food products, ranging from candies to tobacco products to incidentals, are available to customers in a hurry, not to mention more than 4,000 media products and 5,500 book titles. In addition to its own retail outlets, Kiosk AG also serves more than 4,500 outside retailers, such as supermarket newsstands. An immense amount of merchandise is moved day after day, not to mention the purchasing and sales data generated to move it. According to SAP platform migration project manager and Kiosk AG Oracle database administrator Stefan Reitinger, "Volumes such as these really tax our logistics and IT capabilities".

But Kiosk AG has it all under control with two commercial data processing systems: A proprietary software package running on an IBM AS/400 helps manage media products – about one third of overall sales – and an SAP R/3 system handles tobacco and non-media products, order processing, financial and cost accounting; until the changeover it ran on another AS/400. When the overall data volume exceeded 730 gigabytes, this became a bottleneck.

Time is money
In early 2000, IT management addressed the problem. Since it was designed for considerably less data and a lower data growth rate, the old AS/400 system simply wasn't up to this kind of volume. Even add-on outside high-performance storage and backup systems failed to produce the performance needed. This is when the planners first began to contemplate a systems change, a pursuit that led inexorably to casting about for a database system good enough to answer present and future needs. Stefan Reitinger: "For reasons of economy we first toyed with the idea of retaining our DB2 environment, but time constraints made this unfeasible." The switch from AS/400 to RS/6000 entailed a new operating system and a system architecture change from EBCDIC to ASCII. Oracle proved to be the best-performing database platform. As daily newsstand turnover generated ever more data, the migration process had to be completed in record time, a fact that severely limited the choice of migration tools.

The Kiosk team wanted the optimum solution for its volume of data. So they went to Oracle's SAP competence center in Walldorf and to IBM Switzerland for information about possible configurations. In late 2000, in cooperation with IBM implementation partner Gate Informatic AG, configuration of the RS/6000 platform began to take shape. Part of the equation entailed forsaking a monolithic solution in favor of a modular system that incorporates several computer nodes. It soon emerged that the data had to be migrated in three stages: data export using SAP-certified migration tools, transfer of the export data to the target platform, and import into the target platform's Oracle database. "Original migration time projections ranged from six to fourteen working days", said Stefan Reitinger. "Nobody was willing to stick their neck out any further and nobody thought a shorter time span was possible." The plan was risky because not having the commercial R/3 environment available for an extended time period could cost Kiosk AG a small fortune.

Careful testing before migration
The decision in favor of the Oracle database platform in mid-January was made for two reasons: time, and the guaranteed performance of the Oracle database engine in conjunction with the database, migration tool and SAP release. Another factor in favor of Oracle was that Kiosk AG already had a number of Oracle-supported applications and time-critical logistics applications running, which meant that the firm already had DB administrator know-how at its disposal – important for the future.

Once the decision was made, action followed action in rapid succession: Two test platforms were created, one using the in-house AS/400 test computer and the new RS/6000 hardware, the other the serious computing power available at IBM's benchmark center in Montpellier, France. According to Stefan Reitinger, "The tests ran more or less concurrently. While the Montpellier tests failed to produce the hoped-for results, the first in-house export conducted in Muttenz was more promising – it was accomplished in just 96 hours".

One reason no further migration tests were conducted in Montpellier was problems with data transfer quality to the South of France and back. Consequently the migration team, with the support of partners East AG in Switzerland, Germany's Realtech AG and Oracle Germany's own SAP Competence Center team, confined itself to testing in-house data export and import into the new system. Anticipation went up another couple of notches when, prior to the live migration, the second test export was concluded in an unbelievable 52 hours.

When Easter 2001 came around, everything was ready. The final physical migration, consisting of data export, transfer and import, took just 48 hours; then the test team checked out the functionality of all SAP features, which took an additional ten hours. The many interfaces of Kiosk AG's complex, heterogeneous system environment were closely scrutinized at the same time. Having started the Wednesday before Easter, the migration team had good reason to pop champagne corks as early as Saturday evening. This meant that, including post-processing, the migration took just four days. On Easter Monday, all processes were tested by each of the departments, and when staff returned to work from the Easter holiday on Tuesday morning, every Kiosk AG department had full use of its SAP environment. Stefan Reitinger summed it up this way: "The changeover equaled our best-case scenario … we continue to reap the benefits of this robust, balanced configuration and we have reached all our objectives: no more resource bottlenecks and we are optimally prepared for continued growth and any SAP upgrades down the road."

Please contact the Oracle Services and Support team for further
information about database migration programs for SAP!
e-mail: saponoracle_de@oracle.com, or Fax: +49 6227 8398-199

Effectiveness of Oracle9i Table Compression in SAP BW

Executive Overview
Data stored in relational databases grows as a result of business requirements for more information. A significant portion of the cost associated with maintaining large amounts of data is the cost of disk systems, and the resources utilized in managing that data. Oracle9i Release2 Enterprise Edition introduces a unique way to deal with this cost by compressing data stored in relational tables. There is virtually no negative impact on query time against that data, thereby enabling substantial cost savings. This white paper documents a test conducted with data generated from a project initiated by SAP, Sun Microsystems and Oracle in summer 2002. The type of data used was the result of a customer survey conducted by SAP, and the amount of data was approximately 5.5 terabytes.

The test results show that the reduction in database size could be more than 2 terabytes, and the operational data was compressed by a factor of between 2 and 3. This space saving was achieved without the usual impact on system performance. Even using the worst-case scenarios for the compression feature, the overall performance was slightly increased. Nevertheless, the focus for customers using this new Oracle9i feature should clearly be the space saving aspects.

Introduction
Commercially available relational database systems have not relied on compression techniques for data stored in relational tables, because the trade-off between time and space for compression has not always been attractive. A typical compression technique may offer space savings, but only at the cost of greatly increased query response times. Oracle9i Release2 Enterprise Edition introduces a unique compression technique that is very attractive for large data warehouses. The reduction of disk space can be significant compared with standard compression algorithms because it is optimized for relational data, has virtually no negative impact on the performance of queries against compressed data, and may have a significant positive impact on queries accessing large amounts of data. Furthermore, customers should experience improved performance of data management operations such as backup and recovery, and Oracle9i compression techniques ensure that compressed data is never larger than uncompressed data.

How it Works
Oracle9i Release2 compresses data by eliminating duplicate values in a database block. Compressed data stored in a database block is self-contained, that is, all the information needed to recreate the uncompressed data in a block is available within that block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table. Compressed database blocks look very much like regular database blocks, with the exception of a symbol table at the beginning.

Program modifications done in the server to allow for compression were very localized. Only the portions of the program dealing with formatting the block, and accessing rows and columns, needed to be modified. As a result, all database features and functions that work on regular database blocks also work on compressed database blocks.

What can be Compressed
Database objects that can be compressed in Oracle9i Release2 include tables and materialized views. For partitioned tables, it is possible to choose compression for some or all partitions. The compression attribute can be declared for a tablespace, a table, or a partition of a table. If compression is declared at the tablespace level, then all tables created in that tablespace will use compression. But it is possible to alter the compression attribute for a table or a partition within a tablespace with a different compression attribute, and the change will only be applied to new data going into that table or partition. As a result, a single table or partition may contain some compressed blocks and some regular blocks. The fact that a table may contain mixed blocks is utilized to guarantee that data size will not increase as a result of compression. In situations where compression could increase the size of a block, it is simply not applied to that block.

Compression occurs while data is being bulk inserted or bulk loaded. These operations include:
• Direct Path SQL*Loader
• CREATE TABLE … AS SELECT statement
• Parallel INSERT (or serial INSERT with an APPEND hint) statement

Existing data in the database can also be compressed by moving it into compressed form through an ALTER TABLE … MOVE statement. This operation takes an exclusive lock on the table, and therefore prevents any updates and loads until it completes. If this is not desirable, Oracle9i's online redefinition utility (the dbms_redefinition PL/SQL package) can be used to avoid the exclusive lock.

Data compression works for all data types except all variants of LOBs and data types derived from LOBs, such as VARRAYs stored out of line or the XML data type stored in a CLOB. Releases prior to Oracle9i Release2 already had the ability to compress indexes, both Bitmap and B*tree, as well as Index Organized Tables.
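The bulk operations listed above translate into ordinary SQL. A minimal sketch with hypothetical table names, showing how a table can be created compressed, loaded via direct path, and how existing data can be compressed in place:

  -- CREATE TABLE ... AS SELECT is a bulk operation, so the copy is stored compressed
  CREATE TABLE sales_hist_comp COMPRESS PCTFREE 0
  AS SELECT * FROM sales_hist;

  -- Direct path (APPEND) inserts go through compression; conventional inserts do not
  INSERT /*+ APPEND */ INTO sales_hist_comp
  SELECT * FROM sales_hist_new;
  COMMIT;

  -- Compress data already stored in a table; MOVE takes an exclusive lock on the table
  ALTER TABLE sales_hist MOVE COMPRESS;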
Cost and Benefit Analysis
The primary benefit of compression is the space savings achieved. The ratio of the size of uncompressed data to compressed data is often referred to as the compression ratio. For example, a compression ratio of 2 indicates that uncompressed data takes twice as much disk space as compressed data. When a high compression ratio can be achieved for large amounts of data, the resulting benefit is a significant saving of disk space. Often this translates into indirect benefits when accessing data. For example, if the access involves a table scan, the operation will be much faster because compression made the table much smaller. Additionally, compression allows greater amounts of data to be kept in the database cache. When compressed tables are accessed through an index, there may be a performance advantage because it increases the chance of finding more of the table data in the database cache. An inherent advantage of compressed tables is that they tend to have a better clustering factor for adjacent indexes.

Cost of Compression in General
With Oracle's unique compression technique, there is no expensive decompression operation needed to access compressed table data. Data compression is performed during bulk loading of data, and the overhead associated with compressing data is compensated by the reduction in free-block search operations for the mass data. If rolling window partitioning techniques are used for loading data, then increased load time may be less of an issue, because the impact of the load on other data warehouse workload is minimal. Once the data is loaded, compressed tables or partitions can be used like any other Oracle tables or partitions. Data can be modified using INSERT, UPDATE, and DELETE commands.

Note: data which is modified without using bulk insertion or bulk loading techniques may no longer be compressed. Data is not compressed when conventional inserts are used; it is compressed only during bulk loads or when using the APPEND hint.

Deleting compressed data is as fast as deleting uncompressed data, and inserts are also fast, but conventionally inserted data is not compressed. Subsequent inserts are just as fast, because data is not compressed in the case of a conventional insert. Updating compressed data may be somewhat slower in some cases, and it is possible to cause fragmentation and waste disk space when modifying compressed data. For example, if a row is deleted, the space occupied by the deleted row becomes fragmented free space in that block. But since a conventional insert does not go through compression, a future row to be inserted is likely not to fit into the space released by a compressed row.

Data compression is therefore more suitable for data warehousing applications than for OLTP applications, because many modifications to the compressed data might result in more space being wasted than is saved through compression. In such a case, it would be necessary to recompress the data. Also, data should be organized so that read-only or infrequently changing portions of the data (e.g. historical data) are kept compressed.

Compression Ratio
Oracle has tested compression with several real-world customers' data across different industries. The typical compression ratio for large data warehouse tables ranges from 2:1 to 4:1. Higher compression ratios have also been observed; for example, call detail data from a major telecom company produced 12:1 compression. In our SAP BW test scenario the compression of the main target objects achieved a ratio of approximately 1:3 "Out-Of-The-Box".

Oracle's compression algorithm is based upon eliminating duplicate values in each block; in most situations there are additional techniques which can improve the compression ratios. First, the data can be loaded in a way that maximizes the duplication of values within a database block. Due to the fact that this would require a coding effort in the SAP BW software, we decided to use the "easiest" way of testing the feature, in order to flatten hurdles for a fast (first-step) implementation of this technology by the SAP BW development team. For that reason any kind of aggregated data that contains summarized data is an ideal candidate for compression, because the GROUP BY clause in the generation of such an aggregated data segment has the side effect of generating partially sorted data. Also, larger block sizes may yield better compression in general. This may happen for two reasons: First, there is an increased probability of duplicate values in a larger amount of data. Second, the space taken by the symbol table in each compressed block is amortized over more data fitting in a larger block.

In our testing environment the Oracle block size was set to 32 KB, and we did not take advantage of the sorting option during our test. The reason is simple: To get this optimization implemented at SAP BW customer sites it would be necessary to change the BW loading code. This might be possible in the future, but we did not want our results to be dependent on an SAP development effort. Nevertheless, a quick test showed that we would have been able to save an additional 10-20% of space consumption if the data had been sorted during insertion in favor of the compression needs.

Storage attributes of a table affect the compression ratio; for example, a large PCTFREE will lead to low compression ratios. Since frequent updates are not expected on compressed tables, setting PCTFREE to 0 is recommended for all tables storing compressed data. Note: PCTFREE is automatically set to 0 for all tables created with the COMPRESS attribute.
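The compression ratio for a specific table can be estimated with a quick experiment before committing to the change; a hedged sketch using hypothetical object names:

  -- Build a compressed copy of the candidate table and compare the segment sizes
  CREATE TABLE fact_comp COMPRESS PCTFREE 0 AS SELECT * FROM fact;

  SELECT segment_name, ROUND(bytes/1024/1024) AS size_mb
    FROM user_segments
   WHERE segment_name IN ('FACT', 'FACT_COMP');
  -- compression ratio = size of FACT divided by size of FACT_COMP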
Performance Impact on Loads and DML in General
Compressing data has a performance impact on loads, DML statements, and queries. Oracle has run numerous experiments to measure the performance characteristics of compression. Compression overhead is most visible during bulk loading: for simple loads, compressing data may cause twice the CPU usage on average. If run on a system with unlimited I/O bandwidth, this may translate into doubling the load time when considering only the database layer. However, two facts usually guarantee that this slowdown will never be observed in real SAP BW customer systems. The first is that bulk loads are I/O-bound on most systems. In those cases, since compression reduces the number of data blocks that have to be read in first and written out later, there is some benefit in terms of elapsed load time which equalizes the cost of the additional CPU usage. Secondly, loading data into SAP BW mostly involves complex data transformations, which alone take a significant amount of time, so compression overhead as a percentage of the entire load process is correspondingly smaller. In the step of creating compressed aggregated table data from compressed source tables, we measured overheads of more than 80% on the database layer compared to non-compressed scenarios. SAP BW customers, who carry out the complete task on their BW layer, will see less than 5% overhead in total. So taking the application layer into account is essential to get the correct picture when assessing the impact for SAP BW customers.

There is no measurable difference in the performance of non-bulk INSERT operations on compressed and uncompressed data. The reason is that a conventional INSERT operation does not go through compression. However, bulk INSERT operations, such as parallel INSERT or CREATE TABLE … AS SELECT operations, and INSERT with an APPEND hint (a.k.a. direct path INSERT), go through compression and are subject to the same bulk load performance characteristics as outlined above.

DELETE operations are 10% faster for compressed tables. The benefit comes from the fact that the compressed rows are smaller, so there is less data to be logged. No extra work, such as cleaning up of symbol tables when appropriate, is currently done during this operation. UPDATE operations are 10-20% slower for compressed tables on average, mainly due to some complex optimizations that have been implemented for uncompressed tables but not yet for compressed tables; these may be implemented in a future release of Oracle.

Querying compressed data is generally faster than querying uncompressed data. For I/O-bound queries, accessing compressed data may be significantly faster, while queries that get all their data from the buffer cache might see only slight advantages, or might even suffer from the compression in the worst case of running in a CPU-bound environment.

SAP BW Test Results
Our tests were conducted on the same system used by SAP and Sun Microsystems in the BW 5 terabyte project. Detailed descriptions of this test, with the configuration information, business model, and workflow details, can be found in the project report white paper at http://www.sun.com/solutions/thirdparty/sap/collateral/index.html or http://service.sap.com (section BW/Media Center). The project was audited by the Winter Corporation, and the corresponding report can be handed out on request. Briefly, the test was done on a Sun Microsystems E15000 server with 72 CPUs and 24 T3 storage arrays. The software used was SAP BW 3.0 and Oracle9i R2, and the operating system was Solaris 9.

Compression testing was conducted on almost all steps from the original project. The only exception was the step involving the transfer of data from the Persistent Staging Area (PSA), which acted as a staging area, to the compressed InfoCubes, which represented the fact tables. In addition to the COMPRESS flag at table level, Oracle requires either a bulk insert or an APPEND hint on the individual statements, and this could not be implemented in the application layer with ease. But the result can be derived from the step of transferring data between the InfoCubes and the aggregates; these tasks end up in very similar types of SQL statements and database tasks with similar data.
Making the Grade …
SAP Solutions on Dell

Dell has made the grade as the recent recipient of the SAP Pinnacle Award for Excellence in

Customer Satisfaction and Support! Dell leverages its business model and core competency in

build-to-order delivery to help accelerate and enhance the deployment of SAP solutions. Through the

use of Dell™ PowerEdge™, PowerVault™ and Dell/EMC® solutions, Dell and SAP work together

to optimize successful implementations.

• SAP Global Technology Partner … for Successful Implementations

• SAP Competence Centers … for Sizing Assistance

• Certified PowerEdge Servers … for Reliable Platforms

• Dell Business Model … for High ROI

• SAP Pinnacle Award Winner 2002 … for Dell’s Excellence in Customer Satisfaction

Easy as

Visit www.dell.com/sap for more information.


Dell cannot be responsible for errors in typography or photography. SAP is a registered trademark of SAP AG. EMC is a registered trademark of EMC Corporation in the United States of America.
Dell, the DELL logo, PowerEdge, and PowerVault are trademarks of Dell Computer Corporation. Dell disclaims any proprietary interest in the marks and names of others. © 2002 Dell Computer Corporation.
All rights reserved. Printed in the USA. Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden. November 2002, Cho.
Loading ASCII File Data Into Persistent Staging Area (PSA) Tables
The Persistent Staging Area (PSA) is a set of transparent database tables that acts as the initial storage area, where requested data is saved unchanged from the source system according to the structure defined in the DataSource. For the process of loading 960 ASCII files with R/3 operational data into the PSA tables, an R/3 central instance was set up with 3 batch processes, plus 20 additional application servers each with 4 batch processes, resulting in 83 batch processes in total, which loaded in parallel. It took 44:50:25 hours to load 11,680,000,000 records into the uncompressed PSA tables, which implies a throughput of more than 260 million records per hour. The average time required by a single InfoPackage with 12 million records was 3:47:34 hours. During the load the average CPU consumption was approximately 70%, and the average disk usage was more than 95% on the disks holding the ASCII files and 10% on the disks storing the database files.

Clearly the load process was I/O-bound on reading the ASCII data. Using compression on the target PSA tables had an enormous impact on the storage requirements: while the 80 PSA tables originally filled up 3.5 terabytes, we measured only 1.1 terabytes with compression activated. This was the main area of interest, because 70% of all data was stored in the PSA objects, and these savings were achieved without any significant performance impact. Customer sites should experience an even greater advantage, because in productive systems additional source data of the same type is stored in ODS tables that tend to be larger than the PSA by several factors and have absolutely identical compression characteristics. Regarding load performance, no measurable effect will occur, because reading and verifying the source data are the dominating, bottlenecking tasks in this phase.

Because we see most of the space saving potential in the area of PSA and ODS tables, it was decided to start some measurements regarding migration speed from non-compressed objects to compressed ones. For specific reasons the test probably did not use the fastest method, ALTER TABLE MOVE statements. But even with normal parallel insert mechanisms we reached a throughput of roughly 1.5 million rows per second. This would migrate a 146 million row InfoCube in 90 seconds. All big tables of the test scenario (80 InfoCubes and 80 PSA tables) would have had a core database migration time of between 4 and 5 hours only! Taking into account that achieving this speed would require at least the test system (in our tests the 72 Sparc III CPUs were fully loaded, and 500 MB/sec of random read/write activity passed through the storage system), the bottom line of this quick and non-comprehensive test is that SAP BW customers who run into the limits of their storage space should consider migrating to compressed tables "On-The-Fly". Of course some work by SAP BW development has to be done, but technically it seems possible to migrate huge BW databases in a time window of less than 8 hours.

Creating Indexes on the Persistent Staging Area (PSA) Tables
In order to speed up the load into the PSA, the indexes on the PSA tables had been dropped. Since the indexes will be needed for loading the data from the PSA into the cubes, they must be recreated. To avoid disk sorts, the Oracle sort area size was increased from 10 to 70 MB, and the index creation was then scheduled with 10 parallel processes directly at the database level. Creating all indexes on the uncompressed PSA tables took approximately 15:30 hours. The index creation on compressed PSA tables was even faster, with a runtime of 10:15 hours. The overall performance of this step improved after the original measurements, and new runtimes came down to about 6 hours for uncompressed PSA tables and 4 hours for compressed PSA tables.
disks storing the database files. amount of runtime. Note that we never used com-
Therefore, database statistics on the PSA tables
Clearly the load process was I/O bound on rea- were created using 10 parallel jobs for 10 diffe- pression features for the index structures. Usually it
ding the ASCII data. Using compression on the rent PSA tables. In total this phase took approxi- should be possible to get very good buffer hit ratios
targeting PSA tables have had an enormous mately 19:30 hours to create all PSA table stati- on indexes at customer sites, and the space savings
impact on the storage requirements, while the 80 stics when tables were uncompressed. The same in this area can be ignored.
PSA tables originally filled up 3.5 terabytes; we task on the compressed PSA table structures could
measured only 1.1 terabyte with activated com- have been speed up to only 7:30 hours. Again, Generating Database Statistics of InfoCubes
pression. This was the main area of interest, after finishing the project, improvements lead to
because 70% of all data was stored in the PSA To make sure that the Oracle optimizer chooses
runtimes of about 7 hours for uncompressed and the best access path to the data when aggregates
objects and these savings have been achieved 2:45 hours on compressed PSA tables. The huge
without any significant performance impact. are created or queries are executed directly on
difference in favour of the compressed object can InfoCubes, it is necessary to have up to date sta-
Customer sites should experience an increased be explained by the I/O bounding of this task.
advantage, because in productive systems addi- tistics not only on the InfoCube tables, but also
Keep in mind that the compressed objects are on master data tables, etc. Therefore the complete
tional source data of the same type is stored in only about 1/3 in size of the original ones.
OSD tables that tend to be larger than the PSA Oracle database schema has been analyzed at this
by several factors, and have absolute identical point. The analysis of the schema and creation of
Loading Data from PSA Tables into InfoCubes all relevant database statistics took approximately
compression characteristics.
and ODS Objects 6 hours.
Regarding load performance, no measurable
effect will happen because source data reading and The next large step in the 5 TB data warehouse Using compressed tables was very effective in this
verifying are the dominating and bottlenecking was the loading of the PSA table data into the step, because analyzing more or less the whole
tasks in the phase. Because we see most of the corresponding InfoCubes and ODS objects. Since database was an I/O intensive task, and reading
space saving potential in the area of PSA and loading data from the PSA tables into the approximately 2/3 less database blocks will speed
OSD tables, it was decided to start some measure- InfoCubes is more CPU-intensive than loading up the process significantly, so a runtime of
ments regarding migration speed from non-com- the data from flat files to the PSA tables, this task approximately 4 hours was achieved.
pressed objects to compressed ones. Due to some is more compression-sensitive in terms of possible Note: Our testing environment tended to be I/O
specific reasons the test probably did not use the runtime drawbacks. The central instance was set bottlenecked at this point. So customers that run
fastest method of using ALTER TABLE MOVE up to have 3 batch processes and 25 application their BW on only 4 or 8 CPUs might observe
statements. But even with normal parallel insert servers were set up to have 4 batch processes each, no improvements depending on the system and
mechanisms we reached a runtime throughput of resulting in total of 103 batch processes. the detail level of the statistics and performance
roughly 1.5 million rows per second. In this phase, 1152 load requests needed to be degradation is also possible.
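How such a migration could be scripted is sketched below. This is only a hedged illustration, not the procedure used in the test: connection data and the table name are invented placeholders. Oracle9i Release 2 lets an existing table be rebuilt as a compressed segment with ALTER TABLE ... MOVE COMPRESS; the table's indexes are left unusable by the move and have to be rebuilt afterwards.

```python
# Hedged sketch only: rebuild an existing BW table as a compressed segment and
# rebuild its indexes afterwards (ALTER TABLE ... MOVE leaves them UNUSABLE).
# Connection data and the table name are invented placeholders.
import cx_Oracle

TABLE = "/BIC/B0000123000"   # hypothetical PSA table name

conn = cx_Oracle.connect("sapr3", "secret", "bwhost/BWP")
cur = conn.cursor()

# Oracle9i Release 2: rebuild the segment with block-level table compression.
cur.execute(f'ALTER TABLE "{TABLE}" MOVE COMPRESS')

# Every index on the moved table must be rebuilt.
cur.execute("SELECT index_name FROM user_indexes WHERE table_name = :t", t=TABLE)
for (index_name,) in cur.fetchall():
    conn.cursor().execute(f'ALTER INDEX "{index_name}" REBUILD NOLOGGING')

conn.close()
```

In a live SAP system such a reorganization would of course be planned with the standard SAP administration tools and the relevant SAP notes rather than with an ad-hoc script.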
[…] on the PSA tables have been dropped. Since the indexes will be needed for loading the data from the PSA into the cubes, they must be recreated. To avoid disk sorts, the Oracle sort area size was increased from 10 to 70 MB, and the index creation was then scheduled with 10 parallel processes directly at the database level.
Creating all indexes on the uncompressed PSA tables took approximately 15:30 hours. The index creation on the compressed PSA tables was even faster, with a runtime of 10:15 hours. The overall performance of this step improved after the original measurements; the new runtimes came down to about 6 hours for uncompressed and 4 hours for compressed PSA tables.
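As a rough illustration of this index rebuild step, the sketch below raises the session sort area and creates a single PSA index with a parallel degree of 10. The index name and column list are invented for the example; the real PSA index definitions are generated by SAP BW.

```python
# Illustration only: recreate one dropped PSA index directly on the database,
# with a 70 MB sort area and a parallel degree of 10. Index and column names
# are invented; real PSA indexes are generated and managed by SAP BW.
import cx_Oracle

conn = cx_Oracle.connect("sapr3", "secret", "bwhost/BWP")
cur = conn.cursor()

cur.execute("ALTER SESSION SET sort_area_size = 73400320")   # 70 MB

cur.execute('CREATE INDEX "/BIC/B0000123000~0" '
            'ON "/BIC/B0000123000" ("REQUEST", "DATAPAKID", "RECORD") '
            'PARALLEL 10 NOLOGGING')

# Reset the parallel attribute so normal query processing is not affected.
cur.execute('ALTER INDEX "/BIC/B0000123000~0" NOPARALLEL')

conn.close()
```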
Creating Database Statistics on the Persistent Staging Area (PSA) Tables

Database statistics on the PSA tables were therefore created using 10 parallel jobs for 10 different PSA tables. In total this phase took approximately 19:30 hours to create all PSA table statistics when the tables were uncompressed. The same task on the compressed PSA table structures could be sped up to only 7:30 hours. Again, after the project was finished, further improvements led to runtimes of about 7 hours for uncompressed and 2:45 hours for compressed PSA tables. The huge difference in favour of the compressed objects can be explained by the I/O-bound nature of this task; keep in mind that the compressed objects are only about 1/3 of the size of the original ones.
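A hedged sketch of what such a statistics run can look like on the database level is shown below. In a real SAP BW system statistics are normally collected via the standard SAP tools (for example BRCONNECT); the table names, sample size and degree here are illustrative assumptions only.

```python
# Hedged sketch: collect optimizer statistics for a list of PSA tables with
# DBMS_STATS, one call per table, each with a parallel degree. Table names,
# sample size and degree are assumptions made for this illustration.
import cx_Oracle

PSA_TABLES = ["/BIC/B0000123000", "/BIC/B0000124000"]   # hypothetical names

conn = cx_Oracle.connect("sapr3", "secret", "bwhost/BWP")
cur = conn.cursor()

for table in PSA_TABLES:
    cur.execute("""
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => USER,
            tabname          => :tab,
            estimate_percent => 10,
            degree           => 10,
            cascade          => TRUE);
        END;""", tab=table)

conn.close()
```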
Loading Data from PSA Tables into InfoCubes and ODS Objects

The next large step in the 5 TB data warehouse was loading the PSA table data into the corresponding InfoCubes and ODS objects. Since loading data from the PSA tables into the InfoCubes is more CPU-intensive than loading the data from flat files into the PSA tables, this task is more compression-sensitive in terms of possible runtime drawbacks. The central instance was set up with 3 batch processes, and 25 application servers were set up with 4 batch processes each, resulting in a total of 103 batch processes.
In this phase, 1152 load requests needed to be processed: 12 load requests for each of the 80 InfoCubes and 48 requests for each of the 4 ODS objects. In total, more than 14 billion records were written to the data targets, requiring 65:17:19 hours for this step. The average throughput was 214,677,532 records per hour, and the average […]

[…] base task we have a similar step when transferring data from the InfoCube tables to the aggregate tables later on. This phase was investigated deeply, so that its results can be transferred to that step. As only slight slowdowns on the database level can be expected, the total impact on this CPU-bound application task will be in a range below one percent, if measurable at all.

Creating Indexes on the InfoCubes

As in the case of loading the PSA tables, all indexes on the InfoCube tables were dropped prior to the load. Recreating all indexes on the 80 uncompressed InfoCubes (8 indexes per cube) required 2:53:00 hours. A maximum of 32 indexes were generated in parallel application jobs, each using 4 to 6 parallel Oracle shadow processes. Creating the 800 indexes on the compressed InfoCubes took roughly the same amount of runtime. Note that we never used compression features for the index structures; it should usually be possible to get very good buffer hit ratios on indexes at customer sites, so the space savings in this area can be ignored.

Generating Database Statistics of InfoCubes

To make sure that the Oracle optimizer chooses the best access path to the data when aggregates are created or queries are executed directly on InfoCubes, it is necessary to have up-to-date statistics not only on the InfoCube tables but also on the master data tables, etc. Therefore the complete Oracle database schema was analyzed at this point. The analysis of the schema and the creation of all relevant database statistics took approximately 6 hours.
Using compressed tables was very effective in this step: analyzing more or less the whole database is an I/O-intensive task, and reading approximately 2/3 fewer database blocks speeds the process up significantly, so a runtime of approximately 4 hours was achieved.
Note: Our test environment tended to be I/O-bottlenecked at this point. Customers running their BW on only 4 or 8 CPUs might therefore observe no improvement, depending on the system and the detail level of the statistics; performance degradation is also possible.

Creating Aggregates and their Indexes

Aggregates dramatically improve the runtimes of queries that retrieve sums from a set of records, by storing pre-summarized data. Based on the queries defined for the InfoCube, 10 aggregates
have been designed for each cube, which help to
optimize the query response times. The aggregates
were created in parallel; a maximum of 5 aggrega-
tes were created at a time. However, a parallel
degree of 12 on the Oracle database level has been
used, which means, for each aggregate, 12 parallel
query slave processes were reading data from the
InfoCube fact tables and writing to the aggregate
tables at the same time.
The creation of 800 aggregates took 14:14:45 hours when reading from the uncompressed InfoCubes. Gathering the source data from the compressed InfoCubes resulted in a slightly longer runtime of
about 3%. The performance difference at this step
was heavily dependent on the kind of aggregate
that had to be built.
If aggregates do not massively summarize the
source data, then this step tends to be more I/O
sensitive and therefore compression should be
advantageous. On the other hand the CPU over-
head of uncompressing the source data will
impact negatively if the aggregation itself uses all the CPU power it can get.
Another interesting point in this phase was the test that investigated the potential for space savings at the aggregate level. As aggregates are normal tables from the database point of view, they can easily be compressed like other non-redundant tables. Because of the different information abstraction layer of these objects, it is interesting to observe that their compression ratio potential was almost the same as that of their underlying InfoCubes.
For one typical, big aggregate (/BIC/E100966) the result was that the table shrank from 66016 blocks to 31214 blocks after compression.
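The before and after block counts quoted here can be read directly from the Oracle data dictionary. The following sketch shows one way to do that; it was not part of the original test, and the connection data is invented (the table name is the one quoted in the text).

```python
# Not part of the original test: read the allocated block count of an aggregate
# fact table from USER_SEGMENTS before and after compressing it.
# Connection data is invented; the table name is the one quoted in the article.
import cx_Oracle

conn = cx_Oracle.connect("sapr3", "secret", "bwhost/BWP")
cur = conn.cursor()

def allocated_blocks(segment):
    cur.execute("SELECT blocks FROM user_segments WHERE segment_name = :s",
                s=segment)
    row = cur.fetchone()
    return row[0] if row else None

before = allocated_blocks("/BIC/E100966")
cur.execute('ALTER TABLE "/BIC/E100966" MOVE COMPRESS')
after = allocated_blocks("/BIC/E100966")

print(f"/BIC/E100966: {before} -> {after} allocated blocks")
conn.close()
```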
Investigating the potential of compression on the aggregate level has a different background than the compression tests done on the larger InfoCube and PSA/ODS tables. Because only a small percentage of the overall storage of the warehouse is occupied by the aggregates, there will not be a significant space saving at the disk level.
What makes the compression of aggregates an interesting target for investigation is the fact that customers will do most of their reporting on these tables. And because the data is aggregated, there is a good probability of reaching good buffer hit ratios in the SGA. A compression ratio of approximately 2 for the aggregates can therefore lead to a significant improvement in the buffer hit ratio and consequently to performance improvements at the query level.
In SAP BW, aggregates are technically stored in the same way as InfoCubes. The considerations regarding indexes made above apply to aggregates as well. Therefore the aggregate indexes were dropped prior to their population and recreated after the aggregates were loaded. The time required to create the indexes on uncompressed aggregates was 0:52:12 hours; the same runtime was measured when the aggregates had been compressed beforehand.

User Queries

This test was probably the most difficult. In order to match real-world scenarios, SAP has conducted customer surveys gathering information about the type and the amount of queries that can typically be expected on such a system. However, unlike other phases of the test, these results can hardly claim general validity. As already mentioned, it would have been easy to generate tests where compression takes advantage of improved buffer hit ratios in the database buffer cache. These scenarios would not suffer from a lack of realism; however, our defined goal was slightly different. Because it is impossible to simulate a scenario that represents the majority of all customer queries, we were more interested in looking at a worst-case scenario for the compression option.
This should be an environment where no I/O saving can be achieved with compression, because all blocks are already loaded into the buffer cache. Surprisingly enough, this test showed a clear advantage for the compressed tables. Additionally, it appears that the way compression affects memory-only queries is data dependent and therefore not generally foreseeable.
Using our test environment, a query that has to touch all data of an InfoCube with a full table scan finished in 11 seconds on an uncompressed InfoCube table. The buffer cache hit ratio was 100% and the parallel query slave processes used all 72 CPUs exhaustively. Under the same conditions the query took about 8 seconds when running on the compressed InfoCube. This confirmed the results of a previously completed test which used a more realistic I/O load and where performance improved as expected with compression: there, the full table scan runtime on the uncompressed InfoCube was almost twice as high as in the compressed scenario, 31 seconds compared to 17 seconds. The query in this test was taken from the aggregate creation step, because it simulates what happens at productive customer sites when an ad-hoc query for which no aggregation table has been prepared hits the system, as typical reporting queries do.
Queries that used index-based access plans have also been tested. Doing the I/O for the uncompressed InfoCube target took 313 seconds, much longer than the less I/O-intensive 254 seconds for the same query on the compressed target. Not only can a better buffer hit ratio be assumed for the query because of the higher data density; inherently better clustering factors are also characteristic for indexes on compressed tables.
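The measurements above were produced with queries generated by SAP BW on the original test system. Purely as an illustration of the approach, a comparable timing loop could look like the sketch below; the table and column names are invented, and the hints merely mimic the parallel full scan described in the text.

```python
# Illustration of the measurement idea only: run the same aggregation query
# against an uncompressed and a compressed copy of a fact table and compare
# elapsed times. Table and column names are invented placeholders.
import time
import cx_Oracle

conn = cx_Oracle.connect("sapr3", "secret", "bwhost/BWP")

def elapsed(table):
    cur = conn.cursor()
    start = time.time()
    # Force a parallel full table scan, as in the worst-case scenario above.
    cur.execute(f'SELECT /*+ FULL(f) PARALLEL(f, 12) */ SUM("QUANTITY") '
                f'FROM "{table}" f')
    cur.fetchall()
    return time.time() - start

for table in ("/BIC/E100966_UNCOMP", "/BIC/E100966_COMP"):
    print(f"{table}: {elapsed(table):.1f} s")

conn.close()
```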
Conclusion

The cost of disk storage systems can be a large portion of building and maintaining large data warehouses like SAP BW. Oracle9i Release 2 helps to reduce this cost by compressing the data stored in the database, and it does so without the typical trade-offs in runtime performance.
The test case demonstrated that a space-saving factor of 2 to 3 can be accompanied by a slight performance advantage. The database size was reduced from 5 terabytes to less than 2 terabytes, and the overall runtime performance improved by approximately 10%. However, the performance impact cannot be generalized in the same manner as the space-saving potential.
Depending on the specific system environment, the performance results may vary. But because of the pessimistic assumptions used in the test, it is unlikely that customers will see performance degradation.
Our tests were conducted without using special SGA sizing that favors compressed data segments. With regard to the compression option we tried to simulate a worst-case scenario, that is, either all data had to be read from disk (in cases where huge tables were fully scanned) or all data (even the uncompressed data) was held in the buffer cache (for tasks that only involve small aggregated tables and/or indexed data).
In real customer scenarios you will usually be between these two extremes, so the reduction in space requirements will improve your buffer hit ratios and consequently your performance. This project therefore shows that SAP BW customers using Oracle9i Release 2 and the compression option for table segments should be able to massively reduce their storage space requirements without negatively impacting runtime performance.
This is valid for warehouse maintenance tasks as well as for warehouse query workload. Customers can expect some performance improvements, but the main focus should clearly be the space-saving aspects.
SAP Server Consolidation – combining high
flexibility with low TCO
It's a fact that the use of SAP solutions in companies is increasing at a steady pace. However, the implementation of additional mySAP application components also poses challenges for existing IT infrastructures. For various reasons, enterprises view the "add another server" approach critically and consequently focus on consolidated SAP solutions. Dr. Michael Missbach from the HP SAP Competence Center Walldorf, Germany, explains the main advantages of an SAP server consolidation for the user and outlines the concept of SAP consolidation.
? Why is SAP consolidation in the spotlight now?

Missbach: In the past, larger client-server systems had to be distributed due to CPU and memory restrictions. Today we have immense CPU power available, and with the 64-bit SAP kernel also the address space to allocate sufficient memory resources – even for a large number of users on a single computer. As a result, the number of servers needed for an SAP system can be significantly reduced, despite increasing performance requirements. In addition, we have partitioning technologies on hand to grant resources on demand to multiple SAP systems hosted together on one single computer.

? Why do companies think about SAP/server consolidation? What are the driving forces?

Missbach: Under the current economic conditions, enterprises put priority on cost reductions, primarily in maintenance and operating costs. However, there is also a need for greater flexibility in the deployment of mySAP components. Both can be achieved by SAP server consolidation, which provides resources in a flexible way – where they are needed. Furthermore, consolidation provides additional performance due to optimal load balancing. Companies that have faced the proliferation of a huge number of servers also go for consolidation in order to reduce the complexity of their infrastructure and make it "manageable" again.

? One possible retort is that IT infrastructure manufacturers have brought up the subject of SAP server consolidation simply to generate more new sales.

Missbach: That is absolutely not the case. The customers drive the subject, in particular the IT managers responsible for data centres. They are looking for solutions that allow them to keep a controllable and efficiently manageable number of servers despite the increasing number of users and applications along the lines of mySAP. Incidentally, SAP actively supports consolidation, for example with initiatives like "Multiple Components, One Database" (MCOD).

? Which cost savings can be expected with SAP server consolidation?

Missbach: That is different for each SAP installation. However, in general the consolidation of an older system with a dedicated database and several application servers into a single central system can be expected to have a payback period of about six months, simply through the savings on maintenance and operating costs; consequently the return on investment is extremely good.
If this is taken one step further, the consolidation of several SAP systems onto a single server results in further cost savings and increased flexibility, because every computer that is no longer necessary for the SAP landscape reduces the operation effort by two to three working days per month, plus the maintenance costs. In many cases, enterprises today operate environments with over 100 computers just for SAP. The benefits achieved by reducing such an environment to only four to six servers through consolidation can be seen very clearly.

? Does this relate to Unix as well as NT infrastructures?

Missbach: The principle is basically applicable to all SAP IT infrastructures. However, the 64-bit SAP kernel is the key to SAP server consolidation, and until now it has only been available for the common Unix operating systems. There are also still some differences in the partitioning capabilities of the various operating systems. Consequently, consolidation on 32-bit computers and operating systems has only been possible with a very small number of users in the past. This is now changing with the availability of 64-bit Itanium systems and the corresponding SAP 64-bit kernel for NT and Linux. The excellent performance of these new Itanium systems could already be demonstrated by running SAP SD and ATO benchmarks based on the Oracle9i database (see SAP benchmark certification 2002069*).

? You said before that greater flexibility is also achieved by SAP server consolidation. Could you explain this aspect in more detail?

Missbach: Running multiple mySAP components on a server with partitioning enables flexible allocation of resources to applications. Without consolidation, dedicated extra power has to be implemented for the peak load on every system. In a consolidated environment, these "extras" can be shared flexibly. The art of consolidation is to achieve a maximum of flexibility on the one hand while guaranteeing the necessary degree of isolation on the other hand. Here, the various partitioning methods come into play. For example, with HP-UX we have an unrivalled wide range of possible partitioning technologies; I will mention only resource, virtual and hard partitions. Through this, we are able to combine the various methods in order to achieve the optimum for the customer.

? Could you briefly outline an example of when it could be beneficial to freely allocate resources on demand?

Missbach: In a world of mergers and acquisitions, where the performance demands can't be predicted over longer periods, the benefits of flexibility are obvious. Typical examples are mySAP components with temporary performance peaks, like CRM during the Christmas business or Financials during year-end closing. With a consolidated system, all the available resources which are not necessarily needed by other components at the same time can be thrown into the battle. However, it is important that this allocation is dynamic as well as transparent for the mySAP components, i.e. that it can be done without a reboot of the SAP system. The combination of running HP-UX and Oracle9i, for example, is a very important area where CPU and memory resources can be used dynamically.

? By what means is an SAP server consolidation project carried out?

Missbach: Due to the fact that SAP server consolidation does not involve changes within the application logic, such a project is relatively unspectacular. In a joint workshop the customers define their requirements and our consolidation specialists introduce the corresponding technical solutions. This includes an evaluation of the existing environment from a sizing perspective as well as concepts to grant the necessary level of system availability and isolation. After the consolidated approach is approved, the new hardware is installed and the applications are moved to the consolidated environment.
For a couple of years this has been our daily business, with expenditure and possible escalation procedures being well under control. Longer project durations don't have to be expected.
? With a logical consolidation of SAP applications, however, things are different, aren't they?

Missbach: In contrast to a technical consolidation, a merger or fusion of SAP systems also involves the business processes. Therefore we established a cooperation with the SAP Landscape Optimization (LSO) group, which provides application expertise and tools. Such a project, however, is in the range between a Euro conversion and an R/2 to R/3 migration. According to our experience, enterprises start with a technical consolidation in order to achieve rapid ROI and to provide the technical preconditions for the next step. This is then followed by an appropriate logical consolidation project over a longer period. Using this method, we have already successfully carried out several hundred SAP technical as well as logical consolidations worldwide.

? Can many servers be reduced to just one single server through an SAP consolidation?

Missbach: In theory, yes – but in practice this makes little sense. For mission-critical systems like SAP you always have to answer the question "How can we maintain high availability?", because even for a consolidated system you should never put all your eggs in one basket. Our recommendation is therefore not to consolidate a bunch of servers into a single one, but to consolidate many into a few. With a concept we call a redundant array of consolidated servers (RACS) we can achieve maximum flexibility combined with highest availability and lowest TCO.

? What does an SAP server consolidation mean for the specialists in system operation, in the computer centre?

Missbach: From a technical point of view, not too much. It is more a change of the paradigm that an SAP system has to consist of distinct database and application servers. Since we made the consolidation of SAP systems a reality some years ago, this paradigm change has been under way. Today there are many satisfied HP customers who have implemented an SAP server consolidation with our support. The fact that we hear little from these installations is a good sign for us, because it simply means that everything is o.k.

* Certification Number 2002069: The SAP ATO (Assemble-to-Order) standard 4.6C application benchmark performed on November 25, 2002 by HP in Cupertino, California, USA, was certified on December 13, 2002 with the following data: Fully business processed Assembly Orders/hour(*): 3,090; CPU utilization central server: 94%; Operating System: HP-UX 11i; RDBMS: Oracle 9i; R/3 Release: 4.6C; Total disk space: 528 GB; Configuration: 1 Central Server: hp rx5670, 4-way SMP, Itanium II, 1 GHz, 3 MB L3 cache, 24 GB main memory. (*) Assembly Order: Request to assemble pre-manufactured parts and assemblies into finished products according to an existing sales order.

SBS and Oracle Real Application Clusters

Information technology is the lifeblood of virtually all business operations. Information systems are used to interact with customers, to sell products, to track inventory, and to make strategic decisions that can significantly affect the future course of operations.
IT departments must be able to maintain their systems without impacting the flexibility of their company's business operations. In today's rapidly changing market, this imperative means that systems must be able to sustain rapid growth in demand as well as being consistently available.
As a company expands its operations, whether through customer growth, demand growth or by entering new areas of business, its systems must be able to grow easily with it. As operations become more global and customer-interactive, the systems that support these operations must be available all the time. And no business can afford a crushing maintenance burden to support these new demands.
Oracle9i Real Application Clusters provides both scalability and availability as a single, easy-to-manage database product. A recent study by the Hurwitz Group on total cost of ownership found that almost 60% of companies identified scalability or availability would result in an estimated 185 savings in the total cost of ownership over three years.
Siemens Business Services has now gained a very high level of market acceptance in IT and is a major service partner in the SAP arena. A primary focus of its activity consists of consulting and project implementation on the topics of availability and scalability.
The issue of scalability plays an especially important role in the SAP arena. Whereas the number of application servers can be increased virtually as desired, the database server has always been the only resource that does not scale transparently. To date, server power has most often been used as the estimate of the transaction performance of a database. Particularly in the SAP R/3 environment, the database server constituted the actual weak point as regards scalability in large applications. RAC is one option for increasing the overall performance of the database and the servers involved.
And for the first time, smaller SAP customers now also have the opportunity to create an affordable, highly available database infrastructure.

(Figure: 3-Tier Architecture for SAP R/3 Applications)

Siemens Business Services offers service packages in the RAC environment that are of particular interest to SAP customers:

Database Architecture Consulting (SAP)
This package is of relevance for nearly every SAP customer of a relevant size. Various database availability concepts are evaluated for customers on the basis of their system landscapes. Interdependencies with other software clusters or I/O technologies are examined and assessed. Integration with system management is also analyzed. The existing database backup and recovery concept is adapted.

• RAC Workshop: Management workshop on RAC and its introduction.
• RAC Technology Workshop: Technical workshop for administrators.
• RAC Introduction: Turnkey introduction of RAC.
• Database Cluster Tuning: Tuning of database clusters and the applications running on them.
• OPS / RAC Turnkey Migration: Integration of RAC for OPS customers, from concept creation to acceptance.
• RAC Operational Support: Maintenance and operation of RAC clusters.

For more information please contact
E-Mail: Ulrich.wilmsmann@siemens.com
E-Mail: eduard.port@siemens.com
mySAP Solutions Running Oracle9i RAC and Linux
Mission-Critical Applications with Scalability and High Availability: A Technical Demonstration
The Challenge: Enterprise solutions such as mySAP® applications from SAP demand high availability and easy scalability, regardless of the size of the customer's organization. SAP users require uninterrupted database access even when hardware or software failures occur. Businesses and organizations grow, and their IT requirements can change rapidly. Adding capacity or changing workloads on SAP systems in a way that requires re-configuring or replacing the database server is disruptive, increases system downtime and raises cost.
The Solution: Working closely with SAP, Oracle, Intel, and Red Hat, Dell has leveraged Oracle9i™ RAC technology to demonstrate the technical viability of running mySAP on Linux in a robust, highly available, and scalable environment, allowing organizations to optimize their return on IT investment using flexible, industry-standards-based solutions.
Configuration Example:
• 2 x Dell PowerEdge 2650 servers, each with two Intel® Xeon™ processors,
as Linux based nodes of the Oracle9i RAC database (Node #1 and #2)
• 1 x Dell PowerEdge 2650, with two Intel® Xeon™ processors, as SAP central
instance/application server
• 1 x Dell|EMC fibre channel storage unit providing the resilient repository
for the SAP data
• Gigabit Ethernet as cluster-interconnect for Oracle9i RAC
• Oracle9i RAC Release 2 version 9.2.0.1
• Operating System: Red Hat® Linux Advanced Server 2.1
• SAP 4.6C with 4.6D kernel
How it works:
When a user logs onto the SAP system using the SAPGui, the SAP central instance/appli-
cation server connects to the Oracle9i database via one of the Oracle9i RAC nodes
(assume it is Node #1) and can work normally in the SAP system. Simulating a database
node failure, Node #1 of the Oracle9i RAC cluster is brought down suddenly. The existing
database connection of the application server is redirected to the surviving node (Node #2)
by means of the Oracle TAF (Transparent Application Failover) feature, and continues to work.
For the user, the system continues to run and they can keep working normally. The failure of Node #1
does not result in having to restart the Oracle9i database, because the database connection to
the failed node is routed to a surviving node.
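The demo relies on a TAF-enabled Oracle Net connect descriptor on the client side. The following sketch is illustrative only: host names, service name and credentials are invented, and in the demo it is the SAP work processes, not a script, that hold the database connections.

```python
# Illustrative sketch of a TAF-enabled client connection; host names, service
# name and credentials are invented. In the demo the SAP work processes hold
# the database connections, not a Python script.
import cx_Oracle

TAF_DSN = """(DESCRIPTION=
  (ADDRESS_LIST=(FAILOVER=ON)(LOAD_BALANCE=OFF)
    (ADDRESS=(PROTOCOL=TCP)(HOST=racnode1)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=racnode2)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=C11)
    (FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC)(RETRIES=20)(DELAY=5))))"""

conn = cx_Oracle.connect("sapr3", "secret", TAF_DSN)
cur = conn.cursor()

# If the instance serving this session goes down, TAF re-routes the session to
# the surviving node; with TYPE=SELECT an open fetch is resumed transparently.
cur.execute("SELECT instance_name, host_name FROM v$instance")
print("connected to:", cur.fetchone())

conn.close()
```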
©2002 Dell Computer Corporation. Dell, the Dell logo, and PowerEdge, are registered trademarks or trademarks of Dell Computer Corporation. Oracle and the Oracle logo are registered trademarks and Oracle9i is a trademark of Oracle
Corporation. Intel and Pentium are registered trademarks; Xeon is a trademark of Intel Corporation. Red Hat and the Shadowman logo are registered trademarks of Red Hat Inc. Linux is a registered trademark of Linus Torvalds. SAP and mySAP
are registered trademarks or trademarks of SAP AG. Dell disclaims proprietary interest in the marks and names of others.
Oracle for SAP – Release Matrix

Current Database–SAP R/3 Release Matrix

SAP R/3 Version 3.1I, 4.0B, 4.5B, 4.6B:
8.1.7 32-bit: Intel NT, Windows2000/XP, Intel Linux, IBM AIX, HP-UX, Reliant UNIX, Solaris
8.1.7 64-bit: HP Tru64, IBM AIX, HP-UX, Reliant UNIX, Solaris
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

SAP R/3 Version 4.6C/D:
8.1.7 32-bit: Intel NT, Windows2000/XP, Intel Linux, IBM AIX, HP-UX, Reliant UNIX, Solaris
8.1.7 64-bit: HP Tru64, IBM AIX, HP-UX, Reliant UNIX, Solaris
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

SAP R/3 Enterprise:
8.1.7 32-bit: Intel NT, Windows2000/XP, Intel Linux
8.1.7 64-bit: HP Tru64, IBM AIX, Solaris (SUN and Fujitsu-Siemens)
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

SAP Business Information Warehouse 2.0B/2.1C:
8.1.7 32-bit: Intel NT, Windows2000/XP, Intel Linux, IBM AIX, HP-UX, Solaris (SUN and Fujitsu-Siemens)
8.1.7 64-bit: HP Tru64, IBM AIX, HP-UX, Solaris (SUN and Fujitsu-Siemens)
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

SAP Business Information Warehouse 3.0B:
8.1.7 32-bit: Intel NT, Windows2000/XP, Intel Linux
8.1.7 64-bit: HP Tru64, IBM AIX, HP-UX, Solaris (SUN and Fujitsu-Siemens)
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

SAP Business Information Warehouse 3.1:
9.2 32-bit: Intel NT, Windows2000/XP, Intel Linux
9.2 64-bit: HP Tru64, HP-UX PA-RISC, HP-UX IA-64, IBM AIX 5L, Solaris (SUN and Fujitsu-Siemens), Windows2003 (planned for May)

Oracle9i Real Application Clusters:
SAP R/3 4.6C/D: April 2003 (controlled availability planned): HP Tru64

Oracle Desupport Dates:
8.0.6: Sept. 30, 2001
8.1.7: Dec. 31, 2003
9.2: Dec. 31, 2005 (planned)

End of Maintenance for SAP R/3 Releases (standard / with additional support fee):
3.1I: 12/2003 / 12/2004
4.0B: 12/2003 / 12/2004
4.5B: 12/2003 / 12/2004
4.6C: 3/2006 (planned)

Imprint
Oracle for SAP Global Technology Center
Altrottstr. 31
69190 Walldorf, Germany
Tel. ++49 (0) 62 27-83 98 - 0
Fax ++49 (0) 62 27-83 98 - 199
E-Mail: saponoracle_de@oracle.com
Albrecht Haug
albrecht.haug@oracle.com
Internet: http://www.oracle.com/newsletters/sap
http://www.sap.com/partners/directories/technology.asp
Reproduction allowed only with the publisher's express permission

Oracle, Oracle8i, Oracle9i, Oracle Express, Discoverer, Designer, Developer, and the Oracle Logo are trademarks or registered trademarks of Oracle Corporation.
SAP, R/2, R/3, mySAP, mySAP.com, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies.
This publication is provided "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. This publication could include technical inaccuracies or typographical errors. Changes are periodically added to the information herein; these changes will be incorporated in new editions of the publication. Oracle Corporation may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time. Oracle Corporation has intellectual property rights relating to technology described in this document. In particular, and without limitation, these intellectual property rights may include one or more patents or pending patent applications in the U.S. or other countries. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of Oracle. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.
Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of Microsoft Corporation.
IBM®, DB2®, DB2 Universal Database, OS/2®, Parallel Sysplex®, MVS/ESA, AIX®, S/390®, AS/400®, OS/390®, OS/400®, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere®, Netfinity®, Tivoli®, Informix and Informix® Dynamic Server™ are trademarks of IBM Corporation in the USA and/or other countries.
ORACLE® is a registered trademark of ORACLE Corporation.
UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group.
Citrix®, the Citrix logo, ICA®, Program Neighborhood®, MetaFrame®, WinFrame®, VideoFrame®, MultiWin® and other Citrix product names referenced herein are trademarks of Citrix Systems, Inc.
HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.
JAVA® is a registered trademark of Sun Microsystems, Inc. JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used
under license for technology invented and implemented by Netscape
MarketSet and Enterprise Buyer are jointly owned trademarks of SAP AG and Commerce One.
All trademarks and registered trademarks are the sole property of their respective owners.
© Copyright 2003 Oracle Corporation. All rights reserved.