
PeopleSoft Red Paper Series

PeopleSoft 8 Performance on Oracle 9i Database


By: Jayagopal Theranikal, Sumathy Muthuswamy December 2004

Including:
- PeopleSoft Batch Performance Tips
- Database Tuning Tips
- SQL Query Tuning Tips
- Use of Database Features
- Capturing Traces




Copyright 2004 PeopleSoft, Inc. All rights reserved. Printed on recycled paper. Printed in the United States of America.

Restricted Rights

The information contained in this document is proprietary and confidential to PeopleSoft, Inc. Comments on this document can be submitted to redpaper@peoplesoft.com. We encourage you to provide feedback on this Red Paper and we will update it based on your feedback. When you send your feedback to PeopleSoft, you grant PeopleSoft a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, for any purpose without the express written permission of PeopleSoft, Inc.

This document is subject to change without notice, and PeopleSoft does not warrant that the material contained in this document is error-free. If you find any problems with this document, please report them to PeopleSoft in writing.

This material has not been submitted to any formal PeopleSoft test and is published as is. It has not been the subject of rigorous review. PeopleSoft assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by PeopleSoft for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Information in this book was developed in conjunction with use of the product specified, and it is limited in application to those specific hardware and software products and levels.

PeopleSoft may have patents or pending patent applications covering the subject matter in this document. The furnishing of this document does not give you any license to these patents.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

PeopleSoft, PeopleTools, PS/nVision, PeopleCode, PeopleBooks, PeopleTalk, and Vantive are registered trademarks, and Pure Internet Architecture, Intelligent Context Manager, and The Real-Time Enterprise are trademarks of PeopleSoft, Inc. All other company and product names may be trademarks of their respective owners. The information contained herein is subject to change without notice.

Table of Contents

Chapter 1 - Introduction
    Structure of this Red Paper
    Related Materials

Chapter 2 - PeopleSoft Batch Performance Tips
    Table and Index Statistics
        Gather Statistics
        Statistics at Runtime for Temporary Tables
        Histograms
    Dedicated Temporary Tables (DTT)
        What are Dedicated Temporary Tables (DTT)?
        Performance Tips for DTT
        AE Performance with DTT
        Create them as Oracle Global Temporary Tables (GTT) - Not advisable
    Tablespace Selection
        Dictionary Managed Tablespaces (DMTs)
        Locally Managed Tablespaces (LMTs)
        Temporary Tablespaces
        Automatic Segment Space Management
    Index Validation
        Index Maintenance Tips
        Function-based Indexes
    Table/Index Partitioning
        What Is Partitioning?
        Partitioning Methods
        Partitioned Indexes
        Advantages of Partitioning
        Guidelines for Partitioning
    UNDO Management
        Automatic Undo Management
        Manual Undo Management
    Parses vs. Executes
        Use of Bind Variables
    Batch Server Selection
        Scenario 1: Process Scheduler and Database Server on Different Boxes
        Scenario 2: Process Scheduler and Database Server on One Box
        What is the recommended scenario?

Chapter 3 - Capturing Traces
    Application Engine Trace
    Online Trace
    Oracle Trace
        Trace at Instance Level
        Trace at Session Level
        Trace for a Different Session
    TKPROF
    Statspack
        Installing and Using Statspack

Chapter 4 - Database Tuning and INIT.ORA Parameters
    Database Tuning Tips
        Block Size
        Shared Pool Area
        Data Dictionary Hit Ratio
        Buffer Busy Waits
        Log Buffer
        Tablespace I/O
        Full Table Scans
        Rebuilding Indexes
        Sorting
    Important Parameters for Oracle 9i

Appendix A - Special Notices
Appendix B - Validation and Feedback
    Customer Validation
    Field Validation
Appendix C - References
Appendix D - Revision History
    Authors
    Reviewers
    Revision History


Chapter 1 - Introduction
This Red Paper is a practical guide for technical users, database administrators, and programmers who implement, maintain, or develop applications for a PeopleSoft system. In this Red Paper, we discuss guidelines on how to improve the performance of PeopleSoft 8 batch and online processes in the Oracle9i environment. Most of the information in this document originated within the PeopleSoft Benchmarks and Global Support Center and is therefore based on "real-life" problems encountered in the field. The issues discussed in this document are those that have proven to be the most common or troublesome.

STRUCTURE OF THIS RED PAPER


This Red Paper provides guidance on getting the best performance from PeopleSoft batch processes in the Oracle database environment. Please note that PeopleSoft updates this document based on the most current feedback that we receive from the field. Therefore, the structure, headings, content, and length of this document might vary with each posted version. To check for updates after you have downloaded this document, compare its version number with the version number of the document posted on Customer Connection.

RELATED MATERIALS
This paper is not a general introduction to environment tuning, and we assume that our readers are experienced IT professionals with a good understanding of PeopleSoft's Internet architecture and the Oracle database. To take full advantage of the information covered in this document, we recommend that you have a basic understanding of system administration, basic Internet architecture, relational database concepts/SQL, and how to use PeopleSoft applications. This document is not intended to replace the documentation delivered with the PeopleTools 8 or 8.4 PeopleBooks. We recommend that you read the PeopleSoft application-related information in the PeopleBooks before you read this document to ensure that you have a good understanding of PeopleSoft batch process technology. Note: Most of the information in this document eventually gets incorporated into subsequent versions of the PeopleBooks. You will find many fundamental concepts related to performance tuning in the Oracle Tuning chapter of the PeopleSoft Installation Guide. We also recommend that you read the Oracle9i database administration guide.


Chapter 2 - PeopleSoft Batch Performance Tips

TABLE AND INDEX STATISTICS


The CBO (Cost-Based Optimizer) is the PeopleSoft-recommended optimizer for Oracle 9i. The performance of a query with Oracle's CBO depends on appropriate table and index statistics. Keeping the statistics up to date is crucial for optimum performance. You should have a set of scripts to update the statistics and run them weekly, monthly, or quarterly, depending on data growth.

Gather Statistics
The DBMS_STATS package provides the ability to generate statistics in parallel by specifying the degree of parallelism. Generating statistics in parallel significantly reduces the time required to refresh object statistics. Create SQL scripts to gather table-level or schema-level statistics and run them periodically.

Sample DBMS_STATS Command:


SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_CUSTOMER', PARTNAME => NULL, ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME => 'SYSADM', ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

SQL> EXECUTE DBMS_STATS.GATHER_DATABASE_STATS (ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

Note: Specifying ESTIMATE_PERCENT => 100 performs a full COMPUTE; this is also the default behavior.

Note: The use of DBMS_STATS is preferred over the ANALYZE command because DBMS_STATS gathers table statistics faster, and the ANALYZE command will not be supported in future releases of Oracle.

When the CASCADE parameter is set to TRUE, the associated indexes are also analyzed. The default setting for the CASCADE parameter is FALSE.

Note: Specifying DEGREE only allows table statistics (partitioned or non-partitioned) to be gathered in parallel. Index statistics cannot use this flag and are not gathered in parallel.


Statistics at Runtime for Temporary Tables


PeopleSoft batch processes use shared temporary tables or dedicated temporary tables. These temporary tables have few or no rows at the beginning of the process and few or no rows at the end. They are populated during the process and deleted or truncated at the end or at the beginning of the process. Keeping the statistics current for these tables is challenging. With PeopleSoft 8, if the process is written in AE (Application Engine), the %UpdateStats meta-SQL can be used in the program after the rows are populated. This ensures that the statistics are updated before the selection from that table takes place.

Note: A commit is required prior to executing the %UpdateStats statement. Because Oracle issues an implicit commit when performing DDL (analyzing is considered DDL), AE ignores the %UpdateStats command if there are any uncommitted changes, since allowing the implicit commit could affect the restart capability of the program.

Example
The command in a SQL step of an AE program is:

%UpdateStats(INTFC_BI_HTMP)

At runtime, this meta-SQL issues the "ANALYZE TABLE PS_INTFC_BI_HTMP ESTIMATE STATISTICS" command to the database.

Note: PeopleSoft stores the default syntax for the ANALYZE command in the table PSDDLMODEL. Use the supplied script (DDLORA.DMS) to change the default setting or to add a required SAMPLE ROWS/PERCENT for the ESTIMATE clause.

Ensure that temporary table statistics are handled as shown above. If you find a temporary table whose statistics were not updated at run time, update the statistics manually.

Turn off %UpdateStats


Updating statistics at run time incurs some overhead, and it is not necessary to gather statistics for every run. If the volumes are similar from run to run, statistics can be maintained for the temporary tables instead of analyzing the tables on each run. To turn off the %UpdateStats command, follow these steps:

1. Run the AE program for the desired volume.
2. Turn off the %UpdateStats command for the next run, so that the next run does not capture the statistics again. Note: If required, these statistics can be exported for future use with the DBMS_STATS.EXPORT_TABLE_STATS procedure.
3. Remove the temporary tables from the list of tables that are analyzed weekly or monthly, and change the script accordingly.

Note: If schema-level statistics are gathered using DBMS_STATS.GATHER_SCHEMA_STATS, the previously captured statistics will be overwritten. In such cases, you may wish to turn %UpdateStats back on, or import the statistics for those tables from previously saved statistics using the DBMS_STATS.IMPORT_TABLE_STATS procedure.
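For illustration, here is a minimal sketch of saving and restoring temporary table statistics with DBMS_STATS; the statistics table name PS_STATS_BACKUP and the use of the PS_INTFC_BI_HTMP table are assumptions chosen only as examples.

-- Create a statistics table to hold the exported statistics (one-time setup).
SQL> EXECUTE DBMS_STATS.CREATE_STAT_TABLE (OWNNAME => 'SYSADM', STATTAB => 'PS_STATS_BACKUP');

-- Export the current statistics for the temporary table.
SQL> EXECUTE DBMS_STATS.EXPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_STATS_BACKUP', CASCADE => TRUE);

-- Later, after a schema-level gather has overwritten them, restore the saved statistics.
SQL> EXECUTE DBMS_STATS.IMPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_STATS_BACKUP', CASCADE => TRUE);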

Update statistics can be turned off in two ways:


Program level: Identify the steps that issue %UpdateStats and inactivate them. These steps can be identified from the AE trace. This is a program-specific setting.

Installation level: Once the batch process runs are stabilized and the temp table statistics are captured for all the batch processes, the installation-level setting can be applied to turn off %UpdateStats. The following parameter should be set in the Process Scheduler configuration to achieve this.

psprcs.cfg:
;-------------------------------------------------------------------------
; DbFlags Bitfield
;
; Bit   Flag
; ---   ----
;   1 - Ignore metaSQL to update database statistics (shared with COBOL)
DbFlags=1

Histograms
What are Histograms?
Cost-based optimization uses data value histograms to get accurate estimates of the distribution of column data. A histogram partitions the values in the column into bands, so that all column values in a band fall within the same range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans with non-uniform data distributions. Oracle uses height-balanced histograms (as opposed to width-balanced). Width-balanced histograms divide the data into a fixed number of equal-width ranges and then count the number of values falling into each range. Height-balanced histograms place approximately the same number of values into each range so that the endpoints of the range are determined by how many values are in that range.

Use of Histograms for PeopleSoft Applications


Histograms can affect performance and should be used only when they substantially improve query plans. In general, you should create histograms on columns that are frequently used in the WHERE clauses of queries and have a highly skewed data distribution. For many applications, it is appropriate to create histograms for all indexed columns because indexed columns are typically used in WHERE clauses.

Histograms are persistent objects, so there is a maintenance and space cost for using them. You should compute histograms only for columns that you know have a highly skewed data distribution. For uniformly distributed data, cost-based optimization can make fairly accurate guesses about the cost of executing a particular statement without the use of histograms.

Histograms, like all other optimizer statistics, are static. They are useful only when they reflect the current data distribution of a given column. (The data in the column can change as long as the distribution remains constant.) If the data distribution of a column changes frequently, you must re-compute its histogram frequently.

Histograms are not useful for columns with the following characteristics:
- The column data is uniformly distributed.
- The column is not used in the WHERE clauses of queries.
- The column is unique and is used only with equality predicates.

Columns like PROCESS_INSTANCE and ORD_STATUS benefit from histograms.

Sample: a query that used histogram statistics to boost performance

Problem Statement:
We observed that the trace files showed full table scans for most of the queries involving the tables PS_BI_HDR, PS_BI_LINE, and PS_BI_LINE_DST. Full table scans on big tables are almost always relatively costly. The following is a sample SQL statement that we found to be inefficient due to a full table scan on PS_BI_LINE, a large-volume key table.
********************************************************************************

UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
      (SELECT DISTINCT INVOICE FROM PS_BI_CURRCONV_TMP
        WHERE PROCESS_INSTANCE = 3698
          AND INVOICE = PS_BI_LINE.INVOICE
          AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
          AND PROCESS_FLG = 'S')
  AND BUSINESS_UNIT = 'FCUSA'
  AND PROCESS_INSTANCE = 3698

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.01       0.01          0          0          0          0
Execute      1    303.75     667.66     739444    1630166     340095     300000
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total        2    303.76     667.67     739444    1630166     340095     300000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows     Execution Plan
-------  --------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      0    UPDATE OF 'PS_BI_LINE'
 300000    FILTER
6000000      TABLE ACCESS GOAL: ANALYZED (FULL) OF 'PS_BI_LINE'
 300000      TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'PS_BI_CURRCONV_TMP'
 300000        INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)

Recommendation:
This particular SQL statement had to process 100,000 invoices. There were 600,000 rows that qualified for an update in the PS_BI_LINE table. Using an index to access the table would definitely help the performance of the SQL statement. The existing index PSDBI_LINE is a good candidate; it has the following columns: PROCESS_INSTANCE, BUSINESS_UNIT, INVOICE. Because the index has PROCESS_INSTANCE as its leading column, it is safe to assume that the index was created for batch performance. Under Oracle's Rule-Based Optimization, the index would be favored to access the table. Unfortunately, that is not readily the case with Cost-Based Optimization: the CBO would favor a full table scan, which in this case is not optimal. A full table scan will still be chosen by the optimizer even if the usual ANALYZE command is run against the index. This is because the optimizer assumes that the distinct values in the PROCESS_INSTANCE column have equal statistical weights. For example, if no BICURCNV process is executing, the value of PROCESS_INSTANCE in each row of the PS_BI_LINE table is zero. If a BICURCNV process is run, there will be two distinct values in the PROCESS_INSTANCE column: zero for the majority of the rows in the table, and an assigned process instance number for the rows that will be processed by BICURCNV. If the usual ANALYZE command is then run, the optimizer will assume that fifty percent of the rows in the table contain zero and the other fifty percent contain the assigned process instance number. Unfortunately, this is an inaccurate assumption, and because of it the CBO will favor a full table scan instead of an index scan on PSDBI_LINE. To correct this discrepancy, we added the FOR COLUMNS option to the ANALYZE command. Consequently, we built the data distribution information, or histogram, for the PROCESS_INSTANCE column. This led the CBO to make an informed decision to use the PSDBI_LINE index. To take advantage of histograms, create them on the PROCESS_INSTANCE column of all high-volume tables. The following execution plan shows the improved access path and timings.
********************************************************************************

UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
      (SELECT INVOICE FROM PS_BI_CURRCONV_TMP
        WHERE PROCESS_INSTANCE = 3694
          AND INVOICE = PS_BI_LINE.INVOICE
          AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
          AND PROCESS_FLG = 'S')
  AND BUSINESS_UNIT = 'FCUSA'
  AND PROCESS_INSTANCE = 3694

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.02       0.02          0          0          0          0
Execute      1    121.28     238.28      42701     203395     340093     300000
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total        2    121.30     238.30      42701     203395     340093     300000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows     Execution Plan
-------  --------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      0    UPDATE OF 'PS_BI_LINE'
 300001    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PSDBI_LINE' (NON-UNIQUE)
 100000      INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)

Please note that the access path shown above was obtained by incorporating the literal value of PROCESS_INSTANCE. If the Re-Use flag is checked, the value of %BIND(PROCESS_INSTANCE) is passed as a bind variable. Using a bind variable for the PROCESS_INSTANCE column will not produce the execution plan that favors the PSDBI_LINE index. To make the AE program pass a resolved literal value for the PROCESS_INSTANCE column even when the Re-Use flag is checked, the WHERE clause should be written as:

WHERE PROCESS_INSTANCE = %ProcessInstance
or
WHERE PROCESS_INSTANCE = %BIND(PROCESS_INSTANCE, STATIC)

The additional parameter, STATIC, causes the literal value of PROCESS_INSTANCE to be resolved before the query is sent to the database; %ProcessInstance achieves the same result. For additional information on these parameters, please refer to the PeopleTools documentation on %Bind and %ProcessInstance.


Result:
By creating the histogram on PROCESS_INSTANCE for the PS_BI_LINE table, the SQL statement produced better performance.

Without histogram: 667 seconds
With histogram:    238 seconds
Gain:              64%

Creating Histograms
Create histograms on columns that are frequently used in WHERE clauses of queries and that have highly skewed data distributions. To do this, use the GATHER_TABLE_STATS procedure of the DBMS_STATS package. For example, to create a 10-bucket histogram on the SAL column of the EMP table, issue this statement:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('scott', 'emp', METHOD_OPT => 'FOR COLUMNS SIZE 10 sal');

The SIZE keyword declares the maximum number of buckets for the histogram. You would create a histogram on the SAL column if there were an unusually high number of employees with the same salary and few employees with other salaries. You can also collect histograms for a single partition of a table.

Column statistics appear in the data dictionary views USER_TAB_COLUMNS, ALL_TAB_COLUMNS, and DBA_TAB_COLUMNS. Histograms appear in the data dictionary views USER_HISTOGRAMS, ALL_HISTOGRAMS, and DBA_HISTOGRAMS.

Choosing the Number of Buckets for a Histogram


The maximum number of buckets, also referred to as 'the sampling rate' for a histogram, is 255, and the default number is 75. The default value provides an appropriate level of detail for most data distributions. However, because both the number of buckets in the histogram and the data distribution affect a histogram's usefulness, you may need to experiment with different numbers of buckets to obtain optimal results. If the number of frequently occurring distinct values in a column is relatively small, set the number of buckets to be greater than the number of frequently occurring distinct values.

Viewing Histograms
You can find information about existing histograms in the database using these data dictionary views:
- USER_HISTOGRAMS
- ALL_HISTOGRAMS
- DBA_HISTOGRAMS

Find the number of buckets in each column's histogram in:
- USER_TAB_COLUMNS
- ALL_TAB_COLUMNS
- DBA_TAB_COLUMNS
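As an illustration only, the following queries show where to look; they assume that the querying user owns PS_BI_LINE and that a histogram was created on its PROCESS_INSTANCE column.

SELECT COLUMN_NAME, NUM_DISTINCT, NUM_BUCKETS
  FROM USER_TAB_COLUMNS
 WHERE TABLE_NAME = 'PS_BI_LINE'
   AND COLUMN_NAME = 'PROCESS_INSTANCE';

SELECT ENDPOINT_NUMBER, ENDPOINT_VALUE
  FROM USER_HISTOGRAMS
 WHERE TABLE_NAME = 'PS_BI_LINE'
   AND COLUMN_NAME = 'PROCESS_INSTANCE'
 ORDER BY ENDPOINT_NUMBER;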

Operational Guidelines for Maintaining Histograms in Oracle


Create the histograms with the following command:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SCM89', TABNAME => 'PS_BI_HDR', METHOD_OPT => 'FOR COLUMNS SIZE 10 PROCESS_INSTANCE');


To maintain the histogram, create histograms immediately after analyzing the table. Caution: When the ANALYZE or DBMS_STATS command is run against the table, the histogram information is lost. Therefore, the ANALYZE <table> (or plain DBMS_STATS gather) must be immediately followed by the ANALYZE ... FOR COLUMNS PROCESS_INSTANCE command, or by the equivalent DBMS_STATS call with the METHOD_OPT parameter.

You can use the following FAQ as your reference to maintain histograms:

FAQ on Histograms
1. What are the steps necessary to create the histogram for the PROCESS_INSTANCE column of the PS_BI_LINE table?

Run the DBMS_STATS command as shown below. It will gather the table statistics as well as create the histogram for the column PROCESS_INSTANCE.

EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_BI_LINE', METHOD_OPT => 'FOR COLUMNS SIZE 10 PROCESS_INSTANCE');

Note: To change the number of histogram buckets, change the value of SIZE.

2. How should I create histograms if the table statistics already exist?

Run the DBMS_STATS command as follows:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_BI_LINE', METHOD_OPT => 'FOR COLUMNS SIZE 10 PROCESS_INSTANCE');

3. Can histograms exist without table statistics?

Yes, but they will not be effective without statistics on the underlying table.

4. How do I delete histograms and keep the table statistics in place?

Run the DBMS_STATS command as follows:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_BI_LINE', ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

5. How do I delete the statistics on an entire table, including histograms?

Unless you have compelling reasons to delete the statistics, do not do this. If you must, run the DBMS_STATS command as shown below.

EXECUTE DBMS_STATS.DELETE_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_BI_LINE');

6. What happens if table statistics are gathered after creating histograms?

Running DBMS_STATS (without the METHOD_OPT parameter) on the table after creating histograms erases all the previously created histograms and gathers only the table statistics.


7. How often should I gather the histogram?

To maintain histogram information on a specific column like PROCESS_INSTANCE, the DBMS_STATS command with the METHOD_OPT parameter must be run as often as DBMS_STATS itself is run. See FAQ #1 for details.

8. What is the overhead of gathering histograms?

The overhead incurred when gathering a histogram is the same as the overhead of running the typical DBMS_STATS command for a table. As a rule of thumb, any DBMS_STATS command should be run during the maintenance window.

9. What is a good source to learn more about Oracle histograms?

For more information on histograms, please refer to the Oracle Tuning Manual.

DEDICATED TEMPORARY TABLES(DTT)


What are Dedicated Temporary Tables(DTT)?
Batch processes written in AE use PeopleSoft-designated temporary tables, also called dedicated temporary tables (DTT), for better processing performance. The use of DTTs minimizes potential locking issues and improves processing time. These tables are regular Oracle tables but are flagged as temporary in the PeopleSoft dictionary tables. When implemented on Oracle databases, PeopleSoft-designated temporary tables are built as ordinary Oracle tables. The required temporary tables are linked to the AE program, and the required number of instances is also specified for each AE program. Figure 1 shows the property window for the AE program Bill Finalization (BIIF0001). The instance count specified there is the limit on the number of a temporary table's instances that can be used when multiple instances of the program are run. If more concurrent runs are started than the specified count (10 in this example), the additional processes will either be abandoned or use the base temporary tables, depending on the Runtime radio button selection in the previous window.

Figure 1: Property Window for the AE Program


Performance Tips for DTT


Proper sizing of these temporary tables helps improve processing time. Consider the following tips:

1. Create these temporary tables in a separate tablespace and spread the data files over multiple disks to minimize I/O. Hardware disk striping may be another way to spread the I/O.
2. Create them in a locally managed tablespace with a fixed extent size (e.g., 1M or 2M).
3. In some cases, the truncate command issued from an AE program is converted into a DELETE statement. This happens when there is no commit before the truncate step. Identify such tables and manually truncate them to release the buffer blocks and maximize performance.
4. Create the temporary tables in a tablespace that has a different Oracle block size than the rest of the tablespaces (see the sketch below). By doing this, the temporary tables are placed in a different buffer pool, which improves truncate time. Having a separate buffer pool for temporary tables also reduces RO enqueue contention when multiple AE jobs run in parallel and truncate temp tables.
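The following is a minimal sketch of tip 4, assuming the default database block size is 8K; the tablespace name PSTMP16K, the file name, and the sizes are illustrative only, and a 16K buffer cache (DB_16K_CACHE_SIZE) must already be configured for the statement to succeed.

-- init.ora / spfile (assumed values):
--   DB_CACHE_SIZE     = 512M   -- default (8K) buffer cache
--   DB_16K_CACHE_SIZE = 64M    -- separate cache for 16K-block tablespaces

CREATE TABLESPACE PSTMP16K
  DATAFILE '/u03/oradata/pstmp16k_01.dbf' SIZE 500M
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;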

AE Performance with DTT


How do temporary tables work in AE?
Based on the number of temporary tables associated with an AE program and the number of instances set up for the program, the appropriate temporary table instance is used at run time.

Test case explaining temporary table behavior

Here is a sample scenario that explains temporary table usage.

Temporary table settings (Tools Properties)
Set the Temp Table Instances (Online) value as required. The Temp Table Instances (Total) value will be the same as the online number unless you are using EPM (see Figure 1).

Figure 1: Temporary Table Settings

AE Properties
Set the Instance Count to the required value (see Figure 2). This should be equivalent to the number of concurrent streams you are planning to run. Choosing Continue for the runtime option will use the base temporary table if no temporary table instances are available at the time of the run.

Figure 2: AE Properties

Program Properties
If the Batch Only option is selected (see Figure 3), the program will not be called from online. You don't need to change this setting unless you are advised to do so.

Figure 3: Program Properties

Number of temporary table instances

Number of temporary tables (Online) = 3
Number of temporary tables (AE Program) = 3

Scenario 1: Batch Only option is not selected
Total number of temporary table instances created for each temporary table associated with the AE program = base temporary table + number of temporary tables (Online) + number of temporary tables (AE Program) = 1 + 3 + 3 = 7

Scenario 2: Batch Only option is selected
Total number of temporary table instances created for each temporary table associated with the AE program = base temporary table + number of temporary tables (AE Program) = 1 + 3 = 4

In this example we will look at Scenario 2.

Temporary table allocation
When the program runs for the first time, temporary table instance 1 is used. Subsequent parallel streams use the remaining instances in sequence. In this example, the first three concurrent streams use instances 1, 2, and 3. When the user tries to run the 4th, 5th, and 6th streams, the program does not find an available temporary table instance and uses the base temporary table.

Number of concurrent executions
The number of concurrent executions in this example is six, while the number of available temporary table instances is just three. So the first three processes use the temporary table instances, while the final three use the base temporary tables.

[Figure: AE processes sharing the base temporary tables TAB1-TAB4 (TAO, labeled "Not advisable") versus AE processes using the dedicated instances TAO1, TAO2, and TAO3]

Drawbacks
The AE program issues a DELETE for the base temp tables, while it truncates the temp table instances. The use of base temporary tables for any AE process is not recommended because:
- Frequent deletes and inserts can cause fragmentation of the base temp tables.
- Runtime table statistics on the base temp tables are ignored.
- As multiple streams use the same base temp table, there is a possibility of contention.


Recommendations

1. To achieve good performance, always set up an adequate number of temporary table instances.
2. To overcome the drawbacks described above, set up temporary table instances even if you are planning to run only one process at a time.
3. Set the required value in the Max Concurrent field for the process scheduler server. The Max API Aware value should be greater than or equal to the total of the Max Concurrent values set for all process types.
4. Set up the required number of PSAESRV processes on the process scheduler server:

[PSAESRV]
;=========================================================================
; Settings for Application Engine Tuxedo Server
;=========================================================================
;-------------------------------------------------------------------------
; The max instances should reflect the max concurrency set for the process
; type defined with a generic process type of Application Engine as defined
; in the Server Definition page in Process Scheduler Manager
Max Instances=12

5. The use of DTTs is recommended even when the process runs in a single stream.

Create them as Oracle Global Temporary Tables (GTT) - Not advisable


What are Global Temporary Tables(GTT)?
Oracle8i introduced GTTs, which can be used as temporary processing tables for any batch process. Instances of a global temporary table are created at run time in the user's temporary tablespace. These tables are session-specific: table data is deleted when a session is closed or, depending on the table definition, when a transaction is committed. At table creation time you can choose whether to preserve or delete the rows after a commit. Some advantages of using Oracle GTTs in place of dedicated temporary tables are:

- Reduction in redo.
- Faster full scans; the high water mark is always reset at the start of the process.
- Faster truncates.
- Space management occurs inside the temporary segment.
- Easier table management: there is no need to create all the temporary table instances up front, and the base table definition is stored once.

Can GTTs be used in place of Dedicated Temporary Tables?


As of now, PeopleSoft does not provide a script or utility to create GTTs, and there is no direct method to designate the dedicated temporary tables as GTTs. However, an indirect method allows you to use GTTs in place of DTTs. GTTs showed improved truncate time in our internal testing. An important caution when using GTTs concerns Application Engine's ability to restart: since GTTs lose their data when the session ends, there is no way to restart the program.

An indirect method to implement GTTs is:

1. In the AE program's properties window, click the Temp Tables tab.
2. Set the Instance Count to 0.
3. Select the Continue radio button for the Run Time settings.
4. Generate the script to create the temporary tables.
5. Change the script to create the tables as GTTs, making the changes necessary to support the syntax.
6. Create the GTTs with the modified script.
7. Use them with caution.

Note: When multiple runs of the same program occur, AE looks for temporary table instances and, because of the setting above, finds none; it then continues by using the base temp table, which at run time is the GTT. Run this experiment in a demo database before using it in a production environment. Misusing GTTs may cause loss of data, and it is advisable to carry out the experiment with the help of experienced DBAs.

The steps given above are for experimental purposes only. PeopleSoft does not recommend or support Oracle GTTs due to the program's inability to restart.
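For illustration only, here is a hedged sketch of what step 5 might produce; the table name PS_EXAMPLE_TAO1 and its columns are hypothetical, and ON COMMIT PRESERVE ROWS is chosen so that rows survive the intermediate commits an AE program issues within a session.

CREATE GLOBAL TEMPORARY TABLE PS_EXAMPLE_TAO1
( PROCESS_INSTANCE  NUMBER        NOT NULL,
  BUSINESS_UNIT     VARCHAR2(5)   NOT NULL,
  INVOICE           VARCHAR2(22)  NOT NULL )
ON COMMIT PRESERVE ROWS;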

TABLESPACE SELECTION
There are two types of tablespaces that Oracle 9i allows you to create. They are:
- Locally Managed Tablespaces (LMTs)
- Dictionary Managed Tablespaces (DMTs)

The difference between these two tablespace types is in extent management. Also, the files that make up a tablespace can be either plain datafiles or tempfiles. If you want to store permanent objects in a tablespace, do not use the tempfile option.


There are many options available when creating tablespaces. Please refer to the syntax and documentation provided by Oracle to create the right type of tablespace for your needs. The following table gives the recommended use of various combinations.

Sample Name        Tablespace Type                                                          PeopleSoft Objects
-----------------  -----------------------------------------------------------------------  -------------------------------------------------
TS_PERM_DICT       Datafile based, regular tablespace, dictionary managed                   SYSTEM tablespace on Oracle9i Release 1
TS_PERM_LOC_AUTO   Datafile based, regular tablespace, locally managed, autoallocate        All the non-SYSTEM, non-RBS, non-TEMP tablespaces
TS_PERM_LOC_UNI    Datafile based, regular tablespace, locally managed, uniform extent      Rollback tablespace, PeopleSoft temporary tables
TS_PERM_DICT_TEMP  Datafile based, regular tablespace, dictionary managed, TEMPORARY type   NOT RECOMMENDED TO USE
TS_TEMP_LOC_UNI    Tempfile based, temporary tablespace, locally managed, uniform extent    Default temporary tablespace
Refer to the Oracle documentation for a detailed understanding of each option. PeopleSoft's supplied create scripts will create only LMTs. The creation of new dictionary-managed tablespaces is scheduled for desupport by Oracle.

Dictionary Managed Tablespaces(DMTs)


DMTs are Oracle's traditional extent management system. Sample syntax:

CREATE TABLESPACE TS_PERM_DICT
  DATAFILE '/perm/ora/ts_perm_dict.dbf' SIZE 100M
  EXTENT MANAGEMENT DICTIONARY
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0);

If the SYSTEM tablespace is created as locally managed, then you cannot create a DMT. If you do not specify extent management when you create a tablespace, the default is locally managed.

Locally Managed Tablespaces(LMTs)


LMTs are the default starting in Oracle 9i. Extent management is done within the datafile/tempfile using bitmaps. An object storage clause is not required (and is ignored) with these tablespaces. You can use the BLOCKSIZE clause to specify a non-default block size for the tablespace. To use different block sizes in a database, you must have DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter set, and the block size you specify in this clause must correspond to one of the DB_nK_CACHE_SIZE parameter settings.


Advantages of LMTs
The advantages of LMTs are:
- Reduced recursive space management.
- Reduced contention on data dictionary tables and space management latches.
- No coalescing required.
- No rollback generated for space allocation and deallocation activities.
- Fragmentation is reduced, though not completely eliminated.

Locally Managed - Space Management


With locally managed space management:
- Free extents are recorded in a bitmap (so part of the tablespace is set aside for the bitmap).
- Each bit corresponds to a block or group of blocks.
- The bit value indicates whether the blocks are free or used.
- Commonly used views are DBA_EXTENTS and DBA_FREE_SPACE.
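As a quick illustration (a sketch only), free space per tablespace can be summarized from DBA_FREE_SPACE:

SELECT TABLESPACE_NAME,
       ROUND(SUM(BYTES)/1024/1024) AS FREE_MB,
       COUNT(*)                    AS FREE_EXTENTS
  FROM DBA_FREE_SPACE
 GROUP BY TABLESPACE_NAME
 ORDER BY TABLESPACE_NAME;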

Locally Managed - AUTO ALLOCATE


This option enables extent size allocation to be managed by Oracle depending on the object size. This is the preferable method if the tablespace holds objects of various sizes.

CREATE TABLESPACE TS_PERM_LOC_AUTO
  DATAFILE '/perm/ora/ts_perm_loc_auto.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Locally Managed - UNIFORM EXTENT


This option allows the size of every extent to be fixed at the specified size. Specify an appropriate size to avoid creating tables with a large number of extents.

CREATE TABLESPACE TS_PERM_LOC_UNI
  DATAFILE '/perm/ora/ts_perm_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;

Uniform extents give the best predictability and consistency. A consistent extent size eliminates wasted tablespace "holes" and helps the DBA with capacity planning. Proper planning should be done to determine the optimum extent size. Create different categories of tablespaces, such as small, medium, and large, with different uniform extent sizes, and place each table in an appropriate tablespace depending on its size.


Temporary Tablespaces
Every database user should be assigned a default temporary tablespace to handle data sorts. You cannot specify nonstandard block sizes for a temporary tablespace. In Oracle9i, a regular tablespace cannot be assigned as the temporary tablespace; an error is raised when the tablespace assigned is not a true Oracle temporary tablespace.

Datafile-based Tablespaces
These are regular tablespaces with an additional TEMPORARY keyword at the end of the command. These temporary tablespaces should only be used for temporary segments; this also makes sure that permanent objects are not created in them by accident. By default, this creates a DMT.

CREATE TABLESPACE TS_PERM_DICT_TEMP
  DATAFILE '/perm/ora/ts_perm_dict_temp.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0)
  TEMPORARY;

Tempfile-based Tablespaces
Oracle introduced this newer type, which uses a tempfile instead of a datafile. This should be the preferred method for any temporary tablespace, as it gives better extent and space management than the datafile-based ones. In this type of tablespace, only local extent management with UNIFORM extents is allowed.

CREATE TEMPORARY TABLESPACE TS_TEMP_LOC_UNI
  TEMPFILE '/temp/ora/ts_temp_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;

Advantages:
- Space management (extent allocation and deallocation) is locally managed.
- The sort segment created for each instance is reused. All processes performing sorts reuse existing sort extents of the sort segment, rather than allocating a segment (and potentially many extents) for each sort.

Automatic Segment Space Management


Automatic segment space management is a new feature introduced in Oracle9i to simplify space administration tasks and eliminate much of the space-management-related performance tuning. This feature simplifies the management of free space within an object such as a table or index and improves space utilization. Automatic segment space management is available only with LMTs. A new clause, SEGMENT SPACE MANAGEMENT, in the CREATE TABLESPACE command allows you to choose between automatic and manual modes. A tablespace created with MANUAL segment space management continues to use FREELISTS to manage free space within the objects located in it. The following example illustrates how to create a tablespace with automatic segment space management.

CREATE TABLESPACE data
  DATAFILE '/u02/oracle/data/data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;


All objects created in the above tablespace will use the automatic segment space management feature to manage their free space. Any specification of PCTUSED, FREELISTS, and FREELIST GROUPS parameters for objects created in this tablespace will be ignored. A new column called SEGMENT_SPACE_MANAGEMENT in DBA_TABLESPACES view will indicate the segment space management mode used by a tablespace.
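To check which mode each tablespace uses, a simple illustrative query against DBA_TABLESPACES is:

SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT, SEGMENT_SPACE_MANAGEMENT
  FROM DBA_TABLESPACES
 ORDER BY TABLESPACE_NAME;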

INDEX VALIDATION
PeopleSoft-supplied indexes are of a generic nature. Depending on the customer's business needs and data composition, the need for indexes varies. The following tips will help the DBA manage indexes efficiently:

Index Maintenance Tips


1. Run an Oracle trace/TKPROF report for a process and check the access paths to determine the usage of indexes.
2. Consider the column order on composite indexes so that the high-selectivity column is the leading column of the index, depending on the data distribution. For example, change an index with the column order (BUSINESS_UNIT, INVOICE) to (INVOICE, BUSINESS_UNIT). Caution: Sufficient research and testing by an experienced DBA is required prior to making any such changes in a production environment; a poor choice could be detrimental to performance. Note: As of Oracle 9i, the new INDEX SKIP SCAN access path can use the INVOICE column even if it is the second column in the index, so it may not be necessary to flip the index order in such cases.
3. Consider adding additional indexes depending on your processing needs.
4. Review the index recommendation document supplied with the product to see if any of the suggestions apply to your installation.
5. Examine the available indexes and remove unused indexes to boost the performance of INSERT/UPDATE/DELETE operations. Sometimes an index unused by a batch process may be useful for an online page, so do a thorough analysis before deleting an index; it may impact other programs.
6. Indexes tend to fragment more frequently than tables. Indexes should typically be rebuilt when more than 10% of their entries have been deleted or more than 20% have been added. Rebuild indexes regularly to maintain index performance (see the sketch below).
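As an illustration only (the index name PS0CUSTOMER is used here purely as an example), deleted-entry levels can be estimated before deciding to rebuild:

ANALYZE INDEX PS0CUSTOMER VALIDATE STRUCTURE;

SELECT NAME, LF_ROWS, DEL_LF_ROWS,
       ROUND(DEL_LF_ROWS / GREATEST(LF_ROWS, 1) * 100, 2) AS PCT_DELETED
  FROM INDEX_STATS;

-- Rebuild when the deleted percentage is high (for example, above 10%).
ALTER INDEX PS0CUSTOMER REBUILD ONLINE;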

Function-based Indexes
A function-based index is an index on an expression, such as an arithmetic expression or an expression containing a package function.

Test case: Table PS_CUSTOMER has an index PS0CUSTOMER with NAME1 as the leading column.

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'Adventure%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

This query uses the PS0CUSTOMER index and returns the result quickly.

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'ADVENTURE%';

No rows selected

This query also uses the PS0CUSTOMER index and returns quickly, but it returns no rows.


If the data is stored in mixed case, as in the above example, the only way to get the result is to use the UPPER function.

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

This query does not use the PS0CUSTOMER index and takes longer to return.

In such cases, a function-based index is useful.

CREATE INDEX PSFCUSTOMER ON PS_CUSTOMER (UPPER(NAME1));

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

This query uses the PSFCUSTOMER index and returns the result faster.

TABLE/INDEX PARTITIONING
What Is Partitioning?
Partitioning is a data volume management technique. It may have performance benefits, but that is not all. It is most effective on multi-processor machines when implemented with increased db_writers, free lists, and degree of parallelism. Partitioning addresses the key problem of supporting very large tables and indexes by allowing you to decompose them into smaller and more manageable pieces called partitions. Once partitions are defined, SQL statements can access and manipulate the partitions rather than entire tables or indexes.

Partitioning Methods
There are four basic methods of partitioning. They are:
- Range partitioning
- Hash partitioning
- Composite partitioning
- List partitioning

Range Partitioning
Data can be divided on the basis of ranges of column values, for example:
- PS_LEDGER by FISCAL_YEAR
- PS_GP_RSLT_ACUM by EMPLID

CREATE TABLE PS_GP_RSLT_ACUM (EMPLID, CAL_RUN_ID, .......)
PARTITION BY RANGE (EMPLID)
(PARTITION GPACUM1 VALUES LESS THAN ('GP0101') TABLESPACE PSTABLE,
 PARTITION GPACUM2 VALUES LESS THAN ('GP0201') TABLESPACE PSTABLE,
 ....
 PARTITION GPACUM8 VALUES LESS THAN ('GP0801') TABLESPACE PSTABLE)

Hash Partitioning
Data is distributed evenly through a hashing function. Hash partitioning is useful for tables where there is no appropriate range to use.

Composite Partitioning
It is a combination of range and hash partitioning. It uses range partitioning to distribute the data and divides the data into subpartitions within each range using hash partitioning.

List Partitioning
List partitioning enables you to group and organize unordered and unrelated sets of data. You can explicitly control how rows map to partitions by specifying a list of discrete values for the partitioning key in the description of each partition. Multicolumn partition keys are not supported for list partitioning; if a table is partitioned by list, the partitioning key can consist of only a single column of the table.

Composite Range-List Partitioning


This is also a combination of range and list partitioning. First, the data is divided using range partitioning, and then each range partition is further subdivided into list partitions using list key values. Each subpartition individually represents a logical subset of the data, unlike composite range-hash partitioning.

Partitioned Indexes
In addition to table partitioning, indexes on partitioned tables can also be partitioned. Oracle supports two types of index partitioning.


LOCAL Index
A local index is equipartitioned with its underlying table; that is, the index has the same number of partitions and partition keys as the base table.

CREATE UNIQUE INDEX PS_GP_RSLT_ACUM ON PS_GP_RSLT_ACUM (EMPLID, CAL_RUN_ID, ....)
STORAGE (INITIAL 500M NEXT 500M)
LOCAL (PARTITION GPACUM1 TABLESPACE PSINDEX,
       PARTITION GPACUM2 TABLESPACE PSINDEX,
       .....,
       PARTITION GPACUM8 TABLESPACE PSINDEX)

GLOBAL Index
A global index may or may not be partitioned. If it is partitioned, it should not be equi-partitioned with the base table. Global partitioned indexes are flexible in the degree of partitioning and the partitioning keys are independent of the table's partitioning method.

Global Index Vs Local Indexes


The question of global versus local indexes is a generic trade-off: global indexes can be faster than local indexes. In general, local indexes will be faster when partition elimination can take place and the expected volume of data retrieved is significant. If either of these conditions is not met, local indexes probably won't help performance. Remember that the optimizer has to perform extra work to deal with partitioned tables. If your queries are very precise (for example, on a nearly unique pair of columns), then the cost of optimizing and identifying the partition may outweigh the benefit of having the partitions in the first place (the saving might be just one logical I/O, less than 1/10,000 of a CPU second). Where queries are very precise, a global index is quite likely to be a better performer than a local index.

This is a common Oracle trade-off between how much you win or lose and how often you incur that win or loss. With local indexes, you should expect to lose a tiny amount of performance on every query in order to win a huge amount when you do partition maintenance, such as eliminating entire partitions of very old data with a quick DROP PARTITION command. Of course, if your partitions are large and the indexes hit a large number of table rows, then the work saved by hitting just one partition through exactly the correct index partition may prove to be a significant gain.

Partitioned tables can have partitioned (local or global) or non-partitioned indexes. A non-partitioned table can have a global partitioned index.

Advantages of Partitioning
Partitioning improves the availability and manageability of large tables and lets DBAs perform administrative tasks on one partition without affecting the others. It also allows SQL statements to deal with a reduced number of scanned rows, improving performance. When running PeopleSoft batch processes in parallel, you can reduce I/O contention by isolating each job stream in its own partition on large, high-volume transaction tables and by carefully managing the placement of the partitioned datafiles. You are also likely to see huge performance gains on queries that perform full table scans: when the table involved is properly partitioned, the query only needs to perform a full scan of a single partition rather than the entire table.


Guidelines for Partitioning


The main guidelines for partitioning are:
- Before choosing the partitioning keys, check the run control to see the possible input fields for the processes. The run control fields are the ones that decide the parallel criteria for PeopleSoft jobs. You want to partition the table based on how your processes are going to access the data.
- As far as possible, choose a partition key such that data is evenly distributed across partitions. If you choose to partition a table by Business Unit (BU) and most of your data ends up in one BU, then subpartition the partition that holds most of the data using hash partitioning.
- Always use the dbms_stats package to analyze partitioned tables (see the sketch after this list).
- Bitmap indexes can be created on partitioned tables, but they must be created as local indexes.
- Local indexes offer better availability during maintenance operations on the partitions. They also provide better performance when running huge batch jobs in parallel, if the jobs select against the partitions.
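A minimal sketch of the dbms_stats call for a partitioned table (the schema name, sample percentage, and degree of parallelism are assumptions, not values from this paper):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SYSADM',            -- assumed PeopleSoft access ID
    tabname          => 'PS_GP_RSLT_ACUM',
    granularity      => 'ALL',               -- global plus partition-level statistics
    estimate_percent => 10,
    cascade          => TRUE,                -- also gather statistics on the indexes
    degree           => 4);
END;
/

Gathering with GRANULARITY => 'ALL' produces both global and partition-level statistics, which the optimizer needs in order to prune partitions effectively.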

UNDO MANAGEMENT
Oracle 9i provides two ways to handle your undo data. You can either create an undo tablespace and let Oracle manage it automatically, or you can create rollback segments in a tablespace and manage them in the traditional way. The following init.ora parameter controls whether undo management is automatic or manual. To use automatic undo, you also need two other parameters, and we recommend suppressing the errors, which requires a third parameter. These parameters are:

UNDO_MANAGEMENT = AUTO/MANUAL
UNDO_RETENTION = 3600
UNDO_SUPPRESS_ERRORS = TRUE
UNDO_TABLESPACE = UNDOTBS

Automatic Undo Management


This is the preferred method of undo management for Oracle9i. Automatic undo management lets you allocate undo space in a single undo tablespace, instead of distributing undo space across a set of statically allocated rollback segments. The creation and allocation of space among the undo segments is handled automatically by the Oracle server. You specify the initialization parameter UNDO_TABLESPACE to tell Oracle which tablespace to use for undo, and you create that tablespace when creating the database. Automatic undo management requires that you specify an undo tablespace.

Example:

CREATE DATABASE INVDB
DATAFILE '/data3/oradata/INVDB/system/system01.dbf' SIZE 1024M EXTENT MANAGEMENT LOCAL
LOGFILE .
DEFAULT TEMPORARY TABLESPACE TEMPTS1
UNDO TABLESPACE UNDOTS DATAFILE '/data4/oradata/INVDB/undo/undots01.dbf' SIZE 5048M;


Manual Undo Management


Setting UNDO_MANAGEMENT to MANUAL lets you create rollback segments in a tablespace. Managing rollback segments is always challenging: due to the varying size requirements for online and batch operations, it is necessary to manage two sets of rollback segments. The conventional rule of thumb is:

Online: Have many, small rollback segments
Batch:  Have few, large rollback segments

The preceding rule, while valid, may not be practical for the DBA to implement in an environment where online and batch execute at the same time. One may create many small rollback segments and a few large rollback segments in the database, and a specific large rollback segment can be allocated for a batch process using "SET TRANSACTION USE ROLLBACK SEGMENT RBSLARGE". A practical problem is truly dedicating the large rollback segment to the batch process only, because other online transactions may also use the large segment. The only way to dedicate the large segments to batch processes is to run the process when no online transactions are running. Therefore, a DBA should make a fair assessment of the requirement to run batch and online processes simultaneously and size the rollback segments accordingly. The following are a few generic guidelines:

Online
If the batch processes are not run when online transactions are running, then the following setup may be useful.

Example:
RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RBL1 - Offline
RBL2 - Offline

RB01 - RB06 are smaller rollback segments. RBL1 - RBL2 are larger rollback segments.

If online transactions run along with batch processes, then the following setup may be useful.

Example:
RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RB07 - Online
RB08 - Online

RB01 - RB08 are medium-sized rollback segments to support both online and batch processes.


Batch
If the batch process can be run when no online transactions are running, then dedicating the large rollback segment to the process will help. This may not be practical when multiple jobs of the same process are run; the better option in such cases is to bring the required large rollback segments online and take the other small rollback segments offline before running the batch processes. The following examples give some guidelines for specifying the large rollback segment for a process.

SQR/COBOL
If the batch process is an SQR or COBOL program, then the program can be changed to add the following command at the beginning of the process: "SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;"

Example: The following code bit should be called at the beginning of an SQR or after a transaction COMMIT or ROLLBACK.

! --------------------
! - BEGIN CODE BIT
! --------------------
begin-procedure get-large-rollback
begin-sql
SET TRANSACTION USE ROLLBACK SEGMENT RBS_LARGE
end-sql
end-procedure get-large-rollback
! --------------------
! - END CODE BIT
! --------------------

AE
If the batch program is written in AE, then the specific rollback segment can be allocated by adding a step at the beginning of the process with a PeopleCode action. Specify the following code line to achieve that:

%SQLEXEC("SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;");

PARSES VS. EXECUTES


When a SQL statement that is not in the shared pool is executed, it has to be fully parsed. Oracle has to allocate memory for that statement from the shared pool and check the statement syntactically and semantically. This is referred to as a hard parse, and it is very expensive both in terms of CPU used and in the number of latches taken. Hard parsing happens when the Oracle server parses a query and cannot find an exact match for it in the library cache. It occurs due to inefficient sharing of SQL statements and can be reduced by using bind variables instead of literals in those queries. Sometimes, hard parsing causes excessive CPU usage. The number of hard parses can be identified in a PeopleSoft AE trace (128). In Oracle trace output, such statements are shown as individual statements, each parsing once, so relying on Oracle trace output to identify the SQL statements that are hard parsed because of literals instead of bind variables is somewhat difficult.
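One way to spot such statements (an illustrative query, not from the original paper) is to group V$SQL by a fixed-length prefix of the SQL text; many cursor versions sharing the same prefix usually means the same statement is being hard parsed repeatedly with different literals:

select substr(sql_text, 1, 60) as sql_prefix,
       count(*)                as versions,
       sum(executions)         as total_execs
from   v$sql
group  by substr(sql_text, 1, 60)
having count(*) > 10
order  by count(*) desc;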


Use of Bind Variables


The number of hard parses can be reduced to one per multiple executes of the same SQL statement by coding the statement with bind variables instead of literals. Most of the PeopleSoft programs written in AE, SQR, and COBOL have been rewritten to address this issue. Some steps in AE processes do not use bind variables because those SQL statements cannot handle bind variables on some platforms. Oracle deals with bind variables efficiently, and such statements can typically be rewritten to use them. The following sections give some guidelines for using bind variables.

Application Engine - Reuse Flag


PeopleSoft AE programs use bind variables in their SQL statements, but these variables are PeopleSoft specific. By default, AE resolves them and passes the statement to the database with literal values. However, if the ReUse flag is set for that statement, then the AE program sends the statement to the database with true bind variables.

Example: Statement in PC_PRICING.BL6100.10000001

UPDATE PS_PC_RATE_RUN_TAO
SET RESOURCE_ID = %Sql(PC_COM_LIT_CHAR,%NEXT(LAST_RESOURCE_ID),1,20,20)
WHERE PROCESS_INSTANCE = %ProcessInstance
AND BUSINESS_UNIT = %Bind(BUSINESS_UNIT)
AND PROJECT_ID = %Bind(PROJECT_ID)
AND ACTIVITY_ID = %Bind(ACTIVITY_ID)
AND RESOURCE_ID = %Bind(RESOURCE_ID)
AND LINE_NO = %Bind(LINE_NO)

Statement without Re-Use flag:

AE Trace

-- 16.46.00 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000498 WHERE PROCESS_INSTANCE = 419 AND
BUSINESS_UNIT = 'US004' AND PROJECT_ID = 'PRICINGA1' AND ACTIVITY_ID = 'ACTIVITYA1' AND
RESOURCE_ID = 'VUS004VA10114050' AND LINE_NO = 1
/
-- Row(s) affected: 1
                       C o m p i l e    E x e c u t e      F e t c h      Total
SQL Statement          Count    Time    Count    Time    Count    Time     Time
BL6100.10000001.S        252     0.6      252     1.5        0     0.0      2.1

Oracle Trace Output ******************************************************************************** UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000561 WHERE

PROCESS_INSTANCE = 419 AND BUSINESS_UNIT = 'US004' AND PROJECT_ID = 'PRICINGA1021' AND
ACTIVITY_ID = 'ACTIVITYA2042' AND RESOURCE_ID = 'VUS004VA10210124050' AND LINE_NO = 1

call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.01       0.01          0          2          5          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  --------  ---------  ---------  ---------  ---------  ---------
total        2      0.01       0.01          0          2          5          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE PS_PC_RATE_RUN_TAO
      2  INDEX RANGE SCAN (object id 16735)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      1  UPDATE OF 'PS_PC_RATE_RUN_TAO'
      2  INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)

Statement with Re-Use flag:

AE Trace

-- 16.57.57 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1 WHERE PROCESS_INSTANCE = 420 AND
BUSINESS_UNIT = :2 AND PROJECT_ID = :3 AND ACTIVITY_ID = :4 AND RESOURCE_ID = :5 AND
LINE_NO = :6
/
-- Bind variables:
-- 1) 10000751
-- 2) US004
-- 3) PRICINGA1
-- 4) ACTIVITYA1
-- 5) VUS004VA10114050
-- 6) 1

-- Row(s) affected: 1


                       C o m p i l e    E x e c u t e      F e t c h      Total
SQL Statement          Count    Time    Count    Time    Count    Time     Time
BL6100.10000001.S          1     0.0      252     0.4        0     0.0      0.4


Oracle Trace Output ******************************************************************************** UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1 WHERE PROCESS_INSTANCE = 420 AND BUSINESS_UNIT = :2 AND PROJECT_ID = :3 AND ACTIVITY_ID = :4 AND RESOURCE_ID = :5 AND LINE_NO = :6

call count ------- -----Parse 1 Execute 252 Fetch 0 ------- -----total 253

cpu elapsed disk query current -------- ---------- ---------- ---------- ---------0.00 0.00 0 0 0 0.22 0.22 0 509 1284 0.00 0.00 0 0 0 -------- ---------- ---------- ---------- ---------0.22 0.22 0 509 1284

rows ---------0 252 0 ---------252

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
    252  UPDATE PS_PC_RATE_RUN_TAO
    504  INDEX RANGE SCAN (object id 16735)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
    252  UPDATE OF 'PS_PC_RATE_RUN_TAO'
    504  INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************

SQR/COBOL - CURSOR_SHARING
Most of the SQR and COBOL programs are written to use bind variables. For programs that do not use bind variables, Oracle provides the CURSOR_SHARING option, a parameter introduced in Oracle8i. By default, its value is set to EXACT, which means the database looks for an exact match of the SQL statement while parsing. It can also be set to FORCE, which prompts the database to look for a similar statement, ignoring the literal values passed to the SQL statement. Oracle replaces the literal values with system bind variables, treats the statements as a single statement, and parses it once. In Oracle9i, the cost-based SQL optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor. This lets the optimizer determine the selectivity of the bind variable (using histograms if they exist) and determine the execution plan. Note that the next time this query is used, the value is not re-peeked; the same execution plan (which may be improper for the new bind value) that was chosen on the first invocation is used. This enhancement greatly improves the performance of cursor sharing when a bind variable is used against a highly skewed column (but only for the first invocation). Oracle9i also introduces a new setting for CURSOR_SHARING called SIMILAR. With CURSOR_SHARING=SIMILAR, Oracle will switch in the bind variables if the outcome is not different, but will continue using literal values if using bind variables would make a significant difference to the outcome. This setting will be useful only for those queries that are dependent on the histogram statistics of the column PROCESS_INSTANCE.


How to set the CURSOR_SHARING values?
The parameter can be set at the instance level or at the session level.

Instance Level: Set the following parameter in the init<dbname>.ora file and restart the database.
CURSOR_SHARING = FORCE

Session Level: The following syntax can be used to set the value at the session level.
ALTER SESSION SET CURSOR_SHARING = FORCE;

Setting the CURSOR_SHARING value at the instance level is not recommended in a PeopleSoft environment, because it forces the use of bind variables for every statement that runs in the database instance. This may improve performance because of reduced parsing, but it may not be required if the application programs are already written to handle bind values. Setting the value at the session level is more appropriate. If you identify a program (SQR/COBOL) that is not using bind variables and you need to force bind variables at the database level, then adding the ALTER SESSION command at the beginning of the program is the better option. If you are not willing to change the application program, then implementing the session-level command through a trigger gives you more flexibility.

Session Level (using trigger): The following sample trigger code can be used to implement the session-level option.

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7 AND OLD.RUNSTATUS != 7
      AND NEW.PRCSTYPE = 'SQR REPORT' AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURSOR_SHARING=FORCE';
END;
/

Note: Make sure to grant the ALTER SESSION privilege to MYDB for this trigger to work.

Example: SQL statement issued from an SQR/COBOL program:

SELECT . FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE.
NOT EXISTS (SELECT 'X' FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = 'US008'
            AND PZI.INV_ITEM_ID = 'PI000021' AND ..)
ORDER BY ..

The above statement uses literal values in the WHERE clause, which causes a hard parse for each execute. Every hard parse carries a performance overhead, so minimizing the number of hard parses boosts performance. This statement gets executed for every combination of BUSINESS_UNIT and INV_ITEM_ID. Based on the data composition used in this benchmark, there were about 13,035 unique combinations of BUSINESS_UNIT and INV_ITEM_ID and about 19,580 total executes.

Oracle TKPROF Output with CURSOR_SHARING=FORCE

SELECT FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI WHERE ..

NOT EXISTS (SELECT :SYS_B_09 FROM PS_PICKZON_INV_VW PZI WHERE PZI.BUSINESS_UNIT = :SYS_B_10 AND PZI.INV_ITEM_ID = :SYS_B_11 AND ..) ORDER BY ..

Pros and Cons of CURSOR_SHARING
By setting the above parameter at the database level, the overall processing time was reduced significantly.

Overall statistics with no bind variables:


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed      disk      query    current       rows
------- ------  --------  ---------  --------  ---------  ---------  ---------
Parse    26389     98.27      99.54         0       1074          0          0
Execute 404647     51.09      50.11      1757     242929     371000      78376
Fetch   517618     47.85      47.43      3027    1455101     235446     189454
------- ------  --------  ---------  --------  ---------  ---------  ---------
total   948654    197.21     197.08      4784    1699104     606446     267830

Misses in library cache during parse: 13190
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed      disk      query    current       rows
------- ------  --------  ---------  --------  ---------  ---------  ---------
Parse    27118      5.35       5.06         0         49          1          0
Execute  33788      2.42       2.22         0       5577        235        229
Fetch    54988      2.44       2.57         1      97241          0      47621
------- ------  --------  ---------  --------  ---------  ---------  ---------
total   115894     10.21       9.85         1     102867        236      47850

Misses in library cache during parse: 65

Overall statistics with bind variables:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed      disk      query    current       rows
------- ------  --------  ---------  --------  ---------  ---------  ---------
Parse    26389     15.44      15.69         0          0          0          0
Execute 404647     44.02      43.51       173     231362     333538      78376
Fetch   517618     45.47      43.02      2784    1439571     235104     189454
------- ------  --------  ---------  --------  ---------  ---------  ---------
total   948654    104.93     102.22      2957    1670933     568642     267830

Misses in library cache during parse: 64
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed      disk      query    current       rows
------- ------  --------  ---------  --------  ---------  ---------  ---------
Parse      356      0.08       0.10         0          0          0          0
Execute    357      0.47       0.48         0       5568        228        228
Fetch      667      0.00       0.01         0       1333          0        552
------- ------  --------  ---------  --------  ---------  ---------  ---------
total     1380      0.55       0.59         0       6901        228        780

Misses in library cache during parse: 1

From the above trace statistics, it can be seen that the number of library cache misses decreased with the use of bind variables.


Original Timing    Time with CURSOR_SHARING option    % Gain
197 Sec            102 Sec                            48%

Parameter: SESSION_CACHED_CURSORS
Processes that perform a cursor open/close or a (soft) parse for each execution of a SQL statement with bind variables can gain some scalability from the Oracle parameter SESSION_CACHED_CURSORS. This is mainly useful for repeating statements issued through PeopleCode using the SQLExec command. SESSION_CACHED_CURSORS is a numeric parameter that can be set at the instance level or at the session level using the command:

ALTER SESSION SET SESSION_CACHED_CURSORS = NN;

The value NN determines how many cached cursors there can be in your session. To be placed in the session cache, the same statement has to be parsed three times within the same cursor; a pointer to the shared cursor is then added to your session cache. If all session cache cursors are in use, the least recently used entry is discarded. Depending on the available memory, a value between 10 and 50 can show some performance gains.
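As a sketch (the value 50 is an assumption to be tuned per environment), the parameter can be set in the init.ora, and its effect checked by comparing the session cursor cache hits statistic against the total parse count:

SESSION_CACHED_CURSORS = 50

select name, value
from   v$sysstat
where  name in ('session cursor cache hits', 'session cursor cache count', 'parse count (total)');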


Batch Server Selection


The Process Scheduler executes PeopleSoft batch processes. As per the PeopleSoft architecture, the Process Scheduler (batch server) can be set up to run on the database server or on any other server.

SCENARIO 1: PROCESS SCHEDULER AND DATABASE SERVER ON DIFFERENT BOXES


Scenario 1: SERVER1 (the batch server, running the Process Scheduler) connects over TCP/IP to SERVER2 (the database server, running the Oracle database).

Running the Process Scheduler on a box other than the database server uses a TCP/IP connection to the database. Because a batch process may involve extensive SQL processing, this TCP/IP traffic can be a significant overhead and may affect processing times. The effect is most evident in processes that do excessive row-by-row processing; the impact is smaller for processes where the majority of SQL statements are set-based. Use a dedicated network connection between the batch server and the database to minimize the overhead.

SCENARIO 2: PROCESS SCHEDULER AND DATABASE SERVER ON ONE BOX


Scenario 2: SERVER2 (the database server / batch server) runs both the Oracle database and the Process Scheduler, which uses a local connection.


Running the Process Scheduler on the database server eliminates the TCP/IP overhead and improves processing time. At the same time, it does use additional server memory. Set the following value in the Process Scheduler configuration file "psprcs.cfg" to use the direct connection instead of TCP/IP:

UseLocalOracleDB=1

This kind of setup is useful for programs that do excessive row-by-row processing.

WHAT IS THE RECOMMENDED SCENARIO?


Considering the performance impact of TCP/IP on row-by-row processing, Scenario 2 is recommended because the connection overhead is eliminated. At the same time, it may not be possible to run extensive batch processes on the database server due to limited server resources. Make a fair judgment depending on your environment and usage. You could set up both scenarios in your environment and use a specific one depending on the time of the run and the complexity of the process; for example, all the nightly jobs can be run using Scenario 2.


Chapter 3 - Capturing Traces


This chapter discusses how to capture traces to identify problems. Make sure to set the values back to zero after capturing the trace.

Note: Running the production environment with these settings will cause performance degradation due to the overhead introduced by tracing.

APPLICATION ENGINE TRACE


psprcs.cfg

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit   Type of tracing
; ---   ---------------
; 1     - Trace STEP execution sequence to AET file
; 2     - Trace Application SQL statements to AET file
; 4     - Trace Dedicated Temp Table Allocation to AET file
; 8     - not yet allocated
; 16    - not yet allocated
; 32    - not yet allocated
; 64    - not yet allocated
; 128   - Timings Report to AET file
; 256   - Method/BuiltIn detail instead of summary in AET Timings Report
; 512   - not yet allocated
; 1024  - Timings Report to tables
; 2048  - DB optimizer trace to file
; 4096  - DB optimizer trace to tables
;
; TraceAE=(1+2+128+2048)
TraceAE=2179

ONLINE TRACE
psappsrv.cfg

;-------------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit   Type of tracing
; ---   ---------------
; 1     - SQL statements
; 2     - SQL statement variables
; 4     - SQL connect, disconnect, commit and rollback
; 8     - Row Fetch (indicates that it occurred, not data)
; 16    - All other API calls except ssb
; 32    - Set Select Buffers (identifies the attributes of columns
;         to be selected)
; 64    - Database API specific calls
; 128   - COBOL statement timings
; 256   - Sybase Bind information
; 512   - Sybase Fetch information
; 4096  - Manager information
; 8192  - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask


TraceSql=0
TraceSqlMask=12319

;-------------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit   Type of tracing
; ---   ---------------
; 1     - Trace entire program
; 2     - List the program
; 4     - Show assignments to variables
; 8     - Show fetched values
; 16    - Show stack
; 64    - Trace start of programs
; 128   - Trace external function calls
; 256   - Trace internal function calls
; 512   - Show parameter values
; 1024  - Show function return value
; 2048  - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=0
TracePCMask=0

ORACLE TRACE
The following parameters are required for Oracle trace:

Trace at Instance Level:


init<database_name>.ora

SQL_TRACE = TRUE
TIMED_STATISTICS = TRUE

Trace at Session Level:


ALTER SESSION SET SQL_TRACE = TRUE;

It is also required to set TIMED_STATISTICS = TRUE in addition to the above trace setting. If the TIMED_STATISTICS value is not set at the instance level in the init.ora parameter file, then it must be set for each session along with the SQL_TRACE value.

Session Level (using trigger):

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7 AND OLD.RUNSTATUS != 7
      AND NEW.PRCSTYPE = 'SQR REPORT' AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE=TRUE';
END;
/

Note: Make sure to grant the ALTER SESSION privilege to MYDB for this trigger to work.


Trace for a Different Session:


In most cases, it may be necessary to set the trace for a program that is currently executing. In such cases, the following package can be executed from the SQL prompt by passing the SID and serial number (SERIAL#) of the session you need to trace.

To get SID and SERIAL#:
select sid, serial#, username from v$session;

To turn on the trace:
exec sys.dbms_system.set_sql_trace_in_session( sid, serial#, TRUE );

To turn off the trace:
exec sys.dbms_system.set_sql_trace_in_session( sid, serial#, FALSE );

Make sure to run "GRANT EXECUTE ON DBMS_SYSTEM TO <user/role>;" before running these commands.

TKPROF
Capture the Oracle trace and run TKPROF with the following sort options:

tkprof <trace_input> <trace_output> sys=no explain=<user_id>/<password> sort=exeela,fchela,prscpu,execpu,fchcpu

STATSPACK
What Is Statspack?
Tuning a database can take multiple iterations to reach a stable environment. Oracle provides a tool called STATSPACK to gather database information for a given period and report on database health. STATSPACK is a useful tool for reactive tuning. It differs fundamentally from the well-known BSTAT/ESTAT tuning scripts because it collects more information and stores the performance-statistics data permanently in Oracle tables, which can be used for later reporting and analysis. STATSPACK is a set of SQL scripts, PL/SQL stored procedures, and packages for collecting performance statistics. It gathers more information than the UTLBSTAT/UTLESTAT utilities and automates some operations.

Installing and Using Statspack


Installation
1. Check if you have a TOOLS tablespace on your database; otherwise create it (minimum size is 35M).
2. Run SQL*Plus and connect as SYSDBA: connect / as sysdba


3. To install STATSPACK, run the following script:
   On Unix: @?/rdbms/admin/spcreate
   On NT: @%ORACLE_HOME%\rdbms\admin\spcreate

Collect statistics
1. Run SQL*Plus and connect as perfstat (default password is perfstat): connect perfstat/perfstat
2. To collect statistics, run the following command: execute statspack.snap;

Each time the above command is issued, the database information is recorded along with the time. So, it is required to issue this command twice, once before the start of the process and once after the completion of the process in order to capture the information between the two snaps.

Generate Report
1. Run SQL*Plus and connect as perfstat (default password is perfstat): connect perfstat/perfstat
2. To generate a report, run the following script:
   On Unix: @?/rdbms/admin/spreport
   On NT: @%ORACLE_HOME%\rdbms\admin\spreport
You need to specify the start and end snap IDs to get the report.

Uninstall
1. Run SQL*Plus and connect as SYSDBA: connect / as sysdba
2. To uninstall STATSPACK, run the following script:
   On Unix: @?/rdbms/admin/spdrop
   On NT: @%ORACLE_HOME%\rdbms\admin\spdrop

Clean old statistics


1. Run SQL*Plus and connect as perfstat (default password is perfstat): connect perfstat/perfstat
2. To clean old statistics, run the following script:
   On Unix: @?/rdbms/admin/sppurge
   On NT: @%ORACLE_HOME%\rdbms\admin\sppurge


Chapter 4 - Database Tuning and init.ora Parameters

DATABASE TUNING TIPS


Block Size
Thorough analysis should be done before choosing an appropriate block size at the time of database creation. There could be significant performance impact depending on the size selected.
In Oracle9i, once you create the database, you can go back and change just about any parameter EXCEPT the default DB_BLOCK_SIZE. The only way to change this is to delete everything and start over. Because of the importance of this parameter, you should choose one that best suits your needs before you start.

Size Considerations
Small Block Size (2K to 8K)

Pros:
1) Reduces block contention.
2) Good for small number of rows.
3) Good for random access.

Cons:
1) Has relatively large overhead.
2) Has small number of rows per block.
3) Can cause more index blocks to be read.
4) You may not be able to parse delivered, convoluted SQL statements if your block size is smaller than 8K.

Larger Block Size (16K)

Pros:
1) Less overhead.
2) Good for sequential access.
3) Good for very large rows.
4) Better performance of index reads.

Cons:
1) Increases block contention.
2) Uses more space in the buffer cache.
3) For very small tables, a lot of unused space will be left unusable.

Recommended Block Size
The general recommendation for PeopleSoft applications is a block size of not less than 8K: 8K for most tables, with different block sizes for temporary tables and other tables based on how they are used. If you are running online and batch processes on the same database, then set the value to 8K. Do not set the value to less than 8K.


Tablespaces with different Block Sizes


Oracle 9i lets you create tablespaces with different block sizes. When you do so, you should define an appropriate buffer cache for each block size using one of the following parameters:

DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE
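A minimal sketch (the tablespace name, datafile path, and sizes are assumptions): to use a 16K tablespace in a database with an 8K default block size, first set DB_16K_CACHE_SIZE in the init.ora (or via ALTER SYSTEM when using an spfile), then create the tablespace with a matching BLOCKSIZE clause:

DB_16K_CACHE_SIZE = 64M

CREATE TABLESPACE PSTAB16K
  DATAFILE '/data2/oradata/MYDB/pstab16k01.dbf' SIZE 1000M
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;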

Shared Pool Area


Check GETHITRATIO in V$LIBRARYCACHE:

select gethitratio from v$librarycache where namespace = 'SQL AREA';

Find out the statements that users are running. The following statement dumps the entire SQL area:

select sql_text, users_executing, executions, loads from v$sqlarea;

Consider using the following statement if you want to get the SQL statements with the top 10 buffer gets per execution:

select * from (
  select trunc(Buffer_Gets / decode(Executions, 0, 1, Executions)) "Gets/Exec",
         trunc(Disk_Reads  / decode(Executions, 0, 1, Executions)) "Reads/Exec",
         Executions  "Execs",
         Buffer_Gets "Gets",
         SQL_Text
  from   V$SQLArea
  where  Disk_Reads > 100000 or Buffer_Gets > 100000
  order by Buffer_Gets / decode(Executions, 0, 1, Executions) desc)
where rownum < 11;

Consider increasing SHARED_POOL_SIZE to improve ratio. Warning: Making the shared pool too large could cause memory fragmentation and harm performance.

Data Dictionary hit ratio


Keep the ratio of the sum of GETMISSES to the sum of GETS less than 15%.

select parameter, gets, getmisses from v$rowcache;
select 1 - (sum(getmisses)/sum(gets)) from v$rowcache;

Consider increasing SHARED_POOL_SIZE to improve the ratio.

Buffer busy waits


select name, value from v$sysstat where name = 'free buffer inspected';

Consider increasing DB_BLOCK_BUFFERS if this shows a high or increasing value. This statistic is the number of buffers skipped to find a free buffer.

select event, total_waits from v$system_event
where event in ('free buffer waits', 'buffer busy waits');

Buffer busy waits means that a process has been waiting for a buffer to become available. Free buffer waits occur after a server cannot find a free buffer or when the dirty queue is full. Keep in mind that these statistics and events could also indicate that the DBWn process needs tuning.


Log Buffer
There should be no log buffer space waits.

select sid, event, seconds_in_wait, state from v$session_wait where event = 'log buffer space';

If some time was spent waiting for space in the redo log buffer, consider increasing LOG_BUFFER or moving the log files to faster disks such as striped disks. The redo buffer allocation retries value should be near 0; the number should be less than 1% of redo entries.

select name, value from v$sysstat
where name in ('redo buffer allocation retries', 'redo entries');

If necessary, increase LOG_BUFFER (until the ratio is stable) or improve the checkpointing or archiving process. The log buffer is flushed when it is 1/3 full or reaches 1 MB, so making the log buffer greater than 3-5 MB just wastes memory.

Note: A modest increase can significantly enhance throughput, and the LOG_BUFFER size must be a multiple of the operating system block size.

Tablespace I/O
The following guidelines will help you lay out tablespaces for better I/O:
1. Reserve the SYSTEM tablespace for data dictionary objects.
2. Create locally managed tablespaces to avoid space management issues.
3. Split tables and indexes into separate tablespaces.
4. Create a separate tablespace for rollback segments in the case of manual undo management.
5. Store very large database objects in their own tablespace.
6. Create one or more temporary tablespaces (see the example after this list).
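For the last guideline, a minimal sketch of a locally managed temporary tablespace (the tablespace name, path, and sizes are assumptions):

CREATE TEMPORARY TABLESPACE PSTEMP
  TEMPFILE '/data5/oradata/MYDB/pstemp01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE PSTEMP;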

Full Table Scans


The following steps will help you monitor full table scans:
1. Investigate the need for full table scans.
2. Specify DB_FILE_MULTIBLOCK_READ_COUNT (8 is the default).
3. Monitor long-running full table scans with the v$session_longops view.

select sid, serial#, opname, to_char(start_time, 'HH24:MI:SS') as "START",
       (sofar/totalwork)*100 as percent_complete
from   v$session_longops;

select name, value from v$sysstat where name like '%table scans%';

Rebuilding Indexes
Index_usage represents the percentage of rows deleted. If it is greater than ten percent, consider rebuilding. The following code sample helps in rebuilding indexes:

analyze index acct_no_idx validate structure;
select (del_lf_rows_len/lf_rows_len) * 100 as index_usage from index_stats;
alter index acct_no_idx rebuild;

Sorting
Try to sort in memory instead of in temporary segments. The init.ora parameter SORT_AREA_SIZE allocates memory for sorting (per user, as required). This is the space allocated in main memory for each process to perform sorts. This memory resides in the UGA section of the PGA for non-MTS (Multi-Threaded Server) configurations and in the SGA for MTS databases. If the sort cannot be performed in memory, temporary segments are allocated on disk to hold intermediate results. Increasing the value of SORT_AREA_SIZE reduces the total number of disk sorts, thus reducing disk I/O. However, it can cause swapping if too little memory is left over for other processes, and page swapping dramatically affects performance. The statements that generate temporary segments are: CREATE INDEX; SELECT with ORDER BY, DISTINCT, GROUP BY, or UNION; unindexed joins; and some correlated subqueries.

Since temporary segments are created to handle sorts that cannot be handled in memory, the initial extent default for temporary segments should be at least as large as the value of SORT_AREA_SIZE. This will minimize the extension of the segment.
Set SORT_AREA_SIZE and SORT_MULTIBLOCK_READ_COUNT (which forces the sort to read a larger section of each run into memory during a merge pass) appropriately. The default value for SORT_AREA_SIZE is 64K, which is far too small for most cases; a range of 512KB to 1MB should be considered, and a SORT_AREA_SIZE of 2-3 MB for a data warehouse is not implausible. However, avoid disk sort operations whenever possible. Reduce swapping and paging by ensuring that sorting is done in memory where possible, and reduce space allocation calls by allocating temporary space appropriately.

select disk.value "Disk", mem.value "Mem", (disk.value/mem.value) * 100 "Ratio"
from   v$sysstat mem, v$sysstat disk
where  mem.name  = 'sorts (memory)'
and    disk.name = 'sorts (disk)';

The ratio of disk sorts to memory sorts should be less than 5%. Adjust SORT_AREA_SIZE if necessary.


select tablespace_name, current_users, total_extents, used_extents,
       extent_hits, max_used_blocks, max_sort_blocks
from   v$sort_segment;

PGA_AGGREGATE_TARGET: Oracle uses PGA_AGGREGATE_TARGET as a target for total PGA memory. Use this parameter to let Oracle determine the optimal size of each work area allocated in AUTO mode (in other words, when WORKAREA_SIZE_POLICY is set to AUTO). You must set this parameter to enable the automatic sizing of the SQL work areas used by memory-intensive SQL operations such as sort, group-by, hash join, bitmap merge, and bitmap create.

WORKAREA_SIZE_POLICY: WORKAREA_SIZE_POLICY works in conjunction with the PGA_AGGREGATE_TARGET parameter. Set WORKAREA_SIZE_POLICY to AUTO to have Oracle automatically manage the work area sizes; it cannot be set to AUTO when PGA_AGGREGATE_TARGET is not set.
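A minimal init.ora sketch combining the two parameters (the target size is an assumption that should be derived from the memory available on the database server); the resulting PGA usage can then be monitored through V$PGASTAT:

WORKAREA_SIZE_POLICY = AUTO
PGA_AGGREGATE_TARGET = 1G

select name, value, unit from v$pgastat;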

IMPORTANT PARAMETERS FOR ORACLE 9I


_UNNEST_SUBQUERY=FALSE
The following bug was reported by a customer using PT8.4x: Bug# 2948326, "Query gives incorrect results when using MIN/MAX function under CBO." The complete description of the bug can be found on Oracle's Metalink site (reference Bug# 2948326). While the bug is not yet resolved in Oracle 9i (9.2.0.2 or 9.2.0.3), Oracle does recommend a workaround using the following init.ora parameter, which produces correct results:

_unnest_subquery = false

OPTIMIZER_FEATURES_ENABLE=8.1.7
It has come to our attention that the Oracle 9.0.1.x.x CBO optimizer may produce inefficient plans. Oracle recommends that our customers set the following init.ora parameter for Oracle 9i versions 9.0.1.x.x to 9.2.0.1:

optimizer_features_enable=8.1.7

These optimizer issues have been addressed in Oracle 9.2.0.2, so the optimizer_features_enable=8.1.7 parameter is not needed with 9.2.0.2 and beyond.

_COMPLEX_VIEW_MERGING=FALSE
According to Oracle, the base bug# 2700474 has been fixed in Oracle 9.2.0.4.0; however, the following bug still requires using the _complex_view_merging=FALSE parameter in the init.ora. It is possible to get incorrect results on queries using aggregate functions with correlated subqueries, or to receive ORA-3113 and ORA-7445 errors on certain SQL queries. While the bugs are not yet resolved in Oracle 9i (9.2.0.2 or 9.2.0.3), Oracle does recommend a workaround that fixes the problem we encountered and indicates no side effects to implementing it. The complete description of the bug can be found on Oracle's Metalink site (reference Bug# 2415893). Set the following init.ora parameter as indicated below:


_complex_view_merging=FALSE

O7_DICTIONARY_ACCESSIBILITY=TRUE
The following issue was found while testing CRM on PT8.4x. Running the AE_SYNCIDGEN process in UA generates the following error:

Error Message: SQL error. Function: SQL.Execute Error Position: 7 Return: 1031 - ORA-01031: insufficient privileges

The O7_DICTIONARY_ACCESSIBILITY initialization parameter controls restrictions on system privileges when you migrate from Oracle7 to Oracle8i and higher releases. If the parameter is set to TRUE, access to objects in the SYS schema is allowed (Oracle7 behavior). If it is set to FALSE, system privileges that allow access to objects in "any schema" do not allow access to objects in the SYS schema. The default for O7_DICTIONARY_ACCESSIBILITY is FALSE, which is different from Oracle versions prior to 9i.

Beginning with PeopleTools 8.4, Oracle triggers were used in the CRM applications. The triggers work fine on Oracle8i but were failing on Oracle9i. For these triggers to work on Oracle9i, we need to revert the Oracle database catalog access behavior to that of Oracle 8.1.7 or earlier. This is accomplished by setting the following init.ora parameter:

O7_DICTIONARY_ACCESSIBILITY=TRUE

Bounce (shut down and restart) the instance for the parameter to take effect.

_COST_EQUALITY_SEMI_JOIN=FALSE
An upgrade issue was discovered internally while testing PT 8.4x upgrades. The upgrade compare step completed, but the results of the step were incorrect because the underlying select criteria in some of the upgrade compare steps were not updating the correct number of rows. PeopleSoft recommends that customers still on Oracle 9.2.0.2 and 9.2.0.3 add the following parameter to their init.ora to address this specific upgrade issue:

_cost_equality_semi_join=false

The aforementioned specific upgrade issue has been fixed in Oracle 9.2.0.4.

Appendix A Special Notices


All material contained in this documentation is proprietary and confidential to PeopleSoft, Inc., is protected by copyright laws, and subject to the nondisclosure provisions of the applicable PeopleSoft agreement. No part of this documentation may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including, but not limited to, electronic, graphic, mechanical, photocopying, recording, or otherwise without the prior written permission of PeopleSoft, Inc. This documentation is subject to change without notice, and PeopleSoft, Inc. does not warrant that the material contained in this documentation is free of errors. Any errors found in this document should be reported to PeopleSoft, Inc. in writing. The copyrighted software that accompanies this documentation is licensed for use only in strict accordance with the applicable license agreement, which should be read carefully as it governs the terms of use of the software and this documentation, including the disclosure thereof. See Customer Connection or PeopleBooks for more information about what publications are considered to be product documentation. PeopleSoft, the PeopleSoft logo, PeopleTools, PS/nVision, PeopleCode, PeopleBooks, and Vantive are registered trademarks, and PeopleTalk and "People power the internet." are trademarks of PeopleSoft, Inc. All other company and product names may be trademarks of their respective owners. The information contained herein is subject to change without notice. Information in this book was developed in conjunction with use of the product specified, and is limited in application to those specific hardware and software products and levels. PeopleSoft may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. The information contained in this document has not been submitted to any formal PeopleSoft test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While PeopleSoft may have reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

Appendix B Validation and Feedback


This section documents the real-world validation that this Red Paper has received.


CUSTOMER VALIDATION
PeopleSoft is working with PeopleSoft customers to get feedback and validation on this document. Lessons learned from these customer experiences will be posted here.

FIELD VALIDATION
PeopleSoft Consulting has provided feedback and validation on this document. Additional lessons learned from field experience will be posted here.

Appendix C - References
1. Peoplesoft Installation Guide - Oracle Tuning chapter

2. http://technet.oracle.com
3. http://www.oracle.com/oramag/
4. http://metalink.oracle.com
5. http://www.ixora.com.au
6. http://www.dbasupport.com
7. http://www.dba-village.com
8. http://www.lazydba.com
9. http://www.orafaq.com


10. http://www.oracletuning.com

Appendix D Revision History


Authors
Jayagopal Theranikal, Performance Engineer - Has more than 12 years of Oracle database experience and more than 4 years of PeopleSoft application tuning experience. Worked on SCM application tuning and benchmarks in the Performance & Benchmarks group.

Sumathy Muthuswamy, Performance Engineer - Has more than 7 years of Oracle database experience and more than 5 years of PeopleSoft application tuning experience. Worked on SCM, Financials, and CRM application tuning and benchmarks in the Performance and Benchmarks group.

Contributors: Durgesh Desai, Performance Engineer

Reviewers
The following people reviewed this Red Paper:
John Houghton
Jim Houghton
Lawrence Schapker
Vadali Subrahmanyeswar
Vishnu Badikol

Revision History
1. 05/15/04: Created document.
