This guide gives you an introduction to conducting OLTP (Online Transaction Processing) workloads on the Oracle Database. It will equip you with the essentials for assessing the ability of any system running the Oracle Database to process transactional workloads. On completion of this guide you will be able to run detailed and comprehensive Oracle load tests. After building a basic skill set, you should be able to take a system from 'bare metal' to the generation of a full performance profile within one day. If you have not already done so you should read the Quick Start tutorial before proceeding with this guide. Database load testing is an advanced skill and therefore familiarity with the Oracle Database and basic Oracle DBA skills are assumed. You should already be able to create, administer and connect to an Oracle database. If you do not have these skills I recommend starting with an introduction to Oracle.
Introduction
Single Threaded Tests
What is TPC-C?
Performance Profiles
Test Network Configuration
Load Generation Server Configuration
SUT Database Server Configuration
Administrator PC Configuration
Installation and Configuration
Load Generation Server Installation
Load Generation Server Configuration
SUT Database Server Installation
Network Connectivity
Creating the Test Schema
Build Options
Starting the Schema Build
Pre-Testing and Planning
Driver Options
Loading the Driver Script
Pre-Test 1 Verifying the Schema
Pre-Test 2 Single and Multiple Virtual User Throughput
Planning and Preparation
Running Timed Tests with the AWR Snapshot Driver Script
Automating Tests with Autopilot Mode
Performance and Price Performance Analysis
Conclusion
Support and Discussion
Introduction
In this introduction we give an overview of the correct approach to take for Oracle load testing and discuss the tests that Hammerora implements.
  COUNT(*)
----------
 160427556

Elapsed: 00:00:40.86

The challenge with such a statement is that it runs single-threaded. It was therefore valid in the era of single-core processors as a test of the performance of one processor, but it became obsolete in the era of multicore processors. On an example eight-core processor such a test would give an indication of the performance of one core while leaving the other cores idle, testing only a fraction of the performance potential of the CPU as a whole. Additionally, such tests focused on CPU performance only, without testing any of the storage component. This simple approach is flawed: testing a multi-CPU or multicore database environment requires a multithreaded test framework. Fortunately Hammerora is multi-threaded and therefore ready to test your multi-core environments with multiple virtual users, all interacting independently and simultaneously with the database.
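As a purely illustrative sketch (hypothetical code, not part of Hammerora, whose virtual users are implemented in Tcl), the multithreaded approach can be pictured as one worker thread per virtual user, so that the server under test receives many independent streams of work rather than a single one:

```python
import threading

def virtual_user(results, idx, iterations):
    # Placeholder transaction loop: in a real load test each thread
    # would open its own database session and submit transactions,
    # spending most of its time waiting on database calls.
    count = 0
    for _ in range(iterations):
        count += 1
    results[idx] = count

def run_load(n_users, iterations_per_user):
    # One thread per virtual user so that all of the database server's
    # cores can be driven concurrently, unlike a single-threaded query.
    results = [0] * n_users
    threads = [
        threading.Thread(target=virtual_user, args=(results, i, iterations_per_user))
        for i in range(n_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(run_load(8, 1000))  # 8 concurrent virtual users, prints 8000
```

The structure, not the arithmetic, is the point: each virtual user runs independently, which is what allows the workload to scale across cores.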
What is TPC-C?
Designing and implementing a benchmark is a significant challenge. Many performance tests and tools experience difficulties in comparing system performance especially in the area of scalability, the ability of a test conducted on a certain system and schema size to be comparable with a test on a larger scale system.
When system vendors wish to publish benchmark information about Oracle performance they have long had access to sophisticated test specifications to do so. In particular, Oracle recognises the TPC-C as the standard for Online Transaction Processing, the type of workload we are looking to simulate. Fortunately the TPC benchmarks are industry standards and the TPC distributes its benchmark specifications to the public at no charge. For this reason Hammerora includes an implementation of the specification of the TPC-C benchmark that can be run in any Oracle environment. This implementation has the significant advantage that you know the test is reliable, scalable and proven to produce consistent results. It is important to emphasise that the implementation is not a full specification TPC-C benchmark and the transaction results cannot be compared with the official published benchmarks in any way. Instead, the implementation in Hammerora takes the best designed specification for a database transactional workload available and enables you to run an accurate and repeatable workload against your own Oracle database. Audited TPC-C benchmarks are extremely costly and time consuming to establish and maintain; the Hammerora implementation is designed to capture the essence of TPC-C in a form that can be run at low cost, bringing professional load testing to all Oracle environments.

TPC-C models a computer system used to fulfil orders from customers for products supplied by a company. The company sells 100,000 items and keeps its stock in warehouses. Each warehouse has 10 sales districts and each district serves 3000 customers. The customers call the company, whose operators take the orders, each order containing a number of items. Orders are usually satisfied from the local warehouse; however, a small number of items are not in stock at a particular point in time and are supplied by an alternative warehouse.
Figure 1 shows this company structure.
It is important to note that the size of the company is not fixed: warehouses and sales districts can be added as the company grows. For this reason your test schema can be as small or as large as you wish, with a larger schema requiring a more powerful computer system to process the increased level of transactions. Figure 2 shows the TPC-C schema; in particular note how the number of rows in all of the tables, apart from the ITEM table which is fixed, is dependent upon the number of warehouses you choose for your schema.
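As an illustration, the initial row counts can be derived from the warehouse count using the per-warehouse cardinalities of the TPC-C schema. The sketch below is hypothetical helper code, not part of Hammerora; ORDER_LINE is approximate because each order contains a variable number of lines (10 on average).

```python
def initial_row_counts(warehouses):
    """Approximate initial row counts for a TPC-C schema of a given size.

    Per-warehouse cardinalities follow the TPC-C specification; ORDER_LINE
    is approximate because each order has a random 5-15 (average 10) lines.
    """
    return {
        "WAREHOUSE": warehouses,
        "DISTRICT": warehouses * 10,          # 10 districts per warehouse
        "CUSTOMER": warehouses * 10 * 3000,   # 3000 customers per district
        "STOCK": warehouses * 100000,         # one stock row per item per warehouse
        "ORDERS": warehouses * 30000,
        "NEW_ORDER": warehouses * 9000,       # the most recent orders awaiting delivery
        "HISTORY": warehouses * 30000,
        "ORDER_LINE": warehouses * 300000,    # ~10 lines per order on average
        "ITEM": 100000,                       # fixed regardless of warehouse count
    }

counts = initial_row_counts(200)
print(counts["CUSTOMER"])  # prints 6000000, matching a 200 warehouse build
```

These figures agree with the row counts shown for the 200 warehouse example build later in this guide.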
For additional clarity please note that the term Warehouse in the context of TPC-C bears no relation to a data warehousing workload; TPC-C defines a transactional system, not a decision support (DSS) one. In addition to being used to place orders, the computer system also enables payment and delivery of orders and the ability to query the stock levels of warehouses. Consequently the workload is defined by a mix of 5 transactions as follows:

New-order: receive a new order from a customer: 45%
Payment: update the customer's balance to record a payment: 43%
Delivery: deliver orders asynchronously: 4%
Order-status: retrieve the status of a customer's most recent order: 4%
Stock-level: return the status of a warehouse's inventory: 4%
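As a hypothetical illustration of how a driver might implement this mix (Hammerora's actual driver scripts are written in Tcl), each virtual user can draw the next transaction type from the weighted distribution:

```python
import random

# The five TPC-C transaction types and their percentage weights.
TRANSACTION_MIX = [
    ("new-order", 45),
    ("payment", 43),
    ("delivery", 4),
    ("order-status", 4),
    ("stock-level", 4),
]

def choose_transaction(rng=random):
    # Roll 1-100 and walk the cumulative weights to pick a transaction.
    roll = rng.randint(1, 100)
    cumulative = 0
    for name, weight in TRANSACTION_MIX:
        cumulative += weight
        if roll <= cumulative:
            return name
    return TRANSACTION_MIX[-1][0]
```

Over a long run each virtual user's transactions converge on the 45/43/4/4/4 mix that defines the workload.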
Performance Profiles
For an official audited TPC-C benchmark the result of the tests is detailed as tpmC, which represents the number of New Orders processed only. One particular advantage of Hammerora is the ability to generate a performance profile as the load increases on your system, whereas an official TPC-C benchmark gives you a single data point, and a typical single-threaded test (such as timing SQL statements) also gives you a single data point.

[Figure: performance profile graph plotting Transactions against Users for SYSTEM A and SYSTEM B]
This graph shows the relative performance of real tests on different Oracle configurations. If observed as a single data point, system A exhibits considerably higher performance than system B. However, the performance profile clearly shows a crossover point: represented by the steeper curve, system B in fact shows higher performance up to the mid-point of the test, whereas system A outperforms it beyond this mid-point. This difference highlights the differing attributes of performance and scalability: system B shows higher performance but system A shows better scalability. It should be clear that your testing goal should be to measure the performance profile of your system across all levels of utilisation.
with the Oracle database. You also require a load generation server installed with the Hammerora software and an Oracle client. Typically the administrator will monitor and manage the load testing from a separate notebook or PC, and all systems will be connected across a network. Technically it is possible to run Hammerora on the SUT; however, this is not recommended. Firstly, it makes it comparatively harder to distinguish between the load generation workload and the database workload. Secondly, running the Hammerora workload on the SUT will skew the results: by eliminating the network component of the workload, results for a smaller number of virtual users will be comparatively higher, but as the workload increases performance will be comparatively lower. To eliminate this skew a dedicated load generation server should be used.
138MB of memory. Again this represents a highly efficient load testing environment in comparison to commercial database load testing applications. Consequently it is entirely feasible to load test with a 32-bit x86 operating system on the load generation client, with a 64-bit operating system only required when conducting tests in excess of 1000 virtual users. For the load generation operating system, Hammerora is available pre-compiled for 32-bit Windows, 32-bit Linux and 64-bit Linux; however, you may compile the packages used by Hammerora manually for another operating system if you wish. Hammerora is a graphical application and therefore on Linux the operating system installation must include the X Windows packages. Storage requirements on the load generation server are minimal and all modern servers are likely to meet them: Hammerora consumes approximately 15MB of disk space, and you will also need an Oracle client. All Oracle database installations include an Oracle client; you therefore have the option of installing the Oracle Instant Client, Oracle Express Edition or a full install of the Oracle Database software on the load generation server. Of course, if the Instant Client or Express Edition is used the Oracle software will be licence free. As an example build, by installing the Oracle Enterprise Linux operating system, the Oracle Instant Client and Hammerora for Linux 32-bit or 64-bit, you can build a load generation server that is entirely licence free. You should note that a load generation server built with Linux or Windows will be able to connect to and test an Oracle Database running on any operating system you choose; the load generation server does not need to be running the same version of Oracle.
Administrator PC Configuration
The administrator PC has the minimal requirement of being able to display the graphical output from the load generation server; for example, for a Linux load generation server, the ability to display X Windows. The PC should also have an Oracle client installed so that it can connect to the SUT to collect AWR reports and monitor performance.
On Windows 32-bit installations you also have the option of installing Oracle Express Edition as detailed in the Hammerora Quick Start guide. Once you have installed your Oracle client, download Hammerora from Sourceforge here: http://sourceforge.net/projects/hammerora/ The page will show you the right version for your operating system; if you need a different version click view all files. For Linux make sure that you have the correct version for your 32-bit or 64-bit operating system respectively:

If you have a 32-bit operating system you will need the hammerora-2.5-Linux-x86 package.
If you have a 64-bit operating system you will need the hammerora-2.5-Linux-x86-64 package.

Make sure that you use the correct version: the 32-bit install will not function on 64-bit and vice versa. On Linux, Hammerora should be installed as the oracle user, the same user that owns the Oracle software, and not the root user. To start the installer on Linux make the installer file executable
oracle@test:~> chmod u+x hammerora-2.5-Linux-x86

and run the executable

oracle@test:~> ./hammerora-2.5-Linux-x86

On Windows double-click on the installer file. For Windows 7, if you have the correct permissions to do so, you should right-click on the installer and choose the option Run as Administrator.

NOTE: Known Oracle Product Issue Bug #3807408. Before installing Hammerora on Windows you should note that there is a bug in some versions of the Oracle client and database software that causes the Oracle error:

ORA-12154: TNS:could not resolve the connect identifier specified

This bug is triggered whenever any Oracle client program (including Hammerora) is installed in a directory containing parentheses, such as "C:\Program Files (x86)\...". (NOTE: This is an Oracle software bug, not a Hammerora one.) The workaround is as follows: use a version of the Oracle client AND database software that contains the fix for Bug 3807408 (this fix requires that both the client and database software be patched); OR find the location of the application that is generating the error, check whether the path to this location contains any parentheses and, if so, relocate the application to a directory without any parentheses in the path. If you are running Hammerora on Windows and your client or database is affected by Oracle bug 3807408, ensure that Hammerora is installed to a directory that does not contain parentheses. The installer will start, giving you the option of selecting the installation language shown in figure 6.
Figure 6 Select installation language You can then choose whether to continue with the installation as shown in figure 7.
Figure 7 Continue?
Figure 8 Welcome
Choose the destination location as shown in figure 9 and click Next. To change the default location click Browse and select a new location.
Hammerora will be installed in your selected location as shown in figure 11. Note that the installation is entirely self-contained; no software is installed external to the selected directory.
Figure 11 Installing
On the completion screen shown in figure 12 click Finish and optionally launch Hammerora at this time.
If you opt to launch Hammerora, the main application window is displayed as shown in figure 13.
Figure 13 Hammerora
Hammerora is now installed. If you close the application using File -> Exit you can restart it on Linux by running ./hammerora.tcl as the oracle user, or on Windows by double-clicking on hammerora.bat.
<tpcc>
  <schema>
    <count_ware>1</count_ware>
    <num_threads>1</num_threads>
    <tpcc_user>tpcc</tpcc_user>
    <tpcc_pass>tpcc</tpcc_pass>
    <tpcc_def_tab>tpcctab</tpcc_def_tab>
    <tpcc_def_temp>temp</tpcc_def_temp>
    <plsql>0</plsql>
    <directory> </directory>
  </schema>
  <driver>
    <total_iterations>1000</total_iterations>
    <raiseerror>false</raiseerror>
    <keyandthink>false</keyandthink>
    <checkpoint>false</checkpoint>
    <oradriver>standard</oradriver>
    <rampup>2</rampup>
    <duration>5</duration>
  </driver>
</tpcc>
<tpch> <schema>
Network Connectivity
You must be able to connect from your load generation server to your SUT database server across the network using Oracle TNS. This involves successful configuration of the listener on the SUT database server and the tnsnames.ora file on the load generation server. You can troubleshoot connectivity issues using the ping, tnsping and sqlplus commands on the load generation client and the lsnrctl command on the SUT database server. For example, a successful tnsping test looks as follows:

>tnsping DEV

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production
Used parameter files:
D:\oraclexe\app\oracle\product\10.2.0\server\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = SUT)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = DEV)))
OK (0 msec)

Note that where the Instant Client is being used on the load generation server you should set the TNS_ADMIN environment variable to the location where the tnsnames.ora and sqlnet.ora files are installed. When you have installed the load generation server and SUT database and have verified that you can communicate between them using Oracle TNS, you can proceed to building a test schema.
To create the TPC-C schema select the TPC-C schema options menu tab from the top level TPC menu. This menu will change dynamically according to your chosen database.
The schema options window is divided into two sections: the Build Options section details the general login information and where the schema will be built, and the Driver Options section configures the Driver Script to run after the schema is built. Build Options are of importance at this stage; Driver Options will be considered later in this guide. Note, however, that you don't have to rebuild the schema every time you change the Driver Options; once the schema has been built only the Driver Options may need to be modified. For the Build Options, fill in the values according to the database where the schema will be built.
Build Options
The Build Option values have the following meanings.
TPC-C User
The TPC-C user is the name of a user to be created that will own the TPC-C schema. This user can have any name you choose but must not already exist and must adhere to the standard rules for naming Oracle users. If you wish, you may run the schema creation multiple times and have multiple TPC-C schemas, each owned by a different user you create each time.
standard rules for Oracle user passwords. You will need to remember the TPC-C user name and password for running the TPC-C driver script after the schema is built.
Number of Warehouses
The Number of Warehouses is selected by a slider. For fine-tuning you may click either side of the slider to move the value by 1. You should set this value to the number of warehouses you have chosen for your test based on the guidance given previously in the section SUT Database Server Configuration.
On clicking this button a dialogue box such as the one shown in Figure 18 appears.
When you click Yes, Hammerora will log in to your chosen service name with a monitor thread as the system user and create the user with the password you have chosen. It will then log out and log in again as your chosen user, create the tables and load the ITEM table data before waiting and monitoring the other threads. The worker threads will wait for the monitor thread to complete its initial work. Subsequently the worker threads will create and insert the data for their assigned warehouses as shown in figure 19. There are no intermediate data files or manual builds required; Hammerora will both create and load your requested data dynamically. Data is inserted in batches for optimal network performance.
When the workers are complete the monitor thread will create the indexes, stored procedures and gather the statistics. When complete Virtual User 1 will display the message TPCC SCHEMA COMPLETE and all virtual users will show that they completed their action successfully as shown in figure 20.
Press the button to destroy the virtual users as shown in figure 20 and clear the script editor as shown in figure 21.
The schema build is now complete. As an example, a 200 warehouse build such as the following, with nearly 100 million rows, should take approximately half an hour or less to create and load on an up-to-date 2 socket Linux server.
SQL> select table_name, num_rows from user_tables;

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
NEW_ORDER                         1800000
ORDER_LINE                       59994097
ORDERS                            6000000
STOCK                            20000000
WAREHOUSE                             200
ITEM                               100000
HISTORY                           6000000
DISTRICT                             2000
CUSTOMER                          6000000

9 rows selected.

SQL> select sum(num_rows) from user_tables;

SUM(NUM_ROWS)
-------------
     99896297
The TPC-C schema creation script is a standard Hammerora script like any other, so you can save it, modify it and re-run it just like any other Hammerora script. For example, if you wish to create more than the 1-5000 warehouses available in the GUI, note that the last line in the script calls a procedure with all of the options that you gave in the schema options. Change the second value to any number you like to create more warehouses; for example, the following will create 10000 warehouses:

do_tpcc manager oracle 10000 tpcc tpcc tpcctab temp 0 /tmp 8

Similarly, change any other value to modify your script. If you have made a mistake, simply close the application and run the following SQL to drop the user you have created.
SQL>drop user tpcc cascade;
When you have created your schema you can verify the contents with SQL*PLUS or your favourite admin tool as the newly created user.
SQL> select tname, tabtype from tab;

TNAME                          TABTYPE
------------------------------ -------
HISTORY                        TABLE
CUSTOMER                       TABLE
DISTRICT                       TABLE
ITEM                           TABLE
WAREHOUSE                      TABLE
STOCK                          TABLE
NEW_ORDER                      TABLE
ORDERS                         TABLE
ORDER_LINE                     TABLE

9 rows selected.

SQL> select * from warehouse;

      W_ID      W_YTD      W_TAX W_NAME     W_STREET_1
---------- ---------- ---------- ---------- --------------------
W_STREET_2           W_CITY               W_ W_ZIP
-------------------- -------------------- -- ---------
         1  773095764        .11 4R0mUe     rM8f7zFYdx
JyiNY5zg1gQNBDO      v2973cRoiFSJ0z       OF 374311111
SQL> select index_name, index_type from ind;

INDEX_NAME                     INDEX_TYPE
------------------------------ ---------------------------
IORDL                          IOT - TOP
ORDERS_I1                      NORMAL
ORDERS_I2                      NORMAL
INORD                          IOT - TOP
STOCK_I1                       NORMAL
WAREHOUSE_I1                   NORMAL
ITEM_I1                        NORMAL
DISTRICT_I1                    NORMAL
CUSTOMER_I1                    NORMAL
CUSTOMER_I2                    NORMAL

10 rows selected.
SQL> select object_name from user_procedures;

OBJECT_NAME
------------------------------
NEWORD
DELIVERY
PAYMENT
OSTAT
SLEV
You can also browse the stored procedures you have created by looking in the creation script. At this point the data creation is complete and you are ready to start running a performance test. Before doing so it is worth noting that the schema has been designed so that you can run multiple tests and it will return the same results; you therefore do not need to recreate your schema after every run for consistent results. Conversely, if you do wish to recreate your schema, for example because you have exhausted your available tablespace space, the results of tests against different schema sizes remain comparable. You can monitor the amount of space you have used in your schema with a statement such as the following:
SQL> select sum(bytes)/1024/1024 as MB from user_segments;

        MB
----------
   838.125
At this stage your focus is now on the options given under the section Driver Options as shown in Figure 23.
Driver Options
Under the Driver Options section you have the following choices:
Instead of the Standard Driver Script you can select the AWR Snapshot Driver Script. As shown in Figure 25 this produces a number of additional options. You should select the AWR Snapshot Driver Script when you wish to run timed tests and have Hammerora time these tests, measure the results, report an average transaction rate for a period of time and generate AWR information for that test. With the AWR Snapshot Driver Script the first virtual user does the timing and generates the results while the additional virtual users run the workload; therefore you should always select the number of desired virtual users plus one when running the AWR Snapshot Driver Script. For example, if you wish to measure a load generated by two virtual users you should select three virtual users before running the script. Additionally, the AWR Snapshot Driver Script is designed to be run with Virtual User Output enabled; this ensures that the information gathered by the first virtual user on the transaction rates and AWR report numbers is correctly reported. Whilst the AWR Snapshot Driver Script is running, output for the virtual users generating the load is suppressed.
For both the Standard Driver Script and AWR Driver Script the further options selected within the Schema Options window are entered automatically into the EDITABLE OPTIONS section of the driver script as follows:
number of times that a script will be run in its entirety. The total_iterations value is internal to the TPC-C driver script and determines the number of times the internal loop is iterated, i.e.

for {set it 0} {$it < $total_iterations} {incr it} { ... }

In other words, if total_iterations is set to 1000 then the executing user will log on once, execute 1000 transactions and then log off. If, on the other hand, Iterations in the Virtual User Options window is set to 1000 and total_iterations in the script is set to 1, then the executing user will log on, execute one transaction and then log off, 1000 times. For the TPC-C driver script I recommend only modifying the total_iterations value. When running the AWR Snapshot Driver Script, as the test is timed you should ensure that the number of transactions is set to a suitably high value so that the virtual users do not complete their tests before the timed test is complete; otherwise you will be timing idle virtual users and the results will be invalid. Consequently it is acceptable when running timed tests to set the Total Transactions per User to a high value such as 1000000 (now the default value from Hammerora 2.5) or more to ensure that the virtual users continue running for a long period of time. When the test is complete you can stop the test by stopping the virtual users.
Keying and Thinking Time

Keying and Thinking Time is shown as KEYANDTHINK in the Driver Script. A good introduction to the importance of keying and thinking time is to read the TPC-C specification. This parameter has the biggest impact on the type of workload that your test will generate.
TIP: The most common configuration error is to run a test with Keying and Thinking Time set to False with too many virtual users for the schema created. One virtual user without keying and thinking time will generate a workload equivalent to many thousands of users with keying and thinking time enabled. Without keying and thinking time you are likely to see peak performance at or around the number of cores/Hyper Threads on your Database Server.
Keying and thinking time is an integral part of an official TPC-C test, simulating the effect of the workload being run by a real user who takes time to key in an actual order and think about the output. If KEYANDTHINK is set to TRUE each user will simulate this real user type workload. An official TPC-C benchmark implements 10 users per warehouse, all simulating this real user experience; it should therefore be clear that the main impact of KEYANDTHINK being set to TRUE is that you will need a significant number of warehouses and users in order to generate a meaningful workload, and hence an extensive testing infrastructure. The positive side is that when testing hundreds or thousands of virtual users you will be testing a workload scenario that is closer to a real production environment. Whereas with KEYANDTHINK set to TRUE each user will execute maybe 2 or 3 transactions a minute, you should not underestimate the radical difference that setting KEYANDTHINK to FALSE will have on your workload.
Instead of 2 or 3 transactions, each user will now execute tens of thousands of transactions a minute. Clearly KEYANDTHINK has a big impact on the number of virtual users and warehouses you need to configure to run an accurate workload. If this parameter is set to TRUE you will need at least hundreds of virtual users and warehouses; if FALSE then you should begin testing with 1 or 2 virtual users, building up from there to a maximum workload, with the number of warehouses set to a level where the users are not contending for the same data. A common error is to set KEYANDTHINK to FALSE and then create hundreds of users for an initial test; this form of testing will only exhibit massive contention for data between users and reveal nothing about the potential of the system. If you do not have an extensive testing infrastructure and a large number of warehouses configured then I recommend setting KEYANDTHINK to FALSE (whilst remembering that you are not simulating a real TPC-C type test) and beginning your testing with 1 virtual user, building up the number of virtual users for each subsequent test in order to plot a transaction profile.
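A rough back-of-the-envelope sketch (hypothetical code using the guide's approximate figures, not a formal equivalence) shows why one non-keying virtual user corresponds to thousands of keying-and-thinking users:

```python
def equivalent_keying_users(tpm_without_think, tpm_with_think=2.5):
    # Using the guide's figure of roughly 2-3 transactions per minute
    # for a user with keying and thinking time enabled, estimate how
    # many such users one non-thinking virtual user represents.
    # Both inputs are illustrative; measure your own rates.
    return tpm_without_think / tpm_with_think

# e.g. a single virtual user driving 25,000 transactions per minute:
print(round(equivalent_keying_users(25000)))  # prints 10000
```

This is why a handful of virtual users without keying and thinking time can saturate a system that would otherwise require thousands of simulated real users.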
TIP: If you do not wish to checkpoint during a test (for example if you are constrained on I/O) it is essential that you size your redo logs large enough to complete a full test without generating more redo than the size of a single redo log file. You can determine the redo rate per second from the Redo Size section of the Load Profile in your AWR report and use this value to calculate the redo generated for the duration of the test. For example, if you determine that you generate 5 gigabytes of redo per minute, a 5 minute test will generate 25 gigabytes of redo, so each redo log file needs to be greater than 25 gigabytes to prevent checkpointing during the test as the result of a log file switch.
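The sizing arithmetic can be captured in a small hypothetical helper (illustrative only; take the redo rate from your own AWR Load Profile):

```python
def min_redo_log_size_gb(redo_gb_per_min, test_minutes, headroom=1.0):
    # Each online redo log must be large enough to hold all redo generated
    # during the test, so that no log switch (and therefore no checkpoint)
    # occurs mid-test. headroom > 1.0 adds a safety margin.
    return redo_gb_per_min * test_minutes * headroom

# The guide's example: 5 GB of redo per minute over a 5 minute test.
print(min_redo_log_size_gb(5, 5))  # prints 25.0
```

In practice you would size the log files somewhat above this minimum (headroom greater than 1.0) to allow for variation in the redo rate.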
Choosing to checkpoint when complete causes the intensive activity related to a checkpoint to occur outside the time in which a test is conducted, and performs a log file switch so that redo is written from the start of the next logfile in the group for the next test. When running in Autopilot mode this ensures that you have control over when checkpoints occur even though you are running tests unattended.
one to signal the test is complete and the active virtual users to complete their workload.
Mode Options
The mode value is taken from the operational mode setting set under the Mode Options menu tab under the Mode menu. If set to Local or Master then the monitor thread takes snapshots, if set to Slave no snapshots are taken. This is useful if multiple instances of Hammerora are running in Master and Slave mode to ensure that only one instance takes the snapshots.
When you have completed defining the Schema Options click OK to save your values. As noted previously under the section Load Generation Server Configuration you can also enter these values into the config.xml file to save a permanent record of your values for pre-populating the values after restarting Hammerora.
This will populate the Script Editor window with the driver script shown in Figure 24 or 25 according to whether the standard or AWR driver script is chosen. These scripts provide the interaction from the Load Generation Server to the schema on the SUT Database Server. If you have correctly configured the parameters in the Driver Options section you do not have to edit the script. If you so choose however you may also manually edit the values given in the EDITABLE OPTIONS section. Additionally the driver scripts are regular Hammerora scripts and a copy may be saved externally and modified as you desire for a genuinely Open Source approach to load testing.
In this example we will create two virtual users and choose to display their output to verify the schema and database configuration. To do this, under the Virtual Users menu as shown in Figure 28 select Vuser Options and enter the number 2. Also check the Show Output button to see what your users are doing whilst the test is running. Note that displaying the output will reduce the overall level of performance (although Hammerora is multi-threaded many window display systems are not, and a display can only be updated by a single thread, thereby limiting performance) and click OK. Showing output is acceptable here as we are running a pre-test and not a performance test.
There are three other related options under the Virtual User Options dialogue, namely User Delay(ms), Repeat Delay(ms) and Iterations. Iterations defines the number of times that Hammerora should execute a script in its entirety. With regards to running the TPC-C driver script this can be thought of as the number of times a Virtual User logs on to the database, runs the number of transactions you defined in Total Transactions per User and logs off again. For example if Total Transactions per User was set to 1000 and the Virtual User Iterations was set to 10, the Virtual User would complete 10,000 transactions in total, logging off and on between each run. Setting Total Transactions per User to 10,000 and Virtual User Iterations to 1 would also complete 10,000 transactions per virtual user but all in one session. User Delay(ms) defines the time to wait between each Virtual User starting its test and Repeat Delay(ms) is the time that each Virtual User will wait before running its next Iteration. For the TPC-C driver script the recommended approach is to leave the Iterations and User and Repeat Delays at the default settings and only modify the Total Transactions per User or total_iterations value inside the Driver Script. When you have completed the selection press OK. Click the Create Virtual Users button as shown in Figure 29 to create the virtual users; they will be created but not start running yet.
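The relationship between the two settings is simple multiplication, sketched here with the figures from the example above:

```python
def transactions_per_vuser(total_transactions_per_user: int, iterations: int) -> int:
    """Each Virtual User logs on, runs total_transactions_per_user
    transactions, logs off, and repeats for the configured Iterations."""
    return total_transactions_per_user * iterations

# Both configurations complete 10,000 transactions per Virtual User
print(transactions_per_vuser(1000, 10), transactions_per_vuser(10000, 1))
```

The difference between the two configurations is only the number of logon/logoff cycles, not the transaction total.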
You can observe as shown in Figure 30 that the virtual users have been created but are showing a status of idle. You can destroy the Virtual Users by pressing the Red Traffic light icon that has appeared in place of the Create Virtual Users button.
To begin the test press the button Run Hammerora Load Test as shown in Figure 31, the name of the button will appear in the information pane.
You can observe the Virtual User icon change to signify activity. The Virtual Users have logged on to the database and you will be able to see their presence in V$SESSION, for example:
SQL> select username, program from v$session where username = 'TPCC';

USERNAME                       PROGRAM
------------------------------ ------------------------------------------------
TPCC                           wish8.5@dragonfly (TNS V1-V3)
TPCC                           wish8.5@dragonfly (TNS V1-V3)
The Virtual Users are running transactions, as can be observed in the Virtual User Output shown in Figure 32.
When the Virtual Users have completed all of their designated transactions they will exit showing a positive status as shown in Figure 33. Once the Virtual User is displaying this positive status it has logged off the database and will not be seen in V$SESSION. The Virtual User is once again idle and not running transactions. The Virtual User does not need to be destroyed and recreated to re-run the test from this status. The Virtual Users can be destroyed to stop a running test.
If there is an error when running the Driver Script it will be reported in the Virtual User icon with the detail of the error shown in the Console window. Figure 34 shows an example of an error, in this case it is an Oracle error illustrating an unknown identifier in the connect string. The Virtual User is once again idle and not running transactions. The Virtual User does not need to be destroyed and recreated to re-run the test from this status.
At this stage in pre-testing the test configuration has been verified and it has been demonstrated that the load generation server can log on to the SUT Database Server and run a test.
Note that a single Virtual User without output is the default configuration if you have not modified the config.xml file and therefore creating the Virtual Users as shown in Figure 29 will give you this single Virtual User configuration without specifically configuring the Virtual Users as shown in Figure 35. Figure 36 shows the single Virtual User created and the Standard Driver script loaded.
Press the Run Hammerora Load Test button as shown previously in Figure 31 to begin generating the Single User Throughput test. As shown in figure 37 the Virtual User icon has been updated to signify that the workload is running.
To observe performance during the test you can use the Transaction Counter. The Transaction Counter options can be selected from the TX Counter Menu as shown in Figure 38.
Connect String
The Connect String must be a standard format Oracle connect string for a user with permissions to read the V$SYSSTAT view; you can validate this by logging on as this user using SQL*Plus. A typical choice is the SYSTEM user.
Refresh Rate
The refresh rate defines the interval in seconds at which the transaction counter refreshes its values. Setting this value too low may impact the accuracy of the data reported by the Oracle database; the default value of 10 seconds is a good choice for an accurate representation.
The Transaction Counter will become active and start collecting throughput data as shown in Figure 41.
After the first refresh time interval you will be able to observe the transaction counter updating according to the throughput of your system. The actual throughput you observe for a single Virtual User will vary according to the capabilities of your system, however typically you should be looking for values in the low tens of thousands. Additionally once the transaction rate reaches a steady state you should observe the transaction counter maintaining a reasonably flat profile. Low transaction rates or excessive peaks and troughs in the transaction counter should be investigated for system bottlenecks on throughput.
Once you are satisfied with the single Virtual User throughput, stop the test and close both the Transaction Counter and the Virtual Users by pressing both Red Traffic Light icons. You should then proceed to pre-testing the throughput of multiple Virtual Users. To do so repeat the testing you have done for a single Virtual User, however instead increase the value for the number of Virtual Users to run the test in the Virtual User Options as shown in Figure 43.
Similarly monitor the throughput for a higher number of Virtual Users as shown in Figure 44.
Recommended numbers of Virtual Users for multiple-user throughput testing follow an exponential scale from the single Virtual User test, i.e. 2, 4, 8, 16, 32 Virtual Users should be tested up to double the number of cores or HyperThreads on the SUT Database Server. You should also not test only on an increasing scale. It is useful for example to run a one or two Virtual User test after running a test with multiple Virtual Users to observe the importance of the caching of the data in the database Buffer Cache. This will be particularly noticeable with larger schemas. Figure 45 shows a typical performance profile of data being cached with a rising transaction rate through to a steady state.
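The doubling sequence described above can be sketched as follows; the thread count is an assumed input and the stopping rule of twice the core/HyperThread count follows the recommendation in the text:

```python
def vuser_test_sequence(cpu_threads: int) -> list:
    """Doubling Virtual User counts from 1 up to twice the number of
    cores or HyperThreads on the SUT Database Server."""
    counts, n = [], 1
    while n <= 2 * cpu_threads:
        counts.append(n)
        n *= 2
    return counts

# For a 16 core/thread SUT Database Server
print(vuser_test_sequence(16))  # [1, 2, 4, 8, 16, 32]
```

Interleaving a low-count re-test after a high-count run, as suggested above, is then a matter of re-running an earlier entry in the sequence.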
Pre-testing is your opportunity to modify your configuration, tune your database and operating system and maximise the database throughput. Your aim should be to ensure a consistent performance profile over a period of time such as that shown in Figure 46.
If the observed transaction rate has numerous peaks and troughs or the consistent throughput is lower than expected you should examine the system configuration to diagnose the reasons why performance is limited. To do this you can use Oracle provided or third party tools or AWR reports as described in the following section on running measured tests. You should also not neglect the relevant log files such as the Linux operating system logs and the Oracle Database alert log. Once you have completed your pre-testing and are satisfied with your configuration you should move to planning and preparing to run a series of measured tests. You do not have to restart the database or rebuild the schema to conduct your performance tests. In fact having run a series of pre-tests so that data is resident in the buffer cache is the ideal starting point for conducting measured tests.
supported virtual users. The tests will vary according to the aim; for example it is relatively meaningless to use a test without keying and thinking to determine the maximum number of supported virtual users (because each virtual user can use the maximum performance of one core or thread), and similarly enabling keying and thinking time is not applicable to determining a performance profile. Alternative testing aims can be to compare multiple configurations on the same platform, for example looking at the impact on throughput of Virtualization, RAC or changing OS and Oracle parameters; the scope in this area for testing is limitless. In this guide we will focus upon one of the most common testing scenarios: generating a performance profile for a server. This aim is used to identify, for a given configuration of CPU, memory and I/O on a defined OS and Oracle configuration, the maximum number of transactions that the system can support. This is tested for a given number of virtual users, starting with one virtual user and scaling up to the maximum number that the system can support. This approach ensures that the full capabilities of a multithreaded server are tested. With this approach we will define our Virtual Users without keying and thinking time. The number of cores/threads in this example on the SUT Database Server is 16, therefore we will prepare a simple tracking spreadsheet to record the results of our tests as shown in Figure 47.
With the configuration documented, the aim defined and a method to track the results of the tests prepared for our performance profile test project it is now possible to proceed to running timed tests with the AWR Snapshot Driver Script.
You do not need to recreate the schema to modify the driver options or to change from using the Standard Driver Script to the AWR Snapshot Driver Script or vice versa. Within the Driver Options shown in Figure 48, select the AWR Snapshot Driver Script radio button.
Once the AWR Snapshot Driver Script is selected this activates the options to choose to Checkpoint when complete and to select the Minutes of Rampup Time and Minutes for Test Duration as described previously in this guide. For a performance profile test you should plan to keep the Minutes of Rampup Time and the Minutes for Test Duration consistent for a number of tests with an increasing number of Virtual Users. For this reason you should plan to allocate sufficient rampup time for the higher number of Virtual Users at the end of your test sequence as well as the smaller number at the start. When you have selected your options click OK. From under the Benchmark and TPC-C Menu select TPC-C Driver Script, this populates the Script Editor Window as shown in Figure 49 with the AWR Snapshot Driver Script configured with your chosen options.
To change these options you can either change them in the Schema Options window and reload the driver script or more advanced users can also change them directly in the Driver Script itself.
TIP: The AWR is a feature of the Oracle Enterprise Edition Database only. If you are using Oracle Standard Edition the AWR functionality is not enabled; the AWR Snapshot Driver Script will still run, take snapshots and report the snapshot numbers, however the number of transactions per minute reported at the end of the test will always be zero and any AWR reports that you generate will be mostly blank. However within Standard Edition Statspack is a functional equivalent to the AWR and you can make one minor change to the AWR Snapshot Driver Script to make it compatible with Statspack. Firstly you need to create the PERFSTAT schema which you can do by running the following as SYSDBA. Typically you will create the schema within the SYSAUX tablespace. (Note on Linux/UNIX you will use forward slash / and on Windows backslash \ as a separator.)
SQL> @?\rdbms\admin\spcreate.sql
Once the PERFSTAT user has been created you can modify the AWR Snapshot Driver Script to use Statspack instead as follows: at Row 62 (use the Row/Col guide in the top right hand corner of Hammerora to navigate) find the following row that takes the AWR snapshot.
set sql1 "BEGIN dbms_workload_repository.create_snapshot(); END;"
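To switch this snapshot call to Statspack, replace the call with the Statspack snapshot procedure. A minimal equivalent, assuming the default PERFSTAT installation created by spcreate.sql, is:

```
set sql1 "BEGIN statspack.snap; END;"
```

The rest of the driver script is unchanged; only the PL/SQL block that takes the snapshot differs.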
Note that if the Driver Script is reloaded the change you have made will be lost so it is recommended to save a copy of the Driver Script using the File -> Save Option to reload if required in your Statspack environment. When the script with this change is run Hammerora will now take Statspack Snapshots instead of AWR. You can view these snapshots with the following command.
sql> @?\rdbms\admin\spreport.sql
You may continue to follow this guide substituting Statspack for AWR where necessary. However note that the AWR Snapshot Driver Script will continue to report the names of the AWR Snapshots it has taken and the transaction values from the AWR (i.e. zero) as opposed to Statspack. You must therefore manually note the numbers of the Statspack Snapshots you have taken and also manually use the Statspack report to calculate your transaction rate from the Transactions: value in the Load Profile section.
To run the AWR Snapshot Driver Script you must configure the Virtual Users as you did with the Standard Driver Script, however there are two notable differences to observe. Firstly, when running the AWR Snapshot Driver Script one Virtual User will not run the Driver Script workload; instead this one Virtual User will monitor the timing of the test, take the AWR snapshots and return the results. For this reason you should configure your Virtual Users with a Virtual Users + 1 approach, i.e. to measure the workload for 1 Virtual User you should configure 2 Virtual Users, to measure the workload for 2 Virtual Users you should configure 3, and so on. Additionally the AWR Snapshot Driver Script is designed to be run with the Virtual User output enabled in order that you can view the output from the Virtual User doing the monitoring; consequently the output for the Virtual Users running the workload is suppressed. The Virtual User configuration for the first test will look as shown in Figure 50.
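The n+1 rule above is trivial but easy to forget when filling in the Virtual User Options; as a reminder:

```python
def vusers_to_configure(workload_vusers: int) -> int:
    """AWR Snapshot Driver Script: n workload Virtual Users plus one
    monitor Virtual User that times the test and takes the snapshots."""
    return workload_vusers + 1

# Measuring 1, 2 and 4 workload users requires 2, 3 and 5 Virtual Users
print([vusers_to_configure(n) for n in (1, 2, 4)])  # [2, 3, 5]
```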
Click OK to save the configuration. Click the Create Virtual Users button as shown previously in this guide in Figure 29 to create the virtual users as shown in Figure 30 and Start the Virtual Users running as shown in
Figure 32. Note that the Virtual User output is now different as shown in Figure 51.
The output shows that rather than reporting the outcome of every transaction the worker Virtual User, in this example Vuser-2-tid000007AC, reports that it is processing transactions, however the output is suppressed. The Virtual User will print its message AFTER it has logged on and immediately BEFORE it runs its first transaction. If this message has not been printed the session is still in the process of logging into the database. You can check how this is proceeding on a Linux database server with a command such as "ps -ef | grep -i local | wc -l" to display the number of connections created. Increasing the User Delay(ms) value in the Virtual User Options can on some systems prevent a "login storm" and have all users logged on and processing transactions more quickly. Your rampup time should allow enough time for all of the users to be fully connected. You will also be able to observe that in this example this single Virtual User has logged on to the database and is running the workload. You can also observe that the monitor Virtual User, in this example Vuser-1-tid00000528, is not running a workload but instead has logged on to measure the rampup time followed by taking the first AWR snapshot, measuring the timed test, taking the second AWR snapshot and reporting the outcome before logging off and ending the monitor script. It is worthwhile reiterating therefore that for the AWR Snapshot Driver Script you need to configure and run n+1 Virtual Users with the additional Virtual User doing the monitoring and measuring. The sample output of this monitoring Virtual User is shown in Figure 52.
The monitoring user reports the TEST RESULT of TPM and NOPM. TPM measures the number of Oracle Transactions per minute and is not to be confused with the tpmC value from an official TPC-C benchmark. NOPM reports the number of New Orders per minute and is used as a database independent statistic. Consequently for example TPM cannot be used to compare the performance results of Oracle with MySQL but NOPM can. In addition to the test report the monitoring user also reports the SNAPIDs that can be used to generate an Oracle AWR performance report for the workload. If you have not chosen the Checkpoint when Complete option you should manually press the red traffic light icon to stop the Virtual User workload. You may if you wish also run the Transaction Counter during an AWR Snapshot Driver Script test. When you have stopped the test enter your data into your reporting spreadsheet as shown in Figure 53.
Note that the Run column enables you to record multiple tests for the same number of Virtual Users. You should run two or three tests at the same number of Virtual Users and take the average value of all of the tests as your final value for the workload. With the test complete and the values recorded you should next generate the AWR report that corresponds to the reported SNAPIDs, in this example 436 and 437. Run the awrrpt script interactively as follows, choosing either text or HTML as the format according to your preference:

SQL> @?\rdbms\admin\awrrpt

Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 2605379516 DEV                 1 dev

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: text

Type Specified: text
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 2605379516        1 DEV          dev          SUT

Using 2605379516 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days: 1

Listing the last day's Completed Snapshots
                                                        Snap
Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
dev          DEV                426 05 Oct 2010 09:06      1
                                427 05 Oct 2010 10:00      1
                                428 05 Oct 2010 11:15      1
                                429 05 Oct 2010 12:00      1
                                430 05 Oct 2010 13:00      1
                                431 05 Oct 2010 14:00      1
                                432 05 Oct 2010 15:01      1
                                433 05 Oct 2010 16:00      1
                                434 05 Oct 2010 17:00      1
                                435 05 Oct 2010 20:00      1
                                436 05 Oct 2010 20:02      1
                                437 05 Oct 2010 20:07      1
                                438 05 Oct 2010 21:01      1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 436
Begin Snapshot Id specified: 436

Enter value for end_snap: 437
End   Snapshot Id specified: 437
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_436_437.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: (press Return)

End of Report
Report written to awrrpt_1_436_437.txt
SQL>

You can now examine the report that corresponds to your workload. The first important section is the Load Profile. Within the Load Profile you can find the statistic that corresponds to the transaction rate that Hammerora reports.
Load Profile
~~~~~~~~~~~~
                                  Per Second    Per Transaction
                             ---------------    ---------------
            Redo size:          2,980,319.12           5,041.40
        Logical reads:             57,516.93              97.29
        Block changes:             18,893.58              31.96
       Physical reads:                 44.86               0.08
      Physical writes:                631.47               1.07
           User calls:                907.35               1.53
               Parses:                492.73               0.83
          Hard parses:                  0.30               0.00
                Sorts:                 21.24               0.04
               Logons:                  0.00               0.00
             Executes:             12,251.42              20.72
         Transactions:                591.17
In this example the number of transactions per second is 591.17. Multiplied by 60 (to convert transactions per second to minutes) this returns 35470.2, the value that Hammerora reported. The next section that you should examine is the Top 5 Timed Events. For tests with a low number of Virtual Users you should always look for CPU time or DB CPU as the top timed event with a high percentage value. Your next top timed event will usually be log file sync for the redo log writes. (For a higher number of Virtual Users these two events may be reversed; see the analysis section for a description of why this may be the case.) The Top 5 Timed Events also give you the opportunity to diagnose any performance issues; in this example the log file switch completion event illustrates an issue that should be investigated with the configuration of the redo log files, resulting in a number of waits of nearly half a second each.
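The conversion from the AWR per-second figure to the per-minute value Hammerora reports is simply:

```python
# Figure copied from the Load Profile above
transactions_per_second = 591.17
tpm = transactions_per_second * 60   # convert per-second rate to per-minute
print(round(tpm, 1))  # 35470.2
```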
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                        Avg
                                                       wait    %Total
Event                           Waits  Time (s)        (ms)  Call Time Wait Class
---------------------------- -------- --------- ----------- --------- ----------
CPU time                                    218                  83.3
log file sync                 128,155        36           0      13.8 Commit
log file parallel write       171,695        25           0       9.7 System I/O
log file switch completion         26        12         465       4.6 Configurat
db file sequential read        13,677         6           0       2.2 User I/O
-------------------------------------------------------------
Scrolling down to the Time Model Statistics section you should be able to observe that over 90% of the database time is spent processing SQL and PL/SQL, exactly what is desired for a transactional performance test.
Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
DB CPU                                                  218.3         83.3
sql execute elapsed time                                217.7         83.1
PL/SQL execution elapsed time                            26.4         10.1
Your examination of the AWR report should be sufficient to ensure that your test is valid and you are mostly
using the system CPU to process your workload. A more detailed review of the AWR report will be conducted in the analysis section. Once you are satisfied with the test results, repeat the test with the next value in the number of Virtual Users in your sequence remembering to add one for the monitor thread. Once this test is complete either repeat the process with the next value in the sequence or automate your testing with autopilot mode as detailed in the following section. With either method do this until you have completed your spreadsheet with all of the desired values for database performance.
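The averaging of repeated runs recommended above can be sketched as follows; the TPM figures are made-up examples, not measurements:

```python
def average_tpm(runs: list) -> float:
    """Final workload value: the mean of the two or three runs taken
    at the same Virtual User count."""
    return sum(runs) / len(runs)

# Three hypothetical runs at the same Virtual User count
print(average_tpm([35470, 35112, 35891]))  # 35491.0
```

The averaged figure is what goes into the tracking spreadsheet for that Virtual User count.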
shown in Figure 55. Configure the Autopilot options precisely in the same manner as you would use to instruct your Virtual DBA as follows:
the sequence is run. If however the test overruns the time interval and the Virtual Users are still running the sequence will wait for the Virtual Users to complete before proceeding.
TIP: When running the AWR Snapshot Driver Script in Autopilot Mode it is recommended to enable Checkpoint when Complete in the AWR Snapshot options as shown in Figure 48. This ensures that your tests are consistently checkpointed without you being in attendance. However take note that for high performance systems with sufficiently large redo logs the checkpoint may take as long or longer than the test itself and should be accounted for in the Minutes for Test duration Value defined in the Autopilot Options.
Once your Autopilot Options are defined, press OK to save the values. Close down all running virtual Users and the transaction counter and press the Autopilot button as shown in Figure 56.
You can now leave the autopilot mode to run your chosen sequence of tests without any further intervention. The Autopilot screen as shown in Figure 57 becomes active and reports your progress. In particular note the timer in the top right hand corner tracking the interval times at which your tests should be run.
The Autopilot will continue to run through your chosen sequence, creating virtual users and running the test in the test script as shown in Figure 58.
In this example a Checkpoint is performed after every test to ensure a consistent sequence of tests is performed. When your test sequence has completed as shown in Figure 59 you may retrieve your results.
If you have not chosen to use the Hammerora log file then it is necessary to use the AWR snapshots to report the performance recorded during your tests. If you have chosen to use the Hammerora log file you will have a record of all of the data for your sequence of tests. For example the following listing shows that for the first test Virtual User tid0x42691940 was monitoring the workload and one Virtual User tid0x46c98940 was running the workload. The AWR report for this test was recorded between snapshots 5 and 6 and the test recorded 37930 Oracle TPM.

Hammerora Log @ Wed Jun 09 15:10:55 BST 2010
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
tid0x42691940:Beginning rampup time of 2 minutes
tid0x46c98940:Processing 1000000 transactions with output suppressed...
tid0x42691940:Rampup 1 minutes complete ...
tid0x42691940:Rampup 2 minutes complete ...
tid0x42691940:Rampup complete, Taking start AWR snapshot.
tid0x42691940:Start Snapshot 5 taken at 09 JUN 2010 16:07 of instance DEV (1) of database DEV (1019259476)
tid0x42691940:Timing test period of 5 in minutes
tid0x42691940:1 ...,
tid0x42691940:2 ...,
tid0x42691940:3 ...,
tid0x42691940:4 ...,
tid0x42691940:5 ...,
tid0x42691940:Test complete, Taking end AWR snapshot.
tid0x42691940:End Snapshot 6 taken at 09 JUN 2010 16:12 of instance dev (1) of database DEV (1019259476)
tid0x42691940:Test complete: view report from SNAPID 5 to 6
tid0x42691940:TEST RESULT : System achieved 37930 Oracle TPM (Transactions per Minute)
tid0x42691940:at 12600 NOPM (New Orders per Minute)
tid0x42691940:Doing Checkpoint
tid0x42691940:Checkpoint complete

Hammerora Log @ Wed Jun 09 15:20:56 BST 2010
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
tid0x42691940:Beginning rampup time of 2 minutes
tid0x46c98940:Processing 1000000 transactions with output suppressed...
tid0x45896940:Processing 1000000 transactions with output suppressed...
tid0x42691940:Rampup 1 minutes complete ...
tid0x42691940:Rampup 2 minutes complete ...
tid0x42691940:Rampup complete, Taking start AWR snapshot.
tid0x42691940:Start Snapshot 7 taken at 09 JUN 2010 16:17 of instance DEV (1) of database DEV (1019259476)
tid0x42691940:Timing test period of 5 in minutes
tid0x42691940:1 ...,
tid0x42691940:2 ...,
tid0x42691940:3 ...,
tid0x42691940:4 ...,
tid0x42691940:5 ...,
tid0x42691940:Test complete, Taking end AWR snapshot.
tid0x42691940:End Snapshot 8 taken at 09 JUN 2010 16:22 of instance DEV (1) of database DEV (1019259476)
tid0x42691940:Test complete: view report from SNAPID 7 to 8
tid0x42691940:TEST RESULT : System achieved 76455 Oracle TPM (Transactions per Minute)
tid0x42691940:at 25304 NOPM (New Orders per Minute)
tid0x42691940:Doing Checkpoint
tid0x42691940:Checkpoint complete

When you have finished your test sequence press the traffic light icon to end Autopilot Mode. This data is available for all of the tests you performed, allowing you to collect all of your results and AWR reports at a single point in time after all tests are complete.
Performance Comparisons
Use your spreadsheet to generate a graph of a performance profile with the TPM value for the y axis and the number of Virtual Users for the x axis. The performance profile should resemble the figure as shown in Figure 60 with an increasing level of transactions as the number of Virtual Users increases up to a maximum point of system utilisation. (Note that the actual values of virtual users and transactions per minute have been removed from this example as it is the relative performance that is important as opposed to the absolute values.)
TIP: The data produced for an official TPC-C benchmark is the logical equivalent of discarding all of the data points and keeping the top value only. Here you have produced significantly more data regarding the performance of the system across all levels of utilisation.
When you have data for multiple systems you can now add the performance data to the same spreadsheet and use the resultant graph to show a comparison as shown in Figure 61.
Figure 61 illustrates the point that it is often not a question of simply "which system is faster?". In this example higher performance is represented by a steeper curve showing a higher number of transactions for a given number of Virtual Users, and therefore in this example System B is faster for a lower number of Virtual Users. However there is clearly a crossover point at a given number of Virtual Users beyond which System A outperforms System B. As such System A exhibits greater scalability. Consequently choosing the correct system for your needs depends upon a number of factors: if you have a high workload and a large number of users then System A would be better, but for fewer users looking for increased throughput System B would be preferred. Note that an official benchmark would not give you such a comparison from a single data point, highlighting the importance of generating your own Oracle performance comparisons.
AWR Analysis
TIP: You can export and load your AWR snapshot data between databases using the $ORACLE_HOME/rdbms/admin/awrextr.sql and awrload.sql scripts.
To investigate questions about relative system performance you can compare the AWR reports produced by each system for the same number of Virtual Users. As noted previously for the top 5 timed events, you should expect to see a high value for DB CPU to show a good level of system utilisation.
Event                              Waits   Time(s) Avg wait (ms)  % DB time Wait Class
----------------------------- ---------- --------- ------------- ---------- -----------
DB CPU                                      14,000                    73.02
latch: cache buffers chains      667,452     2,815             4      14.68 Concurrency
log file sync                  4,731,309     1,662             0       8.67 Commit
enq: TX - row lock contention     54,287       119             2       0.62 Application
cursor: mutex S                  644,823        99             0       0.51 Concurrency
Note that in some circumstances you may see a high level of system utilization with a top wait event of log file sync. This does not necessarily mean that your redo log disk performance is slow. The LGWR thread or process, like any other, needs CPU time to write to the redo logs, and therefore at high levels of system utilization this process may receive less CPU time and consequently be shown as taking longer in the top 5 timed events. As a result it is possible that resolving high values for log file sync may not necessarily be achieved by improving disk I/O performance as indicated in the Oracle manuals, but instead by improving CPU performance. If however CPU utilisation is low with a smaller number of Virtual Users and log file sync times are long, then this does indicate an issue with disk I/O performance. As a rule of thumb, log file sync wait times can be read as follows: under 1ms good; 1ms to 2ms average; 2ms to 5ms satisfactory; 5ms to 10ms unsatisfactory; over 10ms unacceptable. You can confirm the levels of CPU utilisation in the Time Model section of the report. As shown in the following example over 87% of the database time was spent processing SQL and over 9% on PL/SQL.
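The rule-of-thumb bands above can be sketched as a simple classifier, with the thresholds exactly as given in the text:

```python
def rate_log_file_sync(avg_wait_ms: float) -> str:
    """Classify an average log file sync wait per the bands above."""
    if avg_wait_ms < 1:
        return "good"
    if avg_wait_ms < 2:
        return "average"
    if avg_wait_ms < 5:
        return "satisfactory"
    if avg_wait_ms < 10:
        return "unsatisfactory"
    return "unacceptable"

# e.g. 1,662 s over 4,731,309 waits is roughly 0.35 ms per wait
print(rate_log_file_sync(0.35))  # good
```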
Statistic Name                   Time (s)    % of DB Time
sql execute elapsed time                         87.85
DB CPU                          13,999.71        73.02
PL/SQL execution elapsed time    1,838.49         9.59
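A useful sanity check on the Time Model figures is that any component's time divided by its percentage of DB Time should yield the same total DB Time. A quick sketch using the values from the extract above:

```python
# Cross-checking the Time Model figures: each component's time divided
# by its '% of DB Time' should give (approximately) the same total DB Time.
def total_db_time(component_seconds, pct_of_db_time):
    return component_seconds / (pct_of_db_time / 100.0)

from_db_cpu = total_db_time(13_999.71, 73.02)   # approx 19,172 s
from_plsql  = total_db_time(1_838.49, 9.59)     # approx 19,171 s
print(round(from_db_cpu), round(from_plsql))
```

If the two derived totals diverge significantly it usually indicates a transcription error when copying figures between reports.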
Also the SQL Ordered By sections give a finer granularity of detail on the individual statements. For example, response times can be derived by dividing the Elapsed Time by the number of Executions, in this case 0.005 seconds for the neword procedure. Additionally, comparing the CPU Time and Elapsed Time columns indicates time that the database spent elsewhere while processing the procedures, such as waiting for disk.
CPU Time (s)  Executions  %Total  Elapsed Time (s)   %CPU   %IO   SQL Id         SQL Module  SQL Text
                           59.48                            0.00  16dhat4ta7xs9              :p_d_id...
    2,132.19   2,314,058   15.23      2,382.07       89.51  1.07
    1,898.75     231,669   13.56      2,173.63       87.35  0.01
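The per-execution response time calculation described above is straightforward. The sketch below applies it to two of the Elapsed Time and Executions pairs recoverable from the report extract:

```python
# Deriving per-execution response time from the 'SQL Ordered By' figures:
# elapsed time divided by the number of executions.
def response_time(elapsed_s, executions):
    return elapsed_s / executions

# Elapsed Time / Executions pairs taken from the report extract above.
print(f"{response_time(2_382.07, 2_314_058):.6f}")  # approx 0.001029 s
print(f"{response_time(2_173.63, 231_669):.6f}")    # approx 0.009383 s
```

Comparing these derived response times across systems at the same Virtual User count is often more informative than comparing raw elapsed totals, since execution counts differ between runs.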
If the gap in timings is wide or you see non-redo related values in the top 5 timed events, check the Buffer Pool Advisory section. The Size Factor value of 1.00 shows the current buffer cache size, in this example 100GB. You should then look at the Estimated Phys Reads column for a prediction of the disk activity required at the various buffer cache sizes both below and above the factor of 1.00. In this example the buffer cache is well sized and increasing it will not significantly reduce disk activity.
P  Size for Est (M)  Size Factor  Buffers (thousands)  Est Phys Read Factor  Estimated Phys Reads (thousands)  Est Phys Read Time  Est %DBtime for Rds
D        10,240          0.10            1,263                 2.07                      9,910                        1                    4.00
D        20,480          0.20            2,525                 1.61                      7,716                        1                    3.00
D        30,720          0.30            3,788                 1.42                      6,795                        1                    3.00
D        40,960          0.40            5,050                 1.22                      5,854                        1                    2.00
D        51,200          0.50            6,313                 1.14                      5,451                        1                    2.00
D        61,440          0.60            7,575                 1.03                      4,946                        1                    2.00
D        71,680          0.70            8,838                 1.02                      4,893                        1                    2.00
D        81,920          0.80           10,101                 1.01                      4,850                        1                    2.00
D        92,160          0.90           11,363                 1.01                      4,816                        1                    2.00
D       102,400          1.00           12,626                 1.00                      4,783                        1                    2.00
D       112,640          1.10           13,888                 1.00                      4,765                        1                    2.00
D       122,880          1.20           15,151                 0.99                      4,757                        1                    2.00
D       133,120          1.30           16,414                 0.99                      4,754                        1                    2.00
D       143,360          1.40           17,676                 0.99                      4,754                        1                    2.00
D       153,600          1.50           18,939                 0.99                      4,754                        1                    2.00
D       163,840          1.60           20,201                 0.99                      4,753                        1                    2.00
D       174,080          1.70           21,464                 0.99                      4,753                        1                    2.00
D       184,320          1.80           22,726                 0.99                      4,753                        1                    2.00
D       194,560          1.90           23,989                 0.99                      4,753                        1                    2.00
D       204,800          2.00           25,252                 0.99                      4,753                        1                    2.00
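Reading the advisory is a matter of finding the smallest cache size whose estimated physical read factor is close enough to 1.00 that further growth buys little. A minimal sketch, using a subset of the (Size Factor, Est Phys Read Factor) pairs from the advisory above and an assumed 2% threshold:

```python
# Find the smallest buffer cache size factor at which the estimated
# physical read factor is within a chosen threshold of 1.00, i.e. the
# point beyond which a larger cache yields little benefit.
def smallest_adequate_size(advisory, threshold=1.02):
    """advisory: list of (size_factor, est_phys_read_factor) pairs,
    ordered by increasing size factor."""
    for size_factor, read_factor in advisory:
        if read_factor <= threshold:
            return size_factor
    return None

# Subset of the advisory rows from the report above.
advisory = [(0.10, 2.07), (0.30, 1.42), (0.50, 1.14), (0.60, 1.03),
            (0.70, 1.02), (0.80, 1.01), (1.00, 1.00), (1.20, 0.99)]
print(smallest_adequate_size(advisory))  # 0.7
```

Here even 70% of the current cache size would keep estimated physical reads within 2% of current levels, confirming the observation that the buffer cache is well sized.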
Another section you should take note of (in 11g only) is the IOStat values. As shown below, 35.8GB was written to the redo log files with minimal activity read from or written to the data files. These values can help you size your redo log files for your desired checkpointing strategy during a test. For example, assuming high levels of I/O throughput for the DBWR processes during a 5 minute test, with redo log files of 8GB in size there would be 4 checkpoints, assuming no additional checkpoint-related parameters were set.
Filetype Name   Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Small Read  Large Read
Log File              0M          0.00           0M          35.8G       18166.32       121.944
Data File           156M         62.48        .518939          1M            0.02       .003326       0.96
Control File          7M          1.54        .023285          2M            0.49       .006653       0.07
TOTAL:              163M         64.02        .542225       35.8G        18166.83       121.954       0.94
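The checkpoint estimate described above is simply the redo written during the test divided by the redo log file size, since each full log forces a log switch and (with default settings) a checkpoint:

```python
# Estimate the number of log-switch checkpoints during a test run:
# total redo written divided by redo log file size, rounded down.
# Assumes no additional checkpoint-related parameters are set.
def checkpoints(redo_written_gb, redo_log_size_gb):
    return int(redo_written_gb // redo_log_size_gb)

# Figures from the example: 35.8GB of redo with 8GB redo log files.
print(checkpoints(35.8, 8))  # 4
```

Inverting the calculation lets you size the redo logs for a target checkpoint count: to complete a 5 minute test with no log-switch checkpoint at all, the logs would need to be larger than the total redo generated.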
After reviewing a number of AWR reports you will become familiar with a typical workload profile and be able to pinpoint anomalies in the performance data.
Price Performance
In addition to a straightforward performance comparison you should also perform a price/performance comparison. No system exists in isolation, and the potential cost of the entire configuration should be calculated so that performance can be judged in context. To calculate price/performance you should use a metric based on the cost per transaction, where the cost is the TCO of the system over 3 years. The transaction level you select should either be a realistic level of performance that you expect or the highest recorded transaction level at full system utilisation. For example we will take an undefined x86 based system with 20 cores that achieved 1 million Oracle TPM with Oracle Enterprise Edition (note these are not values taken from any real measurement and are used as round figures for the purpose of a simple price/performance calculation). The system and storage cost including maintenance for 3 years is $200,000 and the operating system support for 3 years is $11,500. For the software price check the Oracle licensing and pricing website here http://www.oracle.com/us/corporate/pricing/index.html . At the time of writing the Oracle Enterprise Edition license without options is $47,500 per core and maintenance is $10,450 per year (or 22%). Note however that the Oracle Core Factor table http://www.oracle.com/corporate/contracts/library/processor-core-factor-table.pdf details the licensing factor for a given processor type. For our x86 system this is a factor of 0.5, and therefore the total software cost for 20 cores is 20 * 47500 * 0.5 = $475,000 in license costs, plus 22% maintenance per year at $104,500, which equals a software TCO for 3 years of $788,500. Adding the total software cost to the system and operating system costs produces a 3 year TCO as follows: $788,500 + $200,000 + $11,500 = $1,000,000.
Therefore in this simple example the system costs $1,000,000 and does 1,000,000 TPM, meaning the price/performance value is $1,000,000 / 1,000,000 TPM = $1 TCO/TPM. Your values will vary, but calculating the TCO should be an essential part of any comparison.
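The worked example above can be laid out as a short calculation, making it easy to substitute your own prices and transaction rates (all figures here are the round numbers from the text, not real measurements):

```python
# The price/performance worked example from the text, step by step.
cores = 20
core_factor = 0.5                 # Oracle core factor for this x86 processor
license_per_core = 47_500         # Enterprise Edition list price per core
maintenance_rate = 0.22           # per year, as a fraction of license cost
years = 3

license_cost = cores * license_per_core * core_factor          # 475,000
maintenance  = license_cost * maintenance_rate * years         # 313,500
software_tco = license_cost + maintenance                      # 788,500

system_and_storage = 200_000
os_support = 11_500
tco = software_tco + system_and_storage + os_support           # 1,000,000

tpm = 1_000_000
print(f"TCO ${tco:,.0f} -> ${tco / tpm:.2f} per TPM")
```

Substituting a different core count, core factor or measured TPM recalculates the $/TPM figure directly.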
Conclusion
You should now be equipped to perform fully scalable transactional workloads against an Oracle Database environment. An experienced Oracle Database tester should be able to go from bare metal to test in one working day, installing the system, creating the schema and starting the automated test, then collecting the results the following working day, making it possible to determine a full system performance profile within a 2 day project. Do not forget that Hammerora is open source: you have the ability to modify the test scripts to fit whatever purpose meets your needs for testing Oracle transactional performance.