Developer and DBA Tips for Pro-Actively Optimizing SQL Apply Performance
Contents

Application Developers ............................................... 2
  Applications involving Array processing ........................... 3
    Use Case #1 ...................................................... 3
    Use Case #2 ...................................................... 3
  Applications involving LOB columns ................................ 4
    Use Case ......................................................... 4
Database Administrators .............................................. 5
  Background of how SQL Apply works ................................. 5
  How many processes ................................................. 6
    Sizing the number of Appliers ................................... 6
  What is an LCR ..................................................... 7
  What is the LCR Cache .............................................. 7
    Sizing the LCR Cache ............................................. 9
  What is an Eager Transaction ..................................... 10
    Why Eager Transactions ......................................... 10
    Why not use Eager all the time ................................. 10
    How many Eager Transactions may there be concurrently .......... 11
    How many LCRs until a transaction is deemed eager .............. 11
    The problem of having too large an eager size .................. 11
  Transactional Dependencies ....................................... 12
    The Hash Table .................................................. 13
    Computing Dependencies ......................................... 13
    Hash Entries per LCR ............................................ 13
    The Watermark Dependency SCN ................................... 13
    Appliers and transactional dependencies ........................ 14
    Piggy backing commit approval .................................. 14
    DDL Transaction Dependencies ................................... 15
Introduction
Utilizing Data Guard SQL Apply (a logical standby database) has zero impact on the primary database when configured with asynchronous redo transport. Some users, however, are challenged to achieve standby apply performance that keeps pace with peak periods of primary workload. Keeping pace with primary workload is important to minimize failover time, and so that queries and reports running on the logical standby database return results that are up to date with primary database transactions. SQL Apply performance has improved significantly with every release, to the point that SQL Apply in 11g can keep up with very high loads. However, there are certain workload profiles for which SQL Apply rates may be sub-optimal compared to the rate at which the primary database generates workload. This note focuses on specific application use cases where SQL Apply performance may be sub-optimal and describes best practices and potential application changes to accelerate SQL Apply performance.
While the information contained in this note focuses on Oracle Database 11g Release 1 (11gR1), many of the same principles can be applied to Oracle Database 10g.
The information below assumes a basic understanding of SQL Apply from the Oracle Data Guard Concepts and Administration guide and from the SQL Apply Best Practices papers available on the Oracle Maximum Availability Architecture (MAA) website.
Application Developers
The following section is intended primarily for Application Developers who are responsible for writing applications that will run in an environment that utilizes a
logical standby database. DBAs should also be aware of these design considerations so that they can identify them and work with the Application Development teams.
Use Case #1
An Oracle customer whose application makes extensive use of array processing modified 10,000 rows per transaction. While a batch of 10,000 rows was optimal for the primary database, it had an adverse effect on the standby database: the standby was unable to stay synchronized with the primary. Approximately 20% of the application's DML occurred in transactions larger than the default value of the _EAGER_SIZE parameter (see What is an Eager Transaction). When SQL Apply's _EAGER_SIZE parameter was increased to 11,000 rows to accommodate the transactions generated by the array processing, SQL Apply would appear to hang while each completed transaction was applied to the database (see The problem of having too large an eager size). The application was changed to perform array processing in batches of 1,000 rows per transaction, and _EAGER_SIZE was set to 1,100. Although the application committed more frequently, the impact on the logical standby database was significantly reduced: transactions were smaller, they were applied more efficiently, and the lag was reduced.
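The batching trade-off in this case can be sketched with simple arithmetic. The Python sketch below uses the numbers from the case; the assumption that a transaction is treated as eager once its LCR count reaches _EAGER_SIZE is ours, since the exact internal comparison is not documented here.

```python
# Sketch of the batching trade-off in Use Case #1 (illustrative only).
# Assumption: a transaction is treated as eager once its LCR count
# reaches _EAGER_SIZE; the exact internal comparison may differ.

def is_eager(lcrs_per_txn: int, eager_size: int) -> bool:
    return lcrs_per_txn >= eager_size

# Original design: 10,000-row transactions, _EAGER_SIZE raised to 11,000.
# Transactions avoid eager handling, but each one is huge.
print(is_eager(10_000, 11_000))   # False

# Reworked design: 1,000-row transactions, _EAGER_SIZE set to 1,100.
# Transactions still avoid eager handling and are 10x smaller.
print(is_eager(1_000, 1_100))     # False
```

Either setting keeps the transactions out of eager handling; the difference is how much work each transaction ties up in the LCR Cache and in a single applier.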
Use Case #2
An Oracle customer's application was written to perform array processing in units of 200 rows. However, auditing requirements meant that whenever a row was inserted into a
specific table, a row was also inserted into an application audit table. On average, one quarter of the rows in the application's 200-row array generated an audit record, so a transaction actually modified 250 rows on average, causing it to be considered eager. The application developer was able to easily reduce the array size to 100 rows for this particular transaction so that, even with the audit records, the database transaction modified fewer than 201 rows. This approach was taken because the array size was easy to modify in this case, whereas changing _EAGER_SIZE could have had an adverse effect on the rest of the application.
Use Case
An Oracle customer that makes extensive use of LOB columns loads documents that are typically less than 64 KB in size. For this customer, a 64 KB LOB column converts into five 16 KB blocks, so when they wrote the application they committed every 100 documents, which equates to approximately 600 LCRs. This is greater than the default value of the _EAGER_SIZE parameter, so the DBA team explicitly raised the value of the parameter to 1,001.
A side effect of raising the _EAGER_SIZE parameter to 1,001 is that a single transaction can now use roughly 8 MB of the LCR Cache (see What is the LCR Cache). For this reason, the customer also configured an LCR Cache of 1 GB so that these transactions can be held in the LCR Cache without paging.
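The LCR arithmetic behind these settings can be sketched as follows. The figure of one row LCR plus five chunk LCRs per document is inferred from the 600-LCR total quoted above, and the roughly 8 KB of LCR Cache per LCR is inferred from the 8 MB figure, so treat both as illustrative assumptions rather than documented constants.

```python
# Sketch of the LCR arithmetic in this LOB use case (illustrative).
CHUNK_LCRS_PER_DOC = 5                  # a <64 KB document -> five 16 KB LOB chunks
LCRS_PER_DOC = 1 + CHUNK_LCRS_PER_DOC   # assumed: one row LCR plus the chunk LCRs
DOCS_PER_COMMIT = 100

lcrs_per_txn = DOCS_PER_COMMIT * LCRS_PER_DOC
print(lcrs_per_txn)                     # 600 LCRs per transaction, as in the text

# Rough per-transaction LCR Cache footprint, assuming ~8 KB per LCR
# (inferred from the 8 MB per 1,000-LCR figure above).
bytes_per_lcr = 8 * 1024
print(lcrs_per_txn * bytes_per_lcr / (1024 * 1024))   # ~4.7 MB per transaction
```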
Database Administrators
The following sections are intended primarily for Database Administrators who are responsible for administering the logical standby database. Application Developers should also be aware of these considerations so that they can identify them and work proactively with the DBA team.
wait until the coordinator notifies the applier that it is now the lowest CSCN. If the transaction chunk is not an entire transaction, then when the applier completes the current chunk it signals the coordinator for additional chunks associated with the transaction. When it receives the transaction chunk that contains the commit record, it commits the transaction after first messaging the coordinator for commit approval. For more information, see Appliers and transactional dependencies.
Sizing the number of Appliers
If all applier processes have serviced an even percentage of the transactions and system resources are plentiful, then it might be advantageous to increase the number of applier processes. To determine whether all appliers are being used evenly, execute the following query.
select min(pct_applied) pct_applied_min,
       max(pct_applied) pct_applied_max,
       avg(pct_applied) pct_applied_avg,
       count(server_id) number_of_appliers
from (select server_id,
             (greatest(nvl(s.total_assigned,0),0.00000001) /
              greatest(nvl(c.total_assigned,1),1)) * 100 pct_applied
      from v$streams_apply_server s,
           v$streams_apply_coordinator c);

PCT_APPLIED_MIN PCT_APPLIED_MAX PCT_APPLIED_AVG NUMBER_OF_APPLIERS
--------------- --------------- --------------- ------------------
          1.152           4.913           2.857                 35
This output indicates that the busiest applier processed 4.9% of all transactions while the quietest applier processed only 1.1%. If all appliers had applied an even share of the transactions, each would have applied about 2.9% of them. This output suggests that if system resources are limited, the number of appliers could be reduced. On a system that was busy, the same script generated the following output.
PCT_APPLIED_MIN PCT_APPLIED_MAX PCT_APPLIED_AVG NUMBER_OF_APPLIERS
--------------- --------------- --------------- ------------------
          2.854           2.858           2.857                 35
Here the difference between the busiest and quietest applier is very small, so all appliers are being used evenly. In this case, if system resources are plentiful, the number of appliers could be increased.
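The skew check that the query performs can be mirrored outside the database. This Python sketch, fed with hypothetical per-applier transaction counts, computes the same min/max/avg percentages from the number of transactions each applier has been assigned:

```python
# Sketch: the same min/max/avg skew computation as the query above,
# fed with hypothetical per-applier transaction counts.
def applier_skew(assigned):
    total = sum(assigned)
    pct = [100.0 * a / total for a in assigned]
    return min(pct), max(pct), sum(pct) / len(pct)

# Four appliers, one doing most of the work: an uneven spread like this
# suggests the applier count could be reduced if resources are tight.
lo, hi, avg = applier_skew([50, 10, 5, 5])
print(round(lo, 1), round(hi, 1), round(avg, 1))   # 7.1 71.4 25.0
```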
What is an LCR
An LCR is a Logical Change Record. In SQL Apply terms, an LCR typically corresponds to a DML statement for an individual row of a table. An LCR can also correspond to a DDL statement or, for LOB data, to an individual chunk of the LOB.
What is the LCR Cache
The easiest way to think of the LCR Cache is as a bucket or barrel. The barrel is open at the top, and the reader and preparer processes are responsible for filling it. At the bottom of the barrel is a small funnel through which transactions are funneled and assigned to the different applier processes. If there is a backlog of work to be applied to the standby database, the coordinator tries to keep the bucket at least half full by signaling the reader process to read more LCRs, which in turn causes the preparer and builder processes to construct more DML statements until the bucket becomes approximately 95% full. The reader process then stops until the coordinator signals it to fill the bucket again. If there is no backlog, then as transactions are received into the standby redo log they are immediately read, prepared, and built.
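The refill behavior described above can be sketched as a high/low-watermark loop. The roughly 50% and 95% thresholds come from the text; the loop itself is an illustration, not SQL Apply's actual implementation.

```python
# Sketch of the high/low-watermark refill behavior described above
# (thresholds from the text; the logic is illustrative only).
def reader_should_fill(cache_used: float, currently_filling: bool,
                       low: float = 0.50, high: float = 0.95) -> bool:
    if currently_filling:
        return cache_used < high    # keep filling until ~95% full
    return cache_used <= low        # resume once at most half full

print(reader_should_fill(0.40, currently_filling=True))    # True
print(reader_should_fill(0.96, currently_filling=True))    # False
print(reader_should_fill(0.70, currently_filling=False))   # False
print(reader_should_fill(0.45, currently_filling=False))   # True
```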
The applier processes will apply any and all transactions in the bucket, provided they have been successfully analyzed and no dependencies exist. To determine the gap between the last transaction applied and the last transaction received from the primary database, execute the following query periodically.
select numtodsinterval(latest_time - applied_time, 'DAY')
from v$logstdby_progress;

NUMTODSINTERVAL(LATEST_TIME-APPLIED_TIME,'DAY')
-----------------------------------------------
+000000000 00:00:06.000000000
The value returned in this example shows that the most recently applied transaction on the standby database is 6 seconds behind the last transaction received from the primary database. If a redo log gap has formed due to a network outage, this query shows only the lag between the data received and the data applied. If a lag is reported, the standby database might benefit from additional applier processes, provided system resources are available. Note that if the primary database is idle and the standby database is up to date, an apparent lag of typically less than 10 seconds can still be reported. Additionally, if the ARCH transport is used to send data to the standby database, a gap will appear whenever a log switch occurs on the primary database and the standby database registers the log file. This gap shrinks until either the entire log has been applied or another log switch occurs. If a log switch occurs on the primary database before SQL Apply finishes applying the previously switched log, then again consider increasing the number of appliers if system resources are available.
Sizing the LCR Cache
The value returned shows the total number of bytes that have been paged out since SQL Apply was started. If the query returns a non-zero value, paging is occurring; run the query regularly to try to identify whether a particular transaction on the primary database is responsible for the paging. If the number of bytes paged out is constantly increasing, consider increasing the value of the MAX_SGA logical standby parameter. Conversely, if the LCR Cache is too large, the instance cannot redeploy the reserved memory to other parts of the SGA, including the buffer cache. The peak size of the LCR Cache is reported in the v$sgastat view; to determine whether the LCR Cache is too large, execute the following query periodically.
select name,
       (least(max_sga, bytes) / max_sga) * 100 pct_utilization
from (select * from v$sgastat
      where name = 'Logminer LCR c'),
     (select value * (1024*1024) max_sga
      from dba_logstdby_parameters
      where name = 'MAX_SGA');

NAME                       PCT_UTILIZATION
-------------------------- ---------------
Logminer LCR c                  5.43263626
The value returned in this example shows that only 5.4% of the maximum possible size of the LCR Cache has ever been utilized, indicating that the LCR Cache might be oversized.
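The PCT_UTILIZATION calculation in the query can be expressed directly as arithmetic. In this Python sketch the 55 MB peak figure is hypothetical; only the formula mirrors the query above.

```python
# Sketch: the PCT_UTILIZATION arithmetic from the query above.
def lcr_cache_pct_utilization(peak_bytes: int, max_sga_bytes: int) -> float:
    # least(max_sga, bytes): the cache can transiently exceed MAX_SGA,
    # in which case utilization is simply reported as 100%.
    return min(max_sga_bytes, peak_bytes) / max_sga_bytes * 100

one_gb = 1024 * 1024 * 1024
print(round(lcr_cache_pct_utilization(55_000_000, one_gb), 1))   # 5.1
print(lcr_cache_pct_utilization(2 * one_gb, one_gb))             # 100.0
```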
NOTE: The MAX_SGA parameter specifies the desired size of the LCR Cache, but the LCR Cache actually allocated can exceed the value specified by MAX_SGA. In that case, the query would return a PCT_UTILIZATION of 100%.
This blocking of the small transactions means that they take longer to be applied to the database, thereby producing a larger lag between the primary and the standby database.
If there are 10,000 DML statements in a transaction and the transaction is applied normally (that is, _EAGER_SIZE is greater than 10,000), then it takes the reader, preparer, and builder processes 100 seconds to construct the DML statements before they are passed to the applier process. The applier process then takes 1,000 seconds to apply and ultimately commit the transaction to the database. During this time, there might be hundreds of smaller transactions that started concurrently with the large transaction and committed shortly after it. These smaller transactions must wait the 1,000 seconds that the large transaction takes to apply, because when they ask for approval to commit, they must wait for the large transaction to commit first. SQL Apply will appear to make no progress for 1,000 seconds, and the standby database will appear 1,000 seconds behind the primary database. With a larger number of applier processes, however, more transactions will be queued up waiting to be committed once the large transaction commits on the standby database.
If we take the same scenario, but this time the application commits every 100 DML statements, then 100 transactions make up the 10,000 DML statements the application operated on. Assume again that an LCR takes 1/100th of a second to be read, prepared, and built, but 1/10th of a second to be executed by the applier process. Concurrently with the application of the first transaction by the first applier process, the LogMiner processes are constructing the subsequent transactions. Each 100-DML transaction takes 1 second to be mined. The second transaction is assigned to another applier process and again takes 10 seconds to be applied; however, it commits approximately 1 second after the first transaction. This continues, and 100 seconds after the first transaction was mined, the last transaction has been mined. Ten seconds after that, the last of the transactions has been committed to the database.
Therefore, the 10,000 DML statements would be replicated to the standby database in 110 seconds using the smaller array size, compared to 1,100 seconds using the large array size.
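The timing arithmetic above can be checked directly, using the rates stated in the text (1/100 of a second to mine an LCR, 1/10 of a second for an applier to execute one):

```python
# Sketch of the timing arithmetic above (rates from the text: 1/100 s to
# mine an LCR, 1/10 s for an applier to execute one).
MINE_S, APPLY_S = 0.01, 0.1
TOTAL_LCRS = 10_000

# One 10,000-LCR transaction: it must be fully mined, then a single
# applier executes every LCR before the commit.
one_big_txn = TOTAL_LCRS * MINE_S + TOTAL_LCRS * APPLY_S    # 1,100 s

# 100 transactions of 100 LCRs each: mining and applying overlap across
# multiple appliers, so the total is roughly the mining time plus the
# apply time of the final 100-LCR transaction.
batch = 100
overlapped = TOTAL_LCRS * MINE_S + batch * APPLY_S          # 110 s

print(one_big_txn, overlapped)
```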
Transactional Dependencies
Computing transactional dependencies is the responsibility of the analyzer process, but additional considerations apply when a transaction is deemed eager. The analyzer process uses a hash table, along with a number of other memory structures, when computing dependencies.
Computing Dependencies
When a transaction chunk is picked up from the queue by the analyzer process, each LCR has its dependencies computed. If the table has a primary key and/or non-null unique indexes, the analyzer process uses the primary key columns and all of the non-null unique index columns to compute the dependencies. If the table has neither a primary key nor a non-null unique index, then all the columns of the table are used when computing the dependencies. Once the hash key has been computed, the analyzer process looks up the hash entry and determines whether a previous transaction hashed to the same key. If a previous transaction ID and commit SCN are present, that information is associated with the current LCR being analyzed. What happens next depends on the type of transaction chunk: if the transaction chunk is a single chunk and contains a commit record, the hash entry is updated with the current LCR's transaction ID and commit SCN; if the transaction chunk belongs to an eager transaction, the hash entry is NOT updated.
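The lookup-then-update cycle described above can be sketched as follows. This is illustrative only: the real analyzer hashes the primary-key / unique-index column values, and its hash function and entry layout are internal to SQL Apply.

```python
# Sketch of the dependency lookup described above (illustrative only).
def analyze(lcrs):
    """lcrs: list of (txn_id, commit_scn, key_tuple). Returns, per LCR,
    the previous (txn_id, commit_scn) that touched the same key, if any."""
    hash_table = {}                 # hash of key columns -> (txn, commit SCN)
    dependencies = []
    for txn_id, commit_scn, key in lcrs:
        h = hash(key)               # stand-in for the analyzer's hash
        dependencies.append((txn_id, hash_table.get(h)))
        # For an eager transaction's chunks this update would be skipped.
        hash_table[h] = (txn_id, commit_scn)
    return dependencies

lcrs = [("T1", 100, ("PK", 7)),     # T1 touches the row with PK 7
        ("T2", 110, ("PK", 8)),     # different row: no dependency
        ("T3", 120, ("PK", 7))]     # same row: T3 depends on T1
print(analyze(lcrs))
```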
The Watermark Dependency SCN

The watermark dependency is used to say that all transactions below the current apply low-watermark may commit if the coordinator grants approval. The apply low-watermark is broadcast to the applier processes frequently via the coordinator's messages.
Additionally, when an eager transaction is being applied, the analyzer process has not updated the hash entries in the hash table, so we do not know whether a dependency exists between this transaction and another transaction. However, we know that the eager transaction was able to update the rows on the primary database, so we can safely say that if the transactions are processed in the same order, the primary and standby databases will not fall out of sync. Therefore, when we receive the last transaction chunk for an eager transaction, the one containing the commit SCN, we raise the watermark dependency to the commit SCN of the eager transaction. This allows all transactions with a commit SCN prior to the eager transaction's commit to be committed, but it prevents any transaction with a later SCN from being applied. Once all transactions prior to the eager transaction's commit SCN have been committed, the eager transaction is allowed to commit. Once the eager transaction has committed, the apply low-watermark is raised to the eager transaction's commit SCN, thereby allowing other transactions to proceed.
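The commit gating described above can be sketched as a simple predicate on commit SCNs. This is an illustration of the ordering rule, not SQL Apply's actual implementation.

```python
# Sketch of the commit gate around an eager transaction (illustrative).
def may_commit(txn_commit_scn: int, eager_commit_scn: int,
               eager_has_committed: bool) -> bool:
    """While an eager transaction is pending, only transactions that
    committed before it on the primary may commit on the standby."""
    if eager_has_committed:
        return True                           # watermark raised: gate lifted
    return txn_commit_scn < eager_commit_scn  # earlier committers proceed

print(may_commit(90, 100, eager_has_committed=False))    # True
print(may_commit(120, 100, eager_has_committed=False))   # False
print(may_commit(120, 100, eager_has_committed=True))    # True
```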
when the transaction is assigned out, the bit is not set, meaning the applier process must request commit approval before proceeding. If, however, during the course of applying the transaction to the standby database the applier has to message the coordinator, then when the coordinator responds it re-evaluates the bit; if the transaction now has the lowest commit SCN, the bit is set to indicate that the applier may commit the transaction without first requesting commit approval.
mine new transactions. It takes some time for these new transactions to go through the process of being mined, prepared, built, and analyzed before finally being available for application by an applier.