Jan 2015
Table of Contents
Introduction
Managed BI Server: Other Recommended Settings
Source Environments Recommendations for Better Performance
Parallel Query configuration
High Availability with Oracle Data Guard and Physical Standby Database
Conclusion
Oracle Business Intelligence Applications Version 11g Performance
Recommendations
Introduction
Oracle Business Intelligence (BI) Applications Version 11g delivers a number of adapters to various business applications running on an Oracle database. Each Oracle BI Applications implementation requires very careful planning to ensure the best performance during ETL, end user queries and dashboard execution.
This article discusses performance topics for Oracle BI Applications 11g (11.1.1.7.1 and higher), using Oracle Data Integrator (ODI) 11g 11.1.1.7.1 and Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.x. Most of the recommendations are generic to BI Applications 11g content and its BI tech stack; release-specific topics reference exact version numbers.
Note: The document is intended for experienced Oracle BI Administrators, DBAs and Applications implementers. It covers advanced performance tuning techniques in ODI, OBIEE and Oracle RDBMS, so all recommendations must be carefully verified in a test environment before being applied to a production instance. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.
                 SMALL                           MEDIUM                                    LARGE
Storage System   Local (PATA, SATA, iSCSI), or   High performance SCSI or SAN with         High performance SCSI or SAN with
                 NAS, preferred RAID             16 Gbps HBA or higher, connected          24 Gbps HBA or higher, connected
                 configuration                   over fiber channel / 2xGb Ethernet NIC    over fiber channel / 2xGb Ethernet NIC
# CPU cores      8                               16                                        32
Physical RAM     24 Gb                           32 Gb                                     64 Gb
· The configurations above cover ODI Agent, Load Plan Generator (LPG) and BI Applications Configuration Manager (BIACM), all collocated on the same hardware as OBIEE. The recommended specifications accommodate primarily the OBIEE workload; neither ODI nor LPG and BIACM generate noticeable overhead. If you plan to deploy OBIEE on a separate server (or farm), you can use a less powerful configuration for ODI, LPG and BIACM. Refer to the Oracle WebLogic documentation for detailed hardware requirements.
· The internal benchmarks did not show any noticeable workload from ODI agent processes. Oracle-to-Oracle configurations can effectively use a database link knowledge module, further minimizing the impact of ODI processes.
· ODI deployments with agent processes running on separate servers, or agents load-balanced across multiple servers, are not covered in this document. Refer to the BI Applications and ODI documentation for more information on such configurations.
· Depending on the number of planned concurrent users running OBIEE reports, you may have to plan for more memory on the target tier to accommodate the query workload.
· To ensure query scalability on the OBIEE tier, consider implementing an OBIEE Cluster or Oracle Exalytics. Refer to the OBIEE and Exalytics documentation for more details.
· Set up all Oracle BI Applications tiers in the same local area network. Deploying any tier over a Wide Area Network (WAN) may cause additional delays during ETL Extract mapping execution and impact Load Plan windows.
A common mistake is choosing sub-optimal storage for running BI Applications tiers.
I/O test             Standalone run         Under concurrent load
"Re-read"            3223637.78 KB/sec      3038416.45 KB/sec
"Reverse Read"       1754192.17 KB/sec      1765427.92 KB/sec
"Stride read"        1783300.46 KB/sec      1795288.49 KB/sec
"Random read"        1724525.63 KB/sec      1755344.27 KB/sec
"Mixed workload"     2704878.70 KB/sec      2456869.82 KB/sec
"Random write"       68053.60 KB/sec        25367.06 KB/sec
"Pwrite"             45778.21 KB/sec        23794.34 KB/sec
"Pread"              2837808.30 KB/sec      2578445.19 KB/sec
Total Time           110 min                216 min
Initial Write, Rewrite, Initial Read, Random Write, and Pwrite (buffered write operation) were impacted the most, while
Reverse Read, Stride Read, Random Read, Mixed Workload and Pread (buffered read operation) were impacted the least by
the concurrent load.
Read operations do not require specific RAID sync-up operations, so read requests are less dependent on the number of concurrent threads.
Conclusion
You should carefully plan storage deployment, configuration and usage for the Oracle BI Applications environment. Avoid sharing the same RAID controller(s) across multiple databases. Set up periodic monitoring of your I/O system during both ETL and end user query load to catch potential bottlenecks.
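As a simple database-side complement to OS-level I/O monitoring, the instance's cumulative I/O counters can be sampled periodically and diffed between samples (a sketch; the statistic names below are standard V$SYSSTAT entries):

```sql
-- Sample cumulative physical I/O counters; run at intervals and compare deltas
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical reads',
                'physical writes',
                'physical read total IO requests',
                'physical write total IO requests');
```

A growing gap between request counts and throughput between two samples usually points at an emerging I/O bottleneck worth investigating with AWR.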
Introduction
The Application Server tier may experience heavy hardware load both from various one-time administration tasks, such as ODI Load Plan generation, and from daily workload, such as OBIEE end user queries. It therefore needs to be sized to accommodate the combined workload of the BI Domain components below, as well as future scalability to support more business users.
The Application Server uses Oracle BI Domain in Oracle WebLogic Server for the following services:
Weblogic Admin Server
Managed BI Server ‘bi_server1’ with the following deployed components:
o Oracle BI Enterprise Edition (OBIEE)
o Oracle BI Applications Configuration Manager (BIACM)
o Functional Setup Manager (FSM)
o Load Plan Generator (LPG)
Managed ODI Server ‘odi_server1’
o ODI Console
o ODI Agent
Important! It is not recommended to collocate the Application Server tier with the Data Warehouse tier for the same reasons.
The next sections cover the above components sizing parameters for better performance.
OBIEE and LPG are the most critical components deployed under bi_server1, which may require the largest amounts of
memory. They are covered in the section below.
Important! Oracle has published a separate paper (Oracle BI EE 11g Architectural Deployment: Capacity Planning Doc ID
1323646.1) with BI Server sizing calculations, so OBIEE scalability benchmarks are outside the scope of this technote.
The internal benchmarks for 150 virtual users with a think time of 5 seconds, running non-cached reports against a medium-sized data warehouse, showed a peak of ~5.5 Gb of memory used by OBIEE. So the recommended memory allocation for OBIEE, with min = 2 Gb and max = 6 Gb, should be sufficient for most initial rollouts. Make sure you monitor your Application Server tier for workload and memory usage during BI report querying, and increase the memory settings as needed.
Managed BI Server: Other Recommended Settings
Review additional recommended configuration parameters for your OBIEE server.
Maximum number of rows fetched by BI Presentation Server = 65000
Source Environments Recommendations for Better Performance
Introduction
Oracle BI Applications data loads may cause additional CPU and memory overhead on the source tier. There may be a larger impact on the I/O subsystem, especially during full ETL loads. Using several I/O controllers, or a hardware RAID controller with multiple I/O channels, on the source side helps minimize the impact on Business Applications during ETL runs and speeds up data extraction into the target data warehouse. This chapter covers important topics on how to minimize the OLTP impact of ETL.
Note: you have to update your ODI repository to use the replicated persistent staging tables or materialized views instead of
the original source tables in ODI scenarios.
Introduction
Oracle Materialized View (MV) Logs capture the changing data in base source tables and supply the critical CDC volumes to the
extract mappings.
Important! MV Logs present additional challenges, when used in OLTP environments. You should carefully test MV Log based
CDC before implementing it in your production environment.
1. MV Logs can cause additional overhead on business transactions performance, if created on heavy volume
transactional tables in busy OLTP sources.
2. Ensure regular MV refreshes to purge MV Logs; otherwise they will grow in size and generate even more overhead for OLTP applications.
3. Avoid sharing an MV Log between two or more fast refreshable MVs. The MV Log will not be purged until all dependent MVs are refreshed.
The next sections use an example of an MV Log on PS_PROJ_RESOURCE in PeopleSoft to speed up the incremental extract for the SDE_PSFT_ProjectBudgetFact mapping.
The following steps describe the CDC implementation using MV Log approach:
CREATE MATERIALIZED VIEW LOG ON PS_PROJ_RESOURCE NOCACHE LOGGING NOPARALLEL WITH SEQUENCE;
3. Create a Materialized View using PS_PROJ_RESOURCE definition and an additional LAST_UPDATE_DT column. The
latter will be populated using SYSDATE values:
CREATE TABLE OBIEE_PS_PROJ_RESOURCE_MV AS SELECT * FROM PS_PROJ_RESOURCE WHERE 1=2;
5. Create a database view on the MV, which will be used in the SDE Fact Source Qualifier query:
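A minimal sketch of such a view (the view name is illustrative, not from the original paper):

```sql
-- Database view over the MV, to be referenced by the SDE fact extract
CREATE OR REPLACE VIEW OBIEE_PS_PROJ_RESOURCE_V AS
SELECT * FROM OBIEE_PS_PROJ_RESOURCE_MV;
```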
6. Run the complete refresh for the MV. The subsequent daily ETLs will perform fast refresh using the MV Log.
exec dbms_mview.refresh('OBIEE_PS_PROJ_RESOURCE_MV','C');
7. Update the SDE fact extract logic and replace the original table with the MV, and add an additional filter:
BEGIN
DBMS_MVIEW.REFRESH('getTableName()', 'F');
END;
8. Save the changes and re-generate the updated scenario in ODI Studio.
Adding a unique index on the auxiliary CDC table's primary column will speed up updates.
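For example (a sketch; the auxiliary CDC table and column names are illustrative, not from the original paper):

```sql
-- Unique index on the auxiliary CDC table's primary (source key) column
CREATE UNIQUE INDEX OBIEE_CDC_AUX_U1 ON OBIEE_CDC_AUX (SOURCE_PK);
```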
Measure carefully the impact on your source OLTP workload before you choose the trigger CDC approach, as it can easily generate significant overhead and impact transactional business users.
When you find any critical extract mappings and cannot allocate more OLTP data source resources, consider using the SDS option, which replicates all source tables to an SDS schema on the target tier.
If your source system is one of the following:
- EBS R12
- EBS 11i release 11.5.10
- EBS 11i release 11.5.9 or lower and it has been migrated to OATM*
then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX prior to running the DDL.
If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace <IDX_TABLESPACE>
with <PROD>X, where <PROD> is an owner of the table which will be indexed on LAST_UPDATE_DATE column.
DDL script for custom index creation:
CREATE index AR.OBIEE_HZ_CUST_ACCOUNT_ROLES ON AR.HZ_CUST_ACCOUNT_ROLES(LAST_UPDATE_DATE) tablespace
<IDX_TABLESPACE> ;
CREATE INDEX PA.OBIEE_PA_CLASS_CATEGORIES ON PA.PA_CLASS_CATEGORIES(LAST_UPDATE_DATE) tablespace
<IDX_TABLESPACE> ;
There is one more custom index, recommended for Supply Chain Analytics on AP_NOTES.SOURCE_OBJECT_ID column:
Important! You must use FND_STATS to compute statistics on the newly created indexes and update statistics on
newly indexed table columns in the EBS database.
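For example, statistics on one of the custom indexes from this section can be gathered with FND_STATS (a sketch; run it as the APPS user, and adjust the owner and index name to your environment):

```sql
-- Gather statistics on a newly created custom index via FND_STATS
exec FND_STATS.GATHER_INDEX_STATS('AR', 'OBIEE_HZ_CUST_ACCOUNT_ROLES');
```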
Important! All indexes introduced in this section have the prefix "OBIEE_" and do not follow the standard Oracle EBS index naming conventions. If a future Oracle EBS patch creates an index on the LAST_UPDATE_DATE columns of the tables listed below, Oracle EBS's Autopatch may fail. In such cases the conflicting OBIEE_ indexes must be dropped, and Autopatch restarted.
Important! You should use FND_STATS to compute statistics on the newly created indexes and update statistics on
newly indexed table columns in the EBS database.
Since all custom indexes above follow Oracle EBS index standard naming conventions, any future upgrades would not be
affected.
*) Oracle Applications Tablespace Model (OATM):
Oracle EBS release 11.5.9 and lower uses two tablespaces for each Oracle Applications product, one for the tables and
one for the indexes. The old tablespace model standard naming convention for tablespaces is a product's Oracle
schema name with the suffixes D for Data tablespaces and X for Index tablespaces. For example, the default
tablespaces for Oracle Payables tables and indexes are APD and APX, respectively.
Oracle EBS 11.5.10 and R12 use the new Oracle Applications Tablespace Model. OATM uses 12 locally managed
tablespaces across all products. Indexes on transaction tables are held in a separate tablespace APPS_TS_TX_IDX,
designated for transaction table indexes.
Customers running pre-11.5.10 releases can migrate to OATM using OATM Migration utility. Refer to Oracle Support
Note 248857.1 for more details.
GL GL_JE_LINES
INV MTL_MATERIAL_TRANSACTIONS
INV MTL_SYSTEM_ITEMS_B
ONT OE_ORDER_LINES_ALL
PER PAY_PAYROLL_ACTIONS
PO RCV_SHIPMENT_LINES
WSH WSH_DELIVERY_ASSIGNMENTS
WSH WSH_DELIVERY_DETAILS
timed_statistics = TRUE
statistics_level = TYPICAL
sga_target = 8G # Resize SGA & PGA targets to fit into avail RAM
pga_aggregate_target = 4G # SGA + PGA should not exceed 70% of total RAM
workarea_size_policy = AUTO
db_block_checking = FALSE
db_block_checksum = TYPICAL
db_writer_processes = 2
log_checkpoint_timeout = 1800
log_checkpoints_to_alert = TRUE
undo_management = AUTO
undo_tablespace = <your undo tablespace>
undo_retention = 90000
job_queue_processes = 10
parallel_adaptive_multi_user = FALSE
parallel_max_servers = 16
parallel_min_servers = 0
star_transformation_enabled = TRUE
query_rewrite_enabled = TRUE
query_rewrite_integrity = TRUSTED
_b_tree_bitmap_plans = FALSE
plsql_code_type = NATIVE
disk_asynch_io = FALSE
fast_start_mttr_target = 3600
Review the template file above and adjust your target database parameters specific to your data warehouse tier hardware.
Note: init.ora template for Exadata / 11gR2 is provided in Exadata section of this document.
Most ODI scenario SQLs perform conventional inserts into i$ interface tables during ETL runs. With a sub-optimal size of REDO logs you may get a lot of "log file switch (checkpoint incomplete)" wait events in your AWR reports during ETL runs.
To minimize the impact of "log file switch (checkpoint incomplete)" wait events and improve performance for conventional inserts, increase the size of your REDO files. You can query your database dictionary to find the optimal size (in Mb):
select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;
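Redo log groups can then be recreated at a larger size, for example (a sketch; the group numbers, file paths and the 1G size are illustrative):

```sql
-- Add larger redo groups, then retire the old, smaller ones
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/DWH/redo04.log') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/DWH/redo05.log') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/DWH/redo06.log') SIZE 1G;
-- switch out of the old groups and drop them once they become INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```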
If your data warehouse hardware does not support asynchronous I/O, then you can improve conventional inserts by setting DBWR_IO_SLAVES in init.ora to a non-zero value.
The internal benchmarks for running large inserts into an i$ table without asynchronous I/O support showed the best performance for conventional inserts with 2-3 DB Writer processes, 3 x 1Gb Redo Logs, and DBWR_IO_SLAVES = 1:
DBWR_IO_SLAVES (runtime in seconds)                             0     1     2     4
insert append (2 dbwr and 6x100M Redo Logs)                   113
insert append (2 dbwr and 3x1G Redo Logs)                      71    67
insert append (3 dbwr and 3x1G Redo Logs)                      80    69
insert new rows (conventional - 2 dbwr and 6x100M Redo Logs)  530   251   255   339
insert new rows (conventional - 4 dbwr and 6x100M Redo Logs)  587   257   252   258
insert new rows (conventional - 3 dbwr and 6x100M Redo Logs)  524   254   259   254
insert new rows (conventional - 3 dbwr and 3x1G Redo Logs)    236   160
insert new rows (conventional - 3 dbwr and 4x500M Redo Logs)        178
insert new rows (conventional - 3 dbwr and 6x500M Redo Logs)        175
insert new rows (conventional - 2 dbwr and 3x1G Redo Logs)          166
Parallel Query configuration
BI Applications Load Plans may use the Oracle Parallel Query option for running some scenarios, computing statistics, and building indexes on target tables. ODI-generated SQL plans can easily get skewed when parallelism is enabled, so by default none of the target tables have a defined degree of parallelism. The Oracle BI Applications init.ora templates set PARALLEL_ADAPTIVE_MULTI_USER = FALSE and use manually defined, smaller parallel query settings.
Important! You should carefully monitor your environment workload before changing any parallel query parameters.
It could easily lead to increased resource contention, creating I/O bottlenecks, and increasing response time when the
resources are shared by many concurrent transactions.
Since ODI EXEC_TABLE_MAINT_PROC creates indexes and computes statistics on target tables in parallel, concurrent execution may cause performance problems if the values of parallel_max_servers and parallel_threads_per_cpu are too high.
You can monitor the system load from parallel operations by executing the following query:
SQL> select name, value from v$sysstat where name like 'Parallel%';
· Temporary (database temporary segments): Initial ETL scenarios process very complex SQLs with multiple join operations, which actively use temporary segments stored in the TEMP tablespace. TEMP can fill up very fast when processing multiple concurrent SQLs with heavy joins.
· SDS (SDS segments: tables and indexes): If you implemented the SDS option, use a separate tablespace for replicated source objects and their indexes. The SDS tablespace should be sized based on the source tables' footprint in OLTP.
· Interface (ODI interface tables c$, i$, e$ and their indexes): ODI interface tables are dropped and re-created for each ETL run. By separating them into a dedicated tablespace you can resize your interface tablespace after initial ETL, or create the tablespace as compressed.
· Stage (BI Apps stage tables _DS, _FS, _HS, etc., persistent stage _PS tables, and their indexes): Staging tables are always truncated in each ETL run, so they can be deployed in a separate tablespace. Note that the Stage tablespace can grow very large during initial ETL. Persistent stage (_PS) tables are not truncated in incremental runs.
· Target Data (target data segments): The Target Data tablespace should be used for data warehouse objects, such as facts, dimensions, hierarchies, etc., as well as aggregate tables populated by ODI scenarios.
· Target Index (target index segments): The Target Index tablespace stores all indexes on data warehouse tables.
1. BI Applications ODI scenarios use ODI interface tables (c$, i$, e$, etc.) for data processing, transformation and error logging operations. The typical data movement in 11g ETL can be presented as: source -> c$ -> stage table -> i$ -> target table. When sizing your Data Warehouse, you need to plan for additional space for ODI interface tables:
BI Applications Load Plan initial executions bypass i$ tables and load data directly into the target tables, so i$ segments do not consume any space in initial ETLs.
Important! You can conserve additional space and improve performance of your extract scenarios by switching from the default JDBC Load Knowledge Module (LKM) to the Database Link KM. The DBLink KM creates views on the source and c$ synonyms pointing to those source views over a database link. The use of the DBLink KM further reduces ETL data movement, saves space and improves extract (SDE) scenario performance. Refer to the ODI KM documentation for more details.
BI Apps Load Plans drop and re-create the interface tables within every single scenario for each ETL run.
If you use the JDBC LKM, estimate the extracted volumes for your largest facts executed in parallel, and then sum up the volumes to find the maximum space consumed by c$ tables in your initial ETLs. Ignore this step if you use the DBLink KM.
Estimate your facts' incremental volumes processed concurrently, and then sum them up to find the maximum space consumed by i$ tables.
2. While stage segments consume space almost equivalent to the target segments' footprint during initial ETL, they get truncated in subsequent incremental runs, so their space allocation will be driven by the incremental volumes. If all scenarios support incremental logic, then stage objects may consume from 5 to 20% of their initially allocated tablespace. So, you can resize your stage tablespace after completing initial ETL.
3. Depending on your hardware configuration, you may consider isolating the staging tablespace and the target Data tablespace on different controllers. Such a configuration helps speed up Target Load (SIL) mappings for fact tables by balancing I/O load across multiple RAID controllers.
4. The Temporary tablespace needs to be sized to accommodate initial ETL. Since BI Applications scenarios do all transformations in the database, they may produce heavy joins and often use temporary segments for storing interim result sets while going through execution plan operations. Make sure you allocate enough space in your Temporary tablespace(s) to accommodate parallel processing during initial ETL. Typically, fact tables processed in parallel consume the most TEMP space in initial loads.
5. SDS tablespace sizing is not covered in this document, since its footprint depends on implemented functional areas.
You can estimate its size by checking space of source tables and indexes, which will be replicated to SDS.
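The source footprint mentioned in item 5 can be estimated from the source database dictionary, for example (a sketch; the schema name is illustrative):

```sql
-- Rough size of the source schema whose tables will be replicated to SDS
SELECT owner,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = 'SYSADM'   -- hypothetical source application schema
GROUP  BY owner;
```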
6. During incremental loads, by default, Load Plan drops and rebuilds indexes, so you should separate all indexes in a
dedicated tablespace and, if you have multiple RAID / IO Controllers, move the INDEX tablespace to a separate
controller.
7. Note that the Target INDEX Tablespace may increase, if you enable more query indexes in your data warehouse.
The following table summarizes uncompressed space allocation estimates in a data warehouse by its target data volume
range:
Target Data Volume SMALL MEDIUM LARGE
Target DATA Tablespace 50 Gb and higher 300 Gb and higher 1 Tb and higher
Stage Tablespace 50+ Gb 200+ Gb 1+ Tb
Total Warehouse Size 200 Gb and higher 800 Gb and higher 2.8 Tb and higher
Important! You should use Locally Managed tablespaces with the AUTOALLOCATE clause. DO NOT use UNIFORM extent size, as it may cause excessive space consumption and result in slower query performance.
Use the standard (primary) block size for your warehouse tablespaces. DO NOT build your warehouse on non-standard block size tablespaces.
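A locally managed, AUTOALLOCATE data tablespace can be created as follows (a sketch; the tablespace name, datafile path and sizes are illustrative):

```sql
-- Locally managed tablespace with system-managed (AUTOALLOCATE) extents
CREATE TABLESPACE BIAPPS_DATA
  DATAFILE '/u01/oradata/DWH/biapps_data01.dbf' SIZE 10G AUTOEXTEND ON NEXT 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```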
###########################################################################
# Oracle BI Applications - init.ora template
# This file contains a listing of init.ora parameters for 11.2 / Exadata
###########################################################################
db_name = <database name>
control_files = /<dbf file loc>/ctrl01.dbf, /<dbf file loc>/ctrl02.dbf
db_block_size = 8192 # or 16384 (for better compression)
db_block_checking = FALSE
db_block_checksum = TYPICAL
deferred_segment_creation = TRUE
user_dump_dest = /<DUMP_HOME>/admin/<dbname>/udump
background_dump_dest = /<DUMP_HOME>/admin/<dbname>/bdump
core_dump_dest = /<DUMP_HOME>/admin/<dbname>/cdump
max_dump_file_size = 20480
processes = 500
sessions = 4
db_files = 1024
session_max_open_files = 100
dml_locks = 1000
cursor_sharing = EXACT
cursor_space_for_time = FALSE
session_cached_cursors = 500
open_cursors = 1000
db_writer_processes = 2
aq_tm_processes = 1
job_queue_processes = 2
timed_statistics = true
statistics_level = typical
sga_max_size = 45G
sga_target = 40G
shared_pool_size = 2G
shared_pool_reserved_size = 100M
workarea_size_policy = AUTO
pre_page_sga = FALSE
pga_aggregate_target = 16G
log_checkpoint_timeout = 3600
log_checkpoints_to_alert = TRUE
log_buffer = 10485760
undo_management = AUTO
undo_tablespace = UNDOTS1
undo_retention = 90000
parallel_adaptive_multi_user = FALSE
parallel_max_servers = 128
parallel_min_servers = 32
# ------------------- MANDATORY OPTIMIZER PARAMETERS ----------------------
star_transformation_enabled = TRUE
query_rewrite_enabled = TRUE
query_rewrite_integrity = TRUSTED
_b_tree_bitmap_plans = FALSE
optimizer_autostats_job = FALSE
Set deferred_segment_creation = TRUE to defer segment creation until the first record is inserted. Refer to the init.ora template in this document.
You should benchmark query performance prior to implementing the changes in your Production environment.
4. Verify that your generated explain (and execution) plans use hash join operators rather than nested loops.
Important! You should conduct comprehensive testing with all recommended techniques in place before dropping your query
indexes.
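The plan verification can be scripted with DBMS_XPLAN, for example (a sketch; the join column is a hypothetical illustration, not taken from the BI Applications schema definitions):

```sql
-- Generate and display the plan; look for HASH JOIN rather than NESTED LOOPS
EXPLAIN PLAN FOR
SELECT f.ROW_WID
FROM   W_SALES_ORDER_LINE_F f
JOIN   W_PARTY_D d ON d.ROW_WID = f.CUSTOMER_WID;  -- CUSTOMER_WID is illustrative

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```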
The Exadata Storage Server will cache data for W_PARTY_D table more aggressively and will try to keep the data from this
table longer than cached data from other tables.
Important! Use manual Flash Cache pinning only for the most common critical tables.
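On Exadata, pinning a table in Smart Flash Cache is done through the table's storage clause, for example:

```sql
-- Keep W_PARTY_D blocks in Exadata Smart Flash Cache more aggressively
ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE KEEP);
```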
High Availability with Oracle Data Guard and Physical Standby Database
Oracle Data Guard configuration contains a primary database and supports up to nine standby databases. A standby database
is a copy of a production database, created from its backup. There are two types of standby databases, physical and logical.
A physical standby database must be physically identical to its primary database on a block-for-block basis. Data Guard
synchronizes a physical standby database with its primary one by applying the primary database redo logs. The standby
database must be kept in recovery mode for Redo Apply. The standby database can be opened in read-only mode in-between
redo synchronizations.
The advantage of a physical standby database is that Data Guard applies the changes very fast using low-level mechanisms and
bypassing SQL layers.
A logical standby database is created as a copy of a primary database, but it later can be altered to a different structure. Data
Guard synchronizes a logical standby database by transforming the data from the primary database redo logs into SQLs and
executing them in the standby database.
A logical standby database has to be open at all times to allow Data Guard to perform SQL updates.
Important! A primary database must run in ARCHIVELOG mode at all times.
Data Guard with the Physical Standby Database option provides both an efficient, comprehensive disaster recovery and a reliable high availability solution to Oracle BI Applications customers. Redo Apply for the Physical Standby option synchronizes a standby database much faster compared to SQL Apply for a Logical Standby. OBIEE does not require write access to the BI Applications Data Warehouse either for executing end user logical SQL queries or for developing additional content in the RPD or Web Catalog.
The internal benchmarks on low-range, outdated hardware showed four times faster Redo Apply on a physical standby database compared to the ETL execution on the primary database:
Step Name                                  Row Count   Redo Size   Primary DB Run Time   Redo Apply Time
SDE_ORA_SalesProductDimension_Full           2621803      621 Mb              01:59:31          00:10:20
SDE_ORA_CustomerLocationDimension_Full       4221350      911 Mb              04:11:07          00:16:35
SDE_ORA_SalesOrderLinesFact_Full            22611530    12791 Mb              09:17:19          03:16:04
Create Index W_SALES_ORDER_LINE_F_U1             n/a      610 Mb              00:24:31          00:08:23
Total                                       29454683    14933 Mb              15:52:28          03:51:22
The target hardware was intentionally configured on a low-range Sun server, with both Primary and Standby databases deployed on the same server, to imitate a heavy incremental load. Modern production systems with primary and standby databases deployed on separate servers are expected to deliver up to 8-10 times better Redo Apply time on a physical standby database, compared to the ETL execution time on the primary database.
The diagram below describes Data Guard configuration with Physical Standby database:
- The primary instance runs in “FORCE LOGGING” mode and serves as a target database for routine incremental ETL or
any maintenance activities such as patching or upgrade.
- The Physical Standby instance runs in read-only mode during ETL execution on the Primary database.
- When the incremental ETL load into the Primary database is over, DBA schedules the downtime or blackout window
on the Standby database for applying redo logs.
- DBA shuts down OBIEE tier and switches the Physical Standby database into ‘RECOVERY’ mode.
- DBA starts Redo Apply in Data Guard to apply the generated redo logs to the Physical Standby Database.
- DBA opens Physical Standby Database in read-only mode and starts OBIEE tier:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
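The mode switches in the steps above map to the following SQL sequence on the standby (a sketch; without the Active Data Guard option the standby must be restarted in MOUNT mode before Redo Apply can resume):

```sql
-- Switch the standby from read-only into recovery mode and start Redo Apply
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Once the redo backlog is applied, stop Redo Apply and open read-only again
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
```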
Easy-to-manage switchover and failover capabilities in Oracle Data Guard allow quick role reversals between primary and standby, so customers can consider switching OBIEE from the Standby to the Primary, and then start applying redo logs to the Standby instance. In such a configuration the downtime can be minimized to two short switchovers:
- Switch OBIEE from Standby to Primary after ETL completion into Primary database, and before starting Redo Apply
into Standby database.
- Switch OBIEE from Primary to Standby before starting another ETL.
Additional considerations for deploying Oracle Data Guard with Physical Standby for Oracle BI Applications:
1. 'FORCE LOGGING' mode would increase the incremental load time into a Primary database, since Oracle would log index rebuild DDL operations.
2. The Primary database has to run in ARCHIVELOG mode to capture all REDO changes.
3. Such a deployment results in a more complex configuration; it also requires additional hardware to keep two large volume databases and store daily archived logs.
However, it offers these benefits:
1. High Availability Solution to Oracle BI Applications Data Warehouse
2. Disaster recovery and complete data protection
3. Reliable backup solution
Conclusion
This document consolidates the best practices and recommendations for improving performance of Oracle Business Intelligence Applications Version 11g. This list of areas for performance improvement is not complete. If you observe any performance issues with your Oracle BI Applications implementation, you should trace the various components and carefully benchmark any recommendations or solutions discussed in this article or other sources before implementing the changes in the production environment.
Oracle Business Intelligence Applications Version 11g.x Performance Recommendations
January 2015
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com