Oracle Database 10g Memory Advisors

• SGA Target Advice (introduced in 10gR2):


– V$SGA_TARGET_ADVICE view
– Estimates the DB time for different SGA sizes based on
current size
• PGA Target Advice (introduced in 9iR1):
– V$PGA_TARGET_ADVICE view
– Predicts the PGA cache hit ratio for different PGA sizes
– Time column EST_TIME added in 11gR1
• For all advisors, STATISTICS_LEVEL must be set to at
least TYPICAL
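For example, the advisor views can be queried directly (a sketch; run as a privileged user):

SELECT sga_size, sga_size_factor, estd_db_time
FROM v$sga_target_advice
ORDER BY sga_size;

SELECT pga_target_for_estimate, pga_target_factor,
       estd_pga_cache_hit_percentage
FROM v$pga_target_advice
ORDER BY pga_target_for_estimate;

The first query lists the estimated DB time for candidate SGA sizes around the current size; the second lists the estimated PGA cache hit percentage for candidate PGA_AGGREGATE_TARGET values.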

8 - 22 Copyright © 2007, Oracle. All rights reserved.

Oracle Database 10g Memory Advisors


• In Oracle Database 10g, the SGA Advisor shows the improvement in DB Time that can be
achieved for a particular setting of the total SGA size. This advisor allows you to reduce trial
and error in setting the SGA size. The advisor data is stored in the
V$SGA_TARGET_ADVICE view.
• V$PGA_TARGET_ADVICE predicts how the PGA cache hit percentage displayed by the
V$PGASTAT performance view would be impacted if the value of the
PGA_AGGREGATE_TARGET parameter were changed. The prediction is performed for various
values of the PGA_AGGREGATE_TARGET parameter, selected around its current value. The
advice statistic is generated by simulating the past workload run by the instance. In 11g, a new
column, EST_TIME, is added, corresponding to the CPU and I/O time it takes to process the
bytes.

Oracle Database 11g: New Features for Administrators 8 - 22


Automatic Memory Management Overview
[Slide graphic comparing memory layouts. Under "10g&11g" (separate targets), PGA memory
(untunable PGA, SQL areas, free memory) is governed by the PGA target, while SGA memory
(buffer cache, large pool, shared pool, Java pool, Streams pool, other SGA) is governed by the
SGA target; memory does not move between the two as the workload changes between OLTP and
batch. Under "11g" (Automatic Memory Management), a single memory target spans both the
PGA and the SGA, so memory can be redistributed between them as the workload shifts.]

Automatic Memory Management Overview


With Automatic Memory Management, the system causes an indirect transfer of memory from
SGA to PGA and vice versa. It automates the sizing of PGA and SGA according to your
workload.
This indirect memory transfer relies on the OS mechanism for freeing shared memory. Once
memory is released to the OS, other components can allocate it by requesting memory from the
OS. Currently, this is implemented on Linux, Solaris, HP-UX, AIX, and Windows.
Basically, you set your memory target for the database instance, and the system then tunes to the
target memory size, redistributing memory as needed between the system global area (SGA) and
aggregate program global area (PGA).
The slide shows you the differences between the Oracle Database 10g mechanism and the new
Automatic Memory management with Oracle Database 11g.



Oracle Database 11g Memory Parameters

[Slide graphic: memory parameter hierarchy.
MEMORY_MAX_TARGET (with SGA_MAX_SIZE alongside it)
  MEMORY_TARGET
    SGA_TARGET
      Auto-tuned: SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE,
      JAVA_POOL_SIZE, STREAMS_POOL_SIZE
      Others: DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE,
      DB_nK_CACHE_SIZE, LOG_BUFFER_SIZE, RESULT_CACHE_SIZE
    PGA_AGGREGATE_TARGET]


Oracle 11g Database Memory Sizing Parameters


The above graphic shows you the memory initialization parameter hierarchy. Although you only
have to set MEMORY_TARGET to trigger Automatic Memory Management, you can still set
lower bound values for various caches. If the child parameters are set by the user, they become the
minimum values below which that component is not auto-tuned.
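For example, the following enables Automatic Memory Management while guaranteeing floors for the buffer cache and shared pool (a sketch; the sizes are illustrative):

ALTER SYSTEM SET MEMORY_TARGET = 900M;
-- With AMM active, these child parameters act as minimum sizes, not fixed sizes
ALTER SYSTEM SET DB_CACHE_SIZE = 200M;
ALTER SYSTEM SET SHARED_POOL_SIZE = 150M;

The buffer cache and shared pool can still grow beyond 200M and 150M, but auto-tuning will never shrink them below those values.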



Automatic Memory Management Overview

[Slide graphic: two views of an 11g instance, each with a Memory Max Target of 350M. Memory
Target starts at 250M; after the command below, Memory Target grows to 300M without a
restart:]

ALTER SYSTEM SET MEMORY_TARGET=300M;

Automatic Memory Management Overview


The simplest way to manage memory is to allow the database to automatically manage and tune it
for you. To do so (on most platforms), you set only a target memory size initialization parameter
(MEMORY_TARGET) and a maximum memory size initialization parameter
(MEMORY_MAX_TARGET). Because the target memory initialization parameter is dynamic,
you can change the target memory size at any time without restarting the database. The maximum
memory size serves as an upper limit so that you cannot accidentally set the target memory size
too high. Because certain SGA components either cannot easily shrink or must remain at a
minimum size, the database also prevents you from setting the target memory size too low.



Auto Memory Parameter Dependency
[Slide flowchart of the parameter dependencies (MT = MEMORY_TARGET,
MMT = MEMORY_MAX_TARGET, ST = SGA_TARGET, PAT = PGA_AGGREGATE_TARGET,
SMS = SGA_MAX_SIZE):
• MT > 0 (both SGA and PGA can grow and shrink automatically):
  – MMT = 0 → MMT = MT
  – ST > 0 and PAT > 0 → these are minimum possible values; ST + PAT <= MT <= MMT
  – ST > 0 and PAT = 0 → PAT = MT - ST
  – ST = 0 and PAT > 0 → ST = min(MT - PAT, SMS)
  – ST = 0 and PAT = 0 → ST = 60% of MT, PAT = 40% of MT
• MT = 0 and MMT > 0 → MT can be dynamically changed later
• MT = 0:
  – ST > 0 → SGA and PGA are separately auto-tuned
  – ST = 0 → only PGA is auto-tuned; SGA and PGA cannot grow and shrink automatically]

Auto Memory Parameter Dependency


The above flowchart describes the relationships between the various memory sizing parameters.
If MEMORY_TARGET is set to a non-zero value:
• If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the
minimum values for the sizes of SGA and the PGA respectively. MEMORY_TARGET can
take values from SGA_TARGET + PGA_AGGREGATE_TARGET to
MEMORY_MAX_TARGET.
• If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, we will still auto-
tune both parameters. PGA_AGGREGATE_TARGET will be initialized to a value of
(MEMORY_TARGET-SGA_TARGET).
• If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, we will still auto-
tune both parameters. SGA_TARGET will be initialized to a value of
min(MEMORY_TARGET-PGA_AGGREGATE_TARGET, SGA_MAX_SIZE (if set by the
user)) and will auto-tune sub-components.
• If neither is set, they will be auto-tuned without any minimum or default values. We will
have a policy of distributing the total server memory in a fixed ratio to the SGA and PGA
during initialization. The policy is to give 60% for SGA and 40% for PGA at startup.



Automatic Memory Parameter Dependency (Continued)
If MEMORY_TARGET is not set, or is explicitly set to 0 (the default value in 11g):
• If SGA_TARGET is set, we will only auto-tune the sizes of the subcomponents of the SGA.
PGA is auto-tuned whether or not it is explicitly set. However, the SGA as a whole
(SGA_TARGET) and the PGA (PGA_AGGREGATE_TARGET) are not auto-tuned; that is,
they do not grow or shrink automatically.
• If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, we follow the same
policy as today: PGA is auto-tuned, the SGA is not, and parameters for some of its
subcomponents have to be set explicitly (in place of SGA_TARGET).
• If only MEMORY_MAX_TARGET is set, MEMORY_TARGET defaults to 0 in a manual
setup using a text initialization file. Auto-tuning defaults to the 10gR2 behavior for the SGA
and PGA.
• If SGA_MAX_SIZE is not set by the user, it is internally set to MEMORY_MAX_TARGET
when that parameter is set by the user (independent of whether SGA_TARGET is user set).

In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and
include a value for MEMORY_TARGET, the database automatically sets
MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for
MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the
MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change
MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of
MEMORY_MAX_TARGET.
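The two text-initialization-file cases described above might look like this (a sketch; sizes are illustrative):

# Case 1: MEMORY_MAX_TARGET omitted -- it is derived from MEMORY_TARGET
memory_target=800M

# Case 2: MEMORY_TARGET omitted -- it defaults to 0 until changed dynamically
memory_max_target=1G
# After startup: ALTER SYSTEM SET MEMORY_TARGET=600M;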



Enabling Automatic Memory Management


Enabling Automatic Memory Management


Note: The above terminology is being revamped (given that it is BETA!). ‘Current Total Memory
Size’ should read ‘Current Total Memory Size for Auto-tuning’.
You can enable Automatic Memory Management using Enterprise Manager as shown above.
From the Database Home page, click the Server tab. On the Server page, click the Memory
Parameters link in the Database Configuration section. This takes you to the Memory Parameters
page. On this page, you can click the Enable button to enable Automatic Memory Management.
The value in the ‘Total Memory Size for Automatic Memory Tuning’ text box is set by default to
current SGA+PGA size. You can set it to anything more than this but less than the value in
‘Maximum Memory Size’ box.
Note: On the Memory Parameters page, you also have the possibility to specify the Maximum
Memory Size. If you change this field, the database is automatically restarted for your change to
take effect.



Monitor Automatic Memory Management


Monitor Automatic Memory Management


Once Automatic Memory Management is enabled, you can see a new graphical representation of
the history of your memory size components in the Allocation History section of the Memory
Parameters page. The green part in the first histogram is Tunable PGA only and the brownish-
orange part is all of SGA. The dark blue below in the lower histogram is the Shared Pool size and
light blue corresponds to Buffer Cache.
The change above shows you the possible redistribution of your memory after the execution of a
PL/SQL program that consumes untunable PGA. Both the SGA and the tunable PGA might shrink
so that the untunable portion can consume the extra memory. Note that when the SGA shrinks, its
subcomponents shrink around the same time.
On this page, you can also access the memory target advisor by clicking the Advice button. This
advisor gives you the possible DB time improvement for various total memory sizes.
Note: You can also look at the memory target advisor data using the
V$MEMORY_TARGET_ADVICE view.
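As with the other advisors, this view can be queried directly; for example:

SELECT memory_size, memory_size_factor, estd_db_time
FROM v$memory_target_advice
ORDER BY memory_size;

Rows with an ESTD_DB_TIME noticeably lower than the row where MEMORY_SIZE_FACTOR = 1 suggest total memory sizes worth considering.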



Monitor Automatic Memory Management

If you wish to monitor the decisions made by Automatic Memory Management via the command
line:
• V$MEMORY_DYNAMIC_COMPONENTS has the current
status of all memory components
• V$MEMORY_RESIZE_OPS has a circular history buffer of
the last 800 memory resize requests
• All SGA and PGA equivalents still in place for backward
compatibility
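For example, to see the current components and the most recent resize operations:

SELECT component, current_size, min_size, max_size
FROM v$memory_dynamic_components;

SELECT component, oper_type, oper_mode,
       initial_size, target_size, final_size, status
FROM v$memory_resize_ops
ORDER BY start_time;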



DBCA and Automatic Memory Management


DBCA and Automatic Memory Management


11gR1 DBCA has new options to accommodate Automatic Memory Management. Use the Memory tab of the
Initialization Parameters screen to set the initialization parameters that control how the database
manages its memory usage. You can choose from two basic approaches to memory management:
• Typical, which requires very little configuration and allows the database to manage how it
uses a percentage of your overall system memory. Select Typical to create a database with
minimal configuration or user input. This option is sufficient for most environments and for
DBAs who are inexperienced with advanced database creation procedures. Enter a value in
the Percentage field. This value represents a percentage of your total available system
memory (shown in parentheses) that will be allocated to the Oracle Database. Based on this
value, DBCA allocates the most efficient amount of memory to the database memory
structures. Click Show Memory Distribution to see how much memory DBCA will assign to
both the SGA and PGA. Note that the memory allocation also includes another 40MB, which
is required by the operating system to run the database executable.
• Custom, which requires more configuration, but provides you with more control over how the
database uses the available system memory. To allocate specific amounts of memory to the
SGA and PGA, select Automatic. To customize how the SGA memory is distributed among
the SGA memory structures (buffer cache, shared pool, …), select Manual and enter specific
values for each SGA subcomponent. You can review and modify these initialization
parameters later in DBCA.
Note: When using DBUA or manual DB creation, the memory_target parameter defaults to 0.



Summary

• Unifies system (SGA) and process (PGA) memory management
• Single dynamic parameter for all database memory
• Automatically adapts to workload changes
• Maximizes memory utilization
• Helps eliminate out-of-memory errors



Statistic Preferences Overview
[Slide graphic: the optimizer statistics gathering task resolves preferences per level — statement,
table, schema, database, and global — with the inner levels taking precedence and table-level
settings exposed in DBA_TAB_STAT_PREFS. Preferences: CASCADE, DEGREE,
ESTIMATE_PERCENT, METHOD_OPT, NO_INVALIDATE, GRANULARITY, PUBLISH,
INCREMENTAL, STALE_PERCENT. The DBA manages them through the DBMS_STATS
procedures set_table_prefs, set_schema_prefs, set_database_prefs, and set_global_prefs (with
corresponding get, delete, export, and import operations), and gather_*_stats consumes them.
For example:]

exec dbms_stats.set_table_prefs('SH','SALES','STALE_PERCENT','13');


Statistic Preferences Overview


The automated statistics-gathering feature was introduced in Oracle Database 10g Release 1 to
reduce the burden of maintaining optimizer statistics. However, there were cases where you had to
disable it and run your own scripts instead. One reason was the lack of object level control.
Whenever you found a small subset of objects for which the default statistics gathering options did
not work well, you had to lock their statistics and analyze them separately using your own options.
For example, the feature that automatically tries to determine an adequate sample size
(ESTIMATE_PERCENT=AUTO_SAMPLE_SIZE) does not work well against columns that
contain data with very high frequency skews. The only way to get around this issue in that case
was to manually specify the sample size in your own script.
The Statistic Preferences feature in Oracle Database 11g introduces flexibility so that you can rely
more on the automated statistics-gathering feature to maintain the optimizer statistics when some
objects require settings that are different from the database default.
This feature allows you to associate statistics gathering options that override the default behavior
of the GATHER_*_STATS procedures and the automated Optimizer Statistics Gathering task at
the object or schema level. As a DBA, you can use the DBMS_STATS package to manage the
gathering options shown above. Basically, you can set, get, delete, export, and import
those preferences at the table, schema, database, and global level. Global preferences are used for
tables that do not have any preferences whereas database preferences are used to set preferences
on all tables. The preference values specified in various ways take precedence from the outer
circles to the inner ones as shown on the above slide.
The last three highlighted options are new in Oracle Database 11g Release 1:
• PUBLISH is used to decide whether to publish the statistics to the dictionary or to store them
in a private area beforehand.
Setting Global Preferences With Enterprise Manager


Setting Global Preferences With Enterprise Manager


It is possible to control global preference settings using Enterprise Manager.
You can do so from the Statistics Options page. You can access this page from the Database
Home page by clicking the Server tab, then the Manage Optimizer Statistics link, and then the
Statistics Options link.
Once on the Statistics Options page, you can change the global preferences from the Gather
Optimizer Statistics Default Options section. Once done, click the Apply button.



Partitioned Tables and Incremental Statistics Overview

[Slide graphic: a table partitioned by quarter (Q1 1970 through Q1 2007). With
GRANULARITY=GLOBAL% and INCREMENTAL=FALSE, global statistics are regenerated by
scanning every partition. With GRANULARITY=GLOBAL% and INCREMENTAL=TRUE, only
the changed partitions are scanned and the global statistics are maintained incrementally.]

Partitioned Tables and Incremental Statistics Overview


For a partitioned table, the system maintains both the statistics on each partition and the overall
statistics for the table. Generally, if the table is partitioned on a range, very few partitions go
through data modifications (DML). For example, suppose we have a table that stores the sales
transactions. We partition the table on sales date with each partition containing transactions for a
quarter. Most of the DML activity happens on the partition that stores the transactions of the
current quarter. The data in other partitions remain unchanged. Currently the system keeps track
of DML monitoring information at table and (sub)partition level. Statistics are gathered only for
those partitions (in the above example, the partition for the current quarter) that are significantly
changed (current threshold is 10%) since last statistics gathering. However, global statistics are
gathered by scanning the entire table, which makes global statistics very expensive on partitioned
tables especially when some partitions are stored in slow devices and not modified often.
Oracle Database 11g can expedite the gathering of certain global statistics like the number of
distinct values. In contrast to the traditional way of scanning the entire table, there is a new
possible mechanism to maintain certain global statistics by scanning only those partitions that
have been changed and still make use of the statistics gathered before for those partitions that are
unchanged. In short, these global statistics can be maintained incrementally.
The DBMS_STATS package currently allows you to specify the granularity on a partitioned table. For
example, you can specify auto, global, global and partition, all, partition, and subpartition. If the
granularity specified includes GLOBAL and the table is marked as INCREMENTAL for its
gathering options, the global statistics are gathered using the incremental mechanism. Moreover,
statistics for changed partitions are gathered as well, no matter whether you specified
PARTITION in the granularity or not.
Note: The new mechanism does not incrementally maintain histograms and density global
statistics.
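Based on the description above, incremental global statistics can be sketched as follows (SH.SALES stands in for the quarterly partitioned table):

-- Mark the table for incremental maintenance of its global statistics
exec dbms_stats.set_table_prefs('SH','SALES','INCREMENTAL','TRUE');

-- Only partitions changed since the last gather are rescanned; global
-- statistics are derived from the per-partition data already on hand
exec dbms_stats.gather_table_stats('SH','SALES', granularity=>'GLOBAL AND PARTITION');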
Hash-based Sampling for Column Statistics

• Computing column statistics is the most expensive step in statistics gathering
• The row sampling technique gives inaccurate results
with skewed data distribution
• New approximate counting technique used when
ESTIMATE_PERCENT is set to AUTO_SAMPLE_SIZE
– You are encouraged to use AUTO_SAMPLE_SIZE
• Old row sample technique used otherwise


Hash-based Sampling for Column Statistics


For query optimization, it is essential to have a good estimate of the number of distinct values. By
default, and without histograms, the optimizer uses the number of distinct values to evaluate the
selectivity of a predicate of a column. The algorithm used in Oracle Database 10g computes the
number of distinct values with a SQL statement counting the number of distinct values found on a
sample of the underlying table. With Oracle Database 10g when gathering column statistics you
have two choices:
1. Use a small sample size, which leads to less accurate results but has a short execution time.
2. Use a large sample or full scan, which leads to very accurate results but has a very long
execution time.
In Oracle Database 11g we have a new method for gathering column statistics that provides
similar accuracy to a scan with the execution time of a small sample (1-5%). This new technique
is used when you invoke a procedure from DBMS_STATS with ESTIMATE_PERCENT
gathering option set to AUTO_SAMPLE_SIZE, which is the default value. The row sampling
based algorithm will be used for collection of number of distinct values if you specify any value
other than AUTO_SAMPLE_SIZE. This is to preserve the old behavior when you specify
sampling percentage.
Note: With Oracle Database 11g, you are encouraged to use AUTO_SAMPLE_SIZE. The new
evaluation mechanism fixes the following two most encountered issues in Oracle Database 10g:
• The auto option stops too early and generates inaccurate statistics, and the user would specify
a higher sample size than the one used by auto.
• The auto option stops too late and the performance is bad, and the user would specify a lower
sample size than the one used by auto.
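Because AUTO_SAMPLE_SIZE is the default, omitting ESTIMATE_PERCENT (or passing it explicitly, as sketched below for the sample SH.CUSTOMERS table) selects the new algorithm:

-- New approximate-counting NDV algorithm
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', -
     estimate_percent => dbms_stats.auto_sample_size);

-- Any explicit percentage forces the old row-sampling algorithm instead
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent => 10);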
Multi-column Statistics Overview

You can play the following mini lesson to better understand multi-column statistics:

Multi-column statistics Overview (see URL in notes)


Multi-column Statistics Overview


To better understand the following slides, you can spend some time playing the following mini
lesson at:
http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20Curriculum-
Public/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR
1_Mini_Lessons/11gR1_Beta1_Multi_Col_Stats_JFV/11gR1_Beta1_Multi_Col_Stats_viewlet_s
wf.html



Multi-column Statistics Overview
[Slide graphic: table AUTO with correlated columns MAKE and MODEL. (1) Without extended
statistics, the optimizer computes S(MAKE ∧ MODEL) = S(MAKE) × S(MODEL). (2) Create the
column group:

select dbms_stats.create_extended_stats('jfv','auto','(make,model)')
from dual;

(3) Gather statistics for it:

exec dbms_stats.gather_table_stats('jfv','auto',-
method_opt=>'for all columns size 1 for columns (make,model) size 3');

(4) The group appears in DBA_STAT_EXTENSIONS as SYS_STUF3GLKIOP5F4B0BTTCFTMX0W,
and the optimizer now uses S(MAKE ∧ MODEL) = S(MAKE,MODEL).]

Multi-column Statistics Overview


With Oracle Database 10g, the query optimizer takes into account correlation between columns
when computing selectivity of multiple predicates in the following limited cases:
• If all the columns of a conjunctive predicate match all the columns of a concatenated index
key, and the predicates are equality, then the optimizer uses the number of distinct keys
(NDK) in the index for estimating selectivity, as 1/NDK.
• When DYNAMIC_SAMPLING is set to level 4, the query optimizer uses dynamic sampling
to estimate the selectivity of complex predicates involving several columns from the same
table. However, the sample size is very small and it increases parsing time. So, the sample is
likely to be statistically inaccurate and may cause more harm than good.
In all other cases the optimizer assumes that the values of columns used in a complex predicate
are independent from each other. It estimates the selectivity of a conjunctive predicate by
multiplying the selectivity of individual predicates. This approach always results in under-
estimation of the selectivity. To circumvent this issue, Oracle Database 11g allows you to collect,
store and use the following statistics to capture functional dependency between two or more
columns, also called groups of columns: Number of distinct values, number of nulls, frequency
histograms, and density.
For example, consider a table AUTO where you store information about cars. Columns MAKE
and MODEL are highly correlated in that MODEL determines MAKE. This is a strong
dependency, and both columns should be considered by the optimizer as highly correlated. You
can signal that correlation to the optimizer using the CREATE_EXTENDED_STATS function
shown in the above example, and then compute the statistics for all columns including the ones
for the correlated groups you created.
Note:
Expression Statistics Overview
[Slide graphic: for expressions such as upper(MODEL) on table AUTO, creating a function-based
index is still possible:

CREATE INDEX upperidx ON AUTO(upper(MODEL))

but the recommended approach is to create extension statistics. Without them, the optimizer
assumes S(upper(MODEL)) = 0.01; with them, the extension appears in
DBA_STAT_EXTENSIONS as SYS_STU3FOQ$BDH0S_14NGXFJ3TQ50:

select dbms_stats.create_extended_stats('jfv','auto','(upper(model))') from dual;

exec dbms_stats.gather_table_stats('jfv','auto',-
method_opt=>'for all columns size 1 for columns (upper(model)) size 3');]

Expression Statistics Overview


Predicates involving expressions on columns are a big issue for the query optimizer. When
computing selectivity on predicates of the form function(Column) = constant, the optimizer
assumes a static selectivity value of one percent. Obviously this approach is wrong and causes the
optimizer to produce suboptimal plans. The query optimizer has been extended to better handle
such predicates in limited cases, where functions preserve the data distribution characteristics
of the column and thus allow the optimizer to use the column statistics. An example of such a
function is TO_NUMBER. Further enhancements were made to evaluate built-in functions during
query optimization to derive better selectivity using dynamic sampling. Lastly, the optimizer
collects statistics on virtual columns created to support function-based indexes.
However, these solutions are either limited to a certain class of functions, or work only for
expressions used to create function-based indexes. With expression statistics in Oracle Database
11g, you have a more general solution that includes arbitrary user-defined functions and does not
depend on the presence of function-based indexes. As shown in the above example, this feature
relies on the virtual column infrastructure to create statistics on expressions of columns.



Deferred Statistics Publishing Overview

You can play the following mini lesson to better understand statistic preferences and statistics
publishing:

Deferred Statistics Publishing Overview (see URL in notes)


Deferred Statistics Publishing Overview


To better understand the following slides, you can spend some time playing the following mini
lesson at:
http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20Curriculum-
Public/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR
1_Mini_Lessons/11gR1_Beta1_Publish_Stats_JFV/11gR1_Beta1_Publish_Stats_viewlet_swf.htm
l



Deferred Statistics Publishing Overview
[Slide graphic: on PROD, the optimizer uses the dictionary statistics when
OPTIMIZER_PRIVATE_STATISTICS=FALSE and the private statistics when
OPTIMIZER_PRIVATE_STATISTICS=TRUE. Gathering with PUBLISH=FALSE and
GATHER_*_STATS writes to the private statistics area, visible in DBA_TAB_PRIVATE_STATS;
PUBLISH_PRIVATE_STATS makes them current. EXPORT_PRIVATE_STATS, expdp/impdp,
and IMPORT_TABLE_STATS move the private statistics to a TEST system for validation.]

Deferred Statistics Publishing Overview


By default, the statistics gathering operation automatically stores the new statistics in the data
dictionary each time it completes the iteration for one object (table, partition, sub-partition, or
index). The optimizer sees them as soon as they are written to the data dictionary, and these new
statistics are called current statistics. This automatic publishing can be a concern for the DBA, who is
never sure of the aftermath of the new statistics, days or even weeks later. In addition, the
statistics used by the optimizer can be inconsistent if, for example, table statistics are published
before the statistics of its indexes, partitions or sub-partitions.
To get around these potential issues in Oracle Database 11g Release 1, you can separate the
gathering step from the publication step of optimizer statistics. There are two benefits from
separating the two steps:
• Support the statistics gathering operation as an atomic transaction: The statistics of all tables
and their dependent objects (indexes, partitions, subpartitions) in a schema are published at
the same time. This new model has two nice properties: The optimizer always has a
consistent view of the statistics, and if for some reason the gathering step fails in mid-flight,
it can resume from where it left off when restarted, using the
DBMS_STATS.RESUME_GATHER_STATS procedure.
• Allow the DBA to validate the new statistics by running all or part of the workload using the
newly gathered statistics on a test system, then, when satisfied with the test results, proceed
to the publishing step to make them current in the production environment.
When you set the gather option PUBLISH to FALSE, gathered statistics are stored in the
private statistics tables instead of being made current. These private statistics are accessible from a
number of views: {ALL|DBA|USER}_{TAB|COL|IND|TAB_HISTGRM}_PRIVATE_STATS.
To test the private statistics, you basically have two options:
Deferred Statistics Publishing Example

exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false'); 1

exec dbms_stats.gather_table_stats('SH','CUSTOMERS'); 2

alter session set optimizer_use_private_statistics = true; 3

Execute your workload from same session … 4

exec dbms_stats.publish_private_stats('SH','CUSTOMERS'); 5


Deferred Statistics Publishing Example


1) You use the SET_TABLE_PREFS procedure to set the PUBLISH option to FALSE. This
prevents the next statistics gathering operation from automatically publishing statistics as current.
Per the first statement, this is true only for the SH.CUSTOMERS table.
2) Then you gather statistics on SH.CUSTOMERS table in the private area of the dictionary.
3) Now, you can test the new set of private statistics from your session by setting the
OPTIMIZER_USE_PRIVATE_STATISTICS to TRUE.
4) Which you do at step four by issuing queries against SH.CUSTOMERS.
5) If you are satisfied with the test results, you can use the PUBLISH_PRIVATE_STATS
procedure to render the private statistics for SH.CUSTOMERS current.
Note: To analyze the differences between the private statistics and the current ones, you could
export the private statistics to your own statistics table, and then use the new
DBMS_STATS.DIFF_TABLE_STATS function.
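A sketch of that comparison, assuming a user statistics table named MYSTATS (the function surfaced as DIFF_TABLE_STATS_IN_STATTAB in the production release, and that name is used here):

-- Create a staging table and export the statistics into it
exec dbms_stats.create_stat_table('SH','MYSTATS');
exec dbms_stats.export_table_stats('SH','CUSTOMERS', stattab=>'MYSTATS');

-- Report where the exported statistics differ from the current ones
SELECT report, maxdiffpct
FROM TABLE(dbms_stats.diff_table_stats_in_stattab('SH','CUSTOMERS','MYSTATS'));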



[Slide: Discovery Engine — marked "Not yet implemented!" in this beta course material.]


Summary

In this lesson, you should have learned how to:


• Use ADDM to perform cluster-wide performance
analysis
• Set up SGA sizing initialization parameters
• Set up Automatic Memory Management
• Use memory advisors



Partitioning and Storage-Related
Enhancements



Objectives

After completing this lesson, you should be able to:


• Implement the new partitioning methods
• Employ Data Compression
• Create a SQL Access Advisor analysis session using
Enterprise Manager
• Create a SQL Access Advisor analysis session using
PL/SQL
• Set up a SQL Access Advisor analysis to get partition
recommendations



Partitioning Enhancements

• Interval Partitioning
• System Partitioning
• Composite Partitioning enhancements
• Virtual Column-Based Partitioning
• Reference Partitioning


Partitioning Enhancements
Partitioning is an important tool for managing large databases. Partitioning allows the DBA to
employ a "divide and conquer" methodology for managing database tables, especially as those
tables grow. Partitioned tables allow a database to scale for very large datasets while maintaining
consistent performance, without unduly impacting administrative or hardware resources.
Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB
or 10 TB of data, partitioning can speed up data access by orders of magnitude.
With the introduction of Oracle Database 11g, the DBA will find a useful assortment of
partitioning enhancements. These enhancements include:
• Addition of Interval Partitioning
• Addition of System Partitioning
• Composite Partitioning enhancements
• Addition of Virtual Column-Based Partitioning
• Addition of Reference Partitioning

Oracle Database 11g: New Features for Administrators 9 - 3


Interval Partitioning

• Interval partitioning is an extension of range


partitioning.
• Partitions of a specified interval are created when
inserted data exceeds all of the range partitions.
• At least one range partition must be created.
• Interval partitioning automates the creation of range
partitions

9-4 Copyright © 2007, Oracle. All rights reserved.

Interval Partitioning
Before the introduction of interval partitioning, the DBA was required to explicitly define the
range of values for each partition. The problem is that explicitly defining the bounds for each
partition does not scale as the number of partitions grows.
Interval partitioning is an extension of range partitioning which instructs the database to
automatically create partitions of a specified interval when data inserted into the table exceeds all
of the range partitions. You must specify at least one range partition. The range partitioning key
value determines the high value of the range partitions, which is called the transition point, and
the database creates interval partitions for data beyond that transition point.
Interval partitioning fully automates the creation of range partitions. Managing the creation of
new partitions can be a cumbersome and highly repetitive task. This is especially true for
predictable additions of partitions covering small ranges, such as adding new daily partitions.
Interval partitioning automates this operation by creating partitions on demand.
When using interval partitioning, consider the following restrictions:
• You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
• Interval partitioning is not supported for index-organized tables.
• You cannot create a domain index on an interval-partitioned table.

Oracle Database 11g: New Features for Administrators 9 - 4


Interval Partitioning Example

CREATE TABLE SH.SALES_INTERVAL


PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2002', 'dd-mm-yyyy')),
PARTITION P1 VALUES LESS THAN (TO_DATE('1-1-2003', 'dd-mm-yyyy')),
PARTITION P2 VALUES LESS THAN (TO_DATE('1-7-2003', 'dd-mm-yyyy')),
PARTITION P3 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE time_id < TO_DATE('1-1-2004',
'dd-mm-yyyy');

SALES_INTERVAL table:
P1 P2 P3 (range components) | Pi1 Pi2 … (interval components)

9-5 Copyright © 2007, Oracle. All rights reserved.

Interval Partitioning Example


Consider the example above, which illustrates the creation of an interval partitioned table. The
original CREATE TABLE statement specifies four partitions with varying widths. This portion of
the table is range partitioned. It also specifies that above the transition point of 1-Jan-2004,
partitions are created with a width of one month. These partitions are interval partitioned.
Partition Pi1 will automatically be created when a row with a time_id value corresponding to
January 2004 is inserted into the table. The high bound of partition P3 represents the transition
point: P3 and all partitions below it (P0, P1, and P2 in this example) are in the range section,
while all partitions above it fall into the interval section. The only argument to the INTERVAL
clause is a constant of INTERVAL type if the partitioning column is of DATE type, and a constant
of NUMBER type if the partitioning column is of NUMBER type. Currently, only partitioned tables
in which the partitioning column is of DATE or NUMBER type are supported.

Oracle Database 11g: New Features for Administrators 9 - 5


Moving the Transition Point

Interval partitioned table created as shown below:

CREATE TABLE SALES_INTERVAL


PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE 1 = 0;

Moving transition point using MERGE clause:


ALTER TABLE SH.SALES_INTERVAL MERGE PARTITIONS
P3, P4 INTO PARTITION P4;

9-6 Copyright © 2007, Oracle. All rights reserved.

Moving the Transition Point


As a result of maintenance operations, a partition may move from the interval section to the
range section, thus shifting the transition point upwards. For example, if a user merges two
partitions in the interval section, the width of the resulting partition is no longer the same as the
interval, so that partition must be moved to the range section. If it is the first partition in the
interval section, the semantics are straightforward, but consider this example. A table is created
as follows:
CREATE TABLE SALES_INTERVAL
PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE 1 = 0;
Rows come in for January 2004, March 2004, and April 2004, and the database creates three
corresponding partitions; let's call them P1, P3, and P4 respectively. Then the statement below is
executed:
ALTER TABLE SALES_INTERVAL MERGE PARTITIONS P3, P4 INTO PARTITION P4;
After the merge, the table has three partitions: P0 for values less than 1-JAN-2004, P1 for rows
in January 2004, and P4 for rows in February, March, and April 2004.

Oracle Database 11g: New Features for Administrators 9 - 6


System Partitioning

System Partitioning:
• Enables application-controlled partitioning for selected
tables
• Provides the benefits of partitioning but the partitioning
and data placement are controlled by the application
• Does not employ partitioning keys like other
partitioning methods
• Does not support partition pruning in the traditional
sense

9-7 Copyright © 2007, Oracle. All rights reserved.

System Partitioning
System partitioning enables application-controlled partitioning for arbitrary tables. The database
simply provides the ability to break down a table into meaningless partitions. All other aspects of
partitioning are controlled by the application. System partitioning provides the well-known
benefits of partitioning (scalability, availability, and manageability), but the partitioning and
actual data placement are controlled by the application.
The most fundamental difference between system partitioning and other methods is that system
partitioning does not have any partitioning keys. Consequently the distribution or mapping of the
rows to a particular partition is not implicit. Instead the user specifies the partition to which a row
maps by using partition extended syntax when inserting a row.
Since system partitioned tables do not have a partitioning key, the usual performance benefits of
partitioned tables will not be available for system partitioned tables. Specifically, there is no
support for traditional partition pruning, partition wise joins etc. Partition pruning will be
achieved by accessing the same partitions in the system partitioned tables as those that were
accessed in the base table.
System partitioned tables provide the manageability advantages of equi-partitioning. For example,
a nested table can be created as a system partitioned table that has the same number of partitions
as the base table. A domain index can be backed up by a system partitioned table that has the
same number of partitions as the base table. This gives the following benefits:
• When a partition is accessed in the base table, the corresponding partition can be accessed in
the system partitioned table. Pruning will be based on the base table pruning.
• Any DDL performed on the base table can be duplicated on the system partitioned table. For
example, if a partition is dropped on the base table, the corresponding partition can be dropped
in the system partitioned table.

Oracle Database 11g: New Features for Administrators 9 - 7
System Partitioning Example

CREATE TABLE systab (c1 integer, c2 integer)


PARTITION BY SYSTEM
(
PARTITION p1 TABLESPACE tbs_1,
PARTITION p2 TABLESPACE tbs_2,
PARTITION p3 TABLESPACE tbs_3,
PARTITION p4 TABLESPACE tbs_4
);

Inserting into the system partitioned table:


INSERT INTO systab PARTITION (p1) VALUES (4,5);
/*Partition p1 */
INSERT INTO systab PARTITION (1) VALUES (4,5); /* First
partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno
bound to 1/p1 */

9-8 Copyright © 2007, Oracle. All rights reserved.

System Partitioning Example


The syntax in the example above creates a table with four partitions. Each partition can have
different physical attributes. INSERT and MERGE statements must use partition extended syntax
to identify the particular partition a row should go into. For example, tuple (4,5) can be inserted
into any one of the above four partitions:
INSERT INTO systab PARTITION (p1) VALUES (4,5); /* Partition p1 */
INSERT INTO systab PARTITION (1) VALUES (4,5); /* First partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 1/p1 */
Or:
INSERT INTO systab PARTITION (p2) VALUES (4,5); /* Partition p2 */
INSERT INTO systab PARTITION (2) VALUES (4,5); /* Second partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 2/p2 */
As the examples above show, the partition extended syntax supports both numbers and bind
variables. The use of bind variables is important because it allows cursor sharing of insert
statements. Deletes and updates do not require the partition extended syntax. However, since
there is no partition pruning, if the partition extended syntax is omitted, the entire table will be
scanned to execute the operation. Again, this example highlights the fact that there is no implicit
mapping from tuples to any partition.
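Because the database cannot prune a system partitioned table on its own, it helps to name the target partition in deletes and updates as well; a sketch against the SYSTAB table above:

```sql
-- Without a PARTITION clause, every partition of SYSTAB is scanned
UPDATE systab SET c2 = c2 + 1 WHERE c1 = 4;

-- Naming partition p1 restricts the statement to that partition's blocks
UPDATE systab PARTITION (p1) SET c2 = c2 + 1 WHERE c1 = 4;
DELETE FROM systab PARTITION (p1) WHERE c1 = 4;
```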

Oracle Database 11g: New Features for Administrators 9 - 8


System Partitioning Guidelines

The following operations are supported for system


partitioned tables:
• Partition maintenance operations and other DDL
operations
• Creation of local indexes
• Creation of local bitmapped indexes
• Creation of global indexes
• All DML operations
• INSERT AS SELECT with partition extended syntax:
INSERT INTO <table_name> PARTITION
(<partition_name|number|bind_var>) <subquery>

9-9 Copyright © 2007, Oracle. All rights reserved.

System Partitioning Guidelines


The following operations are supported for system partitioned tables:
• Partition maintenance operations and other DDLs (See exceptions below)
• Creation of local indexes.
• Creation of local bitmapped indexes
• Creation of global indexes
• All DML operations
• INSERT AS SELECT with partition extended syntax:
INSERT INTO <table_name> PARTITION (<partition_name|number|bind_var>)
<subquery>
Because of the peculiar requirements of system partitioning, the following operations are not
supported for system partitioning:
• Unique local indexes are not supported because they require a partitioning key.
• CREATE TABLE AS SELECT
Since there is no partitioning method, it is not possible to distribute rows to partitions.
Instead the user should first create the table and then insert rows into each partition.
• INSERT INTO <tabname> <subquery> (without partition extended syntax)
• SPLIT PARTITION operations

Oracle Database 11g: New Features for Administrators 9 - 9


Composite Partitioning Enhancements

• Range Top Level
– Range-Range
• List Top Level
– List-List
– List-Hash
– List-Range
• Interval Top Level
– Interval-Range
– Interval-List
– Interval-Hash

(Diagram: a table partitioned at the top level by RANGE, LIST, or INTERVAL, with each
partition divided into subpartitions SP1–SP4 by LIST, RANGE, or HASH)

9 - 10 Copyright © 2007, Oracle. All rights reserved.

Composite Partitioning Enhancements


Prior to the release of Oracle Database 11g, the only composite partitioning methods supported
were Range-List and Range-Hash. With this new release, Range-Range is supported, and List
partitioning can be a top-level partitioning method for composite partitioned tables, giving us the
List-List, List-Hash, and List-Range composite methods. With the introduction of Interval
partitioning, Interval-Range, Interval-List, and Interval-Hash are now supported composite
partitioning methods.
Range-Range Partitioning
Composite range-range partitioning enables logical range partitioning along two dimensions; for
example, partition by order_date and range subpartition by shipping_date.
List-Range Partitioning
Composite list-range partitioning enables logical range subpartitioning within a given list
partitioning strategy; for example, list partition by country_id and range subpartition by
order_date.
List-Hash Partitioning
Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object; for
example, to enable partition-wise joins.
List-List Partitioning
Composite list-list partitioning enables logical list partitioning along two dimensions; for
example, list partition by country_id and list subpartition by sales_channel.

Oracle Database 11g: New Features for Administrators 9 - 10


Range-Range Partitioning Example

CREATE TABLE sales (prod_id NUMBER(6) NOT NULL, cust_id NUMBER


NOT NULL,time_id DATE NOT NULL, channel_id char(1) NOT NULL,
promo_id NUMBER (6) NOT NULL,quantity_sold NUMBER(3) NOT NULL,
amount_sold NUMBER(10,2) NOT NULL)
PARTITION BY RANGE (time_id)
SUBPARTITION BY RANGE (cust_id)
SUBPARTITION TEMPLATE
(
SUBPARTITION sp1 VALUES LESS THAN (50000),
SUBPARTITION sp2 VALUES LESS THAN (100000),
SUBPARTITION sp3 VALUES LESS THAN (150000),
SUBPARTITION sp4 VALUES LESS THAN (MAXVALUE)
)
(
PARTITION VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY')),
PARTITION VALUES LESS THAN (TO_DATE('1-JUL-1999','DD-MON-YYYY')),
PARTITION VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY')),
PARTITION VALUES LESS THAN (TO_DATE('1-JAN-2000','DD-MON-YYYY'))
);

9 - 11 Copyright © 2007, Oracle. All rights reserved.

Composite Range-Range Partitioning


Composite Range-Range partitioning enables logical range partitioning along two dimensions. In
the example above, the table SALES is created and range partitioned on time_id. Using a
subpartition template, the SALES table is subpartitioned by range using cust_id for the
subpartition key. Because of the template, all partitions will have the same number of
subpartitions with the same bounds as defined by the template. If no template is specified, a single
default subpartition bound by MAXVALUE (range) or the DEFAULT value (list) will be created.
Although the example above illustrates the Range-Range methodology, the other new composite
partitioning methods use similar syntax and statement structure. All of the composite partitioning
methods fully support the existing partition pruning methods for queries involving predicates on
the subpartitioning key.

Oracle Database 11g: New Features for Administrators 9 - 11


Virtual Column-Based Partitioning

• Virtual column values are derived by the evaluation of a


function or expression.
• Virtual columns can be defined within a CREATE or
ALTER table operation.
CREATE TABLE employees
(employee_id number(6) not null,

total_compensation as (salary *( 1+commission_pct))

• Virtual column values are not physically stored in the


table row on disk, but are evaluated on demand.
• Virtual columns can be indexed, used in queries, DML
and DDL statements like other table columns.
• Tables and indexes can be partitioned on a virtual
column and even statistics can be gathered upon them.
9 - 12 Copyright © 2007, Oracle. All rights reserved.

Virtual Column-Based Partitioning


Columns of a table whose values are derived by computation of a function or an expression are
known as virtual columns. These columns can be specified during a CREATE, or ALTER table
operation and can be defined to be either visible or hidden. Virtual columns share the same SQL
namespace as other real table columns and conform to the data type of the underlying expression
that describes it. These columns can be used in queries like any other table columns providing a
simple, elegant and consistent mechanism of accessing expressions in a SQL statement.
The values for virtual columns are not physically stored in the table row on disk, rather they are
evaluated on demand. The functions or expressions describing the virtual columns should be
deterministic and pure, meaning the same set of input values should return the same output values.
Virtual columns can be used like any other table columns. They can be indexed, used in queries,
DML and DDL statements. Tables and indexes can be partitioned on a virtual column and even
statistics can be gathered upon them.
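As an illustration of adding a virtual column after table creation, a sketch against an EMPLOYEES table that does not yet have the column (the NVL guard against NULL commissions is an assumption not shown in the slide):

```sql
ALTER TABLE employees ADD (
  total_compensation AS (salary * (1 + NVL(commission_pct, 0)))
);

-- The virtual column behaves like any other column in queries and indexes
SELECT employee_id, total_compensation FROM employees;
CREATE INDEX emp_total_comp_ix ON employees (total_compensation);
```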
You can use virtual column partitioning to partition key columns defined on virtual columns of a
table. Frequently, business requirements to logically partition objects do not match existing
columns in a one-to-one manner. With the introduction of Oracle Database 11g, partitioning has
been enhanced to allow a partitioning strategy defined on virtual columns, thus enabling a more
comprehensive match of the business requirements.

Oracle Database 11g: New Features for Administrators 9 - 12


Virtual Column-Based Partitioning Example

CREATE TABLE employees


(employee_id number(6) not null, first_name varchar2(30),
last_name varchar2(40) not null, email varchar2(25),
phone_number varchar2(20), hire_date date not null,
job_id varchar2(10) not null, salary number(8,2),
commission_pct number(2,2), manager_id number(6),
department_id number(4),
total_compensation as (salary *( 1+commission_pct))
)
PARTITION BY RANGE (total_compensation)
(
PARTITION p1 VALUES LESS THAN (50000),
PARTITION p2 VALUES LESS THAN (100000),
PARTITION p3 VALUES LESS THAN (150000),
PARTITION p4 VALUES LESS THAN (MAXVALUE)
);

9 - 13 Copyright © 2007, Oracle. All rights reserved.

Virtual Column-Based Partitioning Example


Consider the example in the slide above. The EMPLOYEES table is created using the standard
CREATE TABLE syntax. The total_compensation column is a virtual column calculated
by multiplying salary by one plus commission_pct. The PARTITION BY RANGE clause
then declares total_compensation (a virtual column) to be the partitioning key of the
EMPLOYEES table.
Partition pruning takes place for virtual column partition keys when the predicates on the
partitioning key are of the following types:
• Equality or Like
• List
• Range
• TBL$
• Partition extended names
Given a join operation between two tables, the optimizer recognizes when partition-wise join (full
or partial) is applicable, decides whether to use it or not and annotate the join properly when it
decides to use it. This applies to both serial and parallel cases.
In order to recognize full partition-wise join the optimizer relies on the definition of equi-
partitioning of two objects, this definition includes the equivalence of the virtual expression on
which the tables were partitioned.

Oracle Database 11g: New Features for Administrators 9 - 13


Reference Partitioning

• A table can now be partitioned based on the partitioning


method of a table referenced in its referential constraint
• The partitioning key is resolved through an
existing parent-child relationship
• The partitioning key is enforced by active
primary key or foreign key constraints
• Tables with a parent-child relationship
can be equi-partitioned by inheriting the
partitioning key from the parent table
without duplicating the key columns

9 - 14 Copyright © 2007, Oracle. All rights reserved.

Reference Partitioning
Reference partitioning provides the ability to partition a table based on the partitioning scheme of
the table referenced in its referential constraint. The partitioning key is resolved through an
existing parent-child relationship, enforced by active primary key or foreign key constraints. The
benefit of this is that tables with a parent-child relationship can be logically equi-partitioned by
inheriting the partitioning key from the parent table without duplicating the key columns. The
logical dependency also automatically cascades partition maintenance operations, making
application development easier and less error-prone.
To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in
the CREATE TABLE statement. This clause specifies the name of a referential constraint and this
constraint becomes the partitioning referential constraint that is used as the basis for reference
partitioning in the table.
As with other partitioned tables, you can specify object-level default attributes, and can optionally
specify partition descriptors that override the object-level defaults on a per-partition basis.

Oracle Database 11g: New Features for Administrators 9 - 14


Reference Partitioning Example
CREATE TABLE orders
( order_id NUMBER(12), order_date TIMESTAMP WITH
LOCAL TIME ZONE, order_mode VARCHAR2(8),
customer_id NUMBER(6), order_status NUMBER(2),
order_total NUMBER(8,2), sales_rep_id NUMBER(6),
promotion_id NUMBER(6),
CONSTRAINT orders_pk PRIMARY KEY(order_id)
)
PARTITION BY RANGE(order_date)
(PARTITION Q1_2005 VALUES LESS THAN
(TO_DATE('01-APR-2005','DD-MON-YYYY')),
PARTITION Q2_2005 VALUES LESS THAN
(TO_DATE('01-JUL-2005','DD-MON-YYYY')),
PARTITION Q3_2005 VALUES LESS THAN
(TO_DATE('01-OCT-2005','DD-MON-YYYY')),
PARTITION Q4_2005 VALUES LESS THAN
(TO_DATE('01-JAN-2006','DD-MON-YYYY'))
);

9 - 15 Copyright © 2007, Oracle. All rights reserved.

Reference Partitioning Example


The example in the slide above creates a table called ORDERS, which is range-partitioned on
order_date. It is created with four partitions: Q1_2005, Q2_2005, Q3_2005, and
Q4_2005. This table will be referenced in the creation of a reference partitioned table on the
next slide.

Oracle Database 11g: New Features for Administrators 9 - 15


Reference Partitioning Example
(Continued)

CREATE TABLE order_items


( order_id NUMBER(12) NOT NULL,
line_item_id NUMBER(3) NOT NULL,
product_id NUMBER(6) NOT NULL,
unit_price NUMBER(8,2),
quantity NUMBER(8),
CONSTRAINT order_items_fk
FOREIGN KEY(order_id) REFERENCES orders(order_id)
)
PARTITION BY REFERENCE(order_items_fk);

9 - 16 Copyright © 2007, Oracle. All rights reserved.

Reference Partitioning Example (continued)


The reference-partitioned child table ORDER_ITEMS example above is created with four
partitions, Q1_2005, Q2_2005, Q3_2005, and Q4_2005, where each partition contains the
order_items rows corresponding to orders in the respective parent partition.
If partition descriptors are provided, then the number of partitions described must be exactly equal
to the number of partitions or subpartitions in the referenced table. If the parent table is a
composite partitioned table, then the table will have one partition for each subpartition of its
parent; otherwise the table will have one partition for each partition of its parent.
Partition bounds cannot be specified for the partitions of a reference-partitioned table.
The partitions of a reference-partitioned table can be named. If a partition is not explicitly named,
then it will inherit its name from the corresponding partition in the parent table, unless this
inherited name conflicts with one of the explicit names given. In this case, the partition will have
a system-generated name.
Partitions of a reference-partitioned table will collocate with the corresponding partition of the
parent table, if no explicit tablespace is specified for the reference-partitioned table's partition.
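One way to confirm the inherited equi-partitioning is to compare the partitions of parent and child in the data dictionary; a sketch, run as the owning user:

```sql
SELECT table_name, partition_name, partition_position
FROM   user_tab_partitions
WHERE  table_name IN ('ORDERS', 'ORDER_ITEMS')
ORDER  BY table_name, partition_position;
-- ORDER_ITEMS shows the same four partition names (Q1_2005 .. Q4_2005)
-- inherited from ORDERS, because none were named explicitly
```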

Oracle Database 11g: New Features for Administrators 9 - 16


Compression

• Table compression is optimized for relational data.


• There is virtually no negative impact on the
performance of queries against compressed data.
• There can be a significant positive impact on
queries accessing large amounts of data.
• The data is compressed by eliminating duplicate
values in a database block.
• All database features and functions that work on
regular blocks also work on compressed blocks.

9 - 17 Copyright © 2007, Oracle. All rights reserved.

Compression
The cost of disk systems can be a very large portion of building and maintaining large data
warehouses. Oracle Database helps reduce this cost by compressing the data and it does so
without the typical trade-offs of space savings versus access time to data.
The table compression technique used is very advantageous for large data warehouses. It has
virtually no negative impact on the performance of queries against compressed data; in fact, it
may have a significant positive impact on queries accessing large amounts of data, as well as
on data management operations such as backup and recovery. Consider that you need to
retrieve less data from disk in order to satisfy a query or perform a backup, which simply
reduces the amount of work that needs to be performed.
The data is compressed by eliminating duplicate values in a database block. Compressed data
stored in a database block is self-contained. That is, all the information needed to re-create the
uncompressed data in a block is available within that block. Duplicate values in all the rows
and columns in a block are stored once at the beginning of the block, in what is called a
symbol table for that block. All occurrences of such values are replaced with a short reference
to the symbol table. With the exception of a symbol table at the beginning, compressed
database blocks look very much like regular database blocks.

Oracle Database 11g: New Features for Administrators 9 - 17


Compression (continued)
As a result of the unique compression techniques, there is no expensive decompression
operation needed to access compressed table data. This means that the decision as to when to
apply compression does not need to take a possible negative impact on queries into account.
Compression is done as part of bulk-loading data into the database. The overhead associated
with the initial compression may be an increase in CPU resources of up to 50%. This is the
primary trade-off that needs to be taken into account when considering compression.

Oracle Database 11g: New Features for Administrators 9 - 18


Data Compression Levels

Three levels of compression are available:


• LOW
– LOW uses HSC, or the native Oracle
compression algorithm
– Gives the best CPU performance but
not the best compression ratio
• MEDIUM
– Employs LZO, level 1
– Gives a better compression ratio but
CPU utilization is higher
• HIGH
– Uses the ZLIB level 9 algorithm
– Has the best compression ratio but the highest CPU
utilization

9 - 19 Copyright © 2007, Oracle. All rights reserved.

Data Compression Specifics


For compressing the user data, three algorithms are available for the DBA to choose between.
The three methods balance better compression against CPU usage.
• HSC, the native Oracle compression algorithm, gives the best CPU performance but not the
best compression ratio.
• LZO, level 1, gives a better compression ratio but poorer CPU performance.
• ZLIB, level 9, gives the best compression ratio but the poorest CPU performance.
The DBA can choose between three levels of compression, HIGH, MEDIUM, and LOW,
depending on what is favored most: space or CPU utilization.
• LOW uses the Oracle native compression algorithm HSC.
• MEDIUM employs LZO level 1.
• HIGH uses ZLIB level 9.
The compression works at the block level. Compressing the data can be costly from the CPU
point of view, but decompression is done very fast. For LZO and ZLIB, the data portion must be
decompressed whenever something in that area needs to be accessed. For this, larger in-memory
buffers are allocated. When reading or modifying the data is finished, the buffer is compressed
again. Of course, this can be costly whenever DML operations are called.
If an ALTER TABLE statement is issued to turn on compression, only blocks generated after this
statement will be compressed. The user can also switch between the compression algorithms
using this ALTER TABLE statement but, again, only the new blocks will use the new algorithm.
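A sketch of turning compression on for an existing table (the table name is illustrative, and the exact clause forms vary by release; a MOVE rewrites the segment so existing blocks are compressed as well):

```sql
-- Only blocks written after this statement are compressed
ALTER TABLE sh.sales_history COMPRESS;

-- Rebuild the segment so that existing blocks are compressed too
ALTER TABLE sh.sales_history MOVE COMPRESS;
```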
Oracle Database 11g: New Features for Administrators 9 - 19
SQL Access Advisor: Overview

DBA: "What partitions, indexes, and MVs do I need to optimize my entire workload?"

Workload → SQL Access Advisor → Solution
• No expertise required
• Component of the CBO
• Provides an implementation script
9 - 20 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor: Overview


Defining appropriate access structures to optimize SQL queries has always been a concern for
an Oracle DBA. As a result, there have been many papers and scripts written as well as high-end
tools developed to address the matter. In addition, with the development of partitioning and
materialized view technology, deciding on access structures has become even more complex.
As part of the manageability improvements in Oracle Database 10g and 11g, SQL Access Advisor
has been introduced to address this very critical need.
SQL Access Advisor identifies and helps resolve performance problems relating to the execution
of SQL statements by recommending which indexes, materialized views, materialized view logs,
or partitions to create, drop, or retain. It can be run from Database Control or from the command
line by using PL/SQL procedures.
SQL Access Advisor takes an actual workload as input, or the Advisor can derive a hypothetical
workload from the schema. It then recommends the access structures for faster execution path. It
provides the following advantages:
• Does not require you to have expert knowledge
• Bases decision making on rules that actually reside in the cost-based optimizer
• Is synchronized with the optimizer and Oracle database enhancements
• Is a single advisor covering all aspects of SQL access methods
• Provides simple, user-friendly GUI wizards
• Generates scripts for implementation of recommendations

Oracle Database 11g: New Features for Administrators 9 - 20


SQL Access Advisor: Usage Model

Workload sources: SQL cache, hypothetical workload, SQL Tuning Set (STS),
with filter options
→ SQL Access Advisor →
Recommendations: indexes, materialized views, materialized view logs,
partitioned objects

9 - 21 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor: Usage Model


SQL Access Advisor takes as input a workload that can be derived from multiple sources:
• SQL cache, to take current content of V$SQL
• Hypothetical, to generate a likely workload from your dimensional model. This option is
interesting when your system is being initially designed.
• SQL Tuning Sets, from the workload repository
SQL Access Advisor also provides powerful workload filters that you can use to target the tuning.
For example, a user can specify that the advisor should look at only the 30 most resource-
intensive statements in the workload, based on optimizer cost. For the given workload, the advisor
then does the following:
• Simultaneously considers index solutions, materialized view solutions, partition solutions, or
combinations of all three
• Considers storage for creation and maintenance costs
• Does not generate drop recommendations for partial workloads
• Optimizes materialized views for maximum query rewrite usage and fast refresh
• Recommends materialized view logs for fast refresh
• Recommends partitioning for tables, indexes, and materialized views.
• Combines similar indexes into a single index
• Generates recommendations that support multiple workload queries
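Programmatically, this usage model maps onto the DBMS_ADVISOR package: create a task, link it to a workload, and execute it. A sketch, assuming an existing SQL Tuning Set named MY_STS owned by SH (both names are placeholders):

```sql
DECLARE
  l_task_id   NUMBER;
  l_task_name VARCHAR2(30) := 'MY_SAA_TASK';  -- placeholder task name
BEGIN
  -- 1. Create a SQL Access Advisor task
  DBMS_ADVISOR.CREATE_TASK(DBMS_ADVISOR.SQLACCESS_ADVISOR,
                           l_task_id, l_task_name);

  -- 2. Link the task to an existing SQL Tuning Set as its workload
  DBMS_ADVISOR.ADD_STS_REF(l_task_name, 'SH', 'MY_STS');

  -- 3. Analyze the workload and generate recommendations
  DBMS_ADVISOR.EXECUTE_TASK(l_task_name);
END;
/
```

The same task name is then used to read the recommendations from the advisor views or to produce an implementation script with DBMS_ADVISOR.GET_TASK_SCRIPT.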



Possible Recommendations

Recommendation                                                Comprehensive  Limited

Add new (partitioned) index on table or materialized view.    YES            YES
Drop an unused index.                                         YES            NO
Modify an existing index by changing the index type.          YES            NO
Modify an existing index by adding columns at the end.        YES            YES
Add a new (partitioned) materialized view.                    YES            YES
Drop an unused materialized view (log).                       YES            NO
Add a new materialized view log.                              YES            YES
Modify an existing materialized view log to add new columns   YES            YES
  or clauses.
Partition an existing unpartitioned table or index.           YES            YES

9 - 22 Copyright © 2007, Oracle. All rights reserved.

Possible Recommendations
SQL Access Advisor carefully considers the overall impact of recommendations and makes
recommendations by using only the known workload and supplied information. Two workload
analysis methods are available:
• Comprehensive: With this approach, SQL Access Advisor addresses all aspects of tuning
partitions, materialized views, indexes, and materialized view logs. It assumes that the
workload contains a complete and representative set of application SQL statements.
• Limited: Unlike the comprehensive workload approach, a limited workload approach
assumes that the workload contains only problematic SQL statements. Thus, advice is sought
for improving the performance of a portion of an application environment.
When comprehensive workload analysis is chosen, SQL Access Advisor forms a better set of
global tuning adjustments, but the effect may be a longer analysis time. As shown in the table, the
chosen workload approach determines the type of recommendations made by the advisor.
Note: Partition recommendations work only on tables that have at least 10,000 rows, and on
workloads that have some predicates and joins on columns of type NUMBER or DATE; partitioning
advice can be generated only on columns of these types. In addition, partitioning advice can be
generated only for single-column INTERVAL and HASH partitioning. INTERVAL partitioning
recommendations can be output in RANGE syntax, but INTERVAL is the default. HASH
partitioning is done only to leverage partition-wise joins.



SQL Access Advisor Session: Initial Options

9 - 23 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor Session: Initial Options


The next few slides describe a typical SQL Access Advisor session. You can access the SQL
Access Advisor wizard through the Advisor Central link on the Database Home page or through
individual alerts or performance pages that may include a link to facilitate solving a performance
problem. The SQL Access Advisor wizard consists of several steps during which you supply the
SQL statements to tune and the types of access methods you want to use.
Use the SQL Access Advisor Default Options page to select a template or task from which to
populate default options before starting the wizard. You can choose Continue to start the wizard
or Cancel to go back to the Advisor Central page. Choose View Options to view a list of the
options for the specified template or task.
Note: The SQL Access Advisor may be interrupted while generating recommendations, allowing
the results to be reviewed.
For general information about using SQL Access Advisor, see the "Overview of the SQL Access
Advisor" section in the "SQL Access Advisor" chapter of the Oracle Data Warehousing Guide.



SQL Access Advisor Session: Initial Options

9 - 24 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor Session: Initial Options (Continued)


If you choose the Inherit Options from a Task or Template option on the Initial Options page, you
can select an existing task or an existing template from which to inherit the SQL Access
Advisor’s options. By default, the SQLACCESS_EMTASK template is used.
You can view the various options defined by a task or a template by selecting the corresponding
object and clicking View Options.



SQL Access Advisor: Workload Source

9 - 25 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor: Workload Source


You can choose your workload from three different sources:
• Current and Recent SQL Activity: This source corresponds to SQL statements that are still
cached in your SGA.
• Use an existing SQL Tuning Set: You also have the possibility to create and use a SQL
Tuning Set that holds your statements.
• Hypothetical Workload: This option provides a schema that allows the advisor to search for
dimension tables and produce a workload. This is very useful when initially designing your
schema.
Using the Filter Options section, you can further filter your workload source. Filter options are:
• Resource Consumption: Number of statements ordered by Optimizer Cost, Buffer Gets, CPU
Time, Disk Reads, Elapsed Time, or Executions
• Users
• Tables
• SQL Text
• Module IDs
• Actions
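In the DBMS_ADVISOR interface, these filters correspond to task parameters set before the task executes. A sketch, again using the placeholder task name MY_SAA_TASK (verify the parameter names against your release):

```sql
BEGIN
  -- Keep only the 30 most expensive statements, ranked by optimizer cost
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'ORDER_LIST', 'OPTIMIZER_COST');
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'SQL_LIMIT', 30);

  -- Consider only statements issued by these users (comma-separated list)
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'VALID_USERNAME_LIST', 'SH, OE');
END;
/
```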



SQL Access Advisor: Recommendation Options

9 - 26 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor: Recommendation Options


Use the Recommendation Options page to choose whether to limit the SQL Access Advisor to
recommendations based on a single access method. You can choose the types of structures to be
recommended by the advisor. If none of the three possible types is chosen, the advisor evaluates
existing structures instead of trying to recommend new ones.
You can use the Advisor Mode section to run the advisor in one of two modes. These modes
affect the quality of recommendations as well as the length of time required for processing. In
Comprehensive mode, the advisor searches a large pool of candidates, resulting in
recommendations of the highest quality. In Limited mode, the advisor runs quickly, limiting the
candidate recommendations by working on the highest-cost statements only.
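Both choices are also exposed as task parameters in DBMS_ADVISOR. A sketch with the placeholder task name MY_SAA_TASK:

```sql
BEGIN
  -- Restrict the structure types considered: any combination of INDEX,
  -- MVIEW, PARTITION, and TABLE may be listed; EVALUATION only rates
  -- existing structures without recommending new ones
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'ANALYSIS_SCOPE', 'INDEX, MVIEW');

  -- LIMITED for a fast pass over the highest-cost statements,
  -- COMPREHENSIVE for a full search of the candidate pool
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'MODE', 'COMPREHENSIVE');
END;
/
```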



SQL Access Advisor: Recommendation Options

9 - 27 Copyright © 2007, Oracle. All rights reserved.

SQL Access Advisor: Recommendation Options (Continued)


You can choose Advanced Options to show or hide options that allow you to set space
restrictions, tuning options, and default storage locations. Use the Workload Categorization section
to set options for workload volatility and scope. For workload volatility, you can choose to favor
read-only operations or you can consider the volatility of referenced objects when forming
recommendations. For workload scope, you can select Partial Workload, which will not include
recommendations to drop unused access structures, or Complete Workload, which does include
recommendations to drop unused access structures.
Use the Space Restrictions section to specify a hard space limit, which forces the advisor to
produce only recommendations with total space requirements that do not exceed the specified
limit. Use the Tuning Options section to specify options that tailor the recommendations made by
the advisor. The Prioritize Tuning of SQL Statements by dropdown list allows you to prioritize by
Optimizer Cost, Buffer Gets, CPU Time, Disk Reads, Elapsed Time, and Execution Count. Use
the Default Storage Locations section to override the defaults defined for schema and tablespace
locations. By default, indexes are placed in the schema and tablespace of the table they reference.
Materialized views are placed in the schema and tablespace of the user who executed one of the
queries that contributed to the materialized view recommendation.
Note: Oracle highly recommends that you specify the default schema and tablespaces for
materialized views.
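These storage-related wizard settings map onto task parameters as well. A sketch with placeholder tablespace names and the placeholder task MY_SAA_TASK:

```sql
BEGIN
  -- Hard space limit: cap the total space the recommendations may
  -- consume (100 MB here, expressed in bytes)
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'STORAGE_CHANGE', 104857600);

  -- Default locations for recommended indexes and materialized views;
  -- IDX_TS and MV_TS are placeholder tablespace names
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'DEF_INDEX_TABLESPACE', 'IDX_TS');
  DBMS_ADVISOR.SET_TASK_PARAMETER('MY_SAA_TASK', 'DEF_MVIEW_TABLESPACE', 'MV_TS');
END;
/
```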

