[Figure: Automatic Memory Management in Oracle Database 11g. MEMORY_MAX_TARGET bounds MEMORY_TARGET, which spans both the SGA (SGA_TARGET) and the PGA (PGA_AGGREGATE_TARGET). SGA memory includes the SQL areas, buffer cache, and large pool; PGA memory includes free PGA. Auto-tuned SGA components: SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, STREAMS_POOL_SIZE. Others (manually sized): DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE, LOG_BUFFER_SIZE, RESULT_CACHE_SIZE.]
[Figure: An 11g instance with MEMORY_MAX_TARGET fixed at 350M. MEMORY_TARGET is resized dynamically, here from 300M down to 250M, while MEMORY_MAX_TARGET stays unchanged.]
Automatic memory parameter dependencies (MT = MEMORY_TARGET, MMT = MEMORY_MAX_TARGET, ST = SGA_TARGET, PAT = PGA_AGGREGATE_TARGET, SMS = SGA_MAX_SIZE):

MT > 0 (SGA and PGA are auto-tuned):
• ST>0 & PAT>0: ST+PAT <= MT <= MMT; ST and PAT act as minimum possible values
• ST>0 & PAT=0: PAT = MT - ST
• ST=0 & PAT>0: ST = min(MT - PAT, SMS)
• ST=0 & PAT=0: ST = 60% of MT, PAT = 40% of MT

MT = 0:
• MMT>0: MT can be dynamically changed later
• ST>0: SGA and PGA are separately auto-tuned
• ST=0 & PAT>0: Only PGA is auto-tuned
• ST=0 & PAT=0: SGA and PGA cannot grow and shrink automatically
In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and
include a value for MEMORY_TARGET, the database automatically sets
MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for
MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the
MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change
MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of
MEMORY_MAX_TARGET.
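As a sketch of this behavior (the sizes are illustrative), a text initialization parameter file enabling Automatic Memory Management might contain:

```
# init.ora fragment -- illustrative values
memory_max_target=350M   # static upper bound for memory_target
memory_target=300M       # current target; tunable at run time
```

After startup, the target can then be resized dynamically, for example with `ALTER SYSTEM SET MEMORY_TARGET=250M;`, as long as the new value does not exceed MEMORY_MAX_TARGET.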
Statistic preferences can be set, got, deleted, exported, and imported at four levels through DBMS_STATS:
• set_global_prefs (global level)
• set_database_prefs (database level)
• set_schema_prefs (schema level)
• set_table_prefs (table level)

The available preferences are CASCADE, DEGREE, ESTIMATE_PERCENT, METHOD_OPT, NO_INVALIDATE, GRANULARITY, PUBLISH, INCREMENTAL, and STALE_PERCENT. They are honored by the gather_*_stats procedures. For example:

exec dbms_stats.set_table_prefs('SH','SALES','STALE_PERCENT','13');
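The effective value of a preference can be read back with DBMS_STATS.GET_PREFS, which resolves table, schema, database, and global settings in that order. A short sketch, continuing the example above:

```sql
-- Returns the table-level preference set above ('13');
-- falls back to schema/database/global settings if no
-- table-level preference exists.
select dbms_stats.get_prefs('STALE_PERCENT','SH','SALES') from dual;
```

A table-level preference can be removed again with DBMS_STATS.DELETE_TABLE_PREFS, after which GET_PREFS resolves to the next level up.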
[Figure: Incremental statistics. Partition-level statistics for partitions Q1 1970 through Q1 2007 are combined into global statistics without rescanning the whole table.]
Without additional information, the optimizer assumes that columns are independent and multiplies their individual selectivities: S(MAKE Λ MODEL) = S(MAKE) x S(MODEL). Extended statistics on the column group (MAKE, MODEL) of the AUTO table correct this:

select dbms_stats.create_extended_stats('jfv','auto','(make,model)')
from dual;

exec dbms_stats.gather_table_stats('jfv','auto',-
method_opt=>'for all columns size 1 for columns (make,model) size 3');

The extension is implemented as a hidden virtual column with a system-generated name such as SYS_STU3FOQ$BDH0S_14NGXFJ3TQ50, and is listed in DBA_STAT_EXTENSIONS. With the column group in place, the optimizer uses the combined selectivity directly: S(MAKE Λ MODEL) = S(MAKE,MODEL).

Expression statistics work the same way. For example, gathering statistics on upper(MODEL) gives the optimizer a real selectivity, such as S(upper(MODEL)) = 0.01, instead of a default guess:

exec dbms_stats.gather_table_stats('jfv','auto',-
method_opt=>'for all columns size 1 for columns (upper(model)) size 3');
By default, OPTIMIZER_PRIVATE_STATISTICS=FALSE and the optimizer uses only the published dictionary statistics. When the PUBLISH preference is set to FALSE for a table, GATHER_*_STATS (and IMPORT_TABLE_STATS) store private statistics instead of publishing them. Private statistics are visible in DBA_TAB_PRIVATE_STATS; they can be tested before being made current, moved to another database with expdp/impdp or EXPORT_PRIVATE_STATS, and published with PUBLISH_PRIVATE_STATS:

exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false'); 1
exec dbms_stats.gather_table_stats('SH','CUSTOMERS'); 2
exec dbms_stats.publish_private_stats('SH','CUSTOMERS'); 5

Steps 3 and 4 (not shown) test the private statistics before publication.
Partitioning Enhancements
Partitioning is an important tool for managing large databases. Partitioning allows the DBA to
employ a "divide and conquer" methodology for managing database tables, especially as those
tables grow. Partitioned tables allow a database to scale for very large datasets while maintaining
consistent performance, without unduly impacting administrative or hardware resources.
Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB
or 10 TB of data, partitioning can speed up data access by orders of magnitude.
With the introduction of Oracle Database 11g, the DBA will find a useful assortment of
partitioning enhancements. These enhancements include:
• Addition of Interval Partitioning
• Addition of System Partitioning
• Composite Partitioning enhancements
• Addition of Virtual Column-Based Partitioning
• Addition of Reference Partitioning
Interval Partitioning
Before the introduction of interval partitioning, the DBA was required to explicitly define the
range of values for each partition. The problem is that explicitly defining the bounds for each
partition does not scale as the number of partitions grows.
Interval partitioning is an extension of range partitioning which instructs the database to
automatically create partitions of a specified interval when data inserted into the table exceeds all
of the range partitions. You must specify at least one range partition. The range partitioning key
value determines the high value of the range partitions, which is called the transition point, and
the database creates interval partitions for data beyond that transition point.
Interval partitioning fully automates the creation of range partitions. Managing the creation of
new partitions can be a cumbersome and highly repetitive task. This is especially true for
predictable additions of partitions covering small ranges, such as adding new daily partitions.
Interval partitioning automates this operation by creating partitions on demand.
When using interval partitioning, consider the following restrictions:
• You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
• Interval partitioning is not supported for index-organized tables.
• You cannot create a domain index on an interval-partitioned table.
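A minimal sketch of an interval-partitioned table (table, column, and partition names are illustrative): a single DATE partitioning key, one mandatory range partition defining the transition point, and monthly intervals beyond it:

```sql
CREATE TABLE sales_interval
( prod_id  NUMBER
, time_id  DATE
, amount   NUMBER )
PARTITION BY RANGE (time_id)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION p1 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY')) );
```

An insert with a time_id beyond the transition point causes the database to create the covering monthly partition automatically; no ADD PARTITION maintenance is needed.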
[Figure: SALES_INTERVAL table. Explicitly defined range partitions P1, P2, P3 are followed by automatically created interval partitions Pi1, Pi2, … beyond the transition point.]
System Partitioning:
• Enables application-controlled partitioning for selected tables
• Provides the benefits of partitioning, but the partitioning and data placement are controlled by the application
• Does not employ partitioning keys like other partitioning methods
• Does not support partition pruning in the traditional sense
System Partitioning
System partitioning enables application-controlled partitioning for arbitrary tables. The database
simply provides the ability to break down a table into meaningless partitions. All other aspects of
partitioning are controlled by the application. System partitioning provides the well-known
benefits of partitioning (scalability, availability, and manageability), but the partitioning and
actual data placement are controlled by the application.
The most fundamental difference between system partitioning and other methods is that system
partitioning does not have any partitioning keys. Consequently the distribution or mapping of the
rows to a particular partition is not implicit. Instead the user specifies the partition to which a row
maps by using partition extended syntax when inserting a row.
Since system partitioned tables do not have a partitioning key, the usual performance benefits of
partitioned tables are not available for them. Specifically, there is no support for traditional
partition pruning, partition-wise joins, and so on. Partition pruning is achieved by accessing the
same partitions in the system partitioned table as those that were accessed in the base table.
System partitioned tables provide the manageability advantages of equi-partitioning. For example,
a nested table can be created as a system partitioned table that has the same number of partitions
as the base table. A domain index can be backed up by a system partitioned table that has the
same number of partitions as the base table. This gives the following benefits:
When a partition is accessed in the base table, the corresponding partition can be accessed in the
system partitioned table. Pruning will be based on the base table pruning.
Any DDL performed on the base table can be duplicated on the system partitioned table. For
example, if a partition is dropped on the base table, the corresponding partition can be dropped in
the system partitioned table.
Oracle Database 11g: New Features for Administrators 9 - 7
System Partitioning Example
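Because there is no partitioning key, rows must be directed to a partition explicitly with partition-extended syntax. A minimal sketch (table and partition names are illustrative):

```sql
CREATE TABLE syspart_demo
( c1 NUMBER
, c2 VARCHAR2(30) )
PARTITION BY SYSTEM
( PARTITION p1
, PARTITION p2 );

-- The application chooses the target partition on every insert:
INSERT INTO syspart_demo PARTITION (p1) VALUES (1, 'first');

-- An INSERT without the PARTITION clause fails on a system
-- partitioned table, because the database cannot map the row itself.
```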
Composite partitioning is also enhanced: LIST, RANGE, and HASH are available as subpartitioning methods, enabling new combinations such as Interval-Hash.
Reference Partitioning
Reference partitioning provides the ability to partition a table based on the partitioning scheme of
the table referenced in its referential constraint. The partitioning key is resolved through an
existing parent-child relationship, enforced by active primary key or foreign key constraints. The
benefit of this is that tables with a parent-child relationship can be logically equi-partitioned by
inheriting the partitioning key from the parent table without duplicating the key columns. The
logical dependency also automatically cascades partition maintenance operations, making
application development easier and less error-prone.
To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in
the CREATE TABLE statement. This clause specifies the name of a referential constraint and this
constraint becomes the partitioning referential constraint that is used as the basis for reference
partitioning in the table.
As with other partitioned tables, you can specify object-level default attributes, and can optionally
specify partition descriptors that override the object-level defaults on a per-partition basis.
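A minimal sketch of reference partitioning (table, column, and constraint names are illustrative): the child table inherits the parent's range partitioning through an enabled, NOT NULL foreign key, without storing the partitioning column itself:

```sql
CREATE TABLE orders
( order_id   NUMBER PRIMARY KEY
, order_date DATE )
PARTITION BY RANGE (order_date)
( PARTITION q1_2007 VALUES LESS THAN (TO_DATE('01-04-2007','DD-MM-YYYY')) );

CREATE TABLE order_items
( order_id NUMBER NOT NULL   -- FK column must be NOT NULL
, prod_id  NUMBER
, CONSTRAINT fk_items_order
    FOREIGN KEY (order_id) REFERENCES orders (order_id) )
PARTITION BY REFERENCE (fk_items_order);  -- names the partitioning constraint
```

Here order_items is equi-partitioned with orders, and partition maintenance on orders (such as adding a quarter) cascades to order_items automatically.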
Compression
Disk systems can account for a very large portion of the cost of building and maintaining large
data warehouses. Oracle Database helps reduce this cost by compressing the data, and it does so
without the typical trade-off of space savings versus access time to data.
The table compression technique used is very advantageous for large data warehouses. It has
virtually no negative impact on the performance of queries against compressed data; in fact, it
may have a significant positive impact on queries accessing large amounts of data, as well as
on data management operations such as backup and recovery. Consider that you need to
retrieve less data from disk in order to satisfy a query or perform a backup, which simply
reduces the amount of work that needs to be performed.
The data is compressed by eliminating duplicate values in a database block. Compressed data
stored in a database block is self-contained. That is, all the information needed to re-create the
uncompressed data in a block is available within that block. Duplicate values in all the rows
and columns in a block are stored once at the beginning of the block, in what is called a
symbol table for that block. All occurrences of such values are replaced with a short reference
to the symbol table. With the exception of a symbol table at the beginning, compressed
database blocks look very much like regular database blocks.
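Table compression is enabled declaratively at table creation. A sketch using the 11g clauses (table and column names are illustrative):

```sql
-- Compression for direct-path loads, typical for data warehouses:
CREATE TABLE sales_hist
( prod_id NUMBER
, amount  NUMBER )
COMPRESS FOR DIRECT_LOAD OPERATIONS;

-- 11g extends block-level compression to conventional DML as well:
CREATE TABLE sales_oltp
( prod_id NUMBER
, amount  NUMBER )
COMPRESS FOR ALL OPERATIONS;
```

In both cases the symbol-table mechanism described above operates per block, so each block remains self-contained and readable without external metadata.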
[Figure: SQL Access Advisor overview. The DBA asks, "What partitions, indexes, and MVs do I need to optimize my entire workload?" The workload (SQL cache, hypothetical, or an STS) is passed through filter options to SQL Access Advisor, a component of the CBO. No expertise is required, and the advisor provides an implementation script.]
Possible Recommendations
SQL Access Advisor carefully considers the overall impact of recommendations and makes
recommendations by using only the known workload and supplied information. Two workload
analysis methods are available:
• Comprehensive: With this approach, SQL Access Advisor addresses all aspects of tuning
partitions, materialized views, indexes, and materialized view logs. It assumes that the
workload contains a complete and representative set of application SQL statements.
• Limited: Unlike the comprehensive workload approach, a limited workload approach
assumes that the workload contains only problematic SQL statements. Thus, advice is sought
for improving the performance of a portion of an application environment.
When comprehensive workload analysis is chosen, SQL Access Advisor forms a better set of
global tuning adjustments, but the effect may be a longer analysis time. As shown in the table, the
chosen workload approach determines the type of recommendations made by the advisor.
Note: Partition recommendations work only on tables that have at least 10,000 rows, and only for
workloads that have some predicates and joins on columns of type NUMBER or DATE;
partitioning advice can be generated only for columns of these types. In addition, partitioning
advice is generated only for single-column INTERVAL and HASH partitioning. INTERVAL
partitioning recommendations can be output in RANGE syntax, but INTERVAL is the default.
HASH partitioning is used only to leverage partition-wise joins.
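For a single problematic statement, the limited approach can be driven through the DBMS_ADVISOR.QUICK_TUNE shortcut, which creates and executes a SQL Access Advisor task in one call. A sketch (the task name and statement are illustrative):

```sql
-- Creates an advisor task named demo_task and analyzes one statement.
-- The "-" at line ends is the SQL*Plus continuation character.
EXEC DBMS_ADVISOR.QUICK_TUNE( -
  DBMS_ADVISOR.SQLACCESS_ADVISOR, -
  'demo_task', -
  'SELECT cust_id, SUM(amount_sold) FROM sh.sales GROUP BY cust_id');
```

The resulting recommendations and implementation script can then be retrieved from the advisor views or with DBMS_ADVISOR.GET_TASK_SCRIPT.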