
High Availability

Flashback Archive Data Guard Streams Online Maintenance Data Recovery Advisor


Flashback Data Archive

Data History and Retention


Data retention and change control requirements are growing
Regulatory oversight and compliance: Sarbanes-Oxley, HIPAA, Basel II, internal audit
Business needs: extract the temporal dimension of data; understand past behavior and manage customer relationships profitably

Failure to maintain appropriate history & retention is expensive


Legal risks
Loss of reputation

Current approaches to manage historical data are inefficient and often ineffective

Data History and Retention - Requirements


Historical data needs to be secure and tamper proof
Unauthorized users should not be able to access historical data
No one should be able to update historical data

Easily accessible from existing applications


Seamless access
Should not require special interfaces or application changes

Minimal performance overhead
Optimal storage footprint


Historical data volume can easily grow into hundreds of terabytes

Easy to set up historical data capture and configure retention policies

Managing Data History - Current Approaches


Application or mid-tier level
Combines business logic and archive policies
Increases complexity
No centralized management
Data integrity issues if underlying data is updated directly

Database level
Enabled using triggers
Significant performance and maintenance overhead

External or Third-party
Mine redo logs
History stored in a separate database
Cannot seamlessly query OLTP and history data

None of the above approaches meet all customer requirements


Customers are therefore forced to make significant compromises

Introducing Flashback Data Archive


Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner
Historical data is stored in the database and can be retained for as long as you want
Special kernel optimizations minimize the performance overhead of capturing historical data
Historical data is stored in compressed form to minimize storage requirements
Automatically prevents end users from changing historical data

Seamless access to archived historical data


Using AS OF SQL construct
select * from product_information AS OF TIMESTAMP '02-MAY-05 12.00 AM' where product_id = 3060;

Introducing Flashback Data Archive


Extremely easy to set up
enable history capture in minutes!

Completely transparent to applications
Centralized and automatic management


Policy-based
Multiple tables can share the same retention and purge policies
Automatic purge of aged history
Retention Period

Automatically Purge Data based on Retention policy

How Does Flashback Data Archive Work?


Primary source for history is the undo data
History is stored in automatically created history tables inside the archive
Transactions and their undo records on tracked tables are marked for archival
undo records not recycled until history is archived

History is captured asynchronously by new background process (fbda)


Default capture interval is 5 minutes
Capture interval is self-tuned based on system activity
The process tries to maximize undo data reads from the buffer cache for better performance
INSERTs do not generate history records

Flashback Data Archive And DDLs


Possible to add columns to tracked tables
Automatically disallows any other DDL that invalidates history:
Dropping or truncating a table
Dropping or modifying a column

Must disable archiving before performing any major changes


Disabling archiving discards already collected history

Flashback Data Archive guarantees historical data capture and maintenance


Any operation that invalidates history or prevents historical capture will be disallowed

Creating a Flashback Data Archive & Enabling History Tracking


1. Create a tablespace (Automatic Segment Space Management is required)
2. Create a flashback data archive and set the retention period
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE tbs1 RETENTION 5 YEAR;

3. Enable archiving on desired tables


ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE fda1;

Managing Flashback Data Archive

SYS_FBA_HIST_* - Internal History Table


Replica of the tracked table with additional timestamp columns
Partitioned for faster performance
No modifications allowed to internal partitions
Compression reduces the disk space required
No out-of-box indexes; support for copying primary key indexes from the tracked table in later releases (TBD)

Applications don't need to access internal tables directly


use AS OF to seamlessly query history

Alerts generated when a flashback data archive is 90% full
Automatically purges historical data after expiration of the specified retention period
Supports ad-hoc purge by administrators (privileged operation)
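An ad-hoc purge is done with ALTER FLASHBACK ARCHIVE; a minimal sketch, assuming an archive named fda1 already exists:

```sql
-- Discard history older than the given point in time
ALTER FLASHBACK ARCHIVE fda1
  PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '3' YEAR);

-- Or discard all collected history (privileged, use with care)
ALTER FLASHBACK ARCHIVE fda1 PURGE ALL;
```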

Summary
Managing historical data should no longer be an onerous task
Flashback Data Archive provides a secure, efficient, easy-to-use, and application-transparent solution
Easy to implement
Centralized, integrated, and queryable
Highly storage- and performance-efficient
Automatic, policy-based management

Reduce costs of compliance
Can be used for a variety of other purposes


Auditing, Human error correction, etc.


Data Guard

Data Guard Enhancements


Better standby resource utilization
Enhanced HA / DR functionality
Improved performance

Data Guard becomes an integral part of IT operations

Physical Standby with Real-Time Query


Concurrent real-time query
Continuous redo shipment and apply
Primary Database Physical Standby Database

Read-only queries on physical standby concurrent with redo apply


Supports RAC on primary and/or standby
Queries see transactionally consistent results
Handles all data types, but not as flexible as logical standby

Real-time Query Benefits


Immediate appeal to a large installed customer base for physical standby
Allows leveraging existing physical standby assets for excellent business use
Satisfies several customers with specific requirements:
Telecom - service schedules for technicians
Medical - access to patient medical reports
Finance - management-level reports
Transportation - responses to package tracking queries
Web business - scale-out read access for catalog browsing

Significant differentiator compared to storage mirroring


Mirror volumes are offline during mirroring

Snapshot Standby
Leverage Standby Database for Testing

Updates

Queries Updates

Primary Database

Snapshot Standby Database

Preserves zero data loss, although no real-time query or fast failover
Truly leverages DR hardware for multiple purposes
Similar to storage snapshots, but provides DR at the same time and uses a single copy of storage

Snapshot Standby
Easier than the manual steps in 10.2

10.2:
Standby> alter database recover managed standby database cancel;
Standby> create restore point before_lt guarantee flashback database;
Primary> alter system archive log current;
Primary> alter system set log_archive_dest_state_2=defer;
Standby> alter database activate standby database;
Standby> startup mount force;
Standby> alter database set standby database to maximize performance;
Standby> alter database open;

PERFORM TESTING, ARCHIVE LOGS NOT SHIPPED

Standby> startup mount force;
Standby> flashback database to restore point before_lt;
Standby> alter database convert to physical standby;
Standby> startup mount force;
Standby> alter database recover managed standby database disconnect from session;
Primary> alter system set log_archive_dest_state_2=enable;

11.1:
Standby> alter database convert to snapshot standby;

PERFORM TESTING, ARCHIVE LOGS CONTINUE TO BE SHIPPED

Standby> alter database convert to physical standby;

Use Physical Standby to Detect Lost Writes


Use new initialization parameter
db_lost_write_protect

Compares versions of blocks on the standby with those in the incoming redo stream
A version discrepancy implies a lost write
Can use the standby to fail over and restore data consistency
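Enabling the check might look like the following sketch; the TYPICAL value is our assumption of the available settings, and the parameter would normally be set on both primary and standby:

```sql
-- Enable lost-write detection: block-read redo logged on the primary
-- lets the standby compare block versions during redo apply
ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL;
```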

Enhanced SQL Apply


Support
XMLType data type (CLOB)
Transparent Data Encryption (TDE)
DBMS_FGA (Fine-Grained Auditing)
DBMS_RLS (Virtual Private Database)
Role-specific DBMS_SCHEDULER jobs (PRIMARY, LOGICAL STANDBY, BOTH)
Dynamic SQL Apply parameter changes
Parallel DDL execution on the standby database

Enhanced Fast-Start Failover


Supports Maximum Performance (ASYNC) Mode
Automatic failover for long-distance standbys
Data loss exposure limited using the Broker property FastStartFailoverLagLimit (default=30 seconds, min=6 seconds)

Immediate fast-start failover for user-configurable health conditions
ENABLE FAST_START FAILOVER [CONDITION <value>];
Condition examples: Datafile Offline, Corrupted Controlfile, Corrupted Dictionary, Inaccessible Logfile, Stuck Archiver, any explicit ORA-xyz error

Applications can request fast-start failover using the DBMS_DG.INITIATE_FS_FAILOVER API
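A minimal sketch of an application-initiated fast-start failover; the free-text condition string and the numeric return status follow our reading of the DBMS_DG API and should be checked against the documentation:

```sql
SET SERVEROUTPUT ON
DECLARE
  status BINARY_INTEGER;
BEGIN
  -- reason string recorded by the broker (assumed behaviour)
  status := DBMS_DG.INITIATE_FS_FAILOVER('Application detected fatal condition');
  DBMS_OUTPUT.PUT_LINE('Fast-start failover request status: ' || status);
END;
/
```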

Data Guard Performance Improvements


Faster Failover
Failover in seconds with Fast-Start Failover

Faster Redo Transport


Optimized async transport for Maximum Performance mode
Redo transport compression for gap fetching: new COMPRESSION attribute for log_archive_dest_n

Faster Redo Apply


Parallel media recovery optimization

Faster SQL Apply


Internal optimizations

Fast incremental backup on physical standby database


Support for block change tracking
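Two of the improvements above can be sketched as follows; the service name and file path are placeholders, and the COMPRESSION attribute syntax is our assumption for 11.1:

```sql
-- Compress redo sent while fetching archive log gaps
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_tns COMPRESSION=ENABLE';

-- On the physical standby: enable block change tracking so that
-- incremental backups read only changed blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.f';
```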

Rolling Database Upgrades Using Transient Logical Standby


[Diagram: the standby transitions Physical -> Logical, the upgrade is performed, then back to Physical]

Start rolling database upgrades with physical standbys
Temporarily convert the physical standby to a logical standby to perform the upgrade
Data type restrictions are limited to the short upgrade window
No need for a separate logical standby for the upgrade
Also possible in 10.2 (more manual steps)
Leverage your physical standbys!


Streams

Streams Overview
[Diagram: redo logs on the source database feed a capture process; changes propagate to the target database, where apply processes apply them - including to non-Oracle databases via a Transparent Gateway]

All sites active and updateable
Automatic conflict detection & optional resolution
Supports data transformations
Flexible configurations: n-way, hub & spoke
Database platform / release / schema structure can differ
Provides HA for applications where update conflicts can be avoided or managed

Streams Enhancements in Oracle Database 11g


Additional data type support
Table data comparison
Synchronous capture
Manageability & diagnosability improvements
Performance improvements
Streams AQ enhancements

Newly Supported Datatypes


XMLType (CLOB storage)

Transparent Data Encryption (TDE)
Default: TDE-encrypted columns captured => applied as TDE
PRESERVE_ENCRYPTION apply parameter controls behaviour when destination columns are not encrypted

Table Data Comparison


Compare data between live sources
Compare 11.1 with 10.1, 10.2, or 11.1
Recheck in-flight data and rows that are different
Converge feature:
Identify the truth database (local or remote) for row differences
Implemented in the DBMS_COMPARISON package
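A minimal DBMS_COMPARISON sketch, assuming a table APP.ORDERS and a database link remote_db (all names hypothetical; parameter names follow our reading of the package and should be verified):

```sql
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'cmp_orders',
    schema_name     => 'APP',
    object_name     => 'ORDERS',
    dblink_name     => 'remote_db');   -- link to the other live source
END;
/

DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
                  comparison_name => 'cmp_orders',
                  scan_info       => scan_info,
                  perform_row_dif => TRUE);
  IF NOT consistent THEN
    -- Converge: make the local database the "truth" for differing rows
    DBMS_COMPARISON.CONVERGE(
      comparison_name  => 'cmp_orders',
      scan_id          => scan_info.scan_id,
      converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  END IF;
END;
/
```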

Synchronous Capture
Available in all editions of Oracle Database 11g
Efficient internal mechanism to immediately capture changes
Changes captured as part of the user transaction
DML only
LCRs enqueued persistently to disk

When to use:
Replicating a few low-activity tables of a highly active source database
Capture from redo logs cannot be implemented
DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE
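A hedged sketch of creating a synchronous capture; the queue, capture, and rule set names are hypothetical, and in practice the rules are usually added via DBMS_STREAMS_ADM:

```sql
BEGIN
  DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
    queue_name    => 'strmadmin.capture_q',
    capture_name  => 'sync_capture',
    rule_set_name => 'strmadmin.sync_rules');
END;
/
```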

Streams Performance Advisor



Auto-discovery of Streams topology across multiple databases
Automatic performance analysis across all databases
Per-stream analysis:
Time-based analysis of each component (waits, CPU, etc.) using ASH
Bottleneck components
Top wait events of the bottleneck


Per-component analysis:
Throughput and latency
Aborted or enabled status

Integrated with ADDM
Streams errors are integrated with server-generated alerts

Split and Merge of Streams


Challenge
With hub & spoke configurations, when one destination is unavailable, all destinations take a performance hit because the capture queue spills to disk after 5 minutes

Solution
Split the queue between live and down destinations
Merge the queues after recovery
Maintains high performance for all replicas
Automated, fast catch-up for the unavailable replica

Streams: Hub with 3 Spokes

[Diagram: the source database's capture process enqueues LCRs into a queue; propagations A, B, and C deliver them to queues on destination databases A, B, and C, where apply processes dequeue the LCRs]

Split Streams: Site A Unavailable

[Diagram: with destination A unreachable, the original capture and propagations continue to serve destinations B and C, while a cloned capture process (disabled) and a cloned propagation A are created for the down site]

Split Streams: Site A Available

[Diagram: once destination A is reachable again, the cloned capture process is enabled and, with cloned propagation A, catches site A up while the original stream keeps serving destinations B and C]

Merge Streams: Original Configuration

[Diagram: after catch-up the cloned stream is merged back, restoring the original configuration of one capture process propagating to destinations A, B, and C]

Streams - More Manageability Improvements


Automated Alerts
Abort of capture, propagation, or apply
First error in DBA_APPLY_ERROR

Propagation uses the scheduler
Improved error messages


ORA-1403 -> ORA-26786 or ORA-26787
Custom DML handlers need to handle these new exceptions
More detail added to many error messages
Trace Streams messages from start to finish

Cross-database LCR tracking


DBMS_CAPTURE_ADM.SET_MESSAGE_TRACKING(mylabel)
V$STREAMS_MESSAGE_TRACKING

Streams performance

CERN reporting >5000 txns/s in 10.2.0.3


OpenLAB presentation
http://openlab-mu-internal.web.cern.ch/openlab-muinternal/Documents/3_Presentations/Slides/2007/DW_openlab_qr1_2007.pdf

11g performance improvements


Common case: throughput from 10.2 to 11.1 almost doubles

Streams Advanced Queuing (AQ) New Features


JMS/AQ performance improvements
Direct Streams AQ support in JDBC

Scalable event notification


Grouping of notifications by time
Multiple notification processes for scalability

Improved Manageability
Scheduler support
Performance views

Flashback Transaction

Flashback Transaction
Automatically finds and backs out a transaction and all its dependent transactions
Utilizes undo, archived redo logs, supplemental logging
Finalize changes with commit, or roll back

Dependent transactions include


Write after write
Primary key insert after delete

Faster and easier than the laborious manual approach


DBMS_FLASHBACK.TRANSACTION_BACKOUT
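A minimal backout sketch; the XID value is hypothetical (e.g. taken from FLASHBACK_TRANSACTION_QUERY), and the parameter names and option constants follow our reading of the DBMS_FLASHBACK API:

```sql
DECLARE
  xids SYS.XID_ARRAY;
BEGIN
  xids := SYS.XID_ARRAY('0500080032000000');  -- transaction to back out
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.CASCADE);  -- also back out dependents
  -- Inspect the backed-out changes, then finalize:
  COMMIT;   -- or ROLLBACK to discard the backout
END;
/
```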

Flashback Transaction Enterprise Manager Support

Data Recovery Advisor

Data Recovery Advisor


The Motivation

Oracle provides robust tools for data repair:
RMAN - physical media loss or corruptions
Flashback - logical errors
Data Guard - physical or logical problems

However, problem diagnosis and choosing the right solution can be error-prone and time-consuming
Errors are more likely during emergencies

[Diagram: investigation & planning plus recovery together determine the time to repair]

Data Recovery Advisor


Oracle Database tool that automatically diagnoses data failures, presents repair options, and executes repairs at the user's request
Determines failures based on symptoms
E.g. an open failed because datafiles f045.dbf and f003.dbf are missing
Failure information is recorded in the diagnostic repository (ADR)
Flags problems before the user discovers them, via automated health monitoring

Intelligently determines recovery strategies


Aggregates failures for efficient recovery
Presents only feasible recovery options
Indicates any data loss for each option

Can automatically perform selected recovery steps


Reduces downtime by eliminating confusion

Data Recovery Advisor Enterprise Manager Support

Data Recovery Advisor RMAN Command Line Interface


RMAN> list failure
lists all previously detected failures

RMAN> advise failure
presents recommended recovery options

RMAN> repair failure
repairs database failures (defaults to the first repair option from the most recent ADVISE FAILURE)

RMAN> change failure 5 priority low
changes a failure's priority (with the exception of critical-priority failures, e.g. a missing control file)
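A typical interactive sequence might look like the following sketch; REPAIR FAILURE PREVIEW shows the generated repair script without running it (check availability in your release):

```sql
RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;   -- show the repair script only
RMAN> REPAIR FAILURE;           -- execute the recommended repair
```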


Recovery Manager, Ultra Safe Mode and Online Operations

RMAN Enhancements
Better performance
Intra-file parallel backup and restore of single data files (multi-section backup)
Faster backup compression (ZLIB, ~40% faster)

Better security
Virtual private catalog allows consolidation of RMAN repositories while maintaining a separation of responsibilities.

Lower space consumption


Duplicate database or create standby database over the network, avoiding intermediate staging areas

Integration with Windows Volume Shadow Copy Services (VSS) API


Allows the database to participate in snapshots coordinated by VSS-compliant backup management tools and storage products
Database is automatically recovered upon snapshot restore via RMAN
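Two of the performance features above, sketched as RMAN commands; the section size, datafile number, and algorithm name are illustrative:

```sql
RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;      -- multi-section backup
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';   -- faster ZLIB compression
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```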

Ultra-Safe Mode
The DB_ULTRA_SAFE parameter provides an easy way to turn on the safest mode. It affects the default values of the following parameters:
DB_BLOCK_CHECKING - initiates checking of database blocks. This check can often prevent memory and data corruption.
DB_BLOCK_CHECKSUM - initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums assist in detecting corruption caused by underlying disks, storage systems, or I/O systems.
DB_LOST_WRITE_PROTECT - initiates checking for lost writes: the I/O subsystem signals the completion of a block write that was never completely written to persistent storage, even though the write did complete on the primary database; a physical standby database can detect such lost writes.
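Turning it on might look like this; DATA_AND_INDEX is our assumption of the strictest setting, and the parameter is static, so it takes effect after a restart:

```sql
ALTER SYSTEM SET DB_ULTRA_SAFE = DATA_AND_INDEX SCOPE = SPFILE;
-- assumed values: OFF (default), DATA_ONLY, DATA_AND_INDEX
```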

Online Operations & Redefinition Improvements


Fast add column with a default value
Invisible indexes speed application migration and testing
Online index build with NO pause to DML
No recompilation of dependent objects when online redefinition does not logically affect them
More resilient execution of table DDL operations
Support for online redefinition of tables with materialized view logs


Q&A

High Availability
Flashback Archive Data Guard Streams Online Maintenance Data Recovery Advisor
