
Reference links:

http://appsjagan.blogspot.in/search?updated-min=2010-01-01T00:00:00%2B05:30&updatedmax=2011-01-01T00:00:00%2B05:30&max-results=50
http://oraclemamukutti.blogspot.in/2011/03/performance-tuning-part-1.html
http://oraclemamukutti.blogspot.in/2011/03/performance-tuning-part-2.html
http://www.dba-oracle.com/oracle_tips_fix_corrupt_undo_segments.htm
http://allthingsoracle.com/convert-single-instance-to-rac-part-2-manually-convert-to-rac/
http://oracledbascratchpad.blogspot.in/2009/10/tuning-scripts.html
http://kumarmohitlal.blogspot.in/2012/01/oracle-real-time-interview-questions.html
http://gavinsoorma.com/category/oracle-11g/
http://dbaadnanrafi.blogspot.in/2011_01_01_archive.html
http://www.siue.edu/~dbock/cmis565/
http://satya-dba.blogspot.in/2010/04/rman-commands.html#catalog (IMP)
http://docs.oracle.com/cd/E11882_01/server.112/e25494/dba006.htm#ADMIN11052

Question: The primary database SCN is 22 and the standby SCN is 24. How would you resolve this issue?

In the case of a dedicated server, a server process is associated with a single user process and serves it exclusively. In the case of a shared server, a single server process can serve multiple user processes. This is achieved with the help of dispatcher processes: a dispatcher places each user request in a common request queue, a shared server process picks up a request whenever it is free, and the server process then puts the result in the response queue associated with that user's dispatcher.

How to increase SGA_MAX_SIZE
SQL> SHOW PARAMETER spfile

NAME     TYPE    VALUE
-------  ------  ------------------------------------------------
spfile   string  /u01/app/oracle/product/10.2.0/dbs/spfileDB1.ora

SQL> CREATE pfile FROM spfile;
File created.

SQL> ALTER SYSTEM SET sga_max_size=600M SCOPE=spfile;
System altered.

SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP
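For reference, the shared server architecture described earlier is enabled through initialization parameters. A minimal sketch; the dispatcher and shared server counts below are illustrative values only, not recommendations:

SQL> ALTER SYSTEM SET dispatchers='(PROTOCOL=TCP)(DISPATCHERS=2)';
SQL> ALTER SYSTEM SET shared_servers=5;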

Data Files: moving, renaming, deleting


SQL> SELECT name FROM v$datafile;

NAME
----------------------------------------------------------
/u01/app/oracle/product/10.2.0/oradata/DBSID/SYSTEM01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/UNDOTBS01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/SYSAUX01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/USERS01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF

Shutdown the database first:


SQL> SHUTDOWN IMMEDIATE;

Rename a datafile:

SQL> HOST mv -v /u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF \
     /u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE02.DBF

Move a datafile to the new location:


SQL> HOST mv -v /u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF \
     /u01/app/oracle/oradata/DBSID/EXAMPLE01.DBF

Move and rename a datafile to the new location:

SQL> HOST mv -v /u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF \
     /u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF

SQL> STARTUP MOUNT

Move and rename a datafile variant:


SQL> ALTER DATABASE RENAME FILE '/u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF'
     TO '/u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF';
Database altered.

SQL> ALTER DATABASE OPEN;

SQL> SELECT name FROM v$datafile;

NAME
----------------------------------------------------------
/u01/app/oracle/product/10.2.0/oradata/DBSID/SYSTEM01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/UNDOTBS01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/SYSAUX01.DBF
/u01/app/oracle/product/10.2.0/oradata/DBSID/USERS01.DBF
/u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF


Deleting dropping a datafile:

If the datafile you wish to drop is the only datafile in the tablespace in which it resides, you can simply drop the tablespace:

SQL> SELECT file_name, tablespace_name FROM dba_data_files;

FILE_NAME                                     TABLESPACE_NAME
--------------------------------------------  ---------------
/u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF   EXAMPLE02

SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF' OFFLINE DROP;

SQL> DROP TABLESPACE EXAMPLE02 INCLUDING CONTENTS;
SQL> HOST rm -f /u01/app/oracle/oradata/DBSID/EXAMPLE02.DBF

OR

SQL>DROP TABLESPACE EXAMPLE02 INCLUDING CONTENTS AND DATAFILES;

Online Redo Logs: adding, clearing, moving, deleting

After a default 10g database installation we've usually got two or three redo log files in two or three groups:
SQL> SELECT group#, member FROM v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     1  /u01/app/oracle/oradata/DBSID/redo01.log
     2  /u01/app/oracle/oradata/DBSID/redo02.log
     3  /u01/app/oracle/oradata/DBSID/redo03.log

Let's make some changes and create three additional members on a different mount point.

Adding new redo log members to an existing group:

NOTE: The database is up and running. First, create physical directories:


as the root user:
# mkdir -p /u02/app; chmod -R 775 /u02; chown -R root:oinstall /u02

And next:
as the oracle user:
$ mkdir -p /u02/app/oracle/oradata/DBSID/

SQL> ALTER DATABASE ADD LOGFILE MEMBER
     '/u02/app/oracle/oradata/DBSID/redo1b.log' TO GROUP 1,
     '/u02/app/oracle/oradata/DBSID/redo2b.log' TO GROUP 2,
     '/u02/app/oracle/oradata/DBSID/redo3b.log' TO GROUP 3;

SQL> SELECT group#, member FROM v$logfile ORDER BY group#;

GROUP#  MEMBER
------  ----------------------------------------
     1  /u01/app/oracle/oradata/DBSID/redo01.log
     1  /u02/app/oracle/oradata/DBSID/redo1b.log
     2  /u01/app/oracle/oradata/DBSID/redo02.log
     2  /u02/app/oracle/oradata/DBSID/redo2b.log
     3  /u01/app/oracle/oradata/DBSID/redo03.log
     3  /u02/app/oracle/oradata/DBSID/redo3b.log

Renaming, moving the redo log files

First shutdown the database:


SQL>shutdown immediate

Move, rename physical files on the OS


$ mv -v /u01/app/oracle/oradata/DBSID/redo01.log /u01/app/oracle/oradata/DBSID/redo1a.log
$ mv -v /u01/app/oracle/oradata/DBSID/redo02.log /u01/app/oracle/oradata/DBSID/redo2a.log
$ mv -v /u01/app/oracle/oradata/DBSID/redo03.log /u01/app/oracle/oradata/DBSID/redo3a.log

Go back to SQL*Plus and start the database in mount mode:


SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE '/u01/app/oracle/oradata/DBSID/redo01.log'
     TO '/u01/app/oracle/oradata/DBSID/redo1a.log';
SQL> ALTER DATABASE RENAME FILE '/u01/app/oracle/oradata/DBSID/redo02.log'
     TO '/u01/app/oracle/oradata/DBSID/redo2a.log';
SQL> ALTER DATABASE RENAME FILE '/u01/app/oracle/oradata/DBSID/redo03.log'
     TO '/u01/app/oracle/oradata/DBSID/redo3a.log';
SQL> ALTER DATABASE OPEN;

SQL> SELECT group#, member FROM v$logfile ORDER BY group#;

GROUP#  MEMBER
------  ----------------------------------------
     1  /u01/app/oracle/oradata/DBSID/redo1a.log
     1  /u02/app/oracle/oradata/DBSID/redo1b.log
     2  /u01/app/oracle/oradata/DBSID/redo2a.log
     2  /u02/app/oracle/oradata/DBSID/redo2b.log
     3  /u01/app/oracle/oradata/DBSID/redo3a.log
     3  /u02/app/oracle/oradata/DBSID/redo3b.log

Creating a new redo log file:

SQL> ALTER DATABASE ADD LOGFILE
     '/u01/app/oracle/oradata/DBSID/redo4a.log' SIZE 10M;

A new group is created automatically.


SQL> SELECT group#, member FROM v$logfile ORDER BY group#;

GROUP#  MEMBER
------  ----------------------------------------
     1  /u01/app/oracle/oradata/DBSID/redo1a.log
     1  /u02/app/oracle/oradata/DBSID/redo1b.log
     2  /u01/app/oracle/oradata/DBSID/redo2a.log
     2  /u02/app/oracle/oradata/DBSID/redo2b.log
     3  /u01/app/oracle/oradata/DBSID/redo3a.log
     3  /u02/app/oracle/oradata/DBSID/redo3b.log
     4  /u02/app/oracle/oradata/DBSID/redo4a.log

Creating a new group with multiple log members (the member names must be distinct):

SQL> ALTER DATABASE ADD LOGFILE
     ('/u02/app/oracle/oradata/DBSID/redo5a.log',
      '/u02/app/oracle/oradata/DBSID/redo5b.log',
      '/u02/app/oracle/oradata/DBSID/redo5c.log') SIZE 10M;

Switching the redo log group, which also forces creation of an archived log file:

SQL> ALTER SYSTEM SWITCH LOGFILE;

Dropping redo log group and redo log member:

NOTE: There are some restrictions when dropping redo log groups or redo log members:
- At least two redo log groups must remain, and each group must keep at least one member.
- Only an inactive group can be dropped.

To drop a group:
SQL>ALTER DATABASE DROP LOGFILE GROUP 4;

To drop a group member:


SQL> ALTER DATABASE DROP LOGFILE MEMBER
     '/u02/app/oracle/oradata/DBSID/redo4a.log';

Clearing Log Files and Groups


SQL> ALTER DATABASE CLEAR LOGFILE
     '/u02/app/oracle/oradata/DBSID/redo3a.log';

When the database is in ARCHIVELOG mode, you may need to force-clear unarchived log files:


SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE
     '/u02/app/oracle/oradata/DBSID/redo3a.log';
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

Managing Tablespaces and Datafiles


Using multiple tablespaces provides several advantages:

- Separate user data from data dictionary data to reduce contention between dictionary objects and schema objects for the same datafiles.
- Separate the data of one application from that of another, so that multiple applications are not affected if a tablespace must be taken offline.
- Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.
- Take individual tablespaces offline while others remain online, providing better overall availability.

Creating New Tablespaces


You can create locally managed or dictionary-managed tablespaces. In versions prior to Oracle 8i only dictionary-managed tablespaces were available; from 8i onwards you can also create locally managed tablespaces. Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps, which yields the following benefits:

- Concurrency and speed of space operations are improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in the datafile headers) rather than requiring centrally managed resources such as enqueues.
- Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated.

To create a locally managed tablespace give the following command


SQL> CREATE TABLESPACE ica_lmts DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K. The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM; if you omit SIZE, the default is 1M. The following example creates a locally managed tablespace with a uniform extent size of 256K:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

To create a dictionary-managed tablespace:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT DICTIONARY;

Bigfile Tablespaces (Introduced in Oracle Ver. 10g)


A bigfile tablespace is a tablespace with a single, but very large (up to approximately 4 billion blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. Bigfile tablespaces can reduce the number of datafiles needed for a database. To create a bigfile tablespace, give the following command:
SQL> CREATE BIGFILE TABLESPACE ica_bigtbs DATAFILE '/u02/oracle/ica/bigtbs01.dbf' SIZE 50G;

To Extend the Size of a tablespace


Option 1: Extend a tablespace by increasing the size of an existing datafile:

SQL> ALTER DATABASE DATAFILE '/u01/oracle/data/icatbs01.dbf' RESIZE 100M;

This increases the size from 50M to 100M.

Option 2: Extend a tablespace by adding a new datafile to it. This is useful if an existing datafile has reached the OS file size limit, or the drive it resides on has no free space:

SQL> ALTER TABLESPACE ica ADD DATAFILE '/u02/oracle/ica/icatbs02.dbf' SIZE 50M;

Option 3: Use the autoextend feature of a datafile. Oracle will then automatically increase the size of the datafile whenever space is required. You can specify by how much the file should grow (NEXT) and the maximum size to which it may extend (MAXSIZE).

To make an existing datafile autoextensible, give the following command:

SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/icatbs01.dbf'
     AUTOEXTEND ON NEXT 5M MAXSIZE 500M;

You can also make a datafile autoextensible while creating a new tablespace:

SQL> CREATE TABLESPACE ica DATAFILE '/u01/oracle/ica/icatbs01.dbf' SIZE 50M
     AUTOEXTEND ON NEXT 5M MAXSIZE 500M;

To decrease the size of a tablespace


You can decrease the size of a tablespace by decreasing a datafile associated with it. A datafile can only be shrunk down to the size of the empty space at its end. To decrease the size of a datafile:

SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/icatbs01.dbf' RESIZE 30M;

Coalescing Tablespaces

A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. You can use the ALTER TABLESPACE ... COALESCE statement to manually coalesce adjacent free extents:

SQL> ALTER TABLESPACE ica COALESCE;
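To gauge whether coalescing is worthwhile, you can first count the free extents per tablespace using the standard DBA_FREE_SPACE dictionary view; a large number of small free extents suggests fragmentation. A simple sketch:

SQL> SELECT tablespace_name, COUNT(*) AS free_extents,
            SUM(bytes)/1024/1024 AS free_mb
     FROM dba_free_space
     GROUP BY tablespace_name;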

Taking tablespaces Offline or Online


You can take an online tablespace offline so that it is temporarily unavailable for general use; the rest of the database remains open and available for users to access data. Conversely, you can bring an offline tablespace online to make the schema objects within it available to database users. The database must be open to alter the availability of a tablespace. To do so, use the ALTER TABLESPACE statement; you must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

To take a tablespace offline:
SQL> ALTER TABLESPACE ica OFFLINE;

To bring it back online:
SQL> ALTER TABLESPACE ica ONLINE;

To take an individual datafile offline:
SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/ica_tbs01.dbf' OFFLINE;

To bring it back online:
SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/ica_tbs01.dbf' ONLINE;

Note: You cannot take individual datafiles offline if the database is running in NOARCHIVELOG mode. If a datafile has become corrupt or gone missing while the database is in NOARCHIVELOG mode, you can only take it offline for drop:

SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/ica_tbs01.dbf' OFFLINE FOR DROP;

Making a Tablespace Read-Only


Making a tablespace read-only prevents write operations on the datafiles in the tablespace. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it. Making a tablespace read-only prevents updates on all tables in the tablespace, regardless of a user's update privilege level.

To make a tablespace read-only:
SQL> ALTER TABLESPACE ica READ ONLY;

To make it read-write again:
SQL> ALTER TABLESPACE ica READ WRITE;

Renaming Tablespaces
Using the RENAME TO clause of the ALTER TABLESPACE, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:
ALTER TABLESPACE users RENAME TO usersts;

The following affect the operation of this statement:

- The COMPATIBLE parameter must be set to 10.0 or higher.
- If the tablespace being renamed is the SYSTEM or SYSAUX tablespace, it is not renamed and an error is raised.
- If any datafile in the tablespace is offline, or if the tablespace itself is offline, the tablespace is not renamed and an error is raised.
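To check the first restriction before attempting a rename, you can inspect the COMPATIBLE setting; for example:

SQL> SHOW PARAMETER compatible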

Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.

Caution: Once a tablespace has been dropped, its data is not recoverable. Therefore, make sure that no data contained in a tablespace to be dropped will be required in the future, and back up the database completely immediately before and after dropping it.

To drop a tablespace:

SQL> DROP TABLESPACE ica;

This works only if the tablespace is empty. If it is not empty and you want to drop it anyway, add the following keywords:

SQL> DROP TABLESPACE ica INCLUDING CONTENTS;

This drops the tablespace even if it is not empty, but the datafiles are not deleted; you have to use operating system commands to remove the files. If you also include the AND DATAFILES keywords, the associated datafiles are deleted from disk as well:

SQL>drop tablespace ica including contents and datafiles;

Temporary Tablespace
A temporary tablespace is used for sorts and other temporary work areas that do not fit in memory. Every database should have one temporary tablespace. To create a temporary tablespace:

SQL> CREATE TEMPORARY TABLESPACE temp
     TEMPFILE '/u01/oracle/data/ica_temp.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;
The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The AUTOALLOCATE clause is not allowed for temporary tablespaces.
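To verify the temporary tablespace and its tempfiles afterwards, you can query the DBA_TEMP_FILES dictionary view; a small sketch:

SQL> SELECT tablespace_name, file_name, bytes/1024/1024 AS mb
     FROM dba_temp_files;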

Increasing or Decreasing the size of a Temporary Tablespace


You can use the resize clause to increase or decrease the size of a temporary tablespace. The following statement resizes a temporary file:
SQL>ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf'
     DROP INCLUDING DATAFILES;

Tablespace Groups
A tablespace group enables a user to consume temporary space from multiple tablespaces. A tablespace group has the following characteristics:

- It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces in a group.
- It shares the namespace of tablespaces, so its name cannot be the same as that of any tablespace.
- You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group, and it is deleted when the last temporary tablespace it contains is removed from it.

Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused when one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group also enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces. The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.
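That view can be queried directly; for example:

SQL> SELECT group_name, tablespace_name FROM dba_tablespace_groups;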

Creating a Temporary Tablespace Group


You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist. For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:
SQL> CREATE TEMPORARY TABLESPACE ica_temp2
     TEMPFILE '/u02/oracle/ica/ica_temp.dbf' SIZE 50M
     TABLESPACE GROUP group1;

SQL> ALTER TABLESPACE ica_temp2 TABLESPACE GROUP group2;

Assigning a Tablespace Group as the Default Temporary Tablespace


Use the ALTER DATABASE ...DEFAULT TEMPORARY TABLESPACE statement to assign a tablespace group as the default temporary tablespace for the database. For example:
ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;

Diagnosing and Repairing Locally Managed Tablespace Problems


To diagnose and repair corruptions in locally managed tablespaces, Oracle supplies a package called DBMS_SPACE_ADMIN. Its main procedures are described below:

SEGMENT_VERIFY - Verifies the consistency of the extent map of the segment.

SEGMENT_CORRUPT - Marks the segment corrupt or valid so that appropriate error recovery can be done. Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DROP_CORRUPT - Drops a segment currently marked corrupt (without reclaiming space). Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DUMP - Dumps the segment header and extent map of a given segment.

TABLESPACE_VERIFY - Verifies that the bitmaps and extent maps for the segments in the tablespace are in sync.

TABLESPACE_REBUILD_BITMAPS - Rebuilds the appropriate bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_BITMAPS - Marks the appropriate data block address range (extent) as free or used in the bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_REBUILD_QUOTAS - Rebuilds quotas for a given tablespace.

TABLESPACE_MIGRATE_FROM_LOCAL - Migrates a locally managed tablespace to a dictionary-managed tablespace. Cannot be used to migrate a locally managed SYSTEM tablespace to a dictionary-managed SYSTEM tablespace.

TABLESPACE_MIGRATE_TO_LOCAL - Migrates a tablespace from dictionary-managed format to locally managed format.

TABLESPACE_RELOCATE_BITMAPS - Relocates the bitmaps to the destination specified. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_SEGMENT_STATES - Fixes the state of the segments in a tablespace in which migration was aborted.

Be careful using the above procedures: if used improperly, they can corrupt your database. Contact Oracle Support before using them. The following are some scenarios in which these procedures can be used.

Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)
The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported. In this scenario, perform the following tasks:

1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
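The steps above can be sketched in SQL*Plus as follows. This is only an illustration: the tablespace name, file number, and block numbers are made-up placeholders, and the real values must be taken from the SEGMENT_DUMP output for your own segment.

SQL> EXEC DBMS_SPACE_ADMIN.SEGMENT_DUMP('ICA', 4, 33);
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('ICA', 4, 33, 83, -
>      DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA');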

Scenario 2: Dropping a Corrupted Segment


You cannot drop a segment because the bitmap has segment blocks marked "free". The system has automatically marked the segment corrupted. In this scenario, perform the following tasks:

1. Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.

Scenario 3: Fixing Bitmap Where Overlap is Reported

The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous internal errors. After choosing the object to be sacrificed, in this case say table t1, perform the following tasks:

1. Make a list of all objects that t1 overlaps.
2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure.
3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark the appropriate bitmap blocks as used.
4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved.

Scenario 4: Correcting Media Corruption of Bitmap Blocks


A set of bitmap blocks has media corruption. In this scenario, perform the following tasks:

1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt.
2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.
3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.

Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace


To migrate a dictionary-managed tablespace to a locally managed tablespace, use the TABLESPACE_MIGRATE_TO_LOCAL procedure. For example, to migrate the dictionary-managed tablespace ICA2 to locally managed:

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('ica2');

Transporting Tablespaces

You can use the transportable tablespaces feature to move a subset of an Oracle database and "plug" it in to another Oracle database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to have the same block size as the target database's standard block size.

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data, because the datafiles containing the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database. Starting with Oracle Database 10g, you can transport tablespaces across platforms; this can be used to migrate a database from one platform to another. However, not all platforms are supported. To see which platforms are supported, give the following query:
SQL> COLUMN PLATFORM_NAME FORMAT A30
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                  ENDIAN_FORMAT
----------- ------------------------------ -------------
          1 Solaris[tm] OE (32-bit)        Big
          2 Solaris[tm] OE (64-bit)        Big
          7 Microsoft Windows NT           Little
         10 Linux IA (32-bit)              Little
          6 AIX-Based Systems (64-bit)     Big
          3 HP-UX (64-bit)                 Big
          5 HP Tru64 UNIX                  Little
          4 HP-UX IA (64-bit)              Big
         11 Linux IA (64-bit)              Little
         15 HP Open VMS                    Little

10 rows selected.

If the source and target platforms have different endianness, an additional step must be performed on either the source or target platform to convert the tablespace being transported to the target format. If they have the same endianness, no conversion is necessary and tablespaces can be transported as if they were on the same platform. Important: before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read-write at least once:

SQL> ALTER TABLESPACE ica READ ONLY;
Then:
SQL> ALTER TABLESPACE ica READ WRITE;

Procedure for transporting tablespaces


To move or copy a set of tablespaces, perform the following steps:

1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. If you are transporting the tablespace set to a platform different from the source platform, determine whether the source and target platforms are supported and check their endianness. If both platforms have the same endianness, no conversion is necessary; otherwise you must convert the tablespace set at either the source or the target database. Ignore this step if you are transporting to the same platform.

2. Pick a self-contained set of tablespaces.

3. Generate a transportable tablespace set. A transportable tablespace set consists of the datafiles for the tablespaces being transported and an export file containing structural information for those tablespaces. If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step, or a target-side conversion as part of step 4.

4. Transport the tablespace set. Copy the datafiles and the export file to the target database. You can do this using any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs). If you have transported the set to a platform with different endianness and have not performed a source-side conversion, perform a target-side conversion now.

5. Plug in the tablespace.

Invoke the Import utility to plug the set of tablespaces into the target database.

Transporting Tablespace Example


These steps are illustrated more fully in the example that follows, which assumes the following tablespaces and datafiles exist:

Tablespace    Datafile
-----------   -------------------------------------------------
ica_sales_1   /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2   /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf

Step 1: Determine if Platforms are Supported and Endianness This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different platform, you can execute the following query on both platforms to determine if the platforms are supported and their endian formats:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The following is the query result from the source platform:


PLATFORM_NAME             ENDIAN_FORMAT
------------------------- -------------
Solaris[tm] OE (32-bit)   Big

The following is the result from the target platform:


PLATFORM_NAME             ENDIAN_FORMAT
------------------------- -------------
Microsoft Windows NT      Little

You can see that the endian formats are different, so a conversion is necessary when transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces

There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained: it must not contain tables with foreign keys referring to primary keys of tables in other tablespaces, and it must not contain tables with some partitions in other tablespaces. To find out whether the tablespace set is self-contained, run:
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);

After executing the above, run the following query to see whether any violations exist:
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and
table SAMI.DEPT in tablespace OTHER
Partitioned table SAMI.SALES is partially contained in the transportable set

These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable.

Step 3: Generate a Transportable Tablespace Set

After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions.

Make all tablespaces in the set read-only:

SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;
Tablespace altered.

Invoke the Export utility on the host system and specify which tablespaces are in the transportable set:

$ exp system/password FILE=/u01/oracle/expdat.dmp TRANSPORT_TABLESPACES=ica_sales_1,ica_sales_2

If ica_sales_1 and ica_sales_2 are being transported to a different platform whose endian format differs, and you want to convert before transporting the tablespace set, convert the datafiles composing the ica_sales_1 and ica_sales_2 tablespaces. Use the RMAN utility to convert the datafiles:
$ rman TARGET /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation. All rights reserved.

connected to target database: ica_salesdb (DBID=3295731590)

Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE ica_sales_1,ica_sales_2
      TO PLATFORM 'Microsoft Windows NT'
      FORMAT '/temp/%U';

Starting backup at 08-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-07

Step 4: Transport the Tablespace Set

Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).

Step 5: Plug In the Tablespace Set

Plug in the tablespaces and integrate the structural information using the Data Pump Import utility, impdp (REMAP_SCHEMA is a Data Pump parameter; the original imp utility uses FROMUSER/TOUSER instead):

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir TRANSPORT_DATAFILES=/ica_salesdb/ica_sales_101.dbf,/ica_salesdb/ica_sales_201.dbf REMAP_SCHEMA=smith:sami REMAP_SCHEMA=williams:john

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.

After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no errors have occurred. Now, put the tablespaces into read/write mode as follows:
ALTER TABLESPACE ica_sales_1 READ WRITE;
ALTER TABLESPACE ica_sales_2 READ WRITE;

Viewing Information about Tablespaces and Datafiles


Oracle provides many data dictionary views for viewing information about tablespaces and datafiles. Some of them are listed below.

To view information about tablespaces in a database, give the following queries:

SQL> select * from dba_tablespaces;
SQL> select * from v$tablespace;

To view information about datafiles:

SQL> select * from dba_data_files;
SQL> select * from v$datafile;

To view information about tempfiles:

SQL> select * from dba_temp_files;
SQL> select * from v$tempfile;

To view information about free space in datafiles:

SQL> select * from dba_free_space;

To view information about free space in tempfiles:

SQL> select * from V$TEMP_SPACE_HEADER;
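As a worked example of using these views, a query along the following lines summarizes the free space per tablespace instead of listing raw free extents. This is a sketch: the columns are the standard dba_free_space dictionary columns, but adjust the unit conversion and formatting to taste.

```sql
-- Sketch: total free space per tablespace, in megabytes,
-- summed from the free extents reported by dba_free_space.
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```

Note that temporary tablespaces do not appear in dba_free_space; use V$TEMP_SPACE_HEADER for those, as shown above.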

Relocating or Renaming Datafiles


You can rename datafiles to either change their names or relocate them.

Renaming or Relocating Datafiles belonging to a Single Tablespace


To rename or relocate datafiles belonging to a single tablespace, do the following:

1. Take the tablespace offline.
2. Rename or relocate the datafiles using operating system commands.
3. Give the ALTER TABLESPACE statement with the RENAME DATAFILE option to change the filenames within the database.
4. Bring the tablespace online.

For example, suppose you have a tablespace users with the following datafiles:

/u01/oracle/ica/usr01.dbf
/u01/oracle/ica/usr02.dbf

Now you want to relocate /u01/oracle/ica/usr01.dbf to /u02/oracle/ica/usr01.dbf and rename /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf. Follow these steps:

1. Take the tablespace offline:

SQL> alter tablespace users offline;

2. Copy the first file to its new location and rename the second file, using o/s commands:

$ cp /u01/oracle/ica/usr01.dbf /u02/oracle/ica/usr01.dbf
$ mv /u01/oracle/ica/usr02.dbf /u01/oracle/ica/users02.dbf

3. Now start SQL*Plus and type the following command to rename and relocate these files within the database:

SQL> alter tablespace users rename datafile
     '/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
     to
     '/u02/oracle/ica/usr01.dbf', '/u01/oracle/ica/users02.dbf';

4. Now bring the tablespace online:

SQL> alter tablespace users online;

Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces


You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. You must have the ALTER DATABASE system privilege.

To rename datafiles in multiple tablespaces, follow these steps:

1. Ensure that the database is mounted but closed.

2. Copy the datafiles to be renamed to their new locations and new names, using the operating system.

3. Use ALTER DATABASE to rename the file pointers in the database control file. For example, the following statement renames the datafiles /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

ALTER DATABASE RENAME FILE '/u02/oracle/rbdb1/sort01.dbf', '/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf', '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

5. Start the database.

Managing REDO LOGFILES


Every Oracle database must have at least 2 redo logfile groups. Oracle records the changes made by every statement except SELECT in the logfiles. This is done because Oracle performs deferred batch writes, i.e. it does not write changes to disk per statement; instead it performs writes in batches. So if a user updates a row, Oracle changes the row in the db_buffer_cache, records the statement in the logfile and tells the user that the row is updated. The row is not actually written back to the datafile yet; it is written a few seconds later as part of a batch. This is known as deferred batch writes.

Since Oracle defers writing to the datafile, there is a chance of power failure or system crash before the row is written to disk. That's why Oracle writes the statement to the redo logfile first, so that in case of power failure or system crash Oracle can re-apply the changes the next time you open the database.
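The idea described above can be sketched as a toy write-ahead log. This is an illustration only, not Oracle's actual implementation: the class names and the dict-based "datafile" are invented for the example; the point is that the redo record is made durable before the datafile is updated, so a crash that loses the in-memory buffer can be repaired by replaying the log.

```python
# Toy sketch of write-ahead logging (illustration only, NOT Oracle's
# actual implementation). An update first appends a redo record to the
# log, then dirties an in-memory buffer; the "datafile" is written later
# in a batch. After a simulated crash, replaying the log recovers the
# change that never reached the datafile.

class ToyDatabase:
    def __init__(self):
        self.datafile = {}      # durable storage (survives a crash)
        self.redo_log = []      # durable redo log (survives a crash)
        self.buffer_cache = {}  # volatile memory (lost on crash)

    def update(self, key, value):
        # Redo record is written BEFORE the change reaches the datafile.
        self.redo_log.append((key, value))
        self.buffer_cache[key] = value   # change sits in memory for now

    def checkpoint(self):
        # Deferred batch write: flush dirty buffers to the datafile.
        self.datafile.update(self.buffer_cache)
        self.buffer_cache.clear()

    def crash(self):
        # Power failure: memory is lost; datafile and log survive.
        self.buffer_cache.clear()

    def recover(self):
        # Re-apply every logged change on top of the datafile.
        for key, value in self.redo_log:
            self.datafile[key] = value

db = ToyDatabase()
db.update("emp:100:salary", 5000)   # user is told "row updated"
db.crash()                          # row never reached the datafile
db.recover()                        # redo log replays the change
print(db.datafile["emp:100:salary"])  # -> 5000
```

The same ordering (log first, data later) is why losing a redo logfile is far more serious than losing a dirty buffer.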

Adding a New Redo Logfile Group


To add a new redo logfile group to the database give the following command:

SQL> alter database add logfile group 3 ('/u01/oracle/ica/log3.ora') size 10M;

Note: You can add groups to a database up to the MAXLOGFILES setting you specified at the time of creating the database. If you want to change the MAXLOGFILES setting you have to create a new controlfile.

Adding Members to an existing group


To add a new member to an existing group give the following command:

SQL> alter database add logfile member '/u01/oracle/ica/log11.ora' to group 1;

Note: You can add members to a group up to the MAXLOGMEMBERS setting you specified at the time of creating the database. If you want to change the MAXLOGMEMBERS setting you have to create a new controlfile.

Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two members per group, with one member on one disk and another on a second disk.

Dropping Members from a group


You can drop a member from a log group only if the group has more than one member and it is not the current group. If you want to drop members from the current group, force a log switch or wait until a log switch occurs and another group becomes current. To force a log switch give the following command:

SQL> alter system switch logfile;

The following command can be used to drop a logfile member:

SQL> alter database drop logfile member '/u01/oracle/ica/log11.ora';

Note: When you drop logfile members the files are not deleted from the disk. You have to use o/s commands to delete the files from disk.

Dropping Logfile Group


Similarly, you can drop a logfile group only if the database has more than two groups and it is not the current group:

SQL> alter database drop logfile group 3;

Note: When you drop a logfile group the files are not deleted from the disk. You have to use o/s commands to delete the files from disk.

Resizing Logfiles
You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and subsequently drop the old logfile group.

Renaming or Relocating Logfiles


To rename or relocate logfiles perform the following steps.

For example, suppose you want to move a logfile from /u01/oracle/ica/log1.ora to /u02/oracle/ica/log1.ora. Do the following:

1. Shutdown the database:

SQL> shutdown immediate;

2. Move the logfile from the old location to the new location using o/s commands:

$ mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora

3. Start and mount the database:

SQL> startup mount

4. Now give the following command to change the location in the controlfile:

SQL> alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';

5. Open the database:

SQL> alter database open;

Clearing REDO LOGFILES


A redo log file might become corrupted while the database is open, ultimately stopping database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database. The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:

If there are only two log groups
The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived. If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover

Viewing Information About Logfiles


To see how many logfile groups there are and their status, type the following query:

SQL> SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 20605 1048576       1 YES ACTIVE        61515628 21-JUN-07
     2       1 20606 1048576       1 NO  CURRENT       41517595 21-JUN-07
     3       1 20603 1048576       1 YES INACTIVE      31511666 21-JUN-07
     4       1 20604 1048576       1 YES INACTIVE      21513647 21-JUN-07

To see how many members there are and where they are located, give the following query:

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA

Managing Control Files


Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

The database name Names and locations of associated datafiles and redo log files The timestamp of the database creation The current log sequence number Checkpoint information

It is strongly recommended that you multiplex control files, i.e. have at least two control files, one on one hard disk and another located on a different disk. In this way, if the control file on one disk becomes corrupt the other copy will be available and you don't have to do recovery of the control file. You can multiplex the control file at the time of creating a database and later on as well. If you have not multiplexed the control file at the time of creating the database, you can do it now by following the given procedure.

Multiplexing Control File


Steps:

1. Shutdown the database:

SQL> SHUTDOWN IMMEDIATE;

2. Copy the control file from the old location to the new location using o/s commands. For example:

$ cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora

3. Now open the parameter file and change the control file location. For example, change:

CONTROL_FILES=/u01/oracle/ica/control.ora

to:

CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/control.ora

4. Start the database.

Now Oracle will update both control files and, if one control file is lost, you can copy it from the other location.

Changing the Name of a Database


If you ever want to change the name of database or want to change the setting of MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS then you have to create a new control file.

Creating a New Control File


Follow the given steps to create a new controlfile.

Steps:

1. First generate the CREATE CONTROLFILE statement:

SQL> alter database backup controlfile to trace;

After giving this statement Oracle writes the CREATE CONTROLFILE statement to a trace file. The trace file is randomly named, something like ORA23212.TRC, and it is created in the USER_DUMP_DEST directory.

2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file contains the CREATE CONTROLFILE statement. It has two sets of statements, one with RESETLOGS and another with NORESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Now copy and paste the statement into a file. Let it be c.sql.

3. Now open the c.sql file in a text editor and set the database name from ica to prod, as shown in the example below:
CREATE CONTROLFILE SET DATABASE prod
    RESETLOGS
    ARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 3
    MAXLOGHISTORY 400
    MAXDATAFILES 200
    MAXINSTANCES 6
LOGFILE
    GROUP 1 ('/u01/oracle/ica/redo01_01.log', '/u01/oracle/ica/redo01_02.log'),
    GROUP 2 ('/u01/oracle/ica/redo02_01.log', '/u01/oracle/ica/redo02_02.log'),
    GROUP 3 ('/u01/oracle/ica/redo03_01.log', '/u01/oracle/ica/redo03_02.log')
DATAFILE
    '/u01/oracle/ica/system01.dbf' SIZE 3M,
    '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
    '/u01/oracle/ica/users01.dbs' SIZE 5M,
    '/u01/oracle/ica/temp01.dbs' SIZE 5M;

4. Start the instance without mounting the database:

SQL> STARTUP NOMOUNT;

5. Now execute the c.sql script:

SQL> @/u01/oracle/c.sql

6. Now open the database with RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;

Cloning an Oracle Database.


You have a production database running on one server. The company management wants to develop some new modules and has hired some programmers to do that. These programmers require access to the production database and want to make changes to it. You, as the DBA, can't give direct access to the production database, so you want to create a copy of this database on another server and give the developers access to that copy.

Let us see an example of cloning a database. We have a database running on the production server with the following files:

PARAMETER FILE located at /u01/oracle/ica/initica.ora, containing:

CONTROL_FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1='location=/u01/oracle/ica/arc1'

DATAFILES:
/u01/oracle/ica/sys.dbf
/u01/oracle/ica/usr.dbf
/u01/oracle/ica/rbs.dbf
/u01/oracle/ica/tmp.dbf
/u01/oracle/ica/sysaux.dbf

LOGFILES:
/u01/oracle/ica/log1.ora
/u01/oracle/ica/log2.ora

Now you want to copy this database to SERVER 2, but SERVER 2 has no /u01 filesystem; it has a /d01 filesystem instead. To clone this database on SERVER 2, do the following.

Steps:

1. On SERVER 2 install the same version of the o/s and the same version of Oracle as on SERVER 1.

2. On SERVER 1 generate the CREATE CONTROLFILE statement by typing the following command:

SQL> alter database backup controlfile to trace;

Now go to the USER_DUMP_DEST directory and open the latest trace file. This file contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a file. Let the filename be cr.sql. The CREATE CONTROLFILE statement will look like this:
CREATE CONTROLFILE SET DATABASE prod
    RESETLOGS
    ARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 3
    MAXLOGHISTORY 400
    MAXDATAFILES 200
    MAXINSTANCES 6
LOGFILE
    GROUP 1 ('/u01/oracle/ica/log1.ora'),
    GROUP 2 ('/u01/oracle/ica/log2.ora')
DATAFILE
    '/u01/oracle/ica/sys.dbf' SIZE 300M,
    '/u01/oracle/ica/rbs.dbf' SIZE 50M,
    '/u01/oracle/ica/usr.dbf' SIZE 50M,
    '/u01/oracle/ica/tmp.dbf' SIZE 50M,
    '/u01/oracle/ica/sysaux.dbf' SIZE 100M;

3. On SERVER 2 create the following directories:

$ cd /d01/oracle
$ mkdir ica
$ mkdir arc1
$ cd ica
$ mkdir bdump udump cdump

Shutdown the database on SERVER 1 and transfer all datafiles, logfiles and the control file to SERVER 2 into the /d01/oracle/ica directory. Copy the parameter file to SERVER 2 into the /d01/oracle/dbs directory, and copy all archive log files to SERVER 2 into the /d01/oracle/ica/arc1 directory. Copy the cr.sql script file to the /d01/oracle/ica directory.

4. Open the parameter file on SERVER 2 and change the following parameters:

CONTROL_FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1='location=/d01/oracle/ica/arc1'

5. Now open the cr.sql file in a text editor and change the file locations like this:
CREATE CONTROLFILE SET DATABASE prod
    RESETLOGS
    ARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 3
    MAXLOGHISTORY 400
    MAXDATAFILES 200
    MAXINSTANCES 6
LOGFILE
    GROUP 1 ('/d01/oracle/ica/log1.ora'),
    GROUP 2 ('/d01/oracle/ica/log2.ora')
DATAFILE
    '/d01/oracle/ica/sys.dbf' SIZE 300M,
    '/d01/oracle/ica/rbs.dbf' SIZE 50M,
    '/d01/oracle/ica/usr.dbf' SIZE 50M,
    '/d01/oracle/ica/tmp.dbf' SIZE 50M,
    '/d01/oracle/ica/sysaux.dbf' SIZE 100M;

6. On SERVER 2 export the ORACLE_SID environment variable and start the instance:

$ export ORACLE_SID=ica
$ sqlplus / as sysdba
SQL> startup nomount;

7. Run the cr.sql script to create the controlfile:

SQL> @/d01/oracle/ica/cr.sql

8. Open the database:

SQL> alter database open resetlogs;

Managing the UNDO TABLESPACE


Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo. Undo records are used to:

Roll back transactions when a ROLLBACK statement is issued Recover the database Provide read consistency Analyze data as of an earlier point in time by using Flashback Query Recover from logical corruptions using Flashback features

Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.

Switching to Automatic Management of Undo Space


To switch to automatic management of undo space, set the following parameters.

Steps:

1. If you did not create an undo tablespace at the time of creating the database, create one by typing the following command:

SQL> create undo tablespace myundo datafile
     '/u01/oracle/ica/undo_tbs.dbf' size 500M
     autoextend on next 5M;

When the system first runs in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed.

2. Shutdown the database and set the following parameters in the parameter file:

UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo

3. Start the database.

Now Oracle Database will use Automatic Undo Space Management.

Calculating the Space Requirements For Undo Retention


You can calculate space requirements manually using the following formula:
UndoSpace = UR * UPS + overhead

where:

UndoSpace is the number of undo blocks
UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
UPS is undo blocks for each second
overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)

As an example, if UNDO_RETENTION is set to 3 hours, and the transaction rate (UPS) is 100 undo blocks for each second, with an 8K block size, the required undo space is computed as follows:

(3 * 3600 * 100 * 8K) = 8.24 GB

To get the values for UPS and overhead, query the V$UNDOSTAT view:

SQL> select * from V$UNDOSTAT;
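The arithmetic above can be checked with a short script. The 3-hour retention, 100 undo blocks per second, and 8K block size are the example's assumed inputs, not measured values; in practice UPS comes from V$UNDOSTAT, and the metadata overhead term is ignored here.

```python
# Undo space estimate: UndoSpace = UR * UPS * block_size (+ overhead).
# The inputs below are the example's assumptions, not measured figures.
undo_retention_secs = 3 * 3600   # UR: UNDO_RETENTION of 3 hours
undo_blocks_per_sec = 100        # UPS: from V$UNDOSTAT in practice
block_size = 8 * 1024            # 8K database block size

undo_bytes = undo_retention_secs * undo_blocks_per_sec * block_size
print(round(undo_bytes / 2**30, 2))  # -> 8.24 (GB, overhead ignored)
```

Raising UNDO_RETENTION for long-running queries or Flashback scales this requirement linearly, which is why the autoextend option shown earlier is useful while the rate is still unknown.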

Altering UNDO Tablespace


If the undo tablespace is full, you can resize its existing datafiles or add new datafiles to it. The following example extends an existing datafile:

SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;

The following example adds a new datafile to the undo tablespace:

SQL> ALTER TABLESPACE myundo ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

Dropping an Undo Tablespace


Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:

SQL> DROP TABLESPACE myundo;

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.

Switching Undo Tablespaces


You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace. The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.

Viewing Information about Undo Tablespace

To view statistics for tuning the undo tablespace, query the following dictionary view:

SQL> select * from v$undostat;

To see how many active transactions there are and to see undo segment information, give the following command:

SQL> select * from v$transaction;

To see the sizes of extents in the undo tablespace, give the following query:

SQL> select * from DBA_UNDO_EXTENTS;

Export and Import


These tools are used to transfer data from one Oracle database to another. You use the Export tool to export data from the source database, and the Import tool to load data into the target database. When you export tables from the source database, the Export tool extracts the tables and puts them into a dump file. This dump file is transferred to the target database, where the Import tool copies the data from the dump file into the target database.

From version 10g, Oracle recommends using the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.

The export dump file contains objects in the following order:

1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, function-based, and domain indexes

When you import the tables, the Import tool performs the actions in the following order: new tables are created, data is imported and indexes are built, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data.

Invoking Export and Import

You can run the Export and Import tools in two modes:

Command line mode
Interactive mode

When you just type exp or imp at the o/s prompt, the tool runs in interactive mode, i.e. it prompts you for all the necessary input. If you supply command line arguments when calling exp or imp, it runs in command line mode.

Command Line Parameters of the Export Tool

You can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

Keyword               Description (Default)
--------------------  -----------------------------------------------------
USERID                username/password
BUFFER                size of data buffer
FILE                  output files (EXPDAT.DMP)
COMPRESS              import into one extent (Y)
GRANTS                export grants (Y)
INDEXES               export indexes (Y)
DIRECT                direct path (N)
LOG                   log file of screen output
ROWS                  export data rows (Y)
CONSISTENT            cross-table consistency (N)
FULL                  export entire file (N)
OWNER                 list of owner usernames
TABLES                list of table names
RECORDLENGTH          length of IO record
INCTYPE               incremental export type
RECORD                track incr. export (Y)
TRIGGERS              export triggers (Y)
STATISTICS            analyze objects (ESTIMATE)
PARFILE               parameter filename
CONSTRAINTS           export constraints (Y)
OBJECT_CONSISTENT     transaction set to read only during object export (N)
FEEDBACK              display progress every x rows (0)
FILESIZE              maximum size of each dump file
FLASHBACK_SCN         SCN used to set session snapshot back to
FLASHBACK_TIME        time used to get the SCN closest to the specified time
QUERY                 select clause used to export a subset of a table
RESUMABLE             suspend when a space related error is encountered (N)
RESUMABLE_NAME        text string used to identify resumable statement
RESUMABLE_TIMEOUT     wait time for RESUMABLE
TTS_FULL_CHECK        perform full or partial dependency check for TTS
TABLESPACES           list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE              template name which invokes iAS mode export

The Export and Import tools support four modes of operation:

FULL: exports all the objects in all schemas
OWNER: exports objects belonging only to the given OWNER
TABLES: exports individual tables
TABLESPACE: exports all objects located in a given TABLESPACE

Example of Exporting a Full Database

The following example shows how to export the full database:

$ exp USERID=scott/tiger FULL=y FILE=myfull.dmp

In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account to connect to the database. Note that to perform a full export the user should have the DBA or EXP_FULL_DATABASE role.

Example of Exporting Schemas

To export objects stored in particular schemas, run the Export utility with the following arguments:

$ exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp

The above command will export all the objects stored in SCOTT's and ALI's schemas.

Exporting Individual Tables

To export individual tables give the following command:

$ exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp

This will export Scott's emp and sales tables.

Exporting a Consistent Image of the Tables

If you include the CONSISTENT=Y option in the export command, the Export utility will export a consistent image of the tables, i.e. changes made to the tables during the export operation will not be exported.

Using Import Utility


Objects exported by the Export utility can only be imported by the Import utility. The Import utility can run in interactive mode or command line mode.

You can let Import prompt you for parameters by entering the IMP command followed by your username/password:

Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:

Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

USERID must be the first parameter on the command line.

Keyword                Description (Default)
---------------------  ----------------------------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE                   input files (EXPDAT.DMP)
SHOW                   just list file contents (N)
IGNORE                 ignore create errors (N)
GRANTS                 import grants (Y)
INDEXES                import indexes (Y)
ROWS                   import data rows (Y)
LOG                    log file of screen output
FULL                   import entire file (N)
FROMUSER               list of owner usernames
TOUSER                 list of usernames
TABLES                 list of table names
RECORDLENGTH           length of IO record
INCTYPE                incremental import type
COMMIT                 commit array insert (N)
PARFILE                parameter filename
CONSTRAINTS            import constraints (Y)
DESTROY                overwrite tablespace data file (N)
INDEXFILE              write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK               display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE               maximum size of each dump file
STATISTICS             import precomputed statistics (always)
RESUMABLE              suspend when a space related error is encountered (N)
RESUMABLE_NAME         text string used to identify resumable statement
RESUMABLE_TIMEOUT      wait time for RESUMABLE
COMPILE                compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION  import streams general metadata (Y)
STREAMS_INSTANTIATION  import streams instantiation metadata (N)

Example: Importing Individual Tables

To import individual tables from a full database export dump file give the following command:

$ imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)

This command will import only the emp and dept tables into the Scott user, and you will get output similar to that shown below:

Export file created by EXPORT:V10.00.00 via conventional path
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table                       "DEPT"          4 rows imported
. . importing table                        "EMP"         14 rows imported

Import terminated successfully without warnings.

Example: Importing Tables of One User Account into Another User Account

For example, suppose Ali has exported tables into a dump file mytables.dmp, and now Scott wants to import these tables. To achieve this Scott will give the following import command:

$ imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott

The Import utility will give a warning that the tables in the dump file were exported by user Ali, not by you, and then proceed.

Example: Importing Tables Using Pattern Matching

Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the % wildcard character in the TABLES option. For example, the following command will import all tables whose names start with the letter "a" and all tables whose names contain the letter "d":

$ imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)

Migrating a Database across platforms.


The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms. The following steps present a general overview of how to move a database between platforms.

1. As a DBA user, issue the following SQL query to get the exact names of all tablespaces. You will need this information later in the process:

SQL> SELECT tablespace_name FROM dba_tablespaces;

2. As a DBA user, perform a full export from the source database, for example:

> exp system/manager FULL=y FILE=myfullexp.dmp

3. Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.

4. Create a database on the target server.

5. Before importing the dump file, you must first create your tablespaces, using the information obtained in step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as the source database, which may not be compatible with the file structure on the target system.

6. As a DBA user, perform a full import with the IGNORE parameter enabled:

> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp

Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.

7. Perform a full backup of your new database

DATA PUMP Utility


Starting with Oracle 10g, Oracle has introduced an enhanced version of the EXPORT and IMPORT utilities, known as Data Pump. Data Pump is similar to EXPORT and IMPORT but has many advantages. Some of the advantages are:

- Most Data Pump export and import operations occur on the Oracle database server, i.e. all the dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred over the network.
- You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and, after some time, need to do other urgent work.
- The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.
- The ability to estimate how much space an export job would consume, without actually performing the export.
- Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.

Using Data Pump Export Utility


To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system. The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.
$mkdir my_dump_dir
$sqlplus / as sysdba
SQL> create directory data_pump_dir as '/u01/oracle/my_dump_dir';
Now grant access on this directory object to the SCOTT user:
SQL> grant read,write on directory data_pump_dir to scott;

Example of Exporting a Full Database


To export the full database, give the following command:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

The above command will export the full database and create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server. In some cases, where the database is in terabytes, the above command will not be feasible, since the dump file size will be larger than the operating system file size limit, and hence the export will fail. In this situation you can create multiple dump files by typing the following command:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob

This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The FILESIZE parameter specifies the maximum size of each dump file; when a file reaches this size, Data Pump continues writing to the next file.
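As an illustration only, the naming behavior of the %U substitution variable (a two-digit incrementing number) can be sketched like this; the helper function below is hypothetical, not part of Data Pump:

```python
# Hypothetical helper illustrating Data Pump's %U substitution variable,
# which expands to a two-digit incrementing sequence number (01, 02, ...).
def dump_file_names(template: str, count: int) -> list[str]:
    return [template.replace('%U', f'{i:02d}') for i in range(1, count + 1)]

print(dump_file_names('full%U.dmp', 3))
# ['full01.dmp', 'full02.dmp', 'full03.dmp']
```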

Example of Exporting a Schema


To export all the objects of SCOTT's schema, you can run the following Data Pump export command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT
You can omit the SCHEMAS parameter, since schema mode is the default mode of Data Pump export. If you want to export the objects of multiple schemas, you can specify the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI

Exporting Individual Tables using Data Pump Export


You can use Data Pump Export utility to export individual tables. The following example shows the syntax to export tables
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments

Exporting Tables Located in a Tablespace

If you want to export tables located in a particular tablespace you can type the following command
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4, tbs_5, tbs_6

The above will export all the objects located in the tbs_4, tbs_5 and tbs_6 tablespaces.

Excluding and Including Objects during Export


You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with A, you can type the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:\"LIKE 'A%'\"
Then all tables in Scott's schema whose names start with A will not be exported. Similarly, you can use the INCLUDE option to export only certain objects, like this:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:\"LIKE 'A%'\"
This is the opposite of the EXCLUDE option, i.e. it will export only those tables of Scott's schema whose names start with A. Similarly, you can also exclude other object types such as INDEX, CONSTRAINT, GRANT and USER.
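The LIKE pattern in the EXCLUDE/INCLUDE filter follows standard SQL semantics: % matches any sequence of characters and _ matches a single character. A rough sketch of that matching rule, with a hypothetical sql_like helper for illustration only:

```python
import re

# Hypothetical sketch of SQL LIKE matching: '%' = any sequence, '_' = one char.
def sql_like(pattern: str, value: str) -> bool:
    regex = re.escape(pattern).replace('%', '.*').replace('_', '.')
    return re.fullmatch(regex, value) is not None

# Tables whose names start with A are matched by the pattern 'A%'.
print([t for t in ['ACCOUNTS', 'ADDRESS', 'EMP', 'DEPT'] if sql_like('A%', t)])
# ['ACCOUNTS', 'ADDRESS']
```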

Using Query to Filter Rows during Export


You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the employees table whose salary is above 10000 and whose dept_id is greater than 10:

expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp

Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)
You can suspend running export jobs and later resume or kill these jobs using Data Pump Export. You can start a job from one client machine, suspend it, and later continue it from the same client or restart it from another client machine. For example, suppose a DBA starts a full database export at client machine CLNT1 by typing the following command:
$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He will then get the Export> prompt, where he can type interactive commands. To stop the export job, he types the following command:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and the client exits. After finishing his other work, the DBA wants to resume the export job. The client machine from which he originally started the job is unavailable, so he goes to another client machine and reattaches to the job by typing the following command:
$expdp hr/hr@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client. Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the export job.

Flash Back Features


From Oracle 9i onwards, Oracle has provided the Flashback Query feature. It is useful for recovering from accidental statement failures. For example, suppose a user accidentally deletes rows from a table and commits; using a flashback query he can get the rows back. The Flashback feature depends on how much undo retention time you have specified. If you have set the UNDO_RETENTION parameter to 2 hours, Oracle attempts to retain committed undo data in the undo tablespace for 2 hours, so users can recover from mistakes made within the last 2 hours only. For example, suppose John issues a delete statement at 10 AM and commits it. After 1 hour he realizes that the delete statement was performed by mistake. Now he can issue a flashback AS OF query to get back the deleted rows, like this:

Flashback Query
SQL> select * from emp as of timestamp sysdate-1/24;
Or
SQL> SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS');
To insert the accidentally deleted rows back into the table, he can type:
SQL> insert into emp (select * from emp as of timestamp sysdate-1/24);
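The expression sysdate-1/24 works because Oracle date arithmetic treats the number 1 as one day, so 1/24 is one hour. The same arithmetic, sketched in Python for illustration:

```python
from datetime import datetime, timedelta

# In Oracle date arithmetic, subtracting a number subtracts that many days,
# so sysdate - 1/24 means "one hour ago".
one_hour = timedelta(days=1 / 24)
print(one_hour == timedelta(hours=1))  # True

now = datetime(2007, 6, 7, 11, 0, 0)
print(now - one_hour)  # 2007-06-07 10:00:00
```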

Using Flashback Version Query


You use a Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A new row version is created whenever a COMMIT statement is executed. The Flashback Version Query returns a table with a row for each version of the row that existed at any time during the time interval you specify. Each row in the table includes pseudocolumns of metadata about the row version. The pseudocolumns available are:
VERSIONS_XID: Identifier of the transaction that created the row version
VERSIONS_OPERATION: Operation performed (I for Insert, U for Update, D for Delete)
VERSIONS_STARTSCN: Starting System Change Number when the row version was created
VERSIONS_STARTTIME: Starting system change time when the row version was created
VERSIONS_ENDSCN: SCN when the row version expired
VERSIONS_ENDTIME: Timestamp when the row version expired

To understand, let us see the following example. Before starting this example, let us note the current timestamp:
SQL> select to_char(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS') from dual;

TO_CHAR(SYSTIMESTAMP)
---------------------
2007-06-19 20:30:43

Suppose a user creates an emp table, inserts a row into it, and commits:
SQL> create table emp (empno number(5), name varchar2(20), sal number(10,2));
SQL> insert into emp values (101,'Sami',5000);
SQL> commit;
At this time the emp table has one version of one row. Now a user sitting at another machine erroneously changes the salary from 5000 to 2000 using an update statement:
SQL> update emp set sal=sal-3000 where empno=101;
SQL> commit;
Subsequently, a new transaction updates the name of the employee from Sami to Smith:
SQL> update emp set name='Smith' where empno=101;
SQL> commit;
At this point, the DBA detects the application error and needs to diagnose the problem. The DBA issues the following query to retrieve versions of the rows in the emp table that correspond to empno 101. The query uses the Flashback Version Query pseudocolumns:
SQL> connect / as sysdba
SQL> column versions_starttime format a16
SQL> column versions_endtime format a16
SQL> set linesize 120;

SQL> select versions_xid, versions_startscn, versions_endscn,
     versions_operation, empno, name, sal
     from emp versions between
     timestamp to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh:mi:ss')
     and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh:mi:ss');

VERSIONS_XID  V  STARTSCN  ENDSCN  EMPNO  NAME   SAL
------------  -  --------  ------  -----  -----  ----
0200100020D   U     12320          101    SMITH  2000
02001003C02   U     11345   12320  101    SAMI   2000
0002302C03A   I     11323   11345  101    SAMI   5000

The output should be read from bottom to top. From it we can see that an insert took place, then the erroneous update, and then another update that changed the name. The DBA identifies transaction 02001003C02 as the erroneous one and issues the following query to get the SQL command to undo the change:
SQL> select operation, logon_user, undo_sql from flashback_transaction_query where xid=HEXTORAW('02001003C02');

OPERATION LOGON_USER UNDO_SQL
--------- ---------- ---------------------------------------
U         SCOTT      update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA'

Now DBA can execute the command to undo the changes made by the user
SQL> update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA';
1 row updated.

Using Flashback Table to return Table to Past States.


Oracle Flashback Table provides the DBA the ability to recover a table, or a set of tables, to a specified point in time in the past very quickly and easily, without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table uses information in the undo tablespace to restore the table; therefore, the UNDO_RETENTION parameter is significant when flashing back tables to a past state. You can only flash back tables up to the retention time you specified.

Row movement must be enabled on the table for which you are issuing the FLASHBACK TABLE statement. You can enable row movement with the following SQL statement:
ALTER TABLE table ENABLE ROW MOVEMENT;
The following example performs a FLASHBACK TABLE operation on the table emp:
FLASHBACK TABLE emp TO TIMESTAMP TO_TIMESTAMP('2007-06-19 09:30:00', 'YYYY-MM-DD HH24:MI:SS');

The emp table is restored to its state when the database was at the time specified by the timestamp.
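The TO_TIMESTAMP format mask used in these examples ('YYYY-MM-DD HH24:MI:SS') corresponds to the strptime directives '%Y-%m-%d %H:%M:%S'. A quick sanity check of that mapping, for illustration only:

```python
from datetime import datetime

# Oracle format mask          Python strptime equivalent
# YYYY-MM-DD HH24:MI:SS  <->  %Y-%m-%d %H:%M:%S
ts = datetime.strptime('2007-06-19 09:30:00', '%Y-%m-%d %H:%M:%S')
print(ts.year, ts.hour, ts.minute)  # 2007 9 30
```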

Example: At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses Flashback Table to return the table to its state at 14:00, as shown in this example:
FLASHBACK TABLE EMPLOYEES TO TIMESTAMP TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH24:MI:SS') ENABLE TRIGGERS;

You have to give the ENABLE TRIGGERS option if you want triggers on the table to remain enabled during the operation; otherwise, by default, all database triggers on the table are disabled while the Flashback Table operation is performed.

Recovering Drop Tables (Undo Drop Table)


In Oracle 10g, Oracle introduced the concept of the Recycle Bin: when you drop a table, the database does not immediately remove the space used by the table. Instead, the table is renamed and placed in the Recycle Bin. The FLASHBACK TABLE...BEFORE DROP command will restore the table. This feature is not dependent on the UNDO tablespace, so the UNDO_RETENTION parameter has no impact on it. For example, suppose a user accidentally drops the emp table:
SQL> drop table emp;
Table dropped.

Now, to the user, it appears that the table is dropped, but it is actually renamed and placed in the Recycle Bin. To recover this dropped table a user can type the command:
SQL> flashback table emp to before drop;
You can also restore the dropped table under a different name, like this:
SQL> flashback table emp to before drop rename to emp2;

Purging Objects from the Recycle Bin
If you want to reclaim the space used by a dropped table, give the following command:
SQL> purge table emp;
If you want to purge the objects of the logged-on user, give the following command:
SQL> purge recyclebin;
If you want to reclaim space for dropped objects of a particular tablespace, give the command:
SQL> purge tablespace hr;
You can also purge only objects from a tablespace belonging to a specific user, using the following form of the command:
SQL> PURGE TABLESPACE hr USER scott;
If you have the SYSDBA privilege, then you can purge all objects from the recycle bin, regardless of which user owns the objects, using this command:
SQL>PURGE DBA_RECYCLEBIN;

To view the contents of the Recycle Bin, give the following command:
SQL> show recyclebin;

Permanently Dropping Tables
If you want to permanently drop a table without putting it into the Recycle Bin, drop it with the PURGE clause, like this:
SQL> drop table emp purge;

This will drop the table permanently and it cannot be restored.

Flashback Drop of Multiple Objects With the Same Original Name
You can create, and then drop, several objects with the same original name, and they will all be stored in the recycle bin. For example, consider these SQL statements:
CREATE TABLE EMP ( ...columns ); -- EMP version 1
DROP TABLE EMP;
CREATE TABLE EMP ( ...columns ); -- EMP version 2
DROP TABLE EMP;
CREATE TABLE EMP ( ...columns ); -- EMP version 3
DROP TABLE EMP;

In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped. You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the table, as shown in this example:
FLASHBACK TABLE EMP TO BEFORE DROP;

The most recently dropped table with that original name is retrieved from the recycle bin, with its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The following example shows the retrieval from the recycle bin of all three dropped EMP tables from the previous example, with each assigned a new name:
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;

Important Points:
1. There is no guarantee that objects will remain in the Recycle Bin. Oracle may empty the recycle bin whenever space pressure occurs, i.e. whenever a tablespace becomes full and a transaction requires new extents, Oracle will delete objects from the recycle bin.
2. A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints and so on) go into the recycle bin together when you drop the table. Likewise, when you perform Flashback Drop, the objects are generally all retrieved together.
3. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to how long dropped objects remain in the recycle bin. Depending upon system activity, a dropped object may remain in the recycle bin for seconds, or for months.

Flashback Database: Alternative to Point-In-Time Recovery


Oracle Flashback Database lets you quickly recover the entire database from logical data corruptions or user errors. To enable Flashback Database, you set up a flash recovery area and set a flashback retention target to specify how far back into the past you want to be able to restore your database. Once you set these parameters, the database, at regular intervals, copies images of each altered block in every datafile into flashback logs stored in the flash recovery area. These flashback logs are used to flash the database back to a point in time.

Enabling Flash Back Database


Step 1. Shut down the database if it is already running, and set the following parameters:
DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
DB_RECOVERY_FILE_DEST_SIZE=10G
DB_FLASHBACK_RETENTION_TARGET=4320
(Note: the db_flashback_retention_target is specified in minutes here we have specified 3 days i.e. 3x24x60=4320)
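The retention arithmetic in the note above can be checked directly:

```python
# DB_FLASHBACK_RETENTION_TARGET is given in minutes:
# 3 days x 24 hours x 60 minutes = 4320 minutes.
retention_days = 3
retention_minutes = retention_days * 24 * 60
print(retention_minutes)  # 4320
```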

Step 2. Start the instance and mount the database.
SQL> startup mount;
Step 3. Now enable Flashback Database by giving the following command:
SQL> alter database flashback on;
Oracle now starts writing flashback logs to the recovery area.

To determine how large the flash recovery area should be: after you have enabled the Flashback Database feature and allowed the database to generate some flashback logs, run the following query:
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

This will show what size the recovery area should be set to.

To determine how far back you can flash back the database, i.e. the earliest SCN and earliest time, give the following query:

SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

Example: Flashing Back Database to a point in time


Suppose a user erroneously drops a schema at 10:00AM. You as a DBA come to know of this at 5PM. Since you have configured the flashback area and set the flashback retention time to 3 days, you can flash the database back to 9:59AM by following the given procedure:
1. Start RMAN
$rman target /
2. Run the FLASHBACK DATABASE command to return the database to 9:59AM by typing the following command:
RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');
or, you can also type this command:
RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);

3. When the Flashback Database operation completes, you can evaluate the results by opening the database read-only and running some queries to check whether the Flashback Database operation has returned the database to the desired state.
RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';

At this time, you have several options.
Option 1: If you are content with the result, you can open the database by performing ALTER DATABASE OPEN RESETLOGS:
SQL> ALTER DATABASE OPEN RESETLOGS;

Option 2: If you discover that you have chosen the wrong target time for your Flashback Database operation, you can use RECOVER DATABASE UNTIL to bring the database forward, or perform FLASHBACK DATABASE again with an SCN further in the past. You can completely undo the effects of your flashback operation by performing complete recovery of the database:
RMAN> RECOVER DATABASE;

Option 3: If you only want to retrieve some lost data from the past time, you can open the database read-only, perform a logical export of the data using the Oracle export utility, then run RECOVER DATABASE to return the database to the present time and re-import the data using the Oracle import utility.
4. Since in our example only a schema was dropped and the rest of the database is good, the third option is relevant for us. Now, come out of RMAN and run the EXPORT utility to export the whole schema:
$exp userid=system/manager file=scott.dmp owner=SCOTT

5. Now start RMAN and recover the database to the present time:
$rman target /
RMAN> RECOVER DATABASE;

6. After the database is recovered, shut down and restart the database in normal mode, and import the schema by running the IMPORT utility:
$imp userid=system/manager file=scott.dmp

Log Miner
Using the LogMiner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest:

The source database is the database that produces all the redo log files that you want LogMiner to analyze. The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

For example, consider the following SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);

Without the dictionary, LogMiner will display:


insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'), HEXTORAW('c229'),HEXTORAW('c3020b'));

The redo log files contain the changes made to the database or database dictionary.

LogMiner Dictionary Options


LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

Using the Online Catalog

Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

Extracting a LogMiner Dictionary to the Redo Log Files

Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

Extracting the LogMiner Dictionary to a Flat File

This option is maintained for backward compatibility with previous releases. It does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.

Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-

OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Extracting a LogMiner Dictionary to the Redo Log Files
To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not). To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Extracting the LogMiner Dictionary to a Flat File
When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.
1. Set the initialization parameter UTL_FILE_DIR in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:
UTL_FILE_DIR = /oracle/database

2. Start the database:
SQL> startup
3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/database/',
DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

Redo Log File Options


To mine data in the redo log files, LogMiner needs information about which redo log files to mine.

You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

Automatically
If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.

Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

Example: Finding All Modifications in the Current Redo Log File


The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner. Step 1 Specify the list of redo log files to be analyzed. Specify the redo log files which you want to analyze.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/usr/oracle/ica/log1.ora', OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/u01/oracle/ica/log2.ora', OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 2 Start LogMiner. Start LogMiner and specify the dictionary to use.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(

OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Step 3 Query the V$LOGMNR_CONTENTS view. Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
USR  XID        SQL_REDO / SQL_UNDO
---  ---------  -------------------------------------------------------------
HR   1.11.1476  REDO: set transaction read write;

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME",
                "LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID",
                "SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID")
                values ('306','Mohammed','Sami','MDSAMI','1234567890',
                TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss'),
                'HR_REP','120000','.05','105','10');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '306'
                and "FIRST_NAME" = 'Mohammed' and "LAST_NAME" = 'Sami'
                and "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = '1234567890'
                and "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43',
                'dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'HR_REP'
                and "SALARY" = '120000' and "COMMISSION_PCT" = '.05'
                and "DEPARTMENT_ID" = '10' and ROWID = 'AAAHSkAABAAAY6rAAO';

OE   1.1.1484   REDO: set transaction read write;

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                ROWID = 'AAAHTKAABAAAY9mAAB';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                ROWID = 'AAAHTKAABAAAY9mAAB';

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                ROWID = 'AAAHTKAABAAAY9mAAC';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                ROWID = 'AAAHTKAABAAAY9mAAC';

OE   1.1.1484   REDO: commit;

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME",
                "LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID",
                "SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID")
                values ('307','John','Silver','JSILVER','5551112222',
                TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss'),
                'SH_CLERK','110000','.05','105','50');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307'
                and "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver'
                and "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222'
                and "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03',
                'dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'SH_CLERK'
                and "SALARY" = '110000' and "COMMISSION_PCT" = '.05'
                and "MANAGER_ID" = '105' and "DEPARTMENT_ID" = '50'
                and ROWID = 'AAAHSkAABAAAY6rAAP';

HR   1.15.1481  REDO: set transaction read write;

HR   1.15.1481  REDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '205'
                and "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins'
                and "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080'
                and "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01',
                'dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'AC_MGR'
                and "SALARY" = '12000' and "COMMISSION_PCT" IS NULL
                and "MANAGER_ID" = '101' and "DEPARTMENT_ID" = '110'
                and ROWID = 'AAAHSkAABAAAY6rAAM';
                UNDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME",
                "LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID",
                "SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID")
                values ('205','Shelley','Higgins','SHIGGINS','515.123.8080',
                TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss'),
                'AC_MGR','12000',NULL,'101','110');

OE   1.8.1484   REDO: set transaction read write;

OE   1.8.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+12-06') where "PRODUCT_ID" = '2350' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and
                ROWID = 'AAAHTKAABAAAY9tAAD';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD"
                = TO_YMINTERVAL('+20-00') where "PRODUCT_ID" = '2350' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') and
                ROWID = 'AAAHTKAABAAAY9tAAD';

OE   1.8.1484   REDO: commit;

HR   1.11.1476  REDO: commit;

Step 4 End the LogMiner session.


SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example of Mining Without Specifying the List of Redo Log Files Explicitly
The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

Example: Mining Redo Log Files in a Given Time Range
This example assumes that you want to use the data dictionary extracted to the redo log files.
Step 1 Determine the timestamp of the redo log file that contains the start of the data dictionary.
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
     WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
                        WHERE DICTIONARY_BEGIN = 'YES');

NAME                                           FIRST_TIME
---------------------------------------------  --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf   10-jan-2003 12:01:34

Step 2 Display all the redo log files that have been generated so far. This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
     WHERE LOW_TIME > '10-jan-2003 12:01:34';

NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 3 Start LogMiner. Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => '10-jan-2003 12:01:34', -
       ENDTIME   => SYSDATE, -
       OPTIONS   => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                    DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                    DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                    DBMS_LOGMNR.CONTINUOUS_MINE);

Step 4 Query the V$LOGMNR_LOGS view. This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;

NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 5 Query the V$LOGMNR_CONTENTS view. To reduce the number of rows returned by the query, exclude all DML statements done in the sys or system schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.) Note that all reconstructed SQL statements returned by the query are correctly translated.
SQL> SELECT USERNAME AS usr,
            (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS xid,
            SQL_REDO
     FROM V$LOGMNR_CONTENTS
     WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')
       AND TIMESTAMP > '10-jan-2003 15:59:53';
USR   XID        SQL_REDO
----  ---------  ---------------------------------------------------
SYS   1.2.1594   set transaction read write;

SYS   1.2.1594   create table oe.product_tracking
                   (product_id number not null,
                    modified_time date,
                    old_list_price number(8,2),
                    old_warranty_period interval year(2) to month);

SYS   1.2.1594   commit;

SYS   1.18.1602  set transaction read write;

SYS   1.18.1602  create or replace trigger oe.product_tracking_trigger
                   before update on oe.product_information
                   for each row
                   when (new.list_price <> old.list_price or
                         new.warranty_period <> old.warranty_period)
                 declare
                 begin
                   insert into oe.product_tracking
                     values (:old.product_id, sysdate,
                             :old.list_price, :old.warranty_period);
                 end;

SYS   1.18.1602  commit;

OE    1.9.1598   update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                       "LIST_PRICE" = 100
                   where "PRODUCT_ID" = 1729 and
                         "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                         "LIST_PRICE" = 80 and
                         ROWID = 'AAAHTKAABAAAY9yAAA';

OE    1.9.1598   insert into "OE"."PRODUCT_TRACKING" values
                   "PRODUCT_ID" = 1729,
                   "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03',
                                             'dd-mon-yyyy hh24:mi:ss'),
                   "OLD_LIST_PRICE" = 80,
                   "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE    1.9.1598   update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                       "LIST_PRICE" = 92
                   where "PRODUCT_ID" = 2340 and
                         "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                         "LIST_PRICE" = 72 and
                         ROWID = 'AAAHTKAABAAAY9zAAA';

OE    1.9.1598   insert into "OE"."PRODUCT_TRACKING" values
                   "PRODUCT_ID" = 2340,
                   "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07',
                                             'dd-mon-yyyy hh24:mi:ss'),
                   "OLD_LIST_PRICE" = 72,
                   "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE    1.9.1598   commit;

Step 6 End the LogMiner session.


SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
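The session above was bounded by a time range; the same continuous-mining session can equally be bounded by SCN. A minimal sketch (the SCN values below are placeholders, not taken from the example):

```sql
-- Hypothetical SCN range; STARTSCN/ENDSCN take the place of STARTTIME/ENDTIME.
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTSCN => 621047,
    ENDSCN   => 625057,
    OPTIONS  => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
                DBMS_LOGMNR.COMMITTED_DATA_ONLY +
                DBMS_LOGMNR.PRINT_PRETTY_SQL +
                DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

-- Query V$LOGMNR_CONTENTS as before, then end the session:
EXECUTE DBMS_LOGMNR.END_LOGMNR();
```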

BACKUP AND RECOVERY


Opening or Bringing the database in Archivelog mode.

To open the database in archive log mode, follow these steps:

STEP 1: Shut down the database if it is running.
STEP 2: Take a full offline backup.
STEP 3: Set the following parameters in the parameter file:
   LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
   LOG_ARCHIVE_DEST_1='location=/u02/ica/arc1'
If you want, you can specify a second destination as well:
   LOG_ARCHIVE_DEST_2='location=/u02/ica/arc2'
STEP 4: Start and mount the database.
   SQL> STARTUP MOUNT
STEP 5: Give the following command:
   SQL> ALTER DATABASE ARCHIVELOG;
STEP 6: Then type the following to confirm:
   SQL> ARCHIVE LOG LIST
STEP 7: Now open the database:
   SQL> alter database open;
STEP 8: It is recommended that you take a full backup after you have brought the database into archive log mode.

To bring the database back to NOARCHIVELOG mode, follow these steps:

STEP 1: Shut down the database if it is running.
STEP 2: Comment out the following parameters in the parameter file by putting "#":
   # LOG_ARCHIVE_DEST_1='location=/u02/ica/arc1'
   # LOG_ARCHIVE_DEST_2='location=/u02/ica/arc2'
   # LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
STEP 3: Start and mount the database.
   SQL> STARTUP MOUNT
STEP 4: Give the following command:
   SQL> ALTER DATABASE NOARCHIVELOG;
STEP 5: Shut down the database and take a full offline backup.
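Whichever mode you end up in, it can be confirmed with a quick query; both of the following show the current archiving state:

```sql
-- Returns ARCHIVELOG or NOARCHIVELOG for the current database
SELECT log_mode FROM v$database;

-- SQL*Plus equivalent, which also lists the archive destination
-- and current log sequence numbers
ARCHIVE LOG LIST
```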

TAKING OFFLINE BACKUPS. ( UNIX )


Shut down the database if it is running. Then start SQL*Plus and connect as SYSDBA:

$ sqlplus
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> exit

After shutting down the database, copy all the datafiles, logfiles, controlfiles, parameter file and password file to your backup destination.

TIP: To identify the datafiles and logfiles, query the data dictionary views V$DATAFILE and V$LOGFILE before shutting down.

Let's suppose all the files are in the "/u01/ica" directory. Then the following command copies all the files to the backup destination /u02/backup:

$ cd /u01/ica
$ cp * /u02/backup/

Be sure to remember the destination of each file; this will be useful when restoring from this backup. You can create a text file and record the destination of each file for future use. Now you can open the database.
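The TIP above can be turned into a single query that builds the complete copy list for an offline backup (standard v$ views; only the parameter file and password file are not covered):

```sql
-- Every file a full offline backup must include, apart from the
-- parameter file and password file:
SELECT name   FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name   FROM v$controlfile;
```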

TAKING ONLINE (HOT) BACKUPS.(UNIX)


To take online backups the database should be running in archivelog mode. To check whether the database is running in archivelog or noarchivelog mode, start SQL*Plus, connect as SYSDBA, and give the command "archive log list"; this will show you the status of archiving.

$ sqlplus
Enter User: / as sysdba
SQL> ARCHIVE LOG LIST

If the database is running in archive log mode then you can take online backups. Let us suppose we want to take an online backup of the "USERS" tablespace. You can query the V$DATAFILE view (joined to V$TABLESPACE) to find out the names of the datafiles associated with this tablespace. Let's suppose the file is "/u01/ica/usr1.dbf". Give the following series of commands to take an online backup of the USERS tablespace:

$ sqlplus
Enter User: / as sysdba
SQL> alter tablespace users begin backup;
SQL> host cp /u01/ica/usr1.dbf /u02/backup
SQL> alter tablespace users end backup;
SQL> exit;
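After "end backup" it is worth confirming that no datafile was accidentally left in backup mode, since a file stuck in ACTIVE state will block a clean shutdown. A quick check:

```sql
-- Files still in hot-backup mode show STATUS = 'ACTIVE'
SELECT d.name, b.status
  FROM v$backup b, v$datafile d
 WHERE b.file# = d.file#
   AND b.status = 'ACTIVE';
```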

RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE.


Option 1: When you don't have a backup. If you have lost one datafile, you don't have any backup, and the datafile does not contain important objects, then you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile. The following are the steps to drop a damaged datafile and open the database (UNIX).

STEP 1: First take a full backup of the database for safety.
STEP 2: Start SQL*Plus and give the following commands:
   $ sqlplus
   Enter User: / as sysdba
   SQL> STARTUP MOUNT
   SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
   SQL> alter database open;

Option 2: When you have the backup. If the database is running in noarchivelog mode and you have a full backup, then there are two options:
i.  Either drop the damaged datafile, if it does not contain important information that you cannot afford to lose.
ii. Or restore from the full backup. You will lose all the changes made to the database since the last full backup.

To drop the damaged datafile, follow the steps shown previously. To restore from a full database backup, do the following:

STEP 1: Take a full backup of the current database.
STEP 2: Restore from the full database backup, i.e. copy all the files from the backup to their original locations (UNIX). Suppose the backup is in the "/u02/backup" directory. Then do the following:
   $ cp /u02/backup/* /u01/ica
This will copy all the files from the backup directory to the original destination. Also remember to copy the control files to all the mirrored locations.

RECOVERING FROM LOSS OF CONTROL FILE.


If you have lost the control file and it is mirrored, then simply copy the control file from a mirrored location to the damaged location and open the database.

If you have lost all the mirrored control files and all the datafiles and logfiles are intact, then you can re-create the control file.

If you have already taken a backup of the control file creation statement (by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;") and you have not added any tablespace since then, just create the control file by executing that statement. But if you have added any new tablespace after generating the create controlfile statement, then you have to edit the script and include the filename and size of the file in the script.

If your script file containing the control file creation statement is "CR.SQL", then do the following:

STEP 1: Start SQL*Plus.
STEP 2: connect / as sysdba
STEP 3: Start, but do not mount, the database:
   SQL> STARTUP NOMOUNT
STEP 4: Run the "CR.SQL" script file.
STEP 5: Mount and open the database:
   SQL> alter database mount;
   SQL> alter database open;

If you do not have a backup of the control file creation statement, then you have to write the CREATE CONTROLFILE statement manually, listing the file names and sizes of all the datafiles. You will lose any datafiles which you do not include. Refer to the "Managing Control File" topic for the CREATE CONTROLFILE statement.
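A minimal sketch of such a manually written statement. Every name, path and size below is hypothetical and must be replaced with your own values; any datafile you leave out is lost:

```sql
-- Hypothetical example only; list every datafile of the database.
CREATE CONTROLFILE REUSE DATABASE "ICA" NORESETLOGS ARCHIVELOG
  MAXLOGFILES 16
  MAXDATAFILES 100
  LOGFILE
    GROUP 1 '/u01/ica/redo01.log' SIZE 50M,
    GROUP 2 '/u01/ica/redo02.log' SIZE 50M
  DATAFILE
    '/u01/ica/system01.dbf',
    '/u01/ica/usr1.dbf';
```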

Recovering Database when the database is running in ARCHIVELOG Mode.


Recovering from the loss of a damaged datafile. If you have lost one datafile, then follow the steps shown below.

STEP 1. Shut down the database if it is running.
STEP 2. Restore the datafile from the most recent backup.

STEP 3. Then start SQL*Plus and connect as SYSDBA:
   $ sqlplus
   Enter User: / as sysdba
   SQL> startup mount;
   SQL> set autorecovery on;
   SQL> alter database recover;
If all archived log files are available then recovery should proceed smoothly. After you get the "Media recovery complete" message, go on to the next step.
STEP 4. Now open the database:
   SQL> alter database open;

Recovering from the loss of archived files: if you have lost the archived files, then immediately shut down the database and take a full offline backup.
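For comparison, the same single-datafile restore and recovery can be driven from RMAN instead of user-managed backups. A sketch, assuming the damaged file is datafile 4 (a hypothetical file number):

```sql
-- At the RMAN prompt, connected to the target database:
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> RESTORE DATAFILE 4;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
```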

Time Based Recovery (INCOMPLETE RECOVERY).


Suppose a user has dropped a crucial table accidentally and you have to recover it. You took a full backup of the database on Monday 13-Aug-2007; the table was created on Tuesday 14-Aug-2007 and thousands of rows were inserted into it. Some user accidentally dropped the table on Thursday 16-Aug-2007 and nobody noticed until Saturday. Now to recover the table, follow these steps:

STEP 1. Shut down the database and take a full offline backup.
STEP 2. Restore all the datafiles, logfiles and control file from the full offline backup which was taken on Monday.
STEP 3. Start SQL*Plus, then start and mount the database.
STEP 4. Give the following command to recover the database until the specified time:
   SQL> recover database until time '2007-08-16:13:55:00' using backup controlfile;
STEP 5. Open the database and reset the logs, because you have performed an incomplete recovery:
   SQL> alter database open resetlogs;
STEP 6. After the database is open, export the table to a dump file using the Export utility.
STEP 7. Restore from the full database backup which you took on Saturday (STEP 1).
STEP 8. Open the database and import the table.

Note: In Oracle 10g you can easily recover dropped tables by using the Flashback feature. For further information please refer to the Flashback Features topic in this book.
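The 10g Flashback shortcut mentioned in the note avoids the whole restore cycle when the dropped table is still in the recycle bin. A sketch (the table name is hypothetical; the recycle bin must be enabled):

```sql
-- See what the recycle bin is holding
SELECT object_name, original_name FROM recyclebin;

-- Bring the dropped table back under its original name
FLASHBACK TABLE important_tab TO BEFORE DROP;
```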

Daily
- Verify instance status
- Check alerts
- Check configured metrics
- Check RMAN backups
- Check storage
- Check CPU contention
- Check waiting times
- Check memory usage
- Check network load
- Check iostat

Weekly
- Invalid objects
- Tuning: indexes and execution plans
- Top SQL
- Environment consistency
- Review of resource policy
- Trends and peaks
- Cleaning of alert logs
- Review of RMAN

Monthly
- Recovery tests
- Analyze the data increment trend
- Tuning
- Review I/O
- Fragmentation
- Row chaining
- High availability analysis
- Scalability
- Schedule monthly downtime

SYS:
- automatically created when the Oracle database is installed
- automatically granted the DBA role
- has a default password: CHANGE_ON_INSTALL (make sure you change it)
- owns the base tables and views for the database data dictionary
- is the default schema when you connect as SYSDBA

Tables in the SYS schema are manipulated only by the database. They should never be modified by any user or database administrator, and no one should create any tables in the schema of user SYS. Database users should not connect to the Oracle database using the SYS account.

SYSTEM:
- automatically created when the Oracle database is installed
- automatically granted the DBA role
- has a default password: MANAGER (make sure you change it)
- used to create additional tables and views that display administrative information
- used to create internal tables and views used by various Oracle database options and tools

Never use the SYSTEM schema to store tables of interest to non-administrative users.

DBA role: Contains all database system privileges.
SYS user account: The DBA role is assigned to this account. All of the base tables and views for the database's data dictionary are stored in this schema and are manipulated only by Oracle.
SYSTEM user account: Has all the system privileges for the database. The additional tables and views that display administrative information, and the internal tables and views used by Oracle tools, are created using this username.

Tablespace administration SQL statements


Finding out the historical tablespace usage (the sizes are in database blocks; the 8192 multiplier assumes an 8 KB block size):

select name,
       tablespace_size*8192/(1024*1024*1024),
       tablespace_maxsize*8192/(1024*1024*1024),
       tablespace_usedsize*8192/(1024*1024*1024),
       rtime
from DBA_HIST_TBSPC_SPACE_USAGE a, v$tablespace b
where tablespace_id = 175
  and a.tablespace_id = b.TS#
order by rtime desc;

Find out the free space in each tablespace:

select tablespace_name, sum(bytes)/1024/1024
from dba_free_space
group by tablespace_name;

select tablespace_name, sum(bytes)/1024/1024
from dba_free_space
where tablespace_name = 'TS_RBS'
group by tablespace_name;

Find out the existing size of the tablespaces by using the following commands:

select file_name, t.tablespace_name tablespace, bytes/1024/1024,
       d.autoextensible autoextend, d.increment_by, d.maxbytes
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and t.tablespace_name = 'TS_TRAINENGINE_FACT_DATA';

select t.tablespace_name tablespace, sum(bytes/1024/1024)
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
group by t.tablespace_name;

Find out the datafiles whose size is greater than 3 GB:

select file_name, bytes/1024/1024
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and bytes/1024/1024 > 3000;

select t.tablespace_name tablespace, sum(bytes/1024/1024)
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and d.tablespace_name = 'TS_GFD_DATA'
group by t.tablespace_name;

Creating a temporary tablespace:

CREATE TEMPORARY TABLESPACE TS_TEMP
  TEMPFILE '/global/reuterdb1/oracle/reutertp/data1/reutertp_temp01.dbf'
  SIZE 20000M REUSE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

Command to rename a datafile. This is normally used when you have previously copied the datafile from one location to another and now want to update the control file. Be careful: move the datafiles only when the database is completely shut down.

alter database rename file
  '/global/reuterdb1/oracle/reutertp/data6/reutertp_GFD_data11.dbf'
to
  '/global/reuterdb1/oracle/reutertp/data3/reutertp_GFD_data11.dbf';

select file_name, bytes
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and bytes/1024/1024 > 3000;

select d.status, sum(bytes/1024/1024)/count(*)
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
group by d.status;

select file_name, t.tablespace_name tablespace, bytes
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and d.tablespace_name like 'UN%';

If there is a problem with tablespace size, then increase the tablespace size.

Find out the datafile path and name for the specified tablespace:

select file_name, tablespace_name from dba_data_files;

Resize the datafile with the following command:

alter database datafile '/u10/oradata/RSOD/RSODundo_02_02.dbf' resize 1000M;

Resizing a datafile:

alter database datafile '/global/mrt-db/oracle/mrt/data1/ppmrtint_tools01.dbf' resize 680M;

Adding a datafile to a tablespace:

alter tablespace INDEX_MED_OPG add datafile
  '/var/opt/oracle/DSS1/data04/INMED_OPGDSS105.dbf' size 1000M;

To change the maxbytes of a datafile, use the command below:

ALTER DATABASE DATAFILE
  '/global/riskeb/oracle/redb/data2/redb_TRAINENGINE_interm_data01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;

Adding a datafile to a tablespace:

alter tablespace ts_GFD_indx add datafile
  '/global/reuterdb1/oracle/reutertp/data6/reutertp_GFD_indx25.dbf'
  SIZE 2048M AUTOEXTEND ON NEXT 1024M MAXSIZE 32000M;

select file_name, t.tablespace_name tablespace, bytes
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name
  and d.tablespace_name like 'BRAINLOBZIP';

Changing a user's quota on a tablespace to unlimited:

alter user ims quota unlimited on ts_data_1;

DBA_USERS: Describes all users of the database
ALL_USERS: Lists users visible to the current user, but does not describe them
USER_USERS: Describes only the current user
DBA_TS_QUOTAS: Describes tablespace quotas for all users
USER_TS_QUOTAS: Describes tablespace quotas for the current user
USER_PASSWORD_LIMITS: Describes the password profile parameters that are assigned to the user
USER_RESOURCE_LIMITS: Displays the resource limits for the current user
DBA_PROFILES: Displays all profiles and their limits
RESOURCE_COST: Lists the cost for each resource

Syntax for altering a user:

ALTER USER avyrros
  IDENTIFIED BY password
  DEFAULT TABLESPACE data_ts
  TEMPORARY TABLESPACE temp_ts
  QUOTA 100M ON data_ts
  QUOTA 0 ON test_ts
  PROFILE clerk;

Defining user quotas on tablespaces


Alter user listmgmt quota 1024M on ts_data01 quota 1024M on ts_index01;

Finding out tablespace quotas for a user:

SQL> select * from dba_ts_quotas where username like '%LISTMGMT%';

TABLESPACE_NAME  USERNAME    BYTES       MAX_BYTES    BLOCKS   MAX_BLOCKS  DRO
---------------  ---------   ----------  ----------   -------  ----------  ---
TS_DATA01        LISTMGMT    661651456   1048576000   80768    128000      NO
TS_INDEX01       LISTMGMT    1048576000  1048576000   128000   128000      NO

SQL> Alter user listmgmt quota 1024M on ts_data01 quota 1024M on ts_index01;
User altered.

The following query lists all tablespace quotas specifically assigned to each user:

SELECT * FROM DBA_TS_QUOTAS;

TABLESPACE  USERNAME  BYTES  MAX_BYTES  BLOCKS  MAX_BLOCKS
----------  --------  -----  ---------  ------  ----------
USERS       JFEE      0      512000     0       250
USERS       DCRANNEY  0      -1         0       -1

When specific quotas are assigned, the exact number is indicated in the MAX_BYTES column. Note that this number is always a multiple of the database block size, so if you specify a tablespace quota that is not a multiple of the database block size, then it is rounded up accordingly. Unlimited quotas are indicated by -1. Users having both zero quota and zero bytes used in a tablespace are not listed in the report. TFSTSUSR reports on DBA_TS_QUOTAS, which holds resource privilege information only at the tablespace level. A user can have an overriding database resource privilege that can be seen by querying DBA_USERS.

V$SESSION: Lists session information for each current session; includes the user name
V$SESSTAT: Lists user session statistics

V$STATNAME: Displays decoded statistic names for the statistics shown in the V$SESSTAT view
PROXY_USERS: Describes users who can assume the identity of other users

See Also: Oracle Database SQL Reference for complete descriptions of the preceding data dictionary and dynamic performance views.

2) WHEN MAX PROCESSES IS REACHED

The error "ORA-00020: maximum number of processes (%s) exceeded" is raised when the number of processes accessing the Oracle database exceeds the specified limit.

One day we got the above error on our development server. On that server the PROCESSES parameter is set to 100, and we knew we did not have 100 users on dev at the moment, so the only reason could be that somebody was opening connections in the front end and not closing them, or opening them in a loop. So we asked the app team to check their front-end code.

Edit the init<sid>.ora file and increase the default maximum number of processes, for example from 50 to 200 or 300. Be careful to take memory availability into account. This will give you a temporary solution only. If you are using a web application, restart the application on the MMS console.

The processes that are active take up some of the system memory in what is called the PGA, which is not part of the shared memory parameters. There is also some SGA space for all processes (active or not, but it is small). Generally, if a process is not active, it is not using any resources to maintain a place for that process. So a change in the PROCESSES parameter from 75 to 150 will not have much of an impact on the system's Oracle-dedicated memory (the SGA).

The "number of sessions exceeded" error has a similar cause: processes are created from ASP.NET (or whatever the client is) and then not closed, and Oracle has a limit on the number of processes specified in the init.ora file.
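Before raising the PROCESSES parameter it helps to see how close the instance actually gets to the limit; a sketch using the standard V$RESOURCE_LIMIT view:

```sql
-- Current and high-water utilization against the configured limits
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions');
```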

select value, name from v$parameter where name = 'processes';

select substr(b.owner || '.' || b.object_name, 1, 30) "OBJECT",
       substr(SID || ',' || SERIAL#, 1, 10) session_id,
       substr(lower(USERNAME), 1, 10) username,
       lower(OSUSER) osuser,
       lower(TERMINAL) terminal,
       substr(PROGRAM, instr(PROGRAM, '\', -1) + 1) program
from V$LOCKED_OBJECT a, all_objects b, v$session c
where a.OBJECT_ID = b.OBJECT_ID
  and a.SESSION_ID = c.SID;

select username, TERMINAL, COUNT(*) from v$session GROUP BY TERMINAL, username;

select TERMINAL, COUNT(*) from v$session where schemaname = 'KEYSTONE' GROUP BY TERMINAL;

select TERMINAL, COUNT(*) from v$session GROUP BY TERMINAL;

select * from v$session where schemaname = 'KEYSTONE' and terminal = 'SPL1W082';

When a filesystem is 100% full, sometimes we need to move datafiles to another filesystem temporarily so the filesystem can be extended by the UNIX team. Non-SYSTEM datafiles can be moved in the following way. This method has the advantage that it doesn't require shutting down the instance, but it only works with non-SYSTEM tablespaces. Further, it can't be used for tablespaces that contain active rollback segments or temporary segments.

1. Take the tablespace offline.
2. Rename and/or move the datafile using operating system commands.
3. Use the alter tablespace command to rename the file in the database.
4. Bring the tablespace back online.

sql> connect sys/oracle as sysdba
sql> alter tablespace app_data offline;
sql> alter tablespace app_data rename datafile
     '/u01/oracle/U1/data01.dbf' TO '/u02/oracle/U1/data04.dbf';
sql> alter tablespace app_data online;

The tablespace will be back online using the new name and/or location of the datafile.

How to find pending transactions in the database?

set lines 250
column start_time format a20
column sid format 999
column serial# format 999999
column username format a10
column status format a10
column schemaname format a10
column osuser format a10
column process format a10
column machine format a15

column terminal format a10
column program format a25
column module format a10
column logon format a20
prompt ####################################################
prompt # current transactions:
prompt ####################################################
select t.start_time, s.sid, s.serial#, s.username, s.status, s.schemaname,
       s.osuser, s.process, s.machine, s.terminal, s.program, s.module,
       to_char(s.logon_time, 'DD/MON/YY HH24:MI:SS') logon_time
from v$transaction t, v$session s
where s.saddr = t.ses_addr
order by start_time;

How to find if an spfile is used?

This is the correct way:

select distinct isspecified from v$spparameter;

If you see TRUE there, an spfile is used; otherwise a pfile.

select value from v$parameter where name = 'spfile';

If this comes back with a value other than null, you are using an spfile. If you are using a pfile and want to switch to the spfile, create it (CREATE SPFILE FROM PFILE), then shut down and start up again.

What about if I just want to use the pfile; should I just delete the spfile? You will need to shut down and then

startup pfile=/path/to/initSID.ora

Then you can delete the spfile.

Find the size of a tablespace:

select file_name, t.tablespace_name tablespace, bytes
from dba_data_files d, dba_tablespaces t
where t.tablespace_name = d.tablespace_name;

Find free space in a tablespace:

select tablespace_name, sum(bytes)/1024/1024
from dba_free_space
group by tablespace_name;

Find the last analyzed time for the schema, tables etc.:

SELECT table_name, last_analyzed FROM user_tables ORDER BY last_analyzed DESC;

select table_name, last_analyzed from dba_tables where owner like 'SUP%' and rownum < 4;

How to connect as a user without knowing the password?

You can connect to a user's schema without knowing the user's actual password. Here are the steps (it will work only if you have the DBA privilege, i.e. access to dba_users and the other dba_ views):

1. select username, password from dba_users;
2. Suppose there is a user by the name user1 and you want to log into his schema. Save the value in the password column for this user retrieved from the query in step 1 (say the encrypted value is abcdefgh1234).
3. Change the password of the user:
   alter user user1 identified by temppass;
4. Now you can connect as user1 by using the new password.
5. Then restore the original password (either connected as user1 or through your own user):
   alter user user1 identified by values 'abcdefgh1234';

10) 9i: how to delete and compute statistics


Analyze table test compute statistics;

This will analyze the indexes as well. This is to update statistics.

or:

analyze table test delete statistics;    (delete the existing stats first)
analyze table test compute statistics;   (then compute new stats)

The above commands are for 9i. If you are using 10g then don't use COMPUTE STATISTICS; use the dbms_stats.gather_table_stats procedure instead.
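A minimal 10g dbms_stats call for the same example table (a sketch; the SCOTT owner is assumed from the surrounding examples, and the other parameters are left at their defaults):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',   -- assumed owner, as in the index examples below
    tabname => 'TEST',
    cascade => TRUE);     -- also gather statistics on the table's indexes
END;
/
```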

find all indexes on a table


select index_name from all_indexes where owner = 'SCOTT' and table_name = 'TEST';

This is to find the indexes for a particular table.

Find all indexed columns:

select index_name, column_name, column_position
from all_ind_columns
where table_owner = 'SCOTT' and table_name = 'TEST';

This is to help you find any columns not included in a multi-column index.

Get all session info. Detailed session information is returned by:

SELECT sid, schemaname, terminal, osuser, program
FROM v$session
ORDER BY schemaname, program, terminal, osuser;

Get SQL info for a session:

SELECT sql_text
FROM v$sqltext
WHERE concat(hash_value, address) = (SELECT concat(sql_hash_value, sql_address)
                                     FROM v$session
                                     WHERE sid = ?????)
ORDER BY piece;

15) setting identifiers for different application modules


There is a way for programs to identify themselves:

DBMS_SESSION.SET_IDENTIFIER (client_id VARCHAR2)

If you are working with application developers, they could use this to set the identifier differently for different application modules: New Client Entry, Invoicing, Daily Reports, etc. Then v$session would show this information, in the CLIENT_IDENTIFIER column, for the session running the SQL in question. That would be the easiest way; other ways will take more work.

Drop a tablespace including objects, constraints and datafiles. This drops the whole tablespace, so be very careful:

drop tablespace data_med_lcp including contents and datafiles cascade constraints;

describe the different user privilege views here

Granting privileges to users, some examples:

grant CREATE SYNONYM to price_dev;
grant CREATE SESSION to price_dev;

select grantor, privilege from dba_tab_privs where owner in ('SYS', 'SYSTEM');

You can query user_tab_privs_recd and user_sys_privs:

Select grantee, privilege from dba_tab_privs where owner = 'SIEBEL' and table_name = 'S_SRC';
select * from user_sys_privs where username = user;
select * from dba_sys_privs where grantee = 'KEYSTONE';
select 'grant ' || privilege || ' to keystone' from dba_sys_privs where grantee = 'KEYSTONE';
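A sketch of how an application module might tag itself and how you would then find it (the 'Invoicing' identifier is a hypothetical example):

```sql
-- In the application's session:
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('Invoicing');
END;
/

-- In the DBA's session:
SELECT sid, serial#, username, client_identifier
  FROM v$session
 WHERE client_identifier = 'Invoicing';
```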

select * from dba_role_privs where grantee = 'SUPPDB';
select * from user_tab_privs_recd;
select * from user_tab_privs_recd where owner in ('SYS', 'SYSTEM');
select * from dba_role_privs where grantee = [user];
select 'grant ' || granted_role || ' to keystone' from dba_role_privs where grantee = 'KEYSTONE';
select * from role_role_privs;
role_sys_privs
role_tab_privs

(1) How can I select the role(s) that a user has been assigned?
(A) select * from dba_role_privs where GRANTEE = 'RKSREPGEN';

(2) How can I see all the grants that have been assigned to a role?
(A) select * from dba_sys_privs where GRANTEE = '<ROLE_NAME>';

(3) How can I see all the grants that have been assigned to a user?
(A) select * from dba_sys_privs where grantee = 'RKSREPGEN';

Oracle provides several roles that are built into the database. Some of them are DBA, RESOURCE, and CONNECT. Most DBAs use them to make their tasks easier and simpler, but each of them is a security nightmare.

Let's examine RESOURCE. This is generally given to schema owners. Did you know that it has the UNLIMITED TABLESPACE system privilege, making it able to create any table anywhere in the database, including the SYSTEM tablespace? Obviously, this is not what you want. You would want to restrict the tablespaces to specific users only.

Similarly the role CONNECT, by default, has CREATE TABLE/SEQUENCE/SYNONYM and a few more options. The name CONNECT somehow conveys the impression of the ability to connect only, not anything else. As you can see, however, the ability is much more than that.

Another privilege, the ALTER SESSION system privilege, allows the grantee to issue sql_trace = TRUE in their session. This can have far-reaching consequences.

Therefore, it is not prudent to use built-in roles. Rather, identify the privileges users will need, put them in appropriate roles which you have created, and use those to control authorization. If possible, try not to use the Oracle built-in roles like RESOURCE and CONNECT; create your own roles.

Try these:

For system privileges:
http://downloadwest.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_912a.htm#2073689
For role privileges:
http://downloadwest.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_912a.htm#2063331
For object privileges:
http://downloadwest.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_912a.htm#2063440

In Oracle 10gR2 things are fairly sane:
- CONNECT has only CREATE SESSION
- RESOURCE has CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER and CREATE TYPE

In Oracle 9iR2 things get a little scary:
- CONNECT has ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE, CREATE SESSION, CREATE SYNONYM, CREATE TABLE and CREATE VIEW. Rather a scary lot for a role called CONNECT.
- RESOURCE has CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER and CREATE TYPE

Resize datafiles:

alter database datafile '/var/opt/oracle/DSS1/data04/INDEX_MED_OPGDSS102.dbf' resize 2000M;

Kill session: the SQL*Plus approach. Sessions can be killed from within Oracle using the ALTER SYSTEM KILL SESSION syntax.
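Following the advice above, a purpose-built role in place of CONNECT/RESOURCE might look like this (the role, object and grantee names are hypothetical):

```sql
-- A minimal application role instead of the built-in ones
CREATE ROLE app_user_role;
GRANT CREATE SESSION TO app_user_role;
GRANT SELECT, INSERT, UPDATE ON hr.employees TO app_user_role;
GRANT app_user_role TO price_dev;
```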

First identify the offending session as follows:

SELECT s.sid, s.serial#, s.osuser, s.program FROM v$session s;

SID SERIAL# OSUSER  PROGRAM
--- ------- ------- ------------
  1       1 SYSTEM  ORACLE.EXE
  2       1 SYSTEM  ORACLE.EXE
  3       1 SYSTEM  ORACLE.EXE
  4       1 SYSTEM  ORACLE.EXE
  5       1 SYSTEM  ORACLE.EXE
  6       1 SYSTEM  ORACLE.EXE
 20      60 SYSTEM  DBSNMP.EXE
 43   11215 USER1   SQLPLUSW.EXE
 33    5337 USER2   SQLPLUSW.EXE

The SID and SERIAL# values of the relevant session can then be substituted into the following statement:
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#';
In some situations Oracle is not able to kill the session immediately. In these cases the session will be marked for kill; it will then be killed as soon as possible. Issuing the ALTER SYSTEM KILL SESSION command is the only safe way to kill an Oracle session. If the marked session persists for some time, you may consider killing the process at the operating system level, as explained below. Killing OS processes is dangerous and can lead to instance failures, so do this at your own peril.

It is possible to force the kill by adding the IMMEDIATE keyword:
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
This should prevent you from ever needing to use orakill.exe in Windows, or the kill command in UNIX/Linux.
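If you do have to fall back to an OS-level kill, you first need the operating-system process ID of the session's server process. A common way to find it is joining v$session to v$process (a sketch; the &sid_to_kill substitution variable is just a placeholder for the SID identified earlier):

```sql
-- Map a database session to its operating-system process ID (spid)
SELECT s.sid, s.serial#, p.spid, s.username, s.program
FROM   v$session s, v$process p
WHERE  p.addr = s.paddr
AND    s.sid  = &sid_to_kill;
```

The resulting spid can then be killed with kill -9 <spid> on UNIX/Linux, or with orakill <ORACLE_SID> <spid> on Windows, subject to the warnings about OS-level kills above.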

change sys password


There is no difference. But if your server and database are set up to allow you to connect as SYS without explicitly entering a password, you can then change the password for SYS (and any other user, such as SYSTEM) if you want to. The syntax is simply:
alter user [username] identified by [new_password];
So, to change the password for SYSTEM to be secret, just enter this:
alter user system identified by secret;
Usernames and passwords in Oracle are not case-sensitive, so this also works:
alter user SYSTEM identified by SECRET;
But in 11g passwords are case-sensitive, so be careful.
More info on the orapwd file:
REMOTE_LOGIN_PASSWORDFILE
Parameter type: String
Syntax: REMOTE_LOGIN_PASSWORDFILE = {NONE | SHARED | EXCLUSIVE}
Default value: NONE
REMOTE_LOGIN_PASSWORDFILE specifies whether Oracle checks for a password file and how many databases can use the password file.
NONE >> Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.

SHARED >> More than one database can use the password file. The only user recognized by the password file is SYS.
EXCLUSIVE >> The password file can be used by only one database, and the password file can contain names other than SYS.
You can query the view V$PWFILE_USERS (SQL> select * from V$PWFILE_USERS;) to get a listing of the users enabled to connect to the database using the ORAPWD file.

Using Password File Authentication


This section describes how to authenticate an administrative user using password file authentication.

Preparing to Use Password File Authentication


To enable authentication of an administrative user using password file authentication you must do the following:
1. Create an operating system account for the user.
2. If not already created, create the password file using the ORAPWD utility:
ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users
3. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE.
4. Connect to the database as user SYS (or as another user with the administrative privilege).
5. If the user does not already exist in the database, create the user.
6. Grant the SYSDBA or SYSOPER system privilege to the user:
GRANT SYSDBA TO scott;
This statement adds the user to the password file, thereby enabling connection AS SYSDBA.
As soon as Oracle sees "as sysdba" it checks to see if you're allowed to connect. If you are, you are connected as the SYS user. It doesn't make any difference what username and password is used. Give it a try with a username you know doesn't exist: conn fred/flintstone as sysdba.

Unless you create the password file, the default passwords of SYS and SYSTEM are change_on_install and manager respectively when you create a new database. Otherwise, the password for SYS is specified when you create the password file:
orapwd file=orapw<sid> password=<sys's password> entries=<max # users>
with REMOTE_LOGIN_PASSWORDFILE=exclusive in the init<SID>.ora.
Logging in as sysdba to the database: sqlplus / as sysdba
This will let you log in to Oracle as SYS using OS authentication (you must be logged in as the oracle user in Unix/Windows).
Find out whether a server is dedicated or shared
You can use lsnrctl status or lsnrctl services to see if you are using shared or dedicated servers (or just query SERVER from v$session). I ran the select:
SQL> set echo on
SQL> select server from v$session where sid=(select sid from v$mystat where rownum=1);
DEDICATED
SQL> spool off
That shows a single dedicated server session. To summarize across all sessions:
select server, count(*) from v$session group by server;

when you have conversion errors in toad


Change the nls_lang in the registry editor of the client to .UTF8 or whichever character set you want. Be careful about including the dot before the character set.
Find out whether datafiles are online
select file#, status from v$datafile;
Maximum cursors

To check how many cursors are in use, run this SQL statement:
SELECT v.value as numopencursors, s.machine, s.osuser, s.username
FROM v$sesstat v, v$session s
WHERE v.statistic# = 3 and v.sid = s.sid;
(Note that statistic numbers can vary between versions; joining to v$statname on the statistic name is more robust than hard-coding statistic# 3.)
Modify primary key columns
If foreign key constraints reference this primary key, find them with this query:
SELECT a.table_name parent, a.constraint_name, b.table_name child, b.constraint_name
FROM user_constraints a, user_constraints b
WHERE a.constraint_name = b.r_constraint_name
AND a.constraint_name = 'PK_RECEIVING_CENTER_MATRIX';
After spooling the above information, and after confirmation from the application team, use the command below to drop the primary key and cascade all constraints:
ALTER TABLE TAB_PK DROP PRIMARY KEY CASCADE;
The CASCADE option drops the foreign key constraints on any table referencing the PK being dropped. I dropped the primary key using
alter table surveyor.RECEIVING_CENTER_MATRIX drop primary key;
and observed that the index was not dropped, so in Toad I had to drop the index manually. Then I recreated the primary key with:
ALTER TABLE surveyor.RECEIVING_CENTER_MATRIX ADD CONSTRAINT pk_RECEIVING_CENTER_MATRIX1 PRIMARY KEY (PDA_TYPE_ID, BUSINESS_UNIT, ISO_COUNTRY_CODE, RECEIVING_CENTER_ID) USING INDEX TABLESPACE CLM_DATA;

Find table and index storage parameters
select owner, table_name, tablespace_name, pct_free, pct_used, ini_trans, max_trans, initial_extent, next_extent, min_extents, max_extents, pct_increase, freelists, freelist_groups
from dba_tables
where owner = 'SURVEYOR' and table_name = 'RECEIVING_CENTER_MATRIX';

select owner, index_name, index_type, table_owner, table_name, table_type, tablespace_name, ini_trans, max_trans, initial_extent, next_extent, min_extents, max_extents, pct_increase, pct_threshold, include_column, freelists, freelist_groups, pct_free
from dba_indexes
where owner = 'SURVEYOR' and index_name = 'PK_RECEIVING_CENTER_MATRIX';

Change the initial extent of a table, or any storage parameters:
1) Pre-Oracle 8i: drop and recreate the table and then reload (probably best done with export/import)
2) 8i and beyond: use the new alter table ... move tablespace ... storage ...; feature.

There is a new clause under the alter table command called the MOVE TABLE clause. The move_table_clause lets you relocate the data of a nonpartitioned table into a new segment, optionally in a different tablespace, and optionally modify any of its storage attributes. This can be done without taking down the database (however, I recommend that you take an export for safety).

The syntax is:
alter table <TABLE_NAME> move tablespace <NEW_TBSP> storage( <STORAGE_CLAUSE> );
One restriction: you must rebuild all associated indexes after moving a table, because the move invalidates them. You rebuild indexes with the following command:
alter index <INDEX_NAME> rebuild tablespace <NEW_TBSP> storage( <STORAGE_CLAUSE> );
Get table size:
select a.owner, a.table_name,
       a.initial_extent, a.next_extent,
       a.min_extents, a.max_extents,
       a.tablespace_name, a.num_rows,
       b.extents num_extents
from   dba_tables a, dba_segments b
where  a.owner = 'SCOTT'
and    a.owner = b.owner
and    a.table_name = b.segment_name
and    a.table_name = 'EMP';

alter table scott.emp
  move tablespace DATA
  storage (initial 200K next 200K minextents 1 maxextents 299);

Note: there is one catch: the table cannot contain a column of the LONG datatype.
3) Random advice: Don't go crazy about compressing tables into a single extent (or just a few extents).

According to presentations at IOUG2000 from Oracle's Performance Group, TUSC, and others (Mike Ault, Dave Ensor), the number of outstanding extents that a table has DOES NOT IMPACT performance until a high number is reached; they have noticed performance slowdowns after 1000 extents, even with locally managed tablespaces. I have tested this in Oracle 8 and have confirmed it. Follow the traditional rules; one of mine is to size growth rates for approximately one extent per month. When allocating storage for an object (table or index), I like to set the initial extent size equal to the next extent size, plus I place objects of the same extent sizes in the same tablespace. Why? This reduces the fragmentation of the extents and makes maximum efficient use of the tablespace's space (even with tables that grow and shrink dynamically), while reducing the possibility of creating dead-space fragments. And while extents do grow, in theory I shouldn't have to touch them for a long period of time. So, you should know your database and data. Thus, plan in advance: do calculations of the growth and set storage sizes appropriately. While the new tools allow you to do storage management on the fly, you do not want to be babysitting extent management all the time; that takes away from the really challenging DBA tasks! There are two good papers to read about extent sizing, fragmentation, and storage management:
1) "How to Stop Defragmenting and Start Living: The Definitive Word on Fragmentation" from Oracle, presented at OracleWorld 1999
2) I cannot remember the name, but there is a great paper on www.orapub.com

Handle database recovery when a datafile gets corrupted, no backup files are available, and it is a non-system datafile:
1. Shut down the database and copy all the database files to some safe place!
2. startup mount;
3. alter database datafile 'D:\oracle\oradata\ABCDev\OEM_Repository.dbf' offline drop;
4. alter database open;
5. shutdown immediate;
6. Copy all files; here is the chance to make your first consistent cold backup.
7. startup
8. Delete D:\oracle\oradata\ABCDev\OEM_Repository.dbf with Explorer.
9. Recreate your repository.
Find out export size before exporting
select owner, sum(bytes)/1024/1024 from dba_segments where owner = 'SURVEYOR' group by owner;
select owner, segment_type, sum(bytes)/1024/1024 from dba_segments where owner = 'SURVEYOR' group by owner, segment_type;
select owner, segment_type, sum(bytes)/1024/1024 from dba_segments where owner = 'DSADM' group by owner, segment_type;
select owner, sum(bytes) from dba_segments group by owner order by owner;
In 10g you can use expdp to find out the expected size of the export dump without the above SQL queries.
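For instance, Data Pump can report the estimated dump size without writing any data. A sketch, run from the OS shell (the schema name is the one from the queries above; the credentials are placeholders):

```text
expdp system/password SCHEMAS=SURVEYOR ESTIMATE_ONLY=Y ESTIMATE=BLOCKS
```

ESTIMATE=BLOCKS sizes the estimate from allocated blocks; ESTIMATE=STATISTICS uses optimizer statistics instead, which can be more accurate if the statistics are current.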

compiling packages in sys schema


This script will give you a list of invalid objects. You can recompile invalid packages with the utlrp.sql SQL script that is supplied with Oracle. This script is maintained in the $ORACLE_HOME/rdbms/admin directory on your database server. You can also manually recompile packages if you prefer. Here is an example of calling the utlrp.sql script to recompile invalid database objects:

@?/rdbms/admin/utlrp.sql
31) Find deprecated parameters in 10g:
select name from v$parameter where isdeprecated = 'TRUE';
ORA-00376: file 40 cannot be read at this time
If the tablespace datafiles exist and are online (check three times!), check the following. Find the status of the tablespaces:
SQL> select tablespace_name, status from dba_tablespaces;
Use the following query to find out the status of the datafiles:
SQL> select file#, name, status, enabled from v$datafile;
If the status says RECOVER, then media recovery must be done before bringing the datafile online; otherwise, ORA-1113 will be encountered. Make sure that the archived log destination has all required archived files, because when you execute the recover command it will prompt you for the exact archived log files that are required. Oracle will automatically suggest the required file names and you just have to press Enter if you agree. To recover the datafile, try using the commands:
SQL> recover datafile '<full_path_of_datafile>';
SQL> alter database datafile '<full_path_datafile_name>' online;
ORA-00376: file %s cannot be read at this time
There was a problem accessing the datafile. Either the datafile or the tablespace it belongs to is offline, or the datafile is gone. e.g.:
SQL> select name, status from v$datafile where status not in ('ONLINE', 'SYSTEM');
alter database datafile 'C:\ORACLE\ORADATA\OFFLINE.DBF' online;
alter tablespace offline_ts online;

select * from v$recover_file;
Either set the tablespace or datafile back online or, if the datafile is gone, restore the datafile.
Syntax for doing a secure copy to a different server
scp filename oracleuser@servername:path
Difference between Oracle 10g Database Control and Grid Control
Database Control is the HTTP management environment and comes installed with the 10g database. It can be used to manage one database (one target) at a time (standalone). To monitor more than one database, you must create a new console on a different port for each database. 10g Grid Control is the Enterprise version, in that you can monitor different targets from different operating systems at the same time. These include Application Servers, listeners, operating systems, and non-Oracle database systems using plugins (from 10gR2). So, for someone who was using OEM 9i, there was a connection into the Standalone Console (Java) and the Oracle Management Server (Java and HTML). In 10g, they are replaced with Database Control (Java and HTML versions) and Grid Control respectively. To administer multiple databases, 10g Grid Control is better. OEM Database Control comes with Oracle Enterprise Edition, whereas 10g Grid Control is separate software. Among the two, which is better in terms of functionality? You will need a license for Grid Control; it is not free, whereas DB Console is. You go to OTN or edelivery to download it. That's correct: you need an additional license, separate from that of the Database Server, for Grid Control.
How to get a formatted explain plan?
select plan_table_output from table(dbms_xplan.display('PLAN_TABLE', null, 'ALL'));
Find out the Oracle version on the server using unix commands: sqlplus -v. But this command may not tell you the correct version when there are multiple Oracle homes; then you have to set more Oracle environment variables. So:
export ORACLE_HOME=/opt/oracle/product/10.2.0

And then:
export LD_LIBRARY_PATH=/opt/oracle/product/10.2.0/lib:$LD_LIBRARY_PATH
Then sqlplus -v gives the right version.
All FAQs about SQL*Plus errors can be found here:
http://www.oracle.com/technology/support/tech/sql_plus/htdocs/faq101.html#A4828
Find out long-running job processes and their progress:
SELECT sid, serial#, context, sofar, totalwork, round(sofar/totalwork*100, 2) "%_COMPLETE"
FROM v$session_longops
WHERE opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork;

invalid objects

1. Run @?/rdbms/admin/utlrp.sql
2. Verify that all expected packages and classes are valid:
SQL> SELECT count(*) FROM dba_objects WHERE status='INVALID';

SQL> SELECT distinct object_name FROM dba_objects WHERE status='INVALID';
When an object in Oracle becomes invalid, all its dependent objects become invalid as well. This is usually not a problem, because when you try to use an object that is marked invalid, Oracle will try to compile it first; if the compilation succeeds, it will work. If the compilation fails, the application running the SQL with the invalid object will get the relevant error message. Unless you see anything in the view DBA_ERRORS, which contains the compilation errors, you don't need to worry.
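If you prefer to recompile a single object by hand rather than run utlrp.sql, the usual statements look like this (the owner and object names here are made up for illustration):

```sql
-- Recompile a package spec and its body separately
ALTER PACKAGE scott.my_pkg COMPILE;
ALTER PACKAGE scott.my_pkg COMPILE BODY;

-- Views and procedures have the same form
ALTER VIEW scott.my_view COMPILE;

-- Any remaining compilation errors show up in DBA_ERRORS
SELECT name, type, line, text FROM dba_errors WHERE owner = 'SCOTT';
```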

Configure iSQL*PLUS
STEP 1: Go to the DOS prompt (Start > Run > cmd). At the command prompt type:
isqlplusctl start
After issuing the above command, the following will be displayed:
Starting iSQL*Plus ...
iSQL*Plus started.
STEP 2: Open any compatible internet browser and type the address for iSQL*Plus:
http://PC-NAME:5560/isqlplus/
Syntax: http://host_name:port_number/isqlplus
To check the computer name: right-click on My Computer, then Properties, then click Computer Name.
Port number: ORACLE_HOME is where the Oracle software is installed; see ORACLE_HOME\install\portlist.ini.
STEP 3: After opening the iSQL*Plus page in the browser, enter the username and password at the login screen. Now iSQL*Plus is started.

Oracle 10g Database New Features


Simplicity vs Flexibility: automatic statistics gathering, advisories, automatic tuning
Less than 30 basic (init.ora) parameters; basic, advanced, and hidden parameters
Easier operations: alter tablespace rename, flashback queries, undrop table
Manageability

SGA_TARGET
Sets the total size for all SGA components: Buffer Cache, Shared Pool, Large Pool, Java Pool
Dynamically and automatically adjustable; automatic changes persist in the SPFILE
PGA_AGGREGATE_TARGET
Available since 9i
Sets the total size target for all server processes: sort_area_size, sort_area_retained_size, hash_area_size, bitmap_merge_area_size
Contents automatically managed
Basic Parameters
COMPATIBLE, CONTROL_FILES, DB_BLOCK_SIZE, DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST, DB_DOMAIN, DB_NAME, DB_RECOVERY_FILE_DEST, DB_RECOVERY_FILE_DEST_SIZE, INSTANCE_NUMBER, JOB_QUEUE_PROCESSES, LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_DEST_STATE_n, NLS_LANGUAGE, NLS_TERRITORY, OPEN_CURSORS, PROCESSES, REMOTE_LISTENER, REMOTE_LOGIN_PASSWORDFILE, ROLLBACK_SEGMENTS, SESSIONS, SHARED_SERVERS, STAR_TRANSFORMATION_ENABLED, UNDO_MANAGEMENT, UNDO_TABLESPACE
Rename Tablespace
Useful in Transportable Tablespace scenarios:
ALTER TABLESPACE prod RENAME TO arc1;
Can't rename SYSTEM or SYSAUX

The tablespace and all its datafiles must be online; READ ONLY tablespaces can also be renamed
Bigfile Tablespaces

Support for sizes up to 8 exabytes (8,000,000 terabytes)!
Max 65535 files in a database
SYSTEM & SYSAUX can't be bigfile tablespaces
Cross-platform Transportable Tablespaces:
RMAN> CONVERT TABLESPACE sales_1, sales_2
2> TO PLATFORM 'Microsoft Windows NT'
3> FORMAT '/temp/%U';
...
Transporting Tablespaces Between Databases
input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Data Pump

A server-managed data transportation tool
Direct load/extract capabilities; very high performance, efficient with large data sets
Replacement for exp/imp (the old exp/imp remain supported)
Commands: expdp/impdp
Can use files or direct network transfer
Dynamic configuration, resumable operations; the client can detach and reconnect
Can be parallelized using PARALLEL; the parallelization level can be changed on the fly for long-running jobs
Even loads to/from external text files
Monitored through DBA_DATAPUMP_JOBS
Fine-Grained Object Selection:
exclude=function
exclude=procedure
exclude=package:"like 'PAYROLL%'"
include=table
content=metadata_only | data_only | both
query="modify_date > sysdate-1"
DDL transformations, DDL extract
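A sketch of how those options combine on one expdp command line (the schema, directory object, dump file name, and filter values are made up for illustration):

```text
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott_%U.dmp
      SCHEMAS=scott PARALLEL=4
      EXCLUDE=PROCEDURE
      QUERY=emp:"WHERE sal > 1000"
      CONTENT=ALL
```

%U in the dump file name generates one numbered file per parallel worker, and the QUERY filter applies only to the named table.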

FlashBack Database
The Flash Recovery Area must be configured; flashback logs, consisting of old database block images, are stored there. Fast rollback of the database, no redo logs required.
FlashBack Database configuration parameters:
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_FLASHBACK_RETENTION_TARGET
Commands:
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE FLASHBACK OFF;
ALTER TABLESPACE test1 FLASHBACK OFF;
ALTER TABLESPACE test1 FLASHBACK ON;
Flashback Row History:
SELECT versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN, versions_operation OPERATION, empname, salary
FROM hr.employees_demo
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE empno = 111;

XID              START_SCN END_SCN OPERATION EMPNAME SALARY
---------------- --------- ------- --------- ------- ------
0004000700000058    113855         I         Tom        927
000200030000002D    113564         D         Mike       555
000200030000002E    112670  113564 I         Mike       555
3 rows selected
Useful for auditing
Flashback Transaction History:
select xid, start_scn, commit_scn, operation, undo_sql, table_name
from flashback_transaction_query
where xid = '000200030000002D';

XID              START_SCN COMMIT_SCN OPERATION UNDO_SQL
---------------- --------- ---------- --------- ------------------------
000200030000002D    112670     113565 D         insert into "SCOTT"."EMP" ("EMPNO","EMPNAME","SALARY") values ('111','Mike','655')
000200030000002D    112670     113565 I         delete from "SCOTT"."DEPT"

where "DEPTNO" = '20' and "DEPTNAME" = 'Finance'
000200030000002D    112670     113565 D         update "SCOTT"."EMP" set "SALARY" = '555' where "EMPNO" = '111' and "EMPNAME" = 'Mike' and "SALARY" = '655'
3 rows selected
Table Recovery using Flashback
DROP TABLE x;
The table is renamed internally, not dropped; indexes and other structures remain. The table is purged when the database runs out of free space or quota.
SELECT * FROM RECYCLEBIN;
System-wide recyclebin: DBA_RECYCLEBIN, or the show recyclebin command in SQL*Plus
FLASHBACK TABLE "RB$$3560$TABLE$1" TO BEFORE DROP RENAME TO scott.emp;
PURGE RECYCLEBIN;
DROP TABLE x PURGE;
Performance Tuning
RBO is dead, long live the CBO! Even the data dictionary (SYS tables) now uses the CBO. However, the RBO has gone nowhere; it is still available.
The optimizer is able to use run-time statistics:
exec dbms_stats.gather_system_stats()
OPTIMIZER_DYNAMIC_SAMPLING (default: 2)
Multiple Advisors: SQL Access & Tuning Advisor, Memory Advisors (SGA, Shared Pool, etc.), Segment Advisor (fragmentation, etc.), Undo Advisor
Performance Troubleshooting
Automatic Workload Repository: runtime execution statistics are gathered in memory, and the MMON background process flushes stats to disk.
V$SQL_BIND_CAPTURE: samples bind variables for all sessions; faster than sql_trace (10046 trace on level 4), but doesn't capture all variable types and doesn't capture occurrences of bindings.

_cursor_bind_capture_interval defaults to 900 seconds; good for getting samples of database operations.
SQL*Plus Changes

Improvements:
SPOOL CREATE | REPLACE | APPEND (the same options also work for the SAVE command)
SHOW RECYCLEBIN
SQLPROMPT runtime variable substitution:
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER >"
glogin.sql and login.sql scripts are executed also on CONNECT
Other

dbms_scheduler calendar expressions: Yearly, Monthly, Weekly, Daily, Hourly, Minutely, Secondly
alter system flush buffer_cache;
drop database; (the database must be closed, mounted exclusively, and restricted)
default user tablespace: specifies the default tablespace for new users, similar to the default temporary tablespace in 9i
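A minimal sketch of a scheduler job using such a calendar expression (the job name and the PL/SQL action, including the app_log table, are made up for illustration):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_LOGS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM app_log WHERE created < SYSDATE - 30; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2; BYMINUTE=0',  -- calendar expression
    enabled         => TRUE);
END;
/
```

The repeat_interval string is the calendar syntax referred to above; FREQ takes the YEARLY/MONTHLY/WEEKLY/DAILY/HOURLY/MINUTELY/SECONDLY frequencies.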
* NYOUG General Meeting March 2004

Sunday, December 5, 2010


Database Normalization explained.
Understanding and mastering database normalization techniques is essential in order to achieve a high performance database design for your system. If your design doesn't conform to (at least) the Third Normal Form (3NF), chances are high that you will find it hard to achieve the performance needed for a successful application.

Furthermore, you will find that writing good DML statements (SELECT, UPDATE, INSERT or DELETE) is difficult, and sometimes actually impossible, without using a lot of procedural coding (PL/SQL in Oracle, VB/C# in Microsoft products). Many "experts" will tell you that if you do database normalization up to (and including) the Third Normal Form, you're well off. The Database Normalization eBook shows you that this is a far too easy approach, and it is richly documented with graphical Entity Relationship and Server Diagram examples. For more detail, click here!

Thursday, December 2, 2010


Setting Up iSQL*Plus for SYSDBA Access
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\orasoft>d:
D:\>set oracle_home=d:\oracle\product\10.2.0\db_1
D:\>set java_home=%oracle_home%\jdk
D:\>set oracle_home
oracle_home=d:\oracle\product\10.2.0\db_1
D:\>set java_home
java_home=d:\oracle\product\10.2.0\db_1\jdk
D:\>cd %oracle_home%\oc4j\j2ee\isqlplus\application-deployments\isqlplus
D:\oracle\product\10.2.0\db_1\oc4j\j2ee\isqlplus\application-deployments\isqlplus>%java_home%\bin\java -Djava.security.properties=%oracle_home%\oc4j\j2ee\home\config\jazn.security.props -jar %oracle_home%\oc4j\j2ee\home\jazn.jar -user "iSQL*Plus DBA/admin" -password welcome -shell
JAZN:> adduser "iSQL*Plus DBA" orasoft orasoft
JAZN:> grantrole webDba "iSQL*Plus DBA" orasoft
JAZN:> exit
D:\oracle\product\10.2.0\db_1\oc4j\j2ee\isqlplus\application-deployments\isqlplus>exit

Now type the following URL in your browser:
http://pc-name:port_number/isqlplus/dba
To get your host name, type the following query at the SQL prompt:

select host_name from v$instance;
To get the port number, check in file .

Managing Database Structure

CHAPTER # 5

Managing Database Storage Structures


LOGICAL AND PHYSICAL STRUCTURE OF DATABASE

STORAGE STRUCTURES

Page # 5-3

A database is divided into logical storage units called tablespaces. Each tablespace holds many logical Oracle data blocks. The DB_BLOCK_SIZE parameter specifies how large a logical block is. A logical block can range from 2 KB to 32 KB in size; the default size is 8 KB. A specific number of contiguous logical blocks form an extent. A set of extents that are allocated for a certain logical structure forms one segment. An Oracle data block is the smallest unit of logical I/O.
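These values can be checked directly in the data dictionary. A sketch (the owner and segment names are the standard SCOTT demo objects; the numbers returned depend on your database):

```sql
-- Logical block size for the database
SELECT value FROM v$parameter WHERE name = 'db_block_size';

-- Extents and blocks actually allocated to one segment
SELECT segment_name, extents, blocks, bytes
FROM   dba_segments
WHERE  owner = 'SCOTT' AND segment_name = 'EMP';
```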

HOW DATA IS STORED

Page # 5-4

When a table is created, a segment is created to hold its data. A tablespace contains a collection of segments. Logically, a table contains rows of column values. A row is ultimately stored in a database block in the form of a row piece. It is called a row piece because under some circumstances the entire row may not be stored in one place. This happens when an inserted row is too large to fit into a single block, or when an update causes an existing row to outgrow its current space.
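Rows split into multiple pieces this way (chained or migrated rows) can be listed with the chained-rows mechanism. A sketch, using the standard SCOTT demo table (the CHAINED_ROWS target table is created by the utlchain.sql script shipped with Oracle):

```sql
-- Create the target table for ANALYZE ... LIST CHAINED ROWS
@?/rdbms/admin/utlchain.sql

-- Populate it with the rowids of rows stored in more than one piece
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

-- Inspect the results
SELECT owner_name, table_name, head_rowid FROM chained_rows;
```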

ANATOMY OF A DATABLOCK (CONTENT OF DB)

Page # 5-5

An Oracle data block contains the following:
1. Block Header
2. Row Data
3. Free Space

BLOCK HEADER: The block header contains the segment type (such as table, table partition, index, index partition, cluster, etc.), the data block address, the table directory, the row directory, and transaction slots of 23 bytes each, which are used when modifications are made to rows in the block. The block header grows downward from the top. ROW DATA: This is the actual data for the rows in the block. Row data space grows upward from the bottom. FREE SPACE: Free space is in the middle of the block. This enables the (1) header and the (2) row data space to grow when necessary. Row data takes up free space as new rows are inserted or columns of existing rows are updated with larger values. Examples of events that cause header growth are when the row directory needs more row entries or more transaction slots are required. Initially, the free space in a block is contiguous. However, deletions and updates may fragment the free space in the block.

TABLESPACES AND DATA FILES


Oracle stores data logically in tablespaces and physically in data files.

Page # 5-6

TABLESPACES:

- Can belong to only one database at a time
- Consist of one or more data files
- Are further divided into logical units of storage

DATA FILES:
- Can belong to only one tablespace and one database
- Are the repository (where information is stored) for schema object data

DATABASE, TABLESPACES AND DATAFILES Are closely related, but they have important differences.

An ORACLE DATABASE consists of one or more logical storage units called tablespaces, which collectively store all of the database's data. Each TABLESPACE in an Oracle database consists of one or more files called data files, which are physical structures that conform to the operating system in which Oracle is running. A database's data is collectively stored in the DATA FILES that constitute each tablespace of the database.

For example, the simplest Oracle database would have two tablespaces (the required SYSTEM and SYSAUX tablespaces), each with one data file. Another database might have three tablespaces, each consisting of two data files (for a total of six data files). A single database can potentially have as many as 65534 datafiles.
TYPES OF TABLESPACES
There are two types of tablespaces:
1. System tablespaces
2. Non-system tablespaces
SYSTEM TABLESPACES:

Created with the database

- Required in all databases
- Contains the data dictionary, including stored program units
- Contains the SYSTEM undo segment
- Should not contain user data, although it is allowed

NON-SYSTEM TABLESPACES:
- Enable more flexibility in database administration
- Separate UNDO, TEMPORARY, APPLICATION DATA, and APPLICATION INDEX segments
- Separate data by backup requirements
- Separate dynamic and static data
- Control the amount of space allocated to users' objects

SPACE MANGEMENT IN TABLESPACES


Tablespaces allocate space in extents. A tablespace can be created to use one of the following two different methods of keeping track of free and used space:

1. Locally managed tablespaces
2. Dictionary managed tablespaces

LOCALLY MANAGED TABLESPACES:

The extents are managed within the tablespace via bitmaps. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. Locally managed is the default beginning with Oracle9i. Locally managed tablespaces:
- Reduce contention on data dictionary tables
- Generate no undo when space allocation or deallocation occurs
- Require no coalescing

The local option of the EXTENT MANAGEMENT clause specifies that a tablespace is to be locally managed. By default a tablespace is locally managed.

DICTIONARY MANAGED TABLESPACES:

The extents are managed by the data dictionary. The Oracle server updates the data dictionary whenever an extent is allocated or deallocated. This is kept for backward compatibility; it is recommended that we use locally managed tablespaces.

VIEW STORAGE STRUCTURE IN OEM

Page # 5-9

Logical data structures are stored in the physical files of the database. We can easily view the logical structure of our database through OEM. Detailed information about each structure can be obtained by clicking the links in the Storage Region of the Administration Page.

CREATE TABLESPACE ON OEM SESSION

Page # 5-10

Go to ORACLE ENTERPRISE MANAGER. Click on the Administration page. Click Tablespaces on the STORAGE tab. Click on the Create button. Enter the name of the tablespace, e.g. ABC.
Extent Management: LOCALLY (auto fixed)
Type: PERMANENT (auto fixed)
Status: READ WRITE (auto fixed)

Go to the Datafile section. Click on Add Datafile. Enter the name of the datafile, e.g. abc01.dbf. Enter the size of the datafile, e.g. 10 MB (by default a size of 100 MB is given). Go to the Storage tab. Check AUTOMATICALLY EXTEND and give the size of the increment in KB, e.g. 100 KB. For maximum size, check Unlimited, which means the file can grow to an unlimited size.

Now table space has been created in the location of


E:\ORACLE\PRODUCT\10.1.0\ORADATA\ORCL2\

CREATE TABLESPACE ON SQL*PLUS SESSION

FOR DATABASE 10g RELEASE 1 & 2 (same path)


CREATE TABLESPACE AAAA DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF' SIZE 50M;

Tablespace created.

To see the information for the tablespace we created above, perform the query:

SELECT TABLESPACE_NAME, STATUS, LOGGING, CONTENTS, EXTENT_MANAGEMENT, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT
FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME='AAAA';

TS_NAME STATUS LOGGING CONTENTS EXTENT_MAN ALLOCATIO SEGMEN
------- ------ ------- -------- ---------- --------- ------
AAAA    ONLINE LOGGING PERMAN   LOCAL      SYSTEM    AUTO

In the above tablespace we gave the tablespace name and the path of the datafile with its size. We see the parameters set by default with the tablespace:

LOGGING, CONTENTS, EXTENT_MANAGEMENT, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT

LOGGING: Logging (default) / No Logging
CONTENTS: Permanent (default) / Temporary / Undo
EXTENT_MANAGEMENT: Locally (default) / Dictionary
ALLOCATION_TYPE: System (default) / Uniform size
SEGMENT_SPACE_MANAGEMENT: Automatic (default) / Manual

There are two modes of logging: 1) LOGGING 2) NOLOGGING

Logging

LOGGING: specifies that, by default, all tables, indexes, and partitions within the tablespace have all changes written to the online redo log files. LOGGING is the default. NOLOGGING means that changes are not written to the online redo log files (for operations that support it).
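For example, the logging attribute of the AAAA tablespace created earlier can be switched after creation. A sketch; note that NOLOGGING only affects operations that support it, such as direct-path loads:

```sql
-- Switch the AAAA tablespace to NOLOGGING mode
ALTER TABLESPACE AAAA NOLOGGING;

-- And back to the default LOGGING mode
ALTER TABLESPACE AAAA LOGGING;
```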

There Are Three Types of Contents 1) Permanent 2) Temporary 3) Undo

Contents

There are two types of extent management: 1) Local 2) Dictionary

Extent_management

DICTIONARY MANAGED: Specifies that the tablespace is managed using dictionary tables. This is obsolete now. LOCALLY MANAGED: Specifies that the tablespace is managed via bitmaps. NOTE: If you specify LOCAL, you cannot specify the DEFAULT storage_clause, MINIMUM EXTENT, or TEMPORARY.

STORAGE FOR LOCALLY MANAGED TABLESPACES

Page # 5-12

The extents within a locally managed tablespace can be allocated in one of two ways: 1) System (also called autoallocate or automatic) 2) Uniform

1) SYSTEM / AUTOALLOCATE / AUTOMATIC (Allocation_Type)

By default the allocation type is system (autoallocate, automatic): if we do not specify a UNIFORM clause with EXTENT MANAGEMENT LOCAL, the sizes of extents within the tablespace are system managed.

2) UNIFORM: For uniform allocation we must give the EXTENT MANAGEMENT clause and specify UNIFORM SIZE, which specifies that the tablespace is managed with uniform extent sizes. The default uniform extent size is 1 MB. There are two types of segment space management for locally managed tablespaces: 1) Automatic 2) Manual

1) AUTOMATIC (Segment_Space_Management): the Oracle database uses bitmaps to manage the free space within segments. 2) MANUAL: specifies that we want to use free lists for managing free space within segments.

ADVANTAGES OF LOCALLY MANAGED TABLESPACES:


Locally managed tablespaces have the following advantages over dictionary-managed tablespaces: 1. Local management avoids recursive space management operations. 2. Because locally managed tablespaces do not record free space in the data dictionary tables, they reduce contention on these tables. 3. The sizes of locally managed extents can be determined automatically by the system. 4. Changes to the extent bitmaps do not generate undo information. NOTE: If we are managing a database that has dictionary-managed tablespaces and want to convert them to locally managed, use DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL.

Create another tablespace with non-default parameters:

CREATE TABLESPACE BBBB
DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\BBBB01.DBF' SIZE 50M
NOLOGGING
OFFLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100K
SEGMENT SPACE MANAGEMENT MANUAL;
Tablespace created.

Now run the earlier query to see the parameters of the tablespaces AAAA and BBBB:

SQL> SELECT TABLESPACE_NAME, STATUS, LOGGING, CONTENTS, EXTENT_MANAGEMENT, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT FROM DBA_TABLESPACES WHERE TABLESPACE_NAME IN ('AAAA', 'BBBB');

TS_NAME  STATUS   LOGGING    CONTENTS   EXTENT_MAN  ALLOCATIO  SEGMEN
-------  -------  ---------  ---------  ----------  ---------  ------
AAAA     ONLINE   LOGGING    PERMANENT  LOCAL       SYSTEM     AUTO
BBBB     OFFLINE  NOLOGGING  PERMANENT  LOCAL       UNIFORM    MANUAL

TABLESPACES IN THE PRECONFIGURED DATABASE:


There are six tablespaces that are automatically created with the database:
1- SYSTEM
2- SYSAUX

Page # 5-14

3- TEMP
4- UNDOTBS1
5- USERS
6- EXAMPLE

SYSTEM
The SYSTEM tablespace is used by the Oracle server to manage the database. It contains the data dictionary and tables that hold administrative information about the database. These are all contained in the SYS schema and can be accessed by the SYS and SYSTEM users or other administrative users.

SYSAUX
This is an auxiliary tablespace to the SYSTEM tablespace. Some components and products that earlier used the SYSTEM tablespace now use the SYSAUX tablespace. In OEM we can see a pie chart of the contents of this tablespace.

TEMP
The temporary tablespace is used when we execute a SQL statement that requires sorting. The TEMP tablespace is defined as the default tablespace for sorting data. This means that if no temporary tablespace is specified when a user is created, the Oracle database assigns this tablespace to the user.

UNDOTBS1
This is the undo tablespace used by the database server to store undo information. This tablespace is created at database creation time.

USERS
This tablespace is used to store permanent user objects and data. In the preconfigured database, the USERS tablespace is the default tablespace for all objects created by non-SYSTEM users.

EXAMPLE
This tablespace contains the sample schemas that can be installed when we create the database.
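To check which tablespaces the database is currently using as its defaults, one option is to query DATABASE_PROPERTIES (property names as in 10g):

```sql
-- Show the default permanent and temporary tablespaces
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name IN ('DEFAULT_PERMANENT_TABLESPACE',
                         'DEFAULT_TEMP_TABLESPACE');
```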

Create TEMPORARY tablespace

CREATE TEMPORARY TABLESPACE KHAN
TEMPFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\KHAN01.DBF' SIZE 100M;
Tablespace created.

We can set the default temporary tablespace for all users:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE KHAN;

Database altered.

Restore the original default temporary tablespace:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
Database altered.

Create UNDO tablespace

CREATE UNDO TABLESPACE UNDOO1
DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOO01.DBF' SIZE 10M;
Tablespace created.

FOR DATABASE 9i RELEASE 2:

SQL > CREATE TABLESPACE ABCD DATAFILE 'D:\ORACLE\ORADATA\ORCL\ABCD01.DBF' SIZE 50M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

ALTER A TABLESPACE
We can alter any tablespace in several ways as the needs of your system change.

Page # 5-16

1. RESIZING A TABLESPACE
   a- By adding a new datafile
   b- By changing the size of a datafile
      b.1 Manually, using the ALTER DATABASE command
      b.2 Automatically, using AUTOEXTEND
          A- At the time of database creation
          B- At the time of tablespace creation
          C- After the tablespace creation
2. BY CHANGING THE STATUS OF A TABLESPACE
   A tablespace can be in one of three states:
   1- Read write
   2- Read only
   3- Offline

1- RESIZING A TABLESPACE:

a- By adding a new datafile:

SQL> ALTER TABLESPACE AAAA ADD DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA03.DBF' SIZE 5M; Tablespace altered.

b- By changing the size of a datafile

We can enlarge a datafile in two ways:
1- Manually, using ALTER DATABASE
2- Automatically, using AUTOEXTEND

1- Manually, using ALTER DATABASE:

SQL> ALTER DATABASE DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA02.DBF' RESIZE 10M;
Database altered.

We can also decrease the size of a datafile.
NOTE: If there are database objects stored above the specified size, the datafile can be decreased only down to the last block of the last object in the datafile.

2- Automatically, using AUTOEXTEND
We can use the AUTOEXTEND clause in three ways:
1. At the time of database creation
2. At the time of tablespace creation
3. After the tablespace creation

1- At the time of database creation:

When we create the database, use AUTOEXTEND ON with the datafile clause at that time.

2- At the time of tablespace creation.

CREATE TABLESPACE AAAA
DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF' SIZE 10M AUTOEXTEND ON;
Tablespace created.

3- After the tablespace creation:

ALTER DATABASE DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF' AUTOEXTEND ON;
Database altered.

Determining whether AUTOEXTEND is ENABLED or DISABLED: we use the DBA_DATA_FILES view.

SQL> DESC DBA_DATA_FILES
 Name                 Null?    Type
 -------------------- -------- --------------
 FILE_NAME                     VARCHAR2(513)
 FILE_ID                       NUMBER
 TABLESPACE_NAME               VARCHAR2(30)
 BYTES                         NUMBER
 BLOCKS                        NUMBER
 STATUS                        VARCHAR2(9)
 RELATIVE_FNO                  NUMBER
 AUTOEXTENSIBLE                VARCHAR2(3)
 MAXBYTES                      NUMBER
 MAXBLOCKS                     NUMBER
 INCREMENT_BY                  NUMBER
 USER_BYTES                    NUMBER
 USER_BLOCKS                   NUMBER
 ONLINE_STATUS                 VARCHAR2(7)

SQL> SELECT TABLESPACE_NAME, FILE_NAME, AUTOEXTENSIBLE FROM DBA_DATA_FILES;

TABLESPACE_NAME  FILE_NAME                                            AUT
---------------  ---------------------------------------------------  ---
USERS            E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF    YES
SYSAUX           E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF   YES
UNDOTBS1         E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF  YES
SYSTEM           E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF   YES
EXAMPLE          E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF  YES
ABC              E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ABC01.DBF      NO
ABCD             E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ABCD01.DBF     NO
ABC1             E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ABC101.DBF     YES
XYZ              E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\XYZ01.DBF      NO
BBBB             E:\ORACLE\PRODUCT\10.2.0\ORADATA\BBBB01.DBF          NO
UNDOO1           E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOO01.DBF    NO
AAAA             E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF     YES
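AUTOEXTEND can also be capped or turned off later. A sketch, reusing the AAAA01.DBF file from the listing above (the 500M cap is chosen only for illustration):

```sql
-- Cap automatic growth of the datafile at 500 MB
ALTER DATABASE DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF'
  AUTOEXTEND ON MAXSIZE 500M;

-- Or disable automatic extension entirely
ALTER DATABASE DATAFILE 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\AAAA01.DBF'
  AUTOEXTEND OFF;
```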

2- BY CHANGING THE STATUS OF A TABLESPACE

A tablespace can be in one of three states:
1- Read write
2- Read only
3- Offline

By default the status of a tablespace is read write. A given state may not be available, because availability depends on the type of tablespace. For example, if the tablespace is the default temporary tablespace, we cannot take it offline until a new default is available, and we cannot alter a temporary tablespace to permanent.

READ ONLY MODE: We can set a permanent tablespace to read-only mode by using the ALTER TABLESPACE command.

SQL> ALTER TABLESPACE AAAA READ ONLY;
Tablespace altered.

When the tablespace is in read-only mode, no further write operations can occur in it, except that existing transactions are allowed to complete (commit or roll back). The tablespace remains online while in the read-only state. Objects can be dropped from a read-only tablespace, because those commands affect only the data dictionary. We cannot take the SYSTEM or SYSAUX tablespace offline.

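To return the tablespace to normal use after a read-only period:

```sql
-- Put the AAAA tablespace back into read-write mode
ALTER TABLESPACE AAAA READ WRITE;
```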
OFFLINE MODE: We can take an online tablespace offline so that this portion of the database is not available for general use. The rest of the database remains open and available for users to access data. When we take a tablespace offline, we can use the following options:
a) NORMAL
b) TEMPORARY
c) IMMEDIATE
d) FOR RECOVER

a) OFFLINE NORMAL
A tablespace can be taken offline normally if no error conditions exist for any of the datafiles of the tablespace. The Oracle database ensures that all data is written to disk by taking a checkpoint on all the datafiles of the tablespace as it takes them offline. For a normal offline there is no need to write NORMAL.

SQL> ALTER TABLESPACE AAAA OFFLINE;
Tablespace altered.

b) OFFLINE TEMPORARY
A tablespace can be taken offline temporarily even if error conditions exist for one or more datafiles of the tablespace. The Oracle database takes the datafiles (those not already offline) offline, checkpointing them as it does so. If no files are offline but we use the TEMPORARY clause, media recovery is not required to bring the tablespace back online. However, if one or more files of the tablespace are offline because of write errors and we take the tablespace offline temporarily, the tablespace requires recovery before it can be brought back online.

ALTER TABLESPACE AAAA OFFLINE TEMPORARY; Tablespace altered.

c) OFFLINE IMMEDIATE
A tablespace can be taken offline immediately, without the Oracle database taking a checkpoint on any of the datafiles. When we specify IMMEDIATE, media recovery for the tablespace is required before the tablespace can be brought online. NOTE: We cannot take a tablespace offline immediately if the database is running in NOARCHIVELOG mode.

SQL> ALTER TABLESPACE AAAA OFFLINE IMMEDIATE;
Tablespace altered.

d) OFFLINE FOR RECOVER
The FOR RECOVER setting has been deprecated. The syntax is supported only for backward compatibility.

DROPPING TABLESPACE

Page # 5-21

Once a tablespace has been dropped, the objects and data in it are no longer available. Recovering them can be a time-consuming process. Oracle recommends taking a backup before and after dropping a tablespace.

DROP TABLESPACE BBBB;
Tablespace dropped.

Using the INCLUDING CONTENTS AND DATAFILES options:

DROP TABLESPACE BBBB INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.

INCLUDING CONTENTS: drops all segments in the tablespace.
AND DATAFILES: deletes the related operating system files.
CASCADE CONSTRAINTS: drops referential integrity constraints from tables outside the tablespace.

Some restrictions when we drop a tablespace:

We cannot drop the SYSTEM or SYSAUX tablespace. We cannot drop the default temporary tablespace until a new default is available. We cannot drop a tablespace that contains any active undo segments. When we drop a tablespace without the AND DATAFILES clause, the operating system files are not deleted at the same time; after dropping the tablespace we must delete the datafiles manually at the operating system level.
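Before dropping a tablespace, it can be useful to see what it still contains. A quick check against the BBBB tablespace from the earlier example:

```sql
-- List the segments still stored in the tablespace to be dropped
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  tablespace_name = 'BBBB';
```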
Page # 5-22

TO VIEW TABLESPACE INFORMATION ON SQL*PLUS

SQL> desc v$tablespace
 Name                          Null?    Type
 ----------------------------- -------- ------------
 TS#                                    NUMBER
 NAME                                   VARCHAR2(30)
 INCLUDED_IN_DATABASE_BACKUP            VARCHAR2(3)
 BIGFILE                                VARCHAR2(3)
 FLASHBACK_ON                           VARCHAR2(3)

SQL> select ts#, name from v$tablespace;

       TS# NAME
---------- ------------------------------
         0 SYSTEM
         1 UNDOTBS1
         2 SYSAUX
         4 USERS
         3 TEMP
         6 EXAMPLE
         7 ABC
         8 SCOTT1

8 rows selected.

SQL> desc dba_tablespaces
 Name                      Null?    Type
 ------------------------- -------- ------------
 TABLESPACE_NAME           NOT NULL VARCHAR2(30)
 BLOCK_SIZE                NOT NULL NUMBER
 INITIAL_EXTENT                     NUMBER
 NEXT_EXTENT                        NUMBER
 MIN_EXTENTS               NOT NULL NUMBER
 MAX_EXTENTS                        NUMBER
 PCT_INCREASE                       NUMBER
 MIN_EXTLEN                         NUMBER
 STATUS                             VARCHAR2(9)
 CONTENTS                           VARCHAR2(9)
 LOGGING                            VARCHAR2(9)
 FORCE_LOGGING                      VARCHAR2(3)
 EXTENT_MANAGEMENT                  VARCHAR2(10)
 ALLOCATION_TYPE                    VARCHAR2(9)
 PLUGGED_IN                         VARCHAR2(3)
 SEGMENT_SPACE_MANAGEMENT           VARCHAR2(6)
 DEF_TAB_COMPRESSION                VARCHAR2(8)
 RETENTION                          VARCHAR2(11)
 BIGFILE                            VARCHAR2(3)

To view this information for all tablespaces, run the query:

SELECT TABLESPACE_NAME, STATUS, LOGGING, CONTENTS, EXTENT_MANAGEMENT, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT FROM DBA_TABLESPACES;

TO VIEW DATAFILE INFORMATION ON SQL*PLUS

Page # 5-22

SQL> DESC V$DATAFILE
 Name                   Null?    Type
 ---------------------- -------- ------------
 FILE#                           NUMBER
 CREATION_CHANGE#                NUMBER
 CREATION_TIME                   DATE
 TS#                             NUMBER
 RFILE#                          NUMBER
 STATUS                          VARCHAR2(7)
 ENABLED                         VARCHAR2(10)
 CHECKPOINT_CHANGE#              NUMBER
 CHECKPOINT_TIME                 DATE
 UNRECOVERABLE_CHANGE#           NUMBER
 UNRECOVERABLE_TIME              DATE
 LAST_CHANGE#                    NUMBER
 LAST_TIME                       DATE
 OFFLINE_CHANGE#                 NUMBER
 ONLINE_CHANGE#                  NUMBER
 ONLINE_TIME                     DATE
 BYTES                           NUMBER
 BLOCKS                          NUMBER
 CREATE_BYTES                    NUMBER

CHAPTER # 6

Administering User Security Managing Users


1. Create new database users
2. Alter database users
3. Drop database users
4. Monitor information about existing users
5. Obtain user information
Page # 6-3

DATABASE USER ACCOUNTS:

To access the database, a user must specify a valid database user account and successfully authenticate as required by that account. The database administrator defines the names of the users who are allowed to access the database. Each user account has a security domain, which defines the settings that apply to the user. The security domain includes:

1) A unique username
2) Authentication mechanism
3) Tablespace quotas
4) Default tablespace
5) Temporary tablespace
6) Account locking
7) Resource limits
8) Direct privileges
9) Role privileges

AUTHENTICATION MECHANISM
A user accessing the database can be authenticated by one of the following:
A) Data dictionary (PASSWORD)
B) Operating system (EXTERNALLY)
C) Network (GLOBAL)
The means of authentication can be defined at the time of user creation, or later with the ALTER USER command. In this lesson we study data dictionary and operating system authentication.

TABLESPACE QUOTA

A tablespace quota controls the amount of physical storage allowed to the user in a tablespace of the database.

DEFAULT TABLESPACE
The default tablespace is the location where segments created by the user are stored when the user does not explicitly name a tablespace at segment creation time.

TEMPORARY TABLESPACE
The temporary tablespace is the location where extents are allocated by the Oracle server when the user performs an operation such as sorting data.

ACCOUNT LOCKING
A user account can be locked to prevent the user from logging on to the database. This can happen automatically, or the DBA can lock and unlock the account manually.

RESOURCE LIMITS
Limits can be placed on the use of resources such as CPU time, logical I/O, and the number of sessions a user can open. Resource limits are covered in the next part of this lesson.

DIRECT PRIVILEGES
Privileges are used to control the actions that a user can perform in a database.

ROLE PRIVILEGES
A user can be granted privileges indirectly through the use of roles.

DATABASE SCHEMA
A database schema is a named collection of objects, such as tables, views, clusters, procedures, and packages, that is associated with a particular user. When a database user is created, a corresponding schema with the same name is created for that user. A user can be associated with only one schema; therefore, user and schema are often used interchangeably.

When we create any user, we must consider the following points, called the checklist for creating users:

1- Identify the tablespaces where the user stores objects.
2- Decide the tablespace quota for each tablespace.
3- Assign a default tablespace and a temporary tablespace.
4- Grant privileges and assign roles to the user.

CREATING A USER IN SQL*Plus BY PASSWORD

Page # 6-6

CREATE USER ABC1
IDENTIFIED BY ABC1
DEFAULT TABLESPACE EXAMPLE
TEMPORARY TABLESPACE TEMP
QUOTA 15M ON EXAMPLE
QUOTA UNLIMITED ON USERS
PASSWORD EXPIRE;

QUOTA: the maximum space allowed to the user for objects in a given tablespace. A quota can be defined in integer bytes, kilobytes (KB), or megabytes (MB).
UNLIMITED: this keyword means the user can use as much space as is available in the tablespace.
NOTE: When this user creates objects, they are kept in the EXAMPLE tablespace by default. If the user wants to keep an object in another tablespace, the tablespace name must be given at the end of the object definition.

CHANGING USER QUOTAS ON TABLESPACES
We can modify user tablespace quotas in the following situations:
1- A table owned by the user grows continuously.
2- An application is enhanced and requires more tables or indexes.
3- Objects are reorganized and placed on different tablespaces.

Than we use ALTER USER command to solve this problem

SQL> ALTER USER ABC1
  2  QUOTA 10M ON USERS;
User altered.

OR

ALTER USER ABC1 QUOTA 0 ON USERS;

IMPORTANT NOTE: After a quota of 0 is assigned, no new extents can be allocated for the objects of the user in that tablespace.

CREATING A USER IN SQL*Plus BY EXTERNALLY


Step#1 SQL> CREATE USER RAHEEL IDENTIFIED EXTERNALLY; User created. Step#2 SQL> GRANT CONNECT, RESOURCE TO RAHEEL; Grant succeeded. Step#3

Page # 6-6

SQL> ALTER SYSTEM SET OS_AUTHENT_PREFIX='' SCOPE=SPFILE;
System altered.
Step#4
SQL> ALTER SYSTEM SET REMOTE_OS_AUTHENT=TRUE SCOPE=SPFILE;

System altered.

Step#5
Open the sqlnet.ora file in Notepad from the path C:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\sqlnet.ora and change the parameter:
#SQLNET.AUTHENTICATION_SERVICES= (NTS)    <- comment out the previous line with a hash sign
SQLNET.AUTHENTICATION_SERVICES= (NONE)    <- type this new line
Save and close the file.
Step#6
Now create a new operating system user: Start > Settings > Control Panel > User Accounts > Create a New Account. Enter the same user name as we created in SQL*Plus (RAHEEL), check Limited, and click Next. Select the account we created, click Create Password, enter the password, then click Create Password.
Step#7
Start > Settings > Control Panel > Administrative Tools > Computer Management. Click Local Users & Groups, double-click Users, right-click the user we created in the operating system (RAHEEL), and select Properties. On the Member Of tab, click Add, click Advanced, then click Find Now. Select ORA_DBA, then click OK three times.
Step#8
Restart all Oracle services from services.msc.
Step#9
Log off the current user.
Step#10
Log on as the user we created (RAHEEL).
Step#11

Open SQL*Plus and enter / in the password window. We are now connected to the ORCL database.
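To confirm that the account is externally authenticated, one check (in 10g the PASSWORD column of DBA_USERS shows EXTERNAL for such accounts; verify this on your release):

```sql
-- Check how the RAHEEL account authenticates
SELECT username, password, account_status
FROM   dba_users
WHERE  username = 'RAHEEL';
```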

UNLOCKING A USER ACCOUNT AND RESETTING THE PASSWORD THROUGH OEM AND SQL*PLUS Page # 6-10

ALTER USER SCOTT IDENTIFIED BY TIGER ACCOUNT UNLOCK; User Altered

DROPING A USER
A user can be dropped by using the following command:

DROP USER ABC1;

If we want to remove the user and its associated objects, then we use the CASCADE option:

DROP USER ABC1 CASCADE;
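A user that is currently connected cannot be dropped. A sketch for finding the user's sessions first, so they can be ended before the drop:

```sql
-- Find the active sessions of the user we want to drop
SELECT sid, serial#, status
FROM   v$session
WHERE  username = 'ABC1';
```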

TO GET INFORMATION ABOUT THE USERS OF ORACLE USE THIS QUERY


SQL> DESC DBA_USERS
 Name                        Null?    Type
 --------------------------- -------- --------------
 USERNAME                    NOT NULL VARCHAR2(30)
 USER_ID                     NOT NULL NUMBER
 PASSWORD                             VARCHAR2(30)
 ACCOUNT_STATUS              NOT NULL VARCHAR2(32)
 LOCK_DATE                            DATE
 EXPIRY_DATE                          DATE
 DEFAULT_TABLESPACE          NOT NULL VARCHAR2(30)
 TEMPORARY_TABLESPACE        NOT NULL VARCHAR2(30)
 CREATED                     NOT NULL DATE
 PROFILE                     NOT NULL VARCHAR2(30)
 INITIAL_RSRC_CONSUMER_GROUP          VARCHAR2(30)
 EXTERNAL_NAME                        VARCHAR2(4000)

TO GET INFORMATION ABOUT THE TABLESPACE QUOTAS OF USERS:

SQL> DESC DBA_TS_QUOTAS
 Name             Null?    Type
 ---------------- -------- ------------
 TABLESPACE_NAME  NOT NULL VARCHAR2(30)
 USERNAME         NOT NULL VARCHAR2(30)
 BYTES                     NUMBER
 MAX_BYTES                 NUMBER
 BLOCKS           NOT NULL NUMBER
 MAX_BLOCKS                NUMBER

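A sample query against DBA_TS_QUOTAS; in this view a MAX_BYTES value of -1 means an unlimited quota:

```sql
-- Show each user's space usage versus quota, per tablespace
SELECT tablespace_name, username,
       bytes/1024/1024 AS used_mb,
       DECODE(max_bytes, -1, 'UNLIMITED', max_bytes/1024/1024) AS quota_mb
FROM   dba_ts_quotas;
```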
Managing Privileges
WHAT IS A PRIVILEGE?
A privilege is a right to execute a particular type of SQL statement or to access another user's object. The Oracle database enables you to control what users can and cannot do within the database. These include the right to:
1- Connect to the database

2- Create a table
3- Select rows from another user's table
4- Execute another user's stored procedure

There are two types of Oracle user privileges:
1- System privileges
2- Object privileges

1- SYSTEM PRIVILEGES:
Each system privilege allows a user to perform a particular database action or class of database operations. For example, CREATE TABLESPACE is a system privilege. System privileges can be granted by the administrator or by someone who was explicitly given permission to administer the privilege. There are more than a hundred distinct system privileges. Many system privileges contain the ANY clause.

2- OBJECT PRIVILEGES:
Each object privilege allows a user to perform a particular action on a specific object, for example a table, view, sequence, procedure, function, or package. Without specific permission, users can access only their own objects. Object privileges can be granted by the owner of an object, by the administrator, or by someone who has been explicitly given permission to grant privileges on the object. A DBA's control of privileges includes:
1- Providing a user the right to perform a type of operation
2- Granting and revoking access to perform system functions
3- Granting privileges directly to users or to roles
4- Granting privileges to all users (PUBLIC)

SYSTEM PRIVILEGES: There are more than 100 system privileges. They can be classified as:
1. Privileges that enable system-wide operations, for example CREATE SESSION, CREATE TABLESPACE
2. Privileges that enable management of objects in a user's own schema, for example CREATE TABLE
3. Privileges that enable management of objects in any schema, for example CREATE ANY TABLE

ANY KEYWORD

Signifies that the user has the privilege in any schema.

GRANT COMMAND: adds a privilege to a user or a group of users. REVOKE COMMAND: deletes the privileges. GRANT and REVOKE are DDL commands. THE MOST COMMON SYSTEM PRIVILEGES ARE:

Index :

Create any index Alter any index Drop any index

Table :

Create table Create any table Alter any table

Drop any table Select any table Update any table Delete any table

Session :

Create session Alter session Restricted session

Tablespace:

Create tablespace Alter tablespace Drop tablespace Unlimited tablespaces

IMPORTANT NOTES:
- There is no CREATE INDEX privilege, because CREATE TABLE includes the CREATE INDEX and ANALYZE commands.
- The user must have a quota on the tablespace, or must have been granted UNLIMITED TABLESPACE.
- Privileges such as CREATE TABLE, CREATE PROCEDURE, and CREATE CLUSTER include dropping those objects.
- UNLIMITED TABLESPACE cannot be granted to a role.
- The DROP ANY TABLE privilege is necessary to truncate a table in another schema.
NOTE: Granting a privilege with the ANY clause means that the privilege crosses schema lines. For example, the CREATE TABLE privilege allows you to create a table, but only within your own schema, whereas the CREATE ANY TABLE privilege allows you to create tables in your own schema as well as in

other users' schemas. Selecting the With Admin Option check box enables you to administer the privilege and grant the system privilege to other users.

Carefully consider security requirements before granting system privileges; some system privileges are usually granted only to administrators:
1) Restricted Session
2) SYSDBA and SYSOPER
3) Drop Any Object
4) Create Any Directory
5) Grant Any Object Privilege
6) Alter Database and Alter System

Restricted Session:

This privilege allows you to log in even if the database has been opened in restricted mode.

SYSDBA and SYSOPER:

These privileges allow you to shut down, start up, and perform recovery and other administrative tasks in the database. SYSOPER allows a user to perform basic operational tasks, but without the ability to look at user data. It includes the following system privileges.

Differentiates SYSDBA And SYSOPER Privileges

SYSDBA
1- Includes the SYSOPER privileges, WITH ADMIN OPTION
2- CREATE DATABASE
3- ALTER TABLESPACE BEGIN/END BACKUP
4- RESTRICTED SESSION
5- RECOVER DATABASE UNTIL (meaning: point-in-time, incomplete recovery)
6- Deletion of the database

SYSOPER
1- STARTUP / SHUTDOWN
2- ALTER DATABASE OPEN | MOUNT
3- ALTER DATABASE BACKUP CONTROLFILE TO
4- ALTER DATABASE ARCHIVELOG
5- RESTRICTED SESSION
6- ALTER DATABASE RECOVER (complete recovery only)

Drop Any Object

The Drop Any privilege allows you to delete objects that other users own.

Create, Manage, Drop and Alter Tablespace:

These privileges allow for tablespace administration including creating, dropping and changing their attributes.

Create Any Directory:

The Oracle database allows developers to call external code (for example, a C library) from within PL/SQL. As a security measure, the operating system directory where the code resides must be linked to a virtual Oracle directory object. With the CREATE ANY DIRECTORY privilege, you can potentially call insecure code objects.

Grant Any Object Privilege:

This privilege allows you to grant object permissions on objects that you do not own.

Alter Database and Alter System:

These very powerful privileges allow you to modify the database and the oracle instance, such as renaming a data file or flushing the buffer cache.

HOW TO GRANT SYSTEM PRIVILEGES ON SQL*PLUS

SQL> GRANT CREATE SESSION TO EMI;

If the grantee should be able to grant the system privilege onward, we use WITH ADMIN OPTION:

SQL> GRANT CREATE SESSION TO EMI WITH ADMIN OPTION;

Note: The WITH ADMIN OPTION privilege is usually reserved for security administrators and is rarely granted to other users.

RESTRICTIONS ON SYSTEM PRIVILEGES


SQL> SHOW PARAMETER O7_DICTIONARY_ACCESSIBILITY

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
o7_dictionary_accessibility          boolean     FALSE

This parameter control restrictions on system privileges

If set to TRUE, access to objects in the SYS schema is allowed (this was Oracle 7 behavior): the SELECT ANY TABLE system privilege then allows the user to access the objects of any schema, including the SYS/dictionary schema. The default is now FALSE, meaning access to the SYS schema is not allowed: SELECT ANY TABLE allows the user to access the objects of any schema, but not the objects in the SYS/dictionary schema.
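O7_DICTIONARY_ACCESSIBILITY is a static parameter, so changing it requires SCOPE=SPFILE and an instance restart (this sketch assumes the instance uses an spfile):

```sql
ALTER SYSTEM SET O7_DICTIONARY_ACCESSIBILITY = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```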

HOW TO REVOKE SYSTEM PRIVILEGES ON SQL*PLUS


Use the REVOKE command to remove a system privilege from a user. Only privileges granted with a GRANT command can be revoked.

REVOKE CREATE TABLE FROM EMI;

IMPORTANT NOTE: The REVOKE command can only revoke privileges that have been granted directly with a GRANT command.

REVOKING SYSTEM PRIVILEGES WITH ADMIN OPTION


There are no cascading effects when a system privilege is revoked, regardless of whether it was given WITH ADMIN OPTION.
SCENARIO:
The DBA grants CREATE SESSION and CREATE TABLE WITH ADMIN OPTION to UMAR.
UMAR creates a table in his schema.
UMAR grants CREATE SESSION and CREATE TABLE to OWAIS.
OWAIS creates a table in his schema.

Now the DBA revokes the CREATE SESSION and CREATE TABLE privileges from UMAR.
RESULT: UMAR's table still exists, but UMAR can create no new tables. OWAIS's table still exists, and OWAIS can still create further new tables as he wants. There is no effect on OWAIS, because system privileges are independent.
NOTE: The DBA has the ability to revoke a system privilege from any user.

If the DBA grants CREATE SESSION WITH ADMIN OPTION to UMAR, and UMAR grants CREATE SESSION to OWAIS, the DBA still has the ability to revoke OWAIS's privilege directly.

STEP # 1: Connect to SYS and create two users with a default tablespace and quota.
SQL> CONN SYS/ORACLE AS SYSDBA
Connected.

SQL> CREATE USER RRR IDENTIFIED BY RRR DEFAULT TABLESPACE ABC QUOTA 5M ON ABC;
User created.

SQL> CREATE USER SSS IDENTIFIED BY SSS DEFAULT TABLESPACE ABC QUOTA 5M ON ABC;
User created.

STEP # 2: Now the DBA grants system privileges to user 1 (RRR).

SQL> GRANT CONNECT TO RRR WITH ADMIN OPTION;

Grant succeeded. SQL> GRANT CREATE TABLE TO RRR WITH ADMIN OPTION; Grant succeeded.

STEP # 3: Now connect as user 1 (RRR), create an object, and insert values.

SQL> CONN RRR/RRR Connected.

SQL> CREATE TABLE AAA ( 2 NAME VARCHAR2(20)); Table created.

SQL> INSERT INTO AAA 2 VALUES 3 ('RAHEEL'); 1 row created.

STEP # 4: User 1 (RRR) grants system privileges to user 2 (SSS).

SQL> GRANT CONNECT TO SSS WITH ADMIN OPTION ; Grant succeeded. SQL> GRANT CREATE TABLE TO SSS WITH ADMIN OPTION;

Grant succeeded.

STEP # 5: Connect as user 2 (SSS), create an object, and insert values.
SQL> CONN SSS/SSS
Connected.

SQL> CREATE TABLE BBB ( NAME VARCHAR2(20)); Table created.

SQL> INSERT INTO BBB VALUES ('SOVSTM'); 1 row created.

STEP # 6: Connect to SYS and revoke system privileges from user 1 (RRR).

SQL> CONN SYS/ORACLE AS SYSDBA; Connected.

SQL> REVOKE CREATE TABLE FROM RRR; Revoke succeeded.

STEP # 7: Connect as user 1 (RRR) and try to create an object; the user now receives an "insufficient privileges" error.

SQL> CONN RRR/RRR Connected.

SQL> CREATE TABLE EMP (
  2  EMPNO NUMBER(4));
CREATE TABLE EMP (
*
ERROR at line 1:
ORA-01031: insufficient privileges

STEP # 8: Connect as user 2 (SSS) and create an object; this user successfully creates the object, because system privileges are independent.

SQL> CONN SSS/SSS Connected.

SQL> CREATE TABLE EMP ( EMPNO NUMBER(4)); Table created.

SQL> INSERT INTO EMP VALUES (1111);
1 row created.

OBJECT PRIVILEGES:
An object privilege is a privilege, or right, to perform a particular action on a specific table, view, sequence, procedure, function, or package. Each object type has a particular set of grantable privileges:

Privilege    Table  View  Sequence  Procedure
ALTER          Y            Y
DELETE         Y      Y
EXECUTE                               Y
INDEX          Y
INSERT         Y      Y
REFERENCES     Y
SELECT         Y      Y     Y
UPDATE         Y      Y

This table does not list all object privileges, only those on commonly used objects.

HOW TO GRANT OBJECT PRIVILEGES ON SQL*PLUS

SQL> GRANT UPDATE ON SCOTT.EMP TO AAA;

USING WITH GRANT OPTION (means the user is able to transfer this grant to others):

GRANT UPDATE ON SCOTT.EMP TO AAA WITH GRANT OPTION;

REVOKING OBJECT PRIVILEGES


Use the REVOKE command to revoke object privileges. The user revoking the privilege must be the original grantor of the object privilege being revoked.

REVOKE SELECT ON SCOTT.EMP FROM AAA;

SCENARIO:
User AAA grants the SELECT object privilege on EMP WITH GRANT OPTION to BBB.
BBB grants the SELECT privilege on EMP to CCC.
Later, user AAA revokes the SELECT privilege from BBB.
RESULT: The revoke also applies to CCC (CCC no longer has the right to select from EMP). CCC is affected because object privileges are dependent: revokes cascade.
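The scenario above, written out as the commands each user would run (this sketch assumes users AAA, BBB, and CCC already exist and can connect):

```sql
-- As AAA (who holds SELECT on SCOTT.EMP with the grant option)
GRANT SELECT ON SCOTT.EMP TO BBB WITH GRANT OPTION;

-- As BBB
GRANT SELECT ON SCOTT.EMP TO CCC;

-- As AAA again; this revoke cascades, so CCC also loses SELECT
REVOKE SELECT ON SCOTT.EMP FROM BBB;
```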

STEP # 1: Connect to SYS, create two users, and grant CONNECT and RESOURCE to both users.

SQL> CONN SYS/ORACLE AS SYSDBA
Connected.

SQL> Create User UMAR Identified By UMAR; User created.

CHAPTER # 4

Managing The Oracle Instance

MANAGEMENT FRAMEWORK

There are three components of the Oracle Database 10g management framework:

- A database instance that is being managed
- A listener that allows connections to the database
- The management interface: either a Management Agent running on the database server (which connects it to Oracle Enterprise Manager Grid Control), or the stand-alone Oracle Enterprise Manager Database Control, also called the DATABASE CONSOLE

NOTE: Each of these components must be explicitly started before its services can be used, and each must be shut down cleanly when shutting down the Oracle database.

STARTING AND STOPPING DATABASE CONTROL

Page # 4-4

When we install Oracle Database, Oracle Universal Installer also installs Oracle Enterprise Manager (OEM). Its web-based Database Control serves as the primary tool for managing our Oracle database.

STEP # 1 Open a DOS prompt:
C:\> SET ORACLE_SID=orcl2      (the database name may differ)
C:\> emctl start dbconsole
C:\> lsnrctl start

STEP # 2

Open Internet Explorer and on the address bar type http://computer_name:port_number/em

To check the computer name: right-click My Computer, choose Properties, then the Computer Name tab. The port number is displayed when emctl starts dbconsole; normally it is 1158.

STEP # 3 On the login screen, enter the username and password.

STEP # 4 The Enterprise Manager 10g window opens. Click Administration and start working. We can do everything here that we can perform in SQL*Plus.

ISQL PLUS PRACTICAL

Page # 4-9

STEP # 1 At a DOS prompt, run: isqlplusctl start

STEP # 2 Open Internet Explorer and type the iSQL*Plus address, for example http://PC-01:5560/isqlplus/
Syntax: http://host_name:port_number/isqlplus (the iSQL*Plus port is usually 5560).
To check the computer name: right-click My Computer, choose Properties, then the Computer Name tab. The port number can also be found manually in E:\oracle\product\10.1.0\Db_1\install\portlist

STEP # 3 On the login screen, enter the username and password. iSQL*Plus is now started; we can do everything here that we can perform in SQL*Plus.

LOGIN ISQLPLUS AS SYSDBA PRACTICAL

Page # 4-10

STEP # 1 Open CMD SET JAVA_HOME=D:\ORACLE\PRODUCT\10.2.0\DB_1\JDK SET ORACLE_HOME=D:\ORACLE\PRODUCT\10.2.0\DB_1 SET ORACLE_SID=ORCL

CD D:\ORACLE\PRODUCT\10.2.0\DB_1\OC4J\J2EE\ISQLPLUS\APPLICATIONDEPLOYMENTS\ISQLPLUS

Type D: enter drive name where oracle install

C:\> d:\oracle\product\10.2.0\db_3\jdk\bin\java -Djava.security.properties=%ORACLE_HOME%\oc4j\j2ee\home\config\jazn.security.props -jar %ORACLE_HOME%\oc4j\j2ee\home\jazn.jar -user "iSQL*Plus DBA/admin" -password welcome -shell

You are now at the JAZN:> prompt. Run the following steps there:
JAZN:> adduser "iSQL*Plus DBA" xyz xyz
JAZN:> listusers
JAZN:> grantrole webDba "iSQL*Plus DBA" xyz
JAZN:> exit

STEP # 2 Open Internet Explorer and type http://localhost:5560/isqlplus/dba
A security window opens; type the username and password xyz/xyz. iSQL*Plus starts. Enter username SYS, password oracle, connect AS SYSDBA, and start working.

USING THE SQL PLUS

Page # 4-12

For the BLACK screen: Run, type sqlplus, then enter the username and password.
For the WHITE screen: Programs, Oracle 10g Home 1, Application Development, SQL*Plus.

CALLING SCRIPT ON SQLPLUS

Page # 4-14

STEP # 1 Create a script file in Notepad containing the following SQL statements:

SELECT * FROM emp;
SELECT * FROM emp WHERE job = 'SALESMAN';
SELECT SUM(sal), MIN(sal), MAX(sal) FROM emp;
SELECT * FROM emp
WHERE sal > (SELECT sal FROM emp WHERE ename = 'SMITH');
SELECT * FROM cat;
SELECT * FROM emp, dept;
UPDATE emp SET job = 'SALESMAN';
SELECT * FROM emp;
ROLLBACK;
SELECT * FROM emp;
SELECT * FROM emp
WHERE ename LIKE '\_%' ESCAPE '\';
SELECT * FROM emp
WHERE job IN (SELECT job FROM emp WHERE empno = 7788);
INSERT INTO emp
VALUES (9999, 'RAHEEL', 'SALESMAN', NULL, NULL, NULL, NULL, NULL);

Now save the Notepad file with a .sql extension, for example script.sql

STEP # 2 On sqlplus or isqlplus

Run the script with @ followed by the path of the file you created above, for example:
@E:\PRACTICE\Raheel\Raheel\final\10G\script.sql

INITIALIZATION PARAMETER FILE

Page # 4-15

In order to start an instance and open the database, we must connect AS SYSDBA and enter the STARTUP command. The Oracle server then reads the initialization parameter file and configures the instance according to the initialization parameters it contains. For this we must have the SYSDBA privilege. There are two types of parameter files:

- Static parameter file, PFILE, commonly referred to as init<sid>.ora
- Dynamic persistent server parameter file, SPFILE, commonly referred to as spfile<sid>.ora

CONTENTS OF THE PARAMETER FILE
- A list of instance parameters
- The name of the database the instance is associated with
- Allocation for memory structures of the SGA
- What to do with filled online redo log files
- The names and locations of control files
- Information about undo tablespaces

Text initialization parameter file: PFILE, init<sid>.ora (<sid> means the database name)
- Text file
- Modified with an operating system editor; modifications are made manually
- Whenever we change a value in the PFILE we must restart the database: the PFILE is read only during instance startup, so changes take effect on the next startup.

For Oracle 9i the default location is C:\oracle\admin\orcl\pfile\init<sid>.ora
For Oracle 10g (R2) the default location is E:\oracle\product\10.2.0\Db_1\database\initorcl.ora

Server parameter file: SPFILE, spfile<sid>.ora (<sid> means the database name)
- Binary file
- Maintained by the Oracle server
- Always resides on the server side and is read and written by the database server
- Makes parameter changes persistent across SHUTDOWN and STARTUP
- Allows the server to self-tune parameter values
- Can be backed up by RMAN along with the rest of the database

For Oracle 9i the default location is C:\oracle\ora92\database\spfileorcl.ora
For Oracle 10g (R2) the default location is E:\oracle\product\10.2.0\Db_1\dbs\spfileorcl.ora

PRACTICAL
------------------------------------------------------------------------------------------------------------
To see which parameter file the current instance is running with:

SQL> show parameter spfile;

NAME      TYPE        VALUE
--------- ----------- ------------------------------
spfile    string      C:\ORACLE\PRODUCT\10.2.0\DB_1\DBS\SPFILEORCL.ORA

(If the VALUE column is empty, the instance was started with a PFILE.)
------------------------------------------------------------------------------------------------------------

VIEWING AND MODIFYING INITIALIZATION PARAMETERS

Page # 4-18

We can use Enterprise Manager to view and modify initialization parameters: click the Administration tab, then Database Configuration, then All Initialization Parameters. Alternatively, the ALTER SYSTEM command is used to change the value of an instance parameter. The V$SPPARAMETER view shows the contents of the SPFILE:

SQL> DESC V$SPPARAMETER
SQL> SELECT * FROM V$SPPARAMETER;
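As a sketch of how ALTER SYSTEM interacts with the SPFILE, the SCOPE clause controls where a change lands (the parameters and values below are just examples):

```sql
-- Change in memory only; lost at the next restart.
ALTER SYSTEM SET open_cursors = 400 SCOPE = MEMORY;

-- Change recorded in the SPFILE only; takes effect at the next startup.
-- Required for static parameters such as sga_max_size.
ALTER SYSTEM SET sga_max_size = 600M SCOPE = SPFILE;

-- Change both in memory and in the SPFILE (the default when
-- the instance was started from an SPFILE).
ALTER SYSTEM SET undo_retention = 900 SCOPE = BOTH;
```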

Startup command behavior


ORDER OF PRECEDENCE: When the STARTUP command is used, the spfileSID.ora on the server side is used to start the instance. If spfileSID.ora is not found, the default spfile.ora on the server side is used. If the default spfile.ora is not found, the initSID.ora on the server side is used. A PFILE specified on the STARTUP command line overrides the default SPFILE.

CREATE SPFILE FROM PFILE

FOR ORACLE 9i:
CREATE SPFILE = 'd:\oracle\ora92\database\spfileorcl.ora'
FROM PFILE = 'd:\oracle\admin\orcl\pfile\initorcl.ora';

FOR ORACLE 10g:
CREATE SPFILE = 'C:\oracle\product\10.2.0\Db_1\dbs\SPFILEORCL1.ORA'
FROM PFILE = 'C:\oracle\product\10.2.0\Db_1\database\INITorcl.ORA';
File created.

In both versions this can also be done without specifying the file paths:
CREATE SPFILE FROM PFILE;
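A sketch of overriding the default SPFILE with an explicit PFILE at startup (the path is just an example):

```sql
-- Start the instance from a specific text parameter file,
-- bypassing any spfile<SID>.ora in the default location.
STARTUP PFILE='C:\oracle\product\10.2.0\Db_1\database\initorcl.ora';
```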

STARTING UP THE DATABASE :


There are 4 stages a database passes through during startup and shutdown:

Page # 4-21

SHUTDOWN, NOMOUNT, MOUNT, OPEN

AT NOMOUNT STAGE: An instance is typically started only in NOMOUNT mode during database creation, during re-creation of control files, or during certain backup and recovery scenarios. At this stage the following tasks are performed:
- Reading the initialization parameter file: first spfileSID.ora; if not found then spfile.ora; if not found then initSID.ora. Specifying the PFILE parameter with STARTUP overrides this default behavior.
- Allocating the SGA
- Starting the background processes
- Opening the alertSID.log file and the trace files

AT MOUNT STAGE: Mounting a database includes the following tasks:
- Associating the database with the instance started at the NOMOUNT stage
- Locating and opening the control files specified in the parameter file
- Reading the control files to obtain the names, status, and destinations of the DATA FILES and ONLINE REDO LOG FILES
The MOUNT stage is used to perform special maintenance operations, such as:

- Renaming data files (data files for an offline tablespace can also be renamed while the database is open)
- Enabling and disabling online redo log archiving options
- Performing full database recovery

AT OPEN STAGE: Opening a database includes the following tasks:
- Opening the online data files
- Opening the online redo log files
NOTE: If any of the data files or online redo log files are not present when you attempt to open the database, the Oracle server returns an error.

The STARTUP command is used to start the database:
STARTUP NOMOUNT;
To move the database from NOMOUNT to MOUNT, or from MOUNT to OPEN, use the ALTER DATABASE command:
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

SQL> startup nomount
ORACLE instance started.
Total System Global Area  171966464 bytes
Fixed Size                   787988 bytes
Variable Size             145750508 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

SQL> alter database mount;
Database altered.

SQL> alter database open;
Database altered.

OPENING A DATABASE IN RESTRICTED MODE :

Page # 4-23 Last

A restricted session is useful when you perform structure maintenance or a database import and export. The database can be started in restricted mode so that it is available only to users having the RESTRICTED SESSION / administrative privilege. This can be done in two ways.

1- Before starting the database:
SQL> STARTUP RESTRICT
ORACLE instance started.
Total System Global Area  612368384 bytes
Fixed Size                  1250452 bytes
Variable Size             281021292 bytes
Database Buffers          327155712 bytes
Redo Buffers                2940928 bytes
Database mounted.
Database opened.

or
2- After the database is open, use the ALTER SYSTEM command:
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
System altered.

After placing an instance in restricted mode, you may want to kill all current user sessions before performing administrative tasks. This can be done with:
SQL> ALTER SYSTEM KILL SESSION 'integer1,integer2';
where integer1 is the value of the SID column and integer2 the value of the SERIAL# column in the V$SESSION view.

Effects of terminating a session: the ALTER SYSTEM KILL SESSION command causes the background process PMON to perform the following steps:
1- Roll back the user's current transaction
2- Release all currently held table and row locks
3- Free all resources currently reserved by the user


TERMINATE SESSION

STEP # 1
SQL> CREATE USER ABC IDENTIFIED BY ABC;
User created.
SQL> CONN ABC/ABC;
ERROR:
ORA-01045: user ABC lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.

The connection fails because this user has not yet been granted the right to connect to the database.

SQL> CONN SYS/ORACLE AS SYSDBA;
Connected.
SQL> GRANT CONNECT, RESOURCE TO ABC;
Grant succeeded.
SQL> CREATE USER XYZ IDENTIFIED BY XYZ;
User created.
SQL> GRANT CONNECT, RESOURCE TO XYZ;
Grant succeeded.

STEP # 2 Open three more SQL*Plus sessions:
abc/abc      session 2
xyz/xyz      session 3

scott/tiger session 4

STEP # 3 Query on scott/tiger session 4

SQL> SELECT * FROM EMP;
14 rows selected.
SQL> UPDATE EMP SET SAL=1000;
14 rows updated.
SQL> SELECT * FROM EMP;
14 rows selected.

STEP # 4

Now scott grant select right to abc and xyz


SQL> GRANT SELECT ON SCOTT.EMP TO ABC;
Grant succeeded.
SQL> GRANT SELECT ON SCOTT.EMP TO XYZ;
Grant succeeded.

STEP # 5 Query in the abc/abc session (session 2):

SELECT * FROM SCOTT.EMP; Query on xyz session 3

SELECT * FROM SCOTT.EMP;

STEP # 6 Now come to the SYS session (session 1) and find the username, SID, and serial# from the V$SESSION view:

SQL> SELECT USERNAME, SID, SERIAL# FROM V$SESSION;

Then kill the session:
SQL> ALTER SYSTEM KILL SESSION 'integer1,integer2';
where integer1 = SID and integer2 = SERIAL#, for example:

SQL> ALTER SYSTEM KILL SESSION '123,23';
System altered.

STEP # 7 Query in the abc session:
SQL> SELECT * FROM SCOTT.EMP;
An error occurs:
--------------------------------------------------------------
SQL> SELECT * FROM EMP;
SELECT * FROM EMP
*
ERROR at line 1:
ORA-00028: your session has been killed
--------------------------------------------------------------
ERROR: ORA-01012: not logged on
This means the session of this user has been killed and he is no longer able to work with the database.

OPENING A DATABASE IN READ ONLY MODE


To prevent data from being modified by user transactions, the database can be opened in read-only mode:
SQL> ALTER DATABASE OPEN READ ONLY;

In this mode no DDL or DML operations can be performed. A READ ONLY session can be used to:
- Execute queries
- Execute disk sorts using locally managed tablespaces
- Take data files (but not tablespaces) offline and online
- Perform recovery of offline data files and tablespaces
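A sketch of what read-only mode rejects (the mount step assumes the database starts from a clean shutdown):

```sql
-- The database must be mounted (not open) before opening read only.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;

-- Queries still work:
SELECT COUNT(*) FROM scott.emp;

-- But DML is rejected, e.g.:
UPDATE scott.emp SET sal = sal * 2;
-- ORA-16000: database open for read-only access
```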

SHUTDOWN THE DATABASE :


There are 4 ways to shut down the database.

1- SHUTDOWN NORMAL:
- No new connections can be made
- The Oracle server waits for all users to disconnect
- Database and redo buffers are written to disk
- Background processes are terminated and the SGA is removed from memory
- Oracle closes and dismounts the database before shutting down
- The next startup does not require instance recovery

2- SHUTDOWN TRANSACTIONAL:
- No new connections can be made
- Each user is automatically disconnected after completing the transaction in progress
- When all transactions have finished, shutdown occurs immediately
- The next startup does not require instance recovery

Page # 4-24

3- SHUTDOWN IMMEDIATE:
- Current SQL statements being processed are not completed
- The Oracle server does not wait for users currently connected to the database
- Oracle closes and dismounts the database before shutting down
- The next startup does not require instance recovery

4- SHUTDOWN ABORT:
- Oracle does not wait for users currently connected to the database
- Database and redo buffers are not written to disk
- The instance is terminated without closing the files
- The database is not closed or dismounted
- The next startup requires instance recovery

MONITORING AN INSTANCE USING ALERT LOG FILES :


Each Oracle instance has an alert<sid>.log file. The file resides on the server with the database and is stored in the directory specified by the BACKGROUND_DUMP_DEST initialization parameter. The file must be managed by the DBA: it grows continuously while the database is in use. The alert log should be the first place you look when diagnosing day-to-day operations or errors. It keeps a record of the following information:
- When the database was STARTED and SHUT DOWN
- ARCHIVELOG and RECOVER operations
- A list of non-default initialization parameters
- The startup of the background processes
- The thread used by the instance
- The log sequence number LGWR is writing to
- Information regarding log switches
- Administrative operations, such as the SQL statements CREATE, ALTER, and DROP on DATABASE and TABLESPACE, and creation of tablespaces and undo segments
- Information regarding error messages such as ORA-600 and extent errors
- Block corruption errors (ORA-1578) and deadlock errors (ORA-60)

The alertSID.log location is defined by the BACKGROUND_DUMP_DEST initialization parameter.
DESTINATION for Oracle 10g: E:\oracle\product\10.1.0\admin\ORCL\bdump

NOTE: This file can grow to an unmanageable size. If the alert log is deleted while the database is open, it is re-created automatically, and a new one is created every time the instance starts without finding it.
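To find where the alert log is written on a given instance, the destination can be read from V$PARAMETER (a small sketch; in 10g the relevant parameter is BACKGROUND_DUMP_DEST):

```sql
-- Show the directory that contains alert<SID>.log
SELECT value
FROM   v$parameter
WHERE  name = 'background_dump_dest';
```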

DYNAMIC PERFORMANCE VIEWS:

Page # 4-32

The Oracle database also maintains a more dynamic set of data about the operation and performance of the database instance, stored in dynamic performance views. These views provide access to information about changing states and conditions in the database. They are based on virtual tables built from memory structures inside the database server; that is, they are not true tables. The views are owned by the SYS user; other users cannot access them unless the DBA grants privileges on them. They are often referred to as "V-dollar" (V$) views. We can query V$FIXED_TABLE to see all the view names:

SQL> SELECT * FROM V$FIXED_TABLE;

Read consistency is not guaranteed on these views because the data is dynamic. Some dynamic performance views contain data that is not applicable to all states of an instance or database, which is why some of them can show data before the database is mounted or open. For example, if an instance has been started but no database is mounted, we can query V$BGPROCESS to see the list of background processes that are running:

SQL>SELECT * FROM V$BGPROCESS;

But we cannot query V$DATAFILE to see the status of data files, because the database is not mounted.

Dynamic performance views include information about:
- Sessions
- File states
- Progress of jobs and tasks
- Locks
- Backup states
- Memory usage and allocation
- System and session parameters
- SQL execution
- Statistics and metrics

LIST OF COMMON DYNAMIC VIEWS
V$CONTROLFILE   Lists the names of the control files
V$DATAFILE      Contains data file information from the control file
V$DATABASE      Contains database information from the control file
V$INSTANCE      Displays the state of the current instance
V$PARAMETER     Lists parameters and values currently in effect for the session
V$SESSION       Lists session information for each current session
V$SGA           Contains summary information on the System Global Area (SGA)
V$SPPARAMETER   Lists the contents of the SPFILE
V$TABLESPACE    Displays tablespace information from the control file
V$THREAD        Contains thread information from the control file
V$VERSION       Version numbers of core library components in the Oracle server
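As a sketch, which V$ views are queryable depends on the stage the instance has reached; these illustrate NOMOUNT versus MOUNT:

```sql
-- Available from NOMOUNT onward: instance-level views.
SELECT instance_name, status FROM v$instance;

-- Available from MOUNT onward: views built from the control file.
SELECT name, open_mode FROM v$database;
SELECT file#, name, status FROM v$datafile;
```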

Flashback query

Query all data as it existed at a specific point in time. Flashback Query lets you view and repair historical data. You can perform queries on the database as of a certain wall-clock time or a user-specified system change number (SCN).

Example
SQL> --- At 11:34
SQL> DELETE FROM EMP WHERE JOB='SALESMAN';
4 rows deleted.
SQL> COMMIT;
Commit complete.

Next, use a SELECT ... AS OF query to retrieve the flashback data from the past.
SQL> --- At 12:19

DELETE CASE

SQL> SELECT * FROM EMP AS OF TIMESTAMP SYSDATE - 1/24;
or
SQL> SELECT * FROM emp AS OF TIMESTAMP
     TO_TIMESTAMP('2010-03-05 11:33:30', 'YYYY-MM-DD HH24:MI:SS');

Once we confirm the validity of the accidentally deleted data, it is easy to reinsert it by using the previous query as part of an INSERT statement, as shown here:

SQL> INSERT INTO EMP
     SELECT * FROM EMP AS OF TIMESTAMP SYSDATE - 1/24
     WHERE JOB = 'SALESMAN';
4 rows created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM EMP;

UPDATE CASE
SQL> --- AT 12:22
SQL> SELECT * FROM EMP WHERE EMPNO=7900;
7900 JAMES CLERK 7698 03-DEC-81 950 30

SQL> UPDATE EMP SET SAL=3000 WHERE EMPNO=7900;
1 row updated.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM EMP WHERE EMPNO=7900;
7900 JAMES CLERK 7698 03-DEC-81 3000 30

SQL> --- AT 12:40
SQL> SELECT * FROM EMP AS OF TIMESTAMP
     TO_TIMESTAMP('2010-03-08 12:20:00', 'YYYY-MM-DD HH24:MI:SS')
     WHERE EMPNO=7900;


7900 JAMES CLERK 7698 03-DEC-81 950 30

SQL>

UPDATE EMP SET SAL=(SELECT SAL FROM EMP AS OF TIMESTAMP TO_TIMESTAMP ('2010-03-08 12:20:00' , 'YYYY-MM-DD HH24:MI:SS') WHERE EMPNO=7900) WHERE EMPNO=7900;

1 row updated. SQL> SELECT * FROM EMP WHERE EMPNO=7900;


7900 JAMES CLERK 7698 03-DEC-81 950 30

UPDATE CASE 2
SQL> --- AT 12:22
SQL> SELECT * FROM EMP WHERE JOB='SALESMAN';


7499 ALLEN 7521 WARD 7654 MARTIN 7844 TURNER SALESMAN SALESMAN SALESMAN SALESMAN 7698 20-FEB-81 7698 22-FEB-81 7698 28-SEP-81 7698 08-SEP-81 1600 1250 1250 1500 300 500 1400 0 30 30 30 30

SQL> UPDATE EMP SET SAL=10000 WHERE JOB='SALESMAN';
4 rows updated.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM EMP WHERE JOB='SALESMAN';
7499 ALLEN 7521 WARD 7654 MARTIN 7844 TURNER SALESMAN SALESMAN SALESMAN SALESMAN 7698 20-FEB-81 7698 22-FEB-81 7698 28-SEP-81 7698 08-SEP-81 10000 10000 10000 10000 300 500 1400 0 30 30 30 30

SQL> --- AT 12:40
SQL> SELECT * FROM EMP AS OF TIMESTAMP
     TO_TIMESTAMP('2010-03-08 12:20:00', 'YYYY-MM-DD HH24:MI:SS')
     WHERE JOB='SALESMAN';
7499 ALLEN 7521 WARD 7654 MARTIN 7844 TURNER SALESMAN SALESMAN SALESMAN SALESMAN 7698 20-FEB-81 7698 22-FEB-81 7698 28-SEP-81 7698 08-SEP-81 1600 1250 1250 1500 300 500 1400 0 30 30 30 30

SQL> UPDATE EMP E
     SET SAL = (SELECT SAL FROM EMP AS OF TIMESTAMP
                TO_TIMESTAMP('2010-03-08 15:25:04', 'RRRR-MM-DD HH24:MI:SS')
                WHERE E.EMPNO = EMPNO AND JOB = 'SALESMAN')
     WHERE JOB = 'SALESMAN';
4 rows updated.
SQL> SELECT * FROM EMP WHERE JOB='SALESMAN';
7499 ALLEN 7521 WARD 7654 MARTIN 7844 TURNER SALESMAN SALESMAN SALESMAN SALESMAN 7698 20-FEB-81 7698 22-FEB-81 7698 28-SEP-81 7698 08-SEP-81 1600 1250 1250 1500 300 500 1400 0 30 30 30 30

The previous examples use a time stamp to pinpoint the time the data was accidentally changed. We could instead use the SCN of the transaction: an SCN identifies the exact transaction boundary, while a time stamp is convenient when you only know roughly when the change happened.
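A sketch of the SCN variant: capture the current SCN before a risky change, then flash back to it (DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER returns the current SCN; 1153062 below is just an example value):

```sql
-- Record the SCN before making changes.
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;
-- suppose it returns 1153062

-- ... accidental DML and COMMIT happen here ...

-- View the table as it was at that SCN.
SELECT * FROM emp AS OF SCN 1153062 WHERE job = 'SALESMAN';
```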

Query Parsing

What is Query Parsing ?


In Oracle, every statement, DDL, DML, or anything else, gets parsed. Parsing is the process by which Oracle understands the statement and decides, based on that understanding, how to execute it. This process is also known as optimization of the query: the idea is to work out how best Oracle can process the query, in other words, to optimize its execution.

Parsing is of two types: hard parse and soft parse. If the query is found in Oracle's cache, it does not need to be optimized again; Oracle can pick up the already-optimized version and execute it. This is a soft parse. If the query is run for the first time, or its cached version is obsolete or has been flushed out of the cache, the query must be optimized, and that process is called a hard parse. A hard parse is in general unavoidable, since every query must be fully parsed at least once, on its very first execution. In subsequent executions, however, the query should simply be soft parsed and executed. The mechanism that does all this in the background is called the optimizer. There are two versions of it: rule based and cost based. The Rule Based Optimizer (RBO) is deprecated from Oracle 10g onward; it was not very efficient because it was driven purely by fixed rules about the statement. The Cost Based Optimizer is now the only supported optimizer mode; as the name suggests, it is cost based and takes the resource consumption of the query into account.
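To see how often statements are being parsed versus reused, the shared pool can be inspected through V$SQL (a sketch; a high PARSE_CALLS relative to EXECUTIONS suggests poor cursor reuse):

```sql
-- Statements with many parse calls relative to executions
-- are candidates for bind variables / cursor sharing.
SELECT sql_id, parse_calls, executions,
       SUBSTR(sql_text, 1, 60) AS sql_text
FROM   v$sql
WHERE  executions > 0
ORDER  BY parse_calls DESC;
```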

Queries
============================================ Display time waited for each wait class. ============================================

SELECT a.wait_class, sum(b.time_waited)/1000000 time_waited FROM v$event_name a JOIN v$system_event b ON a.name = b.event GROUP BY a.wait_class;

=============================================== Display session wait information by wait class. ===============================================

SELECT * FROM v$session_wait_class WHERE sid = &enter_sid;

=============================================== Statistics (delete lock unlock) ===============================================

BEGIN
  DBMS_STATS.delete_table_stats('MY_SCHEMA','LOAD_TABLE');
  DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
  DBMS_STATS.unlock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/

=============================================
Retrieve a SAMPLE of the data (%)
============================================
SELECT e.empno, e.ename, d.dname
FROM emp SAMPLE (10) e
JOIN dept d ON e.deptno = d.deptno;

===============================================
Waits over the past 30 minutes
===============================================
select ash.event,
       sum(ash.wait_time + ash.time_waited) ttl_wait_time
from v$active_session_history ash
where ash.sample_time between sysdate - 30/1440 and sysdate
group by ash.event
order by 2
/

================================================= What user is waiting the most(last_hour) ? =================================================

select sesion.sid,
       sesion.username,
       sum(active_session_history.wait_time +
           active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
     v$session sesion
where active_session_history.sample_time between sysdate - 1/24 and sysdate
  and active_session_history.session_id = sesion.sid
group by sesion.sid, sesion.username
order by 3
/

==============================================
What SQL is currently using the most resources? (last hour)
==============================================
select active_session_history.user_id,
       dba_users.username,
       sqlarea.sql_text,
       sum(active_session_history.wait_time +
           active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
     v$sqlarea sqlarea,
     dba_users
where active_session_history.sample_time between sysdate - 1/24 and sysdate
  and active_session_history.sql_id = sqlarea.sql_id
  and active_session_history.user_id = dba_users.user_id
group by active_session_history.user_id, sqlarea.sql_text, dba_users.username
order by 4
/

==============================================
What object is currently causing the highest resource waits? (last hour)
==============================================

select dba_objects.object_name,
       dba_objects.object_type,
       active_session_history.event,
       sum(active_session_history.wait_time +
           active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
     dba_objects
where active_session_history.sample_time between sysdate - 1/24 and sysdate
  and active_session_history.current_obj# = dba_objects.object_id
group by dba_objects.object_name, dba_objects.object_type, active_session_history.event
order by 4
/

=========================================== script to gather database statistics

==========================================
begin
  DBMS_STATS.GATHER_DATABASE_STATS (
    estimate_percent => 100,
    block_sample     => FALSE,
    method_opt       => 'for all columns size auto',
    degree           => null,
    cascade          => true,
    no_invalidate    => false,
    options          => 'GATHER STALE',
    gather_sys       => FALSE);
  DBMS_STATS.GATHER_DATABASE_STATS (
    estimate_percent => 100,
    block_sample     => FALSE,
    method_opt       => 'for all columns size auto',
    degree           => null,
    cascade          => true,
    no_invalidate    => false,
    options          => 'GATHER EMPTY',
    gather_sys       => FALSE);
end;
/

========================================= script to gather dictionary statistics ========================================

begin
  DBMS_STATS.GATHER_DICTIONARY_STATS (
    estimate_percent => 100,
    block_sample     => FALSE,
    method_opt       => 'for all columns size auto',
    degree           => null,
    cascade          => true,
    no_invalidate    => false,
    options          => 'GATHER STALE');
  DBMS_STATS.GATHER_DICTIONARY_STATS (
    estimate_percent => 100,
    block_sample     => FALSE,
    method_opt       => 'for all columns size auto',
    degree           => null,
    cascade          => true,
    no_invalidate    => false,
    options          => 'GATHER EMPTY');
end;
/

======================================== schedule a job (statistics) =======================================

begin
  sys.dbms_scheduler.create_job(
    job_name        => '"SYS"."ESTIMATE100_GATHERAUTO"',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
      DBMS_STATS.GATHER_DATABASE_STATS (
        estimate_percent => 100, block_sample => FALSE,
        method_opt => ''for all columns size auto'', degree => null,
        cascade => true, no_invalidate => false,
        options => ''GATHER STALE'', gather_sys => FALSE);
      DBMS_STATS.GATHER_DATABASE_STATS (
        estimate_percent => 100, block_sample => FALSE,
        method_opt => ''for all columns size auto'', degree => null,
        cascade => true, no_invalidate => false,
        options => ''GATHER EMPTY'', gather_sys => FALSE);
    end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0;BYSECOND=0',
    start_date      => trunc(sysdate+1) + 2/24,
    job_class       => 'DEFAULT_JOB_CLASS',
    comments        => 'Gather auto stats on every table with 100% sampling',
    auto_drop       => FALSE,
    enabled         => FALSE);

  sys.dbms_scheduler.set_attribute(
    name      => '"SYS"."ESTIMATE100_GATHERAUTO"',
    attribute => 'job_priority',
    value     => 4);

  sys.dbms_scheduler.enable('"SYS"."ESTIMATE100_GATHERAUTO"');
end;
/

Oracle Datapump parameter REMAP_SCHEMA


Loads all objects from the source schema into a target schema.

Syntax
REMAP_SCHEMA=source_schema:target_schema

Suppose that you execute the following Export and Import commands to remap the hr schema into the scott schema:
> expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp

> impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott

In this example, if user scott already exists before the import, then the Import REMAP_SCHEMA command will add objects from the hr schema into the existing scott schema. You can connect to the scott schema after the import by using the existing password (without resetting it). If user scott does not exist before you execute the import operation, Import automatically creates it with an unusable password. This is possible because the dump file, hr.dmp, was created by SYSTEM, which has the privileges necessary to create a dump file that contains the metadata needed to create a schema. However, you cannot connect to scott on completion of the import, unless you reset the password for scott on the target database after the import completes.
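After such an import, the result can be checked from the data dictionary (a sketch; SCOTT is the target schema from the example above, and new_password is a placeholder):

```sql
-- Confirm the remapped objects landed in SCOTT.
SELECT owner, object_type, COUNT(*) AS objects
FROM   dba_objects
WHERE  owner = 'SCOTT'
GROUP  BY owner, object_type;

-- If SCOTT was auto-created with an unusable password, reset it:
ALTER USER scott IDENTIFIED BY new_password;
```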

Basic Introduction to SQL*PLUS


The SQL*PLUS (pronounced "sequel plus") program allows you to store and retrieve data in the relational database management system ORACLE. Databases consist of tables which can be manipulated by structured query language (SQL) commands. A table is made up of columns (vertical) and rows (horizontal). A row is made up of fields which contain a data value at the intersection of a row and a column. Be aware that SQL*PLUS is a program and not a standard query language.

Getting Started
It is a prerequisite that users are registered for ORACLE; an ORACLE account is needed. On Unix platforms you must run the script oraenv to set the ORACLE environment: enter the command . oraenv and press <Return>. Don't forget to type a blank between the dot and oraenv.

If you are working on a PC running MS Windows, simply use Netinstall to install the product; you can find the software in the database folder. Enter sqlplus on Unix systems, or run it on Windows from the Start menu. Answer the displayed prompts by entering your ORACLE user-name and password. The SQL*PLUS command prompt SQL> indicates that you are ready to work.

Some elementary Commands


alter user user identified by newpassword   enables a user to change the password
help                                        accesses the SQL*PLUS help system
exit, quit                                  terminates SQL*PLUS
ho[st]                                      leads to the operating system without leaving SQL*PLUS
ho[st] command                              executes a host operating system command
ho[st] oerr                                 accesses the ORACLE error help for unix

Editing and Executing


All entered input is stored as a single SQL*PLUS statement in the command buffer. Pressing the <Return> key while editing will either open a new numbered line or, if the previous line ends with a semicolon or consists of a single slash, will execute the SQL*PLUS command. Opening new numbered lines allows you to structure statements and enables you to refer to particular lines later using the edit functions.

l[ist]                    lists the command buffer (the current line is marked with a star)
l n  or  n                makes line n the current line and lists it
l n m                     lists lines n through m
a text                    appends text to the current line
c/oldstring/newstring     changes oldstring to newstring in the current line
i                         inserts a line after the current line
del                       deletes the current line
r[un]                     runs and lists the command buffer
/                         runs the command buffer
;                         lists the command buffer

If you use substitution variables, like &variable, instead of values or names in your SQL statement, SQL*PLUS will prompt you and substitute the entered value. A substitution variable is a user variable name preceded by an ampersand.
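A sketch of a substitution variable in action (emp and dept are just example names):

```sql
-- SQL*Plus prompts "Enter value for dept:" and substitutes the reply.
SELECT ename, sal
FROM   emp
WHERE  deptno = &dept;
```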

Working with Command Files


You can use command files to save complex commands. After creating a command file you can retrieve, edit, and run it. The default file extension is .sql; if you use another extension you must write the full file name as name.extension.

ed[it]    overwrites a scratch file with the contents of the command buffer and opens it with the defined host operating system editor. The name of the scratch file is afiedt.buf. After leaving the editor the buffer is listed and you can execute it.

ed[it] filename sav[e] filename sav[e] filename [option]

ed[it] filename - enables you to edit an existing or new file filename.sql
sav[e] filename - creates file filename and stores the command buffer into it
sav[e] filename [option] - stores the command buffer into file filename; possible options are cre[ate], app[end], rep[lace]
get filename - loads the host operating system file filename into the command buffer
sta[rt] filename [arg1 arg2 ..] - executes file filename; arg1 arg2 .. are arguments you wish to pass to the command file

If you run a command file in which a substitution variable like &1 is used, you will be prompted for that value. You can avoid being prompted by passing an argument to the command file.
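For example, a command file that uses a substitution argument might look like this (the file name deptquery.sql is hypothetical; the standard scott demo schema is used):

```sql
-- deptquery.sql: a hypothetical command file
-- &1 is replaced by the first argument passed to START
select ename, sal
from scott.emp
where deptno = &1;
```

Running SQL> start deptquery 10 then substitutes 10 for &1 without prompting.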

Queries and Subqueries


Retrieving data from the database is the most common SQL operation. A query is an SQL command (specifically a select) that retrieves information from one or more tables. A subquery is a select which is nested in another SQL command

The Describe Command


desc[ribe] name lists the column definition for table or view name

Basic Select Commands


The basic select command consists of two parts, called clauses: select some data from table.

Examples:
select * from tabname - selects all columns and rows from table tabname
select distinct col from tabname - selects column col from table tabname and returns only one copy of duplicate rows
select col1, col2 ... from tabname - selects the specified columns from table tabname
select col1, col2*3 from tabname - selects col1, col2 from table tabname and lists col1 and col2 multiplied by 3
select 2*3 from dual - calculates 2*3 and displays the result

Selecting Rows and Ordering


To retrieve specific rows from a table you need to add a where clause, which consists of one or more search conditions connected by logical operators. To display the retrieved data in a specific order you add an order by clause.

Examples:
select col1,col2 from tabname where col1 < col2 and col2 != 0 order by col2
Columns col1, col2 are selected from table tabname, and all rows where col2 is not equal to zero and col1 is less than col2 are displayed in ascending order (ordered by col2).

select col1,col2 from tabname where col1 like '_A%' or col1 like '+++' order by col2 desc

Columns col1, col2 are selected from table tabname, and all rows where col1 is equal to '+++' or where the second letter in col1 is an 'A' are displayed in descending order. Two different wildcard characters are used in this example: the underscore matches exactly one character, whereas the percent sign matches zero or more characters.

select col1,col2 from tabname where col1 in (value1,value2)
Columns col1, col2 are selected from table tabname, and all rows where col1 is equal to value1 or value2 are displayed.

select col1,col2 from tabname where col1 not between value1 and value2
Columns col1, col2 are selected from table tabname, and all rows where col1 is not in the range between value1 and value2 are displayed.

Using Set Operator


Set operators combine the results of two queries into a single result. If a statement contains multiple set operators, they are evaluated from left to right.

union - returns all distinct rows selected by either query
union all - returns all rows selected by either query, including all duplicates
intersect - returns all distinct rows selected by both queries
minus - returns all distinct rows selected by the first query but not the second

Example: select * from table1 union all select * from table2
This combines all rows and columns of table1 and table2.

Querying Multiple Tables


If you want to retrieve information from different tables, you can do this by issuing different queries or a single JOIN query. In a JOIN query, you list the names of the tables you are querying in the from clause and the names of the linking columns in the where clause. The omission of the linking where clause causes a cartesian product of both tables. A JOIN combines rows from two or more tables where columns which the tables have in common match. If a column name is not unique, you must use a prefix to make clear which column from which table you want to select (e.g. tablename.columnname).

Simple Join
select col1,tab1.col2,col3 from tab1,tab2 where tab1.col2=tab2.col2
This is the most common type of join. It returns rows from two tables based on an equality condition; therefore it is also called an equi-join.

Non-Equi Join
select tab1.col1,tab2.col2 from tab1,tab2 where tab1.col1 between lowval and highval
Since this join doesn't return rows based on an equality condition, it is called a non-equi join.

Self Join
select alias1.col1,alias2.col1 "Header 2" from tabname alias1,tabname alias2 where alias1.col2=alias2.col3
In this example the table tabname is joined with itself. Using two different alias names for the same table allows you to refer to it twice. Since the names of the resulting columns in this example are the same, the second column gets a new header.

Outer Join
select col1,col2 from tab1,tab2 where tab1.col1=tab2.col2(+)
Suppose you want to retrieve information from two tables where not all rows match, but the result should contain all values from one or more columns. A simple join selects only matching rows, whereas the outer join extends the result: all matching rows are selected, and when you append the outer join operator (+) to a column name, rows from the other table that have no match are also selected. In this example, every row of tab1 appears in the result. Where rows match, the outer join works as a simple join; where they do not, the row from tab1 is returned with a NULL value for the nonexistent tab2.col2 value.
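The same outer join can also be written in ANSI join syntax (supported from Oracle9i onward); this sketch assumes the same hypothetical tables:

```sql
-- equivalent of: where tab1.col1 = tab2.col2(+)
-- every row of tab1 is preserved; NULLs fill tab2's columns where no match exists
select col1, col2
from tab1
left outer join tab2 on tab1.col1 = tab2.col2;
```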

Data Definition Language DDL


DDL commands allow you to create, alter, and delete objects (e.g. tables, views) and to grant and revoke privileges.

create table tabname (col1 type1, col2 type2, ...) - creates table tabname; col1 ... coln are the column names, and type1, type2, ... specify the datatype of each column, which can be number, date, char, or varchar2:
  number(p,s) specifies a fixed point number having precision p (total number of digits) and scale s (number of digits to the right of the decimal point)
  number(p) specifies a fixed point number
  number specifies a floating point number
  char(size) specifies fixed length (max 255) character data of length size
  varchar2(size) specifies a variable length (max 2000) character string having a maximum length of size bytes
create table tabname as subquery - creates table tabname; the subquery inserts rows into the table upon its creation. A subquery is a form of the select command which enables you to select columns from an existing table.
create view viewname as subquery - creates view viewname; a view is a logical table based on one or more tables
drop table tabname - removes table tabname from the database
alter table tabname add (col1 type1, col2 type2, ...) - adds columns to table tabname
alter table tabname modify (col1 type1, col2 type2, ...) - modifies column definitions
rename oldname to newname - renames table oldname
alter user user identified by newpassword; - enables user to change the password to newpassword
grant privilege on object to user - grants a privilege to user
revoke privilege on object from user - revokes a privilege from user

Data Manipulation Language DML


DML commands manipulate and query data in existing tables. These commands do not commit current actions.

insert into tabname (col1, col2, ...) values (val1, val2, ...) - inserts rows into table tabname
insert into tabname subquery - inserts rows (selected by a subquery) into table tabname
update tabname set col1=expr1, col2=expr2, ... where cond - updates rows in table tabname; columns are set to the values of the expressions if condition cond is true
update tabname set (col1, col2, ...) = (subquery) where cond - updates rows in table tabname; columns are set to the selected values if condition cond is true
delete from tabname [where cond] - either deletes all rows from table tabname or only the rows where cond is true

Schema
Schema
When you select data from a table or insert data into a table, the object has to be in your own schema; in other words, you must be the owner. If you are not the owner of the object, but the owner has granted the necessary privileges to you, you have to specify schema.tabname. Example:
select * from scott.emp

Transaction Control Commands


Transaction control commands manage changes made by Data Manipulation Language commands. A transaction (or logical unit of work) is a sequence of SQL statements that ORACLE treats as a single unit. A transaction ends with a commit, rollback, exit, or any DDL statement, which issues an implicit commit. In most cases transactions are implicitly controlled.

commit - makes all changes since the beginning of a transaction permanent
rollback - rolls back (undoes) all changes since the beginning of a transaction
rollback to savepoint savep - rolls back to savepoint savep
savepoint savep - defines savepoint savep
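A short worked example (the emp_copy table and its values are hypothetical) shows how a savepoint limits the scope of a rollback:

```sql
insert into emp_copy values (1001, 'SMITH');  -- change 1
savepoint before_second;                      -- mark a point inside the transaction
insert into emp_copy values (1002, 'JONES');  -- change 2
rollback to savepoint before_second;          -- undoes only change 2
commit;                                       -- makes change 1 permanent
```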

Resize Datafile to Optimal Size


SQL> DEFINE BLKSIZE=8192  -- set this to your DB block size

SELECT /*+rule*/ 'ALTER DATABASE DATAFILE '''||FILE_NAME||''' RESIZE '||CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 )||'M;',
       CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SMALLEST,
       CEIL( BLOCKS*&&BLKSIZE/1024/1024 ) CURRSIZE,
       CEIL( BLOCKS*&&BLKSIZE/1024/1024 ) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SAVINGS
FROM DBA_DATA_FILES A,
     ( SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM
       FROM DBA_EXTENTS
       GROUP BY FILE_ID ) B
WHERE A.FILE_ID = B.FILE_ID(+);

OUTPUT

(SMALLEST / CURRSIZE / SAVINGS shown after each generated statement)

ALTER DATABASE DATAFILE 'D:\ORACLCE\APP\ADNAN\ORADATA\ORCL\SYSTEM01.DBF' RESIZE 684M;    -- 684 / 690 / 6
ALTER DATABASE DATAFILE 'D:\ORACLCE\APP\ADNAN\ORADATA\ORCL\SYSAUX01.DBF' RESIZE 466M;    -- 466 / 490 / 24
ALTER DATABASE DATAFILE 'D:\ORACLCE\APP\ADNAN\ORADATA\ORCL\UNDOTBS01.DBF' RESIZE 89M;    -- 89 / 90 / 1
ALTER DATABASE DATAFILE 'D:\ORACLCE\APP\ADNAN\ORADATA\ORCL\USERS01.DBF' RESIZE 5M;       -- 5 / 5 / 0
ALTER DATABASE DATAFILE 'D:\ORACLCE\APP\ADNAN\ORADATA\ORCL\EXAMPLE01.DBF' RESIZE 82M;    -- 82 / 100 / 18

Oracle OCP test Quiz


ORA-19809: limit exceeded for recovery files db_recovery_file_dest_size and archiver error
Since we were working on development systems, the normal workaround was to free up some space in db_recovery_file_dest and hope that the database would continue after the archiver error. Invariably we needed to restart the database and hope that the problem resolved itself. The alert log typically shows an entry as follows:

ORA-19815: WARNING: db_recovery_file_dest_size of 42949672960 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have the following choices to free up space from the flash recovery area:
1. Consider changing the RMAN RETENTION POLICY. If you are using Data Guard, then consider changing the RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to a tertiary device such as tape using the RMAN BACKUP RECOVERY AREA command.
3. Add disk space and increase the db_recovery_file_dest_size parameter to reflect the new space.

4. Delete unnecessary files using the RMAN DELETE command. If an operating system command was used to delete files, then use the RMAN CROSSCHECK and DELETE EXPIRED commands.
************************************************************************
Errors in file /u00/app/oracle/diag/rdbms/sid/SID/trace/SID_ora_20214.trc:
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 1063256064 bytes disk space from 42949672960 limit
ARCH: Error 19809 Creating archive log file to +FRA

Connect from RMAN:
RMAN> CHANGE ARCHIVELOG ALL VALIDATE;
RMAN> DELETE EXPIRED ARCHIVELOG ALL;

Metalink Document https://metalink.oracle.com/metalink/plsql/f?p=200:27:9917209390401703684::::p27_id,p27_show_header,p27_show_help:621248.995, 1,1 has an unpublished note on the subject.

Cause
~~~~~
All information about what is placed in the flash recovery area is registered in the RMAN repository/controlfile. If Oracle determines that there is not sufficient space in the recovery file destination, as set by dest_size, it will fail. Just deleting the old backups/archive logs from disk is not sufficient, as it is the RMAN repository/controlfile that holds the space-used information.

Fix
~~~
There are a couple of possible options:
1) Increase the parameter db_recovery_file_dest_size.
2) Stop using db_recovery_file_dest by unsetting the parameter (this assumes you never really wanted to use this option).
3) Remove the entries from the RMAN repository/controlfile.

The removal is described in the RMAN documentation, but the following is a quick and dirty way if you don't have an RMAN repository. It could endanger your ability to recover, so be careful:
a) delete unwanted archive log files from disk (rm / del)
b) connect to RMAN
c) RMAN> crosscheck archivelog all; - marks in the controlfile that the archives have been deleted
d) RMAN> delete expired archivelog all; - deletes the log entries identified above
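Put together, a minimal cleanup session might look like the following sketch (run after removing the files at the OS level; the connection syntax may vary with your configuration):

```sql
-- from the OS shell: rman target /
CROSSCHECK ARCHIVELOG ALL;      -- marks OS-deleted archive logs as EXPIRED in the controlfile
DELETE EXPIRED ARCHIVELOG ALL;  -- removes the expired entries, releasing the FRA space accounting
```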

MAINTAINING ONLINE REDO LOG FILES

Before adding groups and members, first check how many groups are already in the database using these views:
SQL> select * from V$LOG;
SQL> select * from V$LOGFILE;

ADDING ONLINE REDO LOG FILE GROUPS
In some cases you may need to create an additional log file group, for example to solve an availability problem. This can be done with the following SQL command:

ALTER DATABASE ADD LOGFILE GROUP 4 ('D:\ORACLE\ORADATA\ORCL\REDO04A.LOG' , 'D:\ORACLE\ORADATA\ORCL\REDO04B.LOG') SIZE 1M;

ADDING ONLINE REDO LOG FILE MEMBER We can add new members to existing online redo log file groups using the following SQL command:

ALTER DATABASE ADD LOGFILE MEMBER 'D:\ORACLE\ORADATA\ORCL\REDO04C.LOG' TO GROUP 4 , 'D:\ORACLE\ORADATA\ORCL\REDO03B.LOG' TO GROUP 3

When adding a member, use the fully specified name of the log file member; otherwise the file is created in a default directory of the database server.

DROPPING ONLINE REDO LOGFILE GROUPS

Online redo log file group can be dropped by the SQL command

ALTER DATABASE DROP LOGFILE GROUP 4;

RESTRICTIONS: When dropping an online redo log file group, consider the following restrictions:

1. An instance requires at least two groups of online redo log files.
2. An ACTIVE or CURRENT group cannot be dropped.
3. When an online redo log file group is dropped, the operating system files are not deleted.

NOTE: If the database is in archivelog mode, a group cannot be dropped until it has been archived, even if the group is INACTIVE.

DROPPING ONLINE REDO LOG FILE MEMBERS

We can drop online redo log file member by using SQL statement

ALTER DATABASE DROP LOGFILE MEMBER 'D:\ORACLE\ORADATA\ORCL\REDO02B.LOG';

RESTRICTIONS: While deleting online redo log file members we must consider the following restrictions

1. We cannot drop the last valid member of any group.
2. If the group is current, you must force a log switch before you can drop the member.
3. If the database is in archivelog mode and the log file group to which the member belongs is not archived, the member cannot be dropped.

4. When an online redo log file member is dropped, the operating system file is not deleted.

REALLOCATING OR RENAMING ONLINE REDO LOG FILES

The location of online redo log files can be changed by renaming them. Before renaming the online redo log files, ensure that the new online redo log file exists. Relocate or rename online redo log files in one of the following two ways:

1. Add new members and drop old members 2. Alter database rename file command

To use the ALTER DATABASE RENAME FILE command, follow these steps:

STEP # 1: Shutdown the database SQL > SHUTDOWN;

STEP # 2: Copy the online redo log file to the new location in operating system

STEP # 3: To start database at mount SQL > STARTUP MOUNT;

STEP # 4: Rename the online redo log file member:
SQL> ALTER DATABASE RENAME FILE
'D:\ORACLE\ORADATA\ORCL\REDO03C.LOG' TO 'D:\ORACLE\ORADATA\REDO03B.LOG';

CLEARING ONLINE REDO LOG FILES

An online redo log file might become corrupt while the database is open, ultimately stopping database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE command can be used to reinitialize the online redo log file without shutting down the database.

The command can overcome two situations where dropping the online redo log file is not possible:
1. There are only two log groups.
2. The corrupt online redo log file belongs to the current group.
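A sketch of such a clear operation (the group number is hypothetical; the UNARCHIVED keyword skips archiving of the cleared redo):

```sql
-- reinitialize a corrupt log group that cannot be dropped;
-- take a full backup afterwards, since the cleared redo is lost
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```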

Use the UNARCHIVED keyword in the command to clear the corrupted online redo log file without archiving it first.

ONLINE REDO LOG FILE CONFIGURATION

To determine the appropriate number of online redo log files for a database, you must test different configurations.

A multiplexed configuration in which groups contain different numbers of members is called an asymmetric configuration. This is usually only a temporary state, for example the result of a disk failure.

Oracle recommends building a symmetric configuration, with the members of each group placed on different disks. That way, if one member is unavailable, another still is, and the instance does not shut down. Separate archived log files and online redo log files onto different disks to reduce contention between ARCn and LGWR.

Data files and online redo log files should also be placed on different disks to reduce contention between LGWR and DBWn.

SIZE OF ONLINE REDO LOG FILES
The minimum size of an online redo log file is 50KB; the maximum size is operating-system specific.
SQL> select * from V$LOG;
SQL> select * from V$LOGFILE;
In the V$LOG view, the STATUS column can have the following values:

1. UNUSED:
Indicates that the online redo log file group has never been written to

2. CURRENT:
Indicates the current online redo log file group. This implies that the online redo log file group is active.

3. ACTIVE:
Indicates that the online redo log file group is active but is not the current online redo log file group. It is needed for crash recovery.

4. INACTIVE:
Indicates that the online redo log file group is no longer needed for instance recovery.

In the V$LOGFILE view, the STATUS column can have the following values:

1. INVALID:
Indicates that the file is inaccessible

2. STALE:
Indicates that the contents of the file are incomplete

3. DELETED:

Indicates that the file is no longer used

4. BLANK
Indicates that the file is in use.

Tuesday, November 15, 2011


Configuring Flashback Database through SQL

In order to configure the Flashback Database feature, we need to step through a series of operations, as follows:

1. Check that our database is in archivelog mode by either querying the V$DATABASE view or simply issuing the following command:
SQL> ARCHIVE LOG LIST
The output reveals whether the database is running in archivelog mode. If it isn't, we can turn archive logging on with the ALTER DATABASE statement shown in the following code, after first shutting down the database and starting it up in mount mode:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

2. Set up a flash recovery area, as we learn in the FLASH RECOVERY AREA CHAPTER

3. Set the DB_FLASHBACK_RETENTION_TARGET initialization parameter to specify how far back you can flashback your database. The following code sets the Flashback target to 1 day (1,440 minutes): SQL> SHOW PARAMETER RETENTION SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;

4. Shut down and restart the database in the mount exclusive mode. If we are using a single instance, a simple MOUNT command can be used:

SQL> SHUTDOWN IMMEDIATE; SQL> STARTUP MOUNT;

5. Enable the Flashback Database feature: SQL> ALTER DATABASE FLASHBACK ON;

SQL> ALTER DATABASE OPEN;

6. Use the ALTER DATABASE OPEN command to open the database and then confirm that the Flashback Database feature is enabled, by querying the V$DATABASE view: SQL> SELECT FLASHBACK_ON FROM V$DATABASE;

Oracle Flashback Concept


FLASHBACK LEVELS
In Oracle Database 10g, you have access to flashback techniques at the row, table, and database levels, as follows:

Row level
We can use Flashback techniques to undo erroneous changes to individual rows. There are three types of row-level Flashback techniques, and all of them rely on undo data stored in the undo tablespace:
Flashback Query: Allows us to view old row data based on a point in time or an SCN. We can view the older data and, if necessary, retrieve it and undo erroneous changes.
Flashback Versions Query: Allows us to view all versions of the same row over a period of time so that we can undo logical errors. It can also provide an audit history of changes, effectively allowing us to compare present data against historical data without performing any DML activity.
Flashback Transaction Query: Lets us view changes made at the transaction level. This technique helps in the analysis and auditing of transactions, such as when a batch job runs twice and you want to determine which objects were affected. Using this technique, we can undo changes made by an entire transaction during a specified period.

Table level
There are two main Flashback features available at the table level:

Flashback Table: Restores a table to a point in time or to a specified SCN without restoring data files. This feature uses DML changes to undo the changes in a table. The Flashback Table feature relies on undo data.
Flashback Drop: Allows us to reverse the effects of a DROP TABLE statement without resorting to a point-in-time recovery. The Flashback Drop feature uses the Recycle Bin to restore a dropped table.

Database level
Flashback Database: The Flashback Database feature allows us to restore an entire database to a point in time, thus undoing all changes since that time. For example, we can restore a dropped schema or an erroneously truncated table. Flashback Database mainly uses flashback logs to retrieve older versions of the data blocks; it also relies, to a much smaller extent, on archived redo logs to completely recover a database without restoring data files and performing traditional media recovery.

As we can see, Oracle's Flashback technology employs a variety of techniques: the row-level Flashback techniques and Flashback Table use undo data, Flashback Drop uses the new concept of the Recycle Bin, and Flashback Database relies on the new concept of flashback log data, each undoing errors at its respective level. We will focus on these techniques in this chapter.

Flashback vs. Traditional Recovery Techniques
Unlike traditional recovery techniques, the primary use of Flashback techniques isn't to recover from a media loss, but to recover from human errors. For example, we may accidentally change the wrong set of data or drop a table. Or we may just want to query historical data and perform change analysis. In some extreme cases, we may want to revert the entire database to a previous point in time.

Note: If we have a damaged disk drive, or if there is physical corruption (not logical corruption due to application or user errors) in our database, we must still use the traditional methods of restoring backups and using archived redo logs to perform the recovery.
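To make the row-level and table-level techniques concrete, here is a sketch against the standard scott demo schema (the emp_copy table and the 15-minute window are hypothetical):

```sql
-- Flashback Query: view rows as they were 15 minutes ago
select ename, sal
from scott.emp
as of timestamp systimestamp - interval '15' minute;

-- Flashback Drop: restore a dropped table from the Recycle Bin
drop table scott.emp_copy;
flashback table scott.emp_copy to before drop;
```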
Traditionally, the only way to recover from human error was to employ traditional backup and restore techniques. The process of restoring the database files and then rolling forward through all the redo logs could often involve significant downtime. Flashback technology offers us a much more efficient and much faster way to recover from logical errors, in most cases while the database is still online and available to users. Furthermore, Flashback techniques allow us to selectively restore certain objects; with traditional techniques, we have no choice but to recover the entire database.

FLASHBACK DATABASE
Before Oracle Database 10g, if we suffered logical database corruption, we would undertake traditional point-in-time recovery techniques, restoring data file backup copies and then using archived redo logs to advance the database forward. This was often time-consuming and cumbersome. No matter how limited the extent of the corruption, we would need to restore entire data files and apply the archived redo logs.

Note: Oracle can check data block integrity by computing checksums before writing data blocks to disk. When a block is subsequently read again, the checksum for the data block is computed again, and if the two checksums differ, there is likely corruption in the data block. By setting the DB_BLOCK_CHECKSUM initialization parameter to FULL, we can make the database perform the check in the database buffer cache itself, thus eliminating the possibility of corruption at the physical disk level. The DB_BLOCK_CHECKSUM parameter is FALSE by default.

In Oracle Database 10g, the Flashback Database feature restores data files to a past state without requiring backup data files, using just a fraction of the archived redo log information. A Flashback Database operation simply reverts all data files of the database to a specified previous point in time. With Flashback Database, the time it takes to recover is directly proportional to the number of changes that need to be undone. Thus, it is the size of the error, not the size of the database, that determines the recovery time. This means we can recover from logical errors in a fraction of the time (perhaps as little as a hundredth, depending on the size of the database) that it would take using traditional methods.

Note: Flashing back a database is possible only when there is no media failure. If you lose a data file or it becomes corrupted, you'll have to recover using a restored data file from backups.

We can use Flashback Database in the following situations:
To retrieve a dropped schema
When a user error affects the entire database
When we truncate a table in error
When a batch job performs only partial changes

The Flashback Database feature uses flashback database logs, which are stored in the new flash recovery area, to undo changes to a point in time just before a specified target time or SCN. Since the specified target time and the actual recovery time may differ slightly, we then use archived redo logs to recover the database over the short period of time between the target time and the actual recovery time.

Once the Flashback Database feature is enabled, we simply use the FLASHBACK DATABASE command to return the database to its state at a previous time, SCN, or log sequence. We can issue the FLASHBACK DATABASE command from either RMAN or SQL*Plus. The only difference is that RMAN will automatically retrieve the necessary archived redo logs, whereas in SQL*Plus we may have to supply the archived redo logs, unless we use the SET AUTORECOVERY ON feature in SQL*Plus. We'll take a look at the whole Flashback Database process in more detail shortly, but first let's look at how to enable (and disable) the Flashback Database feature.
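From SQL*Plus, a flashback operation might look like this sketch (the target timestamp is hypothetical):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP
  TO_TIMESTAMP('2011-11-15 09:00:00', 'YYYY-MM-DD HH24:MI:SS');
-- open with RESETLOGS, since the database has been moved back in time
ALTER DATABASE OPEN RESETLOGS;
```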

Tip: Since we need the current data files in order to apply changes to them, we can't use the Flashback Database feature in cases where a data file has been damaged or lost.

Oracle Shutdown
SHUTDOWN THE DATABASE: There are 4 ways to shutdown database

1- SHUTDOWN NORMAL:
No new connections can be made
The Oracle server waits for all users to disconnect
Database buffers and redo buffers are written to disk
Background processes are terminated and the SGA is removed from memory
Oracle closes and dismounts the database before shutdown
The next startup does not require recovery

2- SHUTDOWN TRANSACTIONAL:
No new connections can be made
Users are automatically disconnected after completing the transaction in progress
When all transactions have finished, shutdown occurs immediately
The next startup does not require recovery

3- SHUTDOWN IMMEDIATE:
Current SQL statements being processed are not completed
The Oracle server does not wait for the users who are currently connected to the database
Oracle closes and dismounts the database before shutdown
The next startup does not require recovery

4- SHUTDOWN ABORT:
Oracle does not wait for users currently connected to the database
Database buffers and redo buffers are not written to disk
The instance is terminated without closing the files
The database is not closed or dismounted
The next startup requires instance recovery

Oracle 10g startup


STARTING UP THE DATABASE: A database passes through four stages between being shut down and open: SHUTDOWN, NOMOUNT, MOUNT, OPEN.

At the NOMOUNT Stage: An instance is typically started in NOMOUNT mode only during database creation, during re-creation of the control files, or during certain backup and recovery scenarios.

At this stage the following tasks are performed:
Reading the initialization parameter file: first spfileSID.ora; if not found, then spfile.ora; if not found, then initSID.ora. Specifying the PFILE parameter with STARTUP overrides the default behavior.
Allocating the SGA
Starting the background processes
Opening the alertSID.log file and trace files

At the MOUNT Stage: Mounting a database includes the following tasks:
Associating the database with the instance started at the NOMOUNT stage
Locating and opening the control file specified in the parameter file
Reading the control file to obtain the names, status, and destinations of the data files and online redo log files

The database is mounted to perform special maintenance operations such as:
Renaming data files (data files for an offline tablespace can be renamed when the database is open)
Enabling and disabling online redo log file archiving and flashback options
Performing full database recovery

We usually need to start up a database in mount mode when doing activities such as performing a full database recovery, changing the archive logging mode of the database, or renaming data files. Note that all three of these operations require Oracle to access the data files but cannot accommodate any user operations on the files.

At the OPEN Stage: The last stage of the startup process is opening the database. When the database is started in open mode, all valid users can connect to the database and perform database operations. Prior to this stage, general users can't connect to the database at all. You can bring the database into the open mode by issuing the ALTER DATABASE command as follows:

SQL> ALTER DATABASE OPEN;
Database altered.

More commonly, we simply use the STARTUP command to mount and open our database all at once:
SQL> STARTUP

To open the database, the Oracle server first opens all the data files and the online redo log files and verifies that the database is consistent. If the database isn't consistent (for example, if the SCNs in the control files don't match some of the SCNs in the data file headers), the background processes will automatically perform an instance recovery before opening the database. If media recovery rather than instance recovery is needed, Oracle will signal that a database recovery is called for and won't open the database until you perform the recovery.

IN SHORT, opening a database includes the following tasks:
Opening the online data files
Opening the online redo log files

NOTE: If any of the data files or online redo log files are not present when you attempt to open the database, the Oracle server returns an error.

The STARTUP command is used to start the database:
STARTUP NOMOUNT;
To move the database from NOMOUNT to MOUNT, or from MOUNT to OPEN, use the ALTER DATABASE command:
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

Oracle Parameter related to Database Buffer Cache


DB_CACHE_SIZE
This parameter sets the size of the default buffer pool for buffers with the primary block size (the block size defined by DB_BLOCK_SIZE). For example, we can use a value like 1,024MB.
Default value: If we're using the SGA_TARGET parameter, the default is 0. If we aren't using SGA_TARGET, it's the greater of 48MB or 4MB * number of CPUs * granule size.
Parameter type: Dynamic. It can be modified with the ALTER SYSTEM command.

DB_KEEP_CACHE_SIZE

The normal behavior of the buffer pool is to treat all the objects placed in it equally. That is, any object will remain in the pool as long as free memory is available in the buffer cache. Objects are removed (aged out) only when there is no free space. When this happens, the oldest unused objects sitting in memory are removed to make space for new objects. The use of two specialized buffer pools, the keep pool and the recycle pool, allows us to specify at object-creation time how we want the buffer pool to treat certain objects. For example, if we know that certain objects don't really need to be in memory for a long time, we can assign them to the recycle pool, which removes the objects right after they're used. In contrast, the keep pool always retains an object in memory if it's created with the KEEP option.
The DB_KEEP_CACHE_SIZE parameter specifies the size of the keep pool, and it's set as follows:
DB_KEEP_CACHE_SIZE = 500MB
Default value: 0; by default, this parameter is not configured.
Parameter type: Dynamic. It can be changed by using the ALTER SYSTEM command.

DB_RECYCLE_CACHE_SIZE
The DB_RECYCLE_CACHE_SIZE parameter specifies the size of the recycle pool in the buffer cache. Oracle removes objects from this pool as soon as the objects are used. The parameter is set as follows:
DB_RECYCLE_CACHE_SIZE = 200MB
Default value: 0; by default, this parameter is not configured.
Parameter type: Dynamic. It can be changed by using the ALTER SYSTEM command.

DB_nK_CACHE_SIZE
If we prefer to use nonstandard-sized buffer caches, we need to specify the DB_nK_CACHE_SIZE parameter for each, as in the following two examples:
DB_4K_CACHE_SIZE=2048MB
DB_8K_CACHE_SIZE=4096MB
The values of n that can be used in this parameter are 2, 4, 8, 16, or 32.
Default value: 0
Parameter type: Dynamic. You can change this parameter's value with the ALTER SYSTEM command.
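An object is assigned to the keep or recycle pool with the STORAGE clause at creation time; the table names in this sketch are hypothetical:

```sql
-- keep this small lookup table's blocks in the keep pool
create table lookup_codes (code number, descr varchar2(40))
storage (buffer_pool keep);

-- age this staging table's blocks out right after use
create table staging_rows (id number, payload varchar2(200))
storage (buffer_pool recycle);
```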

Oracle SGA_TARGET Parameter


SGA_TARGET
The SGA_TARGET parameter determines whether our database will use automatic shared memory management. In order to use automatic shared memory management, set the SGA_TARGET parameter to a positive value. We don't have to specify the five automatic shared memory components in our initialization file (shared pool, database buffer cache, Java pool, large pool, and streams pool). Oracle will show zero values for these when we query the V$PARAMETER view, which shows the values of all our initialization parameters. We may, however, choose minimum values for any of the five auto-memory parameters, in which case we should specify those values in the initialization file. If we set the SGA_TARGET parameter to zero, we disable automatic shared memory management, and we have to manually set the values for all the previously mentioned SGA components. When we use automatic shared memory management by setting a value for the SGA_TARGET parameter, the memory we allocate for the manually sized components (the log buffer, the buffer cache keep pool, the buffer cache recycle pool, the nondefault nK-sized buffer caches, and the fixed SGA allocations) is deducted from the SGA_TARGET value first. To get a quick idea of how much memory to allocate for the SGA_TARGET parameter under automatic shared memory management, run the following query:

SQL> SELECT (
       (SELECT SUM(value) FROM V$SGA) -
       (SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY)
     ) "SGA_TARGET"
     FROM DUAL;

The value for SGA_TARGET must be a minimum of 64MB, and the upper limit is dependent on the operating system.

Default value: 0 (no automatic shared memory management) Parameter type: Dynamic. You can use the ALTER SYSTEM command to modify the value.

Note: If we set the SGA_TARGET parameter, we can leave out all the automatic SGA parameters (DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, and STREAMS_POOL_SIZE) from our init.ora file, unless we want to ensure a minimum size for some of these, in which case we can explicitly specify them in the file.
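A minimal init.ora sketch of this setup, with illustrative values (under automatic shared memory management, the SHARED_POOL_SIZE line acts as a floor rather than a fixed size):

```ini
# Automatic shared memory management: one target for the five auto components
sga_target       = 2048M
sga_max_size     = 2048M
# Optional floor: Oracle may grow the shared pool beyond this, never below it
shared_pool_size = 256M
# Manually sized components are carved out of SGA_TARGET first
db_keep_cache_size = 128M
```

With this file, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, and STREAMS_POOL_SIZE can be omitted entirely.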

Oracle Compatible Parameter


COMPATIBLE

The COMPATIBLE parameter allows us to use the latest Oracle database release, while making sure that the database is compatible with an earlier release.

Suppose we upgrade to the Oracle Database 10g Release 2 version, but our application developers haven't made any changes to their Oracle9i application. In this case, we could set the COMPATIBLE parameter to 9.2 so the untested features of the new Oracle version won't hurt our application. Later on, after the application has been suitably upgraded, we can reset the COMPATIBLE initialization parameter to 10.2.0, which is the default value for Oracle Database 10g Release 2 (10.2). If, instead, we immediately raise the compatible value to 10.2, we can use all the new 10.2 features, but we won't be able to downgrade our database to 9.2 or any other lower version. We must understand this irreversible compatibility clearly before we set the value for this parameter.
Default value: 10.2.0
Parameter type: Static
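In SQL*Plus this looks roughly as follows. COMPATIBLE is static, so the change must go to the spfile and takes effect only after a restart; remember that raising it is irreversible:

```sql
-- Check the current compatibility setting
SQL> SHOW PARAMETER compatible

-- Static parameter: SCOPE=SPFILE is required, change applies at next startup
SQL> ALTER SYSTEM SET COMPATIBLE = '10.2.0' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```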

DB_NAME and DB_UNIQUE_NAME


DB_NAME
The DB_NAME parameter sets the name of the database. This is a mandatory parameter, and the value is the same as the database name you used to create the database. The DB_NAME value should be the same as the value of the ORACLE_SID environment variable. This parameter can't be changed after the database is created. You can have a DB_NAME value of up to eight characters.
Default value: none (there is no default database name)
Parameter type: Static
DB_UNIQUE_NAME
The DB_UNIQUE_NAME parameter lets us specify a globally unique name for the database.
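A quick way to see both names on a running instance (a sketch; DB_UNIQUE_NAME matters mainly in Data Guard configurations, where primary and standby share the same DB_NAME but each has its own DB_UNIQUE_NAME):

```sql
-- Both values are exposed in V$DATABASE (10g and later)
SELECT name, db_unique_name FROM v$database;

SQL> SHOW PARAMETER db_unique_name
```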

Types of initialization parameter file


Oracle uses a parameter file to store the initialization parameters and their settings for an instance. We can use either of the following two types of parameter files:

1. Server parameter file (SPFILE): Dynamic, persistent server parameter file, commonly referred to as spfileSID.ora. A binary file that contains the initialization parameters.
2. Initialization parameter file (PFILE): Static parameter file, commonly referred to as initSID.ora. A text file that contains a list of all initialization parameters.

Contents of a parameter file:
- A list of instance parameters
- The name of the database the instance is associated with
- Allocation for memory structures of the SGA
- What to do with filled online redo log files (archive destination)
- The names and locations of control files
- Information about the undo tablespace

Text initialization parameter file (PFILE): initsid.ora, where sid is the database name
- Text file
- Modified with an operating system editor; modifications are made manually
- Whenever we change a value in the pfile, the database must be restarted for the change to take effect; the pfile is read only during instance startup
- For Oracle 10g (R2) the default location is E:\oracle\product\10.2.0\Db_1\database\initorcl.ora

Server parameter file (SPFILE): spfilesid.ora, where sid is the database name
- Binary file
- Maintained by the Oracle server
- Always resides on the server side, with read and write access by the database server
- Ability to make changes persistent across SHUTDOWN and STARTUP
- Can self-tune parameter values
- RMAN can back up the initialization parameters
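The two file types can be created from each other, and with an SPFILE parameter changes are made through ALTER SYSTEM with a SCOPE clause; a sketch:

```sql
-- Create one file type from the other (default paths under $ORACLE_HOME/dbs)
SQL> CREATE SPFILE FROM PFILE;
SQL> CREATE PFILE FROM SPFILE;

-- With an SPFILE, changes are made via ALTER SYSTEM:
--   SCOPE=MEMORY  change now, lost at restart
--   SCOPE=SPFILE  change at next restart only (required for static parameters)
--   SCOPE=BOTH    change now and persist across restarts
SQL> ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH;

-- An empty VALUE here means the instance was started with a pfile
SQL> SHOW PARAMETER spfile
```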

Verify Database 32/64 bit creation


To check whether a database was originally created in a 32-bit environment and is now running on a 64-bit platform:

select decode(instr(metadata, 'B023'), 0, '64bit Database', '32bit Database') "DB Creation"
from sys.kopm$;
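As a complementary check, the software word size usually appears in the V$VERSION banner (a hedged sketch; the exact banner text varies by platform and release):

```sql
-- On a 64-bit installation the first banner line typically contains "64bit"
SELECT banner FROM v$version WHERE banner LIKE 'Oracle%';
```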

Saturday, September 8, 2012


DB Buffer Usage Query

-- DB Buffer Usage
select decode(state, 0, 'Free',
              1, decode(lrba_seq, 0, 'Available', 'Being Modified'),
              2, 'Not Modified',
              3, 'Being Read',
              'Other') block_status,
       count(*) "count"
from   sys.x$bh
group by decode(state, 0, 'Free',
              1, decode(lrba_seq, 0, 'Available', 'Being Modified'),
              2, 'Not Modified',
              3, 'Being Read',
              'Other');

CPU USAGE BY current Users


--- CPU USAGE BY USER
SELECT s.sid, s.serial#,
       nvl(s.username, '[ORACLE PROCESS]') user_name,
       s.osuser os_user,
       k.ksusestv cpu_usage,
       s.program, s.client_info, s.module,
       s.machine, s.action, s.logon_time
FROM   v$session s, sys.x$ksusesta k
WHERE  k.indx = s.sid
AND    k.ksusestn = 12
AND    s.type != 'BACKGROUND'
ORDER BY k.ksusestv DESC;

Find Powerful Privileges Given to Users
To check which powerful privileges are granted to users, run the following query:
select grantee, privilege, admin_option
from   sys.dba_sys_privs
where  (privilege like '% ANY %'
    or  privilege in ('BECOME USER', 'UNLIMITED TABLESPACE')
    or  admin_option = 'YES')
and    grantee not in ('SYS', 'SYSTEM', 'OUTLN', 'AQ_ADMINISTRATOR_ROLE', 'DBA',
                       'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE', 'OEM_MONITOR',
                       'CTXSYS', 'DBSNMP', 'IFSSYS', 'MDSYS', 'ORDPLUGINS', 'ORDSYS');

Oracle User info using Process ID

set serveroutput on size 50000
set echo off feed off veri off
accept 1 prompt 'Enter Unix process id: '

DECLARE
    v_sid number;
    s sys.v_$session%ROWTYPE;
    p sys.v_$process%ROWTYPE;
BEGIN
    begin
        select sid into v_sid
        from   sys.v_$process p, sys.v_$session s
        where  p.addr = s.paddr
        and    (p.spid = &&1
        or      s.process = '&&1');
    exception
        when no_data_found then
            dbms_output.put_line('Unable to find process id &&1!!!');
            return;
        when others then
            dbms_output.put_line(sqlerrm);
            return;
    end;

    select * into s from sys.v_$session where sid = v_sid;
    select * into p from sys.v_$process where addr = s.paddr;

    dbms_output.put_line('==================================================');
    dbms_output.put_line('SID/Serial  : '|| s.sid||','||s.serial#);
    dbms_output.put_line('Foreground  : '|| 'PID: '||s.process||' - '||s.program);
    dbms_output.put_line('Shadow      : '|| 'PID: '||p.spid||' - '||p.program);
    dbms_output.put_line('Terminal    : '|| s.terminal || '/ ' || p.terminal);
    dbms_output.put_line('OS User     : '|| s.osuser||' on '||s.machine);
    dbms_output.put_line('Ora User    : '|| s.username);
    dbms_output.put_line('Status Flags: '|| s.status||' '||s.server||' '||s.type);
    dbms_output.put_line('Tran Active : '|| nvl(s.taddr, 'NONE'));
    dbms_output.put_line('Login Time  : '|| to_char(s.logon_time, 'Dy HH24:MI:SS'));
    dbms_output.put_line('Last Call   : '|| to_char(sysdate-(s.last_call_et/60/60/24), 'Dy HH24:MI:SS') ||
                         ' - ' || to_char(s.last_call_et/60, '990.0') || ' min');
    dbms_output.put_line('Lock/ Latch : '|| nvl(s.lockwait, 'NONE')||'/ '||nvl(p.latchwait, 'NONE'));
    dbms_output.put_line('Latch Spin  : '|| nvl(p.latchspin, 'NONE'));

    dbms_output.put_line('Current SQL statement:');
    for c1 in ( select * from sys.v_$sqltext
                where hash_value = s.sql_hash_value
                order by piece) loop
        dbms_output.put_line(chr(9)||c1.sql_text);
    end loop;

    dbms_output.put_line('Previous SQL statement:');
    for c1 in ( select * from sys.v_$sqltext
                where hash_value = s.prev_hash_value
                order by piece) loop
        dbms_output.put_line(chr(9)||c1.sql_text);
    end loop;

    dbms_output.put_line('Session Waits:');
    for c1 in ( select * from sys.v_$session_wait where sid = s.sid) loop
        dbms_output.put_line(chr(9)||c1.state||': '||c1.event);
    end loop;

    -- dbms_output.put_line('Connect Info:');
    -- for c1 in ( select * from sys.v_$session_connect_info where sid = s.sid) loop
    --     dbms_output.put_line(chr(9)||': '||c1.network_service_banner);
    -- end loop;

    dbms_output.put_line('Locks:');
    for c1 in ( select /*+ ordered */
                       decode(l.type,
                           -- Long locks
                           'TM', 'DML/DATA ENQ',   'TX', 'TRANSAC ENQ',
                           'UL', 'PLS USR LOCK',
                           -- Short locks
                           'BL', 'BUF HASH TBL',   'CF', 'CONTROL FILE',
                           'CI', 'CROSS INST F',   'DF', 'DATA FILE',
                           'CU', 'CURSOR BIND',
                           'DL', 'DIRECT LOAD',    'DM', 'MOUNT/STRTUP',
                           'DR', 'RECO LOCK',      'DX', 'DISTRIB TRAN',
                           'FS', 'FILE SET',       'IN', 'INSTANCE NUM',
                           'FI', 'SGA OPN FILE',   'IR', 'INSTCE RECVR',
                           'IS', 'GET STATE',      'IV', 'LIBCACHE INV',
                           'KK', 'LOG SW KICK',    'LS', 'LOG SWITCH',
                           'MM', 'MOUNT DEF',      'MR', 'MEDIA RECVRY',
                           'PF', 'PWFILE ENQ',     'PR', 'PROCESS STRT',
                           'RT', 'REDO THREAD',    'SC', 'SCN ENQ',
                           'RW', 'ROW WAIT',       'SM', 'SMON LOCK',
                           'SN', 'SEQNO INSTCE',   'SQ', 'SEQNO ENQ',
                           'ST', 'SPACE TRANSC',   'SV', 'SEQNO VALUE',
                           'TA', 'GENERIC ENQ',    'TD', 'DLL ENQ',
                           'TE', 'EXTEND SEG',     'TS', 'TEMP SEGMENT',
                           'TT', 'TEMP TABLE',     'UN', 'USER NAME',
                           'WL', 'WRITE REDO LOG',
                           'TYPE='||l.type) type,
                       decode(l.lmode, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
                           4, 'S', 5, 'RSX', 6, 'X', to_char(l.lmode)) lmode,
                       decode(l.request, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
                           4, 'S', 5, 'RSX', 6, 'X', to_char(l.request)) lrequest,
                       decode(l.type,
                           'MR', o.name,
                           'TD', o.name,
                           'TM', o.name,
                           'RW', 'FILE#='||substr(l.id1,1,3)||
                                 ' BLOCK#='||substr(l.id1,4,5)||' ROW='||l.id2,
                           'TX', 'RS+SLOT#'||l.id1||' WRP#'||l.id2,
                           'WL', 'REDO LOG FILE#='||l.id1,
                           'RT', 'THREAD='||l.id1,
                           'TS', decode(l.id2, 0, 'ENQUEUE', 'NEW BLOCK ALLOCATION'),
                           'ID1='||l.id1||' ID2='||l.id2) objname
                from sys.v_$lock l, sys.obj$ o
                where sid = s.sid
                and   l.id1 = o.obj#(+) ) loop
        dbms_output.put_line(chr(9)||c1.type||' H: '||c1.lmode||
                             ' R: '||c1.lrequest||' - '||c1.objname);
    end loop;
    dbms_output.put_line('=======================================================');
END;
/
undef 1

Oracle 10g Parameters related to Audit


AUDIT_TRAIL
The AUDIT_TRAIL parameter turns auditing on or off for the database. If we don't want auditing to be turned on, do nothing, since the default value for this parameter is none, or false, which disables database auditing. If we want auditing turned on, we can set the AUDIT_TRAIL parameter to any of the following values:
os: Oracle writes the audit records to an operating system audit trail, which is an operating system file, including audit records from the OS, audit records for the SYS user, and those database actions that are always automatically audited.
db: Oracle records the same type of auditing as with the os setting, but it directs all audit records to the database audit trail, which is the AUD$ table owned by SYS.
none: This value disables auditing.
db,extended: This is similar to the db setting, but it also populates extended audit information such as the SQLBIND and SQLTEXT columns of the SYS.AUD$ table.
In addition, there are two XML-related AUDIT_TRAIL values (new in Oracle Database 10.2):
XML: This value enables database auditing and writes audit details to OS files in XML format.
XML,EXTENDED: This value prints all database audit records plus the SQLTEXT and SQLBIND values to OS files in XML format.
The parameter is set as follows:
AUDIT_TRAIL = db
Default value: none
Parameter type: Static
Chapter 11 provides more information about auditing actions within an Oracle database.
Tip: Even if we don't set the AUDIT_TRAIL parameter to any value, Oracle will still write audit information to an operating system file for all database actions that are audited by default. On a UNIX system, the default location for this file is the $ORACLE_HOME/rdbms/audit directory. Of course, we can specify a different directory if we wish. See Chapter 11 for more details on this feature.
AUDIT_FILE_DEST
The AUDIT_FILE_DEST parameter specifies the directory in which the database writes its audit records when we choose the operating system as the destination by setting AUDIT_TRAIL=os. We can also specify this parameter if we choose the XML or XML,EXTENDED options for AUDIT_TRAIL, since the audit records are written to operating system files in both cases.
Default value: $ORACLE_HOME/rdbms/audit
Parameter type: Dynamic. You can modify this parameter with the ALTER SYSTEM . . . DEFERRED command.
AUDIT_SYS_OPERATIONS

This parameter, if set to a value of true, will audit all actions of the SYS user and any other user with a SYSDBA or SYSOPER role and will write the details to the operating system audit trail specified by the AUDIT_TRAIL parameter. By writing the audit information to a secure operating system location, we remove any possibility of the SYS user tampering with an audit trail that is located within the database. The possible values are true and false.
Default value: false
Parameter type: Static
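A short sketch of enabling auditing with these parameters (both AUDIT_TRAIL and AUDIT_SYS_OPERATIONS are static, so the instance must be restarted before they take effect):

```sql
SQL> ALTER SYSTEM SET audit_trail = db SCOPE=SPFILE;
SQL> ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

-- Audit an action and review the database audit trail:
SQL> AUDIT SESSION;
SQL> SELECT username, action_name, returncode, timestamp
     FROM   dba_audit_trail
     WHERE  rownum <= 10;
```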

Auditing
If you enable auditing, the auditing information (stored in table SYS.AUD$) will start growing, and because AUD$ resides in the SYSTEM tablespace you might have performance problems in the future. Auditing housekeeping must be set up. Here is a practical guide, from a 5TByte E-Business Suite database that produces about 50MB of audit data every day.
1. Create a SYSTEM.AUD$_BU table stored in a different tablespace (AUDIT_DATA), where you will move all the auditing produced:

CREATE TABLESPACE AUDIT_DATA
  DATAFILE '/filesystem034/oradata/SID/audit_data_001.dbf' SIZE 10000M
  AUTOEXTEND OFF
  LOGGING
  PERMANENT
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  BLOCKSIZE 8K
  SEGMENT SPACE MANAGEMENT MANUAL;

CREATE TABLE SYSTEM.AUD$_BU
  TABLESPACE AUDIT_DATA
  AS SELECT * FROM SYS.AUD$ WHERE 1=2;

2. Create a procedure (Keep_Size_Aud_Log) that moves the rows from SYS.AUD$ to SYSTEM.AUD$_BU:

CREATE OR REPLACE PROCEDURE Keep_Size_Aud_Log IS
  rowCount NUMBER;
BEGIN
  SELECT COUNT(*) INTO rowCount FROM sys.aud$;
  IF rowCount > 0 THEN
    COMMIT;
    INSERT /*+ APPEND */ INTO SYSTEM.aud$_bu (SELECT * FROM sys.aud$);
    COMMIT;
    EXECUTE IMMEDIATE 'truncate table sys.aud$';
    sys.Dbms_System.ksdwrt(3, 'ORA-AUDIT TRAIL: rows moved from SYS.AUD$ to SYSTEM.AUD$_BU');
  END IF;
END Keep_Size_Aud_Log;
/

3. Execute the procedure every day at midnight with a job (note the COMMIT, which DBMS_JOB needs before the job becomes active):

DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT (
    job       => X,
    what      => 'SYS.KEEP_SIZE_AUD_LOG;',
    next_date => TO_DATE('27/02/2008 12:35:21', 'dd/mm/yyyy hh24:mi:ss'),
    interval  => 'TRUNC(SYSDATE+1)',
    no_parse  => FALSE );
  COMMIT;
END;
/

Tip: To speed up searching on SYSTEM.AUD$_BU you can create two indexes (one on TIMESTAMP# and the other on USERID):

CREATE INDEX SYSTEM.AUD$_BU_TIME_IDX ON SYSTEM.AUD$_BU (TIMESTAMP#)
  NOLOGGING TABLESPACE AUDIT_DATA;
CREATE INDEX SYSTEM.AUD$_BU_USERID_IDX ON SYSTEM.AUD$_BU (USERID)
  NOLOGGING TABLESPACE AUDIT_DATA;

Unit II: Preparing for OCP DBA Exam 2: Database Administration
Basics of the Oracle Database Architecture

In this chapter, you will understand and demonstrate knowledge in the following areas:
- The Oracle architecture
- Starting and stopping the Oracle instance
- Creating an Oracle database

A major portion of your understanding of Oracle, both to be a successful Oracle DBA and to be a successful taker of the OCP Exam 2 for Oracle database administration, is understanding the Oracle database architecture. About 22 percent of OCP Exam 2 is on material in these areas. Oracle in action consists of many different items, from memory structures, to special processes that make things run faster, to recovery mechanisms that allow the DBA to restore systems after seemingly unrecoverable problems. Whatever the Oracle feature, it's all here. You should review this chapter carefully, as the concepts presented here will serve as the foundation for material covered in the rest of the book, certification series, and your day-to-day responsibilities as an Oracle DBA.
The Oracle Architecture
In this section, you will cover the following topics related to the Oracle architecture:
- Oracle memory structures
- Oracle background processes
- Oracle disk utilization structures

The Oracle database server consists of many different components. Some of these components are memory structures, while others are processes that execute certain tasks behind the scenes. There are also disk resources that store the data that applications use to track data for an entire organization, and special resources designed to allow for recovering data from problems ranging from incorrect entry to disk failure. All three structures of the Oracle database server running together to allow users to read and modify data are referred to as an Oracle instance. Figure 6-1 demonstrates the various disk, memory, and process components of the Oracle instance. All of these features working together allow Oracle to handle data management for applications ranging from small "data marts" with fewer than five users to enterprise-wide client/server applications designed for online transaction processing for 50,000+ users in a global environment. Oracle Memory Structures Focus first on the memory components of the Oracle instance. This set of memory components represents a "living" version of Oracle that is available only when the instance is running. There are two basic memory structures on the Oracle instance. The first and most important is called the System Global Area, which is commonly referred to as the SGA. The other memory structure in the Oracle instance is called the Program Global Area, or PGA. This discussion will explain the components of the SGA and the PGA, and also cover the factors that determine the storage of information about users connected to the Oracle instance.

The Oracle SGA
The Oracle SGA is the most important memory structure in Oracle. When DBAs talk about most things related to memory, they are talking about the SGA. The SGA stores several different components of memory usage that are designed to execute processes to obtain data for user queries as quickly as possible while also maximizing the number of concurrent users that can access the Oracle instance. The SGA consists of three different items, listed here:
- The buffer cache
- The shared pool
- The redo log buffer
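These SGA components can be inspected from SQL*Plus on a running instance; a quick sketch:

```sql
-- Overall SGA allocation (fixed size, variable size, buffers, redo buffer)
SQL> SHOW SGA

-- Same figures from the dynamic performance views
SELECT name, value FROM v$sga;

-- Break the variable area down by pool (shared pool, large pool, ...)
SELECT pool, SUM(bytes) AS bytes
FROM   v$sgastat
GROUP BY pool;
```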

The buffer cache consists of buffers that are the size of database blocks. They are designed to store data blocks recently used by user SQL statements in order to improve performance for subsequent selects and data changes. The shared pool has two required components and one optional component. The required components of the shared pool are the shared SQL library cache and the data dictionary cache. The optional component of the shared pool includes session information for user processes connecting to the Oracle instance. The final area of the SGA is the redo log buffer, which stores online redo log entries in memory until they can be written to disk. Explore the usage of the shared pool in the Oracle database architecture. The shared SQL library cache is designed to store parse information for SQL statements executing against the database. Parse information includes the set of database operations that the SQL execution mechanism will perform in order to obtain data requested by the user processes. This information is treated as a shared resource in the library cache. If another user process comes along wanting to run the same query that Oracle has already parsed for another user, the database will recognize the opportunity for reuse and let the user process utilize the parse information already available in the shared pool. Of course, the specific data returned by the query for each user will not reside in the shared pool, and thus not be shared, because sharing data between applications represents a data integrity/security issue. The other mandatory component of the shared pool is the data dictionary cache, also referred to by many DBAs as the "row" cache. This memory structure is designed to store the data from the Oracle data dictionary in order to improve response time on data dictionary queries.
Since all user processes and the Oracle database internal processes use the data dictionary, the database as a whole benefits in terms of performance from the presence of cached dictionary data in memory. The redo log buffer allows user processes to write their redo log entries to a memory area in order to speed processing on the tracking of database changes. One fact that is important to remember about redo logs and user processes is that every process that makes a change to the database must write an entry to the redo log in order to allow Oracle to recover the change. When the database is set up to archive redo logs, these database changes are kept in order to rebuild database objects in the event of a disk failure. The availability of a buffer for storing redo information in memory prevents the need for user processes to spend the extra time it takes to write an entry directly to disk. By having all user processes writing those redo log records to memory, the Oracle database avoids contention for disk usage that would invariably cause database performance to slow down. Since every

data change process has to write a redo log entry, it makes sense that processes be able to write that change as quickly as possible in order to boost speed and avoid problems. The final SGA resource is the buffer cache. This area of memory allows for selective performance gains on obtaining and changing data. The buffer cache stores data blocks that contain row data that has been selected or updated recently. When the user wants to select data from a table, Oracle looks in the buffer cache to see if the data block that contains the row has already been loaded. If it has, then the buffer cache has achieved its selective performance improvement by not having to look for the data block on disk. If not, then Oracle must locate the data block that contains the row, load it into memory, and present the selected output to the user. There is one overall performance gain that the buffer cache provides that is important to note: no user process ever interfaces directly with any record on a disk. This fact is true for the redo log buffer as well. After the user's select statement has completed, Oracle keeps the block in memory according to a special algorithm that eliminates buffers according to how long ago they were used. The procedure is the same for a data change, except that after Oracle writes the data to the row, the block that contains the row is then called a dirty buffer, which simply means that some row in the buffer has been changed. Another structure exists in the buffer cache, called the dirty buffer write queue, and it is designed to store those dirty buffers until the changes are written back to disk.
The Oracle PGA
The PGA is an area in memory that stores information that helps user processes execute, such as bind variable values, sort areas, and other aspects of cursor handling. From the prior discussion of the shared pool, the DBA should know that the database already stores parse trees for recently executed SQL statements in a shared area called the library cache.
So, why do the users need their own area to execute? The reason users need their own area in memory is that, even though the parse information for SQL or PL/SQL may already be available, the values that the user wants to execute the search or update upon cannot be shared. The PGA is used to store real values in place of bind variables for executing SQL statements.
Location of User Session Information
The question of where user session information is stored is an important one to consider. Whether user session information is stored in the PGA or the shared pool depends on whether the multithreaded server (MTS) option is used. MTS relates to how Oracle handles user processes connecting to the database. When the DBA uses MTS, all data is read into the database buffer cache by shared server processes acting on behalf of user processes, and session information is stored in the shared pool of the SGA. When MTS is not used, each user process has its own dedicated server process reading data blocks into the buffer cache. In the dedicated server configuration, the PGA stores session information for each user running against Oracle. More information about shared vs. dedicated servers, the MTS architecture, and the purpose of the server process appears in this discussion, Unit IV, and Unit V of this Guide.
Exercises
1. What is the name of the main memory structure in Oracle, and what are its components?
2. What is the function of the PGA?
3. Where is user session information stored in memory on the Oracle instance? How is its location determined?

Oracle Background Processes
A good deal of the discussion around users thus far speaks of processes: user processes doing this or that. In any Oracle instance, there will be user processes accessing information. Likewise, the Oracle instance will be doing some things behind the scenes, using background processes. There are several background processes in the Oracle instance. It was mentioned in the discussion of the SGA that no user process ever interfaces directly with I/O. This setup is allowed because the Oracle instance has its own background processes that handle everything from writing changed data blocks onto disk to securing locks on remote databases for record changes in situations where the Oracle instance is set up to run in a distributed environment. The following list presents each background process and its role in the Oracle instance.

DBWR: The database writer process. This background process handles all data block writes to disk. It works in conjunction with the Oracle database buffer cache memory structure. It prevents users from ever accessing a disk to perform a data change such as update, insert, or delete.

LGWR: The log writer process. This background process handles the writing of redo log entries from the redo log buffer to the online redo log files on disk. This process also writes the log sequence number of the current online redo log to the datafile headers and to the control file. Finally, LGWR handles initiating the process of clearing the dirty buffer write queue. At various times, depending on database configuration, those updated blocks are written to disk by DBWR; these events are called checkpoints. LGWR handles telling DBWR to write the changes.

SMON: The system monitor process. The usage and function of this Oracle background process is twofold. First, in the event of an instance failure, when the memory structures and processes that comprise the Oracle instance cannot continue to run, the SMON process handles recovery from that instance failure. Second, the SMON process handles disk space management issues on the database by taking smaller fragments of space and "coalescing" them, or piecing them together.

PMON: The process monitor process. PMON watches the user processes on the database to make sure that they work correctly. If for any reason a user process fails during its connection to Oracle, PMON will clean up the remnants of its activities and make sure that any changes it may have made to the system are "rolled back," or backed out of the database and reverted to their original form.

RECO (optional): The recoverer process. In Oracle databases using the distributed option, this background process handles the resolution of distributed transactions against the database.

ARCH (optional): The archiver process. In Oracle databases that archive their online redo logs, the ARCH process handles automatically moving a copy of the online redo log to a log archive destination.

CKPT (optional): The checkpoint process. In high-activity databases, CKPT can be used to handle writing log sequence numbers to the datafile headers and control file, alleviating LGWR of that responsibility.

LCK0..LCK9 (optional): The lock processes, of which there can be as many as ten. In databases that use the Parallel Server option, this background process handles acquiring locks on remote tables for data changes.

S000..S999: The server process. Executes data reads from disk on behalf of user processes. Access to server processes can either be shared or dedicated, depending on whether the DBA uses MTS or not. In the MTS architecture, when users connect to the database, they must obtain access to a shared server process via a dispatcher process, described below.

D001..D999 (optional): The dispatcher process. This process acts as part of the Oracle MTS architecture to connect user processes to shared server processes that will handle their SQL processing needs. The user process comes into the database via the SQL*Net listener, which connects the process to a dispatcher. From there, the dispatcher finds the user process a shared server that will handle interacting with the database to obtain data on behalf of the user process.

Exercises
1. Name the background process that handles reading data into the buffer cache.
2. Which process handles writing data changes from the buffer cache back to disk?
3. Which process handles writing redo log information from memory to disk? Which process can be configured to help it?
4. Which background processes act as part of the multithreaded server (MTS) architecture?
5. Which background process coalesces free space on disk?

Oracle Disk Utilization Structures
In addition to memory, Oracle must execute disk management processing in order to create, access, store, and recover an Oracle database. There are structures stored on disk that the DBA must understand how to manage, and this section will identify and discuss each structure in turn. To begin, the DBA must first understand that there are two different "lenses" through which he or she must view the way Oracle looks at data stored on disk. Through one lens, the DBA sees the disk utilization of the Oracle database as consisting of logical data structures. These structures include tablespaces, segments, and extents. Through the other, the DBA sees the physical database files that store these logical database structures. Figure 6-2 demonstrates the concept of a logical and physical view of storage in the Oracle instance.
Figure 6-2: The logical and physical views of a database
A tablespace is a logical database structure that is designed to store other logical database structures. A segment is a logical data object that stores the data of a table, index, or series of rollback entries. An extent is similar to a segment in that the extent stores information corresponding to a table. However, the difference is that an extent handles table growth. When the row data for a table exceeds the space allocated to it by the segment, the table acquires an extent to place the additional data in. The objects a DBA may place into a tablespace are things like

tables, indexes, rollback segments, and any other objects that consist of segments and extents. A logical database object such as a table or index can have multiple extents, but those extents and segments can be stored in only one tablespace. The other lens through which the DBA will view the data stored on disk is the physical lens. Underlying all the logical database objects in Oracle disk storage are the physical methods that Oracle uses to store those objects. The cornerstone of the physical method that Oracle uses to store data in the database is the Oracle data block. Data blocks store row data for segments and extents. In turn, the blocks are taken together to comprise a datafile. Datafiles correspond to the tablespace level of the logical Oracle disk storage architecture in that a tablespace may consist of one or more datafiles. The objects in a tablespace, namely segments and extents corresponding to different tables, can span multiple datafiles, provided that all the datafiles are part of the same tablespace. Just as a table, index, or rollback segment cannot span multiple tablespaces, any datafile on the database can contain data for only one tablespace. Individual data objects, such as tables and indexes, however, can have their segments and extents span multiple datafiles belonging to that one tablespace. In addition to datafiles, the structures that Oracle uses to store information about the database on disk include control files, parameter files, password files, and redo log files. Redo log files are used to store information about the changes users make to the database. These files are generally large enough to store many entries. There are generally two (minimum) redo logs available to the LGWR process of the Oracle instance for the purpose of storing redo log entries. Each redo log can consist of one or more redo log files, also referred to as redo log "members."
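The redo log groups and members described here can be inspected from the dynamic performance views, and a log switch can be forced manually; a sketch:

```sql
-- One row per redo log group; STATUS shows which group LGWR is writing (CURRENT)
SELECT group#, members, status, sequence#, bytes FROM v$log;

-- File name of each member within each group
SELECT group#, member FROM v$logfile;

-- Force LGWR to switch to the next group
ALTER SYSTEM SWITCH LOGFILE;
```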
The operation of online redo logs occurs in this way: LGWR writes entries to the files of one of the two (or more) online redo logs. As the redo log entries are written to the logs, the files that comprise the current online redo log get fuller and fuller until they reach capacity. Once the capacity of all redo log members is reached, the redo log is filled and LGWR switches to writing redo log entries to the other redo log, starting the entire process again. If redo log archiving has been selected, the redo logs are archived as well, either automatically by the ARCH process or manually by the DBA. After all redo logs are filled, LGWR begins reuse of the first redo log file it wrote records to.

The parameter file is another important physical database disk resource available to Oracle. The parameter file is commonly referred to as init.ora. Used upon database startup, this file specifies many initialization parameters that identify the database as a unique database, either on the machine or on the entire network. Some parameters that init.ora sets include the size of various structures of the SGA, the location of the control file, the database name, and others. The database cannot start without init.ora, so it is imperative that the file be present at database startup.

Another database disk resource used by Oracle is the password file. This resource stores passwords for users who have administrative privileges over the Oracle database. The password file is only used on Oracle databases that use database authentication. The use of operating system authentication will be covered in more detail in the next section, "Starting and Stopping the Oracle Instance." In cases where operating system authentication is used, the password file is not present on the Oracle database.

The final database disk resource that will be covered in this section is the control file.
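To make the parameter file discussion concrete, a minimal init.ora might look like the sketch below. The file paths, database name, and sizes are illustrative assumptions, not values from this book:

```
# init.ora -- illustrative values only
db_name = DB1                       # unique database (SID) name
control_files = (/u01/oradata/DB1/control01.ctl,
                 /u02/oradata/DB1/control02.ctl)
db_block_size = 8192                # size of each Oracle data block, in bytes
shared_pool_size = 64M              # one of several SGA structure sizes
db_block_buffers = 2000             # buffer cache size, in blocks
remote_login_passwordfile = exclusive
```

Each parameter here corresponds to something described in this section: the SGA structure sizes, the control file locations, and the database name.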
The control file is to the physical structure of the Oracle database what the data dictionary is to the logical structure. The control file tells Oracle what the datafiles are called for all tablespaces and where they are located on the machine, as well as what the redo log member filenames are and where to find them. Without a control file, the Oracle database server would be unable to find its physical components, and thus it is imperative to have a control file. The name of the control

file is specified in the init.ora file for the database instance. To recap, the main disk resources of Oracle are listed below:

Datafiles
Redo log files
Control files
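One way to see the physical side of these structures on a running database is to query the dynamic performance views that expose them. A brief sketch, assuming a connection to an instance that has at least mounted the database:

```sql
-- List the physical disk resources the control file knows about.
SELECT name   FROM v$datafile;      -- datafiles backing the tablespaces
SELECT member FROM v$logfile;       -- online redo log members
SELECT name   FROM v$controlfile;   -- control file copies in use
```

Each query returns the operating-system filenames, tying the logical view of the database back to the physical files discussed above.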

Exercises

1. Name five physical disk resources on the Oracle instance.
2. What are the three logical resources on the Oracle instance?
3. Explain the difference between a logical view of disk resources and a physical view of disk resources. How are the two linked?

Starting and Stopping the Oracle Instance

In this section, you will cover the following topics related to starting and stopping the instance:

Selecting an authentication method
Starting an Oracle instance and opening the database
Shutting down the Oracle instance
Altering database mode and restricting login

The first order of business for the DBA in many environments is to install the Oracle software. The act of installing the Oracle software is specific to the operating system on the machine used to host the Oracle database. Once installation is accomplished, the DBA will start working on other aspects of the Oracle server before actually creating his or her first database. The first step of configuring the Oracle server is managing the creation and startup of the Oracle instance, and the first step for managing the instance is password authentication.

Selecting an Authentication Method

In order to determine what method of authentication to use for the Oracle database, the DBA must first answer an important question: "How do I plan to support this database?" The answer usually boils down to whether the DBA will be working on the same site as the host machine running Oracle primarily to support database administration tasks, or whether the DBA plans to monitor the site remotely, with a monitoring server that manages the database administration for many other databases in addition to the one being set up. Security in the Oracle server consists initially of user authentication. The users of an Oracle system each have a username that they enter, along with a password, in order to gain entry into the database. Just as users supply a password, so too must the DBA. However, Oracle offers two options with respect to password authentication.

The use of either depends on the answer to the question of whether the DBA will administer the database locally or remotely, and also on a matter of preference. The two methods available are to either allow Oracle to manage its own password authentication or to rely on the authentication methods provided by the operating system for Oracle access.

The decision-making flow may go something like this. If the database administrator is planning to administer the database from a remote location, then the question of whether or not a secure connection can be established with the host system running Oracle must be answered. If the DBA can in fact obtain a secure connection with the host system running Oracle, then the DBA can use either the Oracle method of authentication or the operating system authentication method. If the DBA cannot obtain a secure connection remotely, then the DBA must use the Oracle method of user authentication. If the DBA plans to administer the database locally, then the choice of whether to use operating system authentication or Oracle authentication is largely a matter of personal preference.

Operating system authentication presents some advantages and disadvantages compared with Oracle's methods of user authentication. By using operating system authentication, the DBA requires Oracle users to provide a password only when logging into the operating system on the machine hosting Oracle. When the user wants to access the Oracle database, he or she can simply use the "/" character at login time and Oracle will verify with the operating system that the user's password is correct and allow access. Similarly, if the DBA logs onto the machine hosting Oracle and wants to execute some privileged activities such as creating users or otherwise administering the database, the DBA will be able to log onto the database simply by using the "/" character in place of a username and password.
For example, in a UNIX environment, the login procedure for the DBA using operating system authentication may look something like the following:

UNIX SYSTEM V TTYP01 (23.45.67.98)
Login: bobcat
Password:
User connected. Today is 12/17/99 14:15:34
[companyx] /home/bobcat/ --> sqlplus /
SQL*PLUS Version 3.2.3.0.0
(c) 1979,1996 Oracle Corporation. All rights reserved.
Connected to Oracle 7.3.4 (23.45.67.98)
With the parallel, distributed, and multithreaded server options

Operating system authentication allows the user or DBA to access the database quickly and easily, with minimal typing and no chance that a password can be compromised during the Oracle login process. However, using the operating system to perform authentication on the Oracle database leaves Oracle's data integrity and security at the level that the operating system provides. If for some reason a user's ID is compromised, not only is the host machine at risk to the extent that the user has access to the machine's resources, but the Oracle database is at risk as well, to the level that the user has access to Oracle resources. Therefore, it is recommended that, where it does not hinder usage of the database and where it improves security, the DBA use the Oracle methods of user authentication. Using operating system authentication first requires that a special group be created at the operating system level, if that feature is supported, or that the DBA be granted special prioritization for executing processes on the host

system for Oracle. This step, or some other operating system requirement, must be fulfilled in order to connect to the database as internal for startup and shutdown. The internal privilege is a special access method to the Oracle database that allows the DBA to start and stop the database. One additional constraint for connecting to the database as internal (in addition to having group privileges and proper process execution prioritization) is that the DBA must have a dedicated server connection. In effect, the DBA cannot connect as internal when shared servers are used in conjunction with the MTS architecture, rather than dedicated servers, for handling database reads for user processes. The ability to connect as internal is provided for backward compatibility. In the future, the DBA can connect to the database for the purposes of administration using the connect name/pass as sysdba command. The sysdba keyword denotes a collection of privileges akin to those granted with the internal keyword.

Once the DBA is connected as internal and the database instance is started, the DBA can then perform certain operations to create databases and objects with operating system authentication. The performance of these functions is contingent upon the DBA having certain roles granted to their userid. The roles the DBA must have granted in this case are called osoper and osdba. These two privileges are administered via the operating system and cannot be revoked or administered via Oracle, and are used in conjunction with operating system authentication. They have equivalents used with Oracle authentication -- sysoper and sysdba, respectively -- and from here on this book will assume the use of Oracle authentication; thus, the use of sysoper and sysdba. When the DBA connects to the database as internal, the sysdba and sysoper privileges are usually enabled as well.
Therefore, it is possible for the DBA to simply connect as internal when the need arises to do any DBA-related activities. However, this is not advised in situations where the database is administered by multiple database administrators who may have limited access to accomplish various functions. In this case, it is better simply to grant sysdba or sysoper directly to the user who administers the system. Choosing which privilege to grant depends on what type of administrative functions the DBA may fulfill. The sysoper and sysdba privileges provide some breakout of responsibilities and function with the privileges they allow the grantee to perform. The sysoper privilege handles administration of the following privileges:

Starting and stopping the Oracle instance
Mounting or opening the database
Backing up the database
Initiating archival of redo logs
Initiating database recovery operations
Changing database access to restricted mode

As the list shows, sysoper is a privilege granted to the DBA for startup and shutdown, recovery and backup, and other availability functions. The sysdba privilege encompasses these and administers certain other privileges as well. Those privileges are listed as follows:

The sysoper privilege
All system privileges granted with admin option
The create database privilege
Privileges required for time-based database recovery

Obviously, this is the privilege granted to the DBA ultimately responsible for the operation of the database in question. In addition, there is another role that is created upon installation of the Oracle software. This role is called DBA, and it also has all system privileges granted to it.

Oracle's method of user authentication for database administrators, when operating system authentication is not used, is called a password file. The DBA first creates a file to store the authentication passwords for all persons who will be permitted to perform administrative tasks on the Oracle database. This functionality is managed by a special utility called ORAPWD. The name of the password file can vary according to the operating system used by the machine hosting the Oracle instance. First, a filename for the password file that stores users that can log in as DBA must be specified. The location for the password file varies by database, but for the most part it can be found in the dbs subdirectory under the Oracle software home directory of the machine hosting the Oracle database. The filename for the password file is usually orapwsid, where the database name is substituted for sid.

The password for administering the password file is the next parameter that the ORAPWD utility requires. By specifying a password for this file, the DBA simultaneously assigns the password for internal and SYS. After creating the password file, if the DBA connects as internal or SYS and issues the alter user name identified by password command to change the password, both the user's password and the password stored in the password file are changed.

The final item to be specified to the ORAPWD utility is the number of entries that are allowed in the password file. This number determines the number of users that can have administrator privileges for their ID. Care should be taken when specifying this value.
If too few entries are specified and the DBA needs to add more, he or she will not be able to do so without deleting and re-creating the password file. This process is dangerous, and should be executed with care to ensure that the DBA does not log off before the process is complete. If the DBA does log off after deleting the password file, the database administrator will be unable to execute administrative operations on the database. Entries can be reused as members of the DBA team come and go. The actual execution of ORAPWD, run from the operating system command line rather than from within Oracle, may look something like this:

ORAPWD FILE=/home/oracle/dbs/orapworgdb01.pwd PASSWORD=phantom ENTRIES=5

Once this password file creation is complete, several items must be completed in order to continue using Oracle's authentication method to allow DBA access to the database without allowing access as internal. The first step is to set a value for an initialization parameter in the init.ora file. This parameter is called REMOTE_LOGIN_PASSWORDFILE, and its permitted settings are NONE, EXCLUSIVE, and SHARED. These various settings have different meanings with respect to allowing remote database administration on the Oracle instance. The NONE setting means that the database will not allow privileged sessions to be established over nonsecured connections because no password file exists. When operating system authentication is used on the database, the REMOTE_LOGIN_PASSWORDFILE parameter may be set to NONE to disallow remote access to

the database for administration purposes. The following code block shows the possible settings for this parameter in the init.ora file:

REMOTE_LOGIN_PASSWORDFILE=none
REMOTE_LOGIN_PASSWORDFILE=exclusive
REMOTE_LOGIN_PASSWORDFILE=shared

The EXCLUSIVE setting indicates that the password file developed by the ORAPWD utility for security use on that database instance can be used for that instance and that instance only. In this configuration, the DBA will add users who will administer the Oracle instance to the password file and grant the sysoper and sysdba privileges directly to those userids, allowing the DBAs to log into the database as themselves with all administrator privileges. When using password file authentication to administer a database remotely via a nonsecure connection, the REMOTE_LOGIN_PASSWORDFILE parameter should be set to EXCLUSIVE. The final option is SHARED, which means that the password file allows access only by SYS and the DBA connected as internal. All administrative operations on the database must be performed by a DBA who logs into the instance as SYS or as internal when this option is set.

After creating the password file with the ORAPWD utility and setting the REMOTE_LOGIN_PASSWORDFILE parameter to EXCLUSIVE in order to administer a database remotely, the DBA can then connect to the database as some user with sysdba privileges. One such user that is created when Oracle is installed is SYS. The DBA can log into the database as user SYS and create the database and other usernames as necessary, or simply mount the existing database. Then, the other users that will administer the instance can log onto it as themselves and execute DBA tasks as needed.

SQL> CONNECT john/mary AS SYSDBA;
Connected.
SQL>

There are two important points to remember with respect to password files. One is gaining information about the users listed in them; the other is related to schema ownership.
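With an EXCLUSIVE password file in place, adding an administrator to it is done with a grant rather than by editing the file directly. A brief sketch, where the username is an illustrative assumption:

```sql
-- Run as a user who already holds SYSDBA (for example, SYS).
-- Granting SYSDBA or SYSOPER records an entry in the password file.
GRANT SYSDBA TO john;
GRANT SYSOPER TO john;

-- Confirm which users are now listed in the password file:
SELECT * FROM v$pwfile_users;
```

The query at the end shows, for each user in the file, whether the sysdba and sysoper privileges are held.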
To find out which users are in the database password file, the DBA can query a dynamic performance view from the Oracle data dictionary called V$PWFILE_USERS. More on the data dictionary will be presented later in this chapter. The other feature is that any object created in any login as sysdba or sysoper will be owned by the SYS schema. While this fact of ownership may not be all that important, especially given the ability to create and use public synonyms to obtain data without regard to schema ownership, it is an important point that the DBA should not look for objects they may create using their own username as the owner. In the event that DBAs simply log in as themselves without the as sysdba trailer, the objects they create will be owned by their own schemas.

Exercises

1. What two methods of user authentication are available in Oracle? Explain some advantages and disadvantages of each.
2. What is the name of the utility used to create a password file? Describe its usage, parameters, and the related parameter that must be set in INIT.ORA in order to use a password file for authentication.
3. What are the two Oracle roles granted to DBAs in order to perform database administration?

4. What is SYS? How is it used?

Starting the Oracle Instance and Opening the Database

There is an important distinction that sometimes gets blurry in the minds of DBAs, and especially of Oracle developers who don't work with Oracle internals often enough to notice a difference. The distinction is between an Oracle instance and an Oracle database. First of all, an Oracle instance is NOT an Oracle database. The Oracle database is a set of tables, indexes, procedures, and other data objects that store information that applications place into storage in the Oracle product. The Oracle instance is the memory structures, background processes, and disk resources, all working together to fulfill user data requests and changes. That said, the Oracle instance and the Oracle database are closely related, and the two terms are often used interchangeably.

With that distinction made, attention should now turn to starting the Oracle instance. This step is the first that should be accomplished when creating a new database or allowing access to an existing database. To start the Oracle database instance, the DBA should do the following:

1. Start the appropriate administrative tool and connect as sysdba or internal. The appropriate tool in this case is Server Manager or Enterprise Manager.
2. Using Server Manager, use the startup start_option command to start the instance. The DBA must supply the name of the database (also known as the SID) and the parameter file Oracle should use to start the database. There are several different options the DBA can use to specify the startup status of the database.

Within the Server Manager tool there are several different options for database availability at system startup. These different options correspond to the level of access to the database once the database instance is running. Each startup option has several associated facts about the access level permitted while the database is running in that mode. The options available for starting up an Oracle database are starting the instance without mounting the database, mounting but not opening, mounting and opening in restricted mode, mounting and opening for anyone's use, performing a forced start, or starting the database and initializing recovery from failure. This set of options is listed below:

Startup nomount
Startup mount
Startup open
Startup restrict
Startup force
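As a sketch of how the first of these options look in a Server Manager line-mode session (the SID, parameter file path, and prompt text are illustrative assumptions, not values from this book):

```sql
SVRMGR> CONNECT internal
Connected.

-- Start the instance only, for example prior to database creation:
SVRMGR> STARTUP NOMOUNT PFILE=/u01/app/oracle/dbs/initorgdb01.ora

-- Or start the instance and mount, but not open, the database
-- for physical structure maintenance:
SVRMGR> STARTUP MOUNT PFILE=/u01/app/oracle/dbs/initorgdb01.ora

-- Normal startup for general use:
SVRMGR> STARTUP OPEN PFILE=/u01/app/oracle/dbs/initorgdb01.ora
```

Each variant is the same startup command with a different mode option, matching the list above.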

The first option available is starting up the instance without mounting the database. In Server Manager, the command used for starting any database is the startup command. For starting the instance without mounting the database, the startup command can be issued with an option called nomount. This option will start the instance for database creation, but will not open any other database that may be available to the Oracle instance. This option is used for preventing any problems with existing databases that are mounted at the time that a new database is created. Another recommended safety measure for creating new databases in the same instance that

already owns a database is to back up the existing database before creating the new one. A complete discussion of backup and recovery occurs in Unit III.

In other situations, the DBA may want to start the instance and mount but not open an existing database for certain DBA maintenance activities. In this case, the DBA may need to work on some aspect of the physical database structure, such as the movement of redo log file members or datafiles, or to enable Oracle's archiving of redo logs. In situations where the DBA needs to perform a full database recovery, the DBA should mount but not open the database. The same startup command is used for starting the database in all modes, but the mode option used in this situation is the mount option. The DBA will need to specify the database name and the parameter file for this option to mount the database to the instance for the physical database object maintenance activities described above. A nonmounted database can be mounted to an instance after creation of that database using commands described in the section of this chapter titled "Changing Database Availability and Restricting Login."

TIP: In a database that is mounted, the DBA can alter the control file.

All other options for starting a database will allow access to the database in one form or another. Hence, the options considered now are called opening the database. The DBA will open the database for many reasons, first and foremost so that users and applications can access the database in order to work. In order for the DBA to start the instance and then mount and open the database, the DBA must use the startup open option. Once the DBA has opened the database, any user with a valid username and password who has been granted the create session privilege or the CONNECT role can access the Oracle database. In some cases, the DBA may want to open the database without letting users access the database objects.
This is the most common situation for a DBA to start the database in when there is DBA maintenance activity required on the logical portion of the Oracle database. In this case, the DBA will execute the startup option as before. However, in addition to starting and opening the database, the DBA will execute a special command that restricts database access to only those users on the system with a special access privilege called restricted session. From database startup, the DBA can execute the startup restrict command within Server Manager. Although any user on the database can have this privilege granted to them, typically only the database administrator will have it. In some cases, such as the reorganization of large tables involving a large-volume data load, the DBA may grant the restricted session privilege to a developer who is assisting in the database maintenance work. In these situations, the DBA may want to consider a temporary grant of restricted session to the developer, followed by a revocation of the privilege afterward to prevent possible data integrity issues in later maintenance cycles. This method is generally preferable to a permanent grant of restricted session to someone outside the DBA organization. Typically, the DBA will want to use the restrict option for logical database object maintenance such as reorganizing tablespaces, creating new indexes or fixing old ones, large-volume data loads, reorganizing or renaming objects, and other DBA maintenance activities.

There are two special cases for database startup left to consider, both of which are used for circumstances outside of normal database activity. One of those two situations is when the database has experienced a failure of some sort that requires the DBA to perform a complete recovery of the database and the instance. In this case, the DBA may want the instance to initiate its complete recovery at the time the instance is started.
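The temporary grant and later revocation described above can be sketched as follows; the developer's username is an illustrative assumption:

```sql
-- Start the instance with access limited to privileged users:
STARTUP RESTRICT

-- Temporarily let a developer assist with the maintenance work:
GRANT RESTRICTED SESSION TO dev_user;

-- ... tablespace reorganization, data loads, index rebuilds ...

-- Revoke the privilege once the maintenance window closes:
REVOKE RESTRICTED SESSION FROM dev_user;
```

Revoking the privilege afterward keeps the developer from connecting during future restricted maintenance cycles, as the text recommends.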
To accomplish the task, the DBA can issue the startup recover command from the Server Manager tool, and Oracle will start the instance and initiate the complete recovery at instance startup. In cases where archiving is used, Oracle may require certain archived redo logs to be present for this option to complete successfully. In any event,

the use of this option will be more carefully considered in the next unit, which treats OCP Exam 3 on database backup and recovery.

The final option for database startup is used in unusual circumstances as well. Sometimes (rarely) there is a situation where the Oracle database cannot start the instance under normal circumstances or shut down properly due to some issue with memory management or disk resource management. In these cases, the DBA may need to push things a bit. The DBA can give database startup an additional shove with the startup force command option. This option will use a method akin to a shutdown abort (see the next section on database shutdown) in order to end the current instance having difficulty before starting the new instance. It is not recommended that the DBA use this option without extreme care, as there is usually a need for instance recovery in this type of situation.

Exercises

1. What is the tool used for starting the Oracle database? What connection must be used for the task?
2. What are the five options for database startup?

Shutting Down the Oracle Database

Shutting down the Oracle instance works in much the same way as starting the instance, with the requirement to cease allowing access to the database and the requirement to accomplish the task while logged on as internal. The task must also be accomplished from Server Manager, either graphically with the use of the Shut Down menu under the Instance menu or with the shutdown command in line mode. The options for database shutdown are listed below:

Shutdown normal
Shutdown immediate
Shutdown abort
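In a line-mode Server Manager session, the three priorities look like the following sketch (the connection and prompt text are illustrative):

```sql
SVRMGR> CONNECT internal
Connected.

-- Wait for all connected users to log off on their own:
SVRMGR> SHUTDOWN NORMAL

-- Or terminate sessions now, rolling back uncommitted transactions:
SVRMGR> SHUTDOWN IMMEDIATE

-- Or stop at once with no rollback; instance recovery will be
-- needed at the next startup:
SVRMGR> SHUTDOWN ABORT
```

The sections that follow explain when each priority is appropriate.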

There are three priorities that can be specified by the DBA for shutting down the database. The first and lowest priority is normal. It is the lowest priority because Oracle will wait for many other events to play themselves out before actually shutting down the connection. In other words, the database will make the DBA wait for all other users to finish what they are doing before the database will actually close. The following sequence of events illustrates specifically how the shutdown process works under normal priority:

1. The DBA issues shutdown normal from Server Manager at 3 p.m.
2. User X is logged onto the system at 2:30 and performs data entry until 3:15 p.m. User X will experience no interruption in database availability as a result of shutdown normal.
3. User Y attempts to log into the database at 3:05 p.m. and receives the following message: ora-01090: shutdown in progress - connection is not permitted.
4. User Z is the last user logged off, at 3:35 p.m. The database will now shut down.
5. When the DBA starts the database up again, there will be no need to perform a database recovery.

There are three rules that can be abstracted from this situation. The first is that no new users will be permitted access to the system. The second is that the database does not force users already logged onto the system to log

off in order to complete the shutdown. Third, under normal shutdown situations, there is no need for instance recovery.

Normal database shutdown may take some time. While Oracle attempts to shut down the database, the DBA's session will not allow the DBA to access any other options or issue any other commands until the shutdown process is complete. The time the process can take depends on several factors: whether many users have active transactions executing at the time the shutdown command is issued, how many users are logged on to the system, and the shutdown priority issued by the DBA.

A higher-priority shutdown that the DBA can enact in certain circumstances is the shutdown immediate command. Shutting down a database with immediate priority is similar to using the normal priority in that no new users will be able to connect to the database once the shutdown command is issued. However, Oracle will not wait for users to log off as it did in points 2 and 4 above. Instead, Oracle terminates the user connections to the database immediately and rolls back any uncommitted transactions that may have been taking place. This option may be used in order to shut down an instance that is experiencing unusual problems, or in the situation where the database could experience a power outage in the near future. A power outage can be particularly detrimental to the database; therefore, it is recommended that the DBA shut things down with immediate priority when a power outage is looming.

There are two issues associated with shutting down the database with immediate priority. The first is the issue of recovery. The database will most likely need instance recovery after an immediate shutdown. This activity should not require much effort from the DBA, as Oracle will handle the recovery of the database instance itself without much intervention.
However, the other issue associated with shutting down the database immediately is that the effect of the shutdown is not always immediate! In some cases, particularly in situations involving user processes running large-volume transactions against a database, the rollback portion of the database shutdown may take some time to execute.

The final priority to be discussed is shutdown with abort priority. This is the highest priority that can be assigned to a shutdown activity. In all cases where this priority is used, the database will shut down immediately, with no exceptions. Use of this priority when shutting down a database instance should be undertaken with care. What a shutdown abort does to keep the database from waiting for rollback to complete is simply not to roll back uncommitted transactions at all. This approach requires more instance recovery activity, which is still handled by Oracle. Only in a situation where the behavior of the database is highly unusual, or when the power to the database will cut off in less than two minutes, should the shutdown abort option be employed. Otherwise, it is usually best to avoid using this option entirely, and use shutdown immediate in circumstances requiring the DBA to close the database quickly.

Exercises

1. What connection must be used for the task of database shutdown?
2. What are the three options for database shutdown?

Changing Database Availability and Restricting Login

During the course of normal operation on the database, the DBA may require changing the availability of the database in some way. For example, the DBA may have to initiate emergency maintenance on the database, which requires the database to be unavailable to the users. Perhaps there are some problems with the database

that need to be resolved while the instance is still running but the database is unavailable. For this and many other reasons, the DBA can alter the availability of the database in several ways. The following discussion will highlight some of those ways.

The first way a DBA may want to alter the status and availability of the database instance is to change the mount status of a database. In some situations, the DBA may need to start a database with the nomount option, as discussed earlier in the section on starting the database. After the activities that required the database not to be mounted are complete, the DBA will want to mount the database to the instance, but have the database still be closed and therefore unavailable to the users. To change the status of a database to mounted, the DBA can use either the graphical interface of Server Manager to mount the database or use the alter database mount statement to achieve the same effect. Mounting the database allows the DBA to do several database maintenance activities without allowing users the chance to access the database and cause contention.

After database work, or in the course of a manual startup, the DBA will want to allow the users access to the database. This step can be accomplished in two ways. Like mounting the database manually, the DBA can use the graphical user interface to open the database for user access. Alternately, the DBA can issue the alter database open statement from the SQL command prompt and open the database for user access. When the database is in open mode, any database user with the create session privilege, or the CONNECT role, can access the database. One fact that is important to remember about the Oracle database is that it can be accessed by multiple instances.

The final option to be covered corresponds to situations where the DBA has the database open for use, and needs to make some changes to the database.
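The manual progression from a started-but-unmounted instance to an open database, as described above, can be sketched as:

```sql
-- Instance was started with STARTUP NOMOUNT; now attach the database:
ALTER DATABASE MOUNT;

-- Physical maintenance can be performed here while users are kept out.

-- Finally, open the database for general user access:
ALTER DATABASE OPEN;
```

Each statement moves the database one stage further along the nomount, mount, open progression without restarting the instance.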
Some of these changes may include re-creating indexes, large-volume data loads, tablespace reorganization, and other activities that require the database to be open but access to the data to be limited. This option is called the restricted session. In cases where the DBA wants to limit access to the database without actually closing it, the DBA can enable the database's restricted session mode. This option prevents logging into the database for any user that does not have the restricted session privilege granted. Although any user on the database can have this privilege granted to them, typically only the database administrator will have it. In some cases, such as reorganizing large tables through a large-volume data load, the DBA may grant the restricted session privilege to a developer who is assisting in the database maintenance work. In these situations, the DBA may want to consider a temporary grant of restricted session to the developer, followed by a revocation of the privilege afterward to prevent possible data integrity issues in later maintenance cycles. This method is generally preferable to a permanent grant of restricted session to someone outside the DBA organization. This option is handled with a pair of statements. The statement used to close access to the database to all users except those with the restricted session privilege is alter system enable restricted session. In order to restore access to the database to all users, the DBA issues the opposite statement: alter system disable restricted session.
Exercises
1. What statement is used to change the status of a database?
2. Explain the use of the restricted session privilege.
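The statements discussed above can be sketched as a short session; the "altered" lines are Oracle's standard confirmation messages:

```sql
ALTER DATABASE MOUNT;    -- mount a database started with the nomount option
-- Database altered.
ALTER DATABASE OPEN;     -- open the mounted database for general user access
-- Database altered.

ALTER SYSTEM ENABLE RESTRICTED SESSION;   -- only restricted session holders may log in
-- System altered.
ALTER SYSTEM DISABLE RESTRICTED SESSION;  -- restore general access
-- System altered.
```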

Creating an Oracle Database

In this section, you will cover the following topics related to creating an Oracle database:

Entity relationships and database objects
Creating a database in Oracle
Creating the Oracle data dictionary

Once the DBA has set up some necessary preliminary items for running the Oracle instance, such as password authentication, the DBA can then create the database that users will soon utilize for data management. Creating a database involves three activities that will be discussed in this section. The first activity for creating a database is mapping a logical entity-relationship diagram that details a model for a process to the data model upon which the creation of database objects like indexes and tables will be based. The second activity that the DBA will perform as part of creating a database is the creation of physical data storage resources in the Oracle architecture, such as datafiles and redo log files. The final (and perhaps the most important) aspect of creating a database is creating the structures that comprise the Oracle data dictionary. Each element in the database creation process will now be discussed in detail.

Entity Relationships and Database Objects

The first part of creating a database is creating a model for that database. One fundamental tenet of database design is remembering that every database application is a model of reality. Most of the time, the database is used to model some sort of business reality, such as the tracking of inventory, payment of sales bonuses, employee expense vouchers, and customer accounts receivable invoices. The model for a database should be a model for the process that the database application will represent. Now, explore the combination of those entities and their relationships. The concept of an entity maps loosely to the nouns in the reality the database application is trying to model. In the employee expenditure system mentioned above, the entities (or nouns) in the model may include employees, expense sheets, receipts, payments, a payment creator such as accounts payable, and a payer account for the company that is reimbursing the employee.
The relationships, on the other hand, map loosely to the idea of a verb, or action that takes place between two nouns. Some actions that take place in this employee expenditure system may be submits expense sheet, submits receipts, deducts money from account, and pays check. These entities and relationships can translate into several different types of visual representations or models of a business reality. Figure 6-3 represents each entity with a small illustration, and each relationship between entities with an arrow and a description. The employee fills out the expense sheets for the expenses incurred on behalf of the company. Then, the employees send their vouchers to the accounts payable organization, which creates a check for the employee and mails the payment to the employee. The process is very simple, but it accurately models the business process within an organization to reimburse an employee for his expenses. When the developers of a database application create the employee expenditure system modeled by the entity-relationship diagram above, they will first map out the entities and their relationships, then turn the entity-relationship diagram into a logical data model. A logical data model is a more detailed diagram than the entity-relationship diagram in that it fills in details about the process flow that the entity-relationship diagram attempts to model. Figure 6-4 shows the logical data model of the employee table and the invoice table.

On the expense sheet, the employee will fill in various pieces of information, including the expense ID number, the employee ID number, and the expense amount. The line between the two entities is similar to a relationship; however, in the logical data model, the entities are called tables and the relationships are called foreign keys. There is an interesting piece of information communicated above and below the line on the opposite side of each table as well. That piece of information identifies a pair of facts about the relationship. The first element of the pair identifies whether the relationship is mandatory from the perspective of the table appearing next to the pair. A one (1) indicates that the relationship is mandatory, while a zero (0) indicates that the relationship is optional. In the example in the diagram above, the relationship between employee and expense sheet is optional for employees but mandatory for expense sheets. This means that for any given record in the EMPLOYEE table, there may or may not be expense sheets in the EXPENSE table. However, every expense sheet record in the EXPENSE table will correspond to an employee record. The second component of that pair indicates whether there is a one-to-one, one-to-many, or many-to-many correspondence between records of one table and records of another table. In the example above, records in the EMPLOYEE table have a one-to-many relationship with the records of the EXPENSE table, while the records of the EXPENSE table have a one-to-one relationship with records of the EMPLOYEE table. That is to say, each employee may have submitted one or more expense sheets, or none at all, while each expense sheet corresponds to one and only one employee. This pair of facts is referred to as the ordinality of the database tables. The relationship between tables, expressed through shared columns, corresponds loosely to the activity or relationship that exists between the two entities the tables represent.
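The relationship just described can be sketched as table definitions. The column names and datatypes below are illustrative, following the EMPID example in the text rather than the book's actual figures:

```sql
-- Parent table: one record per employee
CREATE TABLE employee
 (empid     NUMBER(10) PRIMARY KEY,
  lastname  VARCHAR2(30),
  firstname VARCHAR2(30));

-- Child table: zero or more expense sheets per employee (optional from
-- the EMPLOYEE side, mandatory from the EXPENSE side)
CREATE TABLE expense
 (expid     NUMBER(10) PRIMARY KEY,
  empid     NUMBER(10) NOT NULL,
  expamount NUMBER(10,2),
  CONSTRAINT fk_expense_employee
    FOREIGN KEY (empid) REFERENCES employee (empid));
```

The NOT NULL foreign key column makes the relationship mandatory for expense sheets, while an employee may have no matching expense records at all, which matches the ordinality described above.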
In the case of the table structure in Figure 6-5, the EMPID column in EXPENSE corresponds to the primary key column of the EMPLOYEE table. In terms of the entity-relationship diagram, the empid is the tie that binds an expense sheet to the employee who created it. Therefore, the relationship of one table to another through foreign keys should correspond somewhat to the relationship that occurs between two entities in the process flow being modeled. Creating a physical database out of the logical data model requires considering several issues. The database designer may ask several questions related to the physical design of that system, such as the following:

How many employees will be allowed to use the system?
What sort of company chargeback system will be used to take the employee expense payment from the budget of the department for which the expense was incurred on behalf of the employee?
How many expense sheets will be submitted per month and per year?

The proper creation of a database in Oracle depends on answering these and many other questions regarding the physical relationship between the machine hosting Oracle and the data Oracle stores as part of the application model. Some of these questions relate to Oracle-specific features. For example, the designer of the database should know row count estimates for each object to be created in Oracle. This estimate of row count should be something that is forecasted over a period of time, say two years. This forecast of sizing for the database will allow the DBA some "breathing room" when the database application is deployed, so that the DBA is not constantly trying to allocate more space to an application that continually runs out of it. Some objects that the
designer will need to produce sizing estimates for are the tables and indexes, and the tablespaces that will contain those tables and indexes. On a point related to indexes, the designer of the application should know what the users of the database will need regarding data access. This feature of database design is perhaps the hardest to nail down after the initial estimate of transaction activity for the database application. The reason for the difficulty is knowing what the users will want with respect to data access. The developers of the application should, where possible, try to avoid providing users with free rein to access data via ad hoc queries, as many users will not know, for example, that searching a table on an indexed column is far preferable to searching on a nonindexed column, for performance reasons. Providing "canned" query access via graphical user interfaces or batch reporting allows the designers to tune the underlying queries that drive the screens or reports, scoring a positive response from the users while also minimizing the impact of application activity on the Oracle instance. In addition, specifying the database's character set is critical to the proper functioning of the database. There are several different options for specifying character sets in the Oracle database, just as there are many different languages available for use by different peoples of the world. These languages fall into distinct categories with respect to the mechanisms on a computer that will store and display the characters that comprise those languages. The distinct categories are single-byte character sets, multibyte character sets, and languages read from right to left. Examples of single-byte character sets are any of the Romance or Teutonic languages originating in Western Europe, such as English, Spanish, French, German, Dutch, or Italian.
Examples of the multibyte character sets available are the languages that originated in Eastern Asia, Southeast Asia, or the Pacific Rim. These languages include Mandarin, Japanese, and Korean. Examples of a language read right to left include Hebrew and Arabic. One final, and perhaps the most important, area of all to consider at the outset of database system creation in the Oracle environment is how the user will preserve the data in the system from any type of failure inherent in the usage of computer machinery. Such methods may include full and partial backups for the database and the archiving (or storing) of redo logs created by Oracle to track changes made in the database. Backup and recovery are handled as a whole topic unto themselves in the OCP DBA Exam series, and also in this book. See Unit III covering OCP Exam 3, "Database Backup and Recovery." There are three main steps in creating databases in the Oracle environment. The first is creating the physical locations for data in tables and indexes to be stored in the database. These physical locations are called datafiles. The second step is to create the files that will store the redo entries that Oracle records whenever any process makes a data change to the Oracle database. These physical structures are called the redo log files, or redo log members. The final step in creating an Oracle database is to create the logical structures of the data dictionary. The data dictionary comprises an integral portion of the database system. Both the users and Oracle refer to the data dictionary in order to find information stored in tables or indexes, to find out information about the tables or indexes, or to find out information about the underlying physical structure of the database, the datafiles, and the redo log files.
Exercises
1. What is an entity-relationship diagram? Explain both concepts of entities and relationships.

2. What is a logical data model? How does the logical data model correspond to the entity-relationship diagram? What structures in a data model relate loosely to the entities and the relationships of an entity-relationship diagram?
3. What is ordinality? Explain the concept of mandatory vs. optional relationships.
4. What is a foreign key? Is a foreign key part of the entity-relationship diagram or the logical data model? To what item in the other model does the foreign key relate?

Creating a Database in Oracle

Creation of the Oracle database is accomplished with the create database statement. The first thing to remember about database creation is the Oracle recommended methodology for actually creating the database. The steps are as follows:
1. Back up existing databases.
2. Create or edit the init.ora parameter file.
3. Verify the instance name.
4. Start the appropriate database management tool.
5. Start the instance.
6. Create and back up the new database.
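In line-mode Server Manager, the first several steps above might look like the following sketch. The parameter file and script names are illustrative, not taken from the book:

```
$ svrmgrl
SVRMGR> CONNECT internal
Connected.
SVRMGR> STARTUP NOMOUNT PFILE=initorgdb01.ora
ORACLE instance started.
SVRMGR> @create_orgdb01.sql
```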

Step 1 in the process is to back up the database. This step prevents headaches later if there is a problem with database creation damaging an existing database, which can happen if the DBA attempts database creation without changing the DB_NAME parameter in the INIT.ORA file. More details will be given shortly about the required parameters that must be unique for database creation. Steps 1 and 2 are critical in preserving the integrity of any databases that already exist on the Oracle instance. Sometimes accidents do happen in database creation. The worst thing a DBA can face when creating a new database is a datafile or log filename in a parameter file that was not changed before creating the second database. This situation leaves the first database vulnerable to being overwritten when the second database is created, which causes the first database to be unusable. Always remember to back up any existing database that uses the same instance and host machine. A critical resource used to start any instance is the file that contains any initialization parameter that the DBA cares to set for the Oracle database and instance being used. This file is generally referred to as the init.ora or parameter file. A parameter file is as unique as the database that uses it. Each database instance usually has at least one parameter file that corresponds to it alone. Usually, a database instance will have more than one parameter file used exclusively for starting it, to handle various situations that the DBA may want to configure the instance to handle. For example, a DBA may have one parameter file for general use on starting the Oracle instance when users will access the system, one parameter file that is specifically configured to handle an increase in processing associated with heavy transaction periods at the end of the year, and another parameter file designed to start the instance in proper configuration for DBA maintenance weekends.
Oracle provides a generic copy of that parameter file INIT.ORA in the software distribution used to install Oracle server on the machine hosting Oracle. Generally, the DBA will take this generic parameter file and alter certain parameters according to his or her needs. There are several parameters that must be changed as part of setting up and running a new Oracle database. The following list highlights key initialization parameters that have to be changed in order to correspond to a unique database. The list describes each parameter in some detail and offers some potential values if appropriate.

DB_NAME
The local name of the database on the machine hosting Oracle, and one component of a database's unique name within the network. If this is not changed before a new database is created, permanent damage to an existing database may result.

DB_DOMAIN
Identifies the domain location of the database name within a network. It is the second component of a database's unique name within the network.

CONTROL_FILES
A name or list of names for the control files of the database. The control files document the physical layout of the database for Oracle. If the names specified for this parameter do not match filenames that exist currently, then Oracle will create new control files for the database at startup. If a file does exist, Oracle will overwrite the contents of that file with the physical layout of the database being created.

DB_BLOCK_SIZE
The size in bytes of data blocks within the system. Data blocks are the unit components of datafiles into which Oracle places the row data from indexes and tables. This parameter cannot be changed for the life of the database.

DB_BLOCK_BUFFERS
The maximum number of data blocks that will be stored in the database buffer cache of the Oracle SGA.

PROCESSES
The number of processes that can connect to Oracle at any given time. This value includes background processes (of which there are at least five) and user processes.

ROLLBACK_SEGMENTS
A list of named rollback segments that the Oracle instance will have to acquire at database startup. If there are particular segments the DBA wants Oracle to acquire, he or she can name them here.

LICENSE_MAX_SESSIONS
Used for license management. This number determines the number of sessions that users can establish with the Oracle database at any given time.

LICENSE_MAX_WARNING
Used for license management. Set to less than LICENSE_MAX_SESSIONS; Oracle will issue warnings to users as they connect if the number of users connecting has exceeded LICENSE_MAX_WARNING.

LICENSE_MAX_USERS
Used for license management. As an alternative to licensing by concurrent sessions, the DBA can limit the number of usernames created on the database by setting a numeric value for this parameter.
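Taken together, a minimal parameter file incorporating these parameters might contain entries like the following. Every value shown is illustrative only and must be chosen for the specific database being created:

```
db_name              = orgdb01
db_domain            = world
control_files        = (ctl01orgdb01.ctl, ctl02orgdb01.ctl)
db_block_size        = 2048
db_block_buffers     = 1000
processes            = 50
rollback_segments    = (rbs01, rbs02)
license_max_sessions = 40
license_max_warning  = 35
```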

Following the creation of the appropriate initialization parameter file, the DBA will need to start the database instance while connected to the database as sysdba and while running Server Manager. The task of connecting to the database as sysdba has already been discussed. To start the instance, use the startup nomount command in order to run the instance without mounting a previously existing database. After starting the instance without mounting a database, the DBA can create the database with the create database command. In order to create a database, the user must have the osdba or sysdba privilege granted to them and enabled. The following code block contains a create database statement:

CREATE DATABASE orgdb01
CONTROLFILE REUSE
LOGFILE GROUP 1 ('redo1a.log','redo1b.log','redo1c.log') SIZE 1M,
        GROUP 2 ('redo2a.log','redo2b.log','redo2c.log') SIZE 1M
MAXLOGFILES 2
DATAFILE 'data01.dat'  SIZE 30M AUTOEXTEND ON NEXT 10M MAXSIZE 100M,
         'index01.dat' SIZE 30M AUTOEXTEND ON NEXT 10M MAXSIZE 100M,
         'rbs01.dat'   SIZE 20M,
         'users01.dat' SIZE 30M,
         'temp01.dat'  SIZE 10M
MAXDATAFILES 20
ARCHIVELOG;

A new database is created by Oracle with several important features. The first item to remember about database creation is that the database will be created with two special users, which are designed to allow for performing important activities required throughout the life of a database. The names of the two users created by Oracle are SYS and SYSTEM. Both have many powerful privileges and should be used carefully for administrative tasks on the database. For now, the DBA should remember that when the users SYS and SYSTEM are created, the password for SYS is CHANGE_ON_INSTALL, and the password for SYSTEM is MANAGER. Of the disk resources created, the one most central to database operation is the control file. As covered, the control file identifies all other disk resources to the Oracle database. If information about a datafile or redo log file is not contained in the control file for the database, the database will not know about it, and the DBA will not be able to recover it if the disk containing it is lost. The controlfile reuse clause is included in the create database statement to allow Oracle to reuse the file specified if it already exists. BEWARE OF REUSING CONTROL FILES. An existing database will need recovery if its control file is overwritten by the creation of another database. Control files are generally not very large, perhaps 250K for even a large database. This size of the control file is related to the number of datafiles and redo log files that are used on the Oracle database. Adding more datafiles and redo logs increases the size of a control file; fewer datafiles that grow larger with the autoextend option and fewer redo logs decrease the size of control files. Other items created as part of the create database command are the datafiles and the redo log files. The datafiles are specified with the datafile clause, and they are where Oracle physically stores the data in any and all tablespaces. Redo logs are created with the logfile clause. 
Redo logs are entries for changes made to the database. If the create database statement specifies a datafile or a redo log file that currently exists on the system and the reuse keyword is used, then Oracle will overwrite that file with the data for the new redo log member or datafile for the database being created. This syntax is purely optional; if it is not used and the create database statement includes files that already exist, Oracle will return an error rather than overwrite them. In general, care should be taken when reusing files in order to prevent "accidental reuse" in a situation where a database already exists on the machine and the DBA creates another one that overwrites key files on the first database. Another pair of options set at database creation time are called maxdatafiles and maxlogfiles. Each of these parameters is a keyword specified in the create database statement, followed by an integer. As its name implies, the maxdatafiles parameter indicates the maximum number of datafiles a database can have. This parameter can be a potential limitation later when the database grows to a point where it cannot accommodate another datafile because the maxdatafiles parameter would be exceeded. A workaround for this problem is to use the autoextend option when defining datafiles. When autoextend is used, the datafiles will automatically allocate more space when the datafile fills, up to a total size specified by the maxsize keyword. The final and most important option included in the create database statement is the archivelog option. When archivelog is used, Oracle archives the redo logs generated. This feature should be enabled in all but read-only databases. Not only are physical database resources and database administrative passwords created as part of the create database command, but some important logical disk resources are created as well. One of these resources is the SYSTEM tablespace.
Sometimes the SYSTEM tablespace is compared to the root directory of a machine's file system. The SYSTEM tablespace certainly is a tablespace in the Oracle system that is HIGHLY important to the operation of the Oracle database. Many important database objects are stored in the SYSTEM tablespace. Some of those objects are the Oracle data dictionary and rollback segments. There must be one rollback segment in the SYSTEM tablespace for Oracle to acquire at database startup, or else the database won't start. In the interests of preserving the integrity of the Oracle database, the DBA should ensure that only the data dictionary and system
rollback segments are placed in the SYSTEM tablespace. In particular, no data objects such as tables or indexes should be placed in the SYSTEM tablespace. Finally, the create database command specifies the character set used throughout the database. Like DB_BLOCK_SIZE, the character set specified for the database should not be changed at any point after the database is created. After database creation, the database that was just created is mounted and opened directly by Oracle for the DBA to begin placing data objects into it, such as tables, indexes, and rollback segments. Once the database is created, the DBA should consider some preliminary work on distributing the I/O load in order to simplify the maintainability of the database in the future. Some areas that the DBA may want to address right away are placement of redo logs and datafiles and the separation of tables from their corresponding indexes. Also, the DBA will find the further allocation of rollback segments in addition to the rollback segment in the SYSTEM tablespace to be an important initial consideration on creating the database. Generally, it is recommended to place the rest of the rollback segments available to the database in a special tablespace designed only to hold rollback segments.
Exercises
1. Name some of the steps in creating a new Oracle database.
2. What resources are created as part of the creation of a database? What is the SYSTEM tablespace? What is its significance?
3. What is a parameter file? What are some of the parameters a DBA must set uniquely for any database via the parameter file?
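The recommendation above, keeping additional rollback segments in a dedicated tablespace, can be sketched as follows. The names and sizes are illustrative only:

```sql
-- Dedicated tablespace to hold all rollback segments except SYSTEM's
CREATE TABLESPACE rbs
 DATAFILE 'rbs01.dat' SIZE 20M;

CREATE ROLLBACK SEGMENT rbs_01 TABLESPACE rbs;

-- A newly created rollback segment must be brought online before use
ALTER ROLLBACK SEGMENT rbs_01 ONLINE;
```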

Creating the Oracle Data Dictionary

The data dictionary is the first database object created at the time a create database command is issued. Every object in the database is tracked in some fashion by the Oracle data dictionary. Oracle generally creates the data dictionary without any intervention from the DBA at database creation time with the use of catalog.sql and catproc.sql. The first script, catalog.sql, runs a series of other scripts in order to create all the data dictionary views, along with special public synonyms for those views. Within the catalog.sql script there are calls to several other scripts, which are listed below:

cataudit.sql  Creates the SYS.AUD$ dictionary table, which tracks all audit trail information generated by Oracle when the auditing feature of the database is used.

catldr.sql  Creates views that are used for the SQL*Loader tool, discussed later in this unit, which is used to process large-volume data loads from one system to another.

catexp.sql  Creates views that are used by the IMPORT/EXPORT utilities, discussed in the unit covering OCP Exam 3, "Database Backup and Recovery."

The other script generally run by the Oracle database when the data dictionary is created is the CATPROC.SQL script. This script calls several other scripts in the process of creating several different data dictionary components used in everything procedural related to the Oracle database. The code for creating these dictionary views is not contained in catproc.sql. The code that actually creates the objects is in several scripts called by this master
script. Some of the objects created by the scripts called by catproc.sql are stored procedures, packages, triggers, snapshots, and certain utilities for PL/SQL constructs like alerts, locks, mail, and pipes.
Exercises
1. How is the data dictionary created?
2. What two scripts are used as part of database creation?
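Although Oracle runs these scripts automatically at database creation on most platforms, they can also be run manually from Server Manager while connected as internal. The $ORACLE_HOME/rdbms/admin path shown is the usual location on UNIX installations and may differ on other platforms:

```
SVRMGR> CONNECT internal
Connected.
SVRMGR> @$ORACLE_HOME/rdbms/admin/catalog.sql
SVRMGR> @$ORACLE_HOME/rdbms/admin/catproc.sql
```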

Chapter Summary

This chapter covered introductory material related to Oracle database administration. The topics covered in this chapter included an overview of the Oracle architecture, the process of starting and stopping access to a database and to an Oracle instance, and the tasks required for creating an Oracle database. The material in this chapter comprises about 22 percent of questions asked on OCP Exam 2.

The first area of discussion in this chapter was an overview of the various components of the Oracle database. Figure 6-1 gives a clear idea of the background processes, memory structures, and disk resources that comprise the Oracle instance, and also of the methods in which they act together to allow users to access information. Several memory structures exist on the Oracle database to improve performance on various areas of the database. The memory structures of an Oracle instance include the System Global Area (SGA) and the Program Global Area (PGA). The SGA, in turn, consists of a minimum of three components: the data block buffer cache, the shared pool, and the redo log buffer. Corresponding to several of these memory areas are certain disk resources. These disk resources are divided into two categories: physical resources and logical resources. The physical disk resources on the Oracle database are datafiles, redo log files, control files, password files, and parameter files. The logical resources are tablespaces, segments, and extents. Tying memory structures and disk resources together are several memory processes that move data between disk and memory, or handle activities in the background on Oracle's behalf. The core background processes available on the Oracle instance include database writer (DBWR), log writer (LGWR), system monitor (SMON), process monitor (PMON), checkpoint (CKPT), archiver (ARCH), recoverer (RECO), dispatcher (Dnnn), lock (LCKn), and server (Snnn).
These different processes have functions that are related to activities that happen regularly in Oracle against the memory structures, disk resources, or both. DBWR moves data blocks out of the buffer cache. LGWR writes redo log entries out of the redo log buffer and into the online redo log. SMON handles instance recovery at startup in the event of a failure, and periodically sifts through tablespaces, making large continuous free disk space out of smaller empty fragments. PMON ensures that if a user process fails, the appropriate cleanup activity and rollback occurs. CKPT handles writing new redo log number information to datafiles and control files in the database at periodic intervals during the time the database is open. ARCH handles the automatic archiving of online redo log files. RECO and LCKn handle transaction processing on distributed database configurations. Server processes read data into the buffer cache on behalf of user processes. They can either be dedicated to one user process, or shared between many processes in the MTS architecture. Dnnn are dispatchers used to route user processes to a shared server in MTS configurations. The next area covered by the chapter was how to start the Oracle instance. Before starting the instance, the DBA must figure out what sort of database authentication to use both for users and administrators. The options available are operating system authentication and Oracle authentication. The factors that weigh on that choice are whether the DBA wants to use remote administration via network or local administration directly on the
machine running Oracle. If the DBA chooses to use Oracle authentication, then the DBA must create a password file using the ORAPWD utility. The password file itself is protected by a password, and this password is the same as the one used for authentication as user SYS and when connecting as internal. To have database administrator privileges on the database, a DBA must be granted certain privileges. They are called sysdba and sysoper in environments where Oracle authentication is used, and osdba or osoper where operating system authentication is used. In order to start a database instance, the DBA must run Server Manager and connect to the database as internal. The command to start the instance from Server Manager is called startup. There are several different options for starting the instance. They are nomount, mount, open, restrict, recover, and force. The nomount option starts the instance without mounting a corresponding database. The mount option starts the instance and mounts but does not open the database. The open option starts the instance, mounts the database, and opens it for general user access. The restrict option starts the instance, mounts the database, and opens it for users who have been granted a special access privilege called restricted session. The recover option starts the instance, but leaves the database closed and starts the database recovery procedures associated with disk failure. The force option gives the database startup procedure some extra pressure to assist in starting an instance that either has trouble opening or trouble closing normally. There are two alter database statements that can be used to change database accessibility once the instance is started as well. Several options exist for shutting down the database as well. The DBA must again connect to the database as internal using the Server Manager tool. The three options for shutting down the Oracle database are normal, immediate, and abort.
When the DBA shuts down the database with the normal option, the database refuses new connections and waits for existing connections to terminate. Once the last user has logged off the system, the shutdown normal completes. Issuing shutdown immediate causes Oracle to prevent new connections while also terminating current ones, rolling back whatever transactions were taking place in the sessions just terminated. The final option, shutdown abort, disconnects current sessions without rolling back their transactions and prevents new connections to the database as well.

The final area covered in this chapter was the creation of a database. The first step discussed was the process modeling that takes place when the database designer creates the entity-relationship diagram. After developing a model of the process to be turned into a database application, the designer must then give a row count forecast for the application's tables. This forecast allows the DBA to size, in bytes, the amount of space each table and index needs in order to store its data. Once this sizing is complete, the DBA can begin the work of creating the database. First, the DBA should back up any existing databases associated with the instance, in order to prevent data loss or accidental deletion of a disk file resource. Next, the DBA should create a parameter file that is unique to the database being created. Several initialization parameters were identified as needing to be set to create a database: DB_NAME, DB_DOMAIN, DB_BLOCK_SIZE, DB_BLOCK_BUFFERS, PROCESSES, ROLLBACK_SEGMENTS, LICENSE_MAX_SESSIONS, LICENSE_SESSIONS_WARNING, and LICENSE_MAX_USERS.
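A minimal parameter file along these lines might look as follows. Every value shown is an illustrative assumption, not a recommendation; names, sizes, and limits must be chosen for the database at hand:

```
# init<SID>.ora -- illustrative sketch only
db_name = orcl
db_domain = world
db_block_size = 8192
db_block_buffers = 2000
processes = 100
rollback_segments = (rbs01, rbs02)
license_max_sessions = 50
license_sessions_warning = 45
```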
After the parameter file is created, the DBA can execute the create database command, which creates the physical disk resources for the Oracle database: the datafiles, control files, and redo log files. It also creates the SYS and SYSTEM users, the SYSTEM tablespace, one rollback segment in the SYSTEM tablespace, and the Oracle data dictionary for that database. After creating the database, it is recommended that the DBA back up the new database in order to avoid having to re-create it from scratch in the event of a system failure.
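A create database statement along the lines described might be sketched as follows, issued against an instance started with startup nomount. All names, paths, sizes, and limits here are assumptions for illustration:

```sql
-- Illustrative sketch; run from an instance started with STARTUP NOMOUNT.
CREATE DATABASE orcl
    MAXDATAFILES 64
    MAXLOGFILES 16
    DATAFILE '/u01/oradata/orcl/system01.dbf' SIZE 80M
        AUTOEXTEND ON NEXT 10M MAXSIZE 500M
    LOGFILE GROUP 1 ('/u01/oradata/orcl/redo01.log') SIZE 10M,
            GROUP 2 ('/u01/oradata/orcl/redo02.log') SIZE 10M;
```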

Of particular importance in the database creation process is the process by which the data dictionary is created. The data dictionary must be created first in a database because all other database structure changes will be recorded in it. This creation happens automatically: Oracle runs several scripts that create the tables and views comprising the data dictionary. There are two "master" scripts that everything else hangs off of. The first is catalog.sql, which creates all the data dictionary tables and views that document the various objects in the database. The second is catproc.sql, which runs several other scripts that create everything required in the data dictionary to support procedural blocks of code in the Oracle database, namely packages, procedures, functions, triggers, and snapshots, as well as certain PL/SQL utility packages such as those for pipes and alerts.

Two-Minute Drill

- Three major components of the Oracle architecture are memory structures that improve database performance, disk resources that store Oracle data, and background processes that handle disk writes and other time-consuming tasks in the background.
- Memory structures in the Oracle architecture are the System Global Area (SGA) and the Program Global Area (PGA).
- The SGA consists of the buffer cache for storing recently accessed data blocks, the redo log buffer for storing redo entries until they can be written to disk, and the shared pool for storing parsed information about recently executed SQL for code sharing.
- Disk resources in the Oracle architecture are divided into physical and logical categories. Physical disk resources are control files that store the physical layout of the database, redo log files that store redo entries on disk, password files that store DBA passwords when Oracle authentication is used, parameter files, and datafiles that store tables, indexes, rollback segments, and the data dictionary.
- The fundamental unit of storage in a datafile is the data block. Logical disk resources are tablespaces for storing tables, indexes, rollback segments, and the data dictionary. The storage of these logical resources is handled with segments and extents, which are conglomerations of data blocks.
- DBWR writes data blocks back and forth between disk and the buffer cache.
- LGWR writes redo log entries from the redo log buffer to the online redo log on disk. It also writes redo log sequence numbers to datafiles and control files at checkpoints (a task also handled by CKPT when enabled) and tells DBWR to write dirty buffers to disk.
- CKPT writes redo log sequence numbers to datafiles and control files when enabled. This task can also be handled by LGWR.
- PMON monitors user processes and cleans up their database transactions in the event that they fail.
- SMON automatically handles instance recovery at startup when necessary, and coalesces small areas of free space in tablespaces into larger ones.
- ARCH handles archiving of online redo logs automatically when set up.
- RECO resolves in-doubt transactions in distributed database environments.
- LCKn obtains locks on remote databases.
- Dnnn receives user processes from the SQL*Net listener in multithreaded server (MTS) environments.
- Snnn (server process) reads data from disk on behalf of the user process.
- Two user authentication methods exist in Oracle: operating system authentication and Oracle authentication.
- There are two roles DBAs require to perform their function on the database. Under operating system authentication, the privileges are called osdba and osoper. In Oracle authentication environments, they are called sysdba and sysoper.
- To use Oracle authentication, the DBA must create a password file using the ORAPWD utility.
- To start and stop a database, the DBA must connect as internal or sysdba. The tool used to start and stop the database is called Server Manager.
- There are at least six different options for starting a database:
  - startup nomount: start the instance; do not mount a database.
  - startup mount: start the instance and mount, but do not open, the database.
  - startup open: start the instance, mount and open the database.
  - startup restrict: start the instance, mount and open the database, but restrict access to those users granted the restricted session privilege.
  - startup recover: start the instance, leave the database closed, and begin recovery for a disk failure scenario.
  - startup force: make an instance start that is having problems either starting or stopping.
- When a database is open, any user with a username and password, and with the create session privilege or the CONNECT role granted to them, may connect to the database.
- The database availability can be changed at any time using the alter database command, with the mount and open options; restricted session is enabled or disabled with the alter system command.
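Changing availability after startup can be sketched as follows, starting from an instance that was brought up with startup nomount:

```sql
-- Step the database from started to mounted to open.
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

-- Bar general users while leaving the database open; only users
-- holding the RESTRICTED SESSION privilege may connect.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Restore normal access.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```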
- Closing or shutting down a database must be done by the DBA from Server Manager while connected to the database as internal or sysdba. There are three options for closing a database:
  - shutdown normal: no new connections allowed, but existing sessions may take as long as they want to wrap up.
  - shutdown immediate: no new connections allowed; existing sessions are terminated and their transactions rolled back.
  - shutdown abort: no new connections allowed; existing sessions are terminated and their transactions are not rolled back. Instance recovery is required after shutdown abort is used.
- The first step in the creation of a database is to model the process that will be performed by the database application.
- The next step in creating a database is to back up any existing databases associated with the instance.
- After that, the DBA should create a parameter file with unique values for several parameters, including the following:
  - DB_NAME: the local name for the database.
  - DB_DOMAIN: the network-wide location for the database.
  - DB_BLOCK_SIZE: the size of each block in the database.
  - DB_BLOCK_BUFFERS: the number of blocks stored in the buffer cache.
  - PROCESSES: the maximum number of processes available on the database.
  - ROLLBACK_SEGMENTS: named rollback segments that the database must acquire at startup.
  - LICENSE_MAX_SESSIONS: the maximum number of sessions that can connect to the database.
  - LICENSE_SESSIONS_WARNING: sessions connecting above the number specified by this parameter receive a warning message.
  - LICENSE_MAX_USERS: the maximum number of users that can be created in the database.
  - Usually either LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING are used for license tracking, or LICENSE_MAX_USERS is used, but not both.
- After creating the parameter file, the DBA executes the create database command, which creates all physical database structures, along with logical structures such as the SYSTEM tablespace and an initial rollback segment, as well as the SYS and SYSTEM users. At the conclusion of the create database statement, the database is created and open.
- The default password for SYS is CHANGE_ON_INSTALL. The default password for SYSTEM is MANAGER.
- The create database command also creates the Oracle data dictionary for that database. This task is done first so the dictionary can capture the other database objects created in the database.
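Because those default passwords are well known, a common first step after creation is to change them; the new passwords below are placeholders:

```sql
-- Replace the documented defaults (CHANGE_ON_INSTALL and MANAGER)
-- with site-chosen passwords; the values here are illustrative.
ALTER USER sys IDENTIFIED BY new_sys_password;
ALTER USER system IDENTIFIED BY new_system_password;
```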
- The number of datafiles and redo log files created for the life of the database can be limited with the maxdatafiles and maxlogfiles options of the create database statement.
- The size of a datafile is fixed, unless the autoextend option is used.
- The size of a control file is directly related to the number of datafiles and redo logs for the database.
- At least two scripts are used to create the data dictionary:
  - catalog.sql: creates all data dictionary tables and views that track database objects like tables, indexes, and rollback segments.
  - catproc.sql: creates dictionary views for all procedural aspects of Oracle, like PL/SQL packages, procedures, functions, triggers, and snapshots, as well as special utility PL/SQL packages such as those used to manage pipes, alerts, locks, etc.
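Running those two dictionary scripts can be sketched as follows; in SQL*Plus the ? character expands to the Oracle home directory, and the scripts live under its rdbms/admin subdirectory:

```sql
-- Run as SYS (connect internal) after CREATE DATABASE completes.
@?/rdbms/admin/catalog.sql   -- data dictionary tables and views
@?/rdbms/admin/catproc.sql   -- PL/SQL packages and procedural support
```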