
1-INTRODUCTION
==============
ORACLE PRODUCTS AND SERVICES
----------------------------
1)ORACLE DATABASE:-the first database designed for enterprise grid computing.
2)ORACLE APPLICATION SERVER:-a Java 2 Platform, Enterprise Edition (J2EE)-certified server that integrates everything needed to develop and deploy web-based applications. it deploys e-business portals, web services and transactional applications, including PL/SQL, Oracle Forms and J2EE-based applications.
3)ORACLE APPLICATIONS:-a complete set of business applications for managing and automating the processes of your organisation.
4)ORACLE COLLABORATION SUITE:-a single integrated system for your organisation's communication data, e.g. voice, e-mail, fax and calendar information.
5)ORACLE DEVELOPER SUITE:-a complete integrated environment that combines application development and business intelligence tools.
6)ORACLE SERVICES:-services such as Oracle Consulting and Oracle University provide expertise for your Oracle projects.

the Global Grid Forum (GGF) is the standards body that develops standards for grid computing. Oracle has created grid computing infrastructure software that balances all types of workload across servers and enables all those servers to be managed as one complete system. Oracle's grid computing technology includes
a)Automatic Storage Management (ASM)
b)Real Application Clusters (RAC)
c)Oracle Streams
d)Enterprise Manager Grid Control

ASM:-spreads database data across all disks, creates and maintains a storage grid, and provides the highest input/output throughput with minimal management cost. data availability increases with optional mirroring, and you can add or drop disks online.

RAC:-runs and scales all application workloads on a cluster of servers and offers the following features
*INTEGRATED CLUSTERWARE:-includes functionality for cluster connectivity, messaging and locking, cluster control and recovery.
*AUTOMATIC WORKLOAD MANAGEMENT:-rules can be defined to allocate processing resources to each service, both during normal operations and in response to failures. these rules can be dynamically modified to meet changing business needs. this dynamic allocation of resources within a database grid is unique to Oracle RAC.
*AUTOMATIC EVENT NOTIFICATION TO THE MID-TIER:-when the cluster configuration changes, the mid tier can immediately adapt to an instance failover or to the availability of a new instance. this enables end users to continue working after an instance failover, without the delays typically caused by network timeouts.

ORACLE STREAMS:-provides a unified framework for information sharing, combining message queuing, data replication, event notification, data warehouse loading and publish/subscribe functionality into a single technology. Oracle Streams can automatically capture database changes, propagate the changes to subscribing nodes, and detect and resolve data update conflicts.

ENTERPRISE MANAGER GRID CONTROL:-manages grid-wide operations, including managing the entire stack of software, provisioning users, cloning databases and managing patches. it can monitor the performance of all applications from the point of view of the end users.

2-INSTALLING ORACLE DATABASE SOFTWARE
=====================================
TASKS OF AN ORACLE DATABASE ADMINISTRATOR
-----------------------------------------
1)evaluating the database server hardware.
2)installing the Oracle software.
3)planning the database and the security strategy.
4)creating, migrating and opening the database.
5)backing up the database.
6)enrolling system users and planning for their Oracle network access.
7)implementing the database design.
8)recovering from database failure.
9)monitoring database performance.

TOOLS USED TO ADMINISTER AN ORACLE DATABASE
-------------------------------------------
1)Oracle Universal Installer:-installs your Oracle software and options.
2)Database Configuration Assistant:-creates a database from Oracle-supplied templates.
3)Database Upgrade Assistant:-guides you through the upgrade of your existing database to a new Oracle release.
4)Oracle Net Manager:-used to configure network connectivity for your Oracle database and applications.
5)Oracle Enterprise Manager:-combines a graphical console, agents, common services and tools to provide a comprehensive system management platform for managing Oracle products.
  the three main Enterprise Manager tools are
  a)Enterprise Manager Database Console:-used to administer one database.
  b)Enterprise Manager Grid Control:-used to administer many databases at the same time.
  c)Enterprise Manager Java Console:-used to access tools that are not web enabled.
6)SQL*Plus and iSQL*Plus:-standard command line interfaces for managing the database.
7)Recovery Manager (RMAN):-an Oracle tool that provides a complete solution for the backup, restoration and recovery needs of the entire database or of specific database files.
8)Oracle Secure Backup:-provides tape backup management for the Oracle ecosystem.
9)Data Pump:-provides high-speed transfer of data from one database to another.
10)SQL*Loader:-loads data from external flat files into the database.
11)command-line tools.

OFA STANDS FOR OPTIMAL FLEXIBLE ARCHITECTURE. it is designed to
*organise large amounts of software.
*facilitate routine administrative tasks.
*facilitate switching between multiple Oracle databases.
*manage and administer database growth adequately.
*help eliminate fragmentation of free space.

*orapwd is the utility used to create the password file.
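once the password file exists and administrators have been granted SYSDBA or SYSOPER, you can check its contents from SQL*Plus. a minimal sketch (V$PWFILE_USERS is the standard view; the rows you see depend on your own database):
SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;
each row is a user recorded in the password file together with the administrative privilege held.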

*when you connect AS SYSDBA you are connected to the SYS schema, but when you connect AS SYSOPER you are connected to the PUBLIC schema.

the REMOTE_LOGIN_PASSWORDFILE initialisation parameter has two settings
1)EXCLUSIVE:-only one database can use the password file. with an exclusive password file you can grant the SYSDBA or SYSOPER privilege to any user.
2)SHARED:-more than one instance can use the password file. you cannot add users to a shared password file.
the ENTRIES argument of the ORAPWD utility controls how many users can be added to the password file.

when a database is created, two administrative users are created automatically
1)SYS:-default password CHANGE_ON_INSTALL. it is the owner of the database data dictionary.
2)SYSTEM:-default password MANAGER. it is the owner of internal tables and views used by Oracle tools.

ORACLE TOOLS ARE
1)OUI 2)ODBC 3)PASSWORD FILE 4)SQL*PLUS 5)ENTERPRISE MANAGER

ORACLE ENTERPRISE MANAGER
it is a highly scalable three-tier structure. the three tiers are as follows
1)CONSOLE:-comprises the clients and management applications.
2)ORACLE MANAGEMENT SERVER (OMS):-holds administrative user accounts, processes functions such as jobs and events, and manages the flow of information between the console (first tier) and the managed nodes (third tier).
  ORACLE ENTERPRISE MANAGER REPOSITORY:-a set of tables used by the Oracle Management Server (OMS) as its persistent back-end store.
3)NODES:-comprises the managed nodes, which contain targets such as databases and other managed services. each node has an Oracle Intelligent Agent, which communicates with the OMS and performs the tasks sent by the console and client applications.

DBA TOOLS
---------
INSTANCE MANAGER:-performs startup and shutdown and monitors databases.
SECURITY MANAGER:-used to manage users and privileges.
STORAGE MANAGER:-maintains tablespaces, datafiles, rollback segments and log groups.
SCHEMA MANAGER:-creates and maintains objects such as tables, indexes and views.
SQL*PLUS WORKSHEET:-provides the capability to issue any SQL statement against any database.

3-CREATING AN ORACLE DATABASE
=============================
PLANNING THE DATABASE
---------------------
it is important to plan the logical structure of the database. before creating a database you should know how many datafiles will make up a tablespace, which type of information will be stored in the database, and on which disk drives the datafiles will be physically stored. in distributed environments this planning is extremely important, because the physical location of frequently accessed data dramatically affects application performance. during the planning stage, also develop a backup strategy.
before creating a database you must plan for the following
1)the logical storage structure of the database.
2)the overall database design.
3)a backup strategy for the database.

DATABASE EXAMPLES
-----------------
A)DATA WAREHOUSE
---research and marketing data
---state or federal tax payments
---professional licensing (doctors, nurses etc.)
B)TRANSACTION PROCESSING
--store checkout register systems
--automatic teller machine (ATM) transactions
C)GENERAL PURPOSE
--retail billing, e.g. the software house of a nursery.

4-MANAGING THE ORACLE INSTANCE
==============================
MANAGEMENT FRAMEWORK
--------------------
the three components are
1)the database instance that is being managed.
2)the listener that allows connections to the database.
3)the management interface
  --database control
  --management agent (when using Grid Control)

STARTING AND STOPPING DATABASE CONTROL
--------------------------------------
oracle provides a standalone management console called Database Control. from Database Control you can manage only one database.
to start the dbconsole process use the following command:   emctl start dbconsole
to stop the dbconsole process use the following command:    emctl stop dbconsole
to view the status of the dbconsole process:                emctl status dbconsole
Database Control uses a server-side agent process. this agent process automatically starts and stops when the dbconsole process is started or stopped.

ORACLE ENTERPRISE MANAGER
-------------------------
when you install an Oracle database, OUI automatically installs Oracle Enterprise Manager. its web-based Database Control serves as the primary tool for managing your database. it provides a graphical interface for doing almost any task that you would have to do as a DBA.

SQL*PLUS AND iSQL*PLUS
----------------------
these tools enable you to perform many database management operations as well as select, insert, update or delete data in the database.
iSQL*Plus is a component of the SQL*Plus product. iSQL*Plus has a server-side listener process which must be started before you connect from the browser; the command to start this process is
isqlplusctl start
after the process is started, connect to it by entering the following URL in the browser
http://hostname:port/isqlplus
you can call SQL*Plus from a shell script or BAT file by invoking sqlplus and using the operating system scripting syntax to pass parameters. you can also call an existing SQL script file from within SQL*Plus; at the command line this is done simply by using the '@' operator.
SQL>@script.sql

INITIALISATION PARAMETER FILES
------------------------------
when you start an instance, an initialisation parameter file is read. there are two types of parameter files
1)SERVER PARAMETER FILE:-a binary file that is written to and read by the database server and must not be edited manually. it is referred to as the 'SPFILE' (server parameter file). the default name of this file, which is automatically sought at startup, is SPFILE<SID>.ORA.
2)TEXT INITIALISATION PARAMETER FILE:-a parameter file that can be read by the server but is not written by the server. this file is automatically sought at startup when an SPFILE is not found; its default name is INIT<SID>.ORA.
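a text parameter file can be converted into a server parameter file (and back) from SQL*Plus. a minimal sketch, assuming the default file locations and a SYSDBA connection:
SQL> CREATE SPFILE FROM PFILE;
SQL> CREATE PFILE FROM SPFILE;
the first statement is typically run once; the instance is then restarted so that it starts from the SPFILE.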

INITIALISATION PARAMETERS
-------------------------
there are two kinds of initialisation parameters
1)basic:-those that you are likely to set to keep your database running with good performance. it is necessary to set and tune only about 32 basic parameters. all other parameters are considered advanced.
EXAMPLES
1)CONTROL_FILES:-specifies one or more control file names. the range of values is from 1 to 8 file names.
2)DB_BLOCK_SIZE:-specifies the size (in bytes) of an Oracle database block. this value is set at the time of database creation and cannot be changed; the default is 8 KB (system dependent).
3)DB_CACHE_SIZE:-specifies the size of the standard block buffer cache. it should be at least 16 MB; the default value is 48 MB.
4)DB_FILE_MULTIBLOCK_READ_COUNT:-specifies the maximum number of blocks read during one I/O operation; the default value is 8.
5)DB_FILES:-specifies the maximum number of datafiles that can be opened for the database; the default value is OS dependent.
6)PGA_AGGREGATE_TARGET:-specifies the amount of PGA memory to be allocated to all server processes attached to the instance. this memory does not reside in the SGA. the minimum value is 10 MB and the maximum is 4096 GB; when the parameter is set to a nonzero value, automatic tuning of SQL work areas is enabled, and setting it to 0 disables it.
7)PROCESSES:-specifies the maximum number of OS user processes that can be simultaneously connected to the Oracle server.
8)SHARED_POOL_SIZE:-specifies the size of the shared pool in bytes. the shared pool contains objects such as shared cursors, stored procedures, control structures and parallel execution message buffers. a larger value can improve performance in multi-user systems.
9)UNDO_MANAGEMENT:-specifies which undo management mode the system should use. when set to AUTO, the instance starts in System Managed Undo (SMU) mode; otherwise it starts in Rollback Undo (RBU) mode. in RBU mode, undo space is allocated externally as rollback segments; in SMU mode, undo space is allocated as an undo tablespace.
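parameters can be inspected, and dynamic ones changed without a restart, directly from SQL*Plus. a minimal sketch (the value is only illustrative; SCOPE=BOTH assumes the instance was started from an SPFILE):
SQL> SHOW PARAMETER db_file_multiblock_read_count
SQL> ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;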

STARTING UP AN ORACLE DATABASE INSTANCE
---------------------------------------
starting an instance includes the following tasks
*searching <ORACLE_HOME>/database (on Linux/UNIX, <ORACLE_HOME>/dbs) for a parameter file, in this order
  --spfile<SID>.ora
  --if not found, spfile.ora
  --if not found, init<SID>.ora
  specifying the PFILE parameter with STARTUP overrides this default behavior.
*allocating the SGA
*starting the background processes
*opening the alert<SID>.log file and the trace files

the following stages describe starting up an instance
NOMOUNT:-the instance is typically started in NOMOUNT mode only during database creation, during re-creation of the control files, and during certain backup and recovery scenarios.
MOUNT:-mounting the database includes the following tasks
*associating the database with the previously started instance
*locating and opening the control files specified in the parameter file
*reading the control files to obtain the names and status of the datafiles and online redo log files
the database must be mounted, but not opened, during the following tasks
*renaming datafiles
*enabling and disabling online redo log file archiving options
*performing full database recovery
OPEN:-opening the database includes the following tasks
*opening the online datafiles
*opening the online redo log files
if any of the online redo log files or datafiles are not present when the database is opened, the Oracle server returns an error.
you can start up a database instance in restricted mode so that it is available only to users with administrative privileges.
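the stages can be stepped through explicitly from SQL*Plus when needed, for example to mount without opening for a maintenance task. a minimal sketch, assuming a SYSDBA connection:
SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;
to open directly with only administrative users allowed, use STARTUP RESTRICT instead.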

DATABASE SHUTDOWN MODES
-----------------------
1)ABORT:-performs the least amount of work before shutting down. this is typically used when no other form of shutdown works, or when you need to shut down immediately because of an impending situation such as notice of a power outage within seconds.
2)IMMEDIATE:-the most typically used option. uncommitted transactions are rolled back.
  *current SQL statements being processed by the Oracle database are not completed.
  *the Oracle server does not wait for users to disconnect.
  *the Oracle server rolls back active transactions and disconnects all users.
  *the Oracle server closes and dismounts the database before shutting down the instance.
  *the next startup does not require instance recovery.
3)TRANSACTIONAL:-allows transactions to finish.
  *no client can start a new transaction.
  *a client is disconnected when it ends its transaction.
  *when all transactions are completed, the shutdown starts immediately.
  *the next startup does not require instance recovery.
4)NORMAL:-waits for sessions to disconnect. it proceeds with the following conditions
  *no new connections can be made.
  *the Oracle server waits for all users to disconnect.
  *database and redo buffers are written to disk.
  *the background processes are terminated and the SGA is removed from memory.
  *the Oracle server closes and dismounts the database.
  *the next startup does not require instance recovery.
SHUTDOWN ABORT in detail
  *current SQL statements being processed by the Oracle server are immediately terminated.
  *the database does not wait for currently connected users to disconnect.
  *the database and redo buffers are not written to disk.
  *uncommitted transactions are not rolled back.
  *the instance is terminated without closing the files.
  *the database is not closed or dismounted.
  *the next startup requires instance recovery.
  *ABORT is the fastest mode of shutdown, whereas NORMAL is the slowest.
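from a SYSDBA session the mode is simply given as an argument to the SHUTDOWN command, for example:
SQL> SHUTDOWN IMMEDIATE
SHUTDOWN TRANSACTIONAL, SHUTDOWN NORMAL and SHUTDOWN ABORT work the same way; with no argument, NORMAL is the default.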

ALERT LOG FILE
--------------
the alert log file of the database is a chronological log of messages and errors, including the following
*any non-default initialisation parameters used at startup.
*all internal errors.
*administrative operations such as the SQL statements CREATE, DROP and ALTER DATABASE, and the Enterprise Manager or SQL*Plus statements STARTUP, SHUTDOWN, ARCHIVE LOG and RECOVER.
*several messages and errors relating to the functioning of the shared server and dispatcher processes.
*errors during the automatic refresh of a materialized view.
the alert log file is on the server and is located in the directory specified by the BACKGROUND_DUMP_DEST initialisation parameter.

DYNAMIC PERFORMANCE VIEWS
-------------------------
dynamic performance views provide access to information about changing states and conditions in the database. they are based on virtual tables that are built from memory structures inside the server. they are not conventional tables that reside in the database, which is why some of them can show data before the database is mounted or open.
they include information about
*sessions
*file status
*progress of jobs and tasks
*locks
*backup status
*memory usage and allocation
*system and session parameters
*SQL execution
*statistics and metrics
the DICT and DICT_COLUMNS views also contain the names of the dynamic performance views.
*these views are owned by the SYS user.
*different views are available at different times
  --after the instance has been started
  --after the database is mounted
  --after the database is open
*you can query V$FIXED_TABLE to see all the view names.
*read consistency is not guaranteed on these views because they are dynamic.
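for example, to list the dynamic performance views themselves (a minimal sketch; V$FIXED_TABLE is the standard catalogue of the fixed objects):
SQL> SELECT name FROM v$fixed_table WHERE name LIKE 'V$%' ORDER BY name;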

5-MANAGING DATABASE STORAGE STRUCTURES
======================================
STORAGE STRUCTURES
------------------
a database is divided into logical storage units called tablespaces; each tablespace contains many logical data blocks. a specific number of contiguous logical blocks forms an extent, and a set of extents allocated for a certain logical structure forms one segment. an Oracle data block is the smallest unit of logical I/O.

HOW TABLE DATA IS STORED
------------------------
when a table is created, a segment is created to hold its data. a tablespace contains a collection of segments. a row of a table is ultimately stored in a data block in the form of a row piece; the entire row may not be stored in the same place. this happens when the row is too large to fit into a single block.

DATA BLOCK CONTENTS
-------------------
BLOCK HEADER:-contains the segment type (table or index), the data block address, the table directory, the row directory, and transaction slots of 23 bytes each, which are used when modifications are made to rows in the block.
ROW DATA:-the actual data of the rows in the block.
FREE SPACE:-enables the row data and the header data to grow when necessary. initially the free space in a block is contiguous, but deletions and updates may fragment the free space in the block; it is coalesced by the Oracle server when necessary.

TABLESPACES AND DATAFILES
-------------------------
the Oracle database stores data logically in tablespaces and physically in datafiles.
TABLESPACES
*can belong to only one database.
*consist of one or more datafiles.
*are further divided into logical units of storage.
DATAFILES
*can belong to only one tablespace and only one database.
*are a repository for schema object data.
the simplest Oracle database would have two tablespaces, SYSTEM and SYSAUX, each with one datafile. another database might have three tablespaces, each with two datafiles. a single database can have as many as 65,534 datafiles.
ORACLE MANAGED FILES (OMF):-eliminates the need for you to manage the OS files that comprise an Oracle database. you specify operations in terms of database objects rather than file names. a database can have a mixture of Oracle-managed and unmanaged files.
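a minimal sketch of creating a small tablespace with one datafile (the tablespace name, file path and sizes are only illustrative):
CREATE TABLESPACE app_data
  DATAFILE '/u01/app/oracle/oradata/orcl/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
EXTENT MANAGEMENT LOCAL gives a locally managed tablespace, which the next section recommends.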

SPACE MANAGEMENT IN TABLESPACES
-------------------------------
LOCALLY MANAGED TABLESPACE
*free extents are managed within the tablespace itself.
*a bitmap is used to record free extents.
*each bit corresponds to a block or a group of blocks.
*the bit value indicates whether the extent is used or free.
*the use of locally managed tablespaces is recommended.
DICTIONARY MANAGED TABLESPACE
*free extents are managed by the data dictionary.
*the appropriate tables are updated when extents are allocated or deallocated.
*these tablespaces are supported only for backward compatibility.
if you want to convert a dictionary managed tablespace into a locally managed one, use the DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL procedure.
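a minimal sketch of that conversion (the tablespace name is only illustrative):
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('APP_DATA')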

TABLESPACES IN THE PRECONFIGURED DATABASE
-----------------------------------------
1)SYSTEM:-used by the Oracle server to manage the database. it contains the data dictionary and tables that hold administrative information. these are all in the SYS schema and can be accessed only by the SYS user or by other administrative users with the required privileges.
2)SYSAUX:-an auxiliary tablespace to the SYSTEM tablespace.
3)TEMP:-the temporary tablespace is used when you execute a SQL statement that requires the creation of temporary segments. each user is assigned a temporary tablespace. the best practice is to define a default temporary tablespace for the database, which is then assigned to newly created users.
4)UNDOTBS1:-the undo tablespace used by the database server to store undo information. this tablespace is created at the time of database creation.
5)USERS:-used to store permanent user objects and data. in the preconfigured database, USERS is the default tablespace for all objects created by non-system users. for the SYS and SYSTEM users, the default permanent tablespace is SYSTEM.
6)EXAMPLE:-contains the sample schemas that can be installed when you create the database.

ALTERING A TABLESPACE
---------------------
you can alter a tablespace by
*renaming it.
*changing its status:-a tablespace is in one of three states; some states may not be available, because availability depends on the type of tablespace
  1)READ WRITE:-the tablespace is online and can be read from and written to.
  2)READ ONLY:-in this state existing transactions can be completed (committed or rolled back), but no further DML operations are allowed on objects in the tablespace. you cannot make the SYSTEM or SYSAUX tablespace read only.
  3)OFFLINE:-you can take an online tablespace offline so that this portion of the database is unavailable for use.

when you take a tablespace offline, you can use the following options
---NORMAL:-a tablespace can be taken offline normally if no error conditions exist for any of the datafiles of the tablespace.
---TEMPORARY:-a tablespace can be taken offline temporarily even if there are errors on one or more datafiles of the tablespace.
---IMMEDIATE:-a tablespace can be taken offline immediately, without the Oracle database taking a checkpoint on any of its datafiles. when you specify OFFLINE IMMEDIATE, media recovery of the tablespace is required before it can be brought back online. you cannot take a tablespace offline immediately if the database is running in NOARCHIVELOG mode.
---FOR RECOVER:-this mode is deprecated and is supported only for backward compatibility.
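status changes are plain ALTER TABLESPACE statements; a minimal sketch (the tablespace name is only illustrative):
SQL> ALTER TABLESPACE app_data READ ONLY;
SQL> ALTER TABLESPACE app_data READ WRITE;
SQL> ALTER TABLESPACE app_data OFFLINE NORMAL;
SQL> ALTER TABLESPACE app_data ONLINE;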

CHANGING THE SIZE:-you can add space to a tablespace by adding a datafile to the tablespace or by changing the size of an existing datafile in the tablespace.
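a minimal sketch of both approaches (paths and sizes are only illustrative):
SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/app_data01.dbf' RESIZE 200M;
SQL> ALTER TABLESPACE app_data ADD DATAFILE '/u01/app/oracle/oradata/orcl/app_data02.dbf' SIZE 100M;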

THRESHOLDS:-you have three options for changing the space-usage thresholds of your tablespace
1)use database default thresholds:-this uses the preset thresholds.
2)specify thresholds:-this enables you to set your own thresholds.
3)disable thresholds:-this turns off space-usage alerts for this tablespace.

ACTIONS WITH THE TABLESPACE
---------------------------
1)add datafile:-adds a datafile to the tablespace.
2)create like:-creates another tablespace by using this tablespace as a template.
3)generate DDL:-generates the data definition language that creates the tablespace.
4)make locally managed:-converts the tablespace to locally managed if it is currently dictionary managed. this conversion is one way.
5)make read only:-stops all writes to the tablespace.
6)make writable:-allows DML and other write activities on the objects in the tablespace.
7)place online:-brings a currently offline tablespace online.
8)reorganise:-starts the reorganise wizard to move objects around within the tablespace in order to reclaim space.
9)run segment advisor:-starts the segment advisor, which you can use to check for space fragmentation within objects. at the tablespace level, advice is generated for every segment in it.
10)show dependencies:-shows the objects that this tablespace depends upon.
11)show tablespace contents:-shows information about all the segments present in the tablespace.

12)take offline:-makes a currently available tablespace unavailable.

DROPPING A TABLESPACE
---------------------
when you drop a tablespace, the file pointers in the control file of the associated database are removed. you cannot drop a tablespace that contains active segments; it is best to take the tablespace offline before dropping it.

VIEWING TABLESPACE INFORMATION
------------------------------
tablespace and datafile information can be viewed by querying the following
*tablespace information
  --DBA_TABLESPACES
  --V$TABLESPACE
*datafile information
  --DBA_DATA_FILES
  --V$DATAFILE
*tempfile information
  --DBA_TEMP_FILES
  --V$TEMPFILE

ENLARGING A DATABASE
--------------------
the following are the ways to enlarge a database
a)creating a new tablespace
b)adding a datafile to an existing tablespace
c)increasing the size of a datafile
d)providing for the dynamic growth of a datafile
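option d) is usually done with AUTOEXTEND; a minimal sketch (path and sizes are only illustrative):
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/app_data01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;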

AUTOMATIC STORAGE MANAGEMENT
----------------------------
*ASM is a high-performance, portable cluster file system.
*it manages Oracle database files.
*it spreads data across disks to balance the load, and enables the DBA to increase the size of the database without having to shut down the database to adjust the storage allocation.
*it mirrors data to provide fault tolerance. data management is done by selecting the desired reliability and performance characteristics for classes of data, rather than with human interaction on a per-file basis.
*it solves many storage management challenges.

ASM KEY FEATURES AND BENEFITS
-----------------------------
*stripes files rather than logical volumes, i.e. it divides each file into extents and spreads the extents evenly across the disks. when the storage capacity changes, ASM does not restripe all of the data; it moves an amount of data proportional to the amount of storage added or removed, so that the files stay evenly distributed and the load stays balanced across the disks. this is done while the database is active.
*provides online disk reconfiguration and dynamic rebalancing.
*allows an adjustable rebalancing speed.
*provides redundancy on a per-file basis, i.e. mirroring is applied per file rather than per volume, so the same disk can contain mirrored and non-mirrored data. this allows mirroring protection without the need to purchase a third-party logical volume manager.
*supports only Oracle database files.
*is cluster aware, i.e. it supports RAC and eliminates the need for a cluster logical volume manager or a cluster file system.
*is automatically installed.

ASM CONCEPTS
------------
ASM does not eliminate any pre-existing datafile functionality. you can create new files as ASM files and leave the old files to be administered in the old way, or you can eventually migrate them to ASM files. any database file can be stored as an ASM file.
at the top of the hierarchy are ASM disk groups. any single ASM file can be stored in only one disk group, while a disk group may contain files from several databases. a disk group is made of multiple ASM disks, and each ASM disk belongs to only one disk group. ASM files are always spread across all the ASM disks in the disk group. ASM disks are partitioned into allocation units (AU) of one megabyte each; an allocation unit is the smallest contiguous disk space that ASM allocates. ASM does not allow an Oracle data block to be split across allocation units.
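once an ASM instance and a disk group exist, database files are placed in ASM simply by naming the disk group instead of an OS path. a minimal sketch, assuming a disk group called +DATA has already been created (the disk group and tablespace names are only illustrative):
SQL> CREATE TABLESPACE asm_demo DATAFILE '+DATA' SIZE 100M;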

6-ADMINISTERING USER SECURITY
=============================
DATABASE USER ACCOUNT:-a means to organise the ownership of, and access to, database objects.
PASSWORD:-an authentication method used by the Oracle database.
PRIVILEGE:-the right to execute a particular type of SQL statement or to access another user's objects.
ROLE:-a named group of related privileges that can be granted to users or to other roles.
PROFILE:-imposes a named set of resource limits on database usage and instance resources.
QUOTA:-a space allowance in a given tablespace. this is one of the ways by which you can control resource usage by users.

DATABASE USER ACCOUNTS
----------------------
each database user account has
*a unique username:-it cannot exceed 30 bytes, cannot contain special characters and must start with a letter.
*an authentication method:-apart from passwords, Oracle 10g supports several authentication methods, such as biometric, certificate and token authentication.
*a default tablespace:-the place where the user creates database objects.
*a temporary tablespace:-the place where the user creates temporary objects such as sorts and temporary tables.
*a user profile:-a set of resource and password restrictions assigned to the user.
*a consumer group:-used by the Resource Manager.
*a lock status:-users can access only unlocked accounts.

PREDEFINED ACCOUNTS
-------------------
1)THE SYS ACCOUNT
  *is granted the DBA role.
  *has all privileges with ADMIN OPTION.
  *is required for startup, shutdown and some maintenance commands.
  *owns the data dictionary.
  *owns the Automatic Workload Repository (AWR).
2)THE SYSTEM ACCOUNT:-is granted the DBA role but not the SYSDBA privilege.
these two accounts are not used for routine operations and cannot be deleted.

CREATING A USER
---------------
you can create a user through Enterprise Manager by selecting ADMINISTRATION > SCHEMA > USERS AND PRIVILEGES > USERS and then clicking the Create button.
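the same thing can be done in SQL. a minimal sketch (the user name, password and tablespace names are only illustrative):
CREATE USER jsmith IDENTIFIED BY "Str0ngPwd"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA 50M ON users;
GRANT CREATE SESSION TO jsmith;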

AUTHENTICATING USERS
--------------------
authentication means verifying the identity of someone (a user, device or other entity) who wants to use data, resources or applications. after authentication, the authorisation process can allow or limit the levels of access and the actions permitted to that entity. when you create a user you must decide on the authentication technique; the technique can be modified later.
A)PASSWORD:-this is also referred to as authentication by the Oracle database. passwords are automatically and transparently encrypted during network (client/server and server/server) connections, using a modified Data Encryption Standard (DES) algorithm, before they are sent across the network.
B)EXTERNAL:-this is also referred to as authentication by the operating system. the user can connect to the Oracle database without specifying a username or password. with external authentication your database relies on the underlying operating system or network authentication service to restrict access to database accounts. to use it, set the OS_AUTHENT_PREFIX initialisation parameter and use that prefix in the Oracle usernames. the default value of this parameter is 'OPS$' for backward compatibility with previous versions of the Oracle software. the Oracle database compares the prefixed username with the Oracle username in the database when a user attempts to connect.
C)GLOBAL:-available with the Oracle Advanced Security option. it is very strong and allows users to be identified through biometrics, X.509 certificates, token devices and Oracle Internet Directory.
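a minimal sketch of an externally authenticated account, assuming the default OS_AUTHENT_PREFIX of OPS$ and an OS user called jsmith (both illustrative):
SQL> CREATE USER ops$jsmith IDENTIFIED EXTERNALLY;
SQL> GRANT CREATE SESSION TO ops$jsmith;
the OS user jsmith can then connect with "sqlplus /", without giving a database password.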

ADMINISTRATOR AUTHENTICATION
----------------------------
OPERATING SYSTEM SECURITY
---DBAs must have the OS privileges needed to create and delete files.
---typical database users must not have the OS privileges to create or delete database files.
ADMINISTRATOR SECURITY
---SYSDBA and SYSOPER connections are authorised by password file or OS authentication.
---password file authentication records the DBA users by name.
---OS authentication does not record the specific user.
---OS authentication takes precedence over password file authentication for the SYSDBA and SYSOPER users.
---OS authentication is used if there is no password file, if the supplied username or password is not in the password file, or if no username and password is supplied.

UNLOCKING USER ACCOUNTS AND RESETTING PASSWORDS
-----------------------------------------------
during installation and database creation you can unlock and reset many of the user accounts supplied with the database.
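after creation, the same can be done at any time with ALTER USER; a minimal sketch (the account name and password are only illustrative):
SQL> ALTER USER hr IDENTIFIED BY "NewPwd123" ACCOUNT UNLOCK;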

PRIVILEGES
----------
there are two types of privileges
1)SYSTEM PRIVILEGES
-------------------
these enable a user to perform particular actions in the database. each system privilege allows the user to perform a particular database operation or class of database operations. there are more than a hundred distinct system privileges. granting a system privilege with the ANY clause means that the privilege crosses schema lines.
the following system privileges should be granted only to administrators
a)RESTRICTED SESSION:-allows you to log in even if the database is opened in restricted mode.
b)SYSDBA and SYSOPER:-allow you to start up, shut down and perform recovery and other administrative tasks in the database. SYSOPER allows a user to perform basic operational tasks without the ability to look at user data. it includes the following system privileges
  *STARTUP AND SHUTDOWN
  *CREATE SPFILE
  *ALTER DATABASE OPEN/MOUNT/BACKUP
  *ALTER DATABASE ARCHIVELOG
  *ALTER DATABASE RECOVER (complete recovery only)
  *RESTRICTED SESSION
the SYSDBA privilege additionally authorises incomplete recovery and deletion of the database.
c)DROP ANY object:-allows a user to delete objects owned by any other schema.
d)CREATE, MANAGE, DROP AND ALTER TABLESPACE:-allow the user to administer tablespaces.
e)CREATE ANY DIRECTORY:-the Oracle database allows PL/SQL to call external code (for example a C library); the operating system directory where the code resides must be linked to a virtual Oracle directory object. with this privilege a user could call insecure code objects. this privilege also allows the user to create a directory object with read and write access to any directory that the Oracle software owner can access; with it, a user could attempt to read and write any database file, such as datafiles, redo log files and the audit log.
f)GRANT ANY OBJECT PRIVILEGE:-allows you to grant permissions on objects that you do not own.
g)ALTER DATABASE and ALTER SYSTEM:-very powerful privileges that allow you to modify the database and the Oracle instance, for example by renaming a datafile or flushing the buffer cache.
2)OBJECT PRIVILEGES
-------------------
allow a user to perform actions on a specific object such as a table, view, sequence, procedure, function or package. object privileges can be granted by the owner of the object, by an administrator, or by someone who has been explicitly given permission to grant privileges on the object.

REVOKING SYSTEM PRIVILEGES
--------------------------
a user with the ADMIN OPTION for a system privilege can revoke that privilege from any database user. the revoker does not have to be the same user who originally granted the privilege. there are no cascading effects when a system privilege is revoked.

REVOKING OBJECT PRIVILEGES
--------------------------
cascading effects can be observed when object privileges relating to DML operations are revoked. revoking an object privilege also cascades when the privilege was given WITH GRANT OPTION.
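a minimal sketch of granting and revoking both kinds of privilege (the user and table names are only illustrative):
GRANT CREATE TABLE TO jsmith;
GRANT SELECT, UPDATE ON hr.employees TO jsmith;
REVOKE UPDATE ON hr.employees FROM jsmith;
REVOKE CREATE TABLE FROM jsmith;
the first and last statements deal with a system privilege, the middle two with object privileges.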

BENEFITS OF ROLES
-----------------
a)easier privilege management:-rather than granting the same set of privileges to different users, you can grant the privileges to a role and then grant the role to the users.
b)dynamic privilege management:-if the privileges associated with a role are modified, all users who are granted the role acquire the modified privileges automatically and immediately.
c)selective availability of privileges:-roles can be enabled and disabled to turn privileges on and off temporarily.
a role can contain both SYSTEM and OBJECT privileges. a role can require a password to be enabled. roles are not owned by anyone and do not belong to any schema.

PREDEFINED ROLES
----------------
*CONNECT:-CREATE SESSION.
*RESOURCE:-CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER, CREATE TYPE.
*SCHEDULER_ADMIN:-CREATE ANY JOB, CREATE EXTERNAL JOB, CREATE JOB, EXECUTE ANY CLASS, EXECUTE ANY PROGRAM, MANAGE SCHEDULER.
*DBA:-most system privileges and several other roles. do not grant it to non-administrators.
*SELECT_CATALOG_ROLE:-no system privileges, but HS_ADMIN_ROLE and about 1,700 object privileges on the data dictionary.

FUNCTIONAL ROLES
----------------
*XDBADMIN:-contains the privileges required to administer the XML (Extensible Markup Language) database.
*AQ_ADMINISTRATOR_ROLE:-provides privileges to administer Advanced Queuing.
*HS_ADMIN_ROLE:-provides privileges to administer Heterogeneous Services.
you must not alter the privileges in the functional roles without the assistance of Oracle Support, because you may disable needed functionality.

SECURE ROLES
------------
roles are usually enabled by default. it is possible to
a)make a role non-default:-the user must explicitly enable the role before the role's privileges can be exercised.
b)have a role require additional authentication:-the default authentication for a role is none, but it is possible to have a role require additional authentication before it can be set.
c)create a secure application role, which can be enabled only by executing a PL/SQL procedure. the PL/SQL procedure can check the user's network address, which program the user is running, the time of day, and so on.
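a minimal sketch of creating a role, granting privileges to it and granting it to a user (all names are only illustrative):
CREATE ROLE app_clerk;
GRANT CREATE SESSION, CREATE TABLE TO app_clerk;
GRANT SELECT ON hr.employees TO app_clerk;
GRANT app_clerk TO jsmith;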

by default, Enterprise Manager automatically grants the CONNECT role to new users.

PROFILES AND USERS
------------------
users are assigned only one profile at any time. a profile imposes a named set of resource limits on database usage and instance resources; it also manages the account status and places limitations on the user's password. every user is assigned a profile. a profile cannot impose resource limitations on users until the RESOURCE_LIMIT parameter is set to TRUE.
a profile enables the administrator to control the following system resources
1)CPU:-CPU resources can be controlled on a per-session or per-call basis. if a session consumes more than the CPU time specified in the limit, that session receives an error message. per-call limitations prevent a single command from consuming too much CPU; if the command exceeds the limit, it is aborted and the user gets an error message.
2)NETWORK/MEMORY:-each database session consumes system memory resources and, if the session is from a user who is not local to the server, network resources. you can specify the following
  *CONNECT TIME:-indicates for how many minutes a user can stay connected before being automatically logged off.
  *IDLE TIME:-indicates for how many minutes a user's session can remain idle before being automatically logged off. idle time is not affected by long-running queries and other operations.
  *CONCURRENT SESSIONS:-indicates how many concurrent sessions can be created using the database user account.
  *PRIVATE SGA:-limits the amount of space consumed within the SGA for sorting, merging bitmaps and so on. this restriction takes effect only if the session uses a shared server.
3)DISK I/O:-limits the amount of data a user can read, at either the per-session or the per-call level. reads/session and reads/call place limitations on the total number of reads from both memory and disk.
profiles also allow composite limits, which are based on a weighted combination of CPU/session, reads/session, connect time and private SGA.

IMPLEMENTING PASSWORD SECURITY FEATURES
---------------------------------------
oracle password management is implemented with user profiles. the standard security features provided by profiles are as follows
1)ACCOUNT LOCKING:-enables automatic locking of an account for a set duration if a user fails to log in to the system in a specified number of attempts.
  *the FAILED_LOGIN_ATTEMPTS parameter specifies the number of failed login attempts before the account is locked.
  *the PASSWORD_LOCK_TIME parameter specifies the number of days for which the account is locked after that number of failed login attempts.
2)PASSWORD AGING AND EXPIRATION:-enables user passwords to have a lifetime, after which the password expires and must be changed.
  *the PASSWORD_LIFE_TIME parameter determines the lifetime of the password in days.
  *the PASSWORD_GRACE_TIME parameter specifies the grace period in days for changing the password after it expires.
  applications must catch the "password expired" warning message and handle the password change; otherwise the grace period expires and the user account is locked without the user knowing the reason.
3)PASSWORD HISTORY:-checks a new password to ensure that a password is not reused for a specified amount of time or a specified number of password changes.
  *PASSWORD_REUSE_TIME:-specifies that the user cannot reuse a password for a given number of days.
  *PASSWORD_REUSE_MAX:-specifies the number of password changes that are required before the current password can be reused.
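a minimal sketch of a profile combining these password limits (the profile name and values are only illustrative):
CREATE PROFILE app_user_prof LIMIT
  FAILED_LOGIN_ATTEMPTS 5
  PASSWORD_LOCK_TIME    1
  PASSWORD_LIFE_TIME    60
  PASSWORD_GRACE_TIME   7
  PASSWORD_REUSE_TIME   365
  PASSWORD_REUSE_MAX    10;
ALTER USER jsmith PROFILE app_user_prof;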

4)PASSWORD COMPLEXITY VERIFICATION:-performs complexity checks to ensure that the password meets certain rules. the PASSWORD_VERIFY_FUNCTION parameter names a PL/SQL function that performs the password complexity check before a password is assigned. this function must be owned by the SYS user and must return a boolean value.
SUPPLIED PASSWORD VERIFICATION FUNCTION: VERIFY_FUNCTION
this function enforces the following restrictions on the password
a)the minimum length is four characters.
b)the password cannot be the same as the username.
c)the password must have at least one alphabetic, one numeric and one special character.
d)the password must differ from the previous password by at least three letters.
the Oracle server provides this password complexity verification function, named VERIFY_FUNCTION. it is created with the <ORACLE_HOME>/rdbms/admin/utlpwdmg.sql script and must be created in the SYS schema. the utlpwdmg script also changes the DEFAULT profile, along the lines of
ALTER PROFILE <profile_name> LIMIT
  PASSWORD_LIFE_TIME  ---------
  PASSWORD_GRACE_TIME ---------
  ETC

ASSIGNING QUOTAS TO USERS
-------------------------
quotas can be
*a specified value in kilobytes or megabytes.
*UNLIMITED.
by default a user has no quota on any tablespace. there are three options for providing a quota on a tablespace
1)UNLIMITED:-the user can use as much space as is available in the tablespace.
2)VALUE:-a number of kilobytes or megabytes that the user can use. this does not guarantee that the space is set aside for the user.
3)UNLIMITED TABLESPACE system privilege:-this system privilege overrides all individual quotas and gives the user an unlimited quota on all tablespaces, including the SYSTEM and SYSAUX tablespaces.

you do not need a quota on any temporary tablespace or on any undo tablespace. quota is replenished when the user drops objects with the PURGE clause or when objects in the recycle bin are automatically purged.
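a minimal sketch of granting and changing a quota (names and sizes are only illustrative):
SQL> ALTER USER jsmith QUOTA 100M ON users;
SQL> ALTER USER jsmith QUOTA UNLIMITED ON app_data;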

7-MANAGING SCHEMA OBJECTS
=========================
SCHEMA
------
a schema is a collection of database objects owned by a particular user. for a production database the schema does not represent a person, but an application. schema objects are the logical structures that directly refer to the database's data. you can define configurations such that objects in a single schema are in different tablespaces, and a tablespace can hold objects from different schemas.
when we create a database, several schemas are created, of which two are the most important
1)the SYS schema:-contains the data dictionary.
2)the SYSTEM schema:-contains additional tables and views that store administrative information.
some sample schemas are also created, e.g. HR, OE, QS, PM and SH.

all integrity constraints can be in one of the following four states
1)DISABLE NOVALIDATE:-often used when the data comes from an already validated source and the table is read only, so no new data is being entered into the table.
2)DISABLE VALIDATE:-often used when the existing data is validated but no data is going to be modified and the index is not otherwise needed for performance.
3)ENABLE NOVALIDATE:-frequently used when existing constraint violations can be corrected and, at the same time, new violations must not be allowed to enter the system.
4)ENABLE VALIDATE:-both new and existing data conform to the constraint. this is the default state of a constraint.

CONSTRAINT CHECKING:-we can defer checking a constraint for validity until the end of the transaction.
NONDEFERRED CONSTRAINTS:-also known as immediate constraints, are enforced at the end of every DML statement. a constraint violation causes the statement to be rolled back. a constraint that is defined as nondeferrable cannot be changed to deferrable.
DEFERRED CONSTRAINTS:-are checked only when the transaction is committed. if a constraint violation is detected at commit time, the whole transaction is rolled back. these constraints are most useful when both the parent and child rows of a foreign key relationship are entered at the same time.
a constraint that is defined as deferrable can be specified as one of the following
INITIALLY IMMEDIATE:-specifies that by default it must function as an immediate constraint.
INITIALLY DEFERRED:-specifies that by default the constraint must be enforced only at the end of the transaction.
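a minimal sketch of a deferrable foreign key and of deferring it for one transaction (the table and constraint names are only illustrative):
ALTER TABLE order_items
  ADD CONSTRAINT fk_items_order FOREIGN KEY (order_id)
  REFERENCES orders (order_id)
  DEFERRABLE INITIALLY IMMEDIATE;

SET CONSTRAINT fk_items_order DEFERRED;
the parent and child rows can then be inserted in either order within the transaction, and the constraint is checked at COMMIT.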

DROPPING A TABLE
----------------
use the FLASHBACK TABLE command to recover a dropped table from the recycle bin.
the PURGE RECYCLEBIN command empties the recycle bin.
the CASCADE CONSTRAINTS option is required when dependent referential integrity constraints exist.
if you do not use the PURGE option, the space in the tablespace is still considered used.
TRUNCATING A TABLE:-releases the used space. the corresponding indexes are also truncated.
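a minimal sketch of the whole cycle (the table names are only illustrative):
DROP TABLE order_items_old;
FLASHBACK TABLE order_items_old TO BEFORE DROP;
DROP TABLE order_items_old PURGE;
PURGE RECYCLEBIN;
TRUNCATE TABLE order_items;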

TYPES OF INDEXES:-the most common types of indexes are
a)B-TREE:-organised in the form of a balanced tree and is the default type of index.
structure of a b-tree index: at the top of the index is the root, which contains entries that point to the next level of the index. at the next level are the branch blocks, which in turn point to blocks at the next level. at the lowest level are the leaf nodes, which contain index entries that point to rows in the table.
b-tree index leaf entry characteristics
*key values are repeated unless the index is compressed.
*there is no index entry corresponding to a row whose key columns are all NULL; hence a full table scan is performed when the WHERE condition tests for NULL.
*a restricted ROWID is used to point to the rows of the table, because all rows belong to the same segment.
effect of DML operations on an index
*an insert operation results in the insertion of an index entry in the appropriate block.
*deletion of a row results only in a logical deletion of the index entry; the space used by the deleted entry is not available to new entries until all the entries in the block are deleted.
*updates to the key columns result in a logical deletion and an insertion in the index.
the PCTFREE setting has no effect on the index except at the time of creation.
b)BITMAP:-bitmap indexes are more advantageous than b-tree indexes in certain situations
*when the table has millions of rows and the key column has low cardinality, i.e. there are few distinct values for the column.
*when the queries use combinations of WHERE clause conditions involving the OR operator.
*when there is read-only or low update activity on the key columns.
structure of a bitmap index: a node of a bitmap index contains the following
*an entry header, containing the number of columns and lock information.
*key values consisting of a length and value pair for each key column.
*a start ROWID and an end ROWID (i.e. block number, row number and file number).
*a bitmap segment consisting of a string of bits.

INDEX OPTIONS
-------------
*a unique index ensures that every indexed value is unique.
*an index can have its key values stored in ascending or descending order.
*a reverse key index has its key value bytes stored in reverse order.
*a composite index is one based on more than one column.
*a function-based index is an index based on a function's return value.
*a compressed index has repeated key values removed.
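a minimal sketch of some of these variants (table and column names are only illustrative):
CREATE INDEX emp_name_ix ON employees (last_name, first_name);
CREATE BITMAP INDEX emp_region_bix ON employees (region_code);
CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));
CREATE UNIQUE INDEX emp_email_uix ON employees (email);
the four statements create a composite b-tree index, a bitmap index on a low-cardinality column, a function-based index and a unique index respectively.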

TEMPORARY TABLES
----------------
*provide storage of data that is automatically cleaned up when the session or transaction is complete.
*provide private storage of data for each session.
*are available to all sessions for use without affecting each other's private data.
DML locks are never required on temporary tables.
the following clauses control the lifetime of the rows
ON COMMIT DELETE ROWS:-specifies that the lifetime of the rows is the duration of the transaction.
ON COMMIT PRESERVE ROWS:-specifies that the lifetime of the rows is the duration of the session.
CREATE GLOBAL TEMPORARY TABLE employee_emp ON COMMIT PRESERVE ROWS AS SELECT * FROM employees;
creates a temporary table. you can use EXPORT, IMPORT or Data Pump to export and import the definition of the table; however, no data is exported or imported.
you can create indexes and views on temporary tables.

DATA DICTIONARY
---------------
oracle's data dictionary is the description of the database. it contains the names and attributes of all the objects in the database. this information is stored in base tables that are maintained by the Oracle database; you access these tables by using predefined views.
the data dictionary
*is used by the Oracle database server to find information about users, constraints and storage.
*is maintained by the Oracle server as object structures or definitions are modified.
*is available for use by any user to query information about the database.
*is owned by the SYS user.
*should never be modified directly using SQL.
the DICTIONARY data dictionary view (or its synonym DICT) contains the names and descriptions of everything in the data dictionary.

DATA DICTIONARY VIEWS
---------------------
A)DBA_ views:-queried by the DBA. they contain everything, are a subset of no other views, and can have additional columns meant for DBA use only.
B)ALL_ views:-queried by all users. they contain everything the user has the privilege to see, are a subset of the DBA_ views, and include the user's own objects.
C)USER_ views:-queried by all users. they contain everything the user owns, are a subset of the ALL_ views, and are usually the same as the ALL_ views except for the missing OWNER column. some views have abbreviated PUBLIC synonyms.
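a minimal sketch of browsing the dictionary itself and then one family of views:
SQL> SELECT table_name, comments FROM dictionary WHERE table_name LIKE 'DBA_TABLES%';
SQL> SELECT table_name, tablespace_name FROM user_tables;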

8-MANAGING DATA AND CONCURRENCY
===============================
COMMIT:-makes the changes permanent.
ROLLBACK:-undoes the changes.
before a COMMIT or ROLLBACK is issued, the transaction is in a pending state. only the user who made the changes is allowed to see the changed data; other users cannot issue DML against the same data until the user commits or rolls back. this is controlled automatically by Oracle's locking mechanism.

PL/SQL:-oracle's procedural language extension to SQL. it is a fourth-generation programming language. it provides
*procedural extensions to SQL.
*portability across platforms and products.
*a higher level of security and data integrity protection.
*support for object-oriented programming.
because PL/SQL runs in the database, it is very efficient for data-intensive operations and minimises network traffic for applications.

PL/SQL OBJECTS
--------------
A)PACKAGE:-a collection of functions and procedures that are logically related. it declares the types, variables, constants, exceptions, cursors and subprograms available for use. the following are a few maintenance and administration packages
#DBMS_STATS:-gather, view and modify optimiser statistics.
#DBMS_OUTPUT:-generate output from PL/SQL.
#DBMS_SESSION:-PL/SQL access to the ALTER SESSION and SET ROLE statements.
#DBMS_RANDOM:-generate random numbers.
#DBMS_UTILITY:-get time, CPU time and version information.
#DBMS_SCHEDULER:-schedules functions and procedures that are callable from PL/SQL.
#DBMS_CRYPTO:-encrypts and decrypts database data.
#UTL_FILE:-reads and writes OS files from PL/SQL.
B)PACKAGE BODY:-fully defines the cursors and subprograms; it contains the private declarations, hidden from the caller.
C)TYPE BODY:-a collection of procedures and functions associated with a user-defined data type.
D)PROCEDURE:-a PL/SQL block that performs a specific action.
E)FUNCTION:-a PL/SQL block that returns a single value by using the RETURN PL/SQL statement. there are many built-in functions such as SYSDATE, SUM, AVG and TO_DATE; functions are typically used to compute a value.
F)TRIGGER:-a PL/SQL block that is executed when a certain event occurs in the database. it is best to keep trigger code very short and to place anything requiring lengthy code in a separate package.
triggering events
#DML:-INSERT, UPDATE, DELETE
#DDL:-CREATE, DROP, ALTER, GRANT, REVOKE, RENAME
#database:-LOGON, LOGOFF, STARTUP, SHUTDOWN, SERVERERROR
most triggers can be specified to fire before the event occurs or after it has occurred.
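a minimal sketch of a stored procedure and a DML trigger built on it (all names are only illustrative; SET SERVEROUTPUT ON is needed in SQL*Plus to see the DBMS_OUTPUT text):
SET SERVEROUTPUT ON

CREATE OR REPLACE PROCEDURE log_change (p_table IN VARCHAR2) IS
BEGIN
  -- write a short message to the session's output buffer
  DBMS_OUTPUT.PUT_LINE('change recorded on ' || p_table ||
                       ' at ' || TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;
/

CREATE OR REPLACE TRIGGER employees_aiud
AFTER INSERT OR UPDATE OR DELETE ON employees
BEGIN
  -- statement-level trigger: fires once per triggering statement
  log_change('EMPLOYEES');
END;
/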

LOCKS
-----
*locks prevent multiple sessions from changing the same data at the same time.
*they are automatically obtained at the lowest possible level of the statement, to minimise potential conflicts with other transactions.
*they do not escalate.

LOCKING MECHANISM
-----------------
*high level of data concurrency:-row-level locks for INSERT, UPDATE and DELETE; no locks required for queries.
*automatic queue management:-requires no administrator interaction.
*locks are held until the transaction ends.
transactions that modify data require row-level locks rather than block-level or table-level locks. modification of objects (such as table moves) requires object-level locks rather than whole database or schema-level locks. queries do not require locks.

DATA CONCURRENCY
----------------
the locking mechanism defaults to a fine-grained, row-level locking mode. the following are the other lock modes
1)ROW SHARE:-permits concurrent access to the locked table, but prohibits sessions from locking the entire table for exclusive access.
2)ROW EXCLUSIVE:-the same as ROW SHARE, but also prohibits locking in SHARE mode. ROW EXCLUSIVE locks are automatically obtained when updating, inserting or deleting data.
3)SHARE:-permits concurrent queries but prohibits updates; it is required to create an index on the table.
4)SHARE ROW EXCLUSIVE:-used to query a whole table and to allow others to query rows in the table, but prohibits others from locking the table in SHARE mode or updating rows.
5)EXCLUSIVE:-permits queries on the locked table but prohibits any other activity on it; it is required to drop a table.
the LOCK command accepts a special argument that controls its waiting behavior, NOWAIT. it returns control to you immediately, even if the table is already locked by another session.
it is not necessary to lock objects manually; the automatic locking mechanism provides the data concurrency needed for your applications.
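manual locking, when it is really needed, looks like this (a minimal sketch; the table name is only illustrative):
SQL> LOCK TABLE orders IN EXCLUSIVE MODE NOWAIT;
if another session already holds a conflicting lock, NOWAIT makes the statement return an error immediately instead of queueing.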

the session holding a SHARE lock is granted EXCLUSI VE lock without having to wait in the queue again. LOCK CONFLICTS -----------------------------Lock conflicts occur often but can be solved usualy through time and enqueque mechanism.in rare case lock conficts require admini strator intervention. POSSIBLE CAUSE OF LOCK CONFLICTS ----------------------------------------------------------------*uncommited changes *large-runnig transections:-lock conflicts are comm on when batch processing and transection are being performed simutaneously. *unnecessarily high locking levels:- some of the da tabase doesnot support row level lockinf instead page level or table level,some of the applications running on many databases are written which support only high leve l locking. DETECTING LOCK CONFLICTS -----------------------------------------------Use blocking sessions from the performance page in the enterprise manager to locate lock conflicts. the Automatic Database Diagnostic Monitor(ADDM) als o automaticaly detects lock conflicts and advise u on inefficient locking trend s. RESOLVING CONFLICTS ----------------------------------------*have the session holding the lock commit or rollba ck. *terminate the session holding lock as last resort

by killing the session remember when the session is killed ,all work within the session is lost,the user must login again and redo all work since the killed sessions last commit. *SQL statemant can be used to determine the blockin g session and kill it SQL>SELECT SID,SERIAL#,USERNAME FROM V$SESSION W HERE SID IN (SELECT BLOCKING SESSION FROM V$SESSION) SQL>ALTER SYSTEM KILL SESSION <SID>,<SERIAL#>,<US ERNAMF> IMMEDIATE; DEADLOCK --------------------It is special example of a lock conflict,it aruse w hen two or more sessions wait for data locked by each other,since each is waiting for othe no one can complete their transection to resolve the conflict.the oracle data base utomaticaly detects the deadlocks and terminates the statement with an err or, proper response to that error is either commit or rollback.
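To make the deadlock scenario concrete, here is a minimal sketch using two sessions against the HR.EMPLOYEES table (the employee IDs are only illustrative values, not from the course material). Oracle detects the cycle and returns error ORA-00060 to one of the sessions:

 -- Session 1 locks one row:
 SQL>UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 100;
 -- Session 2 locks a different row:
 SQL>UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 200;
 -- Session 1 now requests the row held by Session 2 and waits:
 SQL>UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 200;
 -- Session 2 now requests the row held by Session 1, completing the cycle:
 SQL>UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 100;

One of the two waiting statements fails with "ORA-00060: deadlock detected while waiting for resource"; that session should then commit or roll back so the other session can continue.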

9-MANAGING UNDO DATA
======================
UNDO DATA
--------------------
*It is a copy of the original, pre-modified data.
*It is captured for every transaction that changes data.
*It is retained until the transaction ends:
 -the user undoes the transaction (rollback)
 -the user ends the transaction (commit)
 -the user session abnormally terminates (rollback)
 -the user session normally terminates with exit (commit)
*It is used to support:
 -rollback operations
 -read-consistent and flashback queries
 -recovery from failed transactions
TRANSACTIONS AND UNDO DATA
----------------------------------------------------------
*Each transaction is assigned to only one undo segment. You can see which transactions are assigned to which undo segment by checking the V$TRANSACTION performance view.
*Undo segments are specialised segments created automatically by the instance; they automatically grow and shrink as needed, acting like a circular storage buffer.
*Parallel DML operations can cause a transaction to consume more than one undo segment.
*An undo segment can service more than one transaction at a time.
STORING UNDO INFORMATION
-----------------------------------------------------
Undo information is stored in undo segments, which are in turn stored in an undo tablespace. Undo tablespaces:
*are used only for undo segments.
*have special recovery considerations.
*may be associated with only a single instance.
*require that only one of them be the current writable undo tablespace for a given instance at any given time.
*Undo segments are always owned by SYS. Each segment has a minimum of two extents; the maximum number of extents depends on the database block size.
*Undo tablespaces are permanent, locally managed tablespaces with automatic extent allocation.
*An undo tablespace can be recovered only while the instance is in the MOUNT state.

UNDO DATA VERSUS REDO DATA
---------------------------------------------------
                     UNDO DATA                                   REDO DATA
 Record of:          how to undo a change                        how to reproduce a change
 Used for:           rollback, read consistency                  rolling forward database changes
 Stored in:          undo segments                               redo log files
 Protects against:   inconsistent reads in multiuser systems     data loss

MONITORING UNDO
-----------------------------------
Undo usually requires very little management. The areas to monitor include:
*free space in the undo tablespace:-proactive monitoring detects space problems in the undo tablespace before they affect any user.
*"snapshot too old" errors:-this error occurs when a query runs for a long time and, in the meantime, the transactions in progress commit and the older data is released from the undo segments, so the data the query needs is no longer available. To prevent this error, the undo retention time should be configured to accommodate the longest-running query.
ADMINISTERING UNDO
---------------------------------------
Administration of undo should include preventing:
*space errors in the undo tablespace
 --size the undo tablespace properly
 --ensure that large transactions commit periodically
*"snapshot too old" errors
 --configure an appropriate undo retention interval
 --size the undo tablespace properly
 --consider guaranteeing undo retention
Use automatic undo management: set UNDO_MANAGEMENT=AUTO and UNDO_TABLESPACE=UNDOTBS1.
With manual undo management the DBA must also consider the following:
--segment sizing, including maximum extents and extent sizing
--identifying and eliminating blocking transactions
--creating enough rollback segments to handle the transactions
--choosing a tablespace to contain the rollback segments
(Undo tablespaces are used only with automatic undo management.)
CONFIGURING UNDO RETENTION
--------------------------------------------------------
UNDO_RETENTION specifies, in seconds, how long already committed undo information is to be retained. The only times you must set this parameter are when:
*the undo tablespace has the AUTOEXTEND option enabled
*you want to set undo retention for LOBs
*you want to guarantee retention.
The undo tablespace ignores UNDO_RETENTION unless retention guarantee is enabled.
Undo information is divided into three categories:
1)uncommitted undo information:-supports a currently running transaction and is required if the user wants to roll back or if the transaction has failed. Uncommitted undo information is never overwritten.
2)committed undo information:-is no longer needed to support a running transaction but is still needed to meet the undo retention interval. It is known as "unexpired" undo information.
3)expired undo information:-is no longer needed to support a running transaction. It is overwritten when space is required by a running transaction.
GUARANTEEING UNDO RETENTION
------------------------------------------------------------
The default undo behavior is to overwrite committed undo that has not yet expired rather than to allow an active transaction to fail because of a lack of undo space. This behavior can be changed by guaranteeing undo retention, which enforces the retention setting even if transactions fail as a result. RETENTION GUARANTEE is a tablespace attribute rather than an initialisation parameter, and it can be changed only with SQL command-line statements:
 SQL>ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
To return a guaranteed tablespace to its normal setting, use the following command:
 SQL>ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
The retention guarantee applies only to undo tablespaces.
SIZING THE UNDO TABLESPACE
-----------------------------------------------------
Datafiles belonging to an undo tablespace can automatically extend when they run out of space. Oracle recommends that datafiles associated with the undo tablespace should not have automatic extension enabled; this prevents a single user from inadvertently consuming large amounts of disk space by failing to commit transactions.
The UNDO ADVISOR provides an estimate of the undo tablespace size required to satisfy a given undo retention period. Enter the desired retention period, and the analysis region of the advisor displays the tablespace size required to support that retention period.
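As a minimal sketch of these settings (the retention value below is only an example; UNDO_MANAGEMENT itself is static and is set in the SPFILE as shown above), retention can be adjusted and recent undo usage observed as follows:

 SQL>ALTER SYSTEM SET UNDO_TABLESPACE = UNDOTBS1;
 SQL>ALTER SYSTEM SET UNDO_RETENTION = 900;   -- retain committed undo for 15 minutes
 SQL>SELECT begin_time, undoblks, maxquerylen
     FROM   v$undostat
     WHERE  rownum <= 5;                       -- recent undo consumption and longest query length

The V$UNDOSTAT view is also what the Undo Advisor uses as input when estimating the required undo tablespace size.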

10-IMPLEMENTING ORACLE DATABASE SECURITY
==========================================
INDUSTRY SECURITY REQUIREMENTS
----------------------------------------------------------------
A variety of laws have been passed to ensure the security and privacy of data:
1)Sarbanes-Oxley Act (SOX):-The details of SOX include requirements for providing the information that is used to generate reports, and internal controls that are used to assure the integrity of financial information.
2)Health Information Portability and Accountability Act (HIPAA):-Is intended to protect personally identifiable health information from release or misuse. Information holders must provide audit trails of everyone who accesses this data.
3)UK Data Protection Act:-Is intended to protect individual privacy by restricting access to individually identifiable data.
OTHER LAWS
-------------------
*Family Education Rights and Privacy Act (FERPA):-covers health and personal information held by schools.
*California Breach Law:-an organisation holding a variety of personal identity information must protect that information.
*Federal Information Security Management Act (FISMA):-creates security guidance and standards through documents that are managed by the National Institute of Standards and Technology.
DATABASE SECURITY
-----------------------------------
There are several aspects of security:
*restricting access to data and services:-Oracle database provides extremely fine-grained authorisation control to limit database access. Restricting access must include applying the principle of least privilege.
*authenticating users:-user accounts that are not in use must be locked to prevent attempts to compromise authentication.
*monitoring suspicious activity:-identifying unusual database activity is the first step in detecting information theft. Oracle database provides a rich set of auditing tools to track user activity and identify suspicious trends.
PRINCIPLE OF LEAST PRIVILEGE
-----------------------------------------------------
*install only the required software on the machine.
*activate only the required services on the machine.
*give OS and database access only to those users that require it.
*limit access to the root or administrator account.
*limit access to the SYSDBA and SYSOPER accounts.
*limit users' access to only the database objects required to do their jobs.
*monitor suspicious activity.
APPLYING THE PRINCIPLE OF LEAST PRIVILEGE
---------------------------------------------------------------------------------
*protect the data dictionary:
 O7_DICTIONARY_ACCESSIBILITY=FALSE
*revoke unnecessary privileges from PUBLIC:
 REVOKE EXECUTE ON UTL_SMTP, UTL_TCP, UTL_HTTP, UTL_FILE FROM PUBLIC;
*restrict the directories accessible by users.
*limit users with administrative privileges.
*restrict remote database authentication:
 REMOTE_OS_AUTHENT=FALSE
In the remote authentication process:
 .the database user is authenticated externally
 .the remote system authenticates the user
 .the user logs in to the database without further authentication
The more powerful packages that may be misused are:
1)UTL_SMTP:-permits arbitrary e-mail messages to be sent by using the database as a Simple Mail Transfer Protocol (SMTP) mail server. This may permit unauthorised mail exchange.
2)UTL_TCP:-permits outgoing network connections to be established from the database server to any receiving or waiting network service. Thus, arbitrary data can be sent between the database server and any waiting network service.
3)UTL_HTTP:-allows the database server to request and retrieve any data via HTTP. Granting this package to PUBLIC may permit data to be sent via HTML forms to malicious web sites.
4)UTL_FILE:-if configured improperly, allows text-level access to any file on the host operating system.
MONITORING FOR SUSPICIOUS ACTIVITY
-----------------------------------------------------------------------
Monitoring or auditing must be an integral part of your security procedures. Properly focused auditing has minimal impact on system performance.
1)MANDATORY AUDITING:-All Oracle databases audit certain actions regardless of other audit options or parameters. The reason for mandatory audit logs is that the database needs to record certain database activities, such as system startup and shutdown.
2)STANDARD DATABASE AUDITING:-This is set at the system level by using the AUDIT_TRAIL initialisation parameter. After you enable auditing, select the objects and privileges that you want to audit. If the AUDIT_TRAIL parameter is set to OS, the audit records are stored in the operating system's audit system. If the AUDIT_TRAIL parameter is set to DB, you can view the records in the DBA_AUDIT_TRAIL view, which is part of the SYS schema. The V$XML_AUDIT_TRAIL view allows you to view the XML audit files written to the audit directory. If not maintained properly, the audit trail can consume so much space that it affects the performance of your system.
The extra information that is collected by standard auditing includes:
*the System Change Number, which records every change to the system.
*the exact SQL text executed by the user and the bind variables used with the SQL text. These columns appear only if you have specified AUDIT_TRAIL=DB,EXTENDED in your initialisation parameter file.
The extra information that is collected by fine-grained auditing includes:
*a serial number for each audit record

*a statement number that links multiple audit entries.
The DBA_COMMON_AUDIT_TRAIL view combines standard and fine-grained audit records.
Best-practice tip:-because auditing adds to your system load, disable any auditing you are not using.
3)VALUE-BASED AUDITING:-captures not only the audited event that occurred but also the actual values that were inserted, updated or deleted. Value-based auditing is implemented through database triggers. When a user inserts, updates or deletes data in a table with the appropriate trigger attached, the trigger works in the background to copy audit information to a table that is designed to contain the audit information. Value-based auditing tends to degrade performance more than standard database auditing. Database triggers can also be used to capture information about the connecting user in cases where standard database auditing does not gather sufficient information.
4)FINE-GRAINED AUDITING:-captures the actual SQL statements that were issued rather than only the events that occurred. It monitors data access on the basis of content, can be linked to one or more columns of a table or view, may fire a procedure, and is administered with the DBMS_FGA package (for example, the DBMS_FGA.ALL_COLUMNS and DBMS_FGA.ANY_COLUMNS column options).
5)DBA AUDITING:-separates the auditing duties between the DBA and an auditor or security administrator who monitors the DBA's activities in an operating system audit trail.
ENABLE AUDITING
 ALTER SYSTEM SET audit_trail='XML' SCOPE=SPFILE;
Restart the database after modifying this static initialisation parameter. You must enable database auditing before you specify audit settings.
SPECIFYING AUDIT OPTIONS
-------------------------------------------------
*SQL statement auditing:
  AUDIT table;
 SQL statement auditing can be focused by username or by success or failure:
  SQL>AUDIT TABLE BY hr WHENEVER NOT SUCCESSFUL;
*system privilege auditing (focused and nonfocused):
  AUDIT select any table, create any trigger;
  AUDIT select any table BY hr BY SESSION;
 By default, auditing is BY ACCESS: each time the audited system privilege is exercised, an audit record is generated. You can choose to group these records with the BY SESSION clause so that only one record is generated per session. Using the BY SESSION clause limits the performance and storage impact of system privilege auditing.
*object privilege auditing (focused and nonfocused):
  AUDIT ALL ON hr.employees;
  AUDIT UPDATE, DELETE ON hr.employees BY ACCESS;
 It can be used to audit tables, views, procedures, sequences, directories and user-defined datatypes. Unlike system privilege auditing, the default grouping is by session.
FGA POLICY
--------------------
Defines:-the audit criteria and the audit action.
Is created with:-the DBMS_FGA.ADD_POLICY procedure. The procedure accepts the following arguments:
a)policy name:- policy_name => 'audit_emps_salary'
b)audit condition:- audit_condition => 'department_id = 10'
c)audit column:- audit_column => 'salary,commission_pct'
d)object:- object_schema => 'hr', object_name => 'employees'
e)handler:-an optional event handler is a PL/SQL procedure that defines any additional actions that must be taken during auditing. If an audit event handler is defined, then the audit entry is inserted into the audit trail and the audit event handler is executed. The event handler is passed as two arguments:
 1)the schema that contains the PL/SQL program unit: handler_schema => 'secure'
 2)the name of the PL/SQL program unit: handler_module => 'log_emps_salary'
 By default the audit trail always writes the SQL text and SQL bind information to LOBs; this can be changed if the system is suffering performance degradation.
f)status:-indicates whether the FGA policy is enabled: enable => TRUE
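Putting these arguments together, a hedged sketch of creating the policy described above (the names and the SELECT/DML statement types are taken from the examples in this section and are illustrative only) could look like this:

 SQL>BEGIN
       DBMS_FGA.ADD_POLICY(
         object_schema   => 'HR',
         object_name     => 'EMPLOYEES',
         policy_name     => 'AUDIT_EMPS_SALARY',
         audit_condition => 'department_id = 10',
         audit_column    => 'SALARY,COMMISSION_PCT',
         handler_schema  => 'SECURE',
         handler_module  => 'LOG_EMPS_SALARY',
         enable          => TRUE,
         statement_types => 'SELECT,INSERT,UPDATE,DELETE');
     END;
     /

After this call, any statement that satisfies the condition and references the listed columns produces a fine-grained audit record and fires the SECURE.LOG_EMPS_SALARY handler.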

AUDITING DML STATEMENTS: CONSIDERATIONS
------------------------------------------------------------------------
*records are audited only if the FGA policy condition is satisfied and the relevant columns are referenced.
*DELETE statements are audited regardless of any specified columns.
*MERGE statements are audited through the underlying UPDATE or INSERT statements.
FGA GUIDELINES
-----------------------------
*to audit all statements, use a NULL condition.
*policy names must be unique.
*the audited table or view must exist when you create the policy.
*if the audited column does not exist in the table, no rows are audited.
*if the event handler does not exist, no error is returned and the audit record is still created.
DBA AUDITING
------------------------
Users with SYSOPER or SYSDBA privileges can connect even when the database is closed.
*the audit trail must be stored outside the database.
*connections as SYSDBA or SYSOPER are always audited.
*you can enable additional auditing of SYSDBA or SYSOPER operations with AUDIT_SYS_OPERATIONS=TRUE.
*you can control the storage location of the audit trail with the AUDIT_FILE_DEST initialisation parameter.
MAINTAINING THE AUDIT TRAIL
-----------------------------------------------
Good maintenance must include reviewing the audit records and removing older records from the database or the OS. The audit trail for standard auditing is stored in the AUD$ table; the audit trail for FGA is the FGA_LOG$ table. You can move these tables to another tablespace by using the export and import utilities, but moving the audit tables out of the SYSTEM tablespace is not supported. Audit records can be lost while removing records from the audit tables; to avoid this, export based on a timestamp and then delete rows from the audit trail based on the same timestamp.
SECURITY UPDATES
----------------------------------
Oracle security alerts contain a brief description of the vulnerability, an assessment of the risk and degree of exposure associated with the vulnerability, and any applicable workarounds or patches. Security alerts are posted on the Oracle Technology Network website and on OracleMetaLink (METALINK); only customers with a Customer Support Identification (CSI) number can download patches.
APPLYING SECURITY PATCHES
---------------------------------------------------
*use the Critical Patch Update (CPU) process:-it bundles together critical patches on a quarterly basis.
*apply all security patches and workarounds.
*contact the Oracle security products team:-if you find any security vulnerability in your Oracle software, follow the instructions provided from the security alert link at http://otn.oracle.com or at http://www.oracle.com/technology/deploy/security/alerts.htm.

11-ORACLE NET SERVICES

========================
Oracle Net Services is responsible for establishing and maintaining the connection between the client application and the database server, as well as exchanging messages between them. On a client computer it is a background component for application connections to the database. To make a client or middle-tier connection, Oracle Net requires the client to know the following:
1)host name
2)port
3)protocol
4)name of the service
The process of determining this information is called 'names resolution'.
TOOLS FOR CONFIGURING AND MANAGING THE ORACLE NETWORK:
1)ENTERPRISE MANAGER
2)COMMAND LINE
3)ORACLE NET MANAGER
4)ORACLE NET CONFIGURATION ASSISTANT
WITH THE LISTENER CONTROL UTILITY YOU CAN PERFORM THE FOLLOWING FUNCTIONS:
1)start the listener
2)stop the listener
3)check the status of the listener
4)reinitialise the listener from the configuration file parameters
5)dynamically configure many listeners
6)change the listener password
On the database server, Oracle Net includes an active process called the listener.
DATABASE SERVICE REGISTRATION
--------------------------------------------------------
The listener can find the name of an instance and the location of the instance's ORACLE_HOME in the following two ways:
1)DYNAMIC SERVICE REGISTRATION:-the instance automatically registers with the default listener on database startup.
2)STATIC SERVICE REGISTRATION:-the instance does not automatically register with the listener. This is how registration worked in versions of Oracle before 9i. In newer versions you may still choose static service registration if your listener is not on the default port (1521) or if your application requires static service registration.
The most common use of Oracle Net Services is to allow incoming database connections. You can configure additional listeners to connect an Oracle instance to non-Oracle data sources. The Oracle listener is the gateway to all nonlocal instances; a single listener can service multiple database instances and thousands of net connections.
FOUR WAYS OF RESOLVING CONNECTION INFORMATION
------------------------------------------------------------------------------------------------
1)EASY CONNECT NAMING:-this method enables a client to connect to an Oracle server with a TCP/IP string consisting of a host name, an optional port and a service name.
 *it is enabled by default and requires no client-side configuration.
 *it supports only TCP/IP (no SSL).
 *it supports no advanced connection options such as:
  a)connect-time failover
  b)source routing
  c)load balancing
2)LOCAL NAMING:
 *requires a client-side resolution file.
 *supports all Oracle Net protocols.
 *supports all advanced connection options, such as source routing, connect-time failover and load balancing.
 With local naming the user supplies an alias for the Oracle Net service; Oracle Net converts the alias into the host, protocol, port and service name, so the user has to remember only a short alias instead of a long connect string. This is appropriate for organisations whose net service configuration does not change very often.
3)DIRECTORY NAMING:
 *requires an LDAP directory with Oracle Net resolution information loaded; supported directories are:
  --Oracle Internet Directory
  --Microsoft Active Directory Services
 *supports all Oracle Net protocols.
 *supports all advanced connection options.
 With directory naming the user supplies an alias for the Oracle Net service; Oracle checks the external list of known services and converts the alias into the port, host, service name and protocol, so the user has to remember only a short alias instead of a long connect string. The benefit of directory naming over local naming is that as soon as a new service name is added to the LDAP directory, it is available for users to connect to; with local naming the DBA must first distribute updated tnsnames.ora files containing the changed service name. This is appropriate for organisations whose net service configuration changes frequently.
4)EXTERNAL NAMING METHOD
-----------------------------------------------

*uses a supported non-Oracle naming service, which includes:
 a)NETWORK INFORMATION SERVICES (NIS)
 b)DISTRIBUTED COMPUTING ENVIRONMENT (DCE) CELL DIRECTORY SERVICES (CDS)
*it is conceptually similar to directory naming.
ADVANCED CONNECTION OPTIONS
----------------------------------------------------------
1)FAILOVER:-try each address, in order, until one succeeds.
2)FAILOVER AND LOAD BALANCING:-try each address, randomly, until one succeeds.
3)LOAD BALANCING:-try one address, selected at random.
4)SOURCE ROUTING:-use each address in order until the destination is reached.
5)NONE:-use only the first address.
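As a rough illustration of the first two name-resolution methods described above (the host, port and service names here are placeholders, not values from your environment): with easy connect the client supplies the full address in the connect string, while with local naming it supplies only an alias that the client's tnsnames.ora maps to a connect descriptor.

 SQL>CONNECT hr@"//dbhost.example.com:1521/orcl"
 -- easy connect: host, optional port and service name are given in the string itself

 SQL>CONNECT hr@orcl
 -- local naming: the alias ORCL must be defined in the client's tnsnames.ora file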

*The TNSPING utility tests Oracle Net service aliases:
 --it supports easy connect name resolution.
 --it supports local naming and directory naming.
 --it verifies connectivity between the client and the Oracle Net listener.
 --it does not verify whether the requested service is actually available.
*The TNSPING utility is also useful when a system has multiple ORACLE_HOME directories, because it reports which configuration files it is using.
USER SESSIONS
--------------------------------
DEDICATED SERVER
There is a one-to-one ratio between user processes and server processes. In a heavily loaded system, dedicated server processes can be prohibitively expensive and can negatively affect system scalability. You can improve system scalability in such situations in the following ways:
a)adding more memory and additional CPU capacity.
b)using the Oracle Shared Server architecture.
SHARED SERVER PROCESS
With shared server, when a connection request arrives, the listener maintains a list of dispatchers that are available for each service name. Unlike a dedicated server process, a single dispatcher can service hundreds of user sessions.
*connection pooling: it enables the database server to time out an idle session and use that connection to service an active session. The idle logical session remains open, and the physical connection is automatically re-established when the next request comes from that session. With this facility a large number of concurrent users can be accommodated with the existing hardware. Connection pooling is configured through the shared server.
*when not to use shared server processes:
 a)database administration
 b)backup and recovery operations
 c)batch processing and bulk load operations
 d)data warehousing
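A minimal sketch of enabling shared server (the dispatcher and server counts below are arbitrary examples, not recommendations):

 SQL>ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';
 SQL>ALTER SYSTEM SET SHARED_SERVERS = 5;
 SQL>SELECT name, status FROM v$dispatcher;   -- verify that the dispatchers are running

Setting SHARED_SERVERS back to 0 (and removing DISPATCHERS) returns the instance to dedicated-server-only operation.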

12-PROACTIVE MAINTENANCE
--------------------------------------------------
The main elements of proactive maintenance are:
1)AUTOMATIC WORKLOAD REPOSITORY (AWR):-a built-in repository in the Oracle database. At regular intervals the Oracle database takes a snapshot of its vital statistics and workload information and stores it in the AWR. The captured data can be analysed by you, by the database itself, or by both. By default, snapshots are retained for 7 days; you can modify both the snapshot interval and the retention interval. You can work with the AWR through Enterprise Manager or through a package called DBMS_WORKLOAD_REPOSITORY.
The AWR infrastructure consists of two major parts:
1)an in-memory statistics collection facility used by Oracle 10g components to collect statistics. This statistics information is held in memory and is accessible through V$ views.
2)the AWR snapshots, which represent the persisted portion of this information and can be accessed through data dictionary views and Enterprise Manager.
AWR SNAPSHOT SETS (BASELINES):-a mechanism to tag sets of snapshot data covering important periods. A snapshot set can be identified either by a user-defined name or by a system-generated identifier. You can create a snapshot set by executing the DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE procedure and specifying a name and a pair of snapshot identifiers. Snapshot sets are used to compare past behavior of the system with its current behavior. You can obtain snapshot IDs from the DBA_HIST_SNAPSHOT view or from Enterprise Manager.
AWR settings include:
a)retention period
b)collection interval
c)collection level
There are three collection levels:
 basic:-disables most ADDM functionality.
 typical:-recommended.
 all:-adds additional tuning information to snapshots. Configure the level to ALL when tuning a new application; when tuning is complete, the setting should be reconfigured to TYPICAL.
Decreasing these settings can affect the functionality of the components that depend on the AWR; increasing them can increase the information available, but at the cost of the space required.
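A brief sketch of adjusting the AWR settings and creating a snapshot set (baseline); the snapshot IDs, interval and retention values below are illustrative only:

 SQL>BEGIN
       DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
         retention => 20160,    -- keep snapshots for 14 days (value is in minutes)
         interval  => 30);      -- take a snapshot every 30 minutes
       DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
         start_snap_id => 120,
         end_snap_id   => 132,
         baseline_name => 'peak_load_baseline');
     END;
     /

The snapshot IDs to pass to CREATE_BASELINE can be looked up in DBA_HIST_SNAPSHOT, as noted above.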

AUTOMATIC DATABASE DIAGNOSTIC MONITOR (ADDM)
-----------------------------------------------------------------------------
It runs automatically after each AWR snapshot, monitors the instance, detects bottlenecks and stores its results in the AWR. Its goal is to detect system bottlenecks and recommend fixes before system performance degrades noticeably.
Some common problems detected by ADDM are:
1)CPU bottlenecks
2)poor connection management
3)I/O capacity issues
4)undersized Oracle memory structures
5)lock contention
6)high-load SQL statements
7)high-load PL/SQL and Java time
8)high checkpoint load and its cause
The results are accessible through Enterprise Manager. ADDM's recommendations can include:
1)hardware changes
2)database configuration changes
3)schema changes
4)application changes
5)using another advisor
ADVISORY FRAMEWORK
---------------------------------------
Advisors are server components that provide you with useful feedback about resource utilisation and performance. The major benefits are:
*a uniform interface for all advisors.
*all advisors use a common data source and storage, namely the AWR.
The advisory framework has seven kinds of advisors:
1)AUTOMATIC DATABASE DIAGNOSTIC MONITOR (ADDM):-reviews database performance every 60 minutes. Its goal is to detect system bottlenecks and recommend fixes before system performance degrades noticeably.
2)MEMORY ADVISOR:-a collection of several advisory functions that help determine the best settings for the shared pool, the buffer cache and the Program Global Area (PGA).
3)MEAN-TIME-TO-RECOVER (MTTR) ADVISOR:-with this you can set the length of time required to recover after an instance crash.
4)SQL ACCESS ADVISOR:-analyses all SQL statements issued in a given period of time and suggests additional indexes or materialised views that would improve performance.
5)SQL TUNING ADVISOR:-analyses an individual SQL statement and makes recommendations for improving its performance; it is typically invoked through Top SQL or Top Sessions rather than directly.
6)SEGMENT ADVISOR:-looks for tables and indexes that consume more space than they require.
7)UNDO MANAGEMENT ADVISOR:-with this you can determine how much undo tablespace is required to support a given undo retention period.
DBMS_ADVISOR PACKAGE
It contains all the constants and procedure declarations for all advisor modules. You can use this package to run advisor tasks from the command line. The following are a few of the procedures in the advisor package:
1)CREATE_TASK
2)DELETE_TASK
3)EXECUTE_TASK
4)INTERRUPT_TASK
5)GET_TASK_REPORT
6)RESUME_TASK
7)UPDATE_TASK_ATTRIBUTES
8)SET_TASK_PARAMETER
9)MARK_RECOMMENDATION
10)GET_TASK_SCRIPT
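Advisor results, including ADDM findings, can also be read directly from the data dictionary. A simple hedged example (the LIKE pattern assumes the system-generated ADDM task names, which may differ in your environment):

 SQL>SELECT task_name, type, message
     FROM   dba_advisor_findings
     WHERE  task_name LIKE 'ADDM%';
 -- DBA_ADVISOR_RECOMMENDATIONS lists the corresponding recommended actions

This is the same information that Enterprise Manager displays on the ADDM findings page.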

SERVER-GENERATED ALERTS
These are notifications sent when the database is in an undesirable state and needs your attention. A few key metrics that can provide early notification are:
*average file read time (centiseconds)
*dump area used (%)
*response time (per transaction)
*SQL response time
*tablespace space used (%)
*wait time (%)
The default server-generated alerts are:
*tablespace usage (warning at 85%, critical at 97%)
*snapshot too old
*recovery area low on free space
*resumable session suspended
For each alert message the database provides a link to invoke the corresponding advisor.
REACTING TO ALERTS
*run ADDM or another advisor.
*take corrective measures.
*acknowledge alerts that are not automatically cleared.
You can define thresholds for more than 120 metrics. The tablespace usage metric is database related; all other metrics are instance related. Stateful alerts appear in DBA_OUTSTANDING_ALERTS and, when cleared, move to DBA_ALERT_HISTORY.
AUTOMATED MAINTENANCE TASKS
*the Scheduler initiates the jobs.
*jobs run in the default maintenance window.
*limit the maintenance impact on normal operations by using the Resource Manager.
EXAMPLES OF MAINTENANCE
*gathering optimiser statistics
*gathering segment information
*backing up the database
By analysing the information stored in the AWR, the database can identify the need to perform routine maintenance tasks. By default the maintenance window starts at 6 p.m. and ends at 6 a.m. on weekdays, and runs throughout the weekend. You can customise the maintenance window with a start time, end time, frequency and days of the week.

13-PERFORMANCE MANAGEMENT
-------------------------------------------------------
PERFORMANCE MONITORING
Performance measurements are referred to as database metrics. In multitier systems, viewing individual sessions may not provide the information you need to analyse performance; grouping sessions into service names enables you to monitor performance more accurately.
SQL TUNING ADVISOR
It is the primary driver of the tuning process. It calls the Automatic Tuning Optimizer (ATO) to perform four specific analyses:
1)statistics analysis:-the ATO checks each query object for missing or stale statistics and advises gathering the relevant statistics.
2)SQL profiling:-the ATO verifies its own estimates and collects auxiliary information to remove estimation errors. It collects this information in a SQL profile; once the SQL profile is created, it enables the query optimiser to generate a well-tuned plan.
3)ACCESS PATH ANALYSIS:-the ATO recommends creating a new index where that would improve access to a table.
4)SQL structure analysis:-the ATO makes relevant suggestions to restructure SQL statements that use bad plans.
Sources for the SQL Tuning Advisor to analyse:
*top SQL statements:-analyse the top SQL statements currently active.
*SQL tuning sets:-analyse a set of SQL statements you provide.
*snapshots:-analyse a snapshot.
*baselines:-analyse a baseline.
SQL ACCESS ADVISOR
It is used to improve schema design and SQL query performance. It makes recommendations to create indexes or materialised views to improve query performance for a given workload.
AUTOMATIC SHARED MEMORY MANAGEMENT (ASMM)
*simplifies memory management.
*you specify the total SGA size through a single initialisation parameter.
*enables the Oracle server to manage the amount of memory allocated to the shared pool, Java pool, Streams pool, large pool and buffer cache.
MANUALLY SETTING SHARED MEMORY MANAGEMENT
*sizes the components through multiple individual initialisation parameters.
*uses the memory advisors to make recommendations.
You can increase the total SGA size by increasing the value of the SGA_TARGET parameter, but it cannot exceed the value of the SGA_MAX_SIZE parameter.
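For example (the size below is a placeholder, not a recommendation), ASMM is driven entirely by SGA_TARGET, and the automatically tuned component sizes can be observed afterwards:

 SQL>ALTER SYSTEM SET SGA_TARGET = 600M;     -- must remain below SGA_MAX_SIZE
 SQL>SELECT component, current_size
     FROM   v$sga_dynamic_components;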

If your environment requires special sizing that is not recommended by ASMM, the parameters you may adjust are:
*SHARED_POOL_SIZE
*LARGE_POOL_SIZE
*JAVA_POOL_SIZE
*DB_CACHE_SIZE
*STREAMS_POOL_SIZE
MEMORY ADVISOR
It helps you tune the size of your memory structures. You can use it only when automatic shared memory management is disabled. It contains three advisors that make recommendations for the following memory structures:
*the shared pool (in the SGA)
*the buffer cache (in the SGA)
*the PGA
DYNAMIC PERFORMANCE STATISTICS
Oracle generates statistics at:
1)the systemwide level
2)the session-specific level
3)the service-specific level
At all levels, both cumulative statistics and wait event statistics are generated. To analyse a performance problem, you typically look at the change in statistics over the period of time you are interested in. All statistics are catalogued in the V$STATNAME view and wait events are catalogued in the V$EVENT_NAME view; about 360 statistics are available in Oracle.
Displaying systemwide statistics:
 SELECT NAME, CLASS, VALUE FROM V$SYSSTAT;
INVALID AND UNUSABLE OBJECTS
The current status of database objects can be viewed by querying the data dictionary. If you find a PL/SQL object with the status INVALID, the first thing to consider is whether it was ever valid. If it was never valid, there is nothing to do until the error is fixed by the developer of the code. If the invalid PL/SQL code was valid for some period of time, you can fix the problem by manually recompiling the object; most of the time the system recompiles the object by itself on the next use.
You can recompile a PL/SQL object with:
 ALTER PROCEDURE HR.ADD_JOB_HISTORY COMPILE;
Recompile a package with:
 ALTER PACKAGE HR.MAINTAINEMP COMPILE;
 ALTER PACKAGE HR.MAINTAINEMP COMPILE BODY;
Unusable indexes are made usable by rebuilding them to recalculate the pointers:
 ALTER INDEX HR.EMPID_PK REBUILD;
 ALTER INDEX HR.EMPID_PK REBUILD ONLINE;  (with this clause users can continue updating the indexed table without waiting for the index to be rebuilt)
 ALTER INDEX HR.EMPID_PK REBUILD TABLESPACE USERS;

14-BACKUP AND RECOVERY CONCEPTS
-------------------------------------------------------------
THE ADMINISTRATOR'S DUTIES ARE TO:
*PROTECT THE DATABASE FROM FAILURE
*INCREASE THE MEAN TIME BETWEEN FAILURES (MTBF)
*DECREASE THE MEAN TIME TO RECOVER (MTTR)
*MINIMISE THE LOSS OF DATA
FAILURE CATEGORIES
1)STATEMENT FAILURE:-a single SELECT, INSERT, UPDATE or DELETE fails.

2)USER PROCESS FAILURE:-a database session fails.
3)NETWORK FAILURE:-connectivity to the database is lost.
4)USER ERROR:-a user operation is incorrect.
5)INSTANCE FAILURE:-the database instance shuts down unexpectedly.
6)MEDIA FAILURE:-one or more database files are lost.
STATEMENT FAILURE
--------------------------------------------
TYPES AND SOLUTIONS
*attempts to enter invalid data:-work with the user to validate the data.
*attempts to perform operations with insufficient privileges:-provide the appropriate privileges.
*attempts to allocate space that fail:-increase the owner's quota, or add space to the tablespace.
*logic errors in applications:-work with the developers to correct the program errors.
USER PROCESS FAILURE
-------------------------------------
TYPICAL PROBLEMS
*the user abnormally disconnects.
*the user's session is abnormally terminated.
*the user experiences a program error that terminates the session.
SOLUTION
-----------------
No DBA action is needed: the instance background processes roll back uncommitted changes and release locks.
NETWORK FAILURE
-----------------------------------
TYPICAL PROBLEMS AND SOLUTIONS
----------------------------------------------------------------
*listener failure:-configure a backup listener and connect-time failover.
*network interface card (NIC) failure:-configure multiple network cards.
*network connection failure:-configure backup network connections.
USER ERROR
-----------------------
TYPICAL PROBLEMS AND SOLUTIONS
------------------------------------------------------------------
*a user inadvertently modifies or deletes data:-use rollback or Flashback Query to recover.
*a user drops a table:-recover the table from the recycle bin.
INSTANCE FAILURE
----------------------------------
TYPICAL PROBLEMS
----------------------------------
*POWER OUTAGE
*HARDWARE FAILURE
*FAILURE OF ONE OF THE BACKGROUND PROCESSES
*EMERGENCY SHUTDOWN PROCEDURES
SOLUTIONS
-------------------------
Investigate the cause of the failure by using the alert log, trace files and Enterprise Manager. Restart the instance by using the STARTUP command; recovery from instance failure is automatic.
BACKGROUND PROCESSES AND RECOVERY
------------------------------------------------------------------
1)CHECKPOINT (CKPT)
--------------------
It is responsible for:
*signaling DBWn at checkpoints.
*updating the datafile headers with checkpoint information.
*updating the control file with checkpoint information.
Checkpoints exist for the following reasons:
--------------------------------------------------------------------------
*to ensure that modified data blocks in memory are written to disk regularly, so that data is not lost in case of a system or database failure.
*to reduce the time required for instance recovery.
*to ensure that committed data is written to the datafiles at the time of shutdown.
The checkpoint information written by the CKPT process includes the checkpoint position, the system change number, the location in the redo log file at which to begin recovery, information about logs, and so on. The CKPT process does not write data blocks to disk or redo blocks to the online redo log files.
2)REDO LOG FILES AND LOG WRITER (LGWR)
-------------------------------------------------------
Redo log files:
*record changes to the database.
*should be multiplexed to protect against loss.
LGWR writes:
*at commit
*when the redo log buffer is one-third full
*every three seconds
*before DBWn writes
A redo log consists of groups of redo log files; each group consists of a redo log file and its multiplexed copies. The LGWR process writes redo records from the redo log buffer to all members of a redo log group. Redo log groups are used in a circular fashion. Multiplexed redo log members should reside on different disks.
3)ARCHIVER (ARCn)
------------------------------
It is an optional background process. It automatically archives redo log files when ARCHIVELOG mode is set for the database, preserving the record of all changes made to the database. The ARCn process initiates the archiving of a filled log group whenever a log switch occurs. This enables recovery of the database to the point of failure even if a disk drive is damaged. When the database is configured in NOARCHIVELOG mode, the online redo log files are overwritten each time a log switch occurs. When the database is configured in ARCHIVELOG mode, inactive groups of filled online redo log files are archived before they can be reused. ARCHIVELOG mode is essential for most backup strategies.
INSTANCE RECOVERY
-------------------------------------
It is caused by an attempt to open a database whose files were not synchronised at the time of database shutdown. It is automatic, and it uses the information stored in the redo log groups to synchronise the files. It involves two distinct operations:
1)rolling forward:-the datafiles are brought forward to their state at the time the instance failed.
2)rolling back:-changes that were made but not committed are returned to their original state.
PHASES OF INSTANCE RECOVERY
1)the datafiles are out of sync.
2)roll forward (redo): the datafiles contain both committed and uncommitted changes.
3)roll back (undo): only committed changes remain in the datafiles.
TUNING INSTANCE RECOVERY
You can tune instance recovery by controlling the difference between the checkpoint position and the end of the redo log. The time required for instance recovery is the time required to bring the datafiles from their last checkpoint to the latest SCN recorded in the control file. The administrator can control this time by setting an MTTR target and through the size of the redo log groups; the distance between the checkpoint position and the end of the redo log group cannot be more than 90% of the smallest redo log group.
USING THE MTTR ADVISOR
Specify the desired recovery time in seconds or minutes. The default is 0 (disabled), and the maximum value is 3600 seconds, that is, one hour.
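A small sketch of setting an MTTR target (the 60-second value is only an example); the current estimate can then be compared against the target in V$INSTANCE_RECOVERY:

 SQL>ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;   -- value is in seconds
 SQL>SELECT target_mttr, estimated_mttr
     FROM   v$instance_recovery;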

MEDIA FAILURE
--------------------------
CAUSES AND SOLUTIONS
1)disk drive failure:-restore the affected files from backup.
2)failure of a disk controller:-inform the database about the new location of the files.
3)deletion or corruption of a datafile:-restore the file from backup and recover it by applying redo information.
CONFIGURING FOR RECOVERABILITY
-------------------------------------------------------------------
To configure your database for maximum recoverability you should follow these steps:
1)schedule regular backups.
2)multiplex the redo log groups.
3)multiplex the control files.
4)retain archived copies of the redo logs:-this is known as placing the database in ARCHIVELOG mode.
CONTROL FILES
-------------------------------
The control file is a small binary file that describes the structure of the database. It must be available for writing by the Oracle server whenever the database is mounted or open. Without it the database cannot be mounted, and recovery or re-creation of the control file is required. It is suggested that your database have at least two copies of the control file, each copy on a separate disk and at least one copy on a separate disk controller.
REDO LOG FILES
------------------------------
It is suggested that redo log groups have at least two member files each, with each group on a separate disk drive and each member on a separate disk controller. The loss of an entire log group is a serious media failure because it can result in the loss of data.

The redo log heavily influences database performance, because a commit cannot complete until the transaction information has been written to the redo log. Place the redo log files on the fastest disks served by the fastest controllers. You can multiplex the redo log by adding members to the existing redo log groups.
ARCHIVE LOG FILES
---------------------------------
To preserve your redo information, create archived copies of the redo log files by performing the following steps:
1)specify an archive log file naming convention.
2)specify one or more archive log file locations.
3)switch the database to ARCHIVELOG mode.
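Switching to ARCHIVELOG mode is done with the database mounted but not open. A minimal sketch (the archive destination path is an assumption for illustration):

 SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/app/oracle/arch' SCOPE=SPFILE;
 SQL>SHUTDOWN IMMEDIATE
 SQL>STARTUP MOUNT
 SQL>ALTER DATABASE ARCHIVELOG;
 SQL>ALTER DATABASE OPEN;
 SQL>ARCHIVE LOG LIST   -- should now report "Archive Mode" and the destination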

15-PERFORMING DATABASE BACKUPS
=================================
ORACLE SECURE BACKUP
------------------------------------------
Oracle's current backup and recovery product for the database is Recovery Manager (RMAN). Oracle Secure Backup complements its existing functionality in the following ways:
1)complete backup solution:-it provides protection for database and nondatabase data, protecting the whole Oracle environment.
2)media management:-Oracle Secure Backup provides the media management layer that RMAN needs to write database backups to tape. Before Oracle Secure Backup, customers had to purchase expensive third-party media management products to integrate RMAN with tape backups.
The combination of RMAN and Oracle Secure Backup provides an end-to-end backup solution. In order for RMAN to store backups on tape, a device interface known as the Media Management Library (MML) must be configured. RMAN can make consistent or inconsistent backups, perform incremental and full backups, and back up either the full database or a portion of it.
USER-MANAGED BACKUP
-------------------------------------------
It is a manual process of tracking backup needs and status:
*it requires the DBA to write scripts.
*it requires that the datafiles be put in the correct mode for backup.
*it relies on operating system commands to make the backups of the files.
The following are some of the steps that the scripts must take:
*query V$DATAFILE to identify the names of the datafiles to be backed up and their current state.
*query V$LOGFILE to identify the names of the redo log files.
*query V$CONTROLFILE to identify the control files to back up.
*place each tablespace in online backup mode.
*query V$BACKUP to see which datafiles are part of a tablespace that has been placed in online backup mode.
*issue operating system copy commands to copy the datafiles to the backup location.
*bring each tablespace out of online backup mode.
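For example, a user-managed online backup of a single tablespace follows this pattern (the tablespace name, file name and copy destination are placeholders); the operating-system copy itself happens outside the database:

 SQL>ALTER TABLESPACE users BEGIN BACKUP;
 SQL>HOST cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
 SQL>ALTER TABLESPACE users END BACKUP;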

BACKUP TERMINOLOGY
----------------------------------------
Backup strategy:
1)a whole database backup includes all datafiles and at least one control file.
2)a partial database backup includes zero or more tablespaces and zero or more datafiles, and may or may not include a control file.
Backup type:
1)full:-makes a copy of every data block that contains data within the files being backed up.
2)incremental:-makes a copy of all data blocks that have changed since some previous backup.
Oracle 10g supports two levels of incremental backup:
 level 0:-this is the same as a full backup.
 level 1:-this backs up all data blocks changed since the level 0 backup.
A level 0 backup is also called the baseline backup.
Backup mode:
1)offline backups:-also called consistent or cold backups, are taken when the database is not open. They are called consistent because the SCN in the datafile headers matches the SCN in the control file.
2)online backups:-also called inconsistent or hot backups, are taken while the database is open. An inconsistent backup needs recovery in order to be used.
IMAGE COPIES:-these are duplicates of datafiles or archived log files, made simply by copying the files with operating system copy commands.
BACKUP SETS:-these are copies of one or more datafiles or archived log files. With backup sets, empty data blocks are not stored, thereby causing backup sets to use less space on disk or tape. With an image copy, only the file or files needed can be retrieved from tape or disk, whereas with backup sets the entire backup set must be retrieved from the tape in order to extract a single file.

Most databases contain about 20% empty data blocks. A backup of a database running in NOARCHIVELOG mode must have all three of the following attributes: offline, full, and whole database. A backup of a database running in ARCHIVELOG mode has access to the full range of backup options.
FLASH RECOVERY AREA
------------------------------------------
It is space set aside on disk to contain archived logs, flashback logs and backups. Monitor the flash recovery area to:
*configure flashback logging
*size the recovery area
*view current space consumption
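The flash recovery area is configured with two initialization parameters; a hedged sketch (the location and size are examples only, and the size must be set before the destination):

 SQL>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
 SQL>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/flash_recovery_area';
 SQL>SELECT name, space_limit, space_used FROM v$recovery_file_dest;   -- current consumption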

16-PERFORMING DATABASE RECOVERY
===================================
OPENING A DATABASE
-------------------------------
To open a database:
a)all control files must be present and synchronised.
b)all online datafiles must be present and synchronised.
c)at least one member of each redo log group must be present.
Before being fully opened, the database performs internal consistency checks:
*NOMOUNT:-the instance reads the parameter file; no database files are checked at this stage.
*MOUNT:-at this stage the instance checks that the control files are present and synchronised. If any control file is missing or corrupted, it returns an error and remains in NOMOUNT state.
*OPEN:-while moving from MOUNT to OPEN state, the instance checks that at least one member of each redo log group is present; any missing members are noted in the alert log.
When you start an instance the default mode is OPEN; you may choose to start the instance in another mode.
After the instance is open, it may fail because of the loss of:
*any control file
*a datafile belonging to the SYSTEM or UNDO tablespace
*an entire redo log group
Such errors can be detected by inspecting the alert log file.
LOSS OF A CONTROL FILE
Perform the following steps:
1.shut down the instance if it is still open.
2.restore the missing control file by copying an existing control file.
3.start the instance.
LOSS OF A REDO LOG FILE
If one member of a redo log group is lost:
1.normal operation of the instance is not affected.
2.you receive a message in the alert log notifying you that a member cannot be found.
3.you can restore the missing log file by copying one of the remaining files from the same group.
To recover from the loss of a redo log member, perform the following steps:
*determine that there is a missing log file by examining the alert log.
*restore the missing file by copying one of the existing files from the same group.
*if the media failure is due to the loss of a disk drive or controller, rename the missing file.
*if the whole group has already been archived, or if you are in NOARCHIVELOG mode, you may solve the problem by clearing the log group to re-create the missing file or files: select the appropriate group and select the Clear Logfile action, or use the following command:
 SQL>ALTER DATABASE CLEAR LOGFILE GROUP <group#>;
Database Control does not allow you to clear a log group that has not been archived, because doing so breaks the chain of redo information. If you clear an unarchived log group, you must immediately take a full backup of your database. The command to clear an unarchived log group is:
 SQL>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group#>;
LOSS OF A DATAFILE IN NOARCHIVELOG MODE
If the database is in NOARCHIVELOG mode and a datafile is lost, perform the following steps:
*shut down the database if it is not already down.
*restore the entire database, including all data and control files, from the backup.
*open the database.
*have the users re-enter all changes made since the last backup.
LOSS OF A NONCRITICAL DATAFILE IN ARCHIVELOG MODE
If the lost datafile does not belong to the SYSTEM or UNDO tablespace, the loss does not affect the rest of the database; it affects only the objects that are in the missing file. To restore and recover the missing file, perform the following steps:
*click 'Perform Recovery' on the Maintenance properties page.
*select "datafile" as the recovery type and select "restore to current time".
*add all datafiles that need recovery.
*determine the location (default or new) if a disk or controller is missing.
*submit the RMAN job to restore and recover the missing file.
LOSS OF A SYSTEM-CRITICAL DATAFILE
If the lost datafile belongs to the SYSTEM or UNDO tablespace, perform the following steps:
*shut down the instance with the SHUTDOWN ABORT command if it is not already down.
*mount the database.
*click 'Perform Recovery' on the Maintenance properties page.
*select 'datafile' as the recovery type and select 'restore to current time'.
*add all datafiles that need recovery.
*determine the location (default or new) if a disk or controller is missing.
*submit the RMAN job to restore and recover the missing datafile.
*open the database.

17-PERFORMING FLASHBACK
==========================
FLASHBACK TECHNOLOGY
BENEFITS:
*with traditional techniques an entire database or file has to be restored, while with flashback only the incorrect data has to be restored; hence it is fast.
*with traditional recovery every change in the database log must be examined to restore the data, while with flashback the changes are indexed by row and by transaction, so it is easy to trace the incorrect data.
*flashback commands are easy compared with complex multistep recovery procedures.
Flashback technology should be used when a logical corruption occurs in the database. With flashback technology you can diagnose errors that have occurred in the database, view the transactions that have contributed to specific row modifications, view the entire set of versions of a given row during some period of time, and even view the data as it appeared at a specific time in the past.
FLASHBACK DATABASE
------------------------------------------
*works like a rewind button for the database.
*can be used in the case of logical data corruption caused by users.
With Flashback Database, the time it takes to recover the database is proportional to the number of changes that need to be backed out, because you do not need to restore the datafiles. Flashback Database is implemented by using a type of log file called the flashback database logs. The Oracle database periodically logs "before images" of data blocks in the flashback database logs, which are automatically created and maintained in the flash recovery area.
THE OPPOSITE OF FLASHBACK IS RECOVER.
After the Flashback Database operation is complete, you should open the database in read-only mode in order to verify that the correct target time or SCN has been used.
YOU CANNOT USE FLASHBACK DATABASE IN THE FOLLOWING SITUATIONS:
*the control file has been restored or re-created.
*a tablespace has been dropped.
*a datafile has been shrunk. Any shrunk datafile must be taken offline before performing the flashback operation.
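A hedged sketch of a Flashback Database operation (the one-hour window is an arbitrary example); flashback logging must already be enabled, and the database must be mounted but not open:

 SQL>SHUTDOWN IMMEDIATE
 SQL>STARTUP MOUNT
 SQL>FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);
 SQL>ALTER DATABASE OPEN READ ONLY;    -- verify that the target time is correct
 SQL>SHUTDOWN IMMEDIATE
 SQL>STARTUP MOUNT
 SQL>ALTER DATABASE OPEN RESETLOGS;    -- finalise the flashback once verified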

FLASHBACK TABLE
*recovers a table or tables to a specific point in time without restoring a backup.
*data is retrieved from the undo tablespace to perform the flashback operation.
*the FLASHBACK TABLE privilege is required to perform a flashback table operation.
*row movement must be enabled on the table you are performing the flashback operation on.
*flashback table is an in-place operation.
*the database stays online.
You can use Flashback Version Query and Flashback Transaction Query to find an appropriate flashback time.
Using Enterprise Manager you can enable row movement by performing the following tasks:
1)select Tables in the Schema region on the Administration property page, enter the schema name to search for the table, and click Go.
2)click the table name; you are now on the View Table page.
3)click Edit, which takes you to the Edit Table page.
4)click the Options tab, where you can change the Enable Row Movement setting for the table.
5)set Enable Row Movement to Yes and click Apply.
The command to perform a flashback table operation is as follows:
 SQL>FLASHBACK TABLE HR.EMPLOYEES TO TIMESTAMP
     TO_TIMESTAMP('2003-08-07 05:32:00','YYYY-MM-DD HH24:MI:SS');
*the FLASHBACK TABLE command executes as a single transaction, acquiring exclusive DML locks.
*statistics are not flashed back.
*current indexes and dependent objects are maintained.
*flashback table operations:
 --cannot be performed on system tables, remote tables or fixed tables.
 --cannot span DDL operations.
 --are written to the alert log file.
 --generate undo and redo data.
FLASHBACK DROP
With this feature you can undo the effects of a DROP TABLE statement without resorting to traditional point-in-time recovery. This is made possible via the recycle bin, which can be queried via the DBA_RECYCLEBIN view.
FLASHBACK DROP DOES NOT WORK FOR TABLES THAT:
*reside in the SYSTEM tablespace.
*use fine-grained auditing or Virtual Private Database.
*reside in a dictionary-managed tablespace.
*have been purged, either manually or by automatic purging under space pressure.
THE FOLLOWING DEPENDENCIES ARE NOT PROTECTED:
*bitmap join indexes
*materialised view logs
*referential integrity constraints
*indexes dropped before the table
FLASHBACK TIME NAVIGATION
FLASHBACK QUERY:-query all data as it existed at a specific point in time.
FLASHBACK VERSION QUERY:-see all versions of rows between two times, and the transactions that changed each row.
FLASHBACK TRANSACTION QUERY:-see all changes made by a transaction.
With Flashback Query you query the database as of a certain time: by using the AS OF clause of the SELECT statement you can specify the timestamp for which you want to view the data. The AS OF clause can be followed by either TIMESTAMP or an SCN number, for example:
 SQL>UPDATE employees SET salary =
       (SELECT salary FROM employees
        AS OF TIMESTAMP TO_TIMESTAMP('2005-05-04','YYYY-MM-DD')
        WHERE employee_id = 200)
     WHERE employee_id = 200;
With Flashback Version Query you can perform queries on the database over a certain time span or range of user-specified SCNs. This feature provides the VERSIONS clause, which retrieves all the row versions that existed between two points in time or two SCNs.

For example:
 SQL>SELECT versions_xid, salary FROM employees
     VERSIONS BETWEEN TIMESTAMP t1 AND t2
     WHERE employee_id = 200;
THE VERSIONS CLAUSE CANNOT BE USED TO QUERY:
*external tables
*temporary tables
*fixed tables
*views
It cannot span DDL statements. Row versions created by segment shrink operations (maintenance operations that move table rows across blocks) are filtered out.
FLASHBACK TRANSACTION QUERY
It is a diagnostic tool you can use to see the changes made to the database at the transaction level. You can use the FLASHBACK_TRANSACTION_QUERY view to determine all the SQL statements needed to undo the changes made either by a specific transaction or during a specific period of time.
FLASHBACK TRANSACTION QUERY CONSIDERATIONS
*DDLs (changes made to the data dictionary) are seen as dictionary updates.
*dropped objects are seen as object numbers.
*dropped users are seen as user identifiers.
When there is not enough undo data for a specified transaction, a row with a value of UNKNOWN in the OPERATION column of FLASHBACK_TRANSACTION_QUERY is returned.
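For instance, a hedged sketch (reusing the HR.EMPLOYEES table from the earlier examples) of querying the view for the undo SQL of changes made to a table:

 SQL>SELECT xid, operation, undo_sql
     FROM   flashback_transaction_query
     WHERE  table_owner = 'HR'
     AND    table_name  = 'EMPLOYEES';

The UNDO_SQL column returns statements that, if executed, reverse the corresponding changes.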

18-MOVING DA TA =========== === GENARAL ARCHTECTURE ------------------------------------------MAJOR COMPONENTS 1)DBMS_DATAPUMP:-this package coprises of the API for high speed of export and import utilities for bulk data and metadata movement. 2)DIREDT PATH API(DPAPI):-theis is an intertface su ported by oracle 10g to minimise data conversion at loading and unloading time. 3)DBMS_METADATA:-this is a package used by worker process for all metadata loading and unloading. object definitions are stored in XML rather than SQ L. 4)EXTERNAL TABLE API:-with ORACLE_DATAPUMP and ORACLE_LOADER drivers u can store data in exter nal tables.the select statement read external table as though they are stored in database. 5)SQL *LOADER:-the SQL*LOADER has been tntegrated w ith external table,there by providing automatic migrati on of loader control file to external table acces para meter. 6)EXPDP and IMPDP:- clients are thin layers that ma ke calls to DBMS_DATAPUMP to initiate and monitor data pump operation. 7)OTHER APPLICATIONS:-thees are applications such a s

DIRECTORY OBJECTS
-------------------------------------
Directory objects are logical structures representing a physical directory on the server's file system. They contain the location of a specific operating system directory. Directory objects are owned by the SYS user, and directory names are unique across the database because they are all located in a single namespace (SYS). Directory objects are required when you specify file locations for Data Pump, because Data Pump accesses files on the server rather than on the client.
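
A minimal sketch of creating a directory object and granting access to it (the operating system path and the grantee are assumptions; the CREATE ANY DIRECTORY privilege is needed, and the path must already exist on the server):

SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;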

SQL*LOADER
----------------------
It loads data from external files into tables of an Oracle database. The files used by SQL*Loader are as follows:
a) INPUT DATAFILE:- SQL*Loader reads data from one or more files that are specified in the control file. SQL*Loader organizes the data in fixed record format, variable record format, or stream record format. The record format can be specified in the control file using the INFILE parameter; the default is stream record format.
b) CONTROL FILE:- it is a text file written in a language that SQL*Loader understands. The control file indicates to SQL*Loader where to find the data, how to parse and interpret the data, where to insert the data, and so on. It contains DDL instructions. The control file has three sections (a small example control file is sketched after the load-method discussion below):
1) The first section contains:
*the input file name and the number of records to be skipped.
*the INFILE clause to specify where the input data is located.
*the data to be loaded.
2) The second section contains one or more INTO TABLE blocks. Each of these blocks contains information about the table (table name and columns) into which the data is to be loaded.
3) The third section is optional and, if present, contains input data.
c) LOG FILE:- it contains a detailed summary of the load, including a description of any errors that occurred during the load. If SQL*Loader cannot create the log file, execution of the load terminates.
d) BAD FILE:- it contains records that are rejected either by the loader or by the database. The loader rejects a record when its format is invalid, and the database rejects a record when the record itself is invalid; the rejected records are placed in the bad file.

e) DISCARD FILE:- this file contains records rejected by SQL*Loader because they do not match any record-selection criteria specified in the control file.
METHODS OF SAVING DATA
1) CONVENTIONAL PATH LOAD:
It uses SQL processing and the COMMIT statement for saving data. It always generates redo entries, enforces all constraints, fires INSERT triggers, can load data into clustered tables, and allows other users to update the table during the load operation.
2) DIRECT PATH LOAD:
It uses data saves (a faster operation) instead of COMMIT, generates redo only under specific conditions, enforces only UNIQUE, PRIMARY KEY, and NOT NULL constraints, does not fire INSERT triggers, cannot load into clustered tables, and prevents other users from making changes to the table during the load operation.
THE FOLLOWING FEATURES DIFFERENTIATE A DATA SAVE FROM A COMMIT:
*During a data save, only full database blocks are written to the database.
*The blocks are written after the high-water mark (HWM).
*After a data save, the high-water mark (HWM) is moved.
*Internal resources are not released after a data save.
*A data save does not end the transaction.
*Indexes are not updated at each data save.

If only a small number of rows are to be inserted into a large table, use a conventional path load.
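
Pulling the file descriptions and load methods together, here is a hedged sketch of a small control file and a matching sqlldr invocation (the file names, table, and columns are illustrative):

-- emp.ctl (control file)
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE hr.emp_stage
FIELDS TERMINATED BY ','
(employee_id, first_name, last_name, hire_date DATE "DD-MON-YYYY")

$ sqlldr hr/hr CONTROL=emp.ctl LOG=emp.log DIRECT=true

DIRECT=true requests a direct path load; leave it out (the default is a conventional path load) when the conventional path is more appropriate.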

DATA PUMP
-------------------------
It is a server-based facility for high-speed data and metadata movement. The Data Pump infrastructure is callable via the DBMS_DATAPUMP PL/SQL package. Oracle Database 10g provides the following tools:
1) command-line export and import clients, called expdp and impdp respectively.
2) a web-based export/import interface that is accessible from Database Control.
Data Pump automatically decides which data access method to use: direct path or external tables. Data Pump uses direct path loading and unloading whenever a table's structure allows it, which gives the best single-stream performance. If the table is a clustered table, or has referential integrity constraints, encrypted columns, or other such items, Data Pump uses external tables rather than direct path to move the data.
The ability of Data Pump to detach from and reattach to long-running jobs, without affecting the job itself, enables you to monitor jobs from multiple locations while they are running. All stopped Data Pump jobs can be restarted without loss of data as long as the master table and dump file set remain undisturbed.
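
For example, a second session could attach to a running export, check its status, and stop or restart it roughly like this (the job name shown is a hypothetical system-generated name; actual names can be found in the DBA_DATAPUMP_JOBS view):

$ expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> STATUS
Export> STOP_JOB=IMMEDIATE

$ expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> START_JOB
Export> CONTINUE_CLIENT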

DATA PUMP BENEFITS
------------------------------------
1) Fine-grained object and data selection, using the EXCLUDE, INCLUDE, and CONTENT parameters.
2) Explicit specification of database version, using the VERSION parameter.
3) Parallel execution, using the PARALLEL parameter.
4) Estimation of the space consumed by an export job, using the ESTIMATE_ONLY parameter.
5) Network mode in a distributed environment.
6) Remapping capabilities during import (change the target datafile, schema, and tablespace).
7) Data sampling and data compression.
DATA PUMP EXPORT AND IMPORT
---------------------------------------------------------
EXPORT is the utility for unloading data and metadata into a set of operating system files called a dump file set. IMPORT is the Data Pump utility for loading the data and metadata stored in an export dump file set into a target system.
The Data Pump API accesses its files on the server rather than on the client. Data Pump can also run in network mode, over a database link, which is used, for example, to export data from a read-only source database.
At the center of every Data Pump job is the master table (MT), which is created in the schema of the user running the Data Pump job. The MT maintains all aspects of the job.

The MT is built during a file-based export job and written to the dump file set as the last step. Conversely, loading the MT into the user's schema is the first step of a file-based import, and it is used to sequence the creation of all objects imported. The MT is the key to restarting a Data Pump job in the event of a planned or unplanned stop; the MT is dropped when the Data Pump job finishes normally.
DATA PUMP -- INTERFACES
*Command line:- enables you to specify most of the export parameters directly on the command line.
*Parameter file:- enables you to specify all command-line parameters in a parameter file. The only exception is the PARFILE parameter itself.
*Interactive command line:- stops logging to the terminal and displays export or import prompts where you can enter various commands. This mode is enabled by pressing Ctrl+C while an export or import started with the command-line interface or a parameter file is running.
*Web interface:- on the Database Control home page, click the Maintenance tab and then select one of the following links from the Utilities region: a) Export to Files, b) Import from Files, or c) Import from Database.
DATA PUMP -- EXPORT AND IMPORT MODES
*FULL
*SCHEMA
*TABLE
*TABLESPACE
*TRANSPORTABLE TABLESPACE

FINE-GRAINED OBJECT SELECTION:- Data Pump can include or exclude virtually any type of object.
The EXCLUDE parameter can exclude any database object type from an export or import operation:
EXCLUDE=object_type[:"name_expr"]
The INCLUDE parameter can include any database object type in an export or import operation:
INCLUDE=object_type[:"name_expr"]
The CONTENT parameter enables you to request, for the current operation, only the metadata, only the data, or both:
CONTENT=ALL|METADATA_ONLY|DATA_ONLY
The QUERY parameter operates in a manner similar to the original export utility, with two significant enhancements:
1) it can be qualified with a table name.
2) it can be used during import as well.
QUERY=hr.employees:"WHERE department_id IN (10,20) AND salary < 1600 ORDER BY department_id"
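
A hedged example putting these filters on an expdp run; because the filter values contain quotes that most shells mangle, the parameters are placed in a parameter file (all names are illustrative):

$ cat hr_meta.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=hr_meta.dmp
SCHEMAS=hr
CONTENT=METADATA_ONLY
INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
$ expdp hr/hr PARFILE=hr_meta.par

Note that INCLUDE and EXCLUDE are mutually exclusive within a single Data Pump job.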

ADVANCED FEATURES
----------------------------------
SAMPLING
TASK: create test data.
METHOD: specify the percentage of data to be sampled and unloaded from the source database when performing a Data Pump export.
Example: to unload 44% of the HR.EMPLOYEES table:
SAMPLE="HR"."EMPLOYEES":44
Example: to unload 30% of the entire export job:
EXPDP HR/HR DIRECTORY=DATA_PUMP_DIR DUMPFILE=SAMPLE1.DMP SAMPLE=30
The SAMPLE parameter is not valid for network exports.
EXPORT OPTIONS
FILES:- three types of files are managed by Data Pump jobs:
1) dump files, for the data and metadata that are to be moved.
2) log files, for messages.
3) SQL files, for the output of a SQLFILE operation.
Because Data Pump is server based and not client based, Data Pump files are accessed relative to Oracle directory paths. Absolute paths are not supported, for security reasons.
DATA PUMP FILE LOCATIONS:- the order of precedence of file locations is:
1) per-file directory:- a per-file directory object may be specified for each dump file, log file, and SQL file; it is separated from the file name by a colon (:).
2) the DIRECTORY parameter:- the Data Pump clients provide a DIRECTORY parameter, which specifies the name of a directory object; the directory object describes the location in which the files are to be accessed.

3) you can define an environment variable called DATA_PUMP_DIR, rather than using the DIRECTORY parameter, to specify the name of the directory object.
4) there is a default directory object, created for every database, named DATA_PUMP_DIR. Access to this directory object is granted through the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles.
The exact directory path specification for DATA_PUMP_DIR varies with the values of the ORACLE_BASE and ORACLE_HOME system variables and with the existence of the DATA_PUMP_DIR subdirectory. If ORACLE_BASE is defined on the target system, that value is used; otherwise the ORACLE_HOME value is used. If the DATA_PUMP_DIR subdirectory is not found, the default path ORACLE_HOME/rdbms/log is used.
SCHEDULING AND RUNNING A JOB:- Data Pump jobs can be scheduled as repeatable jobs by Enterprise Manager Database Control.
DUMP FILE NAMING AND SIZE
The DUMPFILE parameter specifies the names, and optionally the directories, of disk-based dump files. Multiple file specifications may be provided as a comma-separated list. File names may contain the substitution variable %U, which implies that multiple files may be generated. If no DUMPFILE is specified, EXPDAT.DMP is used by default. Created dump files are autoextensible by default, but if the FILESIZE parameter is specified each file is non-extensible. If %U is specified, more dump files are created as more dump space is required, but if %U is not specified the client receives a message asking it to add a new file. Pre-existing files that match the resulting file names are not overwritten; they result in an error and the job is aborted.
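
For instance, a multi-file export relying on %U and FILESIZE might look roughly like this (the names, size, and degree of parallelism are illustrative):

$ expdp hr/hr DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr_%U.dmp FILESIZE=2G SCHEMAS=hr PARALLEL=2

Data Pump would then generate hr_01.dmp, hr_02.dmp, and so on, adding a new file whenever the current one reaches the FILESIZE limit or an additional parallel stream needs its own file.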

DATA PUMP IMPORT
-----------------------------------
It is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information.
You can interact with Data Pump import by:
1) command line:- use the IMPDP command and specify parameters.
2) parameter file:- you can enter command-line parameters in a file.
3) interactive command mode:- you can attach to additional executing and stopped jobs.
TRANSFORMATIONS
You can remap:
*datafiles, by using REMAP_DATAFILE; this is used while moving databases across platforms that have different file system semantics.
*tablespaces, by using REMAP_TABLESPACE; this allows objects to be moved from one tablespace to another.
*schemas, by using REMAP_SCHEMA; this provides the capability to change object ownership.
The TRANSFORM parameter enables you to alter the object-creation DDL for the objects being loaded.
DATA PUMP: PERFORMANCE CONSIDERATIONS
You can improve the throughput of a job with the PARALLEL parameter. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance. You must supply at least one file for each degree of parallelism. The degree of parallelism can be reset at any time during the job.
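
A hedged impdp sketch tying the remapping and PARALLEL parameters together (the schema, tablespace, and file names are assumptions):

$ impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr_%U.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:example PARALLEL=2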

PERFORMANCE INITIALIZATION PARAMETERS
*Performance of Data Pump operations can be affected by:
--DISK_ASYNCH_IO=TRUE
--DB_BLOCK_CHECKING=FALSE
--DB_BLOCK_CHECKSUM=FALSE
*The following should be set high to allow for maximum parallelism:
--PROCESSES
--SESSIONS
--PARALLEL_MAX_SERVERS
*The following should be sized generously:
--SHARED_POOL_SIZE
--the undo tablespace

e.g. REMAP_DATAFILE='C:\ORADATA\TB6.F':'/U1/TB6.F'
DATA PUMP ACCESS PATH: CONSIDERATIONS
One of the access paths is automatically selected by Data Pump; it uses direct path load and unload when single-stream performance is required. Data Pump uses external tables if any of the following conditions exist:
*a table with fine-grained access control enabled in insert or select mode.
*a domain index on a LOB column.
*a table with an active trigger defined.
*a global index on a partitioned table with a single-partition load.
*BFILE or opaque type columns.
*referential integrity constraints.
*VARRAY columns with an embedded opaque type.
Because both methods support the same external data representation, data can be unloaded with one method and loaded with the other.

EXTERNAL TABLES
-------------------------------
An external table is composed of proprietary-format flat files that are operating system independent. After an external table is created and populated, no rows may be added, updated, or deleted from it. External tables may not have indexes.
The Data Pump access driver enables loading and unloading operations for external tables. Data can be used directly from external tables or loaded into another database. The resulting files can be read only with the ORACLE_DATAPUMP access driver. You can combine different files.
EXTERNAL TABLE POPULATION WITH ORACLE_DATAPUMP
-------------------------------------------------------------------------------------------------------
e.g.
CREATE TABLE emp_ext (first_name, last_name, department_name)
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_dir
   LOCATION ('emp1.exp','emp2.exp','emp3.exp'))
PARALLEL
AS
SELECT e.first_name, e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id
AND    d.department_name IN ('MARKETING','PURCHASING');
The example above shows how external table population can export a selective set of records from the EMPLOYEES and DEPARTMENTS tables. The number of files in the LOCATION clause must match the degree of parallelism, because each parallel I/O server process requires its own file.
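
The .exp files written by the statement above could then be attached to a read-only external table on another database, assuming the files have been copied there and a directory object ext_dir points at them; the column datatypes shown are the HR sample-schema definitions and must match what is in the dump files:

CREATE TABLE emp_ext_copy
  (first_name      VARCHAR2(20),
   last_name       VARCHAR2(25),
   department_name VARCHAR2(30))
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_dir
   LOCATION ('emp1.exp','emp2.exp','emp3.exp'));

SELECT * FROM emp_ext_copy;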

EXTERNAL TABLE POPULATION WITH ORACLE_LOADER
------------------------------------------------------------------------------------------------
CREATE TABLE extab_employees
  (employee_id NUMBER(4),
   first_name  VARCHAR2(30),
   last_name   VARCHAR2(20),
   hire_date   DATE)
ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY exttab_dat_dir
   ACCESS PARAMETERS
   (RECORDS DELIMITED BY NEWLINE
    BADFILE extab_bad_dir:'empxt%a_%p.bad'
    LOGFILE extab_bad_dir:'empxt%a_%p.log'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (employee_id, first_name, last_name,
     hire_date CHAR DATE_FORMAT DATE MASK "dd-mon-yyyy"))
   LOCATION ('empxt1.dat','empxt2.dat'))
PARALLEL
REJECT LIMIT UNLIMITED;
If you have a lot of data to be loaded, enable PARALLEL for the load operation:
ALTER SESSION ENABLE PARALLEL DML;
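
For example, after enabling parallel DML the rows could be copied from the external table into a regular table roughly as follows (the target table hr.employees_hist is an assumption):

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ APPEND PARALLEL(e,2) */ INTO hr.employees_hist e
     SELECT employee_id, first_name, last_name, hire_date
     FROM   extab_employees;
SQL> COMMIT;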

DATA DICTIONARY
[DBA|ALL|USER]_EXTERNAL_TABLES:- lists the specific attributes of the external tables in the database.
[DBA|ALL|USER]_EXTERNAL_LOCATIONS:- lists the data sources of external tables.
[DBA|ALL|USER]_TABLES:- describes the relational tables in the database.
[DBA|ALL|USER]_TAB_COLUMNS:- describes the columns of tables, views, and clusters in the database.
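
A quick way to see which external tables exist and where their source files live is to join two of these views, roughly as sketched here (the exact column list may vary by release):

SQL> SELECT t.owner, t.table_name, t.default_directory_name, l.location
     FROM   dba_external_tables    t
     JOIN   dba_external_locations l
            ON l.owner = t.owner AND l.table_name = t.table_name
     ORDER  BY t.owner, t.table_name;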
