Deadlock Error: Take the trace file from the user dump destination and analyze it
for the cause of the error.
ORA-01555 Snapshot too old error: Check the query, try to fine-tune it, and check
the undo tablespace size.
Unable to extend segment: Check the tablespace free space and, if required, add
space to the tablespace with the 'alter database datafile .... resize' or 'alter
tablespace add datafile' command.
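As a sketch of the two space-extension commands mentioned above (the file names and sizes are illustrative assumptions, not values from a real system):

```sql
-- Grow an existing datafile (path and size are hypothetical)
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' RESIZE 2G;

-- Or add a second datafile to the tablespace instead
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/PROD/users02.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;
```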
Explain the DUAL table. Is any data internally stored in the DUAL table? Many
users run SELECT SYSDATE FROM DUAL concurrently and get results differing by
a few milliseconds. If we execute SELECT SYSDATE FROM EMP; what error will we
get, and why?
DUAL is a SYS-owned table created during database creation. The DUAL table
consists of a single column and a single row with the value 'X'. We will not get
any error if we execute SELECT SYSDATE FROM SCOTT.EMP; instead, SYSDATE is
evaluated for every row retrieved. For example, if there are 12 rows in the EMP
table, the query will return the date 12 times, once per row.
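A quick sketch of the behaviour described above (assuming the standard SCOTT.EMP demo table is present):

```sql
SELECT * FROM dual;            -- one row, one column: DUMMY = 'X'
SELECT sysdate FROM scott.emp; -- the current date repeated once per EMP row
```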
As an Oracle DBA, which UNIX commands and files should you be familiar
with?
To check the background processes: ps -ef | grep pmon (or just ps -ef)
To watch the alert log file: tail -f alert.log
To check CPU usage: top, or vmstat 2 5
What is a Database instance?
A database instance, also known as the server, is a set of memory structures and
background processes that access a set of database files. It is possible for a
single database to be accessed by multiple instances (this is the Oracle Parallel
Server option).
What are the requirements of a simple database?
A simple database consists of:
One or more data files, One or more control files, Two or more redo log files,
Multiple users/schemas, One or more rollback segments, One or more
Tablespaces, Data dictionary tables, User objects (table, indexes, views etc.)
The server (instance) that accesses the database consists of:
SGA (database buffer cache, dictionary cache buffers, redo log buffers, shared SQL
pool), SMON (System Monitor), PMON (Process Monitor), LGWR (Log Writer), DBWR
(Database Writer), ARCH (Archiver), CKPT (Checkpoint), RECO, Dispatcher, and user
processes with their associated PGAs
Which process writes data from the database buffer cache to the data files?
The background process DBWR (Database Writer) writes dirty blocks from the
database buffer cache to the datafiles.
How do you DROP an Oracle Database?
You can do it at the OS level by deleting all the files of the database (datafiles,
redo log files and control files). The files to be deleted can be found using the
V$DATAFILE, V$LOGFILE and V$CONTROLFILE views.
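One way to list those files, as a sketch, is to query the three views together:

```sql
SELECT name   FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name   FROM v$controlfile;
```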
What is a full backup?
A full backup is an operating system backup of all data files, online redo log files
and control files that constitute the Oracle database, plus the parameter file. If
you are using RMAN for backup, then in RMAN terms a full backup means a level 0
incremental backup.
While taking a hot backup (begin/end backup), what happens at the back
end?
When we take a hot backup (BEGIN BACKUP - END BACKUP), the datafile headers
associated with the datafiles in the corresponding tablespace are frozen, so Oracle
stops updating the datafile headers but continues to write data into the
datafiles. In hot backup mode Oracle generates more redo; this is because Oracle
writes out complete changed blocks to the redo log files.
Which is the best option to move a database from one server to
another server on the same network, and why?
Import-Export, Backup-Restore, Detach-Attach
Import-Export works well only if you are dealing with very small databases, though
it reduces network traffic. For larger databases, backup and restore is the better
option: if you have a few million rows, export/import takes minutes to copy them
compared to seconds using backup and restore.
What are the different types of RMAN backup?
Full backup: During a full backup (level 0) all of the blocks ever used in the
datafiles are backed up. The only difference between a level 0 incremental backup
and a full backup is that a full backup is never included in an incremental strategy.
Cumulative backup: During a cumulative (level 1) backup, all blocks changed since
the last level 0 backup are backed up.
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks
changed since level 0
Differential backup: During a differential (level 1) backup, only those blocks that
have changed since the most recent level 1 or level 0 backup are backed up.
Incremental backups are differential by default.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Give one method for transferring a table from one schema to another:
There are several possible methods: Export-Import, CREATE TABLE... AS SELECT
or COPY.
What is the purpose of the IMPORT option IGNORE? What is its default
setting?
The IMPORT IGNORE option tells import to ignore "already exists" errors. If it is
not specified, the tables that already exist will be skipped. If it is specified, the
error is ignored and the table's data will be inserted. The default value is N.
What happens when the DEFAULT and TEMP tablespace clauses are left
out from CREATE USER statements?
The user is assigned the SYSTEM tablespace as a default and temporary
tablespace. This is bad because it causes user objects and temporary segments
to be placed into the SYSTEM tablespace resulting in fragmentation and
improper table placement (only data dictionary objects and the system rollback
segment should be in SYSTEM).
What happens if the constraint name is left out of a constraint clause?
The Oracle system will use the default name of SYS_Cxxxx where xxxx is a
system generated number. This is bad since it makes tracking which table the
constraint belongs to or what the constraint does harder.
What happens if a TABLESPACE clause is left off of a primary key
constraint clause?
This results in the index that is automatically generated being placed in the
user's default tablespace. Since this will usually be the same tablespace the
table is being created in, this can cause serious performance problems.
What happens if a primary key constraint is disabled and then enabled
without fully specifying the index clause?
The index is created in the user's default tablespace and all sizing information is
lost. Oracle doesn't store this information as a part of the constraint definition,
but only as part of the index definition; when the constraint was disabled the
index was dropped and the information is gone.
Using hot backup without being in archive log mode, can you recover in
the event of a failure? Why or why not?
You cannot recover the data. Without archive log mode the online redo log files
are reused (overwritten) once they are full rather than being archived, so the
redo generated while the tablespaces were in backup mode is lost. A hot backup
without the redo needed to make it consistent is unusable for recovery.
What causes the "snapshot too old" error? How can this be prevented
or mitigated?
This is caused by large or long running transactions that have either wrapped
onto their own rollback space or have had another transaction write on part of
their rollback space. This can be prevented or mitigated by breaking the
transaction into a set of smaller transactions or increasing the size of the
rollback segments and their extents.
How can you tell if a database object is invalid?
Query the DBA_OBJECTS view (or USER_OBJECTS for your own schema) and check
the STATUS column; objects whose status is INVALID need to be recompiled.
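As a sketch, the invalid-object check can be run as:

```sql
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```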
What is Explain plan and how is it used?
The EXPLAIN PLAN command is a tool to tune SQL statements. To use it you must
have a plan table (PLAN_TABLE) created in the schema you are running the explain
plan for; this is created using the utlxplan.sql script. Once the plan table exists,
you run the EXPLAIN PLAN command, giving as its argument the SQL statement to
be explained. The plan table is then queried to see the execution plan of
the statement. Explain plans can also be generated using tkprof.
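A minimal sketch of the workflow, assuming a 9i-or-later database where the DBMS_XPLAN package is available:

```sql
EXPLAIN PLAN FOR
  SELECT * FROM scott.emp WHERE deptno = 10;  -- the statement to tune

-- Display the most recent plan from PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```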
How do you prevent output from coming to the screen?
The SET option TERMOUT controls output to the screen. Setting TERMOUT OFF
turns off screen output. This option can be shortened to TERM.
How do you prevent Oracle from giving you informational messages
during and after a SQL statement execution?
The SET options FEEDBACK and VERIFY can be set to OFF.
How do you generate file output from SQL?
By use of the SPOOL command
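Putting the last three answers together, a hedged SQL*Plus sketch (the file path is an assumption):

```sql
SET TERMOUT OFF FEEDBACK OFF VERIFY OFF  -- silence screen output and messages
SPOOL /tmp/report.txt                    -- send query output to a file
SELECT username FROM dba_users;
SPOOL OFF
SET TERMOUT ON FEEDBACK ON VERIFY ON
```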
A tablespace has a table with 30 extents in it. Is this bad? Why or why
not.
Multiple extents in and of themselves aren't bad. However, if you also have
chained rows this can hurt performance.
How do you set up tablespaces during an Oracle installation?
You should always attempt to use the Optimal Flexible Architecture (OFA) standard
or another partitioning scheme to ensure proper separation of SYSTEM, ROLLBACK,
REDO LOG, DATA, TEMPORARY and INDEX segments.
You see multiple fragments in the SYSTEM tablespace, what should you
check first?
Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or
DEFAULT tablespace assignment by checking the DBA_USERS view.
What are some indications that you need to increase the
SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another
indication is steadily decreasing performance with all other tuning parameters
the same.
What is the guideline for sizing db_block_size and db_file_multiblock_read_count
for an application that does many full table scans?
Oracle almost always reads in 64 KB chunks, so the product of the two parameters
should equal 64 KB or a multiple of 64 KB.
When looking at v$sysstat you see that sorts (disk) is high. Is this bad
or good? If bad -How do you correct it?
If you get excessive disk sorts this is bad. This indicates you need to tune the
sort area parameters in the initialization files. The major sort parameter is the
SORT_AREA_SIZE parameter.
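As a sketch, the disk versus memory sort counts mentioned above can be compared with:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```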
When should you increase copy latches? What parameters control copy
latches?
When you get excessive contention for the copy latches as shown by the "redo
copy" latch hit ratio. You can increase copy latches via the initialization
parameter LOG_SIMULTANEOUS_COPIES to twice the number of CPUs on your
system.
Where can you get a list of all initialization parameters for your
instance? How about an indication if they are default settings or have
been changed?
You can look in the init.ora file for an indication of manually set parameters. For
all parameters, their value and whether or not the current value is the default
value, look in the v$parameter view.
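A sketch of the v$parameter query described above; the ISDEFAULT column shows whether each value is still the default:

```sql
SELECT name, value, isdefault
FROM   v$parameter
ORDER  BY name;
```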
Describe hit ratio as it pertains to the database buffers. What is the
difference between instantaneous and cumulative hit ratio and which
should be used for tuning?
The hit ratio is a measure of how many times the database was able to read a
value from the buffers versus how many times it had to read a data value
from the disks. A value greater than 80-90% is good; less could indicate
problems. If you simply take the ratio of the existing counters, this is a
cumulative value since the database started. If you do a comparison between
pairs of readings based on some arbitrary time span, this is the instantaneous
ratio for that time span. Generally speaking, an instantaneous reading gives more
valuable data since it tells you what your instance is doing over the period
actually measured.
Discuss row chaining, how does it happen? How can you reduce it? How
do you correct it?
Row chaining occurs when a VARCHAR2 value is updated and the length of the
new value is longer than the old value and will not fit in the remaining block
space. This results in the row chaining to another block. It can be reduced by
setting the storage parameters on the table (such as PCTFREE) to appropriate
values. It can be corrected by export and import of the affected table.
You are getting busy buffer waits. Is this bad? How can you find what is
causing it?
Buffer busy waits can indicate contention in redo, rollback or data blocks. You
need to check the v$waitstat view to see what areas are causing the problem.
The "class" column tells you what type of block is involved and the "count"
column how often it has been waited on. UNDO is rollback segments, DATA is
database buffers.
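The v$waitstat check above can be sketched as:

```sql
SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;
```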
If you see contention for library caches how you can fix it?
Increase the size of the shared pool.
If you see statistics that deal with "undo" what are they really talking
about?
Rollback segments and associated structures.
If a tablespace has a default pctincrease of zero what will this cause (in
relationship to the SMON process)?
The SMON process would not automatically coalesce its free space fragments.
If a tablespace shows excessive fragmentation what are some methods
to defragment the tablespace? (7.1,7.2 and 7.3 only)
In Oracle 7.0 to 7.2, the use of the command
alter session set events 'immediate trace name coalesce level ts#';
is the easiest way to defragment contiguous free space fragmentation. The ts#
parameter corresponds to the ts# value found in the ts$ SYS table. In version 7.3
the alter tablespace ... coalesce; command is best. If the
free space is not contiguous then export, drop and import of the tablespace
contents may be the only way to reclaim non-contiguous free space.
How can you tell if a tablespace has excessive fragmentation?
If a select against the dba_free_space view shows that the count of a tablespace's
free extents is greater than the count of its data files, then it is fragmented.
You see the following on a status report: redo log space requests 23
redo log space wait time 0 Is this something to worry about? What if
redo log space wait time is high? How can you fix this?
Since the wait time is zero, no problem. If the wait time was high it might
indicate a need for more or larger redo logs.
If you see a pin hit ratio of less than 0.8 in the estat library cache
report is this a problem? If so, how do you fix it?
This indicates that the shared pool may be too small. Increase the shared pool
size.
If you see the value for reloads is high in the estat library cache report
is this a matter for concern?
Yes, you should strive for zero reloads if possible. If you see excessive reloads
then increase the size of the shared pool.
You look at the dba_rollback_segs view and see that there is a large
number of shrinks and they are of relatively small size, is this a
problem? How can it be fixed if it is a problem?
A large number of small shrinks indicates a need to increase the size of the
rollback segment extents. Ideally you should have no shrinks or a small number
of large shrinks. To fix this just increase the size of the extents and adjust
optimal accordingly.
You look at the dba_rollback_segs view and see that you have a large
number of wraps is this a problem?
A large number of wraps indicates that your extent size for your rollback
segments are probably too small. Increase the size of your extents to reduce the
number of wraps. You can look at the average transaction size in the same view
to get the information on transaction size.
You see multiple extents in the Temporary Tablespace. Is this a
problem?
As long as they are all the same size this is not a problem. In fact, it can even
improve performance since Oracle would not have to create a new extent when
a user needs one.
How do you set up your tablespaces on installation?
The answer here should show an understanding of separation of redo and
rollback, data and indexes and isolation of SYSTEM tables from other tables. An
example would be to specify that at least 7 disks should be used for an Oracle
installation.
Disk Configuration:
SYSTEM tablespace on 1, Redo logs on 2 (mirrored redo logs), TEMPORARY
tablespace on 3, ROLLBACK tablespace on 4, DATA and INDEXES 5,6
They should indicate how they will handle archive logs and exports as well. As
long as they have a logical plan for combining or further separation, more or
fewer disks can be specified.
You have installed Oracle and you are now setting up the actual
instance. You have been waiting an hour for the initialization script to
finish, what should you check first to determine if there is a problem?
Check to make sure that the archiver is not stuck. If archive logging is turned on
during install a large number of logs will be created. This can fill up your archive
log destination causing Oracle to stop to wait for more space.
When configuring SQL*Net on the server, what files must be set up?
LISTENER.ORA file, TNSNAMES.ORA file, SQLNET.ORA file
When configuring SQL*Net on the client, what files need to be set up?
SQLNET.ORA, TNSNAMES.ORA
You have just started a new instance with a large SGA on a busy
existing server. Performance is terrible, what should you check for?
The first thing to check with a large SGA is that it is not being swapped out.
What OS user should be used for the first part of an Oracle installation
(on UNIX)?
You must use root first.
When should the default values for Oracle initialization parameters be
used as is?
Never
How many control files should you have? Where should they be
located?
At least 2 on separate disk spindles (Mirrored by Oracle).
How many redo logs should you have and how should they be
configured for maximum recoverability?
You should have at least 3 groups of two redo logs with the two logs each on a
separate disk spindle (mirrored by Oracle). The redo logs should not be on raw
devices on UNIX if it can be avoided.
Why are recursive relationships bad? How do you resolve them?
A recursive relationship occurs when a table relates to itself. It is
considered bad when it is a hard relationship (i.e. neither side is a "may", both
are "must"), as this can make it impossible to put in a top or perhaps a
bottom of the table. For example, in the EMPLOYEE table you could not put in the
PRESIDENT of the company because he has no boss, or the junior janitor
because he has no subordinates. These types of relationships are usually resolved
by adding a small intersection entity.
What does a hard one-to-one relationship mean (one where the
relationship on both ends is "must")?
This means the two entities should probably be made into one entity.
How should a many-to-many relationship be handled?
By adding an intersection entity table
What is an artificial (derived) primary key? When should an artificial (or
derived) primary key be used?
A derived key comes from a sequence. Usually it is used when a concatenated
key becomes too cumbersome to use as a foreign key.
When should you consider de-normalization?
When the performance of read-heavy queries that join many normalized tables
becomes unacceptable; de-normalization trades some redundancy and update
anomalies for fewer joins and faster retrieval.
How can you determine whether SQL*Net is running?
For SQL*Net V1 check for the existence of the orasrv process. You can use the
command "tcpctl status" to get a full status of the V1 TCP/IP server; other
protocols have similar command formats. For SQL*Net V2 check for the presence
of the LISTENER process(es), or you can issue the command "lsnrctl status".
What file will give you Oracle instance status information? Where is it
located?
The alert log (alert_<SID>.log). It is located in the directory specified by the
background_dump_dest parameter, visible in the v$parameter view.
Users are not being allowed on the system. The following message is
received: ORA-00257 archiver is stuck. Connect internal only, until
freed. What is the problem?
The archive destination is probably full. Back up the archive logs and remove
them, and the archiver will restart.
Where would you look to find out if a redo log was corrupted assuming
you are using Oracle mirrored redo logs?
There is no message that comes to the SQLDBA or SRVMGR programs during
startup in this situation; you must check the alert log file for this information.
You attempt to add a datafile and get: ORA-01118: cannot add anymore
datafiles: limit of 40 exceeded. What is the problem and how can you
fix it?
When the database was created, the db_files parameter in the initialization file
was set to 40. You can shut down and reset this to a higher value, up to the value
of MAXDATAFILES as specified at database creation. If MAXDATAFILES is set
too low, you will have to rebuild the control file to increase it before proceeding.
You look at your fragmentation report and see that SMON has not
coalesced any of your tablespaces, even though you know several have
large chunks of contiguous free extents. What is the problem?
Check the dba_tablespaces view for the value of pct_increase for the
tablespaces. If pct_increase is zero, smon will not coalesce their free space.
Your users get the following error: ORA-00055 maximum number of
DML locks exceeded? What is the problem and how do you fix it?
The number of DML locks is set by the initialization parameter DML_LOCKS. If
this value is set too low (which it is by default) you will get this error. Increase
the value of DML_LOCKS. If you are sure that this is just a temporary problem, you
can have the users wait and try again later, and the error should clear.
You get a call from you backup DBA while you are on vacation. He has
corrupted all of the control files while playing with the ALTER DATABASE
BACKUP CONTROLFILE command. What do you do?
As long as all datafiles are safe and he was successful with the BACKUP
CONTROLFILE command, you can do the following:
CONNECT INTERNAL
STARTUP MOUNT
(Take any read-only tablespaces offline before the next step:
ALTER DATABASE DATAFILE .... OFFLINE;)
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
(Bring the read-only tablespaces back online.)
Shut down and back up the system, then restart. If they have a recent output file
from the ALTER DATABASE BACKUP CONTROLFILE TO TRACE; command, they
can use that to recover as well.
If no backup of the control file is available, then the following will be required:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE .....;
However, they will need to know all of the datafiles, logfiles, and settings for
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the
database to use the command.
You have taken a manual backup of a datafile using OS. How RMAN will
know about it?
Whenever we take a backup through RMAN, information about the backup is
recorded in the RMAN repository, which can be either the controlfile or a recovery
catalog. However, if you take a backup through an OS command, RMAN is not
aware of it and the backup is not reflected in the repository. This is also true
whenever we create a new controlfile, or when a backup taken by RMAN is
transferred to another place using OS commands: the controlfile/recovery catalog
does not know about the prior backups of the database.
So, in order to restore the database with a newly created controlfile, we need to
inform RMAN about the backups taken before, so that it can pick one to restore.
This is done with the CATALOG command in RMAN, which adds information about
backup pieces and image copies that are on disk to the repository.
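A sketch of the CATALOG command described above (the paths are hypothetical; CATALOG ... START WITH requires 10g or later):

```sql
RMAN> CATALOG DATAFILECOPY '/backups/users01.dbf';
RMAN> CATALOG BACKUPPIECE  '/backups/db_full_01.bkp';
RMAN> CATALOG START WITH   '/backups/';  -- catalog everything under a directory (10g+)
```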
How do you check the space used and available in a temporary (sort) tablespace?
SELECT A.tablespace_name,
D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts# = C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP BY A.tablespace_name, D.mb_total;
The above query displays, for each sort segment in the database, the tablespace
the segment resides in, the size of the tablespace, the amount of space within the
sort segment that is currently in use, and the amount of space available.
How frequently is the redo log written?
LGWR writes the redo log buffer to disk whenever a commit occurs, at a
checkpoint, when the redo log buffer is one-third full, when a timeout occurs
(every 3 seconds), or when 1 MB of redo has accumulated in the buffer.
What are the possibilities of logical backup (Export/Import)?
- We can export from one user and import into another within the same database.
- We can export from one database and import into another database (but both
source and destination databases must be Oracle databases).
- When migrating from one platform to another, such as from Windows to Sun
Solaris, export is the only method to transfer the data.
What is stored in Oratab file
"oratab" is a file created by Oracle in the /etc or /var/opt/oracle directory when
installing database software. Originally ORATAB was used for SQL*Net V1, but lately
it is being used to list the databases and software versions installed on a server.
database_sid:oracle_home_dir:Y|N
The Y|N flags indicate if the instance should automatically start at boot time (Y=yes,
N=no).
Besides acting as a registry for what databases and software versions are installed
on the server, ORATAB is also used for the following purposes:
Oracle's "dbstart" and "dbshut" scripts use this file to figure out which instances
are to be started up or shut down (using the third field, Y or N).
The "oraenv" utility uses ORATAB to set the correct environment variables.
One can also write Unix shell scripts that cycle through multiple instances using
the information in the oratab file.
In your database some blocks of a particular datafile are corrupted. What
statement will you issue to find out how many blocks are corrupted?
You can check the V$DATABASE_BLOCK_CORRUPTION view (populated by RMAN
backup and validation operations) to determine the corrupted blocks:
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
What is a flash back query? This feature is also available in 9i. What are
the difference between 9i and 10g (related to flash back query).
Oracle 9i flashback vs 10g enhancements:
Oracle 9i: Flashback Query
10g enhancements to it: Flashback Version Query, Flashback Transaction Query
10g new features: Flashback Table, Flashback Database
Setup required for the new features: Automatic Undo Management (AUM) and a
Flash Recovery Area
Describe the use of %ROWTYPE and %TYPE in PL/SQL
%ROWTYPE allows you to associate a variable with an entire table row. The %TYPE
associates a variable with a single column type.
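A minimal PL/SQL sketch of both attributes, assuming the standard SCOTT.EMP demo table:

```sql
DECLARE
  v_row  scott.emp%ROWTYPE;     -- variable shaped like an entire EMP row
  v_name scott.emp.ename%TYPE;  -- variable with the type of one column
BEGIN
  SELECT * INTO v_row FROM scott.emp WHERE ROWNUM = 1;
  v_name := v_row.ename;        -- field access on the row variable
  DBMS_OUTPUT.PUT_LINE(v_name);
END;
/
```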
How can the problem be resolved if a SYSDBA, forgets his password for
logging into enterprise manager?
There are three ways to do that:
1. Login as SYSTEM and change the SYS password by using ALTER USER.
2. Recreate the password file using orapwd, set remote_login_passwordfile to
EXCLUSIVE and then restart the instance.
3. Connect with / AS SYSDBA (OS authentication) and then change the password:
ALTER USER sys IDENTIFIED BY xxx;
How many maximum number of columns can be part of primary key in a
table in 9i and 10g.
A primary key in a single table can include up to 16 columns in Oracle 9i and
10g.
What is RAC?
RAC stands for Real Application Clusters. In previous versions it was known as
PARALLEL SERVER. RAC is a mechanism that allows multiple instances (on different
hosts/nodes) to access the same database. The benefits: it provides more memory
resources, since more hosts are being used, and if one host goes down, another
host assumes its workload.
What is Data Pumping?
Data Pump is a data movement utility, introduced as a replacement for the
imp/exp utilities. The earlier imp/exp utilities are also data movement utilities,
but they work within the local server only. The impdp/expdp (Data Pump) utilities
are very fast and perform data movement from one database to another on the
same host as well as on different hosts. In other words, they provide secure
transports.
What is Data Migration?
Data migration is actually the translation of data from one format to another format
or from one storage device to another storage device. Data migration is necessary
when a company upgrades its database or system software, either from one version
to another or from one program to an entirely different program.
What is difference between spfile and init.ora file
Both the init.ora file (PFILE) and the SPFILE contain database parameter settings,
and both are supported by Oracle. Every database instance requires one of the
two; if both are present, first choice is given to the SPFILE. init.ora is saved in
ASCII format whereas the SPFILE is saved in binary format. init.ora information is
read by the Oracle engine only at instance startup, which means any modification
made to it is applicable only at the next startup. But with an SPFILE,
modifications (through the ALTER SYSTEM ..... command) can be applied without
restarting the Oracle database (without restarting the instance).
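A sketch of how ALTER SYSTEM scopes interact with the SPFILE (the parameter values are illustrative):

```sql
-- Static parameter: can only be written to the SPFILE, applied at next startup
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;

-- Dynamic parameter: change in memory and persist it in the SPFILE at once
ALTER SYSTEM SET open_cursors = 500 SCOPE = BOTH;
```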
What is SCN? Where the SCN does resides?
SCN - System Change Number - is a counter that is always incremented by the
Oracle server and is used to ensure consistency across the database. The system
change number is recorded in the control files, in every datafile header, and in
each redo record.
If on Monday we take a full backup, on Tuesday a cumulative backup and on
Wednesday an incremental backup, and on Thursday some disaster happens,
what type of recovery is needed and how will it proceed?
Restore the Monday full (level 0) backup, apply the Tuesday cumulative backup
and then the Wednesday incremental backup, and finally apply the archived redo
logs generated since Wednesday's backup to roll forward to the point of failure.
What is the difference between a locally managed tablespace and a dictionary
managed tablespace?
The basic difference is that in a dictionary managed tablespace, every time an
extent is allocated or deallocated the data dictionary is updated, which increases
the load on the data dictionary. In a locally managed tablespace the space
information is kept inside the datafile in the form of bitmaps; every time an
extent is allocated or deallocated only the bitmap is updated, which removes the
burden from the data dictionary.
Tablespaces that record extent allocation/deallocation in the dictionary are
called dictionary managed tablespaces, and tablespaces that record extent
allocation in the tablespace header are called locally managed tablespaces.
While installing Oracle 9i (9.2), the system automatically takes
approximately 4 GB of space. That is fine. Now, if my database is
growing and it is reaching 4 GB of database space, and I would like
to extend my database space to 20 GB or 25 GB, what are the things
I have to do?
The following steps can be performed:
1. First check for available space on the server.
2. Increase the size of the datafiles if you have space available on the
server; you can also turn autoextend on, so that in future you don't need to
increase the size manually.
A better alternative is to leave autoextend off and add more datafiles to the
tablespace, since making a single datafile very large is risky. With autoextend
off you can monitor the growth of the tablespace; schedule a growth-monitoring
script with a threshold of 85% full.
What are common causes of database performance problems, and how do you
detect them?
CPU bottlenecks
Undersized memory structures
Inefficient or high-load SQL statements
Database configuration issues
Four major steps to detect these issues: Analyzing Optimizer Statistics
Analyzing an Execution Plan
Using Hints to Improve Data Warehouse Performance
Using Advisors to Verify SQL Performance
Analyzing Optimizer Statistics
Optimizer statistics are a collection of data that describe details about the
database and the objects in the database. The optimizer statistics are stored in
the data dictionary. They can be viewed using data dictionary views, for example:
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
Because the objects in a database can constantly change, statistics must be
regularly updated so that they accurately describe these database objects.
Statistics are maintained automatically by Oracle Database or you can maintain the
optimizer statistics manually using the DBMS_STATS package.
Analyzing an Execution Plan
General guidelines for using the EXPLAIN PLAN statement are:
Use the SQL script UTLXPLAN.SQL to create a sample output table called
PLAN_TABLE in your schema.
Include the EXPLAIN PLAN FOR clause before the SQL statement.
After issuing the EXPLAIN PLAN statement, use one of the scripts or packages
provided by Oracle Database to display the most recent plan table output.
The execution order in EXPLAIN PLAN output begins with the line that is indented
farthest to the right. If two lines are indented equally then the top line is normally
executed first.
What is a backup set?
Backup sets, which are only created and accessed through RMAN, are the only
form in which RMAN can write backups to media managers such as tape drives
and tape libraries.
A backup set contains one or more binary files in an RMAN-specific format. This file
is known as a backup piece. A backup set can contain multiple datafiles. For
example, you can back up ten datafiles into a single backup set consisting of a
single backup piece. In this case, RMAN creates one backup piece as output. The
backup set contains only this backup piece.
What is an UTL_FILE? What are different procedures and functions
associated with it?
The UTL_FILE package lets your PL/SQL programs read and write operating system
(OS) text files. It provides a restricted version of standard OS stream file
input/output (I/O).
Subprogram - Description
FOPEN function - Opens a file for input or output with the default line size.
IS_OPEN function - Determines if a file handle refers to an open file.
FCLOSE procedure - Closes a file.
FCLOSE_ALL procedure - Closes all open file handles.
GET_LINE procedure - Reads a line of text from an open file.
PUT procedure - Writes a line to a file. This does not append a line terminator.
NEW_LINE procedure - Writes one or more OS-specific line terminators to a file.
PUT_LINE procedure - Writes a line to a file. This appends an OS-specific line
terminator.
PUTF procedure - A PUT procedure with formatting.
FFLUSH procedure - Physically writes all pending output to a file.
FOPEN function - Opens a file with the maximum line size specified.
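A minimal PL/SQL sketch of writing with UTL_FILE (the directory object name MY_DIR and the file name are assumptions; the directory must already exist and be granted to the user):

```sql
DECLARE
  fh UTL_FILE.FILE_TYPE;
BEGIN
  -- Open a file for writing via the hypothetical directory object MY_DIR
  fh := UTL_FILE.FOPEN('MY_DIR', 'demo.txt', 'W');
  UTL_FILE.PUT_LINE(fh, 'Hello from UTL_FILE');  -- appends an OS line terminator
  UTL_FILE.FFLUSH(fh);                           -- force pending output to disk
  UTL_FILE.FCLOSE(fh);
END;
/
```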
Differentiate between TRUNCATE and DELETE?
The DELETE command logs the data changes (undo and redo), whereas TRUNCATE
simply removes the data without logging the individual rows. Hence data removed
by DELETE can be rolled back, but not data removed by TRUNCATE. TRUNCATE is a
DDL statement whereas DELETE is a DML statement.
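The difference can be seen with a hypothetical table T:

```sql
-- DELETE is DML: it generates undo and can be rolled back
DELETE FROM t WHERE status = 'OLD';
ROLLBACK;          -- the deleted rows come back

-- TRUNCATE is DDL: it implicitly commits and cannot be rolled back
TRUNCATE TABLE t;
```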
What is an Oracle Instance?
An instance is a combination of memory structures and process structures. The
memory structure is the SGA (System or Shared Global Area) and the process
structure is the set of background processes.
Components of SGA:
Database Buffer Cache
Redo Log Buffer
Shared Pool: further divided into the Library Cache and the Data Dictionary
Cache (Row Cache)
Large pool/streams pool/Java pool
Cache recovery: Changes being made to a database are recorded in the database
buffer cache and, simultaneously, in the online redo log files. When enough data
has accumulated in the database buffer cache, it is written to the data files. If an
Oracle instance fails before the data in the database buffer cache has been written
to the data files, Oracle uses the data recorded in the online redo log files to
recover the lost data when the associated database is restarted. This process is
called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before
image of the modified data is stored in an undo segment. The data stored in the
undo segment is used to restore the original values in case a transaction is rolled
back. At the time of an instance failure, the database may have uncommitted
transactions, and changes made by these uncommitted transactions may already
have been written to the data files. To maintain read consistency, Oracle rolls back
all uncommitted transactions when the associated database is restarted, using the
undo data stored in undo segments. This process is called transaction recovery.
Instance recovery thus consists of two phases:
1. Rolling forward the committed transactions
2. Rolling back the uncommitted transactions
What is written in Redo Log Files?
Log Writer (LGWR) writes the redo log buffer contents into the redo log files. LGWR
does this every three seconds, when the redo log buffer is one-third full, when a
transaction commits, and immediately before the Database Writer (DBWn) writes its
changed buffers into the data files.
How do you control number of Datafiles one can have in an Oracle
database?
When starting an Oracle instance, the database's parameter file indicates the
amount of SGA space to reserve for datafile information; the maximum number of
datafiles is controlled by the DB_FILES parameter. This limit applies only for the life
of the instance.
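For example (the parameter value 400 is illustrative; changing DB_FILES only takes effect after an instance restart):

```sql
-- Check the current limit
SHOW PARAMETER db_files

-- Raise it in the spfile; takes effect after the instance is restarted
ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;
```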
How many Maximum Datafiles can there be in an Oracle Database?
The default maximum is 255 datafiles, defined by the MAXDATAFILES setting in the
control file at the time of database creation.
It can be increased by specifying a higher value at database creation time, but
setting it too high can cause DBWR issues.
Before 9i the maximum number of datafiles in a database was 1022; from 9i onward
that limit applies to the number of datafiles per tablespace.
What is a Tablespace?
What is Ora-01555 - Snapshot Too Old error and how do you avoid it?
1. Increase the size of the rollback segment (which you have already done).
2. Process a range of data rather than the whole table.
3. Add a big rollback segment and assign your transaction to it.
4. The RBS may also shrink during the life of the query if OPTIMAL is set; consider
unsetting it.
5. Avoid frequent commits.
6. Investigate other possible causes.
What is a locally Managed Tablespace?
A Locally Managed Tablespace is a tablespace that manages its own extents by
maintaining a bitmap in each data file to keep track of the free or used status of
blocks in that data file. Each bit in the bitmap corresponds to a block or a group of
blocks. When the extents are allocated or freed for reuse, Oracle changes the
bitmap values to show the new status of the blocks. These changes do not generate
rollback information because they do not update tables in the data dictionary
(except for tablespace quota information), unlike the default method of Dictionary Managed Tablespaces.
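A minimal example of creating a locally managed tablespace (the tablespace name, file path, and size are illustrative):

```sql
CREATE TABLESPACE lmt_demo
  DATAFILE '/u01/oradata/orcl/lmt_demo01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
```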
Following are the major advantages of locally managed tablespaces
necessary. When statistics are updated for a database object, Oracle invalidates any
currently parsed SQL statements that access the object. The next time such a
statement executes, the statement is re-parsed and the optimizer automatically
chooses a new execution plan based on the new statistics.
Collect Statistics on Table Level
sqlplus scott/tiger
exec dbms_stats.gather_table_stats( -
  ownname          => 'SCOTT', -
  tabname          => 'EMP', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt       => 'for all columns size auto', -
  cascade          => TRUE, -
  degree           => 5)
Collect Statistics on Schema Level
sqlplus scott/tiger
exec dbms_stats.gather_schema_stats( -
  ownname          => 'SCOTT', -
  options          => 'GATHER', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt       => 'for all columns size auto', -
  cascade          => TRUE, -
  degree           => 5)
Yes, statistics gathering can be scheduled, but in some situations automatic
statistics gathering may not be adequate, in particular for databases whose objects
are modified frequently. Because the automatic statistics gathering runs during an
overnight batch window, the statistics on tables which are significantly modified
during the day may become stale.
There may be two scenarios in this case:
Volatile tables that are being deleted or truncated and rebuilt during the course of
the day.
Objects which are the target of large bulk loads which add 10% or more to the
objects total size.
So you may wish to manually gather statistics on those objects so that the
optimizer can choose the best execution plan. There are two ways to gather statistics:
Using DBMS_STATS package.
Using ANALYZE command
How can you use ANALYZE statement to collect statistics?
ANALYZE TABLE emp ESTIMATE STATISTICS FOR ALL COLUMNS;
ANALYZE INDEX inv_product_ix VALIDATE STRUCTURE;
ANALYZE TABLE customers VALIDATE REF UPDATE;
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
ANALYZE TABLE customers VALIDATE STRUCTURE ONLINE;
To delete statistics:
ANALYZE TABLE orders DELETE STATISTICS;
To get the analyze details:
SELECT owner_name, table_name, head_rowid, analyze_timestamp FROM
chained_rows;
On which columns you should create Indexes?
The following list gives guidelines in choosing columns to index:
You should create indexes on columns that are used frequently in WHERE clauses.
You should create indexes on columns that are used frequently to join tables.
You should create indexes on columns that are used frequently in ORDER BY
clauses.
You should create indexes on columns that have few duplicate values, or unique
values, in the table.
You should not create indexes on small tables (tables that use only a few blocks)
because a full table scan may be faster than an indexed query.
If possible, choose a primary key that orders the rows in the most appropriate order.
If only one column of the concatenated index is used frequently in WHERE clauses,
place that column first in the CREATE INDEX statement.
If more than one column in a concatenated index is used frequently in WHERE
clauses, place the most selective column first in the CREATE INDEX statement.
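For instance, following the last guideline (the table and column names are hypothetical, with customer_id assumed to be the most selective column):

```sql
-- customer_id is the most selective column, so it leads the concatenated index
CREATE INDEX ord_cust_status_ix
  ON orders (customer_id, status);
```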
What type of Indexes is available in Oracle?
B-tree indexes: the default and the most common.
B-tree cluster indexes: defined specifically for cluster.
Hash cluster indexes: defined specifically for a hash cluster.
Global and local indexes: relate to partitioned tables and indexes.
Reverse key indexes: most useful for Oracle Real Application Clusters.
Bitmap indexes: compact; work best for columns with a small set of values
Function-based indexes: contain the pre-computed value of a function/expression
Domain indexes: specific to an application or cartridge.
What is B-Tree Index?
B-Tree is an indexing technique most commonly used in databases and file systems,
in which pointers to data are placed in a balanced tree structure so that all
references to any data can be accessed in roughly equal time. It is a tree data
structure that keeps data sorted, so that searching, inserting, and deleting can be
done in logarithmic amortized time.
A table is having few rows, should you create indexes on this table?
You should not create indexes on small tables (tables that use only a few blocks)
because a full table scan may be faster than an indexed query.
A Column is having many repeated values which type of index you should create on
this column
A B-Tree index is suitable if the indexed columns have high cardinality (few
repeated values). For a column with many repeated values (low cardinality) a
bitmap index is very useful, but bitmap indexes are expensive to maintain on
tables with heavy DML activity.
When should you rebuild indexes?
There is no rule of thumb for when you should rebuild an index. According to
experts it depends upon your database situation:
When the data in the index is sparse (lots of holes due to deletes or updates) and
your queries are usually range based, or if BLEVEL > 3 (see DBA_INDEXES), then
consider the index for a rebuild.
Bear in mind that rebuilding indexes consumes resources, so doing it needlessly
can hurt database performance.
In fact a B-tree index can never become unbalanced. B-tree performance is good
for both small and large tables and does not degrade with the growth of the table.
Can you build indexes online?
Yes, we can build an index online, which allows DML operations on the base table
during index creation. You can use the statements:
CREATE INDEX ... ONLINE and DROP INDEX ... ONLINE.
ALTER INDEX ... REBUILD ONLINE is used to rebuild the index online.
A table lock is required on the base table at the start of the CREATE or REBUILD
process, and another lock is required at the end of the process to merge the
changes into the final index structure.
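A short illustration (the index and table names are hypothetical):

```sql
-- Build the index while DML continues on emp
CREATE INDEX emp_ename_ix ON emp (ename) ONLINE;

-- Later, rebuild it online as well
ALTER INDEX emp_ename_ix REBUILD ONLINE;
```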
A table is created with the following setting
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)
What will be size of 4th extent?
Percent Increase allows the segment to grow at an increasing rate.
The first two extents will be of a size determined by the Initial and Next parameter
(200k)
The third extent will be 1 + PCTINCREASE/100 times the second extent
(1.4*200=280k).
The 4th extent will be 1 + PCTINCREASE/100 times the third extent
(1.4*280=392k), and so on...
Can you Redefine a table Online?
Yes. We can perform online table redefinition with the Enterprise Manager
Reorganize Objects wizard or with the DBMS_REDEFINITION package.
It provides a mechanism to modify the table structure without significantly
affecting the availability of the table. While a table is being redefined online, it is
accessible to both queries and DML during the redefinition process.
Purpose for Table Redefinition
Add, remove, or rename columns from a table
Converting a non-partitioned table to a partitioned table and vice versa
Switching a heap table to an index organized and vice versa
Modifying storage parameters
Adding or removing parallel support
Reorganize (defragmenting) a table
Transform data in a table
Restrictions for Table Redefinition:
One cannot redefine Materialized Views (MViews) and tables with MViews or MView
Logs defined on them.
One cannot redefine Temporary and Clustered Tables
One cannot redefine tables with BFILE, LONG or LONG RAW columns
One cannot redefine tables belonging to SYS or SYSTEM
One cannot redefine Object tables
Table redefinition cannot be done in NOLOGGING mode (watch out for heavy
archiving)
Cannot be used to add or remove rows from a table
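The basic flow with the DBMS_REDEFINITION package looks roughly like this (the schema, table, and interim-table names are hypothetical; the interim table with the desired new structure must be created beforehand):

```sql
-- 1. Verify the table can be redefined online
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');

-- 2. Start redefinition, copying rows into the interim table EMP_INT
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');

-- 3. Optionally resynchronize changes made during the copy
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'EMP', 'EMP_INT');

-- 4. Complete the redefinition; EMP and EMP_INT swap definitions
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');
```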
Can you assign Priority to users?
Yes, we can do this through the Resource Manager. The Database Resource Manager
gives database administrators more control over resource management decisions,
so that resource allocation can be aligned with an enterprise's business objectives.
With Oracle database Resource Manager an administrator can:
Guarantee certain users a minimum amount of processing resources regardless of
the load on the system and the number of users
Distribute available processing resources by allocating percentages of CPU time to
different users and applications.
Limit the degree of parallelism of any operation performed by members of a group
of users
Create an active session pool. This pool consists of a specified maximum number of
user sessions allowed to be concurrently active within a group of users. Additional
sessions beyond the maximum are queued for execution, but you can specify a
timeout period, after which queued jobs terminate.
Allow automatic switching of users from one group to another group based on
administrator-defined criteria. If a member of a particular group of users creates a
session that runs for longer than a specified amount of time, that session can be
automatically switched to another group of users with different resource
requirements.
Prevent the execution of operations that are estimated to run for a longer time than
a predefined limit
Create an undo pool. This pool consists of the amount of undo space that can be
consumed by a group of users.
Configure an instance to use a particular method of allocating resources. You can
dynamically change the method, for example, from a daytime setup to a nighttime
setup, without having to shut down and restart the instance.
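A rough sketch of creating a simple plan with the DBMS_RESOURCE_MANAGER package (the group and plan names are hypothetical, and the exact directive parameters vary by release; a directive for OTHER_GROUPS is mandatory for a valid plan):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OLTP_GRP', comment => 'online users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'DAYTIME_PLAN', comment => 'favour OLTP during the day');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OLTP_GRP',
    comment => '80% of CPU at level 1', mgmt_p1 => 80);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everyone else', mgmt_p1 => 20);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The plan is then activated by setting the RESOURCE_MANAGER_PLAN initialization parameter.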
DELETE SHAAN
WHERE rowid IN
( SELECT LEAD(rowid) OVER
(PARTITION BY EMPLOYEE_ID ORDER BY NULL)
FROM SHAAN );
Method4:
delete from SHAAN where rowid not in
( select min(rowid)
from SHAAN group by EMPLOYEE_ID);
Method5:
SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;
What is Automatic Management of Segment Space setting?
Automatic Segment Space Management (ASSM), introduced in Oracle9i, is an easier
way of managing space in a segment using bitmaps. It frees the DBA from setting
the parameters PCTUSED, FREELISTS, and FREELIST GROUPS.
ASSM can be specified only with the locally managed tablespaces (LMT). The
CREATE TABLESPACE statement has a new clause SEGMENT SPACE MANAGEMENT.
Oracle uses bitmaps to manage the free space. A bitmap, in this case, is a map that
describes the status of each data block within a segment with respect to the
amount of space in the block available for inserting rows. As more or less space
becomes available in a data block, its new state is reflected in the bitmap.
CREATE TABLESPACE myts DATAFILE '/oradata/mysid/myts01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M
SEGMENT SPACE MANAGEMENT AUTO;
Your database is always enabled to allow dedicated server processes, but you must
specifically configure and enable shared server by setting one or more initialization
parameters.
Can you import objects from Oracle ver. 7.3 to 9i?
Yes. Import is upward compatible, so an export taken with a lower-version export
utility (such as 7.3) can generally be imported into a higher-version (9i) database.
The reverse, importing a higher-version export into a lower-version database,
requires taking the export with the lower version's export utility.
How do you move tables from one tablespace to another tablespace?
Method 1:
Export the table, drop the table, create the table definition in the new tablespace,
and then import the data (imp ignore=y).
Method 2:
Create a new table in the new tablespace with the "CREATE TABLE x AS SELECT *
from y" command:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM
real_table;
Then drop the original table and rename the temporary table as the original:
DROP TABLE real_table;
RENAME temp_name TO real_table;
Note: After step #1 or #2 is done, be sure to recompile any procedures that may
have been
invalidated by dropping the table. Prefer method #1, but #2 is easier if there are no
indexes, constraints, or triggers. If there are, you must manually recreate them.
Method 3:
If you are using Oracle 8i or above then simply use:
SQL>Alter table table_name move tablespace tablespace_name;
How do you see how much space is used and free in a tablespace?
SELECT * FROM SM$TS_FREE;
SELECT TABLESPACE_NAME, SUM(BYTES) FROM DBA_FREE_SPACE GROUP BY
TABLESPACE_NAME;
Can view be the based on other view?
Yes, a view can be created on top of another view by having its defining query
select from the other view.
What happens if you do not specify a dictionary option with the start option
in LogMiner?
It is recommended that you specify a dictionary option. If you do not, LogMiner
cannot translate internal object identifiers and datatypes to object names and
external data formats. Therefore, it would return internal object IDs and present
data as hex bytes. Additionally, the MINE_VALUE and COLUMN_PRESENT functions
cannot be used without a dictionary.
What is the Benefit and draw back of Continuous Mining?
The continuous mining option is useful if you are mining in the same instance that is
generating the redo logs. When you plan to use the continuous mining option, you
only need to specify one archived redo log before starting LogMiner. Then, when you
start LogMiner, specify the DBMS_LOGMNR.CONTINUOUS_MINE option, which directs
LogMiner to automatically add and mine subsequent archived redo logs as well as
the online redo logs.
Continuous mining is not available in Real Application Clusters.
What is LogMiner and its Benefit?
LogMiner is a log analysis and recovery utility. You can use it to recover data from
Oracle redo log and archived log files. The Oracle LogMiner utility enables you to
query redo logs through a SQL interface. Redo logs contain information about the
history of activity on a database.
Benefits of LogMiner:
1. Pinpointing when a logical corruption occurred in a database; for example, when
a row is accidentally deleted, LogMiner helps to recover the database with exact
time-based or change-based recovery.
2. Performing table-specific undo operations to return a table to its original state.
LogMiner reconstructs the SQL statements in reverse order from that in which they
were executed.
3. Helping with performance tuning and capacity planning. You can determine which
tables get the most updates and inserts. That information provides a historical
perspective on disk access statistics, which can be used for tuning purposes.
4. Performing post-auditing; LogMiner can be used to track any DML and DDL
performed on the database in the order they were executed.
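A minimal LogMiner session might look like this (the archived log file name is illustrative; this sketch assumes the online catalog is used as the dictionary):

```sql
BEGIN
  -- Register an archived log to mine (path is hypothetical)
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/arch_100.arc',
    options     => DBMS_LOGMNR.NEW);
  -- Start mining, using the online catalog as the dictionary
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Inspect the mined statements
SELECT scn, sql_redo, sql_undo FROM v$logmnr_contents;

-- End the session
EXEC DBMS_LOGMNR.END_LOGMNR;
```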
What is Oracle DataGuard?
Oracle Data Guard is a tool that provides data protection and ensures disaster
recovery for enterprise data. It provides a comprehensive set of services that
create, maintain, manage, and monitor one or more standby databases to enable
production Oracle databases to survive disasters and data corruption. Data Guard
maintains these standby databases as transactionally consistent copies of the
production database. Then, if the production database fails, Data Guard can switch
any standby database to the production role, minimizing the downtime associated
with the outage. Data Guard can be used with traditional backup, restoration, and
cluster techniques to provide a high level of data protection and data availability.
What is a Standby Database?
A standby database is a transactionally consistent copy of the primary database.
Using a backup copy of the primary database, you can create up to 9 standby
databases and incorporate them in a Data Guard configuration. Once created, Data
Guard automatically maintains each standby database by transmitting redo data
from the primary database and then applying the redo to the standby database.
Similar to a primary database, a standby database can be either a single-instance
Oracle database or an Oracle Real Application Clusters database. A standby
database can be either a physical standby database or a logical standby database:
Difference between Physical standby and Logical standby databases:
A physical standby database provides a physically identical copy of the primary
database on a block-for-block basis. The database schema, including indexes, is
the same. A physical standby database is kept synchronized with the primary
database through Redo Apply, which recovers the redo data received from the
primary database and applies it to the physical standby database.
A logical standby database contains the same logical information as the production
database, although the physical organization and structure of the data can be
different. The logical standby database is kept synchronized with the primary
database through SQL Apply, which transforms the data in the redo received from
the primary database into SQL statements and then executes the SQL statements
on the standby database.
If you are going to set up a standby database, what will be your choice:
Logical or Physical?
We need to keep a physical standby database in recovery mode in order to apply
the archived logs received from the primary database. We can open a physical
standby database read-only and make it available to application users (only
SELECT is allowed during this period), but while the database is open in read-only
mode we cannot apply redo logs received from the primary database.
We do not see such issues with a logical standby database. We can open the
database in normal mode and make it available to the users, and at the same time
apply archived logs received from the primary database.
If the primary database needs to support a fairly large user community for the
OLTP system as well as a fairly large reporting group, it is better to choose a
logical standby rather than a physical standby.
What are the requirements needed before preparing standby database?
The OS and hardware architecture of the primary and standby sites must be the
same, though the same OS version and release is not required.
The Oracle version of the standby database must be the same as that of the
primary database.
The primary database must run in ARCHIVELOG mode.
Each primary and standby database must have its own control file.
What are Failover and Switchover in case of dataguard?
Failover is the operation of bringing one of the standby databases online as the
new primary database when a failure occurs on the primary database and there is
no possibility of recovering the primary database in a timely manner. A switchover
handles planned maintenance on the primary database. The main difference
between a switchover and a failover is that a switchover is performed while the
primary database is still available, and it does not require a flashback or
re-instantiation of the original primary database. This allows the original primary
database to assume the role of standby database almost immediately. As a result,
scheduled maintenance can be performed more easily and frequently.
When you use WHERE clause and when you use HAVING clause?
The HAVING clause is used when you want to specify a condition on a group
function; it is written after the GROUP BY clause. The WHERE clause is used when
you want to specify a condition on columns or single-row functions (but not group
functions); it is written before the GROUP BY clause, if one is used.
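As a quick illustration of the rule above (using the standard EMP demo table):

```sql
-- WHERE filters individual rows before grouping;
-- HAVING filters the groups after aggregation
SELECT   deptno, AVG(sal)
FROM     emp
WHERE    job <> 'PRESIDENT'     -- row-level condition
GROUP BY deptno
HAVING   AVG(sal) > 2000;       -- group-level condition
```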
What is a cursor and difference between an implicit & an explicit cursor?
A cursor is a pointer to a private SQL work area, used to process the rows returned
by a query in a PL/SQL block. PL/SQL declares a cursor implicitly for all SQL data
manipulation statements, including queries that return only one row. However, for
queries that return more than one row you must declare an explicit cursor or use a
cursor FOR loop.
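A minimal sketch of an explicit cursor (assuming the standard EMP table and SERVEROUTPUT enabled):

```sql
DECLARE
  CURSOR c_emp IS SELECT ename, sal FROM emp;  -- explicit cursor
  v_row c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_row;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_row.ename || ': ' || v_row.sal);
  END LOOP;
  CLOSE c_emp;
END;
/
```

The same loop can be written more compactly as a cursor FOR loop (FOR r IN (SELECT ename, sal FROM emp) LOOP ... END LOOP;), which opens, fetches and closes the cursor implicitly.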
Restore and recover a subset of the database as a DUMMY database to export the
table data and import it into the primary database. This is the best option as only
the dropped table goes back in time to before the drop.
How to find running jobs in oracle database
select sid, job,instance from dba_jobs_running;
select sid, serial#, machine, status, osuser, username from v$session where
username is not null; --all active users
select owner, job_name from DBA_SCHEDULER_RUNNING_JOBS; --for oracle 10g
How to find long running jobs in oracle database
select username, to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
time_remaining remaining, message from v$session_longops
where time_remaining > 0 order by time_remaining desc;
Login without password knowledge
This is not the genuine approach consider it as a practice.
SQL> CONNECT / as sysdba
Connected.
SQL> SELECT password FROM dba_users WHERE username='SCOTT';
PASSWORD
------------------------------
F894844C34402B67
SQL> ALTER USER scott IDENTIFIED BY anything;
User altered.
SQL> CONNECT scott/anything
Connected.
OK, we're in. Let's quickly change the password back before anybody notices.
SQL> ALTER USER scott IDENTIFIED BY VALUES 'F894844C34402B67';
User altered.
While applying the CPU Patch why we need to update the Oracle
Inventory?
Because when you apply the CPU it updates the oracle binaries.
How do you remove an SPFILE parameter (not change the value of, but
actually purge it outright)?
Use "ALTER SYSTEM RESET ..." (For database versions 9i and up)
Syntax:
ALTER SYSTEM RESET parameter_name SCOPE=SPFILE SID='sid|*';
ALTER SYSTEM RESET "_TRACE_FILES_PUBLIC" SCOPE=SPFILE SID='*';
NOTE: The "SID='SID|*'" argument is REQUIRED!
Can you use RMAN to recover RMAN?
Yes, you can!
In which situation is the EXISTS condition better than IN?
If the result set of the subquery is small then IN is typically more appropriate,
whereas if the result set of the subquery is big/large then EXISTS is more
appropriate. EXISTS always results in a full scan of the outer table, whereas the IN
query can make use of an index on the outer table.
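A sketch of the two forms (t1 and t2 are hypothetical tables with columns x and y):

```sql
-- IN: the subquery result is built first, then an index on t1.x
-- can be used to probe the outer table
SELECT * FROM t1 WHERE x IN (SELECT y FROM t2);

-- EXISTS: t1 is scanned and the correlated subquery probes
-- an index on t2.y once per candidate row
SELECT *
FROM   t1
WHERE  EXISTS (SELECT NULL FROM t2 WHERE t2.y = t1.x);
```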
Is Oracle really quicker on Windows than Solaris?
In my experience, yes: Windows performed better than just about any UNIX box on
comparable hardware. I was working on Windows but installed Solaris to test, and I
found the Windows installations always outperformed the Solaris ones, both on the
initial load of the pool cache and on subsequent runs. The test package is rather
large (5000+ lines) and is used in a form to display customer details. On Solaris I
was typically getting an initial return time of 5 seconds and on Windows, typically, 1
second. Even on subsequent (i.e. cached) runs Windows outperformed Solaris. The
SGA parameter sizes were approximately the same, the file systems used the
conventional method, and in both cases the disk configuration was local.
What is Difference between DBname and instance_name?
A database is a set of files (data, redo, ctl and so on) where as An instance is a set
of processes (SMON, PMON, DBWR, etc) and a shared memory segment (SGA).
A database may be mounted and opened by many instances (Parallel Server)
concurrently. An instance may mount and open any database; however, it may
only open a single database at any time. Therefore you need a unique instance
name (SID) for each set of files.
Does DBCA create instance while creating database?
DBCA does not create the instance; it creates the database (the set of files). The
instance is transient: do a shutdown and the instance is gone. On Windows, DBCA
registers the necessary services that can be used to start an instance when you want.
Is there any way to create database without DBCA?
Yes, you can use oradim directly (on Windows).
What's the difference between connections, sessions and processes?
A connection is a physical circuit between you and the database. A connection
might be one of many types -- the most popular being DEDICATED server and SHARED
server. Zero, one or more sessions may be established over a given connection to
the database, as shown below with sqlplus. A process will be used by a session to
execute statements. Sometimes there is a one to one relationship between
CONNECTION->SESSION->PROCESS (eg: a normal dedicated server connection).
Sometimes there is a one to many from connection to sessions (eg: like autotrace,
one connection, two sessions, one process). A process does not have to be
dedicated to a specific connection or session however, for example when using
shared server (MTS), your SESSION will grab a process from a pool of processes in
order to execute a statement. When the call is over, that process is released back to
the pool of processes.
SQL>select username from v$session where username is not null;
you can see one session, me
SQL>select username, program from v$process;
you can see all of the backgrounds and my dedicated server...
Autotrace for statistics uses ANOTHER session so it can query up the stats for your
CURRENT session without impacting the STATS for that session!
SQL>select username from v$session where username is not null;
now you can see two session but...
SQL>select username, program from v$process;
Same 14 processes...
What about Fragmentation situation (LMT) in oracle 8i and up?
Fragmentation means you have many small holes (regions of contiguous free
space) that are too small to be the next extent of any object. These holes of free
space result from dropping some objects (or truncating them), and the resulting
free space cannot be used by any other object in that tablespace. This is a direct
result of using a pctincrease that is not zero and having many weird-sized extents
(every extent a unique size and shape). In Oracle 8i and above we all use
locally managed tablespaces. These use either uniform sizing or the automatic
allocation scheme. In either case it is almost impossible to get into a situation where
you have unusable free space.
To see if you suffer from fragmentation you can query DBA_FREE_SPACE (it is best
to do an ALTER TABLESPACE ... COALESCE first to ensure all contiguous free space
is merged into one big free region). You would look for any free space that is smaller
than the smallest next extent size of any object in that tablespace. Check with the
query below:
select * from dba_free_space
where tablespace_name = 'T'
and bytes <= (select min(next_extent) from dba_segments
              where tablespace_name = 'T')
order by block_id;
Is there a way we can flush out a known data set from the database buffer
cache?
In real life, no: the cache would never be empty. It is true that 10g introduced
ALTER SYSTEM FLUSH BUFFER_CACHE, but it is not really worthwhile. Benchmarking
against an empty buffer cache is artificial, no more realistic than what you are
currently doing.
What would be the best approach to benchmark the response time for a
particular query?
run query q1 over and over (with many different inputs)
run query q2 over and over (with many different inputs)
discard first couple of observations, and last couple
use the observations in the middle
What is difference between Char and Varchar2 and which is better
approach?
A CHAR datatype and VARCHAR2 datatype are stored identically (eg: the word
'WORD' stored in a CHAR(4) and a varchar2(4) consume exactly the same amount
of space on disk, both have leading byte counts).
The difference between a CHAR and a VARCHAR is that a CHAR(n) will ALWAYS be N
bytes long, it will be blank padded upon insert to ensure this. A varchar2(n) on the
other hand will be 1 to N bytes long, it will NOT be blank padded. Using a CHAR on a
varying width field can be a pain due to the search semantics of CHAR.
Consider the following examples:
startup mount
alter database noarchivelog;
alter database open;
connect /
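To illustrate the CHAR blank-padding semantics described in the answer above (t is a hypothetical scratch table):

```sql
CREATE TABLE t (c CHAR(10), v VARCHAR2(10));
INSERT INTO t VALUES ('WORD', 'WORD');

-- the CHAR column is always blank padded to 10 bytes, the VARCHAR2 is not
SELECT LENGTH(c) AS len_c, LENGTH(v) AS len_v FROM t;  -- 10 and 4

-- mixing the two triggers nonpadded comparison semantics:
-- 'WORD      ' <> 'WORD', so no rows are returned
SELECT * FROM t WHERE c = v;
```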
How to Update millions or records in a table?
If we had to update millions of records I would probably opt to NOT update.
I would more likely do:
CREATE TABLE new_table as select <do the update "here"> from old_table;
index new_table
grant on new table
add constraints on new_table
etc on new_table
drop table old_table
rename new_table to old_table;
You can do that using parallel query, with nologging on most operations, generating
very little redo and no undo at all, in a fraction of the time it would take to update
the data.
SQL>create table new_emp as select empno, LOWER(ename) ename, JOB,
MGR, HIREDATE, SAL, COMM, DEPTNO from emp;
SQL>drop table emp;
SQL>rename new_emp to emp;
How to convert database server sysdate to GMT date?
Select sysdate, sysdate + (substr(tz_offset(dbtimezone),1,1)||'1') * to_dsinterval('0 '
||substr(tz_offset(dbtimezone),2,5)||':00') from dual;
It lets third parties (anyone, really) build an alternative interface to RMAN, as it
permits anyone who can connect to an Oracle instance to control RMAN
programmatically.
How To turn Debug Feature on in rman?
run {
allocate channel c1 type disk;
debug on;
}
rman> list backup of database;
Now you will see debug output. You can always turn debug off by issuing:
rman> debug off;
Assuming I have a "FULL" backup of users01.dbf containing employees table that
contains 1000 blocks of data. If I truncated employees table and then an
incremental level 1 backup of users tablespace is taken, will RMAN include 1000
blocks that once contained data in the incremental backup?
No. The blocks were not written to; the only changes made by the truncate were to
the data dictionary (and the file header). RMAN won't see them as changed blocks,
since they were not changed.
Where should the catalog be created?
The recovery catalog to be used by Rman should be created in a separate database
other than the target database. The reason is that the target database will be
shutdown while datafiles are restored.
How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command and once for confirmation.
What are the various reports available with RMAN?
rman>list backup; rman> list archive;
What is the use of snapshot controlfile in terms of RMAN backup?
RMAN uses the snapshot controlfile as a way to get a read-consistent copy of the
controlfile; it uses this to do things like RESYNC the catalog (otherwise the
controlfile is a moving target, constantly changing, and RMAN would get blocked
and would block the database).
Can RMAN write to disk and tape Parallel? Is it possible?
RMAN currently won't write to tape directly; you need a media manager for that. As
for writing to disk and tape in parallel: not as far as I know, you would run the two
backups separately. Maintaining duplicate backups like that may achieve the
desired result.
What is the difference between DELETE INPUT and DELETE ALL command
in backup?
Generally speaking LOG_ARCHIVE_DEST_n points to two disk drive locations where
we archive the files, when a command is issued through rman to backup archivelogs
it uses one of the location to backup the data. When we specify delete input the
location which was backed up will get deleted, if we specify delete all (all
log_archive_dest_n) will get deleted.
DELETE all applies only to archived logs.
delete expired archivelog all;
Is it possible to restore a backupset (actually backup pieces) from a
different location to where RMAN has recorded them to be ?
With 9.2 and earlier it is not possible to restore a backupset (actually backup pieces)
from a different location to where RMAN has recorded them to be. As a workaround
you would have to create a link using the location of where the backup was
originally located. Then when restoring, RMAN will think everything is the same as it
was.
Starting in 10.1 it is possible to catalog the backup pieces in their new location into
the controlfile and recovery catalog. This means they are available for restoration
by RMAN without creating the link.
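A sketch of the 10.1+ approach described above; the paths are hypothetical:

```
RMAN> CATALOG BACKUPPIECE '/new/location/o1_mf_nnndf_backup.bkp';

-- or catalog everything RMAN can find under a directory
RMAN> CATALOG START WITH '/new/location/';
```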
What is difference between Report obsolete and Report obsolete orphan
REPORT OBSOLETE reports backups that are unusable according to the user's
retention policy, whereas REPORT OBSOLETE ORPHAN reports backups that are
unusable because they belong to incarnations of the database that are not direct
ancestors of the current incarnation.
How to Increase Size of Redo Log
1. Add new log file groups with the new size:
ALTER DATABASE ADD LOGFILE GROUP ...
2. Switch with ALTER SYSTEM SWITCH LOGFILE until one of the new log file groups
is in state CURRENT.
3. Now you can drop the old log file groups:
ALTER DATABASE DROP LOGFILE GROUP ...
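The steps above can be sketched as follows (group numbers, paths and sizes are illustrative):

```sql
-- 1. add new, larger groups
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 200M;

-- 2. switch until a new group is CURRENT
ALTER SYSTEM SWITCH LOGFILE;
SELECT group#, status FROM v$log;

-- 3. drop an old group once it is INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
```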
What is the difference between alter database recover and sql*plus
recover command?
ALTER DATABASE RECOVER is useful when you, as a user, want to control the
recovery, whereas the SQL*Plus RECOVER command is useful when we prefer
automated recovery.
Difference of two view V$Backup_Set and Rc_Backup_Set in respect of
Rman
V$BACKUP_SET is used to check backup details when we are not using an RMAN
catalog, that is, when the backup information is stored in the controlfile, whereas
RC_BACKUP_SET is used when we are using a catalog as a central repository to list
the backup information.
Can I cancel a script from inside the script? How do I cancel a SELECT on a
Windows client?
Use Ctrl-C.
How to Find the Number of Oracle Instances Running on Windows Machine
C:\>net start | find "OracleService"
How to create an init.ora from the spfile when the database is down?
Follow the usual steps:
SQL> connect sys/oracle as sysdba
SQL> shutdown;
SQL> create pfile from spfile;
SQL> create spfile from pfile;
When you shutdown the database, how does oracle maintain the user
session i.e.of sysdba?
You still have your dedicated server
!ps -auxww | grep ora920
sys@ORA920> !ps -auxww | grep ora920
sys@ORA920> shutdown
sys@ORA920> !ps -auxww | grep ora920
You can see you still have your dedicated server. When you connect as sysdba, you
fire up a dedicated server, and that is where the session runs.
What is the ORA-00204 error? What will you do in that case?
A disk I/O failure was detected while reading the control file. Basically you have to
check whether the control file is available, whether permissions on the control file
are right, and whether the spfile/init.ora points to the right location. If all checks
are done and you are still getting the error, then overlay the corrupted control file
with one of the multiplexed copies.
Let us say you have three control files control01.ctl, control02.ctl and control03.ctl,
and you are getting errors on control03.ctl: just copy control01.ctl over
control03.ctl and you should be all set.
In order to issue ALTER DATABASE BACKUP CONTROLFILE TO TRACE; the database
should be mounted, and if in our case it is not mounted, the only other options
available are to restore the control file from backup or to copy a multiplexed
control file over the bad one.
Why do we need SCOPE=BOTH clause?
BOTH indicates that the change is made in memory and in the server parameter
file. The new setting takes effect immediately and persists after the database is
shut down and started up again. If a server parameter file was used to start up the
database, then BOTH is the default. If a parameter file was used to start up the
database, then MEMORY is the default, as well as the only scope you can specify.
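A quick sketch of the three scopes (open_cursors is just an example parameter):

```sql
-- change in memory and in the spfile (the default with an spfile)
ALTER SYSTEM SET open_cursors = 400 SCOPE=BOTH;

-- change only the running instance; lost at the next restart
ALTER SYSTEM SET open_cursors = 400 SCOPE=MEMORY;

-- change only the spfile; takes effect at the next restart
ALTER SYSTEM SET open_cursors = 400 SCOPE=SPFILE;
```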
How to know Number of CPUs on Oracle
Login as SYSDBA
SQL>show parameter cpu_count
NAME TYPE VALUE
cpu_count integer 2
Could you please tell me what are the possible reason for Spfile corruption
and Recovery?
It should not become corrupt under normal circumstances; if it did, it would be a
bug or a failure of some component in your system, for example a file system error.
You can easily recover, however:
a) Your alert log has the non-default parameters in it from your last restart.
b) It should be in your backups.
c) strings spfile.ora > init$ORACLE_SID.ora, and then editing the resulting file to
clean it up, is another option.
How will you check whether flashback is enabled or not?
SQL> select flashback_on from v$database;
My spfile is corrupt and now I cannot start my database running on my laptop. Is
there a way to build spfile again?
if you are on unix then
$ cd $ORACLE_HOME/dbs
$ strings spfile$ORACLE_SID.ora > temp_pfile.ora
edit the temp_pfile.ora, clean it up if there is anything "wrong" with it and then
SQL> startup pfile=temp_pfile.ora
SQL> create spfile from pfile;
SQL> shutdown
SQL> startup
On Windows, just try editing the spfile [do not try this on the prod db first; check it
on a test db, as it can be dangerous] and create a pfile from it. Save it and proceed
the same way; or, if you have a problem, you can start the db from the command
line using sqlplus: create a pfile, then do a manual startup (start the Oracle
service, then use sqlplus to start the database).
What is a fractured block? What happens when you restore a file
containing fractured block?
A block in which the header and footer are not consistent at a given SCN. In a
user-managed backup, an operating system utility can back up a datafile at the same
time that DBWR is updating the file. It is possible for the operating system utility to
read a block in a half-updated state, so that the block that is copied to the backup
media is updated in its first half, while the second half contains older data. In this
case, the block is fractured.
For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER
DATABASE BEGIN BACKUP command is the solution for the fractured block problem.
When a tablespace is in backup mode, and a change is made to a data block, the
database logs a copy of the entire block image before the change so that the
database can reconstruct this block if media recovery finds that this block was
fractured.
The block that the operating system reads can be split, that is, the top of the block
is written at one point in time while the bottom of the block is written at another
point in time. If you restore a file containing a fractured block and Oracle reads the
block, then the block is considered corrupt.
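A sketch of the user-managed hot backup mechanism described above (tablespace name and paths are illustrative):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafile with an OS utility while DBWR keeps writing, e.g.
-- host cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;
```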
You recreated the control file using ALTER DATABASE BACKUP CONTROLFILE TO
TRACE rather than ALTER DATABASE BACKUP CONTROLFILE TO
'D:\Backup\control01.ctl'. What have you lost in that case?
You lose all of the RMAN backup information recorded in the control file when you
recreate it from a trace backup, whereas all backup information is retained when
you take a binary control file backup.
If a backup is issued after a shutdown abort command, what kind of
backup is that?
It is an inconsistent backup. If you are in noarchivelog mode, ensure that you issue
a shutdown immediate instead. Startup force is another option: issue startup force
(which performs a shutdown abort and restarts the instance) followed by shutdown
immediate, and then take the backup.
What are the different types of database links?
Private database link is created on behalf of a specific user. A private database link
can be used only when the owner of the link specifies a global object name in a SQL
statement or in the definition of the owner's views or procedures.
Public database link is created for the special user group PUBLIC. A public database
link can be used when any user in the associated database specifies a global object
name in a SQL statement or object definition.
Network database link is created and managed by a network domain service. A
network database link can be used when any user of any database in the network
specifies a global object name in a SQL statement or object definition.
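A sketch of creating and using a private database link (the user, password and TNS alias are hypothetical):

```sql
CREATE DATABASE LINK sales_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'salesdb';              -- TNS alias of the remote database

-- query a remote table through the link
SELECT * FROM emp@sales_link;
```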
How to know which version of database you are working?
select * from v$version;
In Reference to Rman point in time Recovery which scenario is better for
you (Until time or until sequence)?
While practicing various backup and recovery scenarios using RMAN, I found UNTIL
SCN better than UNTIL TIME, with UNTIL SEQUENCE in the middle. UNTIL TIME is
still (ultimately) going to use an SCN to recover, so if you know the SCN it is
preferred; if not, then time is fine.
If you have forgotten the root password on CentOS then what you will do?
If you are on CentOS then follow these steps:
- At the splash screen during boot time, press any key, which will take you to an
interactive menu.
- Then select the Linux version you wish to boot and press 'a' to append options to
the boot command line.
- Next, at the end of that line, type 'single' as an option/parameter and press Enter
to execute the boot. This starts the OS in single user mode, which allows you to
reset the root password by typing passwd and setting a new password for root.
How to determine whether the datafiles are synchronized or not?
select status, checkpoint_change#, to_char(checkpoint_time, 'DD-MON-YYYY
HH24:MI:SS') as checkpoint_time, count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;
Check the results of the above query: if it returns one and only one row for the
online datafiles, they are already synchronized in terms of their SCN. Otherwise the
datafiles are not yet synchronized.
You have just restored from backup and do not have any control files. How
would you go about bringing up this database?
If you do not have a control file, you can create one from scratch in SQL*Plus as
follows:
1. sqlplus /nolog
2. connect / as sysdba
3. Startup nomount;
4. Then either create the control file or restore it from backup (if you have one)
5. alter database mount;
6. recover database using backup controlfile;
7. alter database open;
For more details follow my blog post "Disaster Recovery from the scratch":
http://shahiddba.blogspot.com/2012/05/rman-disaster-recovery-from-scratch.html
Is there any way to find the last record from the table?
select * from employees where rowid in(select max(rowid) from employees);
select * from employees minus select * from employees where rownum < (select
count(*) from employees);
How you will find Oracle timestamp from current SCN?
select dbms_flashback.get_system_change_number scn from dual; -- Oracle Ver. 9i
SCN
----------
8843525
SQL> Select to_char(CURRENT_SCN) from v$database; -- oracle Ver. 10g or above
SQL> select current_scn, dbms_flashback.get_system_change_number from
v$database; --standby case
SQL> select scn_to_timestamp(8843525) from dual;
-- Shows buffer cache contents before flushing
SQL> select distinct status from v$bh;
STATUS
-------
cr
free
xcur
-- flush buffer cache for 10g and upwards
SQL> alter system flush buffer_cache;
System altered.
-- flush buffer cache for 9i and upwards
SQL> alter session set events immediate trace name flush_cache;
Session altered.
-- Shows buffer cache was freed after flushing buffer cache
SQL> select distinct status from v$bh;
STATUS
-------
free
How to suspend all jobs from executing in dba_jobs?
By setting the value of 0 to the parameter job_queue_processes you can suspend
all jobs from executing in DBA_JOBS. The value of this parameter can be changed
without instance restart.
SQL> show parameter job_queue_processes;
NAME TYPE VALUE
job_queue_processes integer 400
Now set the value of the parameter in memory, which will suspend jobs from
starting
SQL> alter system set job_queue_processes=0 scope=memory;
System altered.
How to see the jobs currently being executed?
By using dba_jobs_running you can see all the jobs currently executing:
SQL> select djr.sid, djr.job, djr.failures, djr.this_date, djr.this_sec, dj.what from
dba_jobs_running djr, dba_jobs dj where djr.job = dj.job;
What is GSM in Oracle application E-Business Suite?
GSM stands for Generic Service Management framework. Oracle E-Business Suite
consists of various components like Forms, Reports, Web Server, Workflow and
Concurrent Manager. Earlier, each service used to start on its own, which made
managing these services hard, given that they can be on various machines
distributed across the network. Generic Service Management is an extension of
Concurrent Processing which manages all your services and provides fault
tolerance (if some service is down, the ICM, through FNDSM and other processes,
will try to start it, even on a remote server). With GSM, all services are centrally
managed via this framework.
How can you license a product after installation?
You can use the AD utility adlicmgr to license a product in Oracle Applications.
In a situation where you want to know the last query fired by a user, how do you
check?
Select S.USERNAME||'('||s.sid||')-'||s.osuser UNAME
,s.sid||'/'||s.serial# sid,s.status "Status",p.spid,sql_text sqltext
from v$sqltext_with_newlines t,V$SESSION s , v$process p
where t.address =s.sql_address and p.addr=s.paddr(+) and t.hash_value =
s.sql_hash_value
order by s.sid,t.piece;
Can one copy Oracle software from one machine to another?
Yes, one can copy or FTP the Oracle software between similar machines. Look at the
following example:
# use tar to copy files and directories with permissions and ownership
tar cf - $ORACLE_HOME | rsh <remote_host> "cd /; tar xf -"
To copy the Oracle software to a different directory on the same server:
cd /new/oracle/dir/
(cd $ORACLE_HOME; tar cf - .) | tar xvf -
NOTE: Remember to relink the Intelligent Agent on the new machine to prevent
messages like "Encryption key supplied is not the one used to encrypt file":
cd /new/oracle/dir/
cd network/lib
make -f ins_agent.mk install
A single transaction can have multiple deletes and a single SCN number
identifying all of these deletes. What if I want to flash back only a single
individual delete?
You would flash back to the SYSTEM SCN (not your transaction's SCN) at that point
in time. The SYSTEM has an SCN and your transaction has an SCN; you care about
the SYSTEM SCN with flashback, not your transaction's SCN.
Are flash back queries useful for the developer or the DBA both? How can I
as a developer and DBA get to know the SCN number of a transaction?
Oracle Flashback is a tool useful for both the DBA and the developer. If you deleted
data accidentally, then either the DBA or the developer can flash back, recover and
fix the problem. As a developer you can use
dbms_flashback.get_system_change_number to return the current system SCN,
and as a DBA you can use the LogMiner utility to look back in time at various events
and find SCNs as well.
After performing a DML operation you can use flashback query to return
your committed data. Can you use the flashback concept after truncating
any data?
No. In version 9i, flashback is limited to Data Manipulation Language (DML)
commands such as SELECT, INSERT, UPDATE, and DELETE. Truncate does not
generate any undo for the table: truncate just cuts it all loose, whereas delete puts
the deleted data into undo, and flashback query works on undo.
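A sketch of flashback query recovering an accidental (committed) delete; EMP, the department number and the 10-minute window are illustrative, and the AS OF syntax shown is the 10g-style form:

```sql
DELETE FROM emp WHERE deptno = 10;
COMMIT;

-- view the rows as they were 10 minutes ago (undo retention permitting)
SELECT *
FROM   emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE  deptno = 10;

-- put them back
INSERT INTO emp
  SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
  WHERE  deptno = 10;
COMMIT;
```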
My suggestion is to install the same software on another server, then apply the
restore and recover procedure in the same environment and directory structure.
No idea about "relink oracle without doing any installation", see the admin guide for
your OS for details on things like this.
Is there any difference between Oracle TCL and DCL commands?
DCL stands for Data Control Language. These commands are used to configure and
control access to database objects, e.g. GRANT and REVOKE, whereas TCL stands
for Transaction Control Language. It is used to manage the changes made by DML
statements and allows statements to be grouped together into logical transactions,
such as:
COMMIT - save work done
SAVEPOINT - identify a point in a transaction to which you can later roll back
ROLLBACK - restore database to original since the last COMMIT
SET TRANSACTION - Change transaction options like isolation level and what
rollback segment to use
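A short sketch of the TCL commands above in use (the accounts table is hypothetical):

```sql
INSERT INTO accounts VALUES (1, 500);
SAVEPOINT after_first_insert;      -- mark a point we can roll back to

INSERT INTO accounts VALUES (2, 300);
ROLLBACK TO after_first_insert;    -- undo only the second insert

COMMIT;                            -- make the first insert permanent
```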
What happens when the lock is disabled on a table?
When you disable the table lock you are not able to perform DDL operations on
that table, but you can still perform DML operations easily.
For Example:
Create Table s1 (Eno number(2), ename varchar2(15), salary number(5,2));
insert into s1 values (1, 'shahid', 400);
insert into s1 values (1, 'javed', 200);
insert into s1 values (2, 'karim', 100);
--disable lock on table
Alter table s1 disable table lock;
-- cannot drop/truncate the table as the table lock is disabled
drop table s1;
truncate table s1;
-- you also cannot add/modify/drop a column
Use the primary key with the table. If you combine the rowid with the primary key
then it is perfectly safe to use rowid in all cases.
If you have a single delete statement that deletes many records using
rowids, would there ever be a time when a rowid within this table changes
during the execution of the delete statement?
In order for a rowid to change you have to enable row movement first, so if row
movement is not enabled the answer is NO. If it is enabled, then a DDL statement
such as flashback table could change a rowid, but that would not happen
concurrently with a delete (so it would not affect it).
For Example:
Alter table s1 shrink space compact, that moves rows and would change rowids.
Update of a partition key that causes a row to move, that moves rows and would
change rowids.
If I fire two inserts in a table, whether the rowid of the 2nd record will be
greater than rowid of the 1st record?
The answer is NO see the example below
if you insert A
then insert B
later insert C
delete A
insert D
It is quite possible in the above example that D will be "first" in the table, as it took
over A's place. If rowids always "grew", then space would never be reused (that
would be an implication of rowids always growing; we would never be able to reuse
old space), since a rowid is just a file.block.slot-on-block, a physical address.
Difference between Stored Procedure and Macro?
Stored Procedure:
It does not return rows to the user.
It has to use cursors to fetch multiple rows.
It uses INOUT/OUT parameters to send values to the user.
It is stored in DATABASE or USER PERM space.
Use the FEEDBACK=n import parameter. This will tell IMP to display a dot for every
n rows imported.
How will we increase performance on a particular table? I am inserting
2GB of data into a table and it takes a long time. Is there any way to
increase performance on a particular table?
An index on a huge table will slow inserts down rather than speed them up. Get
your table partitioned: that will make insertion faster and also makes it easy to
manage the archive data. Alternatively, first disable the constraints as well as the
indexes, then perform the insertion, then enable them again.
You can use high-speed solid-state disk (RAM-SAN) to make Oracle inserts run up to
300x faster than platter disk.
How to reduce alert log Size?
If you move or delete your alert log file, it is recreated automatically at the next
startup; alternatively you can put a script at OS level to move the old log aside and
let Oracle start a new one. So the best way to reduce the size of the log is just to
move your alert.log to some other place; Oracle will recreate it at the next startup.
How you will know the instance is Primary or Standby?
By querying v$database one can tell if the host is primary or standby
On the primary database:
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PRIMARY
OR check the value of controlfile_type in v$database: it is CURRENT for the primary
and STANDBY for a standby.
SQL> SELECT controlfile_type FROM v$database;
CONTROL
-------
CURRENT
On the Standby database:
SQL> select database_role from v$database;
DATABASE_ROLE
------------------
PHYSICAL STANDBY
SQL> SELECT controlfile_type FROM V$database;
CONTROL
-------
STANDBY
Note: You may need to connect as sys if the instance is in the mount state.
How would you determine what sessions are connected and what
resources they are waiting for?
Use of V$SESSION and V$SESSION_WAIT
Give two methods you could use to determine what DDL changes have
been made.
You could use Logminer or Streams
How would you determine who has added a row to a table?
Turn on fine grain auditing for the table.
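A minimal sketch of turning on fine-grained auditing with the DBMS_FGA package. The schema, table and policy names are hypothetical, and auditing INSERT (rather than only SELECT) requires 10g or later:

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HRMS',                -- hypothetical schema
    object_name     => 'PAY_PAYMENT_MASTER',  -- hypothetical table
    policy_name     => 'AUDIT_ROW_ADDS',
    statement_types => 'INSERT');             -- audit who adds rows
END;
/
-- Audited statements can then be reviewed in DBA_FGA_AUDIT_TRAIL.
```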
Explain the differences between PFILE and SPFILE
A PFILE is a static text file that initializes the database parameters at the
moment the instance is started. If you want to modify parameters in a PFILE, you
have to restart the database.
A SPFILE is a dynamic, binary file that allows you to change parameters while the
database is already started (with some exceptions).
Name some clients that can connect with Oracle?
There are several such as SQL Developer, SQL-Plus, TOAD, dbvisualizer, PL/SQL
Developer.
In which view can you find information about every view and table of
oracle dictionary?
DICT or DICTIONARY view. You can query as:
SQL> SELECT * FROM DICT;
How can we change which databases are started during a reboot in Linux
Env.?
Edit the /etc/oratab
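Each oratab entry's third field controls whether the dbstart/dbshut scripts act on that instance at boot. The SIDs and paths below are examples only:

```
# /etc/oratab format:  SID:ORACLE_HOME:Y|N
# Y = start this instance at boot via dbstart; N = leave it down.
orcl:/u01/app/oracle/product/11.2.0/dbhome_1:Y
devdb:/u01/app/oracle/product/11.2.0/dbhome_1:N
```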
SYS user account - All base tables and views for the database's data dictionary are
stored in this schema and are manipulated only by Oracle.
SYSTEM user account - It has all the system privileges for the database; additional
tables and views that display administrative information, and internal tables and
views used by Oracle tools, are created under this username.
What are the minimum parameters should exist in the parameter file
(init.ora) ?
DB_NAME - Must be set to a text string of no more than 8 characters; it is stored
inside the datafiles, redo log files and control files at database creation.
DB_DOMAIN - A string that specifies the network domain where the database is
created. The global database name is formed from these two parameters
(DB_NAME and DB_DOMAIN).
CONTROL_FILES - List of control file names of the database. If no name is
mentioned then a default name is used.
DB_BLOCK_BUFFERS - Determines the number of buffers in the buffer cache in the SGA.
PROCESSES - Determines the number of operating system processes that can be
connected to Oracle concurrently. Allow 5 for the background processes plus an
additional 1 for each user.
ROLLBACK_SEGMENTS - List of rollback segments an Oracle instance acquires at
database startup. Also optionally
LICENSE_MAX_SESSIONS, LICENSE_SESSIONS_WARNING and LICENSE_MAX_USERS.
What is the difference between NAME_IN and COPY ?
COPY is a packaged procedure that writes a value into a field.
NAME_IN is a packaged function that returns the contents of the variable to which
you apply it.
How do you implement the If statement in the Select Statement
We can implement an IF in a SELECT statement by using DECODE, e.g.
SELECT DECODE(EMP_CAT, '1', 'First', '2', 'Second', NULL) FROM EMP;
Here the final NULL is the ELSE branch, returned when no value matches.
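The ANSI CASE expression is the equivalent, generally more readable form (EMP_CAT is the same hypothetical column as above):

```sql
SELECT CASE emp_cat
         WHEN '1' THEN 'First'
         WHEN '2' THEN 'Second'
         ELSE NULL
       END AS category
FROM emp;
```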
How many rows will the following SQL return?
Select * from emp Where rownum = 10;
No rows. ROWNUM is assigned to each candidate row as it is fetched, starting at 1;
the first row gets ROWNUM = 1, fails the predicate ROWNUM = 10 and is discarded,
so ROWNUM never advances past 1 and the condition can never be satisfied (only
ROWNUM = 1 or ROWNUM < n predicates return rows).
Can dual table be deleted, dropped or altered or updated or inserted?
Yes
If it is set at instance level, a trace file is created for all connected sessions.
If it is set at session level, a trace file is generated only for the specified
session.
How can you use automatic PGA memory management with oracle 9i or
above?
Set the WORKAREA_SIZE_POLICY parameter to AUTO and set
PGA_AGGREGATE_TARGET.
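The two settings together can be sketched as follows (the 500M target is purely illustrative; size it from your workload):

```sql
-- Sketch: enable automatic PGA memory management (9i and later)
ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 500M SCOPE = BOTH;
```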
When a user comes to you and asks that a particular SQL query is taking
more time. How will you solve this?
If you find that a particular query is taking time to execute, take a SQL trace with
an explain plan; it shows how the SQL query will be executed by Oracle, and
depending on the report you tune the query or the database.
Then determine the table size and check what percentage of the table's data the
query actually needs.
For example: a table has 10,000 records but you want to fetch only 5 rows, yet
Oracle does a full table scan. For only 5 rows a full table scan is not good, so
create an index on that particular column.
If the user requires more than about 80% of the data from the table, then creating
an index can actually give poorer performance, because Oracle gets contention on
the db buffer cache: the index blocks have to be read first and then almost all
blocks of the table are pulled in anyway. This increases the I/O, and other users'
requests may also slow down because existing data in the cache is flushed out and
reloaded.
Additionally we need to check system-level performance: is there a problem with
DBWn, is DBWn slow in writing modified buffers to the datafiles, or are user server
processes waiting for space in the buffer cache?
Check the alert log file too.
Check whether the user query needs a join or sorting.
Check whether there is enough space in the temporary tablespace.
If the user still faces the issue, drill down to the table block level and check
whether the table needs defragmenting because its high-water mark has moved up.
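The trace-first step above can be sketched in the session running the slow statement (the tracefile_identifier tag is optional and its value is arbitrary):

```sql
-- Sketch: trace the offending statement, then format the raw trace with tkprof
ALTER SESSION SET tracefile_identifier = 'slow_query';  -- tag for easy lookup
ALTER SESSION SET sql_trace = TRUE;
-- ... run the slow statement here ...
ALTER SESSION SET sql_trace = FALSE;
```

Then, at the OS level, run tkprof on the generated trace file (found in the user dump destination) to get per-statement timings, row counts and the execution plan.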
What is Difference between sqlnet.ora, listener.ora, tnsname.ora network
file?
sqlnet.ora: The normal location for this file is D:\oracle\ora92\network\admin. The
sqlnet.ora file is the profile configuration file, and it resides on the client
machines and on the database server. It is an optional text file that contains
basic configuration details used by SQL*Net, such as the domain name, the order of
naming methods used when resolving the name of an instance, authentication
services, etc.
listener.ora: The normal location for this file is D:\oracle\ora92\network\admin.
This is a server-side file: it configures the listener, i.e. the protocol addresses
it listens on and the services (databases) it can hand connections to.
tnsnames.ora: The normal location for this file is D:\oracle\ora92\network\admin.
This file typically resides on the client (it can exist on the server too) and maps
net service names to connect descriptors; the client uses it to obtain connection
details for the desired database. If you make configuration changes on the server,
ensure you can still connect to the database through the listener while logged on
to the server. If you make configuration changes on the client, ensure you can
connect from your client workstation to the database through the listener running
on the server.
What is the address of official oracle support?
Metalink.oracle.com or support.oracle.com
Is the password in oracle case sensitive?
In Oracle 10g and earlier versions: NO; since 11g: YES.
You can make image copies only on disk but not on a tape device. "backup as copy
database;" Therefore, you can use the backup as copy option only for disk backups,
and the backup as backupset option is the only option you have for making tape
backups.
How can we see the C:\ drive free space capacity from SQL?
Create an external table to read the data from a file, as below.
First create a BAT file free.bat:
@setlocal enableextensions enabledelayedexpansion
@echo off
for /f "tokens=3" %%a in ('dir c:\') do (
set bytesfree=%%a
)
set bytesfree=%bytesfree:,=%
echo %bytesfree%
endlocal && set bytesfree=%bytesfree%
You can create a scheduler job that runs the above free.bat and writes its output
to free_space.txt inside an Oracle directory, then read that file through the
external table.
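The external-table side can be sketched as follows; the directory path and file names are assumptions:

```sql
-- Sketch: external table over the free_space.txt written by free.bat
CREATE OR REPLACE DIRECTORY free_dir AS 'C:\oracle\scripts';

CREATE TABLE c_drive_free (bytes_free NUMBER)
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY free_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
    LOCATION ('free_space.txt'));

SELECT bytes_free FROM c_drive_free;
```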
Differentiate between Tuning Advisor and Access Advisor?
The Tuning Advisor:
-->It suggests indexes that might be very useful.
-->It suggests query rewrites.
-->It suggests SQL profiles.
The Access Advisor:
-->It suggests indexes that may be useful.
-->It suggests materialized views.
-->It suggests table partitions (in the latest versions of Oracle).
How to give Access of particular table for particular user?
GRANT SELECT (EMPLOYEE_NUMBER), UPDATE (AMOUNT) ON
HRMS.PAY_PAYMENT_MASTER TO SHAHID;
The Below command checks the SELECT privilege on the table
PAY_PAYMENT_MASTER on the HRMS schema (if connected user is different than the
schema)
SELECT PRIVILEGE
FROM ALL_TAB_PRIVS_RECD
WHERE PRIVILEGE = 'SELECT'
AND TABLE_NAME = 'PAY_PAYMENT_MASTER'
AND OWNER = 'HRMS'
UNION ALL
SELECT PRIVILEGE
FROM SESSION_PRIVS
WHERE PRIVILEGE = 'SELECT ANY TABLE';
What are the problem and complexities if we use SQL Tuning Advisor and
Access Advisor together?
I think both tools are useful for resolving SQL tuning issues. SQL Tuning Advisor
mainly performs logical optimization, checking your SQL structure and statistics,
while SQL Access Advisor suggests good data access paths, i.e. mainly work that
can be done better on disk.
Both SQL Tuning Advisor and SQL Access Advisor are quite powerful, as they can
source the SQL they will tune automatically from multiple different sources,
including the SQL cache, AWR, SQL Tuning Sets and user-defined workloads.
Regarding the complexity and problems of using these tools together, and how best
to combine them, check the Oracle documentation.
-->We do not see such issues with a logical standby database. We can open the
database in normal mode and make it available to users while, at the same time,
applying archived logs received from the primary database.
-->For a large-transaction OLTP database it is better to choose a logical standby
database.
How to re-organize schema?
We can use the DBMS_REDEFINITION package for online re-organization of schema
objects. Otherwise, using the export/import or Data Pump utilities you can recreate
or re-organize your schema.
To configure RMAN Backup for 100GB database? How we would estimate
backup size and backup time?
Check the actual size of your database; the RMAN backup size depends mostly on the
actual used size of the database.
SELECT SUM(BYTES)/1024/1024/1024 FROM DBA_SEGMENTS;
Backup time depends on your hardware configuration of your server such as CPU,
Memory, and Storage.
Later you can also minimize the backup time by configuring multiple channels with
the backup scripts.
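The multiple-channel idea can be sketched in RMAN (the parallelism degree of 4 is illustrative; compression trades CPU for backup size and time):

```sql
-- RMAN sketch: four concurrent disk channels for backups
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
-- Compressed backupsets also shrink the backup footprint
BACKUP AS COMPRESSED BACKUPSET DATABASE;
```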
How can you control number of datafiles in oracle database?
The db_files parameter is a "soft limit" parameter that controls the maximum
number of physical OS files that can map to an Oracle instance. The maxdatafiles
parameter is different - a "hard limit". When issuing a "create database"
command, the value specified for maxdatafiles is stored in the Oracle control
files; its default value is 32. The maximum number of database files can be set
with the init parameter db_files.
Regardless of the setting of this parameter, the maximum per database is 65533
(may be less on some operating systems), and the maximum number of datafiles per
tablespace is OS dependent, usually 1022.
The number of datafiles is also limited by the size of database blocks and by the
DB_FILES initialization parameter for a particular instance. Bigfile tablespaces
can contain only one file, but that file can have up to 4G blocks.
What is Latches and why they are used in oracle?
FROM PAY_EMPLOYEE_PERSONAL_INFO
WHERE EMPLOYEE_NUMBER BETWEEN 1 AND 100);
Example: a query used with the = operator is a nested query:
SELECT * FROM PARTIAL_PAYMENT_SEQUENCE
WHERE SEQCOD = (SELECT MAX(SEQCOD) FROM PARTIAL_PAYMENT_SEQUENCE);
One afternoon you suddenly get a call from an application user
complaining that the database is slow. What will be your first step to
solve this issue?
High performance is a common expectation of end users. In fact the database itself
is rarely slow or fast; in most cases the sessions connected to the database slow
down when they receive an unexpected hit. Thus, to solve this issue you need to
find those unexpected hits. To know exactly what another session is doing, join
your query with v$session_wait.
SELECT NVL(s.username, '(oracle)') AS username, s.sid, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid and s.username = 'HRMS'
ORDER BY sw.seconds_in_wait DESC;
Check the events that are waiting for something, and try to find the object locks
held against that particular session.
Locking is not the only thing that affects performance; disk I/O contention is
another cause. When a session retrieves data from the database datafiles on disk
into the buffer cache, it has to wait until the disk sends the data. The wait event
shows up for the session as db file sequential read (for an index scan) or db file
scattered read (for a full table scan).
When you see the event, you know that the session is waiting for I/O from the disk
to complete. To improve session performance, you have to reduce that waiting
period. The exact step depends on specific situation, but the first technique
reducing the number of blocks retrieved by a SQL statement almost always works.
-->Reduce the number of blocks retrieved by the SQL statement. Examine the
SQL statement to see if it is doing a full-table scan when it should be using an
index, if it is using a wrong index, or if it can be rewritten to reduce the
amount of data it retrieves.
-->Place the tables used in the SQL statement on a faster part of the disk.
-->Consider increasing the buffer cache to see if the expanded size will
accommodate the additional blocks, therefore reducing the I/O and the wait.
It supports other databases as well: MS SQL Server, IBM DB2 and Sybase.
However there are some key issues that it does not address. For example, a
privileged user can log in to the OS directly and make local connections to the
database, bypassing the database firewall. For these issues you would need other
security options such as Audit Vault, VPD etc.
What is Oracle RAC One Node?
Oracle RAC One Node is a single instance running on one node of the cluster while
the second node is in cold standby mode. If the instance fails for some reason, RAC
One Node detects it and restarts the instance on the same node, or the instance is
relocated to the second node in case there is a failure or fault on the first node.
The benefit of this feature is that it provides a cold failover solution and
automates instance relocation without downtime and without manual intervention.
Oracle introduced this feature with the release of 11gR2 (available with
Enterprise Edition).
What are invalid objects in database?
Sometimes schema objects reference other objects: for example, a view contains a
query that references a table or another view, and a PL/SQL subprogram invokes
other subprograms or references tables or views. These references are established
at compile time, and if the compiler cannot resolve them, the dependent object
being compiled is marked invalid.
On Linux: tune2fs -l
On Solaris: df -g /tmp
How to find location of OCR file when CRS is down?
If you need to find the location of the OCR (Oracle Cluster Registry) while your
CRS is down:
When the CRS is down:
Look into ocr.loc file, location of this file changes depending on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc
When CRS is UP:
Set ASM environment or CRS environment then run the below command:
ocrcheck
How can you Test your Standby database is working properly or not?
To test your standby database, make a change to particular table on the production
server, and commit the change. Then manually switch a logfile so those changes
are archived. Manually ship the newest archived redolog file, and manually apply it
on the standby database. Then open your standby database in read-only mode, and
select from your changed table to verify those changes are available. Once you
have done, shutdown your standby and startup again in standby mode.
What is Dataguard & what is the purpose of Data Guard?
Oracle Dataguard is a disaster recovery solution from Oracle Corporation that has
been utilized in the industry extensively at times of Primary site failure, failover,
switchover scenarios.
a) Oracle Data Guard ensures high availability, data protection, and disaster
recovery for enterprise data.
b) Data Guard provides a comprehensive set of services that create, maintain,
manage, and monitor one or more standby databases to enable production Oracle
databases to survive disasters and data corruptions.
c) With Data Guard, administrators can optionally improve production database
You have collection of patch (nearly 100 patches) or patchset. How can
you apply only one patch from it?
With napply itself (by providing the patch location and the specific patch id) you
can apply only one patch from a collection of extracted patches. For more
information check the opatch util napply help; it gives a clear picture.
For Example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only the patch id 9 from the patch location and will skip duplicate and
subset of patch installed in your ORACLE_HOME.
If both CPU and PSU are available for given version which one, you will
prefer to apply?
From the above discussion it is clear that once you apply a PSU, the recommended
way is to apply only subsequent PSUs. In fact there is no need to apply a CPU on
top of a PSU, as the PSU contains the CPU (applying a CPU over a PSU is treated as
an attempt to roll back the PSU and actually requires more effort). So if you have
not yet decided on or applied either of the patches, I suggest you use PSU
patches. For more details refer: Oracle Products [ID 1430923.1], ID 1446582.1
PSU is superset of CPU then why someone choose to apply a CPU rather
than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues.
This is theoretically a more conservative approach and can cause less trouble than
a PSU, as it changes less code. Thus, for anyone concerned only with security fixes
and not functionality fixes, a CPU may be the better approach.
Will Patch Application affect System Performance?
Sometimes applying a certain patch could affect the performance of application SQL
statements. Thus it is recommended to collect a set of performance statistics that
can serve as a baseline before making any major change, such as applying a patch
to the system.
What is your day to day activity as an Apps DBA?
As an Apps DBA we monitor the system for different alerts (Enterprise Manager or
third-party tools are used for configuring the alerts): tablespace issues, CPU
consumption, database blocking sessions etc. Regular maintenance activities
include cloning, patching, custom code migrations (provided by developers) and
working with user issues.
How often do you use patch in your organization?
Usually for non-production the patching requests come in at around 4-6 per week,
and the same patches are then applied to production in the outage or maintenance
window. Production has a weekly maintenance window (e.g. Sat 6 PM to 9 PM) where
all the changes (patches) are applied.
How often do you use cloning in your organization?
Cloning happens weekly or monthly depending on the organization's requirements,
generally when we need to perform a major task such as the Oracle Financials
annual closing.
Closing some of the idle sessions connected to the database will help you free
some TEMP space. Otherwise you can also use ALTER TABLESPACE ... PCTINCREASE 1
followed by ALTER TABLESPACE ... PCTINCREASE 0.
What is the use of setting GLOBAL_NAMES equal to true?
Setting GLOBAL_NAMES dictates how you may connect to a database. This variable is
either TRUE or FALSE. If it is set to TRUE, it enforces that a database link has
the same name as the global name of the remote database to which it links.
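A short sketch of the rule; the database and link names below are hypothetical:

```sql
-- With GLOBAL_NAMES = TRUE, the link name must match the remote
-- database's global name (e.g. prod.example.com)
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;

CREATE DATABASE LINK prod.example.com
  CONNECT TO hrms IDENTIFIED BY hrms_pwd USING 'PROD';
```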
What is the purpose of fact and dimension table? What type of index is
used with fact table?
Fact and dimension tables are involved in producing a star schema. A fact table
contains measurements, while a dimension table contains data that helps describe
the fact table. A bitmap index is used with fact tables.
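For illustration, a bitmap index on a low-cardinality fact-table foreign key column (table and column names are hypothetical):

```sql
-- Bitmap indexes suit columns with few distinct values queried with AND/OR
CREATE BITMAP INDEX sales_fact_prod_bix ON sales_fact (product_id);
```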
If you got complain application is running very slow from your application
user. Where do you start looking first?
Below are some very important steps to identify the root cause of slowness in the
application database.
-->If poorly written statements are found, run EXPLAIN PLAN on these
statements and see whether a new index or the use of a HINT brings the cost of the
SQL down.
You need to restore from backup and do not have any control files. What
will be your step to recover the database?
Create a text-based CREATE CONTROLFILE script, with the datafiles restored to the
same locations on disk, then issue the RECOVER command with the USING BACKUP
CONTROLFILE clause.
Shutdown abort; -- if the DB is still open
Startup nomount;
CREATE CONTROLFILE REUSE DATABASE <name>
LOGFILE '<online redo log groups>'
NORESETLOGS|RESETLOGS
MAXLOGFILES 10
MAXLOGMEMBERS <your value>
DATAFILE '<names of all data files>'
MAXDATAFILES 254
ARCHIVELOG;
SQL> alter database mount;
recover database [until cancel] [using backup controlfile];
alter database open [noresetlogs|resetlogs];
Use alter database open if you created the control file with NORESETLOGS and
have performed no recovery or a full recovery (without until cancel).
Use alter database open noresetlogs if you created the control file with
NORESETLOGS and performed a full recovery despite the use of the until cancel
option.
Use alter database open resetlogs if you created the control file with RESETLOGS
or when you performed a partial recovery.
In the below list, which SQL phrase is NOT supported by Oracle?
A. ON DELETE CASCADE
B. ON UPDATE CASCADE
C. CREATE SEQUENCE [SequenceName]
D. DROP SEQUENCE [SequenceName]
Answer: B
What is the effect on working with Report when flex/confine mode are ON?
When flex mode is ON, reports automatically resize the parent when the child is
resized.
When the confine mode is ON, the object cannot be moved outside its parent in
layout.
How will you enforce security using stored procedure?
Don't grant users access directly to the tables within the application. Instead,
grant the ability to execute a procedure that accesses the tables. When the
procedure executes, it runs with the privileges of the procedure's owner. Users
cannot access the tables except via the procedure.
What is RAC? What is the benefit of RAC over single instance database?
In Real Application Clusters environments, all nodes concurrently execute
transactions against the same database. Real Application Clusters coordinates each
node's access to the shared data to provide consistency and integrity.
Benefits:
-->Improve response time
-->Improve throughput
-->High availability
-->Transparency
Can you configure primary server and standby server on different OS?
NO. The standby database must be on the same version of the database and the same
version of the OS.
If you want users will change their passwords after every 60 days then
how you will enforce this?
Oracle password security is implemented through oracle PROFILES which are
assigned to users. PASSWORD_LIFE_TIME parameter limits the number of days the
same password can be used for authentication.
You have to first create database PROFILE and then assign each user to this profile
or if you have already having PROFILE then you need to just alter the above
parameter.
create profile Sadhan_users
limit
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX 0
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME UNLIMITED;
Then create user or already created user assigned to this profile.
SQL> Create user HRMS identified by oracle profile sadhan_users;
If you have already assigned profile then you can directly modify the profile
parameter:
SQL> Alter profile sadhan_users set PASSWORD_LIFE_TIME = 90;
What happens actually in case of instance Recovery?
When an Oracle instance fails, Oracle performs instance recovery when the
associated database is restarted. Instance recovery occurs in two steps:
Cache recovery: Changes made to a database are recorded in the database buffer
cache and in the redo log files simultaneously. When enough data accumulates in
the database buffer cache, it is written to the data files. If an Oracle instance
fails before this data is written to the data files, Oracle uses the online redo
log files to recover the lost data when the database is restarted. This process is
called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before
image of the modified data is stored in an undo segment, which is used to restore
the original values if the transaction is rolled back. At the time of an instance
failure, the database may have uncommitted transactions, and changes made by these
uncommitted transactions may already have been written to the data files. To
maintain read consistency, Oracle rolls back all uncommitted transactions when the
database is restarted, using the undo data stored in undo segments. This process
is called transaction recovery.
What is the main purpose of CHECKPOINT in oracle database?
A checkpoint is a database event which synchronizes the database blocks in memory
with the datafiles on disk. It has two main purposes: to establish data
consistency and to enable faster database recovery.
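For illustration, a checkpoint can also be requested on demand, which makes DBWn write the dirty buffers and updates the datafile headers:

```sql
-- Force a checkpoint manually (requires ALTER SYSTEM privilege)
ALTER SYSTEM CHECKPOINT;
```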
Can you change the Characterset of database?
No, you cannot change the character set of a database; you will need to re-create
the database with the appropriate character set.
What is Cascading standby database?
A CASCADING STANDBY is a standby database that receives its REDO information
from another standby database (not from primary database).
What the use of ANALYZE command?
To collect statistics about objects used by the optimizer and store them in the
data dictionary; to delete statistics about an object; to validate the structure
of an object; and to identify migrated and chained rows of a table or cluster.
How will you check active shared memory segment?
ipcs -a
How will you check paging swapping in Linux?
vmstat -s
prstat
swap -l
sar -p
How do you check number of CPU installed on Linux server?
psrinfo -v
When you move Oracle binary files from one ORACLE_HOME to another server, which
Oracle utility is used to make the new ORACLE_HOME usable?
Relink all
In which months oracle release CPU patches?
JAN, APR, JUL, OCT
Oracle version 9.2.0.4.0 what does each number refers to?
For 9.2.0.4.0 the digits refer to:
9 = major database release number, 2 = database maintenance release number,
0 = application server release number, 4 = component-specific (patch set) release
number, 0 = platform-specific release number.
How to execute Linux command in Background?
Use the "&" at the end of command or use nohup command
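A small runnable sketch of both forms; sleep stands in for a long-running command:

```shell
# "&" runs the command in the background; the shell returns immediately.
sleep 1 &
BG_PID=$!            # PID of the background job
wait "$BG_PID"       # block until it finishes
echo "background job $BG_PID finished"

# nohup keeps the job running after logout (output defaults to nohup.out;
# redirected away here to keep the directory clean)
nohup sleep 1 >/dev/null 2>&1 &
wait
```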
What Linux command will control the default permission when file are
created?
Umask
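A quick demonstration (the file name is arbitrary): new files start from mode 666, and umask masks bits off, so 022 yields 644:

```shell
umask 022              # new files get 666 & ~022 = 644
touch demo_file.txt
ls -l demo_file.txt    # first column shows -rw-r--r--
```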
Give the command to display space usage on the LINUX file system?
df -lk
What is the use of iostat/vmstat/netstat command in Linux?
Iostat reports on terminal, disk and tape I/O activity.
Vmstat reports on virtual memory statistics for processes, disk, tape and CPU
activity.
Netstat reports on the contents of network data structures.
What are the steps to install oracle on Linux system. List two kernel
parameter that effect oracle installation?
Initially set up the disks and kernel parameters, then create the oracle user and
DBA group, and finally run the installer to start the installation process. SHMMAX
and SHMMNI are two kernel parameters that must be set before installation.
__________ Parameter change will decrease Paging/Swapping?
Answer: Decrease SHARED_POOL_SIZE (a smaller SGA leaves more physical memory free,
reducing paging/swapping).
_______ Command is used to see the contents of SQL* Plus buffer
Answer: LIST
Transaction per rollback segment is derived from ________
Answer: Processes
LGWR process writes information into ___________
Answer: Redo log files.
A database over all structure is maintained in a file __________
Answer: Control files
YES. You can create and rebuild indexes online. This enables you to update base
tables at the same time you are building or rebuilding indexes on that table. You can
perform DML operations while the index building is taking place, but DDL operations
are not allowed. Parallel execution is not supported when creating or rebuilding an
index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
If an Oracle database crashed, how would you recover a transaction that
is not in the backup?
If the database is in archivelog mode we can recover that transaction; otherwise
we cannot recover a transaction that is not in the backup.
What is the benefit of running the DB in archivelog mode over no
archivelog mode?
When a database is in noarchivelog mode, whenever a log switch happens some redo
log information is lost (overwritten). To avoid this, the redo logs must be
archived, which is achieved by configuring the database in archivelog mode.
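The switch can be sketched as follows, assuming a local SYSDBA connection:

```sql
-- Sketch: enable ARCHIVELOG mode; the database must be cleanly mounted first
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST    -- confirm the new log mode
```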
What is SGA? Define structure of shared pool component of SGA?
The System Global Area is a group of shared memory areas dedicated to an Oracle
instance. All Oracle processes use the SGA to hold information. The SGA is used to
store incoming data and internal control information needed by the database. You
can control the SGA memory by setting parameters such as DB_CACHE_SIZE,
SHARED_POOL_SIZE and LOG_BUFFER.
The shared pool portion contains three major areas: the library cache (parsed SQL
statements, cursor information and execution plans), the dictionary cache (user
account and privilege information, segment and extent information), and buffers
for parallel execution messages and control structures.
You have more than 3 instances running on the Linux box? How can you
determine which shared memory and semaphores are associated with
which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help command
lists the commands available.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
How would you extract DDL of a table without using a GUI tool?
Select dbms_metadata.get_ddl('OBJECT','OBJECT_NAME') from dual;
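For example, in SQL*Plus (the object and owner names are illustrative):

```sql
-- Avoid truncation of long DDL output in SQL*Plus
SET LONG 20000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;
```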
If you are getting high Busy Buffer waits then how can you find the
reason behind it?
A buffer busy wait means that queries are waiting for blocks to be read into the
db cache. The reason can be that the block is busy in the cache and the session is
waiting for it. It could be an undo/data block or a segment header wait.
Run the first query below to find the P1, P2 and P3 of a session causing buffer
busy waits, then run the second query using those P1, P2 and P3 values.
SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait Where
event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type from dba_extents
Where file_id = &P1 and &P2 between block_id and block_id + blocks -1;
Can flashback work on database without UNDO and with rollback
segments?
No. Flashback query enables us to query our data as it existed in a previous
state; in other words, we can query our data from a point in time before any other
users made permanent changes to it. It relies on undo data to reconstruct that
past image.
Can we have same listener name for two databases?
No
79178265
SQL> begin
for i in 1 .. 1000
Loop
insert into s values ( i );
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_flashback.get_system_change_number - &SCN from dual;
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER-79178265
------------------------------------------------
                                               5
It only advanced by 5, even though we did over 1,000 DML statements: the SCN is
not assigned per SQL statement; it is incremented upon commit.
SQL>
SQL> select dbms_flashback.get_system_change_number scn from dual;
SCN
----------
  79178271
SQL> begin
for i in 1 .. 1000
loop
insert into s values ( i );
COMMIT;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_flashback.get_system_change_number scn from dual;
SCN
----------
   8806085
SQL> begin
for i in 1 .. 1000
loop
insert into S1 values ( 1, 'SHAAN' );
rollback;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8806085 from (
select dbms_flashback.get_system_change_number scn from dual
);
       SCN SCN-8806085
---------- -----------
   8806085        2014
SQL> Select dbms_flashback.get_system_change_number scn from dual;
SCN
----------
   8806085
SQL> begin
for i in 1 .. 10000
loop
insert into S1 values ( 1, 'SHAAN' );
rollback;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8806085 from (
select dbms_flashback.get_system_change_number scn from dual
);
       SCN SCN-8806085
---------- -----------
 155317184       20180
The SCN advances even further if you commit instead of rolling back:
SQL> Create table S2 ( eno number(4));
Table created.
SQL> select dbms_flashback.get_system_change_number scn from dual;
SCN
----------
   8828432
SQL> begin
for i in 1 .. 1000
loop
insert into s2 values ( i );
commit;
end loop;
end;
/
OPatch is a patch-application utility that must be available on your system and
requires installation of OUI. Thus, from the above discussion, coming to your
question, it is not accurate to call OPatch another patch.
Critical Patch Updates (CPUs) were the original quarterly patches released by
Oracle to deliver security fixes for various products. A CPU is a subset of a
Patch Set Update (PSU). CPUs are built on the base patchset version, whereas
PSUs are built on the previous PSU.
Patch Set Updates (PSUs) are also released quarterly, alongside CPU patches, and
are a superset of CPUs in the sense that a PSU includes the CPU fixes plus other
bug fixes released by Oracle. A PSU contains fixes for bugs that cause wrong
results, data corruption, etc., but it does not contain fixes for bugs that may
result in: dictionary changes, major algorithm changes, architectural changes,
or optimizer plan changes.
Regular patchset: Please do not confuse regular patchsets with patch set
updates (PSUs). A regular patchset is a superset of PSUs and contains major bug
fixes. The importance of a PSU diminishes once a regular patchset is released
for a given version. In contrast to a regular patchset, a PSU does not change
the version of Oracle binaries such as sqlplus, import/export, etc.
If both a CPU and a PSU are available for a given version, which one
will you prefer to apply?
From the above discussion it is clear that once you apply a PSU, the
recommended path is to apply only subsequent PSUs. There is no need to apply a
CPU on top of a PSU, since the PSU already contains the CPU (applying a CPU over
a PSU is treated as an attempt to roll back the PSU and in fact requires more
effort). So if you have not yet decided on or applied either type of patch, I
suggest you use PSU patches. For more details refer to: Oracle Products
[ID 1430923.1], ID 1446582.1
If a PSU is a superset of a CPU, why would someone choose to apply a CPU
rather than a PSU?
CPUs are smaller and more focused than PSUs and deal mostly with security
issues. This is theoretically a more conservative approach and can cause less
trouble than a PSU, since it changes less code. Thus, for anyone concerned only
with security fixes and not functionality fixes, a CPU may be the better
approach.
How can you find the installed PSU version?
The PSU is reflected in the 5th digit of the Oracle version number, which makes
it easy to track (e.g. 10.2.0.3.1). To determine the installed PSU version, use
the opatch utility:
opatch lsinventory -bugs_fixed | grep -i PSU
To find it from the database:
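One way to check from the database, assuming 10g or later, is to query the documented dictionary view dba_registry_history, which records patch and PSU actions. A minimal sketch:

```sql
-- List patch/PSU actions recorded in the database registry
SELECT action_time, action, version, id, comments
FROM   dba_registry_history
ORDER  BY action_time;
```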
Can you stop applying a patch after applying it to a few nodes? What are
the possible issues?
Yes, it is possible to stop applying a patch after applying it to a few nodes. There is a
prompt that allows you to stop applying the patch. But, Oracle recommends that
you do not do this because you cannot apply another patch until the process is
restarted and all the nodes are patched or the partially applied patch is rolled back.
How do you know the impact of a patch before applying it?
opatch <option> -report
You can use the above command to assess the impact of a patch before actually
applying it.
How can you run patching in scripted mode?
opatch <option> -silent
You can use the above command to run the patches in scripted mode.
Can you use OPATCH 10.2 to apply 10.1 patches?
No, OPatch 10.2 is not backward compatible. You can use OPatch 10.2 only to
apply 10.2 patches.
What will you do if you lose or corrupt your Central Inventory?
If the Central Inventory is lost or corrupted but your ORACLE_HOME is intact,
you just need to run the installer with the -attachHome flag; OUI automatically
recreates the Central Inventory entry for the attached home.
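As an illustrative sketch of the attach operation, the home path and home name below are examples, not values from the original text; substitute your own:

```shell
# Recreate the Central Inventory entry for an existing, intact ORACLE_HOME
# (path and name are example placeholders)
$ORACLE_HOME/oui/bin/runInstaller -silent -attachHome \
    ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 \
    ORACLE_HOME_NAME=OraDb10g_home1
```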
What will you do if you lose your Oracle home inventory (comps.xml)?
Oracle recommends backing up your ORACLE_HOME before applying any patchset. You
can then either restore the ORACLE_HOME from that backup or perform an
identical installation of the ORACLE_HOME.
When I apply a patchset or an interim patch in RAC, the patch is not
propagated to some of my nodes. What do I do in that case?
In a RAC environment, the inventory contains a list of nodes associated with an
Oracle home. It is important that during the application of a patchset or an interim
patch, the inventory is correctly populated with the list of nodes. If the inventory is
not correctly populated with values, the patch is propagated only to some of the
nodes in the cluster.
OUI allows you to update inventory.xml with the nodes available in the cluster
by using the -updateNodeList flag of Oracle Universal Installer.
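A sketch of that fix, assuming a three-node cluster (the node names below are example placeholders):

```shell
# Repopulate the node list in inventory.xml for this Oracle home
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList \
    ORACLE_HOME=$ORACLE_HOME \
    "CLUSTER_NODES={node1,node2,node3}"
```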