SP - Shared Pool; divided into Library Cache (LC) and Data Dictionary Cache (DDC) or Row
Cache.
The names and locations of associated datafiles and redo log files
Tablespace information
RMAN Catalog
To maintain read consistency, Oracle rolls back all uncommitted transactions when the associated
database is restarted.
Oracle uses the undo data stored in undo segments to accomplish this.
This process is called transaction recovery.
6. What is being written into the Redo Log Files?
The redo log records all changes made to datafiles.
In the Oracle database, redo logs comprise files in a proprietary format which log a history of all
changes made to the database. Each redo log file consists of redo records. A redo record, also called a
redo entry, holds a group of change-vectors, each of which describes or represents a change made to a
single block in the database.
Let's get into this topic a little bit deeper:
Log writer (LGWR) writes the redo log buffer contents into the redo log files. LGWR does this every three
seconds, when the redo log buffer is one-third full, and immediately before the Database Writer (DBWn)
writes its changed buffers to the datafiles. The redo log of a database consists of two or more redo
log files. The database requires a minimum of two files to guarantee that one is always available for
writing while the other is being archived (if the DB is in ARCHIVELOG mode). LGWR writes to
redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the
next available redo log file. When the last available redo log file is filled, LGWR returns to the first
redo log file and writes to it, starting the cycle again.
Filled redo log files are available to LGWR for reuse depending on whether archiving is enabled.
If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log file is available
after the changes recorded in it have been written to the datafiles.
If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file is available to
LGWR after the changes recorded in it have been written to the datafiles and the file has been
archived.
Oracle Database uses only one redo log file at a time to store redo records written from the redo log
buffer. The redo log file that LGWR is actively writing to is called the current redo log file. Redo log
files that are required for instance recovery are called active redo log files. Redo log files that are no
longer required for instance recovery are called inactive redo log files.
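The status of each redo log group (CURRENT, ACTIVE, or INACTIVE) can be checked in the V$LOG view,
for example:
SQL>select group#, sequence#, status, archived from v$log;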
If the database is in ARCHIVELOG mode it cannot reuse or overwrite an active online log file until
one of the archiver background processes (ARCn) has archived its contents.
If archiving is disabled (DB is in NOARCHIVELOG mode), then when the last redo log file is full,
LGWR continues by overwriting the first available inactive file.
A log switch is the point at which the database stops writing to one redo log file and begins writing to
another. Normally, a log switch occurs when the current redo log file is completely filled and writing
must continue to the next redo log file. However, you can configure log switches to occur at regular
intervals, regardless of whether the current redo log file is completely filled. You can also force log
switches manually.
Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs
and LGWR begins writing to it.
When the database archives redo log files, the archived log retains its log sequence number.
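A log switch can also be forced manually:
SQL>alter system switch logfile;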
7. How do you control number of Datafiles one can have in an Oracle database?
The db_files parameter is a "soft limit" parameter that controls the maximum number of physical OS
files that can map to an Oracle instance.
The maxdatafiles parameter, by contrast, is a "hard limit" parameter.
When issuing a "create database" command, the value specified for maxdatafiles is stored in the Oracle
control files; its default value is 32.
The maximum number of database files can be set with the init parameter db_files.
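For example, to view and raise this soft limit (db_files is a static parameter, so the new value takes
effect only after a restart; the value 500 is illustrative):
SQL>show parameter db_files
SQL>alter system set db_files=500 scope=spfile;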
8. How many Maximum Datafiles can there be in an Oracle Database?
Regardless of the setting of this parameter, the maximum per database is 65533 (may be less on some
operating systems).
Maximum number of datafiles per tablespace: OS dependent = usually 1022
Limited also by size of database blocks and by the DB_FILES initialization parameter for a particular
instance
Bigfile tablespaces can contain only one file, but that file can have up to 4G (about 4 billion) blocks.
9. What is a Tablespace?
A tablespace is a logical storage unit within an Oracle database.
A tablespace is not visible in the file system of the machine on which the database resides.
A tablespace, in turn, consists of one or more datafiles, which are physically located in the file
system of the server.
A datafile belongs to exactly one tablespace. Each table, index, and so on that is stored in an Oracle
database belongs to a tablespace.
The tablespace builds the bridge between the Oracle database and the filesystem in which the table's
or index's data is stored.
There are three types of tablespaces in Oracle:
Permanent tablespaces
Undo tablespaces
Temporary tablespaces
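As a sketch, one of each type can be created as follows (file paths and sizes are illustrative):
SQL>create tablespace app_data datafile '/u01/oradata/DBSID/app_data01.dbf' size 100m;
SQL>create undo tablespace undotbs2 datafile '/u01/oradata/DBSID/undotbs2_01.dbf' size 200m;
SQL>create temporary tablespace temp2 tempfile '/u01/oradata/DBSID/temp2_01.dbf' size 100m;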
Data file headers are also updated with the latest checkpoint SCN, even if the file had no changed
blocks. Checkpoints occur AFTER (not during) every redo log switch and also at intervals specified
by initialization parameters.
Set parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times
in the database alert log.
Checkpoints can be forced with the ALTER SYSTEM CHECKPOINT; command.
SCN can refer to:
System Change Number - A number, internal to Oracle that is incremented over time as change
vectors are generated, applied, and written to the Redo log.
System Commit Number - A number, internal to Oracle that is incremented with each database
COMMIT.
Note: System Commit Numbers and System Change Numbers share the same internal sequence
generator.
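On Oracle 10g and later, the current SCN can be queried directly:
SQL>select current_scn from v$database;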
13. Which Process reads data from Datafiles?
Server Process - There is no background process which reads data from datafile or database buffer.
Oracle creates server processes to handle requests from connected user processes. A server process
communicates with the user process and interacts with Oracle to carry out requests from the
associated user process. For example, if a user queries some data not already in the database buffers of
the SGA, then the associated server process reads the proper data blocks from the datafiles into the
SGA.
Oracle can be configured to vary the number of user processes for each server process.
In a dedicated server configuration, a server process handles requests for a single user process.
A shared server configuration lets many user processes share a small number of server processes,
minimizing the number of server processes and maximizing the use of available system resources.
14. Which Process writes data in Datafiles?
Database Writer background process DBWn (20 possible) writes dirty buffers from the buffer cache to
the data files.
In other words, this process writes modified blocks permanently to disk.
15. Can you make a Datafile auto extendible? If yes, how?
YES. A Datafile can be auto extendible.
Here's how to enable auto extend on a Datafile:
SQL>alter database datafile '/u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF'
autoextend on;
Note: For tablespaces defined with multiple data files (and partitioned table files), only the "last" data
file needs the autoextend option.
SQL>set heading off feedback off
SQL>spool runts.sql
SQL>select 'alter database datafile '''|| file_name ||''' autoextend on;' from dba_data_files;
SQL>spool off
SQL>@runts
16. What is a Shared Pool?
The shared pool portion of the SGA contains the library cache, the dictionary cache, buffers for
parallel execution messages, and control structures. The total size of the shared pool is determined by
the initialization parameter SHARED_POOL_SIZE.
The default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms.
Increasing the value of this parameter increases the amount of memory reserved for the shared pool.
Session memory for the shared server and the Oracle XA interface (used where transactions
interact with more than one database)
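The current shared pool setting can be checked and, since the parameter is dynamic, changed online
(the value 128m is illustrative):
SQL>show parameter shared_pool_size
SQL>alter system set shared_pool_size = 128m;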
Tables with long and long raw columns are prone to having chained rows.
Tables with more than 255 columns will have chained rows, as Oracle breaks wide tables up
into pieces.
Re-schedule long-running queries when the system has less DML load
The ORA-01555 "snapshot too old" error also relates to your setting for automatic undo retention.
Eliminates the need for recursive SQL operations against the data dictionary (UET$ and
FET$ tables)
Locally managed tablespaces eliminate the need to periodically coalesce free space
(automatically tracks adjacent free space)
In a busy database, the volume of the SELECT audit trail could easily exceed the size of the database
every day.
Plus, all data in the audit trail must also be audited to see who has selected data from the audit trail.
28. What does DBMS_FGA package do?
The DBMS_FGA package provides fine-grained security functions. DBMS_FGA is a PL/SQL
package used to define Fine Grain Auditing on objects.
DBMS_FGA Package Subprograms:
ADD_POLICY Procedure - Creates an audit policy using the supplied predicate as the audit
condition
AWR statistics: Oracle has an automatic method to collect AWR "snapshots" of data that is
used to create elapsed-time performance reports.
Optimizer statistics: Oracle has an automatic job to collect statistics to help the optimizer
make intelligent decisions about the best access method to fetch the desired rows.
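Optimizer statistics can also be gathered manually with the DBMS_STATS package; for example, for a
single table (the schema and table names are illustrative):
SQL>exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');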
You should create indexes on columns that are used frequently in WHERE clauses
On columns that have few duplicate values or mostly unique values in the table
B*Tree Cluster Indexes - They are used to index the cluster keys
Reverse Key Indexes - The bytes in the key are reversed. This is used to stop sequential keys
being on the same block like 999001, 999002, 999003 would be reversed to 100999, 200999,
300999 thus these would be located on different blocks.
Descending Indexes - They allow data to be sorted from big to small (descending) instead of
small to big (ascending).
Bitmap Indexes - With a bitmap index, a single index entry uses a bitmap to point to many rows
simultaneously. They are used with low-cardinality data that is mostly read-only. Should be avoided in
OLTP systems.
Function Based Indexes - These are B*Tree or bitmap indexes that store the computed result of a
function on a row (for example sorted results) - not the column data itself.
Application Domain Indexes - These are indexes you build and store yourself, either in Oracle or
outside of Oracle.
interMedia Text Indexes - This is a specialised index built into Oracle to allow for keyword searching
of large bodies of text.
35. What is B-Tree Index?
A B-Tree index is a data structure in the form of a tree, but it is a tree of database blocks, not rows.
Note: "B" is not for binary; it's balanced.
36. A table has only a few rows. Should you create indexes on this table?
Small tables do not require indexes; if a query is taking too long, then the table might have grown
from small to large.
You can create an index on any column; however, if the column is not used in any of these
situations, creating an index on the column does not increase performance and the index takes up
resources unnecessarily.
37. A column has many repeated values. Which type of index should you create on this
column, if you have to?
For example, assume there is a motor vehicle database with numerous low-cardinality columns such
as car_color, car_make, car_model, and car_year. Each column contains fewer than 100 distinct values,
and a b-tree index would be fairly useless in a database of 20 million vehicles. Such low-cardinality
columns are good candidates for a bitmap index.
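As a sketch, a bitmap index on one of these columns could be created like this (the table name
vehicles is illustrative):
SQL>create bitmap index vehicles_color_idx on vehicles (car_color);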
38. When should you rebuild indexes?
In 90% of cases - NEVER.
When the data in index is sparse (lots of holes in index, due to deletes or updates) and your query is
usually range based.
Also, the index blevel is one of the key indicators of performance of SQL queries doing index range scans.
39. Can you build indexes online?
YES. You can create and rebuild indexes online.
This enables you to update base tables at the same time you are building or rebuilding indexes on that
table.
You can perform DML operations while the index build is taking place, but DDL operations are not
allowed.
Parallel execution is not supported when creating or rebuilding an index online.
The following statements illustrate online index build operations:
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
40. Can you see Execution Plan of a statement?
YES. In many ways, for example from GUI based tools like TOAD, Oracle SQL Developer.
Configuring AUTOTRACE, a SQL*Plus facility
AUTOTRACE is a facility within SQL*Plus to show us the explain plan of the queries we've
executed, and the resources they used.
Once the PLAN_TABLE has been installed in the database, you can control the report by setting the
AUTOTRACE system variable.
SET AUTOTRACE ON EXPLAIN - The AUTOTRACE report shows only the optimizer
execution path.
SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL
statement execution statistics.
SET AUTOTRACE ON - The AUTOTRACE report includes both the optimizer execution
path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but suppresses the
printing of the user's query output, if any.
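A typical session might look like this (the query is illustrative):
SQL>set autotrace traceonly explain
SQL>select * from emp where deptno = 10;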
41. A table has been created with below settings. What will be size of 4th extent?
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)
"NEXT" Specify in bytes the size of the next extent to be allocated to the object.
Percent Increase allows your segment to grow at an increasing rate.
The first two extents will be of a size determined by the INITIAL and NEXT parameters (200k each).
The third extent will be (1 + PCTINCREASE/100) times the second extent (1.4 * 200k = 280k).
AND the fourth extent will be (1 + PCTINCREASE/100) times the third extent (1.4 * 280k = 392k!!!),
and so on...
42. What is DB Buffer Cache Advisor?
The Buffer Cache Advisor provides advice on how to size the Database Buffer Cache to obtain
optimal cache hit ratios.
Member of Performance Advisors --> Memory Advisor pack.
43. What is STATSPACK tool?
STATSPACK is a performance diagnosis tool provided by Oracle starting from Oracle 8i and above.
STATSPACK is a diagnosis tool for instance-wide performance problems; it also supports application
tuning activities by providing data which identifies high-load SQL statements.
Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than STATSPACK,
users that are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use
statspack.
Manage storage
Oracle Database provides a mechanism to make table structure modifications without significantly
affecting the availability of the table.
The mechanism is called online table redefinition.
When a table is redefined online, it is accessible to both queries and DML during much of the
redefinition process.
The table is locked in the exclusive mode only during a very small window that is independent of the
size of the table and complexity of the redefinition, and that is completely transparent to users.
Online table redefinition requires an amount of free space that is approximately equivalent to the
space used by the table being redefined. More space may be required if new columns are added.
You can perform online table redefinition with the Enterprise Manager Reorganize Objects wizard or
with the DBMS_REDEFINITION package.
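A minimal sketch of the package calls (schema, table, and interim table names are illustrative; the
interim table must be created first with the desired new structure):
SQL>exec DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');
SQL>exec DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
SQL>exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');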
46. Can you assign Priority to users?
YES. This is achievable with Oracle Resource Manager.
DBMS_RESOURCE_MANAGER is the package to administer the Database Resource Manager.
The DBMS_RESOURCE_MANAGER package maintains plans, consumer groups, and plan
directives. It also provides semantics so that you may group together changes to the plan schema.
47. You want users to change their passwords every 2 months. How do you enforce this?
Oracle password security is implemented via Oracle "profiles" which are assigned to users.
PASSWORD_LIFE_TIME - limits the number of days the same password can be used for
authentication
First, start by creating a security "profile" in the Oracle database and then alter the user to belong to
the profile group.
1) creating a profile:
create profile all_users
limit
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX 0
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME UNLIMITED;
2) Create user and assign user to the all_users profile
SQL>create user chuck identified by norris profile all_users;
3) To "alter profile" parameter, say; change to three months:
SQL>alter profile all_users set PASSWORD_LIFE_TIME = 90;
48. How do you delete duplicate rows in a table?
There are a few ways to achieve that:
DELETE FROM table_name WHERE rowid NOT IN (SELECT max(rowid) FROM table_name
GROUP BY id);
More ways:
The DELETE command is used to remove rows from a table. A WHERE clause can be used to only
remove some rows.
If no WHERE condition is specified, all rows will be removed. After performing a DELETE operation
you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it.
DELETE will cause all DELETE triggers on the table to fire.
TRUNCATE removes all rows from a table. A WHERE clause is not permitted. The operation cannot
be rolled back and no triggers will be fired.
As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE.
51. What is COMPRESS and CONSISTENT setting in EXPORT utility?
COMPRESS
Simply: COMPRESS=n - The allocated space in the database for the imported table will be exactly the
space required to hold the data.
COMPRESS=y - The INITIAL extent of the table would be as large as the sum of all the extents
allocated to the table in the original database.
In other words:
The default, COMPRESS=y, causes Export to flag table data for consolidation into one initial extent
upon import.
If extent sizes are large (for example, because of the PCTINCREASE parameter), the allocated space
will be larger than the space required to hold the data.
If you specify COMPRESS=n, Export uses the current storage parameters, including the values of
initial extent size and next extent size.
If you are using locally managed tablespaces you should always export with COMPRESS=n
CONSISTENT
Default: n. Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to
ensure that the data seen by Export is consistent to a single point in time and does not change during
the execution of the exp command.
You should specify CONSISTENT=y when you anticipate that other applications will be updating the
target data after an export has started.
If you use CONSISTENT=n, each table is usually exported in a single transaction. However, if a table
contains nested tables, the outer table and each inner table are exported as separate transactions.
If a table is partitioned, each partition is exported as a separate transaction.
Therefore, if nested tables and partitioned tables are being updated by other applications, the data that
is exported could be inconsistent. To minimize this possibility, export those tables at a time when
updates are not being done.
52. What is the difference between Direct Path and Conventional Path loading?
A conventional path load executes SQL INSERT statements to populate tables in an Oracle database.
A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks
and writing the data blocks directly to the database files.
53. Can you disable and enable Primary key?
You can use the ALTER TABLE statement to enable, disable, modify, or drop a constraint.
When the database is using a UNIQUE or PRIMARY KEY index to enforce a constraint, and
constraints associated with that index are dropped or disabled, the index is dropped, unless you
specify otherwise.
While an enabled foreign key references a PRIMARY or UNIQUE key, you cannot disable or drop the
PRIMARY or UNIQUE key constraint or the index.
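For example (the table name emp is illustrative):
SQL>alter table emp disable primary key;
SQL>alter table emp enable primary key;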
Rows are accessed via a logical rowid and not a physical rowid like in heap-organized tables
You cannot modify an IOT index property using ALTER INDEX (error ORA-25176), you
must use an ALTER TABLE instead.
Advantages of an IOT
As an IOT has the structure of an index and stores all the columns of the row, accesses via
primary key conditions are faster as they don't need to access the table to get additional
column values.
As an IOT has the structure of an index and is thus sorted in the order of the primary key,
accesses of a range of primary key values are also faster.
As the index and the table are in the same segment, less storage space is needed.
In addition, as rows are stored in the primary key order, you can further reduce space with key
compression.
As all indexes on an IOT use logical rowids, they will not become unusable if the table is
reorganized.
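A minimal sketch of creating an IOT (table and column names are illustrative):
SQL>create table iot_demo (id number primary key, val varchar2(50)) organization index;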
58. Can you import objects from Oracle ver. 7.3 to 9i?
Different versions of the import utility are upwards compatible. This means that one can take an
export file created from an old export version, and import it using a later version of the import utility.
Oracle also ships some previous catexpX.sql scripts that can be executed as user SYS enabling older
imp/exp versions to work (for backwards compatibility).
For example, one can run $ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to
allow the Oracle 7.3 exp/imp utilities to run against an Oracle 8 database.
59. How do you move tables from one tablespace to another tablespace?
There are several methods to do this:
1) export the table, drop the table, create the table definition in the new
tablespace, and then import the data (imp ignore=y).
2) Create a new table in the new tablespace with the CREATE TABLE statement AS SELECT all
from source table
command:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM source_table;
Then drop the original table and rename the temporary table as the original:
DROP TABLE real_table;
RENAME temp_name TO real_table;
Note: don't forget to rebuild any indexes.
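A third option, usually the simplest, is ALTER TABLE ... MOVE (table and tablespace names follow the
example above):
SQL>alter table real_table move tablespace new_tablespace;
Indexes on the moved table become UNUSABLE and must be rebuilt afterwards, for example (the index
name real_table_idx is illustrative):
SQL>alter index real_table_idx rebuild;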
1. What is an Oracle Instance?
2. What information is stored in Control File?
3. When you start an Oracle DB which file is accessed first?
4. What is the Job of SMON, PMON processes?
5. What is Instance Recovery?
6. What is written in Redo Log Files?
7. How do you control number of Datafiles one can have in an Oracle database?
8. How many Maximum Datafiles can there be in an Oracle Database?
9. What is a Tablespace?
10. What is the purpose of Redo Log files?
11. Which default Database roles are created when you create a Database?
12. What is a Checkpoint?
13. Which Process reads data from Datafiles?
14. Which Process writes data in Datafiles?
15. Can you make a Datafile auto extendible. If yes, how?
16. What is a Shared Pool?
17. What is kept in the Database Buffer Cache?
18. How many maximum Redo Logfiles one can have in a Database?
19. What is difference between PFile and SPFile?
20. What is PGA_AGGREGRATE_TARGET parameter?
21. Large Pool is used for what?
22. What is PCT Increase setting?
23. What is PCTFREE and PCTUSED Setting?
24. What is Row Migration and Row Chaining?
25. What is 01555 - Snapshot Too Old error and how do you avoid it?
26. What is a Locally Managed Tablespace?
27. Can you audit SELECT statements?
28. What does DBMS_FGA package do?
29. What is Cost Based Optimization?
situation?
28. You lost some datafiles, you don't have any full backup, and the database
was running in NOARCHIVELOG mode. What can you do now?
29. How do you recover from the loss of datafile if the DB is running in
ARCHIVELOG mode?
30. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full
database backup that is 1 week old and a partial backup of this datafile which is just 1
day old. From which backup should you restore this file?
31. You lost the controlfile. How do you recover from this?
32. The current logfile gets damaged. What can you do now?
33. What is a Complete Recovery?
34. What is Cancel Based, Time based and Change Based Recovery?
35. Some user has accidentally dropped one table and you realize this after two
days. Can you recover this table if the DB is running in ARCHIVELOG mode?
36. Do you have to restore Datafiles manually from backups if you are doing
recovery using RMAN?
37. A database has been running in ARCHIVELOG mode for the last month. A datafile
was added to the database last week. Many objects were created in this datafile.
After one week this datafile gets damaged before you can take any backup. Now
can you recover this datafile when you don't have any backups?
38. How do you recover from the loss of a controlfile if you have backup of
controlfile?
39. Only some blocks are damaged in a datafile. Can you just recover these
blocks if you are using RMAN?
40. Some datafiles were on a secondary disk, that disk has become
damaged, and it will take some days to get a new disk. How will you recover from
this situation?
41. Have you faced any emergency situation? Tell us how you resolved it.
42. At one time you lost the parameter file accidentally and you don't have any
backup. How will you recreate a new parameter file with the parameters set to the
previous values?
1. How do you see how many instances are running?
2. How do you automate starting and shutting down of databases in Unix?
3. You have written a script to take backups. How do you make it run
automatically every week?
4. What is OERR utility?
5. How do you see Virtual Memory Statistics in Linux?
6. How do you see how much hard disk space is free in Linux?
7. What is SAR?
8. What is SHMMAX?
9. How large should the swap partition be relative to the size of RAM?
10. What is DISM in Solaris?
11. How do you see how many memory segments are acquired by Oracle
Instances?
12. How do you see which segment belongs to which database instances?
13. What is VMSTAT?
14. How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?
15. How do you remove Memory segments?
16. What is the difference between Soft Link and Hard Link?
17. What is stored in oratab file?
18. How do you see how many processes are running in Unix?
19. How do you kill a process in Unix?
20. Can you change priority of a Process in Unix?