
PL/SQL Functions
Comparison Operators
Select Operators
Single Row Functions for Numbers, Chars and Dates
Conversion Functions
Miscellaneous Single Row Functions
Aggregate Functions
Analytical Functions
Object Reference Functions
Date Format Models
Date Prefixes and Suffixes
Number Format Models

Comparison Operators

Table 1-1. Comparison Operators

Operator What it does


= true if two expressions are equal
!= ^= ¬= <> Logically equivalent—true if two expressions are not equal

> True if left expression is greater than right expression


>= True if left expression is greater than or equal to right expression

< True if left expression is less than right expression


<= True if left expression is less than or equal to right expression

IN Is equal to any member of a set or subquery


NOT IN Does NOT equal any member of a set or subquery
ANY, SOME True if one or more of the values in the list of expressions or subquery satisfies the
condition
ALL True if all of the values in the list of expressions or subquery satisfy the condition

BETWEEN x True if greater than or equal to x and less than or equal to y (can be reversed in
AND y meaning with NOT)
EXISTS True if the subquery returns at least one row (can be reversed in meaning with NOT)

LIKE pattern True if expression or subquery matches pattern. '%' matches any sequence of
[ESCAPE 'c'] characters, '_' matches any single character. If ESCAPE is used, the character 'c' causes
the character following it to be taken literally (can be reversed in meaning with NOT).

IS NULL TRUE if the value is NULL (can be reversed in meaning with NOT)
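
For example, several of these operators can be combined in one WHERE clause. This is a sketch assuming the standard EMP demo table used later in this document:

SELECT ename, job, sal, comm
FROM emp
WHERE sal BETWEEN 1000 AND 3000
AND job IN ('CLERK', 'ANALYST')
AND ename LIKE 'S%'
AND comm IS NOT NULL;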

Select Operators

Also called SET operators


Table 1-2. Select Operators (Sets)

Operator What it does


UNION This combines the results of two queries and returns the set of
distinct rows returned by either query
UNION ALL This combines the results of two queries and returns all rows
returned by either query, including duplicates
INTERSECT This combines the results of two queries and returns the set of
distinct rows returned by both queries
MINUS This combines the results of two queries and returns the distinct
rows that were in the first query, but not in the second
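
As a sketch, assuming two queries with matching column lists (the tables current_emp and former_emp are hypothetical):

SELECT ename FROM current_emp
UNION
SELECT ename FROM former_emp;
-- UNION ALL would keep duplicates, INTERSECT would return only
-- names present in both tables, and MINUS would return names in
-- current_emp that are not in former_emp.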

Table 1-3. Other Select Operators

Operator What it does


(+) Denotes that the preceding column is an outer join
* Wildcard operator. Equals all columns in a select statement
PRIOR Denotes a parent-child relationship in a tree-structured query
ALL Include all duplicate rows in a query (the default)
DISTINCT Eliminate duplicates in a result set
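
The (+) and PRIOR operators in context, using the EMP and DEPT demo tables that appear later in this document:

-- Outer join: list departments even when they have no employees
SELECT d.dname, e.ename
FROM dept d, emp e
WHERE d.deptno = e.deptno (+);

-- Tree-structured query: walk the EMPNO/MGR hierarchy
SELECT LPAD(' ', 2 * (LEVEL - 1)) || ename AS org_chart
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;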

Single Row Functions

Number Functions

Table 1-6. Single Row Number Functions

Function What it does


ABS(n) Returns absolute value of n
ACOS(n) Returns arc cosine of n in radians
ASIN(n) Returns arc sine of n in radians
ATAN(n) Returns arc tangent of n, in radians
ATAN2(n,m) Returns the arc tangent of n and m, in radians
BITAND(n,m) Computes the bitwise logical AND of the bits of n and
m, where n and m are nonnegative integers. Returns an
integer.
CEIL(n) Ceiling function—returns the smallest integer >= n
COS(n) Returns the cosine of n where n is in radians
COSH(n) Returns the hyperbolic cosine of n where n is in radians
EXP(n) Returns e raised to the nth power (e^n)
FLOOR(n) Returns the largest integer <= n
LN(n) Returns the natural log of n
LOG(m,n) Returns the base m log of n
MOD(m,n) Returns the modulus of m, n—the remainder of m
divided by n. (Returns m when n=0)
POWER(m,n) Returns m raised to the nth power
ROUND (m[,n]) Rounds m to the nearest n places. Where n is omitted,
default is zero. n must be an integer
SIGN(n) For n < 0, returns –1, for n > 0, returns 1, for n = 0,
returns 0
SIN(n) Returns sine(n) where n is in radians
SINH(n) Returns the hyperbolic sine(n) where n is in radians
SQRT(n) Returns the square root of n
TAN(n) Returns the tangent(n) where n is in radians
TANH(n) Returns the hyperbolic tangent(n) where n is in radians
TRUNC (m[,n]) Truncate. Returns m truncated to n places. Where n is
omitted, it returns the integer value of m.
WIDTH_BUCKET Returns the “bucket” in which exp belongs, where min is
(exp,min,max,num) the minimum value, max is the maximum value, and
num is the number of divisions (buckets) to use
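
A few of these functions side by side, selected from DUAL (Oracle's one-row dummy table):

SELECT ROUND(125.678, 1), -- 125.7
       TRUNC(125.678, 1), -- 125.6
       MOD(11, 4),        -- 3
       CEIL(2.1),         -- 3
       FLOOR(2.9),        -- 2
       POWER(2, 10)       -- 1024
FROM dual;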

Character Functions

Table 1-7. Character Single Row Functions

Function What it does


CHR (n) Returns the character whose binary value is n.
Accepts USING NCHAR_CS clause
CONCAT (char1,char2) Combines two strings, char1 and char2
INITCAP(char) Returns char with the first character of each
word in char capitalized
LOWER(char) Returns char with all characters converted to
lowercase
LPAD(char1,n[,char2]) Returns char1 padded on the left to width n
with character sequence in char2. Default
padding is a single blank (space).
LTRIM(char[,set]) Returns char with initial characters in set
removed from the left. Default set is a blank
character (space).
NLS_INITCAP(char[,nlsparam]) Returns char with the first character of each
word in char capitalized. Accepts an NLS
parameter.
NLS_LOWER(char[,nlsparam]) Returns char with all characters converted to
lowercase. Accepts an NLS parameter.
NLSSORT(char[,nlsparam]) Returns language specific sort of char. Accepts
an NLS parameter.
NLS_UPPER(char[,nlsparam]) Returns char with all characters converted to
uppercase. Accepts an NLS parameter.
REPLACE(char[,searchstring[,replacestring]]) Returns char with searchstring replaced by
replacestring. Where replacestring is omitted or
null, all instances of searchstring are removed.
Where searchstring is omitted or null, char is
returned.
RPAD(char1,n[,char2]) Returns char1 padded on the right to width n
with character sequence in char2. Default
padding is a single blank (space).
RTRIM(char[,set]) Returns char with initial characters in set
removed from the right. Default set is a blank
character (space).
SOUNDEX(char) Returns the phonetic equivalent of char. Allows
for searches for words that sound alike but are
spelled differently.
SUBSTR(string,n[,m]) Returns the substring of string, starting at
position n, for a length of m (or to the end of
string if m is not present).

also: SUBSTRB (bytes), SUBSTRC (Unicode),
SUBSTR2 (UCS2 code points), SUBSTR4 (UCS4 code points)


TRANSLATE(char,from,to) Returns char, with all occurrences of characters
in the from string replaced with the
corresponding character in the to string. If to is
shorter than from, then from characters
without a corresponding to character will be
removed. Empty to returns NULL, not an empty
string.
TREAT(exp AS [[REF] [schema.]] type) Changes the declared type of exp to type
TRIM([[LEADING|TRAILING|BOTH] Returns source with leading and/or trailing
[trimchar]FROM]source) trimchars removed. Default trimchar is a blank
space, default action is to remove both leading
and trailing blank spaces.
UPPER (char) Returns char with all characters converted to
uppercase
ASCII (char) Returns the number value of the first character
of char
INSTR(str,substr[,pos[,occur]]) “In string” function. Returns the position of the
occurrence occur of substr in str, starting at
pos. Default for pos and occur is 1. If pos is
negative, search works backwards from the
end of str.

also: INSTRB (bytes), INSTRC (Unicode),
INSTR2 (UCS2 code points), INSTR4 (UCS4 code points)


LENGTH (char) Returns the length of char

also: LENGTHB (bytes), LENGTHC (Unicode),
LENGTH2 (UCS2 code points), LENGTH4 (UCS4 code points)
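
A few of the character functions side by side, selected from DUAL:

SELECT SUBSTR('Oracle SQL', 8),         -- 'SQL'
       INSTR('Oracle SQL', 'SQL'),      -- 8
       LPAD('42', 5, '0'),              -- '00042'
       RTRIM('abcxx', 'x'),             -- 'abc'
       REPLACE('2005-03-20', '-', '/'), -- '2005/03/20'
       INITCAP('hello world')           -- 'Hello World'
FROM dual;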

Date Functions

Table 1-8. Date Single Row Functions

Function What it does


ADD_MONTHS(d,n) Returns the date d plus n months. If d is the last day of the
month, or d+n would be past the end of the month,
returns the last day of the month.
CURRENT_DATE Returns the current Gregorian date as datatype DATE, in
the session specific time zone
CURRENT_TIMESTAMP Returns the current date and time as datatype TIMESTAMP
[(precision)] WITH TIME ZONE, in the session specific time zone.
Precision defaults to 6 places.
DBTIMEZONE Returns the time zone of the database
EXTRACT (datetime FROM expr) datetime can be YEAR, MONTH, DAY, HOUR, MINUTE,
SECOND, TIMEZONE_HOUR, TIMEZONE_MINUTE,
TIMEZONE_REGION, or TIMEZONE_ABBR, and expr can be
either an internal value or datetime value expression
FROM_TZ(timestamp, Returns timestamp converted to a TIMESTAMP WITH TIME
time_zone) ZONE value, using time_zone
LAST_DAY(date) Returns the date of the last day of the month containing
date
LOCALTIMESTAMP [(precision)] Returns the current date and time of the session in
datatype TIMESTAMP of precision
MONTHS_BETWEEN(date1, Returns the number of months between date1 and date2
date2)
NEW_TIME(date,zone1,zone2) Returns date converted from time zone zone1 to zone2.
NLS_DATE_FORMAT must be set to a 24-hour format.
NEXT_DAY(date,weekday) Returns the next weekday later than date where weekday
is the day of the week or its abbreviation
NUMTODSINTERVAL (n, char) Returns n converted to an INTERVAL DAY TO SECOND
literal. char can be 'DAY', 'HOUR', 'MINUTE', or 'SECOND',
or an expression that resolves to one of those
NUMTOYMINTERVAL (n, char) Returns n converted to an INTERVAL YEAR TO MONTH
literal. char can be 'MONTH' or 'YEAR' or an expression that
resolves to one of those
ROUND (date[,fmt]) Returns date rounded to the nearest unit specified by the
format model fmt. Defaults to the nearest day.
SESSIONTIMEZONE Returns the time zone of the current session, either as a
time zone offset or a time zone region name, depending on
the format used for the most recent ALTER SESSION
statement
SYS_EXTRACT_UTC (datetz) Extracts the UTC value of datetz where datetz is a datetime
with time zone displacement
SYSDATE Returns the current date and time
SYSTIMESTAMP Returns the system timestamp in TIMESTAMP WITH TIME
ZONE datatype
TO_DSINTERVAL(char [nlsparm]) Converts char to an INTERVAL DAY TO SECOND type
TO_TIMESTAMP Converts char to datatype of TIMESTAMP. fmt specifies the
(char[,fmt[nlsparm]]) format of char if other than the default for datatype
TIMESTAMP
TO_TIMESTAMP_TZ Converts char to datatype of TIMESTAMP WITH TIME
(char[,fmt[nlsparm]]) ZONE. fmt specifies the format of char if other than the
default for datatype TIMESTAMP WITH TIME ZONE.
TO_YMINTERVAL(char [nlsparm]) Converts char to an INTERVAL YEAR TO MONTH type
TRUNC (date[,fmt]) Returns date truncated to the time unit specified by fmt. If
fmt is omitted, date is truncated to the nearest day.
TZ_OFFSET(tzname | Returns the timezone offset
SESSIONTIMEZONE |
DBTIMEZONE | '+|-hh:mi')
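
A few of the date functions side by side (the weekday name passed to NEXT_DAY assumes an English-language session):

SELECT ADD_MONTHS(DATE '2005-01-31', 1),  -- 28-FEB-2005 (last-day rule)
       LAST_DAY(DATE '2005-02-10'),       -- 28-FEB-2005
       NEXT_DAY(DATE '2005-03-20', 'MONDAY'), -- 21-MAR-2005
       MONTHS_BETWEEN(DATE '2005-03-20',
                      DATE '2005-01-20')  -- 2
FROM dual;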

Conversion Functions

Table 1-9. Conversion Single Row Functions

Function What it does


ASCIISTR(string) Returns the ASCII string in the database language of
string which can be in any character set. Non-ASCII
characters are converted to their UTF-16 binary values.
BIN_TO_NUM(expr[,expr…]) Converts the binary bits of expr,expr,… to a number.
Example: BIN_TO_NUM(1,1,0,1) returns 13.
CAST(expr | [MULTISET] (subquery) Converts from one built in datatype or collection type to
AS type) another
CHARTOROWID(char) Converts char to type ROWID
COMPOSE('string') Converts string to its Unicode string equivalent in the
same character set
CONVERT(char, dest_set Returns char converted from source_set character set to
[,source_set]) dest_set character set. If source_set is not specified, the
database character set is assumed.
DECOMPOSE(string [CANONICAL | Returns a unicode string decomposed from its fully
COMPATIBILITY]) normalized form. If CANONICAL(the default) is used, the
result can be recomposed with COMPOSE.
HEXTORAW (char) Returns hexadecimal digits of char as RAW
NUMTODSINTERVAL (n, char) Converts number n to an INTERVAL DAY TO SECOND
literal. char can be 'DAY,' 'HOUR,' 'MINUTE,' or 'SECOND'
NUMTOYMINTERVAL (n, char) Converts number n to an INTERVAL YEAR TO MONTH
literal. char can be 'YEAR or 'MONTH'
RAWTOHEX(raw) Converts raw to its hexadecimal equivalent character
value
RAWTONHEX(raw) Converts raw to its hexadecimal equivalent NVARCHAR2
character value
ROWIDTOCHAR(rowid) Converts rowid to a VARCHAR2 18 characters long
ROWIDTONCHAR(rowid) Converts rowid to a NVARCHAR2 18 characters long
TO_CHAR (nchar | clob | nclob) Converts an NCHAR, NVARCHAR2, CLOB or NCLOB value
to the underlying database character set
TO_CHAR (date [,fmt[nlsparm]]) Converts date to VARCHAR2, using format fmt and any
nlsparm
TO_CHAR (num [,fmt[nlsparm]]) Converts num to VARCHAR2, using format fmt and any
nlsparm
TO_CLOB (lob_col|char) Converts lob_col or char to CLOB value
TO_DATE char [,fmt[nlsparm]] Converts char to a date, using the format fmt and any
nlsparm. If fmt is not specified, then the default date
format is used.
TO_DSINTERVAL (char [nlsparm]) Converts char to an INTERVAL DAY TO SECOND literal
TO_LOB(long_col) Converts the LONG or LONG RAW value of long_col to
LOB values
TO_MULTI_BYTE(char) Converts single byte char to multibyte characters
TO_NCHAR(char [,fmt[nlsparm]]) Converts a string from the database character set to the
national character set
TO_NCHAR (datetime | Converts a date, time, or interval value from the
interval[,fmt[nlsparm]]) database character set to the national character set
TO_NCHAR (n [,fmt[nlsparm]]) Converts a number to a string in the NVARCHAR2
character set
TO_NCLOB (lob_column | char) Converts char or lob_column to NCLOB data, using the
national character set
TO_NUMBER(char[,fmt[nlsparm]]) Converts char to a number, using fmt as the format
specifier
TO_SINGLE_BYTE(char) Returns char with any multibyte characters converted to
the corresponding single byte characters
TO_YMINTERVAL(char [nlsparm]) Converts char to an INTERVAL YEAR TO MONTH literal
TRANSLATE (text USING CHAR_CS | Returns text translated into the database character set
NCHAR_CS) (USING CHAR_CS) or the national character set (USING
NCHAR_CS)
UNISTR(string) Returns string in Unicode using the database Unicode
character set
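
TO_CHAR, TO_NUMBER, and TO_DATE with explicit format models (the month abbreviation in the date output depends on the session's NLS date language):

SELECT TO_CHAR(DATE '2005-03-20', 'DD-MON-YYYY'), -- '20-MAR-2005'
       TO_CHAR(1234.5, '$9,999.99'),              -- '$1,234.50'
       TO_NUMBER('1,234.50', '9,999.99'),         -- 1234.5
       TO_DATE('20-MAR-2005', 'DD-MON-YYYY')
FROM dual;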

Miscellaneous Single Row Functions

Table 1-10. Miscellaneous Single Row Functions

Function What it does


BFILENAME('dir','fname') Returns a locator for an LOB binary file on the filesystem.
dir is the database object that is an alias for the full
pathname of the file directory, fname is the actual file
name.
COALESCE(expr[,expr,...]) Returns the first nonnull expression in a list of
expressions
DECODE(expr,search ,result Searches expr for search, returning the specific result for
[ ,search,result...][,default]) each search. Returns default if search is not found.
DEPTH(correlation_int) Returns the number of levels in the path specified by an
UNDER_PATH condition
DUMP(expr[,return_fmt Returns a VARCHAR2 value with the datatype, length,
[,start[,length]]]) and internal representation of expr, using the format of
return_fmt. Returns entire internal representation unless
start and optionally length are specified.
EMPTY_BLOB() Returns a locator for a BLOB, allowing you to initialize the
BLOB
EMPTY_CLOB() Returns a locator for a CLOB, allowing you to initialize the
CLOB
EXISTSNODE(XML_Instance, path Walks the XML tree and returns success if a node is found
[expr]) that matches the specified path
EXTRACT (XML_Instance, path Walks the XML tree and, if nodes are found which match
[expr]) the specified path, returns those nodes
EXTRACTVALUE(XML_Instance, path Walks the XML tree and, if nodes are found that match
[expr]) the specified path, returns the scalar value of those
nodes
GREATEST(expr[,expr,...]) Returns the expression in the list with greatest value. All
data types are implicitly converted to the data type of the
first expression. Character comparisons use the database
character set.
LEAST(expr[,expr,...]) Returns the expression in the list with least value. All
data types are implicitly converted to the data type of the
first expression. Character comparisons use the database
character set.
NLS_CHARSET_DECL_LEN Returns the declaration width of the NCHAR column of
(bytes,set_id) width bytes and a character set ID of set_id
NLS_CHARSET_ID(text) Returns the number of a character set ID with a character
set name of text
NLS_CHARSET_NAME(num) Returns the character set name of the character set with
ID num
NULLIF(expr1,expr2) Returns null if expr1and expr2 are equal, else returns
expr1
NVL(expr1,expr2) Returns expr2 if expr1 is NULL, else returns expr1
NVL2(expr1,expr2,expr3) Returns expr2 if expr1 is NOT NULL, else returns expr3
PATH (correlation_int) Returns the relative path to the resource specified in an
UNDER_PATH or EQUALS_PATH condition
SYS_CONNECT_BY_PATH Returns the path of a column value from root to node in
(column,char) an hierarchical query. Column values are separated by
char.
SYS_CONTEXT('namespace', Returns a VARCHAR2 with the value of param of
'param'[,len]) namespace. Return is 256 bytes unless overridden by len.
SYS_DBURIGEN(col|attr Generates a URL that can be used to retrieve an XML
[rowid][,col|attr [rowid],...] document from one or more columns col or attributes
[,'text()']) attr with or without a rowid
SYS_EXTRACT_UTC(time) Returns the UTC from time where time is a datetime with
time zone displacement
SYS_GUID() Generates and then returns a Globally Unique IDentifier
(GUID) of 16 RAW bytes
SYS_TYPEID(obj_val) Returns the typeid of an object type operand
SYS_XMLAGG(expr [fmt]) Creates a single well-formed XML document from multiple
documents
SYS_XMLGEN(expr [fmt]) Creates a well-formed XML document from a database
row/column expression
UID Returns the UID of the current session user
UPDATEXML(XML_instance, path, Updates an XML document by searching for the node
expr) specified in the path, then replaces either the node or the
scalar value of the node, depending on argument types
USER Returns the username of the current session user
USERENV(param) Returns a variety of information about the current
session. While deprecated in favor of SYS_CONTEXT, this
is retained for backward compatibility.
VSIZE(expr) Returns the number of bytes used by the value
represented by expr
XMLAGG(XML_instance [ORDER BY Returns a well-formed XML document by aggregating a
sortlist]) series of XML fragments. The returned document is a
simple aggregate and no formatting is supported.
XMLCOLATTVAL Creates an XML fragment for one or more columns of a
single row. The format of the fragment is fixed as
<column name=”column name”>column
value</column>.
XMLCONCAT(XML_instance [, Returns an XML fragment created by concatenating a
XML_instance,...]) series of XML fragments or elements
XMLFOREST Creates an XML fragment for one or more columns of a
single row. The format of the fragment is fixed as
<column name>column value</column name>.
XMLSEQUENCE Used to “unroll” a stored XMLType into multiple rows for
further processing as individual elements
XMLTRANSFORM Applies an XSL style sheet to an XML document and
returns the resulting new XML document
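
NVL, NVL2, COALESCE, and DECODE contrasted on the EMP demo table:

SELECT ename,
       NVL(comm, 0)                              AS comm_or_zero,
       NVL2(comm, 'COMMISSIONED', 'SALARY ONLY') AS pay_type,
       COALESCE(comm, sal, 0)                    AS first_nonnull,
       DECODE(deptno, 10, 'ACCOUNTING',
                      20, 'RESEARCH',
                      'OTHER')                   AS dept_name
FROM emp;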

Aggregate Functions

All of the aggregate functions described below can have an analytical clause appended to them using the
OVER (analytical_clause) syntax. For space considerations, we've omitted this from the Function column.

Table 1-11. Aggregate Functions

Function What it does


AVG([DISTINCT|ALL] expr) Computes the average of the rows returned by
expr. If the DISTINCT keyword is used,
duplicate rows will be excluded from the
calculation.
CORR( expr1 , expr2 ) Calculates the coefficient of correlation
between expr1 and expr2
COUNT(* | [DISTINCT|ALL] expr) Returns the number of [DISTINCT] rows in the
expr that are not null, or if * is specified, the
total number of rows, including duplicates and
nulls
COVAR_POP( expr1, expr2 ) Given a set of pairs, expr1 and expr2, where
nulls are excluded, returns the population
covariance
COVAR_SAMP( expr1, expr2 ) Given a set of pairs, expr1 and expr2, where
nulls are excluded, returns the sample
covariance
CUME_DIST(expr[,expr...]) WITHIN GROUP Given a list of values, finds and returns the
(ORDER BY expr [DESC|ASC] [NULLS [FIRST| cumulative distribution of a single value within
LAST]) that list
DENSE_RANK(expr[,expr...]) WITHIN GROUP Given an ordered group of rows, finds and
(ORDER BY expr) returns the rank of a single value within that
group
FIRST (ORDER BY expr [DESC|ASC] [NULLS Returns the first row or rows from a set based
FIRST|LAST]) on the specified sort order. If multiple rows tie
as “first” then all tied rows will be returned.
Used in an aggregate function.
GROUP_ID() Used in GROUP BY specification to distinguish
duplicate groups
GROUPING(expr) Used to distinguish superaggregate rows from
regular grouped rows when ROLLUP and CUBE
are used
GROUPING_ID(expr[,expr...]) Returns the number of the GROUPING bit
vector for a row
LAST (ORDER BY expr [DESC|ASC] [NULLS Returns the last row or rows from a set based
FIRST|LAST]) on the specified sort order. If multiple rows tie
as “last” then all tied rows will be returned.
Used in an aggregate function.
MAX([DISTINCT|ALL] expr) Returns the maximum value of expr. If the
DISTINCT keyword is used, duplicate rows will
be excluded from the calculation.
MIN([DISTINCT|ALL] expr) Returns the minimum value of expr. If the
DISTINCT keyword is used, duplicate rows will
be excluded from the calculation.
PERCENTILE_CONT(expr) WITHIN GROUP Given a list of values and a specified percentile
(ORDER BY expr [DESC|ASC]) ranking, returns the interpolated value of that
percentile by assuming a continuous
distribution of data in the list
PERCENTILE_DISC(expr) WITHIN GROUP Given a list of values and a specified percentile
(ORDER BY expr [DESC|ASC]) ranking, returns the smallest value that meets
or exceeds that percentile rank by assuming a
discrete distribution of data in the list
PERCENT_RANK(expr) WITHIN GROUP (ORDER Given a list of values, calculates the
BY expr [DESC|ASC][NULLS FIRST|LAST]) hypothetical rank of a single value within that
list
RANK(expr) WITHIN GROUP (ORDER BY expr Returns the rank (ordering) of expr in the
[DESC|ASC][NULLS FIRST|LAST]) group of values returned by the order by
expression
STDDEV([DISTINCT|ALL] expr) Returns the standard deviation of expr
STDDEV_POP([DISTINCT|ALL] expr) Returns the square root of the population
variance from computing the standard
deviation of expr
STDDEV_SAMP([DISTINCT|ALL] expr) Returns the square root of the cumulative
sample standard deviation of expr
SUM([DISTINCT|ALL] expr) Returns the sum of expr. Distinct eliminates
duplicates from the set of values being
summed.
VAR_POP(expr) Returns the population variance of expr. Nulls
are removed from the calculation.
VAR_SAMP(expr) Returns the sample variance of expr. Nulls are
removed from the calculation.
VARIANCE([DISTINCT|ALL] expr) The variance of expr, with duplicates removed
if DISTINCT is specified
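
Several aggregates in a single grouped query against the EMP demo table:

SELECT deptno,
       COUNT(*) AS headcount,
       AVG(sal) AS avg_sal,
       MIN(sal) AS min_sal,
       MAX(sal) AS max_sal,
       SUM(sal) AS total_sal
FROM emp
GROUP BY deptno;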

Table 1-12. Regression Functions

Function What it does


REGR_SLOPE(expr,expr2) Returns the slope of a least squares regression line of the set of
number pairs defined by (expr,expr2)
REGR_INTERCEPT(expr,expr2) Returns the Y intercept of a least squares regression line of the set
of number pairs defined by (expr,expr2)
REGR_COUNT(expr,expr2) Returns the number of NOT NULL pairs used to fit the least squares
regression line of the set of number pairs defined by (expr,expr2)
REGR_R2(expr,expr2) Returns the R² value (coefficient of determination) of a least
squares regression line of the set of number pairs defined by
(expr,expr2)
REGR_AVGX(expr,expr2) Returns the average value of expr2 of a least squares regression
line of the set of number pairs defined by (expr,expr2) after
removing nulls from the calculation
REGR_AVGY(expr,expr2) Returns the average value of expr of a least squares regression line
of the set of number pairs defined by (expr,expr2) after removing
nulls from the calculation
REGR_SXX(expr,expr2) Returns the value of calculating REGR_COUNT(expr, expr2) *
VAR_POP(expr2) with nulls removed from the calculation
REGR_SYY(expr,expr2) Returns the value of calculating REGR_COUNT(expr, expr2) *
VAR_POP(expr) with nulls removed from the calculation
REGR_SXY(expr,expr2) Returns the value of calculating REGR_COUNT(expr, expr2) *
COVAR_POP(expr,expr2) with nulls removed from the calculation
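
A regression sketch fitting salary against months of service, where the months-of-service expression is derived from the HIREDATE column of the EMP demo table:

SELECT REGR_SLOPE(sal, MONTHS_BETWEEN(SYSDATE, hiredate))     AS slope,
       REGR_INTERCEPT(sal, MONTHS_BETWEEN(SYSDATE, hiredate)) AS intercept,
       REGR_R2(sal, MONTHS_BETWEEN(SYSDATE, hiredate))        AS r_squared
FROM emp;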

Analytical Functions

All of the aggregate functions described above can also have analytic functionality, using the OVER
(analytical_clause) syntax. For space considerations, we've declined to list them twice. Note that you
cannot nest analytic functions.

Table 1-13. Analytical Functions

Function What it does


FIRST_VALUE(expr) OVER (analytical_clause) Returns the first in the ordered set of expr
LAG(expr[,offset][,default]) OVER Provides access at a point offset prior to the
(analytical_clause) cursor in a series of rows returned by expr
LAST_VALUE(expr) OVER (analytical_clause) Returns the last in the ordered set of expr
LEAD(expr[,offset][,default]) OVER Provides access at a point offset beyond the
(analytical_clause) cursor in a series of rows returned by expr
NTILE(expr) OVER (analytical_clause) Divides the ordered dataset into expr
number of buckets
RATIO_TO_REPORT(expr) OVER Returns the ratio of expr to the sum
(analytical_clause) returned by analytical_clause
ROW_NUMBER() OVER Assigns a unique number to each row
([partition_clause]order_by_clause)
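
The same OVER (analytical_clause) syntax applies to the aggregate functions above; here it is with RANK, ROW_NUMBER, and LAG on the EMP demo table:

SELECT ename, deptno, sal,
       RANK()       OVER (PARTITION BY deptno ORDER BY sal DESC) AS dept_rank,
       ROW_NUMBER() OVER (ORDER BY sal DESC)                     AS overall_pos,
       LAG(sal)     OVER (ORDER BY sal)                          AS next_lower_sal
FROM emp;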

Object Reference Functions

Table 1-14. Object Reference Functions

Function What it does


DEREF(expr) Returns the object referenced by expr, where expr is a
REF. Without DEREF, the object ID of the reference would
be returned.
MAKE_REF(table|view,key Returns a REF to a row of an object view or table
[,key...])
REF(correlation_var) Returns the REF value of correlation_var
REFTOHEX(expr) Converts expr to its hexadecimal equivalent where expr
is a REF
VALUE(correlation_var) Returns the value associated with the correlation_var

Date Format Models

Table 1-15. Date Format Models

Element Value Returned


- / , . ; “text” Quoted text and punctuation are reproduced in the result
AD A.D. Indicates date that is AD. Periods optional
AM A.M. PM P.M. Before or after noon. Periods optional
BC B.C. Indicates date that is BC. Periods optional
CC SCC Century (SCC precedes BC century with -)
D The day of week (1–7)
DAY The name of the day of the week (Monday, Tuesday, etc.). Padded to
9 characters.
DD Day of month (1–31)
DDD The number of the day of year (1–366)
DY The name of the day of the week, abbreviated
E Abbreviated era name (for Japanese Imperial, ROC Official, and Thai
Buddha calendars)
EE Full era name
FF [1–9] Fractional seconds. 1–9 specifies the number of digits
HH Hour of day (12-hour clock)
HH12 Hour of day (12-hour clock)
HH24 Hour of day (24-hour clock)
IW Week of the year per the ISO standard
IYY IY I Last 3, 2, or 1 digit(s) of ISO year
IYYY 4-digit ISO year
J Julian day (number of days since January 1, 4712 BC)
MI Minute (0–59)
MM Month (01–12)
MON JAN, FEB, MAR, etc.
MONTH Full month name, padded to 9 characters
Q Quarter of year where JAN–MAR = 1
RM Month in Roman numerals (I–XII; JAN = I)
RR Last two digits of the year; the century is chosen so the result is the
year nearest the current year (two-digit years 00–49 may resolve to
the next century, 50–99 to the previous)
RRRR Round year. Accepts 4 or 2 digit input, 2 digit returns as RR.
SS Seconds (0–59)
SSSSS Seconds past midnight (0–86399)
TZD Abbreviated time zone string with Daylight Saving Time information
TZH Time zone hour
TZM Time zone minute
WW The week of the year (1–53)
W The week of the month
X Local radix character
Y, YYY Year, with comma as shown
YEAR SYEAR Year, fully spelled out. For SYEAR, BC dates are prefixed with “-”
Y YY YYY Final one, two, or three digits of the year

Date Prefixes and Suffixes


The following prefixes can be added to date formats:

FM The fill mode toggle. Suppresses blank padding of MONTH or DAY


FX Specifies that the format of TO_DATE functions must be an exact match

The following suffixes may be added to date formats:

TH Converts to an ordinal number ("5TH")


SP Spells out the number ("FIVE")
SPTH or THSP Spells out the ordinal number ("FIFTH")
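
The prefixes and suffixes combined in one format model (day and month names depend on NLS_DATE_LANGUAGE):

SELECT TO_CHAR(DATE '2005-03-20', 'FMDay, DDTH Month YYYY')
FROM dual;
-- In an English-language session this returns 'Sunday, 20TH March 2005'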

Number Format Models

Table 1-16. Number Format Models

Element Example Value Returned


, 9,999 Returns a comma at the position specified
. 99.99 Returns a period (decimal point) at the position specified
$ $9999 Leading dollar sign
0 0999 Returns value with leading zeros
0 9990 Returns value with trailing zeros
9 9999 Returns value with the specified number of digits. Leading space
if positive, – if negative. Leading zeros are blank, except when
integer portion is zero, then a single leading zero is returned.
B B9999 As in 9, above, but returns a blank in all cases for leading zeros
C C999 Returns the ISO currency symbol
D 99D99 Returns the NLS decimal character in the specified position
EEEE 9.9EEEE Returns value in scientific notation
FM FM90.9 Returns a value without leading or trailing blanks
G 9G999 Returns the value with the NLS group separator in the specified
position
L L999 Returns the value with the NLS Local Currency Symbol in the
specified position. Negative values have a trailing minus sign
(–), positive values with a trailing blank.
PR 9999PR Returns negative values in <angle brackets>, positives have
leading and trailing blanks
RN rn RN rn Returns the value as Roman numerals, in the case specified (uppercase for RN, lowercase for rn)
S S9999 Returns the value with a + or – sign denoting positive or
9999S negative value in the position shown (can only be first or last
position).
TM TM “Text minimum.” Returns the smallest number of characters
possible and is case-insensitive. Default is TM9, which uses fixed
notation up to 64 characters, then scientific notation.
U U9999 Returns the “Euro” (or other) NLS dual currency symbol in the
specified position
V 999V99 Returns the value multiplied by 10 raised to the power of the
number of 9s specified after the V
X XXXX Returns the Hexadecimal value. Precede with a 0 to have leading
zeros, or FM to remove the leading blank.
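
Number format models in action, selected from DUAL:

SELECT TO_CHAR(1234, '09999'),        -- '01234' (a leading space holds the sign position)
       TO_CHAR(-1234.5, '9,999.99PR'), -- '<1,234.50>'
       TO_CHAR(1998, 'RN'),            -- 'MCMXCVIII' (right-justified)
       TO_CHAR(255, 'XX')              -- 'FF'
FROM dual;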

Advanced Cursors

Introduction to Oracle SQL & PL/SQL (updated 03/20/2005)

• Cursor FOR Loops


• Parameterized Cursors (passing parameters to cursors)

• Ref Cursor
• BULK COLLECT INTO

Cursor FOR Loops

The cursor FOR loop provides an elegant, simple syntax to iterate over a result set. To underscore the
advantages of cursor FOR loops, consider the following PL/SQL block, which uses a basic loop.

SET SERVEROUTPUT ON
DECLARE
-- EMP_CURSOR will retrieve all columns and all rows from the EMP table
CURSOR emp_cursor IS
SELECT *
FROM emp;
emp_record emp_cursor%ROWTYPE;
BEGIN
OPEN emp_cursor;
LOOP
--Advance the pointer in the result set, assign row values to EMP_RECORD
FETCH emp_cursor INTO emp_record;
--Test to see if no more results
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' [' ||emp_record.empno||']');
END LOOP;
CLOSE emp_cursor;
END;
/

• Now examine the same query using a cursor FOR loop:

DECLARE
-- EMP_CURSOR will retrieve all columns and all rows from the EMP table
CURSOR emp_cursor IS
SELECT *
FROM emp;
BEGIN
FOR emp_record IN emp_cursor LOOP
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' ['||emp_record.empno||']');
END LOOP;
END;
/
• You can use a cursor for loop without a declared cursor by including a query in the FOR statement.
This can enable very compact code.

BEGIN
FOR emp_record IN (SELECT * FROM emp) LOOP
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' ['||emp_record.empno||']');
END LOOP;
END;
/
• While you can use an EXIT statement within a cursor FOR loop, you should not use a cursor FOR loop
if you may need to exit the loop prematurely. Use a basic or WHILE loop instead.

Parameterized Cursors (passing parameters to cursors)

• Cursors can use variables to adjust which rows they select when opened. Instead of hard-coding a
value into the WHERE clause of a query, you can use a variable as a placeholder for a literal value.
The placeholder will be substituted with the value of the variable when the cursor is opened.
This makes a query more flexible.

DECLARE
v_deptno NUMBER;
v_job VARCHAR2(15);
v_sum_sal NUMBER;

/* Since v_deptno and v_job are declared above, they are in scope,
* and can be referenced in the cursor body. They will be used as
* placeholders until the cursor is opened, at which

* point the values of the variables will be substituted (bound)


* into the query as literals.
*/
CURSOR emp_stats_cursor IS
SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno=v_deptno
AND job=v_job;
BEGIN
v_deptno:=10;
v_job:='MANAGER';

OPEN emp_stats_cursor;
/* When the cursor is opened, the values of the PL/SQL
* variables are bound into the query.
* In this example, the cursor would return the
* result set using the following query:
SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno=10 --current value of v_deptno is 10
AND job='MANAGER'; --current value of v_job is 'MANAGER'
*/

FETCH emp_stats_cursor INTO v_sum_sal;


CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE(v_deptno||' : '||v_job||' : '||v_sum_sal);

v_deptno:=30;
v_job:='SALESMAN';

OPEN emp_stats_cursor;
/* In this example, the cursor would
* return the result set using the following query:

SELECT SUM(sal) sum_sal


FROM emp
WHERE deptno=30 --current value of v_deptno is 30
AND job='SALESMAN'; --current value of v_job is 'SALESMAN'
*/

FETCH emp_stats_cursor INTO v_sum_sal;


CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE(v_deptno||' : '||v_job||' : '||v_sum_sal);
END;
/
• This method works, but there is a better way. You can declare a cursor using parameters; then
whenever you open the cursor, you pass in appropriate parameters. This technique is just as
flexible, but is easier to maintain and debug. The above example adapted to use a parameterized
cursor:

DECLARE

v_sum_sal NUMBER;

/* The parameters are declared in the cursor declaration.


* Parameters have a datatype, but NO SIZE; that is, you

* can declare a parameter of datatype VARCHAR2, but never


* VARCHAR2(20).
* As above, the parameters will be placeholders (that is,
* formal parameters) until the cursor is opened, at which
* point the actual parameter values will be substituted
* (bound) into the query as literals.
* The parameters are in scope (that is, they can be referenced)
* only inside the cursor body.
*/

CURSOR emp_stats_cursor(cp_deptno NUMBER, cp_job VARCHAR2) IS


SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno=cp_deptno
AND job=cp_job;

BEGIN

OPEN emp_stats_cursor(10,'MANAGER');
/* When the cursor is opened, the values of the parameters
* are bound into the query. In other words, the actual parameters
* (values) will replace the formal parameters (placeholders).
* In this example, the cursor would return the result set using
* the following query:

SELECT SUM(sal) sum_sal


FROM emp
WHERE deptno=10
AND job='MANAGER';
*/

FETCH emp_stats_cursor INTO v_sum_sal;


CLOSE emp_stats_cursor;

DBMS_OUTPUT.PUT_LINE('10 : MANAGER : '||v_sum_sal);

OPEN emp_stats_cursor(30,'SALESMAN');
/* In this example, the cursor would return the result set
* using the following query:

SELECT SUM(sal) sum_sal


FROM emp
WHERE deptno=30
AND job='SALESMAN';
*/

FETCH emp_stats_cursor INTO v_sum_sal;


CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE('30 : SALESMAN : '||v_sum_sal);
END;
/
• Parameterized cursors are often easier to debug in larger PL/SQL blocks. This is because the
declaration of the cursor body is often far from where the cursor is opened, but processing of the
cursor's result set is usually close to where the cursor is opened.
o When opening a cursor which uses variables, you must assign appropriate values to those
variables before opening the cursor. So when you are debugging how the cursor is opened,
you must confirm the appropriate variable names where the cursor is declared. This is
often inconvenient.
o When using PL/SQL variables, it's difficult to confirm the values of the variables when the
cursor is opened, because the values could be set at any point from the declaration on.
• Parameterized cursors eliminate both these problems because the values used in the cursor can be
determined in one place, the OPEN statement. And you don't need to know the names of the
cursor parameters. (Though you do need to know the order of the parameters.)
• An example which combines a cursor FOR loop with a parameterized query:

DECLARE
v_sum_sal NUMBER;
CURSOR emp_stats_cursor(cp_deptno NUMBER, cp_job VARCHAR2) IS
SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno=cp_deptno
AND job=cp_job;

BEGIN
FOR dept_job_rec IN (SELECT DISTINCT deptno,job FROM emp) LOOP
OPEN emp_stats_cursor(dept_job_rec.deptno, dept_job_rec.job);
FETCH emp_stats_cursor INTO v_sum_sal;
CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE(dept_job_rec.deptno ||' : '||dept_job_rec.job||' : '||v_sum_sal);
END LOOP;
END;
/

Ref Cursor

Ref cursor is THE method to return result sets to client applications (like C, VB, etc.).
You cannot define ref cursor variables outside of a procedure or function (that is, at the package
specification or body level). Ref cursors can only be processed in the defining procedure or returned to a
client application. Also, a ref cursor can be passed from subroutine to subroutine, while a static cursor
cannot. To share a static cursor like that, you would have to define it globally in a package specification
or body. Because using global variables is not a very good coding practice in general, ref cursors can be
used to share a cursor in PL/SQL without having global variables get into the mix.

Last, using static cursors—with static SQL (and not using a ref cursor)—is much more efficient than using
ref cursors, and the use of ref cursors should be limited to:

• Returning result sets to clients
• Sharing cursors across many subroutines (very similar to the above point, actually)
• Achieving your goal when there is no other efficient/effective means of doing so, such as
when dynamic SQL must be used

In short, you want to use static SQL first and use a ref cursor only when you absolutely have to. An
example of a ref cursor (it relies on the types package defined further below):

create or replace function sp_ListEmp return types.cursortype


as
l_cursor types.cursorType;
begin
open l_cursor for select ename, empno from emp order by ename;
return l_cursor;
end;
/

Or like this for a procedure:

create or replace procedure getemps( p_cursor in out types.cursorType )


as
begin
open p_cursor for select ename, empno from emp order by ename;
end;

Cursor variables are cursors opened by a PL/SQL routine and fetched from by another application or
PL/SQL routine (since 7.3, PL/SQL routines can fetch from cursor variables as well as open them). The
cursor variables are opened with the privileges of the owner of the procedure and behave just as if they
were completely contained within the PL/SQL routine. Here is an example:

create or replace package types


as
type cursorType is ref cursor;
end;
/

create or replace function sp_ListEmp return types.cursortype


as
l_cursor types.cursorType;
begin
open l_cursor for select ename, empno from emp order by ename;
return l_cursor;
end;
/
create or replace procedure getemps( p_cursor in out types.cursorType )
as
begin
open p_cursor for select ename, empno from emp order by ename;
end;
/
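To illustrate passing a ref cursor from subroutine to subroutine, the following sketch (a hypothetical PROCESS_EMPS procedure, assuming the types package and getemps procedure above) opens the cursor in one routine and fetches from it in another:

```sql
CREATE OR REPLACE PROCEDURE process_emps
AS
  l_cursor types.cursorType;
  l_ename  emp.ename%TYPE;
  l_empno  emp.empno%TYPE;
BEGIN
  -- GETEMPS opens the ref cursor ...
  getemps(l_cursor);
  -- ... and this routine fetches from and closes it; the cursor
  -- variable is simply handed between the two subroutines.
  LOOP
    FETCH l_cursor INTO l_ename, l_empno;
    EXIT WHEN l_cursor%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_ename||' ['||l_empno||']');
  END LOOP;
  CLOSE l_cursor;
END;
/
```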

BULK COLLECT INTO

Introduced in Oracle8i, BULK COLLECT allows you to retrieve multiple rows of data directly into
PL/SQL collections. Unlike SELECT INTO, it does not raise NO_DATA_FOUND if it finds no rows (the
collection is simply left empty), and it certainly doesn't raise TOO_MANY_ROWS if it finds more than
one!

DECLARE
TYPE title_aat IS TABLE OF magazine.title%TYPE
INDEX BY BINARY_INTEGER;

l_titles title_aat;
BEGIN
SELECT title
BULK COLLECT INTO l_titles
FROM magazine;
END;

My advice regarding the kinds of cursors to use when fetching data from Oracle in PL/SQL programs is:
whenever possible, use BULK COLLECT—it offers dramatic performance gains. In Oracle9i Release 2, you
can even use BULK COLLECT to fetch multiple rows directly into a collection of records. Of course, when
you want/need to fetch just a single row, BULK COLLECT doesn't make sense.
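A sketch of the record-fetch variant mentioned above, combined with the LIMIT clause to cap memory use per fetch (the magazine table is assumed, as in the earlier example; the batch size of 100 is arbitrary):

```sql
DECLARE
  CURSOR mag_cursor IS
    SELECT title, author FROM magazine;
  TYPE mag_aat IS TABLE OF mag_cursor%ROWTYPE
    INDEX BY BINARY_INTEGER;
  l_mags mag_aat;
BEGIN
  OPEN mag_cursor;
  LOOP
    -- Fetch at most 100 rows at a time into the collection of records
    FETCH mag_cursor BULK COLLECT INTO l_mags LIMIT 100;
    FOR i IN 1 .. l_mags.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(l_mags(i).title||' by '||l_mags(i).author);
    END LOOP;
    EXIT WHEN mag_cursor%NOTFOUND;
  END LOOP;
  CLOSE mag_cursor;
END;
/
```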

Explicit Cursors Introduction to Oracle SQL & PL/SQL


Updated 03/20/2005

• Concepts
• Working with explicit cursors
• Cursor Attributes
• Records and %ROWTYPE

Concepts of Implicit (Static) Cursors

Oracle uses cursors to process all SQL statements. From SQL*Plus, you issue a command and Oracle
takes care of creating a cursor and processing the command. These types of cursors are called implicit
cursors, because you (the user) cannot name or control the cursor directly. In PL/SQL you also use
implicit cursors for DML statements and single-row SELECT statements.

Implicit (static) cursor: commonly refers to the good old SELECT INTO, in which Oracle implicitly opens,
executes and closes the cursor for you, depositing any selected values of a single row INTO program data
structures.

CREATE OR REPLACE PROCEDURE show_title (author_in IN magazine.author%TYPE)


IS
l_title magazine.title%TYPE;
BEGIN
SELECT title INTO l_title
FROM magazine
WHERE author = author_in;
END;

The single select is a simple solution, but it is insufficient for the following situations:

• You may need to process more general result sets which return more or fewer than one row
• You may need to process rows in a specific order
• You may need to control the execution of your program depending on the result set.

To address these problems you need to use explicit cursors.

Working with explicit cursors

Explicit cursors work the same way implicit cursors work, but you control the execution explicitly.

DECLARE
v_ename VARCHAR2(12);
v_empno NUMBER:=7839;

CURSOR ename_cursor IS
SELECT ename
FROM emp
WHERE empno=v_empno;
BEGIN
OPEN ename_cursor;
FETCH ename_cursor INTO v_ename;
CLOSE ename_cursor;
END;
/

Examining each statement in turn:

DECLARE
...
CURSOR ename_cursor IS            -- Oracle allocates memory and processes the query
SELECT ename
FROM emp
WHERE empno=v_empno;
BEGIN
OPEN ename_cursor;                -- Oracle binds the variables and executes the query,
                                  -- identifying the active set
FETCH ename_cursor INTO v_ename;  -- Oracle fetches a row from the active set, sets the
                                  -- value of v_ename, and advances the pointer into
                                  -- the active set
FETCH ename_cursor INTO v_ename;  -- Oracle fetches the next row from the active set (as above)
CLOSE ename_cursor;
END;                              -- Oracle releases the memory area
/

Cursor Attributes

Use cursor attributes to determine whether a row was found and what number the row is.

Cursor Attributes

Attribute            Description
-------------------- -----------------------------------------------------------------
cur%ISOPEN           Returns TRUE if the cursor is open.
cur%NOTFOUND         Returns FALSE if the last FETCH found a row.
cur%FOUND            Returns TRUE if the last FETCH found a row (logical inverse of
                     %NOTFOUND).
cur%ROWCOUNT         Returns the number of rows fetched so far (for SQL%ROWCOUNT, the
                     number of rows modified by the DML statement).
SQL%BULK_ROWCOUNT    Returns the number of rows processed for each execution of the
                     bulk DML operation.

Example using cursor attributes:

DECLARE
v_empno emp.empno%TYPE;
v_ename emp.ename%TYPE;

CURSOR emp_cursor IS
SELECT empno, ename
FROM emp;
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor INTO v_empno, v_ename;
EXIT WHEN emp_cursor%ROWCOUNT>10 or emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(INITCAP(v_ename)||' ['||v_empno||']');
END LOOP;
CLOSE emp_cursor;
END;
/

Records and %ROWTYPE

Instead of fetching values into a collection of variables, you could fetch the entire row into a record like so.

DECLARE
CURSOR emp_cursor IS
SELECT empno, ename, sal, job, deptno
FROM emp
WHERE deptno=30;
-- This creates a record named emp_row
-- based on the structure of the cursor emp_cursor
emp_row emp_cursor%ROWTYPE;
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor
INTO emp_row;
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_row.ename||' ['
||emp_row.empno||'] makes '||TO_CHAR(emp_row.sal*12,'$99,990.00'));
END LOOP;
END;
/

You can reference the fields of a record using the syntax record_name.field_name.

In addition to basing a record on a cursor, you can also define records based on tables like so.

DECLARE
CURSOR emp_cursor IS
SELECT *
FROM emp
WHERE deptno=30;

emp_row emp%ROWTYPE; -- This creates a record named EMP_ROW


-- based on the structure of the EMP table
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor
INTO emp_row;
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_row.ename||' ['||emp_row.empno
||'] makes '||TO_CHAR(emp_row.sal*12,'$99,990.00'));
END LOOP;
END;
/

Oracle data types

====== DDL ======

Type              Storage    Range/Length              Comments
----------------- ---------- ------------------------- -------------------------
NUMBER            16         40 digits                 floating point
FLOAT             16         40 digits                 floating point
SMALLINT          16         40 digits                 floating point
NUMBER(a,b)       varies     a digits, b precision
FLOAT(a,b)        varies     a digits, b precision
DECIMAL           16         40 digits
INTEGER           16         40 digits
INTEGER(a)        varies     a digits
CHAR(a)           a          a=(1-255)
VARCHAR(a)        varies     1 - 255
VARCHAR2(a)       varies     1 - 2000
DATE              8          1/1/4712BC - 12/31/4712AD precision to seconds
LONG              varies     0 - 2 GB                  stored inline, obsolete *
LONG RAW          varies     0 - 2 GB                  stored inline, obsolete *
LONG VARCHAR      varies     0 - 2 GB                  stored inline, obsolete *
BLOB              varies     0 - 4 GB                  stored separate from table
CLOB              varies     0 - 4 GB                  stored separate from table
NCLOB             varies     0 - 4 GB                  stored separate from table
BFILE             ??         ??                        pointer to O/S file
ROWID             8          n/a                       row identifier within block

* LONG datatypes are discouraged in Oracle 8. Note that the LONG and BLOB
datatypes are incompatible.

====== PL-SQL data types (differences) ======

Type              Storage    Range/Length   Comments
----------------- ---------- -------------- ----------------------------
NUMERIC
VARCHAR
VARCHAR2
BLOB                                        must be read in 32k chunks
CLOB
NCLOB

Creating a table

PCTFREE = Amount of space to leave in block during insert operations. Allows room for records to grow
within the same area.
PCTUSED = The threshold at which the block is placed back on the free block list.
INITIAL/NEXT = The initial disk allocated, and the next extent size.
LOGGING = Indicates whether operations are written to the redo logs.

CREATE TABLE EMPLOYEE (


EMP_ID NUMBER(8),
LNAME VARCHAR2(30),
FNAME VARCHAR2(15),
HIRE_DT DATE,
SALARY NUMBER(8,2) )
PCTFREE 20
PCTUSED 50
STORAGE (
INITIAL 200K NEXT 200K
PCTINCREASE 0 MAXEXTENTS 50 )
TABLESPACE ts01
LOGGING ;

/* Free table unallocated table blocks */

ALTER TABLE COMPANY DEALLOCATE UNUSED ;


Creating indexes

CREATE UNIQUE INDEX EMP_IDX ON EMPLOYEE (EMP_ID) ;

/* index create - table has sorted data */

CREATE UNIQUE INDEX IDX_INVOICE_ITEMS ON INVOICE_ITEMS


(INVOICE_ID,YEAR,FREQ_CODE,FREQ_NUMBER,FIELD_NUMBER,RECORD_SEQ)
TABLESPACE TS_07
NOLOGGING
NOSORT ;

/* create index - constraint */

ALTER TABLE INVOICE_FORMAT


ADD CONSTRAINT PK_INVOICE_FORMAT PRIMARY KEY(INVOICE_ID)
USING INDEX TABLESPACE PROD_IDX_01;

Creating constraints

/* primary key constraint */

ALTER TABLE EMPLOYEE ADD (


CONSTRAINT EMP_PK
PRIMARY KEY (EMP_ID)
);

ALTER TABLE REPORT_FORMAT


ADD CONSTRAINT PK_REPORT_FORMAT PRIMARY KEY(REPORT_ID)
USING INDEX TABLESPACE PROD_IDX_01;

/* foreign key constraint */

ALTER TABLE EMPLOYEE ADD (


CONSTRAINT EMP_LOC_ASSIGN_FK
FOREIGN KEY (EMP_ID,
LOC_CD)
REFERENCES LOC_REGISTRY (
EMP_ID,
LOC_CD)
);

Creating and using a sequence

/* create a sequence for employee ids */


CREATE SEQUENCE EMP_ID_SEQ
INCREMENT BY 1
NOMINVALUE
NOMAXVALUE
NOCYCLE
CACHE 20
NOORDER ;

/* use the next emp id, and increment the sequence */

INSERT INTO EMPLOYEE(EMP_ID, LNAME, FNAME)


VALUES (EMP_ID_SEQ.NEXTVAL, 'SMITH', 'JIM') ;

/* get the current value of the sequence */

INSERT INTO EMPLOYEE(EMP_ID, LNAME, FNAME)


VALUES (EMP_ID_SEQ.CURRVAL, 'SMITH', 'JIM') ;

Creating triggers

The example below illustrates versioning of the EMP_RES table, which contains a blob field.

CREATE OR REPLACE TRIGGER EMP_RES_INS_TR


AFTER INSERT ON EMP_RES
FOR EACH ROW

DECLARE
VER1 NUMBER ;
EBLOB BLOB ;
VBLOB BLOB ;

BEGIN
EBLOB := EMPTY_BLOB();

SELECT (COUNT(*) + 1) INTO VER1


FROM VEMP_RES
WHERE EMP_ID =:NEW.EMP_ID ;

VBLOB := :NEW.RESUME ;

INSERT INTO VEMP_RES


( EMP_ID, DOC_URL,
A_USERID, D_MODIFIED, VER_NO, RESUME)
VALUES (
:NEW.EMP_ID, :NEW.DOC_URL,
USER, SYSDATE, VER1, EBLOB ) ;

SELECT RESUME
INTO EBLOB
FROM VEMP_RES
WHERE EMP_ID =:NEW.EMP_ID AND
VER_NO = VER1
FOR UPDATE ;

UPDATE VEMP_RES
SET RESUME = VBLOB
WHERE EMP_ID =:NEW.EMP_ID AND
VER_NO = VER1 ;

END;

Renaming a table

RENAME COMPANY TO CORPORATION ;

Synonyms and Database Links

-- Synonym Creation

GRANT SELECT ON USER5.COMPANY TO USER6 ;

CREATE SYNONYM USER6.COMPANY5 FOR USER5.COMPANY ;

-- Database Link

CREATE DATABASE LINK ARCHIVE_DATA CONNECT TO USER5 IDENTIFIED BY TIGER USING 'SERVER5' ;

/* user within this system can now reference tables using ARCHIVE_DATA.tablename */

Changing a column's type

ALTER TABLE CORPORATION MODIFY (COMPANY_NM VARCHAR2(100));

Moving a table

ALTER TABLE COMPANY MOVE


STORAGE (
INITIAL 200K
NEXT 200K
PCTINCREASE 0
MAXEXTENTS 50 )
TABLESPACE TS_01 ;

Using SQL-Plus
SQL-Plus is a query / command line utility which has some powerful formatting capabilities.
Getting Started
; Command line terminator
/ Execute the current batch of commands
SET SERVEROUTPUT ON Allow messages from PL-SQL to be displayed
SHOW ERRORS Show errors from last batch
EDIT Run editor, and load buffer
CLEAR BUFFER Clear buffer commands
& Prompt for value
@ Run commands in @filename

/**** Examples ****/

/* prompt for process id, and kill */

alter system kill session '&Victim'


/

Creating a stored procedure

Below is a simple stored procedure which deletes an invoice.


Note:
- variable declaration placement
- the syntax for comments /* --- */
- ALL SELECT statements must have an INTO clause for the result set. Oracle stored procedures must use
"out" variables to return results to client programs.
- the declaration of INV_ID1 uses the column definition as a prototype
CREATE OR REPLACE PROCEDURE PROC_DELETE_INVOICE
( USERID1 VARCHAR2, INV_ID1 INVOICE.INV_ID%TYPE )
AS
INV_COUNT NUMBER ;

BEGIN

INV_COUNT := 0;

/* check if invoice exists */

SELECT COUNT(*)
INTO INV_COUNT
FROM INVOICE
WHERE INV_ID = INV_ID1 ;

IF INV_COUNT > 0 THEN

DELETE FROM INVOICE WHERE INV_ID = INV_ID1 ;


COMMIT ;

END IF ;

END ;

Displaying output
All SELECT statements in PL-SQL must have an INTO clause; therefore another method is needed to
display output to the console.

DBMS_OUTPUT.PUT_LINE('TEST OUTPUT');

salary := 24000;

dbms_output.put_line(salary);

Output variables

Output variables are used to return data to another procedure, or to an external application which has
invoked the stored procedure.

/* sample procedure header using output variables */

TYPE INV_ARRAY IS TABLE OF NUMBER(8)


INDEX BY BINARY_INTEGER ;

CREATE OR REPLACE PROCEDURE PROC_GET_INV_NOS


( USERID1 IN VARCHAR2, INV_IDS OUT INV_ARRAY)
AS

...

Arrays and structures

Arrays and structures are implemented through the use of "tables" and "records" in PL-SQL.

/* EXAMPLE OF A SIMPLE RECORD TYPE */

TYPE INVOICE_REC_TYPE IS RECORD


(INV_ID INVOICE.INV_ID%TYPE,
INV_DT INVOICE.INV_DT%TYPE ) ;

/* ARRAY DECLARATION */

TYPE NAME_TABLE_TYPE IS TABLE OF VARCHAR2(20)


INDEX BY BINARY_INTEGER ;

NAME_TABLE NAME_TABLE_TYPE ;

/* ARRAY SUBSCRIPTING */

I := I + 1;

NAME_TABLE(I) := 'JSMITH';

Conditionals
Sample formats of conditional branching are given below:

IF <condition> THEN <statement> ;

IF <condition> THEN
<statements> ;
END IF;

/* sample statement, note the pipes for concatenation */


IF (COUNT1 = 0) AND (COUNT2 > 0) THEN
RETMSG := 'Security attributes have not been assigned, ' ||
'you are restricted.';
ELSE
RETMSG := 'You are OK';
END IF;

Looping

WHILE (I < 10)


LOOP
/* ... SOME CMDS ... */

I := I + 1;
END LOOP;

Cursors

The first example depicts dbase-style row processing; the second, a more traditional "fetch" approach.

PROCEDURE PROC_SCAN_INVOICES (EXPIRE_DT IN DATE)
IS

CURSOR INVOICE_CUR IS
SELECT INV_ID, INV_DT FROM INVOICE ;

BEGIN

/* The cursor FOR loop implicitly declares the record INVOICE_REC */
FOR INVOICE_REC IN INVOICE_CUR
LOOP

IF INVOICE_REC.INV_DT < EXPIRE_DT THEN

DELETE FROM INVOICE
WHERE INV_ID = INVOICE_REC.INV_ID ;

DBMS_OUTPUT.PUT_LINE('INVOICE DELETED:');
DBMS_OUTPUT.PUT_LINE(INVOICE_REC.INV_ID);

END IF;

END LOOP;

END;

/* ======================================= */

CREATE OR REPLACE PROCEDURE PROC_DOCEXPIRE_RPT


( RPT_BODY OUT LONG RAW )
IS

RPT_LINE VARCHAR2(1900);
RPT_PART VARCHAR2(1900);
RPT_LEAD VARCHAR2(200);
GLIB_ID1 NUMBER ;
GLIB_ID2 VARCHAR(12);
ORIG_LOC_CD1 VARCHAR2(12);
AUTHOR_ID1 VARCHAR2(30);
CONTRIBUTORS1 VARCHAR2(80);
TOPIC1 VARCHAR2(80);
NBR_ACCESS1 NUMBER ;
NBR_ACCESS2 VARCHAR2(12);
TOT_EXPIRED1 NUMBER ;
TOT_EXPIRED2 VARCHAR2(12);
COUNT1 NUMBER ;
RPT_BODY_PART LONG ;

CURSOR CUR1 IS
SELECT GLIB_ID, ORIG_LOC_CD, AUTHOR_ID, CONTRIBUTORS, TOPIC, NBR_ACCESS
FROM GEN_DOC
WHERE EXPIRE_DT < (SYSDATE + 30)
ORDER BY ORIG_LOC_CD, GLIB_ID ;

BEGIN

SELECT COUNT(*)
INTO TOT_EXPIRED1
FROM GEN_DOC
WHERE STAT_CD='90';

TOT_EXPIRED2 := TO_CHAR(TOT_EXPIRED1);

RPT_LEAD := '<H5>TOTAL EXPIRED DOCUMENT COUNT TO DATE: ... ' ||


TOT_EXPIRED2 || '</H5><HR>' ;

RPT_LINE := '<HTML><BODY BGCOLOR=#FFFFFF>' ||


'<H6>ABC Corporation</H6>' ||
'<H2>Gen Doc System - Documents Expiring Within 30 Days</H2><HR>' ||
RPT_LEAD ;
COUNT1 := 0;

OPEN CUR1;

RPT_LINE := RPT_LINE || '<TABLE>' ||


'<TR><TD><U>No. Accesses</U></TD>' ||
'<TD><U>Document #</U></TD>' ||
'<TD><U>Topic</U></TD>' ||
'<TD><U>Author</U></TD></TR>' ;

RPT_BODY := UTL_RAW.CAST_TO_RAW(RPT_LINE);

RPT_LINE := '';

LOOP
COUNT1 := COUNT1 + 1;
EXIT WHEN (COUNT1 > 500);
EXIT WHEN (UTL_RAW.LENGTH(RPT_BODY) > 32000);

FETCH CUR1 INTO


GLIB_ID1, ORIG_LOC_CD1, AUTHOR_ID1, CONTRIBUTORS1, TOPIC1, NBR_ACCESS1 ;
EXIT WHEN CUR1%NOTFOUND ;

RPT_PART := '<TR><TD>';
NBR_ACCESS2 := TO_CHAR(NBR_ACCESS1);
RPT_PART := CONCAT(RPT_PART,NBR_ACCESS2);
RPT_PART := CONCAT(RPT_PART,'</TD><TD>');
GLIB_ID2 := TO_CHAR(GLIB_ID1);

RPT_PART := RPT_PART || ORIG_LOC_CD1 || '-' || GLIB_ID2 ||


'</TD><TD>' || TOPIC1 || '</TD><TD>' ||
AUTHOR_ID1 || '</TD></TR>' ;

RPT_LINE := CONCAT(RPT_LINE, RPT_PART);

RPT_BODY_PART := UTL_RAW.CAST_TO_RAW(RPT_LINE);
RPT_BODY := UTL_RAW.CONCAT(RPT_BODY,RPT_BODY_PART);
-- RPT_BODY := RPT_BODY || RPT_LINE;
RPT_LINE := '';

END LOOP;
CLOSE CUR1 ;

RPT_LINE := '</TABLE></BODY></HTML>';

RPT_BODY_PART := UTL_RAW.CAST_TO_RAW(RPT_LINE);
RPT_BODY := UTL_RAW.CONCAT(RPT_BODY, RPT_BODY_PART);

EXCEPTION
WHEN OTHERS THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: PROC_DOCEXPIRE_RPT');
GLIB_ID1 := UTL_RAW.LENGTH(RPT_BODY);
DBMS_OUTPUT.PUT_LINE(GLIB_ID1);
END;

END;

Packages

A package is a construct which binds related procedures and functions together. Variables declared in
the declaration section of a package can be shared among the procedures/functions in the body of the
package.
/* package */

CREATE OR REPLACE PACKAGE INVPACK


IS

FUNCTION COUNTINV (SALESREP IN VARCHAR2) RETURN INTEGER;

PROCEDURE PURGEINV (INV_ID IN INTEGER) ;

END INVPACK;

/* package body */

CREATE OR REPLACE PACKAGE BODY INVPACK


IS

COUNT1 NUMBER;

FUNCTION COUNTINV (SALESREP IN VARCHAR2) RETURN INTEGER


IS
BEGIN

SELECT COUNT(*)
INTO COUNT1
FROM INVOICE
WHERE SALES_REP_ID = SALESREP ;

RETURN COUNT1 ;
END COUNTINV;

PROCEDURE PURGEINV (INV_ID1 IN INTEGER)


IS
BEGIN

DELETE FROM INVOICE


WHERE INV_ID = INV_ID1 ;

END PURGEINV;

/* initialization section for package */


BEGIN

COUNT1 := 0 ;

END INVPACK;

Exception Handling

The following block could appear at the end of a stored procedure:

EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('End of data !!');
WHEN OTHERS THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('OTHER CONDITION OCCURRED !');
END;

Using Blobs

Blob variables require special handling in PL-SQL. When reading from a file to a blob, only one statement
is required. When reading from a blob field to a PL-SQL variable, only 32k blocks can be processed, thus
necessitating a loop construct.

/*---------------------------------------*/
/* Read a blob from a file, and write */
/* it to the database. */
/*---------------------------------------*/

set serveroutput on size 500000 ;

truncate table image_test ;

create or replace directory image_dir as '/apps/temp/images' ;

create or replace procedure proc_imp_jpg


(fname1 in varchar2, image_id1 in numeric) is

file1 bfile ;
lblob blob ;
len int ;
e_blob blob ;

begin

file1 := bfilename('IMAGE_DIR',fname1);

e_blob := empty_blob();

insert into image_test (image_id, image_data)


values (image_id1, e_blob )
returning image_data into lblob ;

dbms_lob.fileopen(file1);

len := dbms_lob.getlength(file1) ;

dbms_lob.loadfromfile(lblob,file1,len);
dbms_lob.filecloseall();
commit;

exception
when others then
begin
dbms_output.put_line(sqlerrm);
dbms_lob.filecloseall();
commit;
end;

end ;
/

call proc_imp_jpg('jada.jpg',101)
/

/*-----------------------------*/
/* determine the length of */
/* a blob field */
/* by reading it */
/*-----------------------------*/
CREATE OR REPLACE PROCEDURE PROC_BLOB_LENGTH
(PART_ID1 NUMBER)
IS
SRC_LOB BLOB;
BUFFER RAW(100);
AMT BINARY_INTEGER := 100;
POS INTEGER := 1;
COUNTER INTEGER :=0;

BEGIN
SELECT PART_PHOTO INTO SRC_LOB
FROM PARTS
WHERE PART_ID=PART_ID1 ;

IF (SRC_LOB IS NOT NULL) THEN


LOOP
DBMS_LOB.READ (SRC_LOB, AMT, POS, BUFFER);
POS := POS + AMT;
COUNTER:=COUNTER+1;
END LOOP;
ELSE
DBMS_OUTPUT.PUT_LINE('** Source is null');
END IF;

EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('End of data, total bytes:');
DBMS_OUTPUT.PUT_LINE(POS);
END;

/* ============ Other blob length examples ====== */


/**** Note ! These functions may return null, if the column is null ... an Oracle bug */

X := UTL_RAW.LENGTH(RPT_BODY) ;

Y := DBMS_LOB.GETLENGTH(LONG_RAW_COL);

Using the Context cartridge

Managing Context is an arduous task. The best approach is to use the command line utility as much as
possible. Below is a code sample which creates a policy (a policy is a construct which tells Context
which column on which table to scan during a search operation). The next example illustrates how to
perform a search, and store the result keys in a table.
/* create a policy, on the emp_resume table */

ctx_svc.clear_all_errors;

dbms_output.put_line('Creating Policy ...');

ctx_ddl.create_policy(
POLICY_NAME => 'EMP_RES_POLICY',
COLSPEC => 'EMP_RES.RESUME',
SOURCE_POLICY => 'CTXSYS.DEFAULT_POLICY',
DESCRIPTION => 'EMP Policy',
TEXTKEY => 'EMP_ID',
DSTORE_PREF => 'CTXSYS.DEFAULT_DIRECT_DATASTORE',
FILTER_PREF => 'CTXSYS.HTML_FILTER',
LEXER_PREF => 'CTXSYS.DEFAULT_LEXER'
);
dbms_output.put_line('Indexing Policy ...');
ctx_ddl.create_index('EMP_RES_POLICY');

/* Run a Context query, place the result key values in a table */

/* first, this table needs to be created */

CREATE TABLE EMP_CTX_RESULTS(


TEXTKEY VARCHAR2(64),
TEXTKEY2 VARCHAR2(64),
SCORE NUMBER,
CONID NUMBER );
/* this code can go in a stored proc */

POLICY1 := 'EMP_RES_POLICY';
TABLE1 := 'EMP_CTX_RESULTS';
ID1 := 100 ;
QUERY1 := 'COBOL|FORTRAN';

CTX_QUERY.CONTAINS(POLICY1, QUERY1, TABLE1, 1, ID1);

/* the table will contain records with a CONID of 100 */


/* ... you can use ampersand or pipe as the conditional */
Sleep and Wait

Sometimes it is necessary to delay the execution of commands, for debugging, or batch runs.

/* Sleep 60 seconds */

execute dbms_lock.sleep(60);

/* Sleep one hour */

execute dbms_lock.sleep(3600);

Date Manipulation

/* display current time */

select to_char(sysdate, 'Dy DD-Mon-YYYY HH24:MI:SS')


as "SYSDATE"
from dual;

/* insert specific date/time into table */

insert into game_schedule


( sched_id, location, game_date )
values(2982, 'Chicago', to_date('2001/10/31:03:00:00PM', 'yyyy/mm/dd:hh:mi:ssam')) ;

Version information

SELECT * FROM product_component_version ;

List free and used space in database

SELECT sum(bytes)/1024 "free space in KB"


FROM dba_free_space;
SELECT sum(bytes)/1024 "used space in KB"
FROM dba_segments;

List session information

SELECT * FROM V$SESSION ;

List names and default storage parameters for all tablespaces

SELECT TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT, MAX_EXTENTS,


PCT_INCREASE, MIN_EXTLEN
FROM DBA_TABLESPACES;

Tablespace types, and availability of data files

SELECT TABLESPACE_NAME, CONTENTS, STATUS


FROM DBA_TABLESPACES;

List information about tablespace to which datafiles belong

SELECT FILE_NAME,TABLESPACE_NAME,BYTES,AUTOEXTENSIBLE,
MAXBYTES,INCREMENT_BY
FROM DBA_DATA_FILES;

List data file information


SELECT FILE#,T1.NAME,STATUS,ENABLED,BYTES,CREATE_BYTES,T2.NAME
FROM V$DATAFILE T1, V$TABLESPACE T2
WHERE T1.TS# = T2.TS# ;

List tablespace fragmentation information

SELECT tablespace_name,COUNT(*) AS fragments,


SUM(bytes) AS total,
MAX(bytes) AS largest
FROM dba_free_space
GROUP BY tablespace_name;

Check the current number of extents and blocks allocated to a segment

SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;

Check the free extents in each tablespace

SELECT TABLESPACE_NAME, COUNT(*), MAX(BLOCKS), SUM(BLOCKS)


FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME ;

Extent information

SELECT segment_name, extent_id, blocks, bytes


FROM dba_extents
WHERE segment_name = TNAME ;

List segments with fewer than 5 extents remaining

SELECT segment_name,segment_type,
max_extents, extents
FROM dba_segments
WHERE extents+5 > max_extents
AND segment_type<>'CACHE';
List segments reaching extent limits

SELECT s.segment_name,s.segment_type,s.tablespace_name,s.next_extent
FROM dba_segments s
WHERE NOT EXISTS (SELECT 1
FROM dba_free_space f
WHERE s.tablespace_name=f.tablespace_name
HAVING max(f.bytes) > s.next_extent);

List table blocks, empty blocks, extent count, and chain block count

SELECT blocks as BLOCKS_USED, empty_blocks


FROM dba_tables
WHERE table_name=TNAME;

SELECT chain_cnt AS CHAINED_BLOCKS


FROM dba_tables
WHERE table_name=TNAME;

SELECT COUNT(*) AS EXTENT_COUNT


FROM dba_extents
WHERE segment_name=TNAME;

Information about all rollback segments in the database

SELECT SEGMENT_NAME,TABLESPACE_NAME,OWNER,STATUS
FROM DBA_ROLLBACK_SEGS;

/* General Rollback Segment Information */

SELECT t1.name, t2.extents, t2.rssize, t2.optsize, t2.hwmsize, t2.xacts, t2.status


FROM v$rollname t1, v$rollstat t2
WHERE t2.usn = t1.usn ;

/* Rollback Segment Information - Active Sessions */

select t2.username, t1.xidusn, t1.ubafil, t1.ubablk, t2.used_ublk


from v$session t2, v$transaction t1
where t2.saddr = t1.ses_addr

Statistics of the rollback segments currently used by instance

SELECT T1.NAME, T2.EXTENTS, T2.RSSIZE, T2.OPTSIZE, T2.HWMSIZE,


T2.XACTS, T2.STATUS
FROM V$ROLLNAME T1, V$ROLLSTAT T2
WHERE T1.USN = T2.USN AND
T1.NAME LIKE '%RBS%';
List sessions with active transactions

SELECT s.sid, s.serial#


FROM v$session s
WHERE s.saddr in
(SELECT t.ses_addr
FROM V$transaction t, dba_rollback_segs r
WHERE t.xidusn=r.segment_id
AND r.tablespace_name='RBS');

Active sorts in instance

SELECT T1.USERNAME, T2.TABLESPACE, T2.CONTENTS, T2.EXTENTS, T2.BLOCKS


FROM V$SESSION T1, V$SORT_USAGE T2
WHERE T1.SADDR = T2.SESSION_ADDR ;

Index & constraint information

SELECT index_name,table_name,uniqueness
FROM dba_indexes
WHERE index_name in
(SELECT constraint_name
FROM dba_constraints
WHERE table_name = TNAME
AND constraint_type in ('P','U')) ;

Updating statistics for a table or schema

EXEC DBMS_STATS.GATHER_TABLE_STATS('SCHEMA1','COMPANY');

EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCHEMA1');

ANALYZE TABLE COMPANY COMPUTE STATISTICS ;

List tables and synonyms

set pagesize 0;

select 'TABLE:',table_name,'current' from user_tables


union
select 'SYNONYM:',synonym_name,table_owner from user_synonyms
order by 1,2 ;

Constraint columns

SELECT constraint_name, table_name, column_name


FROM dba_cons_columns
WHERE table_name = TNAME
ORDER BY table_name, constraint_name, position ;
Constraint listing

SELECT constraint_name, table_name,


constraint_type, validated, status
FROM dba_constraints;

Indexed column listing

select
b.uniqueness, a.index_name, a.table_name, a.column_name
from user_ind_columns a, user_indexes b
where a.index_name=b.index_name
order by a.table_name, a.index_name, a.column_position;

Trigger listing

SELECT trigger_name, status
FROM dba_triggers ;

Tuning: library cache

Glossary:
pins = # of time an item in the library cache was executed
reloads = # of library cache misses on execution
Goal:
keep this ratio as close to zero as possible (reloads/pins is a miss ratio, despite the HITRATIO alias)
Tuning parm:
adjust SHARED_POOL_SIZE in the initxx.ora file, increasing by small increments

SELECT SUM(PINS) EXECS,
SUM(RELOADS) MISSES,
SUM(RELOADS)/SUM(PINS) HITRATIO
FROM V$LIBRARYCACHE ;

Tuning: data dictionary cache

Glossary:
gets = # of requests for the item
getmisses = # of requests for items in cache which missed
Goal:
keep this miss ratio (getmisses/gets) close to zero; under 10-15% is a common rule of thumb
Tuning parm:
adjust SHARED_POOL_SIZE in the initxx.ora file, increasing by small increments

SELECT SUM(GETS) HITS,
SUM(GETMISSES) LIBMISS,
SUM(GETMISSES)/SUM(GETS) RCRATIO
FROM V$ROWCACHE ;

Tuning: buffer cache


Calculation:
buffer cache hit ratio = 1 - (phy reads/(db_block_gets + consistent_gets))
Goal:
get hit ratio in the range 85 - 90%
Tuning parm:
adjust DB_BLOCK_BUFFERS in the initxx.ora file, increasing by small increments

SELECT NAME, VALUE
FROM V$SYSSTAT WHERE NAME IN
('db block gets','consistent gets','physical reads');
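The calculation above can also be done in a single query; a sketch (note that the statistic names in V$SYSSTAT are stored in lowercase):

```sql
-- Buffer cache hit ratio, as a percentage
SELECT ROUND( (1 - phy.value / (db.value + con.value)) * 100, 2 ) "Hit Ratio %"
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```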

Tuning: sorts

Goal:
Increase number of memory sorts vs disk sorts
Tuning parm:
adjust SORT_AREA_SIZE in the initxx.ora file, increasing by small increments

SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME LIKE '%sort%';
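To compare memory and disk sorts directly, the relevant V$SYSSTAT rows can be joined; a sketch:

```sql
-- Percentage of sorts completed entirely in memory
SELECT m.value "Memory Sorts", d.value "Disk Sorts",
       ROUND( m.value / (m.value + d.value) * 100, 2 ) "Pct In Memory"
FROM   v$sysstat m, v$sysstat d
WHERE  m.name = 'sorts (memory)'
AND    d.name = 'sorts (disk)';
```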

Tuning: dynamic extension

An informational query.

SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME = 'recursive calls' ;

Tuning: rollback segments

Goal:
Try to avoid increasing 'undo header' counts
Tuning method:
Create more rollback segments, try to reduce counts
SELECT CLASS,COUNT
FROM V$WAITSTAT
WHERE CLASS LIKE '%UNDO%' ;
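If the undo wait counts keep climbing, additional rollback segments can be created and brought online; a sketch (the segment name, tablespace and storage values are illustrative):

```sql
CREATE ROLLBACK SEGMENT rbs05
  TABLESPACE rbs
  STORAGE ( INITIAL 1M NEXT 1M OPTIMAL 10M MINEXTENTS 10 );

ALTER ROLLBACK SEGMENT rbs05 ONLINE;
```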

Tuning: physical file placement

An informational query for checking the relative usage of the physical data files.

SELECT NAME, PHYRDS, PHYWRTS
FROM V$DATAFILE DF, V$FILESTAT FS
WHERE DF.FILE#=FS.FILE# ;

Killing Sessions

Runaway processes can be killed on the UNIX side, or within server manager.

/* Kill a session, specified by the returned sess-id / serial number */

SELECT sid, serial#, username FROM v$session ;

ALTER SYSTEM KILL SESSION 'sessid,ser#' ;
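The two steps can be combined by generating the kill statements from v$session; a sketch (the username is illustrative):

```sql
-- Generate one KILL statement per session owned by the given user
SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''';'
FROM   v$session
WHERE  username = 'SCOTT';
```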

Archive Log Mode Status

/* Status of Archive Log Subsystem */

ARCHIVE LOG LIST

/* log mode of databases */

SELECT name, log_mode FROM v$database;

/* log mode of instance */

SELECT archiver FROM v$instance;
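If the database is in NOARCHIVELOG mode, switching archiving on requires a clean restart to the mount state; a sketch (on 8i and earlier, LOG_ARCHIVE_START = TRUE should also be set in the pfile so the archiver starts automatically):

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```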

Recovering an Instance

An incomplete recovery is the only option if backups are run periodically on a cold instance. Complete
recovery is possible if archive logging is enabled, and backups are run while the database is active.

/* diagnose data file problem */

select * from v$recover_file ;

/* diagnose data file problem, by displaying tablespace info */

select file_id, file_name, tablespace_name, status
from dba_data_files ;

/* find archive log files */

select * from v$recovery_log ;

/* incomplete recovery #1 */

svrmgrl> shutdown abort

[[ In Unix copy data files from backup area to data directory(s). ]]

svrmgrl> connect;
svrmgrl> startup;

/* incomplete recovery #2 */

svrmgrl> shutdown abort;


svrmgrl> connect;
svrmgrl> startup mount;
svrmgrl> alter database rename file '/data2/ts05.dbf' to '/backups/ts05.dbf' ;
svrmgrl> alter database open;

/* incomplete recovery #3, for user error (i.e. drop table ) */


Note: archive logs must exist in LOG_ARCHIVE_DEST

svrmgrl> shutdown abort

[[ backup all files ]]


[[ restore required data file(s), using OS commands ]]

svrmgrl> connect;
svrmgrl> startup mount;
svrmgrl> recover database until time '2002-03-04:15:00:00' ;
svrmgrl> alter database open resetlogs;

/* complete recovery #1, for major recovery operations, closed instance */


Note: archive logs must exist in LOG_ARCHIVE_DEST

svrmgrl> shutdown abort

[[ backup all files ]]

svrmgrl> connect;
svrmgrl> startup mount;
svrmgrl> recover database ;
< or >
svrmgrl> recover datafile '/data4/ts03.dbf'
svrmgrl> startup open;

/* complete recovery #2, for major/minor recovery operations, open instance */


Note: archive logs must exist in LOG_ARCHIVE_DEST

svrmgrl> shutdown abort

[[ backup all files ]]


[[ restore corrupted data files, using OS commands ]]

svrmgrl> connect;
svrmgrl> startup mount;
svrmgrl> set autorecovery on ;
svrmgrl> recover tablespace ts03 ;
< or >
svrmgrl> recover datafile 4 ;
svrmgrl> startup open;

List log file information

These queries list the status / locations of the redo log files.

select group#, member, status from v$logfile ;


select group#,thread#,archived,status from v$log ;

A Simple Monitoring Tool

This tool loops a specified number of times, displaying memory usage along with user process counts for a
specific username.

--=================================================
--
-- proc_ora_monitor
--
-- parm1: username to count
-- parm2: number of loops, 5 sec duration
--
--
--=================================================

set serveroutput on ;

create or replace procedure
proc_ora_monitor ( user1 in varchar, reps1 in integer )
is

i number ;
usercount1 number ;
memory1 number ;
date1 varchar(20) ;
msg varchar(99) ;

begin

i := 0 ;

while ( i < reps1 )
loop
msg := '=> ' || to_char(SYSDATE, 'HH:MI:SS PM');

select count(1)
into usercount1
from sys.v_$session
where username = user1 ;

msg := msg || ', ' || user1 || ': ' || usercount1 ;

select round(sum(bytes)/1024/1024 ,2)
into memory1
from sys.v_$sgastat
where pool = 'shared pool' and
name = 'free memory' ;

msg := msg || ', free mb = ' || memory1 ;

select round(sum(bytes)/1024/1024 ,2)
into memory1
from sys.v_$sgastat
where pool = 'shared pool' and
name = 'processes' ;

msg := msg || ', processes mb = ' || memory1 ;

dbms_output.put_line(msg) ;

dbms_lock.sleep(5) ;

i := i + 1 ;
end loop ;

end;
/

show errors ;

execute proc_ora_monitor('SILVERUSER',2) ;

exit

Connection Errors

-------------------------------------------------------
ORA-01034: ORACLE not available
-------------------------------------------------------
TNS-12564: TNS:connection refused
-------------------------------------------------------
TNS-12530: Unable to start a dedicated server process
-------------------------------------------------------
Connection errors can crop up out of nowhere ; the error messages tend to be vague and not very useful.
Here's a plan of attack which will solve many connection issues. Try each step, and proceed to the next if
the problem persists.

1) Check your environment ; verify the variables depicted below are set.
( NT: check the registry )
The example below details a Solaris/CSH environment.
Note the TWO_TASK setting ...

setenv ORACLE_BASE /apps/oracle
setenv ORACLE_HOME ${ORACLE_BASE}
setenv ORACLE_SID db22
setenv TWO_TASK $ORACLE_SID
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib:/usr/lib/X11
setenv ORACLE_PATH $ORACLE_HOME/bin:/usr/bin:/usr/local/bin
setenv ORA_CLIENT_LIB shared
set path = ($ORACLE_HOME/bin /bin /usr/bin /usr/local/bin /sbin /usr/sbin /usr/bin/X11 .)

2) Try to ping the instance:

tnsping db22

If there's an error, check $ORACLE_HOME/network/admin/tnsnames.ora

3) Restart the TNS service.

Solaris:
1) kill the process, running the tnslsnr binary
2) nohup $ORACLE_HOME/bin/tnslsnr start &
NT:
1) restart the service, in the control panel

4) SQL-Plus / ServerMgr

Try using this syntax: sqlplus user/password@instance

5) Solaris, shell change

Try switching the oracle user to the Bourne or Csh shell ; make a script for SQL-Plus
as follows:

#!/usr/bin/csh

setenv ORACLE_BASE /apps/oracle
setenv ORACLE_HOME ${ORACLE_BASE}
setenv ORACLE_SID db22
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib:/usr/lib/X11
setenv ORACLE_PATH $ORACLE_HOME/bin:/usr/bin:/usr/local/bin
setenv ORA_CLIENT_LIB shared
setenv TWO_TASK $ORACLE_SID
set path = ($ORACLE_HOME/bin /bin /usr/bin /usr/local/bin /sbin /usr/sbin /usr/bin/X11 .)
sqlplus $1 $2 $3

# OR
#sqlplus $1@$ORACLE_SID

Also verify the oracle user owns the oracle directory tree.

6) Check the pfile, verify the settings detailed below. For this example,
the machine should have at least 512mb of memory, to handle the OS and
other processes.

# 100 MB shared pool memory
shared_pool_size = 104857600

# 65 processes need 130 MB of additional memory

processes = 65
sessions = 65

Solaris: also check the "shared memory" and "semaphores" settings
in the /etc/system file.
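Typical /etc/system entries for Oracle on Solaris look like the following (the values are illustrative and must be sized for the machine; a reboot is required after changes):

```
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
```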

7) Look at sqlnet.log ; also check the alert log in
$ORACLE_HOME/admin/$ORACLE_SID/bdump

8) Verify the Oracle version, SQLNet version, and patched OS are all compatible.

9) If the problem is still a mystery, the server may need to be restarted.

Explain Plan: syntax

Below is sample syntax for explain plan ( getting output from the optimizer )

delete from plan_table
where statement_id = '9999';
commit;
COL operation FORMAT A30
COL options FORMAT A15
COL object_name FORMAT A20

/* ------ Your SQL here ------*/

EXPLAIN PLAN set statement_id = '9999' for
select count(1) from asia_monthly_pricing_data where order_id > 5000
/

/*----------------------------*/

select operation, options, object_name
from plan_table
where statement_id = '9999'
start with id = 0
connect by prior id=parent_id and prior statement_id = statement_id;

exit
/
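On Oracle9i Release 2 and later, the DBMS_XPLAN package can format the plan without the CONNECT BY query; a sketch:

```sql
-- Format the plan stored under statement_id '9999'
SELECT *
FROM   TABLE( DBMS_XPLAN.DISPLAY('PLAN_TABLE', '9999') );
```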

Oracle 9i Articles

• Oracle9i Database: New Features For Administrators OCP Exam Articles
• Oracle9i Database Release 2: New Features
• Oracle9i Installation Articles
• Oracle9i XML Articles
• Oracle9i Web Articles
• Oracle9i Miscellaneous Articles
• Oracle9i Application Server Articles

Oracle9i Database: New Features For Administrators OCP Exam Articles

Oracle9i: New Features For Administrators Exam Tips - Some tips on what to expect when upgrading your
OCP to Oracle9i.

ANSI ISO SQL Support In Oracle9i - Check out the new SQL enhancements to bring Oracle in line with the
ANSI/ISO SQL: 1999 standards.

Automatic Segment Free Space Management - Reduce segment header contention and wasted space
within blocks by switching from using FreeLists to Automatic Segment Free Space Management.

Automatic Undo Management - Replace your existing rollback segments with self-tuning undo segments.

Bitmap Join Indexes - Increase the performance and reduce the size of your data warehouse.

Character Semantics And Globalization - Prevent storage problems associated with multibyte character
sets by using character semantics when defining table columns.

Data Guard - Oracle9i Data Guard is the new name for Oracle8i Standby Database incorporating a large
number of new features. This article discusses those relevant to the Oracle9i Database: New Features For
Administrators OCP exam.

Export And Import Enhancements In Oracle9i - See what new functionality has been incorporated into the
Export and Import utilities in Oracle9i.

Flashback Query - Get a consistent view of your data at a previous point in time.

High Availability Enhancements in Oracle9i - Reduce downtime for users by using the new high availability
features of Oracle9i.

Index Organized Table (IOT) Enhancements in Oracle9i - Better support for secondary indexes and
improved online maintenance make IOTs nearly as flexible as heap tables.

Index Skip Scanning - See how Oracle9i uses composite indexes even when the leading edge is not
referenced.

LOB Enhancements In Oracle9i - Kickstart the conversion from LONG and RAW columns to CLOB and BLOB
columns.

Logminer Enhancements In Oracle9i - A summary of the new features introduced in Oracle9i.

Memory Management In Oracle9i - See how easy memory management has become in Oracle9i.

Metadata API - Extract DDL or XML definitions of all database objects using this simple API.

Multiple Block Sizes - Improve efficiency by having multiple block sizes that relate more closely to the
operations that are being performed.

Oracle Managed Files - Let Oracle manage operating system level file placement, naming and cleanup.

Partitioning Enhancements In Oracle9i - Check out the additions and enhancements to partitioning in
Oracle9i.
Performance Enhancements In Oracle9i - Read about some of the new performance features in Oracle9i.

Persistent Initialization Parameters - Make database parameter changes persist between shutdowns.

Real Application Clusters - A brief introduction to the Oracle9i replacement for Oracle Parallel Server.

Recovery Enhancements In Oracle9i - Reduce unplanned downtime by using the new crash, instance and
media recovery features of Oracle9i.

Recovery Manager (RMAN) Enhancements In Oracle9i - Use the latest RMAN features which make backup
and recovery quicker and more reliable.

Resource Manager Enhancements In Oracle9i - Manage system resources more precisely using the
Resource Manager enhancements in Oracle9i.

Resumable Space Allocation - Make long running operations suspend rather than abort if they encounter
space errors.

Scalable Session Management - Learn about the new session management features in Oracle9i.

Security Enhancements In Oracle9i - A run through the new security features available in Oracle9i,
focusing on those relevant for the Oracle9i Database: New Features For Administrators OCP exam.

SQL New Features In Oracle9i - Check out the new SQL features with emphasis on those relevant for the
Oracle9i Database: New Features For Administrators OCP exam.

Workspace Management In Oracle9i - Allow multiple transactionally consistent environments to exist
within one database.

Oracle9i Database Release 2: New Features

Associative Arrays - Oracle9i Release 2 allows you to index-by string values using this renamed collection.

Bulk Binds and Record Processing in Oracle9i Release 2 - Take advantage of bulk binds for performance
improvements whilst using record structures.

DBNEWID Utility - Change the internal DBID and the database name using this new utility without
rebuilding your controlfile.

DBMS_XPLAN - Easily format the output of an explain plan with this replacement for the utlxpls.sql script.

Export BLOB Contents Using UTL_FILE - Use the new UTL_FILE functionality to write binary data to files.

FTP From PL/SQL - A description of two methods for triggering FTP jobs directly from PL/SQL.

InterMedia - Import-Export Of Images - Use Oracle interMedia to store and retrieve images without using
Java stored procedures.

Renaming Columns And Constraints - Oracle9i Release 2 now allows the renaming of columns and
constraints. Check out the syntax here.

SQL/XML - Oracle9i Release 2 includes functionality to support the emerging SQL/XML standard to simplify
XML generation from SQL queries.

STATISTICS_LEVEL - Let Oracle9i Release 2 control the collection of statistics and advisories with a single
parameter.

Streams - Based on Advanced Queuing and LogMiner, Oracle Streams form a distributed messaging
technology that can be used for a variety of tasks including messaging, replication and ETL processes.

UTL_FILE Enhancements - Oracle9i Release 2 includes some long overdue enhancements including basic
file handling and support for NCHAR and RAW data.

UTL_FILE - Random Access of Files - Use the UTL_FILE package for random access of files from PL/SQL.

XML DB - Store and retrieve XML documents from the Oracle XML DB repository using HTTP, FTP and
WebDAV in seconds.

XMLSEQUENCE - Use this operator to split multi-value results from XMLTYPE queries into multiple rows.

Oracle9i Installation Articles

Oracle9i (9.2.0.4.0) Installation On Fedora Core 1 Linux - A brief guide to installing Oracle9i (9.2.0.4.0) on
Fedora Core 1 Linux.

Oracle9i (9.2.0.1.0) Installation On RedHat 8.0 Linux - A brief guide to installing Oracle9i (9.2.0.1.0) on
RedHat 8.0 Linux.

Oracle9i (9.2.0.1.0) Installation On RedHat 9.0 Linux - A brief guide to installing Oracle9i (9.2.0.1.0) on
RedHat 9.0 Linux.

Oracle9i (9.2.0.4.0) Installation On RedHat Advanced Server 2.1 Linux - A brief guide to installing Oracle9i
(9.2.0.4.0) on RedHat Advanced Server 2.1 Linux.

Oracle9i (9.2.0.4.0) Installation On RedHat Advanced Server 3.0 Linux - A brief guide to installing Oracle9i
(9.2.0.4.0) on RedHat Advanced Server 3.0 Linux.

Oracle9i (9.2.0.1.0) Installation On Tru64 5.1b - A brief guide to installing Oracle9i (9.2.0.1.0) on Tru64
5.1b.

Oracle9i RAC Installation On Tru64 - A brief guide to installing Oracle9i (9.2.0.4.0) Real Application
Clusters (RAC) on Tru64 5.1b.

Manual Oracle Uninstall - Having trouble removing all Oracle software using the OUI? Try these methods.

Oracle9i XML Articles

Load XMLTYPE From File - A simple method to load XMLTYPE data from a file.

Parse XML Documents - Explode unstructured XML documents into relational tables using the new
integrated XDB packages.

XML DB - Store and retrieve XML documents from the Oracle XML DB repository using HTTP, FTP and
WebDAV in seconds.

SQL/XML - Oracle9i Release 2 includes functionality to support the emerging SQL/XML standard to simplify
XML generation from SQL queries.

XMLSEQUENCE - Use this operator to split multi-value results from XMLTYPE queries into multiple rows.

XMLType Datatype - Store XML documents in tables and query them using SQL.

XML Generation In Oracle9i Using DBMS_XMLQuery, DBMS_XMLGen, Sys_XMLGen And Sys_XMLAgg -
Generate XML and perform XSL transformations with ease using the new XML features of Oracle9i.

XML-Over-HTTP - XML-over-HTTP was the precursor to web services allowing easy access to XML via HTTP
GETs and POSTs.

XSQL Servlet and XSQL Pages - Publish dynamic XML documents through HTTP using the XSQL Servlet
utility.

Oracle9i Web Articles

Consuming Web Services - Access web services directly from PL/SQL using this simple API.

Email From PL/SQL In Oracle9i - Email from PL/SQL rather than using external procedures or Java.

File Upload and Download Procedures - Upload and download files directly from the database using
Database Access Descriptors.

FTP From PL/SQL - A description of two methods for triggering FTP jobs directly from PL/SQL.

Images from Oracle Over HTTP - Retrieve images directly from the database over HTTP.
Java Server Pages - Use Java as a scripting language to interact with the database from web pages.

PL/SQL Server Pages - Use PL/SQL as a scripting language to generate web pages directly from the
database.

PL/SQL Web Toolkit - Generate web pages directly from the database using this simple toolkit.

SQL*Plus Web Reports - Generate HTML reports directly from SQL*Plus.

Stateless Locking Methods - Learn how to avoid data loss in stateless environments.

Oracle9i Miscellaneous Articles

Advanced Queuing In Oracle9i - Get to grips with the basics of advanced queuing in Oracle9i.

Archivelog Mode On RAC - The differences between resetting the archive log mode on a single node
instance and a Real Application Cluster (RAC).

CASE Expressions And Statements - Learn how to use CASE expressions in both SQL and PL/SQL. In
addition, learn how to use the CASE statement in PL/SQL.

Complete Data Audit - A simple and generic solution for auditing before and after snapshots of data.

Compressed Tables - Compress whole tables or individual table partitions to reduce disk space
requirements.

Duplicate a Database Using RMAN - Use RMAN to create a duplicate, or clone, of a database from a recent
backup.

DBMS_LDAP - Accessing LDAP From PL/SQL - Use the DBMS_LDAP package to query and modify LDAP
entries from PL/SQL.

DBMS_LIBCACHE - Warm up the library cache of an instance by compiling the SQL and PL/SQL statements
from the library cache of another instance.

DBMS_PROFILER - Profile the run-time behaviour of PL/SQL code to identify potential bottlenecks.

DBMS_TRACE - Trace the run-time behaviour of PL/SQL code to identify potential bottlenecks.

Dynamic Binds Using Contexts - Simplify dynamic variable binds within dynamic SQL using contexts.

External Tables - Query the contents of flat files as if they were regular tables.

Full Text Indexing using Oracle Text - Efficiently query free text and produce document classification
applications using Oracle Text.

Generating CSV Files - A simple example of using the UTL_FILE package to generate CSV extract files.

Heterogeneous Services - Generic Connectivity In Oracle9i - Query non-Oracle datasources using ODBC.

Java Native Compilation - Improve the performance of Java procedural code by compiling it to native
shared libraries.

Mutating Table Exceptions - A simple method to prevent triggers producing mutating table exceptions.

Oracle Internet Directory - Use the Oracle Internet Directory to replace local Oracle Net configuration files
and Oracle Names Server.

Oracle Label Security - Configure row-level security with this out-of-the-box solution.

Pipelined Table Functions - Improve performance of ETL processes by pipelining all transformation
functions.

PL/SQL Native Compilation - Improve the performance of PL/SQL procedural code by compiling it to native
shared libraries.

RANK, DENSE_RANK, FIRST and LAST Analytic Functions - Simple examples of how to use these analytic
functions.

Recovery Manager (RMAN) - Explanation of RMAN's basic backup, recovery and reporting functionality.

Storing Passwords In The Database - Store passwords securely in the database using this simple hashing
technique.

Transportable Tablespaces - Copy tablespaces to new instances in the time it takes to copy the datafiles.

Unregister a Database From RMAN - A step-by-step guide to unregistering unwanted databases from the
RMAN catalog.

Universal Unique Identifier (UUID) - Reduce data migration and replication issues by replacing sequence
generated IDs with UUIDs.

Useful Procedures And Functions - Procedures and functions you may have overlooked which can come in
useful during development.

Oracle9i Application Server Articles

Oracle9iAS Backup and Recovery - Simplify backup and recovery of Oracle9i Application Server using this
Oracle supplied Perl utility.

Oracle9iAS dcmctl Utility - Speed up 9iAS administration by avoiding Enterprise Manager.

Oracle9iAS (9.0.3.0.0) Installation On RedHat Advanced Server 2.1 - A brief guide to installing Oracle9iAS
(9.0.3.0.0) on RedHat Advanced Server 2.1.

Oracle9iAS (9.0.2.0.1) Portal Installation On Tru64 5.1b - A brief guide to installing Oracle9iAS (9.0.2.0.1)
Portal on Tru64 5.1b.

Oracle9iAS (9.0.2.0.1) Portal Installation On Windows 2000 - A brief guide to installing Oracle9iAS
(9.0.2.0.1) Portal on Windows 2000.

SQL*Loader

Maximizing SQL*Loader Performance

Use Direct Path Loads - The conventional path loader essentially loads the data by using standard
insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates
blocks in Oracle database block format. There are certain cases, however, in which direct path loads
cannot be used (clustered tables). To prepare the database for direct path loads, the script
$ORACLE_HOME/rdbms/admin/catldr.sql must be executed.

Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and
constraints can greatly enhance the performance.

Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of
calls to the database and increase performance. The size of the bind array is specified using the bindsize
parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the
maximum length of each row.

Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter
specifies the number of rows per commit. Issuing fewer commits will enhance performance.
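For example, a conventional load that enlarges the bind array and commits every 10000 rows might be invoked as follows (the control file name is illustrative):

```
$ sqlldr userid=scott/tiger control=load.ctl bindsize=512000 rows=10000
```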

Use Parallel Loads. Available with direct path data loads only, this option allows multiple
SQL*Loader jobs to execute concurrently.

$ sqlldr control=first.ctl parallel=true direct=true
$ sqlldr control=second.ctl parallel=true direct=true

Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the
data. The savings can be tremendous.

Disable Archiving During Load. While this may not be feasible in certain environments, disabling
database archiving can increase performance considerably.

Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the
data to the redo logs. This option is available for direct path loads only.
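In the control file, the keyword precedes LOAD DATA; a sketch (the table and file names are illustrative):

```
UNRECOVERABLE LOAD DATA
INFILE 'big_load.dat'
APPEND
INTO TABLE sales
FIELDS TERMINATED BY ','
(sale_id, amount)
```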

Using the table table_with_one_million_rows, the following benchmark tests were performed with the
various SQL*Loader options. The table was truncated after each test.

SQL*Loader Option                          Elapsed Time (Seconds)   Time Reduction
direct=false rows=64                       135                      -
direct=false bindsize=512000 rows=10000    92                       32%
direct=false bindsize=512000 rows=10000,
  database in noarchivelog                 85                       37%
direct=true                                47                       65%
direct=true unrecoverable                  41                       70%
direct=true unrecoverable,
  fixed width data                         41                       70%

The results above indicate that conventional path loads take the longest. However, the bindsize and rows
parameters can aid the performance under these loads. The test involving the conventional load didn’t
come close to the performance of the direct path load with the unrecoverable option specified.
It is also worth noting that the fastest import time achieved for this table (earlier) was 67 seconds,
compared to 41 for SQL*Loader direct path – a 39% reduction in execution time. This proves that
SQL*Loader can load the same data faster than import. These tests did not compensate for indexes. All
database load operations will execute faster when indexes are disabled.

SQL*Loader Control File

The control file is a text file written in a language that SQL*Loader understands. The control file describes
the task that the SQL*Loader is to carry out. The control file tells SQL*Loader where to find the data, how
to parse and interpret the data, where to insert the data, and more. See Chapter 4, "SQL*Loader Case
Studies" for example control files.
Although not precisely defined, a control file can be said to have three sections:

1. The first section contains session-wide information, for example:
o global options such as bindsize, rows, records to skip, etc.
o INFILE clauses to specify where the input data is located
o data character set specification
2. The second section consists of one or more "INTO TABLE" blocks. Each of these blocks contains
information about the table into which the data is to be loaded, such as the table name and the
columns of the table.
3. The third section is optional and, if present, contains input data.
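Putting the three sections together, a minimal control file might look like this (the table, columns and data values are illustrative):

```
OPTIONS (ROWS=5000)                 -- section 1: session-wide options
LOAD DATA
INFILE *                            -- data follows in this file
INTO TABLE dept                     -- section 2: target table and columns
FIELDS TERMINATED BY ','
(deptno, dname, loc)
BEGINDATA                           -- section 3: inline data
10,ACCOUNTING,CLEVELAND
20,RESEARCH,SARATOGA
```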

Examples

Case 1: Loading Variable-Length Data


Loads stream format records in which the fields are delimited by commas and may be
enclosed by quotation marks. The data is found at the end of the control file.

Case 2: Loading Fixed-Format Fields:


Loads a datafile with fixed-length fields, stream-format records, all records the same length.

Case 3: Loading a Delimited, Free-Format File


Loads data from stream format records with delimited fields and sequence numbers. The
data is found at the end of the control file.

Case 4: Loading Combined Physical Records


Combines multiple physical records into one logical record corresponding to one database
row

Case 5: Loading Data into Multiple Tables


Loads data into multiple tables in one run

Case 6: Loading Using the Direct Path Load Method


Loads data using the direct path load method

Case 7: Extracting Data from a Formatted Report


Extracts data from a formatted report

Case 8: Loading Partitioned Tables


Loads partitioned tables.

Case 9: Loading LOBFILEs (CLOBs)


Adds a CLOB column called RESUME to the table emp, uses a FILLER field (RES_FILE), and
loads multiple LOBFILEs into the emp table.
Case 10: How to use TRIM, TO_NUMBER, TO_CHAR, User Defined Functions with SQL*Loader
How to use the functions TRIM, TO_CHAR/TO_NUMBER, and user defined functions in
connection with SQL*Loader

Case 11: Calling Stored Functions


How to call a Function from SQL*Loader

OPTIONS Clause

Continue Interrupted Load

Identifying Data Files

Loading into Non-Empty Tables

Loading into Multiple Tables

Case 1: Loading Variable-Length Data

• A simple control file identifying one table and three columns to be loaded.
• Including data to be loaded from the control file itself, so there is no separate datafile.
• Loading data in stream format, with both types of delimited fields -- terminated and enclosed.

Control File
1) LOAD DATA
2) INFILE *
3) INTO TABLE dept
4) FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
5) (deptno, dname, loc)
6) BEGINDATA
12,RESEARCH,"SARATOGA"
10,"ACCOUNTING",CLEVELAND
11,"ART",SALEM
13,FINANCE,"BOSTON"
21,"SALES",PHILA.
22,"SALES",ROCHESTER
42,"INT'L","SAN FRAN"

Notes:

1. The LOAD DATA statement is required at the beginning of the control file.
2. INFILE * specifies that the data is found in the control file and not in an external file.
3. The INTO TABLE statement is required to identify the table (DEPT) to be loaded into. By default,
SQL*Loader requires the table to be empty before it inserts any records.
4. FIELDS TERMINATED BY specifies that the data is terminated by commas, but may also be
enclosed by quotation marks. Datatypes for all fields default to CHAR.
5. Specifies that the names of columns to load are enclosed in parentheses. Since no datatype is
specified, the default is a character of length 255.
6. BEGINDATA specifies the beginning of the data.

Invoking SQL*Loader
To run this example, invoke SQL*Loader with the command:
sqlldr userid=scott/tiger control=ulcase1.ctl log=ulcase1.log

Case 2: Loading Fixed-Format Fields

• A separate datafile.
• Data conversions.

In this case, the field positions and datatypes are specified explicitly.
Control File
1) LOAD DATA
2) INFILE 'ulcase2.dat'
3) INTO TABLE emp
4) (empno POSITION(01:04) INTEGER EXTERNAL,
ename POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
5) deptno POSITION(50:51) INTEGER EXTERNAL,
6) modifieddate "SYSDATE",
7) customerid constant "0"
)
Notes:

1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the keyword INFILE.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into
that column. EMPNO, ENAME, JOB, and so on are names of columns in table EMP. The datatypes
(INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of data fields in the file,
not of corresponding columns in the EMP table.
5. Note that the set of column specifications is enclosed in parentheses.
6. This statement inserts the current SYSDATE into this field.
7. This statement loads a constant value.

Datafile

Below are a few sample data lines from the file ULCASE2.DAT. Blank fields are set to null automatically.

7782 CLARK MANAGER 7839 2572.50 10


7839 KING PRESIDENT 5500.00 10
7934 MILLER CLERK 7782 920.00 10
7566 JONES MANAGER 7839 3123.75 20
7499 ALLEN SALESMAN 7698 1600.00 300.00 30
7654 MARTIN SALESMAN 7698 1312.50 1400.00 30
Case 3: Loading a Delimited, Free Format File
• Loading data (enclosed and terminated) in stream format
• Loading dates using the datatype DATE
• Using SEQUENCE numbers to generate unique keys for loaded data
• Using APPEND to indicate that the table need not be empty before inserting new records
• Using Comments in the control file set off by double dashes
• Overriding general specifications with declarations for individual fields

In this case, the field positions and datatypes are specified explicitly.
Control File

This control file loads the same table as in Case 2, but it loads three additional columns (HIREDATE,
PROJNO, LOADSEQ). The demonstration table EMP does not have columns PROJNO and LOADSEQ. So if
you want to test this control file, add these columns to the EMP table with the command:
ALTER TABLE EMP ADD (PROJNO NUMBER, LOADSEQ NUMBER)
The data is in a different format than in Case 2. Some data is enclosed in quotation marks, some is set off
by commas, and the values for DEPTNO and PROJNO are separated by a colon.
1) -- Variable-length, delimited and enclosed data format
LOAD DATA
2) INFILE *
3) APPEND
INTO TABLE emp
4) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, mgr,
5) hiredate DATE(20) "DD-Month-YYYY",
sal, comm, deptno CHAR TERMINATED BY ':',
projno,
6) loadseq SEQUENCE(MAX,1))
7) BEGINDATA
8) 7782, "Clark", "Manager", 7839, 09-June-1981, 2572.50,, 10:101
7839, "King", "President", , 17-November-1981,5500.00,,10:102
7934, "Miller", "Clerk", 7782, 23-January-1982, 920.00,, 10:102
7566, "Jones", "Manager", 7839, 02-April-1981, 3123.75,, 20:101
7499, "Allen", "Salesman", 7698, 20-February-1981, 1600.00,
(same line continued) 300.00, 30:103
7654, "Martin", "Salesman", 7698, 28-September-1981, 1312.50,
(same line continued) 1400.00, 3:103
7658, "Chan", "Analyst", 7566, 03-May-1982, 3450,, 20:101
Notes:

1. Comments may appear anywhere in the command section of the file, but they should not appear in
the data. Each comment is introduced by a double dash, which may appear anywhere on a line.
2. INFILE * specifies that the data is found at the end of the control file.
3. Specifies that the data can be loaded even if the table already contains rows. That is, the table
need not be empty.
4. The default terminator for the data fields is a comma, and some fields may be enclosed by double
quotation marks (").
5. The data to be loaded into column HIREDATE appears in the format DD-Month-YYYY. The length of
the date field is a maximum of 20. If a length is not specified, then the length depends on the
length of the date mask.
6. The SEQUENCE function generates a unique value in the column LOADSEQ. This function finds the
current maximum value in column LOADSEQ and adds the increment (1) to it to obtain the value
for LOADSEQ for each row inserted.
7. BEGINDATA specifies the end of the control information and the beginning of the data.
8. Although each physical record equals one logical record, the fields vary in length so that some
records are longer than others. Note also that several rows have null values for COMM.
Case 4: Loading Combined Physical Records

• Combining multiple physical records to form one logical record with CONTINUEIF
• Inserting negative numbers.
• Indicating with REPLACE that the table should be emptied before the new data is inserted
• Specifying a discard file in the control file using DISCARDFILE
• Specifying a maximum number of discards using DISCARDMAX
• Rejecting records due to duplicate values in a unique index or due to invalid data values

Control File
LOAD DATA
INFILE 'ulcase4.dat'
1) DISCARDFILE 'ulcase4.dsc'
2) DISCARDMAX 999
3) REPLACE
4) CONTINUEIF THIS (1) = '*'
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
deptno POSITION(50:51) INTEGER EXTERNAL,
hiredate POSITION(52:60) INTEGER EXTERNAL)
Notes:

• DISCARDFILE specifies a discard file named ULCASE4.DSC.


• DISCARDMAX specifies a maximum of 999 discards allowed before terminating the run (for all
practical purposes, this allows all discards).
• REPLACE specifies that if there is data in the table being loaded, then SQL*Loader should delete
that data before loading new data.
• CONTINUEIF THIS specifies that if an asterisk is found in column 1 of the current record, then the
next physical record after that record should be appended to it to form the logical record. Note that
column 1 in each physical record should then contain either an asterisk or a non-data value.

Data File
The datafile for this case, ULCASE4.DAT, is listed below. Note the asterisks in the first position and, though
not visible, a new line indicator is in position 20 (following "MA", "PR", and so on). Note that CLARK's
commission is -10, and SQL*Loader loads the value converting it to a negative number.

*7782 CLARK MANAGER 7839 2572.50 -10 2512-NOV-85


*7839 KING PRESIDENT 5500.00 2505-APR-83
*7934 MILLER CLERK 7782 920.00 2508-MAY-80
*7566 JONES MANAGER 7839 3123.75 2517-JUL-85
*7499 ALLEN SALESMAN 7698 1600.00 300.00 25 3-JUN-84
*7654 MARTIN SALESMAN 7698 1312.50 1400.00 2521-DEC-85
*7658 CHAN ANALYST 7566 3450.00 2516-FEB-84
* CHEN ANALYST 7566 3450.00 2516-FEB-84
*7658 CHIN ANALYST 7566 3450.00 2516-FEB-84

Rejected Records
The last two records are rejected, given two assumptions. If there is a unique index created on column
EMPNO, then the record for CHIN will be rejected because his EMPNO is identical to CHAN's. If EMPNO is
defined as NOT NULL, then CHEN's record will be rejected because it has no value for EMPNO.
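These two assumptions correspond to schema declarations along the following lines (a sketch; the index and constraint names are illustrative, not taken from the case study):

```sql
-- A unique index on EMPNO makes CHIN's record a duplicate of CHAN's
CREATE UNIQUE INDEX emp_empno_ux ON emp (empno);

-- A NOT NULL constraint on EMPNO rejects CHEN's record, which has no EMPNO
ALTER TABLE emp MODIFY (empno NUMBER(4) NOT NULL);
```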
Case 5: Loading Data in Multiple Tables

• Loading multiple tables


• Using SQL*Loader to break down repeating groups in a flat file and load the data into normalized
tables -- one file record may generate multiple database rows
• Deriving multiple logical records from each physical record
• Using a WHEN clause
• Loading the same field (EMPNO) into multiple tables

Control File
-- Loads EMP records from first 23 characters
-- Creates and loads PROJ records for each PROJNO listed
-- for each employee
LOAD DATA
INFILE 'ulcase5.dat'
BADFILE 'ulcase5.bad'
DISCARDFILE 'ulcase5.dsc'
1) REPLACE
2) INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
deptno POSITION(17:18) CHAR,
mgr POSITION(20:23) INTEGER EXTERNAL)
2) INTO TABLE proj
-- PROJ has two columns, both not null: EMPNO and PROJNO
3) WHEN projno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
3) projno POSITION(25:27) INTEGER EXTERNAL) -- 1st proj
2) INTO TABLE proj
4) WHEN projno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
4) projno POSITION(29:31) INTEGER EXTERNAL) -- 2nd proj
2) INTO TABLE proj
5) WHEN projno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
5) projno POSITION(33:35) INTEGER EXTERNAL) -- 3rd proj

Notes:

• REPLACE specifies that if there is data in the tables to be loaded (EMP and PROJ), SQL*Loader
should delete the data before loading new rows.
• Multiple INTO clauses load two tables, EMP and PROJ. The same set of records is processed three
times, using different combinations of columns each time to load table PROJ.
• WHEN loads only rows with non-blank project numbers. When PROJNO is defined as columns
25...27, rows are inserted into PROJ only if there is a value in those columns.
• When PROJNO is defined as columns 29...31, rows are inserted into PROJ only if there is a value in
those columns.
• When PROJNO is defined as columns 33...35, rows are inserted into PROJ only if there is a value in
those columns.

Data File

1234 BAKER 10 9999 101 102 103


1234 JOKER 10 9999 777 888 999
2664 YOUNG 20 2893 425 abc 102
5321 OTOOLE 10 9999 321 55 40
2134 FARMER 20 4555 236 456
2414 LITTLE 20 5634 236 456 40
6542 LEE 10 4532 102 321 14
2849 EDDS xx 4555 294 40
4532 PERKINS 10 9999 40
1244 HUNT 11 3452 665 133 456
123 DOOLITTLE 12 9940 132
1453 MACDONALD 25 5532 200

Case 6: Loading Using the Direct Path Load Method

• Use of the direct path load method to load and index data
• How to specify the indexes for which the data is pre-sorted.
• Loading all-blank numeric fields as null
• The NULLIF clause
• Note: Specify the name of the table into which you want to load data; otherwise, you will see LDR-
927. Specifying DIRECT=TRUE as a command-line parameter is not an option when loading into a
synonym for a table.

In this example, field positions and datatypes are specified explicitly.


Control File
LOAD DATA
INFILE 'ulcase6.dat'
INSERT
INTO TABLE emp
1) SORTED INDEXES (empix)
2)(empno POSITION(01:04) INTEGER EXTERNAL NULLIF empno=BLANKS,
ename POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL NULLIF mgr=BLANKS,
sal POSITION(32:39) DECIMAL EXTERNAL NULLIF sal=BLANKS,
comm POSITION(41:48) DECIMAL EXTERNAL NULLIF comm=BLANKS,
deptno POSITION(50:51) INTEGER EXTERNAL NULLIF deptno=BLANKS)
Notes:

• The SORTED INDEXES clause identifies the indexes on which the data is presorted. This clause
indicates that the datafile is sorted on the columns in the EMPIX index. It allows SQL*Loader to
optimize index creation by eliminating the sort phase for this data when using the direct path
load method.
• The NULLIF...BLANKS clause specifies that the column should be loaded as NULL if the field in the
datafile consists of all blanks.

Case 7: Extracting Data from a Formatted Report

• Using SQL*Loader with an INSERT trigger


• Use of the SQL string to manipulate data
• Different initial and trailing delimiters
• Use of SYSDATE
• Use of the TRAILING NULLCOLS clause
• Ambiguous field length warnings
Note: This example creates a trigger that uses the last value of unspecified fields.
Data File
The following listing of the report shows the data to be loaded:
Today's Newly Hired Employees

Dept Job Manager MgrNo Emp Name EmpNo Salary (Comm)


---- -------- -------- ----- -------- ----- --------- ------
20 Salesman Blake 7698 Shepard 8061 $1,600.00 (3%)
Falstaff 8066 $1,250.00 (5%)
Major 8064 $1,250.00 (14%)
30 Clerk Scott 7788 Conrad 8062 $1,100.00
Ford 7369
DeSilva 8063 $800.00
Manager King 7839 Provo 8065 $2,975.00

Insert Trigger

In this case, a BEFORE INSERT trigger is required to fill in department number, job name, and manager's
number when these fields are not present on a data line. When values are present, they should be saved
in a global variable. When values are not present, the global variables are used.
The INSERT trigger and the package defining the global variables are:
CREATE OR REPLACE PACKAGE uldemo7 AS -- Global Package Variables
last_deptno NUMBER(2);
last_job VARCHAR2(9);
last_mgr NUMBER(4);
END uldemo7;
/

CREATE OR REPLACE TRIGGER uldemo7_emp_insert


BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
IF :new.deptno IS NOT NULL THEN
uldemo7.last_deptno := :new.deptno; -- save value for later
ELSE
:new.deptno := uldemo7.last_deptno; -- use last valid value
END IF;
IF :new.job IS NOT NULL THEN
uldemo7.last_job := :new.job;
ELSE
:new.job := uldemo7.last_job;
END IF;
IF :new.mgr IS NOT NULL THEN
uldemo7.last_mgr := :new.mgr;
ELSE
:new.mgr := uldemo7.last_mgr;
END IF;
END;
/

Note: The phrase FOR EACH ROW is important. If it were not specified, the INSERT trigger would fire only
once for each array of inserts, because SQL*Loader uses the array interface.

Control File
LOAD DATA
INFILE 'ULCASE7.DAT'
APPEND
INTO TABLE emp
1) WHEN (57) = '.'
2) TRAILING NULLCOLS
3) (hiredate SYSDATE,
4) deptno POSITION(1:2) INTEGER EXTERNAL(3)
5) NULLIF deptno=BLANKS,
job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
6) NULLIF job=BLANKS "UPPER(:job)",
7) mgr POSITION(28:31) INTEGER EXTERNAL
TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
ename POSITION(34:41) CHAR
TERMINATED BY WHITESPACE "UPPER(:ename)",
empno POSITION(45) INTEGER EXTERNAL
TERMINATED BY WHITESPACE,
sal POSITION(51) CHAR TERMINATED BY WHITESPACE
8) "TO_NUMBER(:sal,'$99,999.99')",
9) comm INTEGER EXTERNAL ENCLOSED BY '(' AND '%'
":comm * 100"
)
Notes:

• The decimal point in column 57 (the salary field) identifies a line with data on it. All other lines in
the report are discarded.
• The TRAILING NULLCOLS clause causes SQL*Loader to treat any fields that are missing at the end
of a record as null. Because the commission field is not present for every record, this clause says to
load a null commission instead of rejecting the record when only six fields are found instead of the
expected seven.
• Employee's hire date is filled in using the current system date.
• This specification generates a warning message because the specified length does not agree with
the length determined by the field's position. The specified length (3) is used.
• Because the report only shows department number, job, and manager when the value changes,
these fields may be blank. This control file causes them to be loaded as null, and an RDBMS insert
trigger fills in the last valid value.
• The SQL string changes the job name to uppercase letters.
• It is necessary to specify starting position here. If the job field and the manager field were both
blank, then the job field's TERMINATED BY BLANKS clause would cause SQL*Loader to scan
forward to the employee name field. Without the POSITION clause, the employee name field would
be mistakenly interpreted as the manager field.
• Here, the SQL string translates the field from a formatted character string into a number. The
numeric value takes less space and can be printed with a variety of formatting options.
• In this case, different initial and trailing delimiters pick the numeric value out of a formatted field.
The SQL string then converts the value to its stored form.

Case 8: Loading Partitioned Tables

• Partitioning of data
• Explicitly defined field positions and datatypes.
• Loading using the fixed record length option

Control File
LOAD DATA
1) INFILE 'ulcase10.dat' "fix 129"
BADFILE 'ulcase10.bad'
TRUNCATE
INTO TABLE lineitem
PARTITION (ship_q1)
2) (l_orderkey position (1:6) char,
l_partkey position (7:11) char,
l_suppkey position (12:15) char,
l_linenumber position (16:16) char,
l_quantity position (17:18) char,
l_extendedprice position (19:26) char,
l_discount position (27:29) char,
l_tax position (30:32) char,
l_returnflag position (33:33) char,
l_linestatus position (34:34) char,
l_shipdate position (35:43) char,
l_commitdate position (44:52) char,
l_receiptdate position (53:61) char,
l_shipinstruct position (62:78) char,
l_shipmode position (79:85) char,
l_comment position (86:128) char)
Notes:

• Specifies that each record in the datafile is of fixed length (129 characters in this example). See
Input Data and Datafiles.
• Identifies the column name and location of the data in the datafile to be loaded into each column.

Table Creation
To partition the data, the LINEITEM table is created with four (4) partitions according to the
shipment date:
Create table lineitem
(l_orderkey number,
l_partkey number,
l_suppkey number,
l_linenumber number,
l_quantity number,
l_extendedprice number,
l_discount number,
l_tax number,
l_returnflag char,
l_linestatus char,
l_shipdate date,
l_commitdate date,
l_receiptdate date,
l_shipinstruct char(17),
l_shipmode char(7),
l_comment char(43)
)
partition by range (l_shipdate)
(
partition ship_q1 values less than (TO_DATE('01-APR-1996', 'DD-MON-YYYY'))
tablespace p01,
partition ship_q2 values less than (TO_DATE('01-JUL-1996', 'DD-MON-YYYY'))
tablespace p02,
partition ship_q3 values less than (TO_DATE('01-OCT-1996', 'DD-MON-YYYY'))
tablespace p03,
partition ship_q4 values less than (TO_DATE('01-JAN-1997', 'DD-MON-YYYY'))
tablespace p04
)
Input Data File

The datafile for this case, ULCASE8.DAT, is listed below. Each record is 129 characters in length. Note that
five(5) blanks precede each record in the file.
1 151978511724386.60 7.04.0NO09-SEP-6412-FEB-9622-MAR-96DELIVER IN PERSONTRUCK
iPBw4mMm7w7kQ zNPL i261OPP
1 2731 73223658958.28.09.06NO12-FEB-9628-FEB-9620-APR-96TAKE BACK RETURN MAIL
5wM04SNyl0AnghCP2nx lAi
1 3370 3713 810210.96 .1.02NO29-MAR-9605-MAR-9631-JAN-96TAKE BACK RETURN REG AIRSQC2C
5PNCy4mM
1 5214 46542831197.88.09.06NO21-APR-9630-MAR-9616-MAY-96NONE AIR Om0L65CSAwSj5k6k
1 6564 6763246897.92.07.02NO30-MAY-9607-FEB-9603-FEB-96DELIVER IN PERSONMAIL
CB0SnyOL PQ32B70wB75k 6Aw10m0wh
1 7403 160524 31329.6 .1.04NO30-JUN-9614-MAR-9601 APR-96NONE FOB C2gOQj OB6RLk1BS15
igN
2 8819 82012441659.44 0.08NO05-AUG-9609-FEB-9711-MAR-97COLLECT COD AIR
O52M70MRgRNnmm476mNm
3 9451 721230 41113.5.05.01AF05-SEP-9629-DEC-9318-FEB-94TAKE BACK RETURN FOB
6wQnO0Llg6y
3 9717 1834440788.44.07.03RF09-NOV-9623-DEC-9315-FEB-94TAKE BACK RETURN SHIP
LhiA7wygz0k4g4zRhMLBAM
3 9844 1955 6 8066.64.04.01RF28-DEC-9615-DEC-9314-FEB-94TAKE BACK RETURN REG
AIR6nmBmjQkgiCyzCQBkxPPOx5j4hB 0lRywgniP1297

Case 9: Loading LOB Files (CLOBs)

• Adding a CLOB column called RESUME to the table emp.


• Using a FILLER field (RES_FILE).
• Loading multiple LOBFILEs into the emp table.
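The first bullet assumes the RESUME column has already been added to the demonstration table, for example with a statement such as:

```sql
ALTER TABLE emp ADD (resume CLOB);
```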

Control File
LOAD DATA
INFILE *
INTO TABLE EMP
REPLACE
FIELDS TERMINATED BY ','
( EMPNO INTEGER EXTERNAL,
ENAME CHAR,
JOB CHAR,
MGR INTEGER EXTERNAL,
SAL DECIMAL EXTERNAL,
COMM DECIMAL EXTERNAL,
DEPTNO INTEGER EXTERNAL,
1) RES_FILE FILLER CHAR,
2) "RESUME" LOBFILE (RES_FILE) TERMINATED BY EOF NULLIF RES_FILE = 'NONE'
)
BEGINDATA
7782,CLARK,MANAGER,7839,2572.50,,10,ulcase91.dat
7839,KING,PRESIDENT,,5500.00,,10,ulcase92.dat
7934,MILLER,CLERK,7782,920.00,,10,ulcase93.dat
7566,JONES,MANAGER,7839,3123.75,,20,ulcase94.dat
7499,ALLEN,SALESMAN,7698,1600.00,300.00,30,ulcase95.dat
7654,MARTIN,SALESMAN,7698,1312.50,1400.00,30,ulcase96.dat
7658,CHAN,ANALYST,7566,3450.00,,20,NONE

Notes:

• This is a FILLER field. The filler field is assigned values from the data field to which it is mapped.
• RESUME is loaded as a CLOB. The LOBFILE clause specifies the field that holds the name of the file
containing the data for the LOB column.
Case 10: How to use TRIM, TO_NUMBER, TO_CHAR with SQL*Loader
This note explains how to use the functions TRIM and TO_CHAR/TO_NUMBER, as well as user-defined
functions, with SQL*Loader, using the following example:

-- Create the table


DROP TABLE TEST;
CREATE TABLE TEST
(
ID NUMBER PRIMARY KEY,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(20) NOT NULL,
DEPARTMENT VARCHAR2(20) NOT NULL,
SALARY NUMBER(10,2) NOT NULL,
BONUS NUMBER(10,2),
DESCRIPTION VARCHAR2(50),
TOTAL NUMBER(10,2)
);

-- Create the user defined functions (used by SQL*Loader)


CREATE OR REPLACE FUNCTION GET_BONUS (dept IN VARCHAR2)
RETURN NUMBER AS
retval NUMBER(10,2);
BEGIN
retval := NULL;
if upper (dept) in ('CLERK', 'SALESMAN') then
if to_char (sysdate, 'YYYY') = '2002' then
retval := 9.2;
else
retval := 7.88;
end if;
elsif upper (dept) = 'ANALYST' then
retval := 18.7;
elsif upper (dept) = 'MANAGER' then
retval := -5.92;
end if;
return (retval);
END;
/

CREATE OR REPLACE FUNCTION CALC_SAL (sal IN NUMBER, bon IN NUMBER)


RETURN NUMBER AS
retval NUMBER(10,2);
BEGIN
if bon is null then
retval := sal;
else
retval := round (sal + (sal * bon / 100), 2);
end if;
return (retval);
END;
/

The table TEST will be loaded with this control file:

LOAD DATA
INFILE *
INTO TABLE TEST
TRUNCATE
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ID SEQUENCE,
FIRST_NAME "TRIM (:FIRST_NAME)", -- 1)
LAST_NAME "TRIM ('x' FROM :LAST_NAME)", -- 2)
DEPARTMENT,
SALARY "TO_NUMBER (:SALARY, '99999D99', -- 3)
'NLS_NUMERIC_CHARACTERS='',.''')",
BONUS "GET_BONUS (:DEPARTMENT)", -- 4)
DESCRIPTION "'Salary: '||:SALARY||' -> '|| -- 5)
DECODE (GET_BONUS (:DEPARTMENT), NULL, 'No bonus',
'Bonus: '||TO_CHAR (GET_BONUS (:DEPARTMENT), 'S09D90'))",
TOTAL "CALC_SAL (TO_NUMBER (:SALARY, '99999D99', -- 6)
'NLS_NUMERIC_CHARACTERS='',.'''), GET_BONUS (:DEPARTMENT))"
)

BEGINDATA
" Martin ";"xxxxxxxSmithxxx";CLERK;2459,25
" David ";"xxxxxxxAllenxxx";SALESMAN;4563,9
" Brad ";"xxxxxxxWardxxxx";SALESMAN;4815,81
" Marvin ";"xxxxxxxJonesxxx";MANAGER;9765,33
" Dean ";"xxxxxxxMartinxx";SALESMAN;4214,56
" John ";"xxxxxxxBlakexxx";MANAGER;10333,87
" Clark ";"xxxxxxxGablexxx";MANAGER;11011,11
" Scott ";"xxxxxxxTigerxxx";ANALYST;6865,88
" Ralph ";"xxxxxxxKingxxxx";PRESIDENT;18955,45
" Tina ";"xxxxxxxTurnerxx";SALESMAN;5324,44
" Bryan ";"xxxxxxxAdamsxxx";CLERK;1899,48
" Jesse ";"xxxxxxxJamesxxx";CLERK;2288,99
" John ";"xxxxxxxFordxxxx";ANALYST;7564,83
" John ";"xxxxxxxMillerxx";CLERK;1865,93

1) TRIM deletes the leading/trailing blanks in the column FIRST_NAME (i.e. " Martin " becomes
"Martin")
2) TRIM deletes the leading/trailing 'x' characters in the column LAST_NAME (i.e. "xxxxxxxSmithxxx"
becomes "Smith")
3) TO_NUMBER declares that the numbers in the column SALARY have the format 99999D99: at most
5 integer digits and at most 2 post-decimal digits, with ',' as the decimal separator. If the format is
not specified, then the records are not loaded (ORA-1722 invalid number, if
NLS_NUMERIC_CHARACTERS = '.,').
4) The column BONUS is calculated with the user-defined function GET_BONUS. The function expects an
input parameter, the department (VARCHAR2), and returns the bonus as a NUMBER.
5) The column DESCRIPTION is composed from the information in the previous columns. The function
DECODE checks whether a bonus is available for the department; if no bonus is available, the message
'No bonus' is printed. New here is the function TO_CHAR, which formats the BONUS as: sign, two
integer digits with leading zeros, decimal separator, and two post-decimal digits with trailing zeros.
6) The column TOTAL is calculated with the user-defined function CALC_SAL (the BONUS, if available, is
applied to the SALARY).
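The TO_NUMBER conversion from notes 3) and 6) can be tried on its own in SQL*Plus; with the decimal separator declared as a comma, a value such as '2459,25' converts cleanly:

```sql
SELECT TO_NUMBER('2459,25', '99999D99',
                 'NLS_NUMERIC_CHARACTERS='',.''') AS salary
FROM dual;
-- SALARY comes back as 2459.25
```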

The result after the loading procedure looks like this in the table TEST:
SQL> select * from test;

ID FIRST_NAME LAST_NAME DEPARTMENT SALARY


---------- -------------------- -------------------- -------------------- ----------
1 Martin Smith CLERK 2459.25
2 David Allen SALESMAN 4563.9
3 Brad Ward SALESMAN 4815.81
4 Marvin Jones MANAGER 9765.33
5 Dean Martin SALESMAN 4214.56
6 John Blake MANAGER 10333.87
7 Clark Gable MANAGER 11011.11
8 Scott Tiger ANALYST 6865.88
9 Ralph King PRESIDENT 18955.45
10 Tina Turner SALESMAN 5324.44
11 Bryan Adams CLERK 1899.48
12 Jesse James CLERK 2288.99
13 John Ford ANALYST 7564.83
14 John Miller CLERK 1865.93

Case 11: Calling a Stored Function from SQL*Loader


This small example shows how to call a stored function. It is possible to call any built-in or user-defined
function during the load process. Usually this is done for date columns, when a non-default date format
must be used; however, user-defined functions may also be called to perform application logic.
The syntax for such calls is quite simple:

LOAD DATA
INFILE *
APPEND
INTO TABLE dept (
deptno POSITION(01:02) INTEGER EXTERNAL,
dname POSITION(03:16) CHAR "LDR_PCK.NAME(:dname, :deptno)",
loc POSITION(17:29) CHAR
)
BEGINDATA
21Dep Loc
22Dep Loc

Here LDR_PCK.NAME is the name of the package and function, and :dname and :deptno are its parameters.

When calling user-defined functions, remember that only the conventional path may be used. An
attempt to use the direct path will cause an error:
SQL*Loader-00417 SQL String (on column column_name) not allowed in direct path.

The OPTIONS Clause

The OPTIONS clause is useful when you usually invoke a control file with the same set of options, or when
the command line and all its arguments become very long. This clause allows you to specify runtime
arguments in the control file rather than on the command line.

SKIP = n -- Number of logical records to skip (DEFAULT 0)


LOAD = n -- Number of logical records to load (DEFAULT all)
ERRORS = n -- Number of errors to allow (DEFAULT 50)
ROWS = n -- Number of rows in conventional path bind array (DEFAULT 64)
BINDSIZE = n -- Size of conventional path bind array in bytes
SILENT = {HEADER | FEEDBACK | ERROR | DISCARDS | ALL }
-- Suppress messages during run
For example:

OPTIONS (BINDSIZE=10000, SILENT=(ERRORS, FEEDBACK) )

Values specified on the command line override values specified in the control file. With this precedence,
the OPTIONS keyword in the control file establishes default values that are easily changed from the
command line.

Continuing Interrupted Loads

If SQL*Loader runs out of space for data rows or index entries, the load is discontinued. (For example, the
table might reach its maximum number of extents.) Discontinued loads can be continued after more space
is made available.

When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a
valid state. SQL*Loader's log file tells you the state of the tables and indexes and the number of logical
records already read from the input data file. Use this information to resume the load where it left off.

For example:

SQLLOAD / CONTROL=FAST1.CTL SKIP=345

The CONTINUE_LOAD DATA statement is used to continue a discontinued direct path load involving multiple
tables with a varying number of records to skip. For more information on this command, see chapter 6 of
the "ORACLE7 Server Utilities User's Guide".
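A hedged sketch of such a statement (table layouts and SKIP counts are illustrative; the real per-table counts come from the log file of the interrupted run):

```sql
CONTINUE_LOAD DATA
INFILE 'ulcase5.dat'
INTO TABLE emp SKIP 345
(empno POSITION(1:4) INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)
INTO TABLE proj SKIP 210
(empno  POSITION(1:4)  INTEGER EXTERNAL,
 projno POSITION(25:27) INTEGER EXTERNAL)
```

Because CONTINUE_LOAD applies to direct path loads, the run would also need the direct path option enabled.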

Identifying Data Files

To specify the file containing the data to be loaded, use the INFILE or INDDN keyword, followed by the
filename. A filename specified on the command line overrides the first INFILE or INDDN statement in the
control file. If no filename is specified, the filename defaults to the control filename with an extension or
filetype of DAT.
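For instance, in a control file named emp.ctl, the following sketch names the datafile explicitly; if the INFILE line were omitted, SQL*Loader would look for emp.dat by default (record layout illustrative):

```sql
LOAD DATA
INFILE 'payroll.dat'   -- a datafile named on the command line overrides this
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)
```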

Loading into Non-Empty Database Tables

SQL*Loader does not update existing records, even if they have null columns. If the tables you are loading
already contain data, you have three choices for how SQL*Loader should proceed:

INSERT - This is the default option. It requires the table to be empty before loading. SQL*Loader
terminates with an error if the table contains rows.

APPEND - If data already exists in the table, SQL*Loader appends the new rows to it; if data doesn't
already exist, the new rows are simply loaded.

REPLACE - All rows in the table are deleted and the new data is loaded. This option requires DELETE
privileges on the table.
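The method keyword goes between the INFILE clause and INTO TABLE; a minimal sketch (layout illustrative):

```sql
LOAD DATA
INFILE 'emp.dat'
APPEND                -- alternatives: INSERT (the default) or REPLACE
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)
```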

You can create one logical record from multiple physical records using CONCATENATE and CONTINUEIF.
See chapter 6 of the "ORACLE7 Server Utilities User's Guide".

Loading Logical Records into Tables


The INTO TABLE clause tells SQL*Loader which table to load data into. To load multiple tables, include
one INTO TABLE clause for each table you wish to load.

The INTO TABLE clause may continue with some options for loading that table. For example, you may
specify different options (INSERT, APPEND, REPLACE) for each table in order to tell SQL*Loader what to do
if data already exists in the table.

The WHEN clause appears after the table name and is followed by one or more field conditions. For
example, the following clause indicates that any record with the value 'q' in the fifth column position
should be loaded:

WHEN (5) = 'q'

A WHEN clause can contain several comparisons as long as each is preceded by AND. Parentheses are
optional but should be used for clarity with multiple comparisons joined by AND. For example:

WHEN (DEPTNO = '10') AND (JOB = 'SALES')

To evaluate the WHEN clause, SQL*Loader first determines the values of all the fields in the record. Then
the WHEN clause is evaluated. A row is inserted into the table only if the WHEN clause is true.
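Putting both forms together, a sketch of a control file that loads only records flagged with 'q' in position 5 and belonging to department 10 (record layout illustrative):

```sql
LOAD DATA
INFILE 'dept.dat'
APPEND
INTO TABLE dept
WHEN (5) = 'q' AND (deptno = '10')
(deptno POSITION(1:2) CHAR,
 dname  POSITION(7:20) CHAR)
```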

When the control file specifies more fields for a record than are present in the record, SQL*Loader must
determine whether the remaining (specified) columns should be considered null, or whether an error
should be generated. The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned
columns that are not present in the record as null columns. For example, if the following data

10 Accounting

is read with the following control file

INTO TABLE dept


TRAILING NULLCOLS
( deptno CHAR TERMINATED BY " ",
dname CHAR TERMINATED BY WHITESPACE,
loc CHAR TERMINATED BY WHITESPACE )

and the record ends after DNAME, then the remaining LOC field is set to null. Without the TRAILING
NULLCOLS clause, an error would be generated, due to missing data.

Oracle datatypes

Datatype summary for Oracle 7, 8 & 9


VARCHAR2(size)
    Variable-length character string having maximum length size bytes. You must specify size.
    Oracle 7: max 2000 bytes, minimum 1. Oracle 8: max 4000 bytes, minimum 1.
    Oracle 9: max 4000 bytes, minimum 1.

NVARCHAR2(size)
    Variable-length national character set string having maximum length size bytes. You must specify size.
    Oracle 7: N/A. Oracle 8: max 4000 bytes, minimum 1. Oracle 9: max 4000 bytes, minimum 1.

VARCHAR
    Now deprecated - VARCHAR is a synonym for VARCHAR2, but this usage may change in future versions.

CHAR(size)
    Fixed-length character data of length size bytes. This should be used for fixed-length data,
    such as codes A100, B102...
    Oracle 7: max 255 bytes. Oracle 8: max 2000 bytes. Oracle 9: max 2000 bytes.
    Default and minimum size is 1 byte.

NCHAR(size)
    Fixed-length national character set data of length size bytes. This should be used for
    fixed-length data, such as codes A100, B102...
    Oracle 7: N/A. Oracle 8: max 2000 bytes. Oracle 9: max 2000 bytes.
    Default and minimum size is 1 byte.

NUMBER(p,s)
    Number having precision p and scale s.
    All versions: the precision p can range from 1 to 38; the scale s can range from -84 to 127.

PLS_INTEGER
    Signed integers. PL/SQL only. PLS_INTEGER values require less storage and provide better
    performance than NUMBER values. So use PLS_INTEGER where you can!

BINARY_INTEGER
    Signed integers (older, slower version of PLS_INTEGER). PL/SQL only.

LONG
    Character data of variable length (a bigger version of the VARCHAR2 datatype).
    Oracle 7: 2 Gigabytes. Oracle 8: 2 Gigabytes. Oracle 9: 2 Gigabytes, but now deprecated.

DATE
    Valid date range.
    Oracle 7: from January 1, 4712 BC to December 31, 4712 AD.
    Oracle 8: from January 1, 4712 BC to December 31, 9999 AD.
    Oracle 9: from January 1, 4712 BC to December 31, 9999 AD.

TIMESTAMP (fractional_seconds_precision)
    fractional_seconds_precision is the number of digits in the fractional part of the SECOND
    datetime field.
    Oracle 9: accepted values of fractional_seconds_precision are 0 to 9 (default = 6).

TIMESTAMP (fractional_seconds_precision) WITH {LOCAL} TIME ZONE
    As above, with a time zone displacement value.
    Oracle 9: accepted values of fractional_seconds_precision are 0 to 9 (default = 6).

INTERVAL YEAR (year_precision) TO MONTH
    Time in years and months, where year_precision is the number of digits in the YEAR
    datetime field.
    Oracle 9: accepted values are 0 to 9 (default = 2).

INTERVAL DAY (day_precision) TO SECOND (fractional_seconds_precision)
    Time in days, hours, minutes, and seconds. day_precision is the maximum number of digits in
    the DAY field; fractional_seconds_precision is the maximum number of fractional digits in the
    SECOND field.
    Oracle 9: day_precision may be 0 to 9 (default = 2); fractional_seconds_precision may be
    0 to 9 (default = 6).

RAW(size)
    Raw binary data of length size bytes. You must specify size for a RAW value.
    Oracle 7: maximum size is 255 bytes. Oracle 8: maximum size is 2000 bytes.
    Oracle 9: maximum size is 2000 bytes.

LONG RAW
    Raw binary data of variable length (not interpreted by PL/SQL).
    Oracle 7: 2 Gigabytes. Oracle 8: 2 Gigabytes. Oracle 9: 2 Gigabytes, but now deprecated.

ROWID
    Hexadecimal string representing the unique address of a row in its table (primarily for
    values returned by the ROWID pseudocolumn).

UROWID
    Hexadecimal string representing the logical address of a row of an index-organized table.
    Oracle 7: N/A. Oracle 8: maximum size and default is 4000 bytes.
    Oracle 9: maximum size and default is 4000 bytes.

MLSLABEL
    Binary format of an operating system label. This datatype is used with Trusted Oracle7.

CLOB
    Character Large Object. Maximum size: 4 Gigabytes.

NCLOB
    National Character Large Object. Maximum size: 4 Gigabytes.

BLOB
    Binary Large Object. Maximum size: 4 Gigabytes.

BFILE
    Pointer to a binary file on disk. Maximum size: 4 Gigabytes.

Notes (pro's & cons)

INTEGER
This ANSI datatype will be accepted by Oracle - it is actually a synonym for NUMBER(38)

VARCHAR2:
Storing character data as Varchar2 will save space:

Store 'SMITH' not 'SMITH '

CHAR:
Over time, when varchar2 columns are updated they will sometimes create chained rows - because CHAR
columns are fixed width they are not affected by this - so less DBA effort is required to maintain
performance.

NUMBER
When retrieving data for a NUMBER column, consider (if you can) using the PL/SQL datatype:
PLS_INTEGER for better performance.
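A small PL/SQL illustration of the point:

```sql
DECLARE
  total PLS_INTEGER := 0;   -- integer arithmetic, cheaper than NUMBER
BEGIN
  FOR i IN 1 .. 100 LOOP
    total := total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(total);   -- prints 5050
END;
/
```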

LONG
You should start using BLOB instead of LONG
Comparison with other RDBMS's

int10 int6 int1 char(n) blob


Oracle 8 NUMBER(10) NUMBER(6) NUMBER(1) VARCHAR2(n) BLOB
Sybase system 10 NUMERIC(10) NUMERIC(6) NUMERIC(1) VARCHAR(n) IMAGE
MS Access 97 Long Int or Double Single Byte TEXT(n) LONGBINARY
TERADATA INTEGER DECIMAL(6) DECIMAL(1) VARCHAR(n) VARBYTE(20480)
DB2 INTEGER DECIMAL(6) DECIMAL(1) VARCHAR(n) VARCHAR(255)
RDB INTEGER DECIMAL(6) DECIMAL(1) VARCHAR(n) LONG VARCHAR
INFORMIX INTEGER DECIMAL(6) DECIMAL(1) VARCHAR(n) BYTE
SYBASE NUMERIC(10) NUMERIC(6) NUMERIC(1) VARCHAR(n) IMAGE
MS SQL Server 6.0 NUMERIC(10) NUMERIC(6) NUMERIC(1) VARCHAR(n) IMAGE
RedBrick integer int int char(n) char(1024)
INGRES INTEGER INTEGER INTEGER VARCHAR(n) VARCHAR(1500)
Types of Indexes

Oracle8i also allows you to rebuild your indexes while online. In the past, creating or rebuilding an index
required a full lock on the table. On a large table, this could mean that an application is unusable for
several hours.
Now, however, Oracle allows you to create or rebuild the index while users can still perform the full range
of data processes. To do this, Oracle creates the index structure before populating it. While populating, all
changes to the table are recorded in a journal table. As the index is completed, the journal table changes
are then built in.
Brief table locks are taken while the index structure is created and the journal table is brought into the index.

To build an index on line, you use the following syntax:


CREATE INDEX my_index ON my_table (my_field) ONLINE;
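An existing index can likewise be rebuilt without taking the table offline (the index name is a placeholder):

ALTER INDEX my_index REBUILD ONLINE;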

The importance of PCTFREE

For a B-Tree index, PCTFREE determines the extent of a leaf-split. In other words, PCTFREE indicates the
amount of free space within a block reserved for "future updates". In the case of an index (unlike a table),
however, such updates do not really occur, since an update of a key value forces a delete and a subsequent
insert. For indexes, PCTFREE plays a role mostly at the time of index creation. A non-zero value specifies
the block-splitting ratio: if PCTFREE is set to 20, then during index creation up to 80% of a leaf block may
contain key information, and the remaining 20% remains available for future inserts of key information into
that leaf block. This ensures that during run-time inserts there is minimal overhead in terms of leaf-block
splits. Even though a higher PCTFREE may cause index-creation time to increase, it prevents the
performance hit from being deferred to actual usage time. So the end user, who is waiting for a row to be
inserted, does not incur the performance penalty associated with a leaf-block split.
Based on this information, the following points emerge:
•PCTFREE for an index is used primarily during creation. It is ignored during actual usage.
•Specify a higher PCTFREE for OLTP applications, if the table is a popular one, incurring a lot of DML
changes (via interactive user-screens).
•If the index-creation time is critical, specify a lower PCTFREE. This will pack more rows into each leaf
block, thereby avoiding the need for further splitting at creation time. This is of paramount significance to
shops that have "24x7" availability requirements. Index creation in most cases requires considerable
downtime (especially if the table is a multi-million row table). The less index-creation time required, the
smaller the maintenance window can be. I have seen this tiny, often unnoticed parameter save around
20% of index-creation time. At a high-availability site, an index on a table containing around 11 million
rows took me about 80 minutes to build using a PCTFREE of 30 and a parallel degree of 4. The same index
on the same table, with 13.5 million rows, took around 90 minutes to create with a PCTFREE of 0
(without any hardware/software enhancements). NOLOGGING (minimal redo) was on during both creations.
•For any column where the values are constantly increasing, it is probably a good idea to set a very low
PCTFREE (even zero). This is because only the rightmost leaf block will ever be inserted into, making the
tree grow towards the right while the leftmost leaves remain static. So there is no sense in
leaving any part of those blocks empty with a non-zero PCTFREE.
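As a sketch (the index, table, and column names are hypothetical), PCTFREE is simply specified in the CREATE INDEX statement:

-- Reserve 20% of each leaf block for future key inserts (busy OLTP table):
CREATE INDEX orders_ix1 ON orders (customer_id) PCTFREE 20;

-- Ever-increasing key (e.g. sequence values): pack the leaves fully.
CREATE INDEX orders_ix2 ON orders (order_id) PCTFREE 0;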

1- Partitioned Indexes
Like tables, indexes can also be partitioned; but with indexes you have more options because the
underlying table might or might not also be partitioned. The objective of this type of index is to separate
the index into smaller partitions, just as we do now for a database table. There are essentially two
different types of partitioned indexes available:
•Global indexes--These are created in a manner different from the underlying partitioning of the table
that is indexed.
•Local indexes--These are partitioned in the same manner as the underlying table partitioning.

Global Indexes
To create a global partitioned index, use the CREATE INDEX parameter GLOBAL. This specifies that the
index will be a global index. Further partitioning of the index is accomplished by using the following
parameters:
•GLOBAL--This parameter specifies a global partitioned index.
•PARTITION part_name--This parameter is used to identify the partition. If you do not specify the
partition name, a default name will be provided. It is not usually necessary to provide the partition
name.
•VALUES LESS THAN--This parameter is used to specify the range that is allocated for that particular
partition, in the same way as the partition was specified in the CREATE TABLE statement (discussed
yesterday).

NOTE: The last partition should contain the keyword MAXVALUE for its range.

Creating a partitioned index.


CREATE INDEX "ETW".dogs_ix1
ON DOGS (ID)
GLOBAL PARTITION BY RANGE (ID)
(PARTITION pt1 VALUES LESS THAN (1000) TABLESPACE ts1,
PARTITION pt2 VALUES LESS THAN (MAXVALUE) TABLESPACE ts2);

ANALYSIS:
This creates two partitions, the first holding values of ID that are less than 1,000, the second holding the
remaining values of ID. If you do not specify partition names, default names will be provided.
While this does offer a great deal of flexibility, it is not without cost. Since global partitioned indexes are
not directly related to any single table partition, operations that affect any partition in the table can render
all global partitioned indexes unusable. Specifically, these operations are:

ADD (HASH) - COALESCE (HASH) – DROP – EXCHANGE – MERGE - MOVE -SPLIT - TRUNCATE

Local Indexes
In contrast to the global index, a local partitioned index is individually created on each partition. If you
specify a local partitioned index, Oracle automatically maintains the index's partitioning along with that of
the underlying table.
Local partitioned indexes are created through the use of the LOCAL parameter with the CREATE INDEX
statement. It is unnecessary to provide partitioning information because the underlying table partitioning
will be used. A local index can be created with the following syntax:

CREATE INDEX "ETW".dogs_ix1
ON DOGS (ID)
LOCAL;

Because the index is local, all partition changes to the table will be automatically reflected on the index
partitions as well.
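To illustrate (assuming DOGS is range-partitioned on ID; the partition bound and tablespace name are hypothetical), adding a partition to the underlying table is enough — Oracle creates the matching local index partition automatically:

ALTER TABLE DOGS ADD PARTITION pt3 VALUES LESS THAN (2000) TABLESPACE ts3;
-- The local index dogs_ix1 gains a corresponding partition; no index DDL is needed.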
Local partitioned indexes have some inherent advantages that are similar to the advantages you get from
partitioned tables. These advantages include the following:
•Because the index exists entirely on one partition, any maintenance operations affect only that one
partition.
•The Oracle optimizer can use the local index to generate better query plans based on the fact that a
local index is used.
•If a partition is lost and must be recovered, only the data and index for that particular partition needs
to be recovered. With a global index, the entire index would need recovery.
As you can see, there are many advantages of using both global and local partitioned indexes.

2- Index-Only Tables
Many systems contain several small tables (1 to 3 columns) where all of the elements form the primary
key. This is typically the case for tables created to physically implement conceptual relations of the type
(0,n)-(0,n). However, there exists an extremely efficient way to create such tables, by using a B*-tree
structure. In Oracle8, an index-organized table, that is, a table with the same physical structure as an
index, allows us to do exactly that. In an index-organized table, the database engine places the data
values in a "table" segment, but with a B*-tree structure.
An index-only table is a schema object introduced in Oracle8. An index-only table is similar to an index,
but whereas an index contains the primary key value and a ROWID pointing to where the data is kept, the
index-only table stores the column data in the leaf block of the index.
Because the leaf blocks of the Oracle index are traditionally very small and tightly packed, there can be
some drawbacks to having large rows stored there. Oracle has developed a way to compensate for this: If
rows become too large (by a set threshold), the row data is stored in an overflow area as specified in the
CREATE TABLE statement. This creates storage more like the traditional index and table relationship.
An index-only table contains the same structure as the Oracle B*-tree index. Only the leaf blocks have
changed. Index-only tables have many of the attributes of both indexes and tables, but there are a few
exceptions:
•Because it is part index and part table, no other indexes can be added to the index-only table.
•The UNIQUE constraint is not allowed on an index-only table.
•A trigger can be added to the index-only table.
•An index-only table cannot be stored in a cluster.
•Replication is not supported at this time.
As you can see, there are some restrictions on index-only tables, but there are also great benefits.
When performing inserts, there is a performance improvement, since only a single segment needs to be
updated: the table itself. Data retrieval is also faster since the optimizer processes the table just
as if it were an index. Furthermore, one read operation is saved since Oracle doesn't have to read the
index prior to performing the read on the table.

When to Use Index-Only Tables


Index-only tables are very useful whenever data is always accessed via the primary key index. If this is
the case with your data, the index-only table will cut down on the space used by both the index and the
table (by combining them) and improve performance. Performance is improved because, by the time the
ROWID would have been retrieved, you have the data.
Tables that are not accessed via the primary key value are not good candidates for index-only tables. Also,
tables whose primary key values are updated and tables that have frequent insertions are not good
candidates for index-only tables.

How to Create Index-Only Tables


Index-only tables are created with the CREATE TABLE command; the ORGANIZATION INDEX qualifier is
used to identify the table as index-only. The following qualifiers are used in creating index-only tables:
•ORGANIZATION INDEX--This qualifier specifies an index-only table organization.
•OVERFLOW TABLESPACE ts_name--This qualifier specifies the overflow tablespace name.
•PCTTHRESHOLD threshold--This qualifier specifies the percentage of a block that a row must be larger
than in order to be offloaded to the overflow tablespace.

Creating an index-only table with the CREATE TABLE command.


CREATE TABLE "ETW".DOGS (
ID NUMBER,
NAME VARCHAR2(40),
OWNER_ID NUMBER,
BREED_ID NUMBER,
RANK NUMBER NULL,
NOTES VARCHAR2(80),
PRIMARY KEY (ID) )
ORGANIZATION INDEX
PCTTHRESHOLD 40
OVERFLOW TABLESPACE "DOGS2"
TABLESPACE "DOGS";

ANALYSIS:
This specifies that the index-only table be created in the DOGS tablespace, whereas the overflow
tablespace used is DOGS2.

NOTE: It is necessary to specify the PRIMARY KEY value when creating an index-only table. This is
the value on which the index is created.

Unfortunately, there are also some disadvantages to using index-organized tables. For example, it is not
possible to create additional indexes, since the occurrences of this type of table do not have a ROWID. It is
also impossible to add new columns or to modify existing columns. The message "ORA-25182 feature not
currently available for index-organized table" is used to alert us to these restrictions. Another factor to
consider is the case where updating key values is allowed. Oracle must delete the old record and do an
insert based on the new key value, that is, relocate a leaf of the B*-tree to another location in the B*-tree.
The net result is 2 SQL operations. Therefore, one should weigh the advantages and disadvantages before
choosing this type of table, even though it is relatively easy to convert an index-organized table back to a
conventional format using CREATE TABLE ... AS SELECT and then doing an Import/Export on that table.
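A sketch of such a conversion (the table names are examples only):

-- Copy the index-organized table into a conventional heap table,
-- then swap the names.
CREATE TABLE DOGS_HEAP AS SELECT * FROM DOGS;
DROP TABLE DOGS;
RENAME DOGS_HEAP TO DOGS;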

3- Bitmap Indexes
Another type of index available is the bitmap index. With the traditional index you saw earlier, Oracle uses a
B*-tree method to traverse the index to find the leaf block. With a bitmap index, a bitmap of ROWIDs is
kept; this bitmap indicates which rows correspond to the index item. If a bit is set, the
corresponding row contains the key value; if the bit is not set, it does not.
As you can probably tell, bitmap indexes can probably be quite useful under the right circumstances, and
useless overhead otherwise.

When to Use Bitmap Indexes


As you can probably guess, the bitmap index works well on items with low cardinality. Low cardinality
means there is a small amount of variance in the possible values stored in that column. For example, the
column representing the sex of the dog is said to have low cardinality because only two values are
possible. Other column types that might have low cardinality include
•Marital status
•Account status (good or bad)
•Sales region (if there are only a few)
•Rank (if there are only a few)
•Special notes (whether there is a note)
With columns that have low cardinality, the bitmap index can greatly improve performance. Columns with
high cardinality are not candidates for bitmap indexes.
Locking issues affect data manipulation operations in Oracle. As a result, bitmapped indexes are not
appropriate for OLTP applications that have a high level of concurrent insert, update and delete operations.
Concurrency is usually not an issue in a data-warehousing environment where the data is maintained by
bulk loads, inserts and updates.
In addition, bitmapped index maintenance is deferred until the end of a bulk DML operation. If 100 rows
are inserted into a table, the inserted rows are placed into a sort buffer and the updates of all 100 index
entries are applied as a group. As a result, bitmapped indexes are appropriate for most decision-support
applications (even those that have bulk updates applied on a regular basis).
Mass updates, inserts and deletes will run faster if you drop the bitmapped indexes, execute the DML and
recreate the bitmapped indexes when the DML completes. Run timings using the straight DML and
compare them to the total time consumed by the drop bitmapped index/execute DML/recreate bitmapped
index process.
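A sketch of this drop/load/recreate pattern (the staging-table name is hypothetical; the DOGS table and dogs_bx1 index are from the examples below):

DROP INDEX "ETW".dogs_bx1;

INSERT INTO DOGS SELECT * FROM dogs_staging;  -- bulk DML runs without index maintenance
COMMIT;

CREATE BITMAP INDEX "ETW".dogs_bx1 ON DOGS (SEX);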

How to Create Bitmapped Indexes


A bitmap index is created with the CREATE INDEX command and the BITMAP qualifier. To create a bitmap
index on the SEX field in the DOGS table, you can use the following syntax:

CREATE BITMAP INDEX "ETW".dogs_bx1
ON DOGS (SEX);

This simple statement will create the bitmap index on the column specified. At this time, bitmap indexes
cannot be created with the graphical tools.

4- Reverse Key Indexes


When a column that contains an incrementally increasing value (like those provided by sequences) is
indexed, there is a very poor distribution of that value across the B-tree, and hot spots can be introduced
into the index. Reverse-key indexes can help eliminate these hot spots by providing a much wider
distribution of values than would occur in a regular B-tree index. In this situation, simultaneous inserts
requested by the users are done in the same leaf of the index. This will generate contention on the same
Oracle block buffer. This problem is exacerbated when several tasks are initiated in parallel, all processing
updates against the database. Another problem is the phenomenon of "sliding", when the system deletes
older numbers and inserts new ones. In this case, the index becomes increasingly large, as the space
occupied by the deleted index entries never gets reused.
The answer to this situation is the reverse-key index, whereby different index leaves are used even if the
key sequence is chronological.

How to Create Reverse Key Indexes


CREATE INDEX my_index ON my_table (my_fields) REVERSE;

To make an existing index a reverse index, use:


ALTER INDEX index_name REBUILD REVERSE;

Substitute NOREVERSE if you want to rebuild your index as a non-reversed one.


No options are without a certain cost. While reverse-key indexes can help in the distribution of data
across an index, this very strength can be a weakness. Since the actual (non-reversed) values of the
index are not stored sequentially in the index, index range-scans cannot be performed. Only fetch-by-key
and full index scans can be performed on a reverse-key index. Therefore, it is not a good idea to build a
reverse-key index on a column that might use range-scans.

5- Function Based Indexes


Function-based indexes can greatly improve the performance of queries whose WHERE clause applies a
function or expression to one or more columns, such as a mathematical calculation on two columns.
Suppose you have an application that routinely selects rows based on the sum of the two columns SALARY
and BONUS from the PAYMENTS table. In the past, such a select may have been fairly slow, because the
calculation had to be performed for every row. Now, however, the calculated expression is stored as part of
the index, making the overall selection process that much quicker.

How to Create Function Based Indexes


CREATE INDEX my_ind ON payments (salary + bonus);

So, this query would now utilize the index:

SELECT employee_id
FROM payments
WHERE (salary + bonus) > 1000;

You need to enable function-based indexes before you can use them, either at the session level:

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

or in the initialization parameter file:
QUERY_REWRITE_INTEGRITY = TRUSTED
QUERY_REWRITE_ENABLED = TRUE
COMPATIBLE = 8.1.0.0.0 (or greater)

Also, to use a function-based index:


The table must be analyzed before the index can be used.
The query must be guaranteed not to need any NULL values from the indexed expression, since
NULL values are not stored in indexes.
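For example (assuming the PAYMENTS table from above), gathering statistics and writing a predicate that can never be satisfied by NULLs lets the optimizer use the function-based index:

ANALYZE TABLE payments COMPUTE STATISTICS;

SELECT employee_id
FROM payments
WHERE (salary + bonus) > 1000;  -- this comparison can never be true for NULL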

6- Descending Indexes
Descending indexes are a special type of function-based index, in which index entries are sorted in
descending order. This is done by using the DESC clause when creating an index, as in the example below:
create index ix_desc
on phonebook2(zip desc);

There are a couple special considerations when using descending indexes:


• Descending indexes cannot be used on reverse indexes
• The DESC option is ignored if used in the creation of bitmap indexes
• The COMPATIBLE initialization parameter must be set to 8.1.0 or higher

Knowing when to Rebuild Indexes

Where is the index now?


In order to understand what we must do with the index, we must first get an idea of the current state of
the index. This can be accomplished by using the ANALYZE INDEX VALIDATE STRUCTURE command.
Normally, the ANALYZE INDEX command creates either computed or estimated statistics for the index that
can be seen in the DBA_INDEXES view. This action may produce unintentional side effects, especially if
the index has not previously been analyzed. The VALIDATE STRUCTURE command can be safely executed
without affecting the optimizer. The VALIDATE STRUCTURE command populates the SYS.INDEX_STATS
table only. The SYS.INDEX_STATS table can be accessed with the public synonym INDEX_STATS. The
INDEX_STATS table will only hold validation information for one index at a time. You will need to query
this table before validating the structure of the next index.
Below is an example of ANALYZE INDEX VALIDATE STRUCTURE and sample output from INDEX_STATS:
ANALYZE INDEX shopping_basket_pk VALIDATE STRUCTURE;

SELECT name,height,lf_rows,lf_blks,del_lf_rows,distinct_keys,used_space
FROM INDEX_STATS;

NAME                      HEIGHT    LF_ROWS    LF_BLKS DEL_LF_ROW DISTINCT_K USED_SPACE
------------------------- --------- ---------- ---------- ---------- ---------- ----------
SHOPPING_BASKET_PK                2          1          3          1          1         65

I have the information, now what?


There are two rules of thumb to help determine if the index needs to be rebuilt. If it is determined that the
index needs to be rebuilt, this can easily be accomplished with the ALTER INDEX REBUILD command.
Although not necessarily recommended, this command could be executed during normal operating hours.
Rebuilding the index uses the existing index as a basis. The alternative is to drop and re-create the index.
Creating an index uses the base table as its data source, which requires a lock on the table. The index is
also unavailable during creation.
The first rule of thumb is that if the index has a height greater than four, the index should be rebuilt. For
most indexes, the height of the index will be quite low, i.e. one or two. I have seen an index on a
3 million-row table that had height three. An index with height greater than four may need to be rebuilt, as
this might indicate a skewed tree structure. This can lead to unnecessary database block reads of the
index. It is helpful to know the data structure for the table and index. Most times, the index height should
be two or less, but there are exceptions.
The second rule of thumb is that the deleted leaf rows should be less than 20% of the total number of leaf
rows. An excessive number of deleted leaf rows indicates that a high number of deletes or updates have
occurred to the index column(s). The index should be rebuilt to better balance the tree. The INDEX_STATS
table can be queried to determine if there are excessive deleted leaf rows in relation to the total number
of leaf rows. Let’s look at an example:
ANALYZE INDEX item_basket_pk VALIDATE STRUCTURE;

SELECT name,height,lf_rows,del_lf_rows,
(del_lf_rows/lf_rows)*100 as ratio
FROM INDEX_STATS;

NAME                           HEIGHT    LF_ROWS DEL_LF_ROW      RATIO
------------------------------ ---------- ---------- ---------- ----------
ITEM_BASKET_PK                          1        235         74 31.4893617

In this example, the ratio of deleted leaf rows to total leaf rows is clearly above 20%. This is a good
candidate for rebuilding. Let’s rebuild the index and examine the results.
ALTER INDEX item_basket_pk REBUILD;

ANALYZE INDEX item_basket_pk VALIDATE STRUCTURE;

SELECT name,height,lf_rows,del_lf_rows,
(del_lf_rows/lf_rows)*100 as ratio
FROM INDEX_STATS;

NAME                           HEIGHT    LF_ROWS DEL_LF_ROW      RATIO
------------------------------ ---------- ---------- ---------- ----------
ITEM_BASKET_PK                          1        161          0          0

The index is rebuilt and validated once again. Examining the INDEX_STATS table shows that the 74
deleted leaf rows were dropped from the index. Notice that the total number of leaf rows went from 235 to
161, which is a difference of 74 leaf rows. This index should provide better performance for the
application.

Script to find indexes to rebuild

Below is a sample script that can be run to determine which indexes need to be rebuilt. For those indexes
that need to be rebuilt, the ALTER INDEX REBUILD command is dynamically generated as output. The user
can tailor the height and percentage of deleted leaf rows by altering the vMaxHeight and vMaxDel
variables. The output of this script can be spooled to a file. This file can then be run to rebuild the indexes.

-- validate_idx.sql
--
-- This script will check indexes to find candidates for rebuilding.
-- Run this script in SQL*Plus as a user with SELECT ANY TABLE
-- privileges.
--
-- This script can be used and modified without permission. Run this
-- script at your own risk! The script author is not responsible for
-- any problems that may arise from running this script.

set serveroutput on size 100000

DECLARE
vOwner dba_indexes.owner%TYPE; /* Index Owner */
vIdxName dba_indexes.index_name%TYPE; /* Index Name */
vAnalyze VARCHAR2(100); /* String of Analyze Stmt */
vCursor NUMBER; /* DBMS_SQL cursor */
vNumRows INTEGER; /* DBMS_SQL return rows */
vHeight index_stats.height%TYPE; /* Height of index tree */
vLfRows index_stats.lf_rows%TYPE; /* Index Leaf Rows */
vDLfRows index_stats.del_lf_rows%TYPE; /* Deleted Leaf Rows */
vDLfPerc NUMBER; /* Del lf Percentage */
vMaxHeight NUMBER; /* Max tree height */
vMaxDel NUMBER; /* Max del lf percentage */
CURSOR cGetIdx IS SELECT owner,index_name
FROM dba_indexes WHERE OWNER NOT LIKE 'SYS%';
BEGIN
/* Define maximums. This section can be customized. */
vMaxHeight := 3;
vMaxDel := 20;

/* For every index, validate structure */


OPEN cGetIdx;
LOOP
FETCH cGetIdx INTO vOwner,vIdxName;
EXIT WHEN cGetIdx%NOTFOUND;
/* Open DBMS_SQL cursor */
vCursor := DBMS_SQL.OPEN_CURSOR;
/* Set up dynamic string to validate structure */
vAnalyze := 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' VALIDATE STRUCTURE';
DBMS_SQL.PARSE(vCursor,vAnalyze,DBMS_SQL.V7);
vNumRows := DBMS_SQL.EXECUTE(vCursor);
/* Close DBMS_SQL cursor */
DBMS_SQL.CLOSE_CURSOR(vCursor);
/* Does index need rebuilding? */
/* If so, then generate command */
SELECT height,lf_rows,del_lf_rows INTO vHeight,vLfRows,vDLfRows
FROM INDEX_STATS;
IF vLfRows = 0 THEN /* handle empty index (avoid divide by zero) */
vDLfPerc := 0;
ELSE
vDLfPerc := (vDLfRows / vLfRows) * 100;
END IF;
IF (vHeight > vMaxHeight) OR (vDLfPerc > vMaxDel) THEN
DBMS_OUTPUT.PUT_LINE('ALTER INDEX ' || vOwner || '.' || vIdxName || ' REBUILD;');
END IF;

END LOOP;
CLOSE cGetIdx;
END;
/

Database Comparison

Table of Contents

1. Introduction

2. Operational Concerns

a. Scalability

b. Platform Availability

c. Networking & Internet Readiness

3. Vendor Related Issues

a. Licensing

b. Support and Maintenance

4. User Considerations

a. DBA Concerns

b. Programmability

5. Conclusion

Appendix A - System Requirements for Microsoft SQL Server 7.0

Appendix B - Maximum Sizes and Numbers of SQL Server 7.0

Appendix C - Scorecard of Microsoft’s SQL Server & Oracle’s 8i

Appendix D - Summary of Features of Microsoft’s SQL Server & Oracle’s 8i

Appendix E - Oracle 8 and Oracle 8i Standard Edition Platform Availability

SQL Server 7.0 Introduction

SQL Server 7.0 is designed to scale from laptop databases and small business servers all the way up to
terabyte-size databases. SQL Server 7.0 is designed to operate on Microsoft’s Windows 95, 98 and NT
operating systems. However, the Windows 95 & 98 versions of SQL Server 7.0 only support desktop, laptop
and small workgroup applications. SQL Server 7.0 for Windows NT Workstation has been developed for
applications that involve a large number of users and transactions.

Microsoft SQL Server had 30% of the Windows NT database market in 1998, while Oracle had 46.1% of
the Windows NT market, which represented 55% growth in 1998. This statistic must be understood in the
context that it applies only to the Windows NT platform, since that is the only platform that both Microsoft
and Oracle have in common.

Although Microsoft has taken a big step forward in enterprise capability with SQL Server 7.0, it seems
that their product is more appropriate for departmental and small to mid-sized companies. “Rewritten for
ease of use, Microsoft’s SQL Server 7.0 is far and away the best choice for smaller organizations or branch
offices that need a full-featured relational database. Larger organizations will find SQL Server a better
performer than ever before, although competing databases including Oracle 8 and DB2 continue to provide
better programmability and scalability.”[ii]

In speaking with a few experienced database administrators and programmers, SQL Server really is not
known as a competitor with the “big boys” of the relational database software market. The well-known
database software products include Oracle, IBM’s DB2, Sybase and Informix. These companies remain
leaders in database technology at the enterprise level.

Oracle 8i Introduction

Oracle 8i is a database product that is built upon the mature Oracle 8 product, but also brings increased
capabilities to develop and integrate with Internet applications. Oracle’s databases have been developed
and proven to handle the largest of enterprise databases. But Oracle also targets smaller, mid-tier
companies who find it necessary to have 24-hours-by-7-days availability due to increasing Internet
business needs. Oracle’s 8i databases are available on a wide variety of platforms. Dell, IBM, Linux, Sun,
Fujitsu and Unisys are a few of the 21 listed on Oracle’s web site.

Oracle’s products are definitely not the cheapest on the market. If an evaluation of the application
necessitates a high need for reliability, scalability, security and performance, then Oracle should be
considered. Oracle is the world’s leading supplier of software for information management, holding 27% of
the database market share across all platforms. Oracle is the undisputed database leader on UNIX
platforms, commanding 60.9% of the market share according to Dataquest.[iii],[iv]

2. Operational Concerns

a. Scalability

Scalability in the context of database software is defined as the software’s ability to continue to perform at
a similar level with a larger amount of data and a growing number of users and transactions.

Amount of Data

Both SQL Server 7.0 and Oracle 8i are designed to be client-server database products that can take
advantage of distributed database architecture.

A distributed database is a network of databases managed by multiple database servers that appears to a
user as a single database. This means the database could be distributed across several disks and servers
with multiple processors. The data of all databases in the distributed database can be simultaneously
accessed and modified. The database architecture on the server will dictate how fast the transaction
response time is. The speed of transactions can vary greatly based on the database design as well as
server hardware configurations, including RAM and the number and speed of the CPUs.

SQL Server 7.0 can grow up to 1,048,516 terabytes. Microsoft uses SMP (systems with 4 processors)
technology to distribute databases. Other maximum sizes and numbers can be referenced in Appendix C,
which outlines other technical specifications of SQL Server 7.0.

Oracle 8i is scalable up to hundreds of terabytes to manage very large databases (VLDB). Oracle takes
advantage of distributed processing and storage capabilities through architectural features that use more
than one processor to divide the processing for a set of related jobs. This distributed architecture is a good
example of the expression “the sum of the parts is greater than the whole”, because as individual
processors work on a subset of related tasks, the performance of the whole system is improved.

Number of Simultaneous Users

Theoretically, there is no limit to the number of users that can access either the Oracle 8i or SQL Server
7.0 database servers at one time, given infinite processors and infinite speed. In practical terms, there is a
limit, but it should not pose any real issues in terms of concurrent access. One consideration is that
SQL Server will require ODBC software to connect with clients that are not PC based. This will require some
overhead, but it should be pretty negligible.

In a “Score Card” published by ZDNet[v], which is fully documented in Appendix D, the following ratings
were published. These ratings really demonstrate the near-equality of SQL Server and Oracle for these
performance criteria.

                            Microsoft SQL Server 7.0   Oracle8i Standard Edition

Server Engine               Excellent                  Good
Support for multiple CPUs   Excellent                  Fair
Join and index selection    Excellent                  Excellent
Degree of concurrency       Good                       Excellent
Database Design             Good                       Excellent
Distributed transactions    Excellent                  Excellent

These are fundamental topics whenever performance is discussed.

b. Platform Availability

As discussed in the Introduction, SQL Server 7.0 is designed to operate on Microsoft's Windows 95, 98 and
NT 4.0 operating systems. However, the Windows 95 and 98 versions of SQL Server 7.0 will only support
desktop, laptop and small workgroup applications, and they require an Intel platform.

SQL Server 7.0 for Windows NT Workstation has been developed for applications that involve large
numbers of users and transactions, and it is limited to Intel or Alpha platforms.

This limitation could cause hurdles for large corporations in terms of the performance that can be
expected. Brian McCarthy, CEO of Insurance Holdings, said: "Microsoft said we'd need an all-Microsoft
application if we wanted full scalability, but who's going to rebuild the whole system?"[vi]

It is especially important to note that although other platforms can be used for clients to access SQL
Server 7.0, third-party ODBC software must be used. ODBC is an interface that allows data to be accessed
in relational databases, independent of the database vendor.

Oracle 8i is supported by a large number of hardware manufacturers, as well as several operating
systems. There are currently 21 hardware vendors listed on Oracle's website, with at least 6 operating
systems, counting UNIX as one although its flavors vary by manufacturer. Reference Appendix E for
details on some of the platforms and operating systems that Oracle runs on. Further details on specific
platforms can be found at Oracle's dedicated website: www.platforms.oracle.com.

Networking & Internet Readiness

The need for a database to be Internet ready is quickly becoming a necessity in today's rapidly
growing web-based world. At every turn in the software industry, e-business and e-commerce are
top issues for companies wanting to compete in the future. Therefore, database software that
supports and enhances Internet application development is a must.

SQL Server 7.0 still lags behind in multimedia data support and in programmability, both of which
are necessary for many Internet applications. Third-party software must be used to store images,
sound, video or geographic data. SQL Server 7.0 does not support Java, which is an industry
standard for developing network applications.

Oracle 8i is the best product for companies wanting to move their database applications to the
Web. Oracle leads the market in the handling of multimedia objects. Multimedia support is
particularly relevant when building Web-based applications like online stores that include
multimedia items such as pictures or video clips of items for sale. Oracle uses a product called
JServer, which brings Java and relational databases together. It allows for controlling the
database through Java and supports the creation of JavaBeans. JavaBeans are the basic
building blocks for Java-based Internet applications, and are (or will be) supported by just
about every high-end Internet application server on the market.

In the ZDNet Scorecard (Appendix C), SQL Server 7.0 and Oracle 8i were rated as follows with
respect to their Internet readiness features:

                                     Microsoft SQL Server 7.0   Oracle8i Standard Edition

Multimedia Data Handling             Poor                       Excellent
Web connectivity                     Poor                       Excellent
Support for sound, media,            Poor                       Excellent
  video, images
Full text search                     Good                       Excellent

3. Vendor Related Issues

a. Licensing

SQL Server

Pricing structures of software can be complex. Microsoft recognizes this and even offers "A Guide to
BackOffice Pricing and Licensing" to help readers understand some of the subtleties and complexities
before "cutting a check". SQL Server can be purchased by itself in either of the above-mentioned versions,
as an upgrade to certain other database products, or as a part of Microsoft BackOffice. A license for the
server is distinct from the client access licenses (CALs), but every server is accompanied by a certain
number of CALs. Although the number of CALs may vary by application, there will always be a need for at
least one CAL or the database could not be accessed to retrieve data.

To compare prices, the details of the system architecture must be well understood. Some of those details
include the number of servers required to support the database, how many processors a server will have,
the number of users needing to access the database, and whether the system will be serving an Internet
or intranet application.

Based on an application where 250 or more users (CALs) will be accessing one database server, the cost of
SQL Server 7.0 Enterprise Edition is listed as $28,999 on Microsoft's web site. To get a sense of the price
difference between the Enterprise and Standard editions: SQL Server 7.0 Standard Edition with five CALs
costs $3,999, while the Enterprise Edition with five CALs lists at $7,999. An upgrade of the SQL Server 7.0
Enterprise Edition costs $3,969 for a customer who is also upgrading the clients accessing the database,
and $7,099 for a customer who is buying licenses for new clients along with the server license.

The benefits of the Enterprise Edition over the Standard Edition cited in the Microsoft publications get
blurred in the marketing jargon. The Enterprise Edition is used for increased system scalability and
reliability by providing support for SMP (systems with multiple processors) and extended memory.

These prices are negotiable, and the price varies with how the customer acquires SQL Server. For
example, if a customer purchases software that runs on SQL Server, or purchases it from an independent
software vendor, the pricing would obviously be different.
Oracle 8i

Oracle's pricing structure is different from Microsoft's in that it doesn't charge per server license or client
access license. Rather, Oracle charges by licensing unit: "named user", "concurrent device" and "power
unit".

A named user is defined as “an individual who is authorized by his/her company to use the Oracle
Software programs, regardless of whether the individual is actively using these programs at any given
time.”[vii]

A concurrent device is defined as “an input device accessing the program on the designated system at any
given point in time. The number of "Concurrent Devices" you are licensed for is the maximum number of
input devices accessing the programs on the Designated System at any given point in time. If multiplexing
hardware or software (e.g., a TP Monitor or a Web server product) is used, this number must be measured
at the multiplexing front-end."[vii]

A power unit is defined as "one MHz of power in any Intel compatible or RISC processor in any computer of
the Designated Systems on the Order Confirmation page on which the Oracle software programs are
installed and operating. (Intel refers to Intel Solaris, Linux, and Windows NT; RISC refers to Sun SPARC
Solaris, HP-UX, and IBM/AIX. A "Processor" license shall be for the actual number of processors installed
in the licensed computer and running the Oracle program(s), regardless of the number of processors
which the computer is capable of running.)"[vii]

A named user licensing unit costs $600, a concurrent device costs $1,495, and a power unit costs $200 for
the Oracle 8i Enterprise Edition. These prices are five times what Oracle charges for the Standard Edition.

The Enterprise Edition includes these advanced features on top of the Standard Edition: large-database
partitioning (which helps keep monster gigabyte-size databases under control), flexible security features,
and speed features such as bitmapped indexes, summary tables, and parallelism.

Two other modules that Oracle offers for enhanced web integration and multimedia handling are Oracle
JServer Standard Edition and WebDB, which, if needed, add to the total cost of the Oracle solution. They
are also priced based on the licensing units discussed above.

b. Support and Maintenance

The availability of qualified database administrators (DBAs) and programmers is one issue that cannot be
overlooked in considering which database will be best for a given organization and application. Given the
relationship between supply, demand and cost, the shortage of Oracle DBAs and programmers can mean
only one thing: they are hard to find, and when one is available, they command a very high salary.

The nature of Oracle is that it can be more difficult to program and administer, so it requires specially
trained personnel. SQL Server, on the other hand, is an easier product to learn and administer, so
available programmers are more plentiful and staffing a database project is less expensive.

No cost information could be obtained on the annual maintenance fees to remain current on the licensing
agreements with either Oracle or SQL Server.

4. User Considerations

a. Database Administrator (DBA) Concerns

Recovery & Backup


In every database system, the possibility of a system or hardware failure always exists. Should a failure
occur and affect the database, the database must be recovered. The goals after a failure are to ensure
that the effects of all committed transactions are reflected in the recovered database and to return to
normal operations as quickly as possible while insulating users from problems caused by the failure.
Databases can fail due to power outages, operating system crashes, disk failures or operator error.

Both SQL Server 7.0 and Oracle support a two-phase commit approach, which allows users to control a
logical set of SQL statements so that they all either succeed or fail as a unit. This two-phase mechanism
guarantees that no matter what type of system or network failure might occur, a distributed transaction
either commits on all involved nodes or rolls back on all involved nodes, maintaining data consistency
across the global distributed database.
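In Oracle, for example, two-phase commit is transparent to the application: a single COMMIT drives the
protocol for any transaction that spans a database link. A minimal sketch follows; the link name
remote_db and the ACCOUNTS table are hypothetical, not taken from either product's documentation:

```sql
-- Two statements, one distributed transaction across a database link.
UPDATE accounts           SET balance = balance - 100 WHERE acct_id = 1;
UPDATE accounts@remote_db SET balance = balance + 100 WHERE acct_id = 2;

-- One COMMIT: the coordinator prepares both nodes, then commits both.
-- If either node fails to prepare, the whole transaction rolls back everywhere.
COMMIT;
```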

Monitoring & Tuning Capabilities

SQL Server 7.0 is an exceptionally easy product to administer and is more forgiving than previous SQL
Server versions. SQL Server 7.0 has an auto-tuning feature that allows memory to be self-managed, and
there are several new wizards that simplify advanced tasks such as creating databases, scheduling
backups, importing and exporting data, and configuring replication. This should make the training of
database administrators much easier.

Oracle 8i databases can be administered and controlled very tightly, but the product is complex and
requires trained database administrators to do so proficiently. "Oracle8i tools are Java-based and can even
be run from a Web browser. They provide all the essentials for designing and setting up a database,
including some advanced features like letting you selectively delegate authority to users of its Enterprise
Manager administration console. This is a handy tool for branch office deployment. Like previous releases
of Enterprise Manager, though, this one is a version behind the database, and it doesn't know a thing
about new Oracle8i features such as Java stored procedures."[viii]

b. Programmability

Languages are supported within the database software for programming and controlling the database. For
example, because PL/SQL can be stored in the database, network traffic between applications and the
database is reduced, thereby increasing application and system performance.
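As a minimal illustration of that point (the procedure and table names here are hypothetical), logic stored
in the database as PL/SQL runs next to the data, so a client issues one call instead of sending many
statements over the network:

```sql
CREATE OR REPLACE PROCEDURE purge_old_calls (p_before IN DATE) AS
BEGIN
  -- Executes entirely inside the database server; only the single
  -- procedure call travels over the network.
  DELETE FROM calls WHERE call_date < p_before;
END;
/

-- Client side: one round trip.
-- EXECUTE purge_old_calls(TO_DATE('2002-01-01', 'YYYY-MM-DD'));
```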

SQL Server 7.0 comes with an internal programming language called Transact-SQL, which has received a
poor rating in several reviews. "While everyone else in the SQL database market is moving (or has already
moved) to a modern programming language like Java, SQL Server customers are still stuck in the
programming Dark Ages—no object oriented development, no big class libraries to use, and no code
interoperability with anything else."[ix] The programming can be done, but it will require a lot more work.

Oracle gets an excellent rating for its internal language offerings, which include Java and PL/SQL.

5. Conclusion

In comparing these two database products, it became apparent that each holds a different place and
purpose in the market; they don't compete in the same niche. Microsoft SQL Server, a client-server
database, continues to make strides toward the enterprise database market, but it is still most appropriate
for a departmental or small to mid-sized company whose database doesn't have such high scalability,
reliability and availability needs. SQL Server's greatest weakness is that the Windows NT platform on
which it operates is not mature enough to provide the kind of availability that enterprise-worthy systems
require. "In the small-business market, the differentiating factors are ease of database administration, Web
connectivity, the speed and features of the database server engine, branch-office and mobile support, and
the ability to warehouse data efficiently. SQL Server 7.0 shines in all of these areas except Web
connectivity. Its administration tools include many wizards and self-tuning settings that make it the only
database we reviewed that might not require a specially trained administrator."[x]

Oracle, also a client-server database, operates at the high end of the database market and is also reaching
out to start-ups and small to medium-sized businesses that need a complete, integrated platform for
critical Internet applications. Oracle is harder to administer and is an expensive choice, unless the
application being developed requires its Java or multimedia features. Another selling point for Oracle is
that it is sold on a multitude of platforms, in comparison to SQL Server 7.0, which may be appealing to
customers who are seeking a more mature platform.

Appendix A

System Requirements for Microsoft SQL Server 7.0*

Client Access Licenses required

Server

 PC with a Pentium (166 MHz or higher) or Alpha processor

 Microsoft Windows NT Server operating system version 4.0 or Windows NT Server 4.0
Enterprise Edition with Service Pack 4 or later (Service Pack 4 included)

 Microsoft Internet Explorer 4.01 with Service Pack 1 or later (both included)

 32 MB of RAM

 Hard-disk space required:

 65–180 MB for Server; approximately 170 MB for typical installation

 35–50 MB for OLAP services; approximately 50 MB for typical installation

 24–36 MB for English Query; approximately 36 MB for typical installation

 CD-ROM drive

 VGA or higher-resolution monitor; Super VGA recommended

 Microsoft Mouse or compatible pointing device

Note SQL Server 7.0 can utilize up to four processors. Additional processor support is available with SQL
Server 7.0 Enterprise Edition.

Desktop
Identical to Server requirements with the following exceptions:

 Each installation of SQL Server Desktop requires a per-seat client access license for SQL
Server 7.0; SQL Server Desktop will only interact with SQL Server in per-seat mode

 65–180 MB available hard-disk space; approximately 170 MB for typical installation

Note The Desktop version of SQL Server 7.0 can utilize up to two processors.

Networking Support Windows 95, Windows 98, or Windows NT built-in network software (additional
network software is not required unless you are using Banyan VINES or AppleTalk ADSP; Novell NetWare
client support is provided by NWLink)

Clients Supported Windows 95, Windows 98, or Windows NT Workstation, UNIX,** Apple Macintosh,**
and OS/2**

*Actual requirements will vary based on your system configuration and the features you choose to install.

**Requires ODBC client software from a third-party vendor.

Appendix B

Maximum Sizes and Numbers of SQL Server 7.0

This table specifies the maximum sizes and numbers of various objects defined in Microsoft SQL Server
databases, or referenced in Transact-SQL statements.

Object                                          SQL Server 7.0

Batch size                                      128 * network packet size
Bytes per character or binary column            8000
Bytes per text, ntext, or image column          2GB-2
Bytes per GROUP BY, ORDER BY                    8060
Bytes per index                                 900
Bytes per foreign key                           900
Bytes per primary key                           900
Bytes per row                                   8060
Bytes in source text of a stored procedure      Batch size
Clustered indexes or constraints per table      1
Columns in GROUP BY, ORDER BY                   Limited only by number of bytes
Columns or expressions in a GROUP BY            10
  WITH CUBE or WITH ROLLUP statement
Columns per index                               16
Columns per foreign key                         16
Columns per primary key                         16
Columns per base table                          1024
Columns per SELECT statement                    4096
Columns per INSERT statement                    1024
Connections per client                          Max. value of configured connections
Database size                                   1,048,516 TB
Files per database                              32,767
File size (data)                                32 TB
File size (log)                                 4 TB
FOREIGN KEY constraints per table               63
Foreign key table references per table          63
Identifier length (in characters)               128
Index key size (bytes)                          900
Locks per connection                            Max. value of locks configured
Nested subqueries                               64
Nested trigger levels                           32
Nonclustered indexes or constraints per table   250
Objects in a database*                          2,147,483,647
Parameters per stored procedure                 1024
PRIMARY KEY constraints per table               1
Rows per table                                  Limited by available storage
SQL string length                               128 * TDS packet size
Tables per database                             Limited by number of objects in a database
Tables per SELECT statement                     256
Triggers per table                              Limited by number of objects in a database
UNIQUE constraints per table                    250 nonclustered and 1 clustered

* Database objects include all tables, views, stored procedures, extended stored procedures, triggers,
rules, defaults, and constraints.

Appendix C

Scorecard of Microsoft’s SQL Server & Oracle’s 8i

                                   Microsoft SQL Server 7.0   Oracle8i Standard Edition

Server Administration              Excellent                  Good
Graphical tools                    Excellent                  Good
Ease of maintenance                Excellent                  Good
Server Engine                      Excellent                  Excellent
Support for multiple CPUs          Excellent                  Good
Join and index selection           Excellent                  Excellent
Degree of concurrency              Good                       Excellent
Multimedia Data Handling           Poor                       Excellent
Web connectivity                   Poor                       Excellent
Support for sound, media,          Poor                       Excellent
  video, images
Full text search                   Good                       Excellent
Interoperability                   Good                       Fair
Links with other databases         Good                       Poor
Single log-on                      Good                       Good
Operating-system support           Fair                       Good
Programmability                    Fair                       Excellent
Stored procedures and triggers     Good                       Excellent
Internal programming language      Poor                       Excellent
Database Design                    Good                       Excellent
SQL language support               Excellent                  Excellent
Object-oriented design             Poor                       Excellent
Branch Office Support              Excellent                  Excellent
Replication                        Excellent                  Excellent
Distributed transactions           Excellent                  Excellent
Remote administration              Good                       Excellent
Data Warehousing and Reporting     Excellent                  Excellent
Loading tools                      Excellent                  Excellent

Appendix D

Summary of Features of Microsoft’s SQL Server & Oracle’s 8i

                                   Microsoft SQL Server 7.0      Oracle 8i Standard Edition
List price $1,399 $3,925 per CPU
Number of users included 5 users (named or concurrent) 5 concurrent users
Price for each additional concurrent user / $127 / $127 $785 / $392.50
named user
Server operating-system support Windows NT, Windows 9x Windows NT, multiple
Unix flavors
Client software operating-system support Windows NT, Windows 9x All server platforms plus
Windows 9x
Network protocols supported AppleTalk, IPX, Named Pipes, IPX, Named Pipes,
TCP/IP, Vines IP TCP/IP
ADMINISTRATION
Graphical tools for:
Installing the database server Yes Yes
Creating and managing databases/disk Yes Yes Yes Yes
devices
Creating and managing tables/indexes Yes Yes Yes Yes
Creating and managing stored procedures Yes Yes
Creating and managing users Yes Yes
Creating and managing replication links Yes Yes
Backup and restore Yes Yes
Web publishing Yes Yes
Database diagramming Yes Optional
Submitting SQL and viewing query results Yes Yes
Viewing all currently executing SQL code Yes Optional
Detecting resource-intensive queries/Killing Yes Yes Yes Yes
queries
Graphing server engine statistics Yes Yes
Execution plan display Yes Optional
Providing index suggestions based on a Yes Yes Optional Optional
single SQL statement/on overall server
usage
Tools have command line access Yes Yes
Reverse-engineer database objects/data to Yes No Yes Optional
a SQL script
Languages supplied for command scripting JScript, OS commands, SQL, Java, OS commands,
Transact-SQL, VB Script PL/SQL, SQL, TCL
Security audit log of administrator and user No Yes
activity
Job scheduler can run jobs at certain Yes Yes Yes Yes
times/when certain events happen
Online manual with search engine Yes Yes
SERVER ENGINE
Levels of locking granularity available Database, table, page, row Database, table, row
Default locking level for queries Row Row
Readers can block writers/writers can block No Yes No No
readers at default isolation level
Can use multiple CPUs on database Yes Yes No Yes Optional Optional
load/within a query/to create an index
Can use multiple CPUs on database backup Yes No No Optional Optional
and restore/on update/on delete Optional
Nested loop join/Hash join/Merge join Yes Yes Yes Yes Yes Yes
Semi-join for star queries Yes Yes
B-tree index/Clustered index/Bitmap index Yes Yes Yes** Yes Yes Optional
Cost-based/rule-based optimizer Yes No Yes Yes
Can use multiple indexes per query Yes Yes
Can use just the index to answer the query Yes Yes
Automatically maintains necessary Yes Yes
optimizer statistics
Dynamic SQL statements parameterized Yes Yes
and cached
Can mark tables as memory-resident Yes Yes
Dynamic memory resource sharing Yes No
Number of CPUs supported per server 4 4
Failover server Optional Yes
Query resource consumption governor Yes Yes
Can assign priorities to different groups of No Optional
users
Incremental backup/Restore to a specified Yes Yes Optional Optional
point in time
Default disk data block size 8K 2K or 8K
Users can choose disk block size No Yes
Parallel/asynchronous (multitasking) disk Yes Yes Yes Yes
operations
Data and log devices can grow when Yes Yes
needed
Year 2000-certified/Euro support Yes Yes Yes Yes
MULTIMEDIA DATA TYPES
Binary large object (BLOB) Yes Yes
Sound/Video/Images No No No Yes Yes Yes
Text documents/Geospatial data/Time Yes No Yes** Yes Yes Optional
series
INTEROPERABILITY
Gateways to other databases ODBC, OLE DB Optional
Single log-on with Windows NT / LDAP / Yes No No Yes Optional Optional
OSF DCE
PROGRAMMABILITY
Stored procedures/triggers Yes Yes Yes Yes
Internal programming languages Transact-SQL Java, PL/SQL
Debugger supplied for stored procedure Optional Optional
languages
Database client libraries supported DB/LIB, ODBC, OLE DB CORBA, Enterprise
JavaBeans, JDBC, OCI,
ODBC, Oracle Objects for
OLE
QUERY LANGUAGE AND DATABASE DESIGN
SQL language version supported SQL-92 Entry Level with SQL-92 Entry Level with
extensions extensions
ANSI isolation levels supported Read uncommitted, read Read committed,
committed, repeatable read, serializable, read only
serializable, read only
Left/right/full outer joins Yes Yes Yes Yes Yes Yes
Declarative referential integrity Yes Yes
Cascade delete/Cascade update No No Yes No
Object-oriented design support/Object No No Yes Yes
references (REFs)
BRANCH OFFICE SUPPORT
One-way/bidirectional replication Yes Yes Yes Yes
Pager/e-mail notification of errors Yes Yes Yes Yes
Administration tools can manage a remote Yes Yes
database
Web-based administration tools provided No Yes
Distributed transactions within Yes Yes Yes Yes
database/with other vendors' databases
DATA WAREHOUSING AND REPORTING
Data loader can load directly to disk pages Yes Yes
for speed
Data transformation and cleansing tools Yes Yes
Can delay constraint checking during a bulk Yes Yes
load
Provides precalculation of summary Yes** Optional
information
Included OLAP server Yes Optional
Queries automatically rewritten to use No Optional
summary tables
Summaries understand dimensional Yes** Optional
hierarchies
Automatic refresh of summaries when data No Optional
changes
Can use data sampling to speed processing No Optional
Top n queries/Top n percent of total queries Yes Yes Yes No
Can handle ties when ranking top rows Yes No
Cube/rollup functions Yes Yes Yes Yes

Appendix E

Oracle 8 and Oracle 8i Standard Edition Platform Availability

Operating System Chip Hardware


Data General DG-UX Intel Any, up to 4 cpus *
Digital Unix Alpha Digital AlphaServer 300, 400, 800, 1000,
1000A
Digital AlphaServer 1200, 2000, 4000
Hewlett-Packard HP-UX PA-RISC HP9000 7xx-Series wkstns
HP9000 B-Series, C-Series wkstns
HP9000 D-Series, E-Series, A-Series
HP9000 K360 K370 K380 R380 R390
IBM/ PowerPC RS/6000 Models: 43P, 42T/42W, C10, C20,
E30, F30, F40, F50, H10, H50, H70
Bull/Motorola AIX Bull Estrella 200, 300, 700
Bull Escala E
Motorola RISC PC Plus Series
Motorola EX Series
Motorola PowerStack II Pro2000, Pro3000,
Pro4000
IBM OS/2 Intel Any, up to 4 cpus *
Microsoft Windows NT Alpha Digital AlphaServer 300, 400, 800, 1000,
1000A
Digital AlphaServer 1200, 2000, 4000
Intel Any, up to 4 cpus *
MIPS SNI RM200, RM300
NCR MP-RAS Intel NCR S10, S40
Novell NetWare Intel Any, up to 4 cpus *
SCO UnixWare Intel Any, up to 4 cpus *
Siemens Nixdorf MIPS SNI RM200, RM300
SINIX/Reliant UNIX
Sun Solaris Intel Intel Any, up to 4 cpus *
Sun Solaris SPARC SPARC Any uniprocessor or dual processor only
machine from Sun, and the Sun E450 up to 4
cpus
SGI IRIX MIPS O2, Octane, Origin200 (Single Tower Only)

* includes Compaq ProLiant and ProSignia, HP NetServer, IBM Netfinity, and any other
Intel-based server with up to 4 CPUs.

[i] Dyck, Timothy. "Mission Critical." PC Magazine. July 30, 1999. Accessed November 1, 1999.
<http://www.zdnet.com/products/stories/reviews/0,4161,408644,00.html>

[ii] Dyck, Timothy. "SQL Server makes enterprise inroads." PC Week. November 10, 1998. Accessed
November 6, 1999. <http://www.zdnet.com/pcweek/stories/news/0,4153,372285,00.html>

[iii] O'Neill, Paige. "Oracle Trumps Microsoft in Battle for NT Database Marketshare." March 29, 1999.
Accessed October 31, 1999.
<www.oracle.com/cgi-bin/press/printpr.cgi?file=199903290500.29144.html&mode=corp>

[iv] "Oracle Charts Landmark Year for Oracle 8; Sets the Stage for the Next Release, Code-Named
Emerald." July 2, 1999. Accessed November 5, 1999. <www.uk.oracle.com/info/news/emerald.html>

[v] "Score Card." PC Magazine. July 29, 1999. Accessed October 27, 1999.
<www.zdnet.com/products/stories/reviews/0,4161,2304448,00.html>

[vi] Deck, Stewart. "SQL users turn to Oracle 8 for bulk." Computerworld. May 10, 1999. Accessed
October 15, 1999. <www.computerworld.com/home/print.nsf/all/990510A506>

[vii] "Licensing Terms." Accessed November 1, 1999.
<http://oraclestore.oracle.com/cec/cstage?eccookie=&ecaction=ecpassthru&template=help_licensing.en.htm>

[viii] Dyck, Timothy. "Oracle 8i." PC Magazine. July 29, 1999. Accessed November 6, 1999.
<http://www.zdnet.com/products/stories/reviews/0,4161,408777,00.html>

[ix] Dyck, Timothy. "SQL Server 7.0." PC Magazine. July 29, 1999. Accessed November 11, 1999.
<http://www.zdnet.com/products/stories/reviews/0,4161,408776,00.html>

[x] Dyck, Timothy. "Editor's Choice: SQL Server Databases." PC Magazine. July 30, 1999. Accessed
November 9, 1999. <http://www.zdnet.com/products/stories/reviews/0,4161,408779,00.html>

Other References

"Oracle 8.0 Concepts, Volume 1, Release 8.0." December 1997.

"Oracle 8.0 Concepts, Volume 2, Release 8.0." December 1997.

"SQL Server 7.0 Product Enhancements Guide." November 1998.

"Oracle 8i for Windows NT." February 1999.

www.oracle.com

www.microsoft.com

Loading Data Faster: Testing Different Options

Here I will try to explain some of the Oracle features and techniques that can be employed to speed up
data loads from flat files into Oracle tables. To demonstrate and compare data loading methods, I loaded a
sample data set using each of the techniques that Oracle provides. I compared elapsed time, CPU time,
and the caveats of each method, but please note that timings can vary from one run to the next, and all
results depend on database activity at the time.

The Sample Data Set

To demonstrate various techniques for loading flat file data into Oracle tables, I worked with a flat file that
contains 100,000 records of call center data. The file is approximately 6 MB in size, and a few sample
records in the file look like this:
82302284384,04-18-2003:13:18:58,5001,COMPLAINT,SYS TICKET 183487923
82302284385,04-18-2003:13:18:59,3352,INFO-REQUEST,PAYMENT ADDRESS
82302284386,04-18-2003:13:19:01,3142,DOC-REQUEST,2002 YE CONSOL STMT

The CALLS table looks like this:

Name Null? Type Comment


------------ --------- ------------- -----------------
CALL_ID NOT NULL NUMBER Primary key
CALL_DATE NOT NULL DATE Non-unique index
EMP_ID NOT NULL NUMBER
CALL_TYPE NOT NULL VARCHAR2(12)
DETAILS VARCHAR2(25)

1- Single Row Inserts

About the simplest way for an application to load data into Oracle is to insert one row at a time using the
INSERT statement with the VALUES clause. This technique is convenient for loading small amounts of
data, such as a SQL*Plus script that creates seed data at the time of application installation. However, this
approach is quite slow when compared to more sophisticated techniques, and therefore is not appropriate
for loading large volumes of data into Oracle.
Depending on the application and data volume involved, this most basic method for loading data into
Oracle might give acceptable performance. If that is the case, then there is no need to over-engineer an
unnecessarily complicated load program. However, if single row inserts are not fast enough, then there are
more sophisticated data load methods to consider.
The chief advantages of the single row insert method are simplicity and flexibility. The key disadvantage of
this technique is that it is the slowest and most CPU-resource intensive.
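Using the sample CALLS records shown earlier, the single row insert method amounts to statements like
the following (the date format string matches the flat file layout shown above):

```sql
INSERT INTO calls (call_id, call_date, emp_id, call_type, details)
VALUES (82302284384,
        TO_DATE('04-18-2003:13:18:58', 'MM-DD-YYYY:HH24:MI:SS'),
        5001, 'COMPLAINT', 'SYS TICKET 183487923');
-- ...one INSERT per record, 100,000 times, with periodic COMMITs...
```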

Data Loading Method                  Elapsed Seconds   Database Server CPU Seconds

Single row inserts                   172               52

2- Single Row Inserts Without Indexes

When a row is inserted into a table, Oracle needs to update all indexes on the table at the same time. It is
typically faster for Oracle to build an index from scratch after the table is populated rather than repeatedly
update an index as rows are inserted. It is often true that the index will be more compact and efficient as
well. These rules of thumb are especially true when bitmap indexes are involved.
From this, it follows that one way to speed up data loads would be to drop the indexes before loading the
data and recreate the indexes after the load is complete. For primary and unique key constraints, you can
get the same effect by disabling or dropping the constraint. (Be careful of the impact this might have on
foreign key constraints in other tables.)
This technique might not be appropriate if you are loading data into a table that already has many rows
in it. Consider a table with 99 million rows into which you wish to load 1 million more. The time saved in
loading the 1 million new rows will be lost when Oracle has to rebuild an index on the 99 million existing
rows plus the 1 million new rows.
Speeding up index creation is another issue to consider. Briefly, you can reduce the amount of sorting that
must take place on disk by increasing sort_area_size in your session (if you are not using Oracle 9i
PGA aggregate targets). This allows your session to perform more of the sorting during index creation in
memory. Also, you can use the NOLOGGING keyword to reduce the amount of redo Oracle generates
during the index build. NOLOGGING has significant impacts on recoverability and standby databases, so
do your homework before using the NOLOGGING keyword.
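Putting the ideas in this section together, the load can be sketched as follows. The index and constraint
names are assumptions for illustration; adjust them to your schema, and weigh the NOLOGGING
recoverability caveat first:

```sql
-- Assumed names: CALLS_PK (primary key), CALLS_DATE_IX (non-unique index).
ALTER TABLE calls DISABLE CONSTRAINT calls_pk;
DROP INDEX calls_date_ix;

-- ...load the data here...

-- Let more of the index-build sort happen in memory
-- (applies when 9i PGA aggregate targets are not in use).
ALTER SESSION SET sort_area_size = 104857600;

-- Rebuild with reduced redo generation.
CREATE INDEX calls_date_ix ON calls (call_date) NOLOGGING;
ALTER TABLE calls ENABLE CONSTRAINT calls_pk;
```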
I modified the application used in the last section to disable the primary key on the CALLS table and drop
its one non-unique index before loading the data, putting both back after the load was complete. In this
example, the CALLS table was empty before the data load. Factoring in the amount of time required to
recreate the two indexes, elapsed time for the load dropped from 172 seconds to 130 seconds. CPU time
used by the database server process dropped from 52 seconds to 35 seconds.
Dropping and rebuilding indexes before and after a data load can speed up the load and yield more
efficient indexes. Some drawbacks include the added complexity and potential embedding of schema
design information into the application code. (When you add another index to the table being loaded, will
you have to update your application code?) Dropping indexes before a load could also have significant
performance impacts if users need to be able to query the target table while the load is taking place.
Finally, dropping or disabling primary or unique key constraints could cause difficulties if foreign key
constraints reference them.

Data Loading Method                          Elapsed Seconds   Database Server CPU Seconds
Single row inserts without indexes           130               35
(time includes index rebuild using
NOLOGGING and large sort area)
3-Bulk Inserts Without Indexes

I modified the application used in the last example to load call center records into arrays in memory and
perform bulk inserts 100 rows at a time instead of one row at a time. I retained the logic that dropped the
indexes and rebuilt them after the data load. Elapsed time to load the data dropped from 130 seconds to
14 seconds, while CPU time used by the database server process dropped from 35 seconds to 7 seconds.
Of course, the amount of time required to rebuild the indexes was not affected by the use of bulk inserts.
The actual time to load the data (setting aside the index rebuild time) dropped by about 95%.
A clear advantage of the bulk insert technique is that load time drops dramatically, as does CPU usage.
One disadvantage of this technique is slightly increased complexity in application coding and a greater
opportunity for bugs. This disadvantage is very modest.
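A minimal sketch of the bulk-insert pattern, using PL/SQL index-by tables and FORALL (the column names are assumptions, not the actual CALLS layout):

DECLARE
   TYPE id_tab   IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
   TYPE date_tab IS TABLE OF DATE   INDEX BY BINARY_INTEGER;
   l_ids   id_tab;
   l_dates date_tab;
BEGIN
   -- ... fill up to 100 array elements from the input feed ...
   FORALL i IN 1 .. l_ids.COUNT
      INSERT INTO calls (call_id, call_date)
      VALUES (l_ids(i), l_dates(i));
   COMMIT;
END;
/

Each FORALL statement ships the whole batch to the SQL engine in one round trip, which is where most of the elapsed time and CPU savings come from.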

We apply this technique for First Data and Bank of America.

Data Loading Method                          Elapsed Seconds   Database Server CPU Seconds
Bulk row inserts 100 rows at a time          14                7
without indexes (time includes index
rebuild using NOLOGGING and large
sort area)

4-CREATE as SELECT Using an Oracle 9i External Table

An external table looks somewhat like a regular table in that it has columns with standard Oracle data
types and you can query it just like any other table. However, the table data is not stored inside the
database. Instead, the external table is associated with a flat file on the database server. Whenever you
query the external table, Oracle parses the flat file and returns the rows to you as if the data were stored
in Oracle.
We apply this technique for Bank of America and BankLink.
You can join external tables to other tables in your queries, but you cannot index external tables nor can
you insert, update, or delete rows. If the flat file on the database server changes or is deleted, this will
affect the outcome of future queries against the external table.
External tables open the door to new strategies for loading data into Oracle. We can put the flat file on the
database server and create an external table in the database. At that time we can instantly query the data
as if it had been loaded into a conventional Oracle table. If the flat file gets removed or overwritten, the
old contents will no longer be accessible in the database. We could probably implement this for the
issuedata and images files.
However, we can (if needed) use an external table as a vehicle to copy data quickly from a flat file into a
conventional Oracle table.
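A sketch of the idea, using the CALLS_EXTERNAL name from the test (the directory path, file name, and column list are assumptions for illustration):

CREATE DIRECTORY load_dir AS '/u01/load';

CREATE TABLE calls_external
( call_id   NUMBER,
  call_date DATE )
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    ( call_id,
      call_date CHAR DATE_FORMAT DATE MASK "yyyy-mm-dd" ) )
  LOCATION ('calls.dat')
);

CREATE TABLE calls NOLOGGING
AS SELECT * FROM calls_external;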

I wrote a SQL*Plus script to drop the CALLS table, recreate it from the CALLS_EXTERNAL table as shown
above, and add the primary key and non-unique index. Populating the CALLS table in this way took 15
seconds, using 8 seconds of CPU time in the database server process. This case was slightly slower than
the bulk insert method discussed in the last section.
One advantage of this technique is that it achieves very good performance with very little coding effort. It
is also a method that works entirely within the database, so we can even schedule data loads using the
Oracle dbms_job scheduler.

Data Loading Method                          Elapsed Seconds   Database Server CPU Seconds
CREATE as SELECT from an external            15                8
table with NOLOGGING (time includes
index build using NOLOGGING and
large sort area)

5-SQL*Loader Conventional and Direct Path

SQL*Loader is a utility provided by Oracle specifically for the purpose of loading large volumes of data
from flat files into Oracle tables. It is very powerful, flexible, and fast. However, we must launch it from
the operating system. (Invoking SQL*Loader from a dbms_job or a PL/SQL stored procedure is tricky,
but it can be done.)
SQL*Loader still seems to be about the fastest and most efficient way to get flat file data into Oracle. By
default, SQL*Loader uses what it calls "conventional path" loading–bulk inserts, basically. The
performance is not phenomenal, and there are faster alternatives. However, with a simple "direct=true" on
the command line, we can invoke "direct path" loading. In a direct path load, SQL*Loader writes rows
directly into new data blocks above the table's high water mark. This is like inserting with the Append hint
as seen in the last section, but it's even better. In a direct path load, SQL*Loader also updates indexes
using a very efficient bulk method.
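For reference, a minimal control file and direct path invocation might look like this (the file names and column list are assumptions):

-- calls.ctl
LOAD DATA
INFILE 'calls.dat'
APPEND
INTO TABLE calls
FIELDS TERMINATED BY ','
( call_id,
  call_date DATE "yyyy-mm-dd" )

Then, from the operating system prompt:

sqlldr scott/tiger control=calls.ctl direct=true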
Loading the call center data using SQL*Loader with conventional path load took about 81 seconds. The
database server process used about 12 seconds of CPU time. This was with indexes in place and
SQL*Loader bulk inserting rows 100 at a time. With a direct path load, SQL*Loader got the data into the
database in just 9 seconds, with only 3 CPU seconds used by the database server process. This was with
indexes in place.
Direct path SQL*Loader is fast and efficient. It is even fast when indexes on the target table are not
dropped before the load. As for disadvantages, direct path loading has implications for recovery akin to
the NOLOGGING keyword. Also, indexes on the target table are unusable for a period during the load. This
can impact users trying to access the target table while a load is in progress. Also, indexes can be left in
an unusable state if the SQL*Loader session were to crash midway through the load.

Data Loading Method                          Elapsed Seconds   Database Server CPU Seconds
SQL*Loader conventional path                 81                12
(indexes in place and rows=100)
SQL*Loader direct path (indexes in place)    9                 3

Conclusion

There are many different ways to load data into Oracle. Each technique offers its own balance between
speed, simplicity, scalability, recoverability, and data availability. To recap, here are all of the timing figures
in one place:

Data Loading Method                          Elapsed Seconds   Database Server CPU Seconds
Single row inserts                           172               52
Single row inserts without indexes           130               35
(time includes index rebuild using
NOLOGGING and large sort area)
Bulk row inserts 100 rows at a time          14                7
without indexes (time includes index
rebuild using NOLOGGING and large
sort area)
CREATE as SELECT from an external            15                8
table with NOLOGGING (time includes
index build using NOLOGGING and
large sort area)
SQL*Loader conventional path                 81                12
(indexes in place and rows=100)
SQL*Loader direct path (indexes in place)    9                 3

Please keep in mind that I did not even touch on the subject of parallelism in data loads. (Inserts with the
Append hint can use parallelism in the Enterprise Edition of the Oracle software.)
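As a sketch (the staging table name is hypothetical), a parallel direct-path insert looks like this:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(calls, 4) */ INTO calls
SELECT * FROM calls_staging;

COMMIT;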

Oracle XML

What is XML and what is it used for?

XML (eXtensible Markup Language) is a W3C initiative that allows information and services to be encoded
with meaningful structure and semantics that both computers and humans can understand. XML is great
for information exchange, and can easily be extended to include user-specified and industry-specified
tags. Look at this simple example defining a FAQ:
<?xml version="1.0"?>
<!DOCTYPE question-list SYSTEM "faq.dtd">
<?xml-stylesheet type="text/xml" href="faq.xsl"?>

<FAQ-LIST>
<QUESTION>
<QUERY>Question goes here</QUERY>
<RESPONSE>Answer goes here.</RESPONSE>
</QUESTION>
<QUESTION>
<QUERY>Another question goes here.</QUERY>
<RESPONSE>The answer goes here.</RESPONSE>
</QUESTION>
</FAQ-LIST>

What is a DTD and what is it used for?

A Document Type Definition (DTD) defines the elements or record structure of an XML document. A DTD
allows your XML files to carry a description of their format with them. The DTD for the above XML example
looks like this:
<?xml version="1.0"?>

<!ELEMENT FAQ-LIST (QUESTION+)>

<!ELEMENT QUESTION (QUERY, RESPONSE*)>
<!ELEMENT QUERY (#PCDATA)>
<!ELEMENT RESPONSE (#PCDATA)>
Notes:

• #PCDATA (parsed character data) means that the element contains character data (text) that the
parser will parse, rather than child elements
• The + sign in the example above declares that the "QUESTION" element must occur one or more
times inside the "FAQ-LIST" element.
• The * sign in the example above declares that the "QUERY" element can occur zero or more times
inside the "QUESTION" element.

The W3C also formulated a newer standard, called XML Schema, which supersedes DTDs. Schemas allow for
more complex data types within your tags and better ways to constrain (validate) the data within these tags.
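For comparison, an XML Schema describing the same FAQ structure as the DTD above might look roughly like this (a sketch, not a complete schema):

<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="FAQ-LIST">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="QUESTION" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="QUERY" type="xs:string"/>
              <xs:element name="RESPONSE" type="xs:string"
                          minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>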

What XML-compliant products does Oracle provide?

• XMLDB

Standard option that ships with the Oracle 9i database (from 9.2.0). Previously called
Project XDB.

• XML-SQL Utility (XSU)

Programmatic interfaces (packages) for PL/SQL and Java

• XML Developer's kits (XDK) for Java, C, C++ and PL/SQL.


See demo programs in the $ORACLE_HOME/xdk/ directory.
• XSQL Servlet
Processing SQL queries and generating an XML result set
See http://technet.oracle.com/tech/xml/xsql_servlet/htdocs/relnotes.htm
• Other products
Oracle integrated XML in various other product offerings like the Oracle iFS, Oracle InterMedia,
JDeveloper, etc.

How does one map relational data from tables to XML?


If you're using Oracle 8i, use the DBMS_XMLQUERY and DBMS_XMLSAVE JAVA based packages. For Oracle
9i, use the C-based package DBMS_XMLGEN.

Look at the following Oracle 9i code example:

connect scott/tiger
set serveroutput on
DECLARE
Ctx DBMS_XMLGEN.ctxHandle; -- Var's to convert SQL output to XML
xml clob;
emp_no NUMBER := 7369;

xmlc varchar2(4000); -- Var's required to convert lob to varchar


off integer := 1;
len integer := 4000;
BEGIN
Ctx := DBMS_XMLGEN.newContext('SELECT * FROM emp WHERE empno = '||emp_no);
DBMS_XMLGen.setRowsetTag(Ctx, 'EMP_TABLE');
DBMS_XMLGen.setRowTag(Ctx, 'EMP_ROW');
xml := DBMS_XMLGEN.getXML(Ctx);
DBMS_XMLGEN.closeContext(Ctx);

DBMS_LOB.READ(xml, len, off, xmlc); -- Display first part on screen


DBMS_OUTPUT.PUT_LINE(xmlc);
END;
/
The same results can be achieved using SQLX (see http://sqlx.org/). Some of the SQLX functions are
XMLElement(), XMLForest(), XMLSequence(), etc. Look at this example.
set long 32000
SELECT XMLELEMENT("EMP_TABLE",
(select XMLELEMENT("EMP_ROW",
XMLFOREST(empno, ename, job, mgr, hiredate, sal, deptno)
)
from emp
where empno = 7369))
from dual;

An older Oracle 8i example:

connect scott/tiger
set serveroutput on
DECLARE
Ctx DBMS_XMLQuery.ctxType; -- Var's to convert SQL output to XML
xml clob;
emp_no NUMBER := 7369;

xmlc varchar2(4000); -- Var's required to convert lob to varchar


off integer := 1;
len integer := 4000;
BEGIN
Ctx := DBMS_XMLQuery.newContext('SELECT * FROM emp WHERE empno = :empno');
DBMS_XMLQuery.setBindValue(Ctx, 'empno', emp_no);
xml := DBMS_XMLQuery.getXML(Ctx);
DBMS_XMLQuery.closeContext(Ctx);
DBMS_LOB.READ(xml, len, off, xmlc); -- Display first part on screen
DBMS_OUTPUT.PUT_LINE(xmlc);
END;
/

How does one store and extract XML data from Oracle?

XML data can be stored in Oracle (9.2.0 and above) using the XMLType data type. Look at this example:
connect scott/tiger

create table XMLTable (doc_id number, xml_data XMLType);

insert into XMLTable values (1,


XMLType('<FAQ-LIST>
<QUESTION>
<QUERY>Question 1</QUERY>
<RESPONSE>Answer goes here.</RESPONSE>
</QUESTION>
</FAQ-LIST>'));

select extractValue(xml_data, '/FAQ-LIST/QUESTION/RESPONSE') -- XPath expression


from XMLTable
where existsNode(xml_data, '/FAQ-LIST/QUESTION[QUERY="Question 1"]') = 1;

The XML Developer Kit

Beginning with the first release of Oracle 8i, Oracle has offered the XDK for the Oracle database. For
Oracle 8i releases 1 and 2 (8.1.5 and 8.1.6) you must download the XDK from Oracle Technology Network
and install it manually. With Oracle 8i release 3 (8.1.7) and Oracle 9i, the XDK is integrated into the
database and is installed automatically when the database is installed.

The XDK may be accessed from PL/SQL, Java, C, and C++ applications. PL/SQL applications must run
inside the database (i.e. stored procedures, packages, triggers, or anonymous PL/SQL blocks) in order to
access the XDK. Java applications may reside inside or outside the database. C and C++ applications, of
course, reside outside the database.

In order to access the XDK from PL/SQL or Java applications residing inside the database, you must install
Oracle’s JVM (sometimes called the Java option or Oracle JServer). It may seem counter-intuitive that you
must have Oracle’s JVM installed in order to use the XDK from PL/SQL, but there is a reason for it: The
XDK is itself written in Java. Oracle merely put PL/SQL wrappers on the Java code in order to make it
accessible from PL/SQL.

The XDK contains an XML parser, an XSLT processor, an XPath engine, an XSQL page processor and
servlet, and an XML SQL utility. The first three are Oracle’s implementations of the XML 1.0 specification,
while the last two are features unique to Oracle.

Note that the XSQL page facility is only accessible from Java applications. Also note that in earlier releases
of the XDK, the XML SQL utility was a separate tool that you had to install separately from the XDK. You
can download the XML SQL utility from Oracle Technology Network.

Hello! Could you elaborate the example of serializing the XML into a table,
let's say the XML file has data which corresponds to 2 tables (repetitive tags
for a detail table), is it possible?
Followup:
Sean here...

This is pretty easy to accomplish. There is no automated XML utility in Oracle


to insert a single XML document into two different tables... What you could do,
however, is create a join view on the two tables, then write an INSTEAD OF
trigger on the join view. Insert the XML document into the join view. The
INSTEAD OF trigger's job would be to insert rows into the appropriate tables
based on the values of the parent key found in each ROWSET of the XML document.

As an example, I have an XML document that looks like so:

<?xml version = "1.0"?>


<ROWSET>
<ROW num="1">
<DEPTNO>10</DEPTNO>
<DNAME>SALES</DNAME>
<EMPNO>100</EMPNO>
<ENAME>MARK JOHNSON</ENAME>
</ROW>
<ROW num="2">
<DEPTNO>20</DEPTNO>
<DNAME>TECHNOLOGY</DNAME>
<EMPNO>200</EMPNO>
<ENAME>TOM KYTE</ENAME>
</ROW>
<ROW num="3">
<DEPTNO>20</DEPTNO>
<DNAME>TECHNOLOGY</DNAME>
<EMPNO>300</EMPNO>
<ENAME>SEAN DILLON</ENAME>
</ROW>
</ROWSET>

So you can see... the department data and the employee data co-mingled. We want
to normalize this into two tables... so here's what I'd do:

-----------------------------
system@SLAP> create table dept (
2 deptno number
3 primary key,
4 dname varchar2(30));
Table created.

system@SLAP> create table emp (


2 empno number
3 primary key,
4 deptno number,
5 ename varchar2(30));
Table created.

system@SLAP> create view deptemp as


2 select d.deptno, d.dname, e.empno, e.ename
3 from dept d, emp e
4 where d.deptno = e.deptno;
View created.
system@SLAP> create or replace trigger deptemp_ioifer
2 instead of insert on deptemp
3 declare
4 begin
5 begin
6 insert into dept (deptno, dname)
7 values (:new.deptno, :new.dname);
8 exception
9 when DUP_VAL_ON_INDEX then
10 update dept
11 set dname = :new.dname
12 where deptno = :new.deptno;
13 end;
14 --
15 insert into emp (empno, deptno, ename)
16 values (:new.empno, :new.deptno, :new.ename);
17 end;
18 /
Trigger created.

system@SLAP> declare
2 l_clob clob := '<?xml version = "1.0"?>
3 <ROWSET>
4 <ROW num="1">
5 <DEPTNO>10</DEPTNO>
6 <DNAME>SALES</DNAME>
7 <EMPNO>100</EMPNO>
8 <ENAME>MARK JOHNSON</ENAME>
9 </ROW>
10 <ROW num="2">
11 <DEPTNO>20</DEPTNO>
12 <DNAME>TECHNOLOGY</DNAME>
13 <EMPNO>200</EMPNO>
14 <ENAME>TOM KYTE</ENAME>
15 </ROW>
16 <ROW num="3">
17 <DEPTNO>20</DEPTNO>
18 <DNAME>TECHNOLOGY</DNAME>
19 <EMPNO>300</EMPNO>
20 <ENAME>SEAN DILLON</ENAME>
21 </ROW>
22 </ROWSET>';
23
24 l_ctx dbms_xmlsave.ctxType;
25 l_rows number;
26 begin
27 l_ctx := dbms_xmlsave.newContext('DEPTEMP');
28 l_rows := dbms_xmlsave.insertxml(l_ctx,l_clob);
29 dbms_xmlsave.closeContext(l_ctx);
30 dbms_output.put_line(l_rows || ' rows inserted...');
31 end insert_xml_emps;
32 /
PL/SQL procedure successfully completed.

system@SLAP> select * from dept;

DEPTNO DNAME
---------- ------------------------------
10 SALES
20 TECHNOLOGY

system@SLAP> select * from emp;

EMPNO DEPTNO ENAME


---------- ---------- ------------------------------
100 10 MARK JOHNSON
200 20 TOM KYTE
300 20 SEAN DILLON

-----------------------------

...and there you have it. Hope that helps!

How can I handle this? One row for the department table and 3 rows for the
employee table in the same XML row.
<?xml version = "1.0"?>
<ROWSET>
<ROW num="1">
<DEPTNO>10</DEPTNO>
<DNAME>SALES</DNAME>
<EMPLOYEE>
<EMPNO>100</EMPNO>
<ENAME>MARK JOHNSON</ENAME>
</EMPLOYEE>
<EMPLOYEE>
<EMPNO>200</EMPNO>
<ENAME>VICTOR JAEN</ENAME>
</EMPLOYEE>
<EMPLOYEE>
<EMPNO>300</EMPNO>
<ENAME>JHON SMITH</ENAME>
</EMPLOYEE>
</ROW>
</ROWSET>

Thanks a lot!

Followup:

1* select dbms_xmlgen.getxml( 'select deptno, dname, cursor( select empno,
ename from emp where emp.deptno = dept.deptno ) employee from dept where deptno
= 10' ) from dual
scott@ORA920> /

DBMS_XMLGEN.GETXML('SELECTDEPTNO,DNAME,CURSOR(SELECTEMPNO,ENAMEFROMEMPWHEREEMP.D
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<ROWSET>
<ROW>
<DEPTNO>10</DEPTNO>
<DNAME>ACCOUNTING</DNAME>
<EMPLOYEE>
<EMPLOYEE_ROW>
<EMPNO>7782</EMPNO>
<ENAME>CLARK</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7839</EMPNO>
<ENAME>KING</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7934</EMPNO>
<ENAME>MILLER</ENAME>
</EMPLOYEE_ROW>
</EMPLOYEE>
</ROW>
</ROWSET>

Oracle 9i and 8i Database Limits

Oracle8i ONLY

Advanced Queuing Processes: maximum 10 per instance
Job Queue Processes: maximum 36 per instance
I/O Slave Processes: maximum 15 per background process (DBWR, LGWR, etc.); maximum 15 per backup session
Sessions: maximum 32K per instance, limited by the PROCESSES and SESSIONS init parameters
LCK Processes: maximum 10 per instance
MTS Servers: unlimited per instance within constraints set by the PROCESSES and SESSIONS init parameters
Dispatchers: unlimited per instance within constraints set by the PROCESSES and SESSIONS init parameters
Parallel Execution Slaves: unlimited per instance within constraints set by the PROCESSES and SESSIONS init parameters
Backup Sessions: unlimited per instance within constraints set by the PROCESSES and SESSIONS init parameters


Oracle 9i Limits

Datatype Limits

VARCHAR2: Maximum size is 4000 bytes
NVARCHAR2: Maximum size is determined by the national character set definition, with an upper limit of 4000 bytes
NUMBER(p,s): The precision p can range from 1 to 38. The scale s can range from -84 to 127
LONG: Up to 2 gigabytes (2^31 - 1 bytes)
DATE: Range from January 1, 4712 BC to December 31, 9999 AD
TIMESTAMP(fractional_seconds_precision): fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6
TIMESTAMP(fractional_seconds_precision) WITH TIME ZONE: fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6
TIMESTAMP(fractional_seconds_precision) WITH LOCAL TIME ZONE: Same as TIMESTAMP(fractional_seconds_precision) WITH TIME ZONE
INTERVAL YEAR(year_precision) TO MONTH: year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2
INTERVAL DAY(day_precision) TO SECOND(fractional_seconds_precision): day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2. fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6
RAW(size): Maximum size is 2000 bytes
LONG RAW: Up to 2 gigabytes
ROWID: Base 64 string representing the unique address of a row in its table
UROWID[(size)]: Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes
CHAR(size) [BYTE | CHAR]: Fixed-length character data of length size bytes. Maximum size is 2000 bytes. Default and minimum size is 1 byte
NCHAR(size): Fixed-length character data of length size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character
CLOB: Maximum size is 4 gigabytes
NCLOB: Maximum size is 4 gigabytes
BLOB: Maximum size is 4 gigabytes
BFILE: Maximum size is 4 gigabytes

Physical Database Limits

Database Block Size: Minimum 2048 bytes; must be a multiple of the operating system physical block size. Maximum is operating system dependent; never more than 32 KB
Database Blocks: Minimum 2 blocks in the initial extent of a segment. Maximum per datafile is platform dependent; typically 2^22 - 1 blocks
Controlfiles: 1 control file minimum; 2 or more (on separate devices) strongly recommended. The size of a control file depends on the operating system and database creation options; maximum of 20,000 x (database block size)
Database files: Maximum per tablespace is operating system dependent; usually 1022. Maximum per database is 65533 (may be less on some operating systems); limited also by the size of database blocks and by the DB_FILES initialization parameter for a particular instance
Database extents: Maximum 2 GB, regardless of the maximum file size allowed by the operating system
Database file size: Maximum is operating system dependent; limited by the maximum operating system file size; typically 2^22 or 4M blocks
MAXEXTENTS: Default value derived from tablespace default storage or the DB_BLOCK_SIZE initialization parameter. Maximum is unlimited
Redo Log Files: Maximum number of logfiles is limited by the value of the MAXLOGFILES parameter in the CREATE DATABASE statement. The control file can be resized to allow more entries; ultimately an operating system limit. Maximum number of logfiles per group is unlimited
Redo Log File Size: Minimum size 50 KB; maximum size is an operating system limit, typically 2 GB
Tablespaces: Maximum 64 K per database. The number of tablespaces cannot exceed the number of database files, as each tablespace must include at least one file

Logical Database Limits

GROUP BY clause: Maximum length: the GROUP BY expression and all of the nondistinct aggregate functions (for example, SUM, AVG) must fit within a single database block
Indexes: Unlimited per table. Total size of indexed columns is 75% of the database block size minus some overhead
Columns: 1000 columns maximum per table; 32 columns maximum per index (or clustered index); 30 columns maximum per bitmapped index
Constraints: Unlimited per column
Subqueries: Unlimited levels of subqueries in the FROM clause of the top-level query; 255 subqueries in the WHERE clause
Partitions: Maximum length of linear partitioning key is 4 KB minus overhead; maximum of 16 columns in the partition key; maximum of 64K - 1 partitions per table or index
Rollback Segments: No limit per database; limited within a session by the MAX_ROLLBACK_SEGMENTS initialization parameter
Rows: Unlimited per table
SQL Statement Length: 64 KB maximum; particular tools may impose lower limits
Stored Packages: PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call, typically ranging from 2000 to 3000 lines of code. See your PL/SQL or Developer/2000 documentation for details
Trigger Cascade Limit: Operating system dependent; typically 32
Users and Roles: Maximum 2,147,483,638
Tables: 32 tables maximum per clustered table; unlimited per database

Process and Runtime Limits

Instances per database: Maximum number of cluster database instances per database is operating system dependent
Locks: Row-level locks are unlimited; the Distributed Lock Manager is operating system dependent
SGA size: Maximum value is operating system dependent; typically 2 to 4 GB for 32-bit operating systems, and more than 4 GB for 64-bit operating systems
Advanced Queuing Processes: Maximum 10 per instance
Job Queue Processes: Maximum 1000 per instance
I/O Slave Processes: Maximum 15 per background process (DBWR, LGWR, etc.); maximum 15 per backup session
Sessions: Maximum 32K per instance; limited by the PROCESSES and SESSIONS initialization parameters
Global Cache Service Processes: Maximum 10 per instance
Shared Servers: Unlimited per instance within constraints set by the PROCESSES and SESSIONS initialization parameters
Dispatchers: Unlimited per instance within constraints set by the PROCESSES and SESSIONS initialization parameters
Parallel Execution Slaves: Unlimited per instance within constraints set by the PROCESSES and SESSIONS initialization parameters
Backup Sessions: Unlimited per instance within constraints set by the PROCESSES and SESSIONS initialization parameters

Oracle 8i Limits
Datatype Limits

BFILE: Maximum size 4 GB; maximum size of file name 255 characters. The maximum number of open BFILEs is limited by SESSION_MAX_OPEN_FILES, which is itself limited by the maximum number of open files the operating system will allow
BLOB: 4 GB maximum. The number of LOB columns per table is limited only by the maximum number of columns per table (i.e., 1000)
CHAR: 2000 bytes maximum
CHAR VARYING: 4000 bytes maximum
CLOB: 4 GB maximum. The number of LOB columns per table is limited only by the maximum number of columns per table (i.e., 1000)
Literals (characters or numbers in SQL or PL/SQL): 4000 characters maximum
LONG: 2^31 - 1 bytes (2 GB) maximum. Only one LONG column allowed per table
NCHAR: 2000 bytes maximum
NCHAR VARYING: 4000 bytes maximum
NCLOB: 4 GB maximum. The number of LOB columns per table is limited only by the maximum number of columns per table (i.e., 1000)
NUMBER: Maximum value 999...(38 9's) x 10^125; minimum value -999...(38 9's) x 10^125; can be represented to full 38-digit precision (the mantissa)
RAW: 2000 bytes maximum
VARCHAR: 4000 bytes maximum
VARCHAR2: 4000 bytes maximum

Physical Database Limits

Database Block Size: Minimum 2048 bytes; must be a multiple of the O/S physical block size. Maximum is O/S dependent; never more than 32 KB
Database Blocks: Minimum 2 blocks in the initial extent of a segment. Maximum per datafile is platform dependent; typically 2^22 blocks
Controlfiles: 1 controlfile minimum; 2 or more (on separate devices) strongly recommended. The size of the controlfile depends on the O/S and database creation options; maximum of 20,000 x (database block size)
Database files: Maximum per tablespace is O/S dependent, usually 1022. Maximum per database is 65533 (may be less on some operating systems); limited also by the size of database blocks and by the DB_FILES init parameter for a particular instance
Database file size: Maximum is O/S dependent; limited by the maximum O/S file size; typically 2^22 or 4M blocks
MAXEXTENTS: Default value derived from tablespace default storage or DB_BLOCK_SIZE; maximum unlimited
Redo Log Files: Maximum number of logfiles is limited by the LOG_FILES initialization parameter, or MAXLOGFILES in CREATE DATABASE; the controlfile can be resized to allow more entries; ultimately an O/S limit. Maximum number of logfiles per group is unlimited
Redo Log File Size: Minimum size 50 KB; maximum size is an O/S limit, typically 2 GB
Tablespaces: Maximum 64 K per database. The number of tablespaces cannot exceed the number of database files, as each tablespace must include at least one file

Logical Database Limits

GROUP BY clause: Maximum length: the group-by expression and all of the non-distinct aggregates (e.g., SUM, AVG) need to fit within a single database block
Indexes: Unlimited per table. Total size of indexed columns is 40% of the database block size minus some overhead
Columns: 1000 columns maximum per table; 32 columns maximum per index (or clustered index); 30 columns maximum per bitmapped index
Constraints: Unlimited per column
Nested Queries: Maximum number 255
Partitions: Maximum length of linear partitioning key is 4 KB minus overhead; maximum of 16 columns in the partition key; maximum of 64K - 1 partitions per table or index
Rollback Segments: No limit per database; limited within a session by the MAX_ROLLBACK_SEGMENTS init parameter
Rows: No limit per table
SQL Statement Length: 64K maximum; particular tools may impose lower limits
Stored Packages: PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call, typically ranging from 2000 to 3000 lines of code. Consult your PL/SQL or Developer/2000 documentation for details
Trigger Cascade Limit: O/S dependent, typically 32
Users and Roles: Maximum 2,147,483,638
Tables: 32 tables maximum per clustered table; unlimited per database

Schema Object Naming

Names of databases: 8 bytes
Names of database links: 128 bytes
All other schema objects: 30 bytes

Process / Runtime Limits

Instances per database: Maximum number of OPS instances per database is O/S dependent
Locks: Row-level locks are unlimited; the Distributed Lock Manager is O/S dependent
SGA size: Maximum value is O/S dependent; typically 2 to 4 GB for a 32-bit O/S, and more than 4 GB for a 64-bit O/S

Effective Coding Style

Revealing Logical Structure with Indentation


Indentation is one of the most common and effective techniques used to display a program's logic via
format. As illustrated in the following examples, programs that are indented are easier to read than those
that are not indented. Here is an unindented IF statement:

IF to_number(the_value) > 22 THEN


IF max_totals = 0 THEN
calc_totals;
ELSE WHILE more_data
LOOP
analyze_results;
END LOOP;
END IF;
END IF;

I have found that a three (or four)-space indentation not only adequately reveals the logical structure of
the code but also keeps the statements close enough together to read comfortably. And, with deeply
nested structures, you won't run off the right margin as quickly! Here is the three-space indented version
of the previous nested IF statement:
IF to_number(the_value) > 22 THEN
IF max_totals = 0 THEN
calc_totals;
ELSE
WHILE more_data
LOOP
analyze_results;
END LOOP;
END IF;
END IF;

Using Case to Aid Readability


PL/SQL code is made up of many different components: variables, form items, report fields, procedures,
functions, loops, declarations, control elements, etc. But they break down roughly into two types of text:
reserved words and application-specific names or identifiers. Reserved words are those names of language
elements that are reserved by PL/SQL and have a special meaning for the compiler. Some examples of
reserved words in PL/SQL are:
WHILE
IF
BEGIN
TO_CHAR

Application-specific identifiers are the names that you give to data and program structures that are
specific to your application and that vary from system to system. The compiler treats these two kinds of
text very differently. You can improve the readability of your code greatly by reflecting this difference in
the way the text is displayed. Many developers make no distinction between reserved words and
application-specific identifiers. Consider the following lines of code:

if to_number(the_value)>22 and num1 between lval and hval then
   newval := 100;
elsif to_number(the_value) < 1 then
   calc_tots(to_date('12-jan-95'));
else
   clear_vals;
end if;

While the use of indentation makes it easier to follow the logical flow of the IF statement, all the words in
the statements tend to blend together. It is difficult to separate the reserved words and the application
identifiers in this code. Changing entirely to uppercase also will not improve matters. Indiscriminate, albeit
consistent, use of upper- or lowercase for your code reduces its readability. The distinction between
reserved words and application-specific identifiers is ignored in the formatting. This translates into a loss
of information and comprehension for a developer.

The UPPER-lower Style


You can easily solve this problem by adopting a guideline for using a mix of upper- and lowercase to your
code. I have recoded my previous example below, this time using the UPPER-lower style: all reserved
words are written in UPPERCASE and all application names are kept in lowercase:

IF TO_NUMBER (the_value) > 22 AND num1 BETWEEN lval AND hval THEN
   newval := 100;
ELSIF TO_NUMBER (the_value) < 1 THEN
   calc_tots (TO_DATE ('12-jan-95'));
ELSE
   clear_vals;
END IF;

Using a mixture of upper- and lowercase words increases the readability of the code by giving a sense of
dimension to the code. The eye can more easily cruise over the text and pick the different syntactical
elements of each statement. You can focus quickly on the lowercase words for the application-specific
content. Consistent use of this method makes the program listings more attractive and accessible at a
glance.

Formatting Single Statements


Most of your code consists of individual statements, such as assignments, calls to modules, and
declarations. A consistent approach to formatting and grouping such statements will improve the
readability of your program as a whole. This section suggests some guidelines.

Use at most one statement per line


PL/SQL uses the semicolon (;) as the logical terminator for a statement. As a result, you can have more
than one statement on a line, and you can continue a single executable statement over more than one line.
You will sometimes be tempted to place several statements on a single line, particularly if they are very
simple. Consider the following line:
new_id := 15; calc_total (new_id); max_dollars := 105 * sales_adj;

It is very difficult to pick out the individual statements in this line, in addition to the fact that a procedure
is called in the middle of the line. By placing each statement on its own line you mirror the complexity of a
program--the simple lines look simple and the complex statements look complex--and reinforce the top-
to-bottom logic of the program:
new_id := 15;
calc_total (new_id);
max_dollars := 105 * sales_adj;

Use whitespace inside a statement


You can use all the indentation and blank lines you want to reveal the logic of a program and still end up
with some very dense and unreadable code. It is also important to employ whitespace within a single line
to make that one statement more comprehensible. Always include a space between every identifier and
separator in a statement. Instead of this:
WHILE(total_sales<maximum_sales AND company_type='NEW')LOOP
write this:
WHILE (total_sales < maximum_sales AND company_type = 'NEW') LOOP

Use indentation to offset all continuation lines under the first line.
This is the most important guideline. The best way to identify continuation lines is to use indentation to
logically subsume those lines under the main or first line of the statement. The following call to
generate_company_statistics is obscured because the continuation line butts right up against the left
margin with the module name:

generate_company_statistics (company_id, last_year_date,
rollup_type, total, average, variance, budgeted, next_year_plan);

If I indent the continuation line, the relationship of the second line to the first becomes clear:

generate_company_statistics (company_id, last_year_date,
   rollup_type, total, average, variance, budgeted, next_year_plan);

Formatting SQL Statements


Because PL/SQL is an extension to the SQL language, you can place SQL statements directly in your
PL/SQL programs. You can also define cursors based on SELECT statements. This section summarizes my
suggestions for formatting SQL statements and cursors for maximum readability. PL/SQL supports the use
of four SQL DML (Data Manipulation Language) statements: INSERT, UPDATE, DELETE, and SELECT. Each
of these statements is composed of a series of "clauses," as in the WHERE clause and the ORDER BY
clause. SQL statements can be very complex, to say the least. Without a consistent approach to
indentation and alignment inside these statements, you can end up with a real mess. I have found the
following guidelines useful:

• Right-align the reserved words for the clauses against the DML statement.

I recommend that you visually separate the SQL reserved words which identify the separate
clauses from the application-specific column and table names. The following table shows how
I use right-alignment on the reserved words to create a vertical border between them and
the rest of the SQL statement:

SELECT          INSERT            UPDATE      DELETE

SELECT          INSERT INTO       UPDATE      DELETE
  FROM          VALUES               SET        FROM
 WHERE                             WHERE       WHERE
   AND          INSERT INTO
    OR          SELECT
 GROUP BY         FROM
HAVING           WHERE
   AND
    OR
 ORDER BY
Here are some examples of this format in use:
SELECT last_name, first_name
  FROM employee
 WHERE department_id = 15
   AND hire_date < SYSDATE;

SELECT department_id, SUM (salary) AS total_salary
  FROM employee
 GROUP BY department_id
 ORDER BY total_salary DESC;

INSERT INTO employee (employee_id, ... )
VALUES (105 ... );

UPDATE employee
   SET hire_date = SYSDATE
 WHERE hire_date IS NULL
   AND termination_date IS NULL;

This right alignment makes it very easy to identify the different clauses of the SQL
statement, particularly with extended SELECTs. You might also consider placing a blank line
between clauses of longer SQL statements (this is possible in PL/SQL, but is not acceptable
in "native" SQL executed in SQL*Plus).
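To illustrate, here is a minimal sketch of a cursor whose clauses are separated by blank lines; the employee table and its columns are assumed purely for illustration:

```sql
DECLARE
   /* Blank lines between clauses are legal inside a PL/SQL block,
   || though classic SQL*Plus would treat a blank line as the end
   || of a "native" SQL statement. */
   CURSOR high_paid_cur
   IS
      SELECT last_name, salary
        FROM employee

       WHERE salary > 100000

       ORDER BY salary DESC;
BEGIN
   NULL;
END;
```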

• Use meaningful abbreviations for table and column aliases

It's impossible to read a program when a query has a six-table join and the tables have been
assigned aliases A, B, C, D, E, and F. How can you possibly decipher the WHERE clause in the
following SELECT?
SELECT ... select list ...
FROM employee A, company B, history C, bonus D, profile E, sales F
WHERE A.company_id = B.company_id
AND A.employee_id = C.employee_id
AND B.company_id = F.company_id
AND A.employee_id = D.employee_id
AND B.company_id = E.company_id;

With more sensible table aliases (including no table aliases at all where the table name was short
enough already), the relationships are much clearer:
SELECT ... select list ...
FROM employee EMP, company CO, history HIST, bonus, profile PROF, sales
WHERE EMP.company_id = CO.company_id
AND EMP.employee_id = HIST.employee_id
AND CO.company_id = SALES.company_id
AND EMP.employee_id = BONUS.employee_id
AND CO.company_id = PROF.company_id;

Formatting Exception Handlers


PL/SQL provides a very powerful facility for dealing with errors. An entirely separate exception section
contains one or more "handlers" to trap exceptions and execute code when that exception occurs.
Logically, the exception section is structured like a conditional CASE statement (which, prior to Oracle9i,
was not supported by PL/SQL). As you might expect, the format for the exception section should resemble
that of an IF statement. Here is a general example of the exception section:

EXCEPTION
   WHEN NO_DATA_FOUND THEN
      executable_statements1;
   WHEN DUP_VAL_ON_INDEX THEN
      executable_statements2;
   ...
   WHEN OTHERS THEN
      otherwise_code;
END;

Formatting PL/SQL Blocks


The PL/SQL block structure forms the backbone of your code. A consistent formatting style for the block,
therefore, is critical. This formatting should make clear these different sections. Consider the following
function:

FUNCTION
company_name (company_id_in IN company.company_id%TYPE) RETURN
VARCHAR2 IS cname company.name%TYPE; BEGIN
SELECT name INTO cname FROM company
WHERE company_id = company_id_in;
RETURN cname;
EXCEPTION WHEN NO_DATA_FOUND THEN
RETURN NULL;
END;
You know that this program is a function because the first word in the program is FUNCTION. Other than
that, however, it is very difficult to follow the structure of this program. Where is the declaration section?
Where does the executable section begin and end? Here is that same function after we apply some
straightforward formatting rules to it:
FUNCTION company_name (company_id_in IN company.company_id%TYPE)
   RETURN VARCHAR2
IS
   cname company.name%TYPE;
BEGIN
   SELECT name INTO cname
     FROM company
    WHERE company_id = company_id_in;
   RETURN cname;

EXCEPTION
   WHEN NO_DATA_FOUND THEN
      RETURN NULL;
END;

The declaration section, which comes after the IS and before the BEGIN, clearly consists of a single
declaration of the cname variable. The executable section consists of all the statements after the BEGIN
and before the EXCEPTION keyword; these are indented in from the BEGIN. Finally, the exception
section shows a single specific exception handler. Generally, indent the statements for a given section
from the reserved words which initiate the section. You can also include a blank line before each section,
as I do above, for the executable section (before BEGIN) and the exception section (before EXCEPTION).
I usually place the IS keyword on its own line to clearly differentiate between the header of a module
and its declaration section.

Make Comments Easy to Enter and Maintain


You shouldn't spend a lot of time formatting your comments. You need to develop a style that is clean and
easy to read, but also easy to maintain. When you have to change a comment, you shouldn't have to
reformat every line in the comment. Lots of fancy formatting is a good indication that you have a high-
maintenance documentation style. The following block comment is a maintenance nightmare:

/*
===========================================================
| Parameter Description |
| |
| company_id The primary key to company |
| start_date Start date used for date range |
| end_date End date for date range |
===========================================================
*/

The right-justified vertical lines and column formatting for the parameters require way too much effort to
enter and maintain. What happens if you add a
parameter with a very long name? What if you need to write a longer description? A simpler and more
maintainable version of this comment might be:

/*
===========================================================
| Parameter - Description
|
| company_id - The primary key to company
| start_date - Start date used for date range
| end_date - End date for date range
===========================================================
*/

Maintain Indentation
Inline commentary should reinforce the indentation, and therefore the logical structure, of the program.
For example, it is very easy to find the comments in the make_array procedure shown below.

PROCEDURE make_array (num_rows_in IN INTEGER, num_columns_in IN INTEGER)
/* Create an array with the specified numbers of rows and columns */
IS
   /* Handles to Oracle Forms structures */
   col_id GROUPCOLUMN;
   rg_id RECORDGROUP;
BEGIN
   /* Create new record group and column */
   rg_id := CREATE_GROUP ('array');
   col_id := ADD_GROUP_COLUMN (rg_id, 'col', NUMBER_COLUMN);
   /*
   || Use a loop to create the specified number of rows and
   || set the value in each cell.
   */
   FOR row_index IN 1 .. num_rows_in
   LOOP
      /* Create a row at the end of the group to accept data */
      ADD_GROUP_ROW (rg_id, END_OF_GROUP);
      FOR col_index IN 1 .. num_columns_in
      LOOP
         /* Set the initial value in the cell */
         SET_GROUP_NUMBER_CELL (col_id, row_index, 0);
      END LOOP;
   END LOOP;
END;

Documenting the Entire Package


A package is often a complicated and long construct. It is composed of many different types of objects,
any of which may be public (visible to programs and users outside of the package) or private (available
only to other objects in the package). You can use some very simple documentation guidelines to clarify
the structure of the package. As usual when discussing packages, one must consider the specification
separately from the body. As a meta-module or grouping of modules, the specification should have a
standard header. This header needn't be as complicated as that of a specific module, because you do not
want to repeat in the package header any information which also belongs in specific modules. I suggest
using the template header shown in the following example. In the "Major Modifications" section of the
header, do not include every change made to every object in the package. Instead note significant
changes to the package as a whole, such as an expansion of scope, a change in the way the package and
global variables are managed, etc. Place this header after the package name and before the IS statement:

PACKAGE package_name
/*
|| Author:
||
|| Overview:
||
|| Major Modifications (when, who, what)
||*/
IS
...
END package_name;

Document the Package Specification


The package specification is, in essence, a series of declaration statements. Some of those statements
declare variables, while others declare modules. Follow the same recommendation in commenting a
package as you do in commenting a module's declaration section: provide a comment for each
declaration. In addition to the comments for a specific declaration, you may also find it useful to provide a
banner before a group of related declarations to make that connection obvious to the reader. Surround the
banner with whitespace (blank lines for the start/end of a multiline comment block). While you can use
many different formats for this banner, use the simplest possible design that gets the point across.
Everything else is clutter. The package specification below illustrates the header and declaration-level
comment styles, as well as group banners:

PACKAGE rg_select
/*
|| Author: Diego Pafumi
||
|| Overview: Manage a list of selected items correlated with a
||           block on the screen.
||
|| Major Modifications (when, who, what)
||    12/94 - DP - Create package
||     3/95 - IS - Enhance to support coordinated blocks
||*/
IS
   /*----------------- Modules to Define the List -------------------*/

   /* Initialize the list/record group. */
   PROCEDURE init_list (item_name_in IN VARCHAR2);

   /* Delete the list */
   PROCEDURE delete_list;

   /*------------------ Modules to Manage Item Selections -----------*/

   /* Mark item as selected */
   PROCEDURE select_item (row_in IN INTEGER);

   /* De-select the item from the list */
   PROCEDURE deselect_item (row_in IN INTEGER);

END rg_select;

Oracle Naming Coding Standards

Tables
Table names should reflect the data in the table. Names are a mix of upper- and lowercase letters with no
underscores. They should be fully spelled out; either singular or plural is acceptable, but pick one
convention and apply it consistently.
Example: UserProfile.

Columns
Table columns should be named in the same way tables are named.
Examples: FormID, ScanDate, ZoneName.

Stored Procedures
Atlantis-related stored procedures start with "ap_".
Project-specific stored procedures should be prefixed with "p_".
Following the prefix should be an abbreviation for a verb, or action word. Some typical actions on data are select
(sel), insert (ins), update (upd) and delete (del). If a combination of actions is possible in a stored procedure, then
use the first letter of each action. For example, if a stored procedure can do an insert or update, then use iu. An
underscore character should then follow the verb abbreviation.
Next, the name of the table affected or some other noun should be used to describe what is affected. Each word in
this part of the name is a mix of upper and lowercase, where the first character is uppercase and the rest lowercase.
There should be no underscores in this part of the name.
Examples are listed below:
ap_ins_Form - An Atlantis stored procedure that inserts a record into the Form table.
ap_iu_Document - An Atlantis stored procedure that inserts or updates a record in the Document table.
ap_sel_Zone - An Atlantis stored procedure that selects records from the Zone table.
p_del_Account - A project-specific stored procedure that deletes records from the Account table.

Triggers
Atlantis triggers start with "at_" and project-specific triggers start with "t_".
Following this prefix is an indicator describing the action that fires the trigger. Use del for delete, ins for insert and
upd for update. If a trigger is used for more than one of these operations, then just include the first letter of each;
for example, use iu for an insert/update trigger. Next comes an underscore followed by the suffix. The suffix is a
mixed-case name that includes the table name and/or any other descriptive text.
Examples: at_del_Document, t_iu_UserProfile.
Views
Atlantis views have a prefix of "av_". A project-specific view should have a prefix of "v_" or "view_".
Following the prefix, the view name should contain some sort of descriptive reference. If the view contains a simple
join of two tables, then include the table names. For example: v_Table1Table2.
The suffix should be a mix of upper and lowercase.

Indexes
Index names should have an "in_" prefix. The rest of the name is a mix of upper and lowercase and contains some
meaningful text about the nature of the index.
Example: in_EmployeeID.

Constraints
Primary keys are prefixed with "pk_", unique keys with "uk_" or "unique_", and foreign keys with "fk_".
The remainder of the name is a mix of upper and lowercase and usually contains the name of the field(s) included in the key.
Examples: pk_FormID, fk_ImageType.

Sequence
Sequence names begin with "s_" followed by the field name (i.e. s_Field). If the field name is ambiguous, then
precede the field name with the table name: s_TableField.
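Taken together, the naming conventions above might look like this in DDL. This is only a sketch; all object names here are hypothetical, chosen to illustrate the prefixes rather than taken from the Atlantis project:

```sql
-- Table and columns: mixed case, no underscores (hypothetical names)
CREATE TABLE UserProfile (
   UserProfileID NUMBER       NOT NULL,
   ZoneName      VARCHAR2(30),
   ScanDate      DATE,
   -- Primary key constraint: "pk_" prefix plus the key field name
   CONSTRAINT pk_UserProfileID PRIMARY KEY (UserProfileID)
);

-- Index: "in_" prefix plus meaningful text about the indexed data
CREATE INDEX in_ZoneName ON UserProfile (ZoneName);

-- Sequence: "s_" prefix plus the field it populates
CREATE SEQUENCE s_UserProfileID;

-- Project-specific view: "v_" prefix plus a descriptive reference
CREATE OR REPLACE VIEW v_UserProfileZone AS
   SELECT UserProfileID, ZoneName
     FROM UserProfile;
```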