Chapter 12: Indexing and Hashing
Basic Concepts
Ordered Indices
B+-Tree Index Files
B-Tree Index Files
Static Hashing
Dynamic Hashing
Comparison of Ordered Indexing and Hashing
Index Definition in SQL
Multiple-Key Access
Silberschatz, Korth and Sudarshan 12.2 Database System Concepts
Data Dictionary Storage
Data dictionary (also called system catalog) stores metadata, that is, data about data, such as:
Information about relations
  names of relations
  names and types of attributes of each relation
  names and definitions of views
  integrity constraints
User and accounting information, including passwords
Statistical and descriptive data
  number of tuples in each relation
Physical file organization information
  how the relation is stored (sequential/hash/...)
  physical location of the relation
Information about indices (Chapter 12)
Silberschatz, Korth and Sudarshan 12.3 Database System Concepts
Data Dictionary Storage (Cont.)
Catalog structure
Relational representation on disk
specialized data structures designed for efficient access, in memory
A possible catalog representation:

Relation_metadata = (relation_name, number_of_attributes,
storage_organization, location)
Attribute_metadata = (attribute_name, relation_name, domain_type,
position, length)
User_metadata = (user_name, encrypted_password, group)
Index_metadata = (index_name, relation_name, index_type,
index_attributes)
View_metadata = (view_name, definition)

Silberschatz, Korth and Sudarshan 12.4 Database System Concepts
Indexing: Basic Concepts
Indexing mechanisms are used to speed up access to desired
data.
E.g., author catalog in library
Search key: attribute or set of attributes used to look up records in a file.
An index file consists of records (called index entries) of the form
  (search-key, pointer)
Index files are typically much smaller than the original file
Two basic kinds of indices:
  Ordered indices: search keys are stored in sorted order
  Hash indices: search keys are distributed uniformly across buckets using a hash function.
Silberschatz, Korth and Sudarshan 12.5 Database System Concepts
Basic concepts (Cont.)
Index definition: Index is a file whose records (entries) have the
following structure:
(search-key, pointer or set of pointers to data records).
When the search key is unique the index has fixed size records,
otherwise it may have variable size records.
What is the pointer?
Block number or RID!

Silberschatz, Korth and Sudarshan 12.6 Database System Concepts
Index Evaluation Metrics
Indexing techniques are evaluated on the basis of:
Access types supported efficiently. E.g.,
records with a specified value in the attribute
or records with an attribute value falling in a specified range of
values.
Access time
Insertion time
Deletion time
Space overhead
Silberschatz, Korth and Sudarshan 12.7 Database System Concepts
Ordered Indices
In an ordered index, index entries are stored sorted on the
search key value. E.g., author catalog in library.
Primary index: in a sequentially ordered file, the index whose
search key specifies the sequential order of the file.
Also called clustering index
The search key of a primary index is usually but not necessarily the
primary key.
Secondary index: an index whose search key specifies an order
different from the sequential order of the file. Also called
non-clustering index.
Index-sequential file: ordered sequential file with a primary index.
Silberschatz, Korth and Sudarshan 12.8 Database System Concepts
Classes of Indexes

Unique/non-unique: whether the search key is unique or non-unique.
Dense/non-dense: every/not every search key (every record, for a unique key) in the data file has a corresponding pointer in the index.
Clustered/non-clustered: the order of the index search key is equal/not equal to the file order.
Primary/secondary: the search key is equal/not equal to the primary key. Also, the answer is a single data-file record / a set of data-file records.

Note the difference in the definition of Primary index (us vs. Silberschatz).
Note: for a unique key, a dense index indexes every record; a non-dense index does not index every record.
Note: a non-dense index must be clustered!


Silberschatz, Korth and Sudarshan 12.9 Database System Concepts
Dense Clustered Index File
Dense index: an index record appears for every search-key value in the file. (Note: for every search-key value vs. for every record.)
Silberschatz, Korth and Sudarshan 12.10 Database System Concepts
Sparse Index Files
Sparse Index: contains index records for only some search-key
values.
Applicable when records are sequentially ordered on search-key
To locate a record with search-key value K:
  Find the index record with the largest search-key value ≤ K
  Search the file sequentially starting at the record to which the index record points (a short sketch follows below)
Less space and less maintenance overhead for insertions and
deletions.
Generally slower than dense index for locating records.
Good tradeoff: sparse index with an index entry for every block in
file, corresponding to least search-key value in the block.
Basis of the ISAM index
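A minimal sketch of this lookup, assuming the sparse index is an in-memory sorted list with one (least key in block, block number) entry per block and that blocks are chained in sequential order (all names here are illustrative, not from the slides):

```python
import bisect

def sparse_index_lookup(index, blocks, k):
    """index:  sorted list of (least_search_key_in_block, block_no), one entry per block.
    blocks: block_no -> {"records": [(search_key, record), ...] in key order,
                         "next": block_no of the next block in sequential order (or None)}.
    Returns the first record with search-key value k, or None."""
    keys = [entry[0] for entry in index]
    pos = bisect.bisect_right(keys, k) - 1      # index record with largest key <= k
    if pos < 0:
        return None                             # k is smaller than every indexed key
    block_no = index[pos][1]
    while block_no is not None:                 # scan the file sequentially from there
        for key, record in blocks[block_no]["records"]:
            if key == k:
                return record
            if key > k:                         # file is sorted on the key: we can stop
                return None
        block_no = blocks[block_no]["next"]
    return None
```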
Silberschatz, Korth and Sudarshan 12.11 Database System Concepts
Example of Sparse Clustered Index File
Silberschatz, Korth and Sudarshan 12.12 Database System Concepts
Multilevel Index
If primary index does not fit in memory, access becomes
expensive.
To reduce number of disk accesses to index records, treat
primary index kept on disk as a sequential file and construct a
sparse index on it.
outer index: a sparse index of the primary index
inner index: the primary index file
If even outer index is too large to fit in main memory, yet another
level of index can be created, and so on.
Indices at all levels must be updated on insertion or deletion from
the file.
Silberschatz, Korth and Sudarshan 12.13 Database System Concepts
Multilevel Index (Cont.)
Silberschatz, Korth and Sudarshan 12.14 Database System Concepts
A two-level primary non-dense clustered index resembling ISAM
Silberschatz, Korth and Sudarshan 12.15 Database System Concepts
Index Update: Deletion
If deleted record was the only record in the file with its particular
search-key value, the search-key is deleted from the index also.
Single-level index deletion:
Dense indices: deletion of the search key is similar to file record deletion.
Sparse indices: if an entry for the search key exists in the index, it is deleted by replacing the entry in the index with the next search-key value in the file (in search-key order). If the next search-key value already has an index entry, the entry is deleted instead of being replaced.
Silberschatz, Korth and Sudarshan 12.16 Database System Concepts
Index Update: Insertion
Single-level index insertion:
Perform a lookup using the search-key value appearing in the
record to be inserted.
Dense indices: if the search-key value does not appear in the index, insert it.
Sparse indices: if the index stores an entry for each block of the file, no change needs to be made to the index unless a new block is created. In this case, the first search-key value appearing in the new block is inserted into the index.
Multilevel insertion (as well as deletion) algorithms are simple
extensions of the single-level algorithms
Silberschatz, Korth and Sudarshan 12.17 Database System Concepts
Secondary Indices
Frequently, one wants to find all the records whose values in a certain field (which is not the search key of the primary index) satisfy some condition.
Example 1: In the account database stored sequentially
by account number, we may want to find all accounts in a
particular branch
Example 2: as above, but where we want to find all
accounts with a specified balance or range of balances
We can have a secondary index with an index record
for each search-key value; index record points to a
bucket that contains pointers to all the actual records
with that particular search-key value.
Silberschatz, Korth and Sudarshan 12.18 Database System Concepts
Primary and Secondary Indices
Secondary indices have to be dense.
Indices offer substantial benefits when searching for records.
When a file is modified, every index on the file must be updated; updating indices imposes overhead on database modification.
Sequential scan using primary index is efficient, but a sequential
scan using a secondary index is expensive
each record access may fetch a new block from disk since index is
usually un-clustered
Silberschatz, Korth and Sudarshan 12.19 Database System Concepts
Dense unclustered secondary index on the balance field of account (access list)
Silberschatz, Korth and Sudarshan 12.20 Database System Concepts
B+-Tree Index Files
A B+-tree is a rooted tree satisfying the following properties:
  All paths from the root to a leaf are of the same length.
  Each node that is not a root or a leaf has between ⌈n/2⌉ and n children.
  A leaf node has between ⌈(n−1)/2⌉ and n−1 values.
  Special cases:
    If the root is not a leaf, it has at least 2 children.
    If the root is a leaf (that is, there are no other nodes in the tree), it can have between 0 and (n−1) values.
Silberschatz, Korth and Sudarshan 12.21 Database System Concepts
B+-Tree Node Structure
Typical node:  P1 | K1 | P2 | K2 | ... | Pn-1 | Kn-1 | Pn
  Ki are the search-key values.
  Pi are pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes).
The search keys in a node are ordered: K1 < K2 < K3 < ... < Kn-1
Silberschatz, Korth and Sudarshan 12.22 Database System Concepts
Leaf Nodes in B+-Trees
Properties of a leaf node:
  For i = 1, 2, ..., n−1, pointer Pi either points to a file record with search-key value Ki, or to a bucket of pointers to file records, each record having search-key value Ki. The bucket structure is only needed if the search key does not form a primary key.
  If Li, Lj are leaf nodes and i < j, Li's search-key values are less than Lj's search-key values.
  Pn points to the next leaf node in search-key order.
Silberschatz, Korth and Sudarshan 12.23 Database System Concepts
Non-Leaf Nodes in B+-Trees
Non-leaf nodes form a multi-level sparse index on the leaf nodes. For a non-leaf node with m pointers:
  All the search keys in the subtree to which P1 points are less than K1.
  For 2 ≤ i ≤ m−1, all the search keys in the subtree to which Pi points have values greater than or equal to Ki−1 and less than Ki.
  Pm points to the subtree with keys greater than or equal to Km−1 (and less than the first key in the next leaf).
Silberschatz, Korth and Sudarshan 12.24 Database System Concepts
Silberschatz, Korth and Sudarshan 12.25 Database System Concepts
Example of a B+-Tree (index on first key)
B+-tree for account file (n = 3)
Note: the index points to the FIRST key!
Silberschatz, Korth and Sudarshan 12.26 Database System Concepts
Example of a B+-Tree
B+-tree for account file (n = 5)
Leaf nodes must have between 2 and 4 values (⌈(n−1)/2⌉ and n−1, with n = 5).
Non-leaf nodes other than the root must have between 3 and 5 children (⌈n/2⌉ and n, with n = 5).
The root must have at least 2 children.
Silberschatz, Korth and Sudarshan 12.27 Database System Concepts
Observations about B+-Trees
Since the inter-node connections are done by pointers, logically close blocks need not be physically close.
The non-leaf levels of the B+-tree form a hierarchy of sparse indices.
The B+-tree contains a relatively small number of levels (logarithmic in the size of the main file), thus searches can be conducted efficiently.
Insertions and deletions to the main file can be handled efficiently, as the index can be restructured in logarithmic time (as we shall see).
Differences between B+ and B Tree
Differences between B-tree Index and B-tree file
Silberschatz, Korth and Sudarshan 12.28 Database System Concepts
Queries on B+-Trees
Find all records with a search-key value of k.
1. Start with the root node:
   1. Examine the node for the smallest search-key value > k.
   2. If such a value exists, assume it is Kj. Then follow Pj to the child node.
   3. Otherwise k ≥ Km−1, where there are m pointers in the node. Then follow Pm to the child node.
2. If the node reached by following the pointer above is not a leaf node, repeat the above procedure on the node, and follow the corresponding pointer.
3. Eventually reach a leaf node. If for some i, key Ki = k, follow pointer Pi to the desired record or bucket. Else no record with search-key value k exists.
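A sketch of this procedure on a simplified in-memory node layout (hypothetical Node class; a real B+-tree works on disk pages and buckets):

```python
class Node:
    def __init__(self, keys, pointers, is_leaf):
        self.keys = keys          # K1 < K2 < ... (at most n-1 of them)
        self.pointers = pointers  # children for internal nodes; record/bucket refs for leaves
        self.is_leaf = is_leaf

def bplus_find(root, k):
    """Descend from the root following the rules above, then probe the leaf."""
    node = root
    while not node.is_leaf:
        j = next((i for i, key in enumerate(node.keys) if key > k), None)
        if j is not None:
            node = node.pointers[j]               # smallest key > k is Kj: follow Pj
        else:
            node = node.pointers[len(node.keys)]  # k >= K(m-1): follow the last pointer Pm
    for i, key in enumerate(node.keys):           # in the leaf, Pi goes with Ki
        if key == k:
            return node.pointers[i]
    return None                                   # no record with search-key value k
```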
Silberschatz, Korth and Sudarshan 12.29 Database System Concepts
Queries on B+-Trees (Cont.)
In processing a query, a path is traversed in the tree from the root to some leaf node.
If there are K search-key values in the file, the path is no longer than ⌈log_{⌈n/2⌉}(K)⌉.
A node is generally the same size as a disk block, typically 4 kilobytes, and n is typically around 100 (40 bytes per index entry).
With 1 million search-key values and n = 100, at most log50(1,000,000) = 4 nodes are accessed in a lookup.
Contrast this with a balanced binary tree with 1 million search-key values: around 20 nodes are accessed in a lookup.
above difference is significant since every node access
may need a disk I/O, costing around 20 milliseconds!
Silberschatz, Korth and Sudarshan 12.30 Database System Concepts
Updates on B+-Trees: Insertion
Find the leaf node in which the search-key value would appear
If the search-key value is already there in the leaf node, record is
added to file and if necessary a pointer is inserted into the
bucket.
If the search-key value is not there, then add the record to the
main file and create a bucket if necessary. Then:
If there is room in the leaf node, insert (key-value, pointer) pair in the
leaf node
Otherwise, split the node (along with the new (key-value, pointer)
entry) as discussed in the next slide.
Silberschatz, Korth and Sudarshan 12.31 Database System Concepts
Updates on B+-Trees: Insertion (Cont.)
Splitting a node (a sketch follows below):
  take the n (search-key value, pointer) pairs (including the one being inserted) in sorted order. Place the first ⌈n/2⌉ in the original node, and the rest in a new node.
let the new node be p, and let k be the least key value in p. Insert
(k,p) in the parent of the node being split. If the parent is full, split it
and propagate the split further up.
The splitting of nodes proceeds upwards till a node that is not full
is found. In the worst case the root node may be split increasing
the height of the tree by 1.
Result of splitting node containing Brighton and Downtown on
inserting Clearview
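A sketch of just the leaf-split step described above, on plain sorted lists of (key, pointer) pairs (a hypothetical helper; updating the parent and any further upward splits are left out):

```python
import math

def split_leaf(entries, new_entry, n):
    """entries: the n-1 (key, pointer) pairs of a full leaf; new_entry: the pair being inserted.
    Returns (left, right, k) where k, the least key of the new right node,
    is the key to insert (with a pointer to the right node) into the parent."""
    all_pairs = sorted(entries + [new_entry])   # the n pairs, in search-key order
    keep = math.ceil(n / 2)                     # first ceil(n/2) stay in the original node
    left, right = all_pairs[:keep], all_pairs[keep:]
    return left, right, right[0][0]
```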
Silberschatz, Korth and Sudarshan 12.32 Database System Concepts
Updates on B+-Trees: Insertion (Cont.)
B+-tree before and after insertion of "Clearview"
Silberschatz, Korth and Sudarshan 12.33 Database System Concepts
An example of insertion in a B+-tree with p = 3 (index points to LAST key)
Silberschatz, Korth and Sudarshan 12.34 Database System Concepts
Updates on B+-Trees: Deletion
Find the record to be deleted, and remove it from the main file and from the bucket (if present).
Remove (search-key value, pointer) from the leaf node if there is no bucket or if the bucket has become empty.
If the node has too few entries due to the removal, and the entries in the node and a sibling fit into a single node, then:
  Insert all the search-key values in the two nodes into a single node (the one on the left), and delete the other node.
  Delete the pair (Ki−1, Pi), where Pi is the pointer to the deleted node, from its parent, recursively using the above procedure.
Silberschatz, Korth and Sudarshan 12.35 Database System Concepts
Updates on B+-Trees: Deletion
Otherwise, if the node has too few entries due to the removal, and the entries in the node and a sibling do not fit into a single node, then:
  Redistribute the pointers between the node and a sibling such that both have more than the minimum number of entries.
  Update the corresponding search-key value in the parent of the node.
The node deletions may cascade upwards till a node which has ⌈n/2⌉ or more pointers is found. If the root node has only one pointer after deletion, it is deleted and the sole child becomes the root.
Silberschatz, Korth and Sudarshan 12.36 Database System Concepts
Examples of B+-Tree Deletion
The removal of the leaf node containing "Downtown" did not result in its parent having too few pointers. So the cascaded deletions stopped with the deleted leaf node's parent.
Before and after deleting Downtown
Silberschatz, Korth and Sudarshan 12.37 Database System Concepts
Examples of B+-Tree Deletion (Cont.)
The node with "Perryridge" becomes underfull (actually empty, in this special case) and is merged with its sibling.
As a result, the "Perryridge" node's parent became underfull and was merged with its sibling (and an entry was deleted from their parent).
Root node then had only one child, and was deleted and its child became the new
root node
Deletion of Perryridge from result of previous example
Silberschatz, Korth and Sudarshan 12.38 Database System Concepts
Example of B+-Tree Deletion (Cont.)
The parent of the leaf containing "Perryridge" became underfull, and borrowed a pointer from its left sibling.
The search-key value in the parent's parent changes as a result.
Before and after deletion of Perryridge from earlier example
Silberschatz, Korth and Sudarshan 12.39 Database System Concepts
Silberschatz, Korth and Sudarshan 12.40 Database System Concepts
Performance of a B+-Tree
N: number of index records
d: degree of a node (i.e., the number of entries per node is between d and 2d)
h: height of the tree = number of page accesses to get to the leaf
Number of page accesses for a single-key record retrieval: R = h + 1. If the root is in memory: h + 1 − 1 = h.
When the average utilization is p%, the effective fan-out is n = 2d*p/100, and h ≈ log_n(N).
In general: log_{2d}(N) ≤ h ≤ 1 + log_d(N/2)
Silberschatz, Korth and Sudarshan 12.41 Database System Concepts
B+-Tree File Organization
The index file degradation problem is solved by using B+-tree indices. The data file degradation problem is solved by using a B+-tree file organization.
The leaf nodes in a B+-tree file organization store records, instead of pointers.
Since records are larger than pointers, the maximum number of records that can be stored in a leaf node is less than the number of pointers in a nonleaf node.
Leaf nodes are still required to be half full.
Insertion and deletion are handled in the same way as insertion and deletion of entries in a B+-tree index.
The simplest way to implement a sequential file organization: no need for overflows!
Silberschatz, Korth and Sudarshan 12.42 Database System Concepts
B+-Tree File Organization (Cont.)
Good space utilization is important since records use more space than pointers.
To improve space utilization, involve more sibling nodes in redistribution during splits and merges.
Involving 2 siblings in redistribution (to avoid a split/merge where possible) results in each node having at least ⌊2n/3⌋ entries.
Example of B+-tree file organization
Silberschatz, Korth and Sudarshan 12.43 Database System Concepts
B-Tree Index Files
Similar to B+-tree, but B-tree allows search-key values to
appear only once; eliminates redundant storage of search
keys.
Search keys in nonleaf nodes appear nowhere else in the B-
tree; an additional pointer field for each search key in a
nonleaf node must be included.
Generalized B-tree leaf node


Nonleaf-node pointers Bi are the bucket or file-record pointers.

Silberschatz, Korth and Sudarshan 12.44 Database System Concepts
B-Tree Index File Example
B-tree (above) and B+-tree (below) on same data
Silberschatz, Korth and Sudarshan 12.45 Database System Concepts
B-Tree Index Files (Cont.)
Advantages of B-tree indices:
  May use fewer tree nodes than a corresponding B+-tree.
  Sometimes possible to find a search-key value before reaching a leaf node.
Disadvantages of B-tree indices:
  Only a small fraction of all search-key values are found early.
  Non-leaf nodes are larger, so fan-out is reduced (because of the extra pointer!). Thus B-trees typically have greater depth than the corresponding B+-tree.
  Insertion and deletion are more complicated than in B+-trees.
  Implementation is harder than for B+-trees.
  Reading in sorted order is hard.
Typically, the advantages of B-trees do not outweigh the disadvantages.
Silberschatz, Korth and Sudarshan 12.46 Database System Concepts
B+-Tree Example Computation
Assume a file with 10**5 records, a key is 8 bytes, a pointer is 4 bytes, and a block (page) is 1K. Compute the time to access a single record via the index, and to access a range of 1000 records.
Number of entries in a node: 1024/12 = 85; let's round to 80, meaning 2d = 80, d = 40 (we don't distinguish between the number of pointers and the number of keys (−1) with large numbers).
With 3 levels we can store at least 2*40**2 keys and at most 80**3 keys, so we need h = 3. On average we have about 47 keys per node (let's round to 50).
Assuming the root is in memory, we need 2 + 1 page accesses to get to a record.
To access 1000 records we need 2 accesses to get to the first leaf + 20 leaves. Then, depending on whether the file is clustered on this index or not:
  Clustered: 22 + the number of blocks needed to store 1000 records (need to compute)
  Unclustered: 1022
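The same back-of-the-envelope arithmetic, written out in code; the sizes, the rounding to a fan-out of 80 and to 50 keys per node, and the 1000-record range are the slide's assumptions:

```python
import math

N = 10**5                      # records (one index entry each)
block, key, ptr = 1024, 8, 4   # bytes

entries_per_node = block // (key + ptr)        # 85, rounded down to ~80 on the slide
fanout = 80                                    # 2d = 80, so d = 40
h = math.ceil(math.log(N, fanout))             # smallest h with fanout**h >= N  -> 3
single_record_io = (h - 1) + 1                 # root in memory: 2 index pages + 1 data page
leaves_for_range = math.ceil(1000 / 50)        # ~50 keys per leaf on average -> 20 leaves
unclustered_range_io = 2 + leaves_for_range + 1000   # one block fetch per record
print(h, single_record_io, unclustered_range_io)     # 3 3 1022
```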
Silberschatz, Korth and Sudarshan 12.47 Database System Concepts
B+-Trees: Implementation Issues

Fixed vs. variable size keys.
Internal search (sequential, binary, jump).
Key compression.
Secondary index implementation (Access list vs. composite
keys).
Bulk loading.
Silberschatz, Korth and Sudarshan 12.48 Database System Concepts
B+-Tree Advantages
Both direct and sequential access, including range queries.
Always balanced.
2-3 page accesses even for very large files.
At least 50% storage utilization, and on average about 70%.
Dynamic: no need for reorganization.
Silberschatz, Korth and Sudarshan 12.49 Database System Concepts
Memory based Hashing

1) Hash function maps key space to a smaller address space
address = H(Key)

2) Collision resolution required to handle two keys hashed into
the same address
Silberschatz, Korth and Sudarshan 12.50 Database System Concepts
Memory based Hashing Collision
Resolution techniques
1) Linear overflow or open addressing: find the first empty slot and insert.

2) Independent or chained overflow: create overflow chains in overflow areas.
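Minimal sketches of both techniques for an in-memory table of m slots and integer keys (illustrative only; a real implementation also needs lookup, deletion, and resizing):

```python
def insert_linear(table, key, h):
    """Open addressing: probe h(key), h(key)+1, ... until an empty slot is found."""
    m = len(table)
    slot = h(key) % m
    while table[slot] is not None:   # assumes the table is not completely full
        slot = (slot + 1) % m
    table[slot] = key

def insert_chained(table, key, h):
    """Chained overflow: each slot holds the list (chain) of all keys hashed to it."""
    table[h(key) % len(table)].append(key)

m = 7
linear_table = [None] * m
chained_table = [[] for _ in range(m)]
for k in (10, 17, 24):               # all three collide on slot 3 when h(key) = key
    insert_linear(linear_table, k, lambda x: x)
    insert_chained(chained_table, k, lambda x: x)
```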
Silberschatz, Korth and Sudarshan 12.51 Database System Concepts
Silberschatz, Korth and Sudarshan 12.52 Database System Concepts
Performance of Memory-based Hashing
n: number of records
m: number of slots
R: time to fetch a record
P: mean number of overflow records
n/m = loading factor
Chained overflow: P = n/(2m)   (≈ 0.33 for m = 1.5n)
Silberschatz, Korth and Sudarshan 12.53 Database System Concepts
Linear overflow: P = n/(2(m − n))   (≈ 1 for m = 1.5n)

Clearly, when n approaches m the performance of linear overflow is much worse!
For loading factor 0.9 (n = 0.9m), P ≈ 5 (an average of 5 overflow records per hash!)
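A quick check of the two estimates; the exact forms P = n/(2m) and P = n/(2(m − n)) are reconstructed from the numbers on the slide, so treat them as an assumption:

```python
def chained_overflow(n, m):
    return n / (2 * m)            # roughly half the loading factor

def linear_overflow(n, m):
    return n / (2 * (m - n))      # blows up as n approaches m

n = 1000
print(round(chained_overflow(n, int(1.5 * n)), 2))   # 0.33
print(round(linear_overflow(n, int(1.5 * n)), 2))    # 1.0
print(round(linear_overflow(900, 1000), 1))          # 4.5, i.e. about 5 overflow records
```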
Silberschatz, Korth and Sudarshan 12.54 Database System Concepts
Disk based Static Hashing
A bucket is a unit of storage containing one or more records (a
bucket is typically a disk block).
In a hash file organization we obtain the bucket of a record
directly from its search-key value using a hash function.
Hash function h is a function from the set of all search-key
values K to the set of all bucket addresses B.
Hash function is used to locate records for access, insertion as
well as deletion.
Records with different search-key values may be mapped to
the same bucket; thus entire bucket has to be searched
sequentially to locate a record.
Silberschatz, Korth and Sudarshan 12.55 Database System Concepts
Disk based Hashing - Hash Functions
Worst hash function maps all search-key values to the same
bucket; this makes access time proportional to the number of
search-key values in the file.
An ideal hash function is uniform, i.e., each bucket is assigned
the same number of search-key values from the set of all
possible values.
Ideal hash function is random, so each bucket will have the
same number of records assigned to it irrespective of the actual
distribution of search-key values in the file.
Typical hash functions perform computation on the internal
binary representation of the search-key.
For example, for a string search-key, the binary representations of
all the characters in the string could be added and the sum modulo
the number of buckets could be returned.
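A sketch of such a hash function for string search keys (character codes summed modulo the number of buckets; note that the toy example on the next slide instead gives the i-th letter the value i, so its bucket numbers differ):

```python
def h(search_key: str, num_buckets: int) -> int:
    """Add the character codes of the search key and take the sum modulo the bucket count."""
    return sum(ord(c) for c in search_key) % num_buckets

# Records whose branch-name hashes to the same value land in the same bucket.
for name in ("Perryridge", "Round Hill", "Brighton"):
    print(name, "->", h(name, 10))
```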
Silberschatz, Korth and Sudarshan 12.56 Database System Concepts
Silberschatz, Korth and Sudarshan 12.57 Database System Concepts
Example of Hash File Organization (Cont.)
There are 10 buckets,
The binary representation of the i-th character is assumed to be the integer i.
The hash function returns the sum of the binary
representations of the characters modulo 10
E.g. h(Perryridge) = 5 h(Round Hill) = 3 h(Brighton) = 3

Hash file organization of account file, using branch-name as key
(See figure in next slide.)
Silberschatz, Korth and Sudarshan 12.58 Database System Concepts
Example of Hash File Organization
Hash file organization of account file, using branch-name as key
(see previous slide for details).
Silberschatz, Korth and Sudarshan 12.59 Database System Concepts
Handling of Bucket Overflows
Bucket overflow can occur because of
Insufficient buckets
Skew in distribution of records. This can occur due to two
reasons:
multiple records have same search-key value
chosen hash function produces non-uniform distribution of key
values
Although the probability of bucket overflow can be reduced, it
cannot be eliminated; it is handled by using overflow buckets.
Silberschatz, Korth and Sudarshan 12.60 Database System Concepts
Handling of Bucket Overflows (Cont.)
Overflow chaining the overflow buckets of a given bucket are
chained together in a linked list.
Above scheme is called closed hashing.
An alternative, called open hashing, which does not use overflow
buckets, is not suitable for database applications.

Silberschatz, Korth and Sudarshan 12.61 Database System Concepts
Silberschatz, Korth and Sudarshan 12.62 Database System Concepts
Performance of disk-based Hashing
Searching within a bucket
does not cost I/O!
Silberschatz, Korth and Sudarshan 12.63 Database System Concepts
Hash Indices
Hashing can be used not only for file organization, but also for index-
structure creation.
A hash index organizes the search keys, with their associated record
pointers, into a hash file structure.
Strictly speaking, hash indices are always secondary indices
if the file itself is organized using hashing, a separate primary hash index on
it using the same search-key is unnecessary.
However, we use the term hash index to refer to both secondary index
structures and hash organized files.

I don't agree! You may have a hash index as primary while the file is clustered on another key or is a heap file.
Note the difference between a hash index and a hash file: whether the buckets contain pointers to the data records or the records themselves! (The same as the difference between a B+-tree index and a B+-tree file!)

Silberschatz, Korth and Sudarshan 12.64 Database System Concepts
Example of Hash Index
Silberschatz, Korth and Sudarshan 12.65 Database System Concepts
Deficiencies of Static Hashing
In static hashing, function h maps search-key values to a fixed set B of bucket addresses.
Databases grow with time. If the initial number of buckets is too small, performance will degrade due to too many overflows.
If file size at some point in the future is anticipated and number of
buckets allocated accordingly, significant amount of space will be
wasted initially.
If database shrinks, again space will be wasted.
One option is periodic re-organization of the file with a new hash
function, but it is very expensive.
These problems can be avoided by using techniques that allow
the number of buckets to be modified dynamically.
Silberschatz, Korth and Sudarshan 12.66 Database System Concepts
Summary of Static Hashing
Advantages:
Simple.
Very fast (1-2 accesses).

Disadvantages:
Direct access only (no ordered access!).
Non dynamic, needs Reorganization.
Silberschatz, Korth and Sudarshan 12.67 Database System Concepts
Dynamic Hashing
Good for database that grows and shrinks in size
Allows the hash function to be modified dynamically
Extendable hashing one form of dynamic hashing
Hash function generates values over a large range: typically b-bit integers, with b = 32.
At any time, use only a prefix of the hash value to index into a table of bucket addresses.
Let the length of the prefix be i bits, 0 ≤ i ≤ 32.
Bucket address table size = 2^i. Initially i = 0.
The value of i grows and shrinks as the size of the database grows and shrinks.
Multiple entries in the bucket address table may point to the same bucket. Thus, the actual number of buckets is ≤ 2^i.
The number of buckets also changes dynamically due to coalescing and splitting of buckets.
Silberschatz, Korth and Sudarshan 12.68 Database System Concepts
General Extendable Hash Structure
i is called the global depth; i_j is called the local depth of bucket j.
In this structure, i_2 = i_3 = i, whereas i_1 = i − 1 (see the next slide for details).
Silberschatz, Korth and Sudarshan 12.69 Database System Concepts
Use of Extendable Hash Structure
Each bucket j stores a value i_j; all the entries that point to the same bucket have the same values on the first i_j bits.
To locate the bucket containing search-key K_j:
  1. Compute h(K_j) = X.
  2. Use the first i high-order bits of X as a displacement into the bucket address table, and follow the pointer to the appropriate bucket.
To insert a record with search-key value K_j:
  follow the same procedure as look-up and locate the bucket, say j.
  If there is room in bucket j, insert the record in the bucket.
  Else the bucket must be split and the insertion re-attempted (next slide).
    Overflow buckets are used instead in some cases (will see shortly).
Silberschatz, Korth and Sudarshan 12.70 Database System Concepts
Updates in Extendable Hash Structure
To split a bucket j when inserting a record with search-key value K_j:
If i > i_j (more than one pointer to bucket j):
  allocate a new bucket z, and set i_j and i_z to the old i_j + 1.
  make the second half of the bucket address table entries pointing to j point to z.
  remove and reinsert each record in bucket j.
  recompute the new bucket for K_j and insert the record in that bucket (further splitting is required if the bucket is still full).
If i = i_j (only one pointer to bucket j):
  increment i and double the size of the bucket address table.
  replace each entry in the table by two entries that point to the same bucket.
  recompute the new bucket address table entry for K_j. Now i > i_j, so use the first case above.
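A compact sketch of the lookup and the two split cases above, using an in-memory directory of bucket references. Unlike the text it indexes with the low-order bits of the hash value (as in the worked example a couple of slides below); the Bucket class and its capacity are hypothetical, and overflow buckets for many equal hash values are omitted:

```python
class Bucket:
    def __init__(self, local_depth, capacity=2):
        self.local_depth = local_depth
        self.capacity = capacity
        self.items = []                          # (hash_value, record) pairs

class ExtendableHash:
    def __init__(self):
        self.global_depth = 0
        self.directory = [Bucket(0)]             # 2**global_depth entries

    def _slot(self, h):
        return h & ((1 << self.global_depth) - 1)   # low-order global_depth bits

    def insert(self, h, record):
        bucket = self.directory[self._slot(h)]
        if len(bucket.items) < bucket.capacity:
            bucket.items.append((h, record))
            return
        if bucket.local_depth == self.global_depth:
            # only one directory entry points to this bucket: double the directory first
            self.global_depth += 1
            self.directory = self.directory * 2
        # split the full bucket and repoint half of the directory entries that referenced it
        bucket.local_depth += 1
        new_bucket = Bucket(bucket.local_depth, bucket.capacity)
        bit = 1 << (bucket.local_depth - 1)
        pending, bucket.items = bucket.items + [(h, record)], []
        for i, b in enumerate(self.directory):
            if b is bucket and i & bit:
                self.directory[i] = new_bucket
        for hv, rec in pending:                  # reinsert; may split again if still full
            self.insert(hv, rec)
```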
Silberschatz, Korth and Sudarshan 12.71 Database System Concepts
Updates in Extendable Hash Structure
(Cont.)
When inserting a value, if the bucket is full after several splits
(that is, i reaches some limit b) create an overflow bucket instead
of splitting bucket entry table further.
To delete a key value,
locate it in its bucket and remove it.
The bucket itself can be removed if it becomes empty (with
appropriate updates to the bucket address table).
Coalescing of buckets can be done (a bucket can coalesce only with a "buddy" bucket having the same value of i_j and the same i_j − 1 prefix, if it is present).
Decreasing bucket address table size is also possible
Note: decreasing bucket address table size is an expensive
operation and should be done only if number of buckets becomes
much smaller than the size of the table
Silberschatz, Korth and Sudarshan 12.72 Database System Concepts
Example
The directory is an array of size 4.
To find the bucket for r, take the last 'global depth' # bits of h(r); we denote r by h(r).
  If h(r) = 5 = binary 101, it is in the bucket pointed to by 01.
Insert: if the bucket is full, split it (allocate a new page, re-distribute).
If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing the global depth with the local depth of the split bucket.)
[Figure: global depth 2, directory entries 00, 01, 10, 11 pointing to data-page buckets A-D, each with local depth 2. Bucket A: 4*, 12*, 32*, 16*; Bucket B: 1*, 5*, 21*, 13*; Bucket C: 10*; Bucket D: 15*, 7*, 19*.]
Silberschatz, Korth and Sudarshan 12.73 Database System Concepts
Insert h(r)=20 in the previous slide (causes doubling)
[Figure: inserting 20* into full bucket A forces a split. Since the local depth of A equaled the global depth, the directory doubles to 8 entries (000-111, global depth 3). Bucket A now holds 32*, 16* and its 'split image' bucket A2 holds 4*, 12*, 20*, both with local depth 3; buckets B (1*, 5*, 21*, 13*), C (10*) and D (15*, 7*, 19*) keep local depth 2.]
Silberschatz, Korth and Sudarshan 12.74 Database System Concepts
Use of Extendable Hash Structure:
Example
Initial Hash structure, bucket size = 2
Silberschatz, Korth and Sudarshan 12.75 Database System Concepts
Example (Cont.)
Hash structure after insertion of one Brighton and two
Downtown records
Silberschatz, Korth and Sudarshan 12.76 Database System Concepts
Example (Cont.)
Hash structure after insertion of Mianus record
Silberschatz, Korth and Sudarshan 12.77 Database System Concepts
Example (Cont.)
Hash structure after insertion of three Perryridge records
Using OVERFLOW BUCKET!
Silberschatz, Korth and Sudarshan 12.78 Database System Concepts
Example (Cont.)
Hash structure after insertion of Redwood and Round Hill
records
Silberschatz, Korth and Sudarshan 12.79 Database System Concepts
Performance of Extendable Hashing
Assume a file with 10**7 records, record size is 200 bytes, key
size is 12 bytes, pointer size is 8 bytes.
Assume Data block is 4K and blocks are in average half full
Assume Index blocks are also 4K and are in average half full
Therefore an Index block contain about 100 entries and a data
block contains about 10 records.
With a hash-index organization, the number of entries in the directory is 10**7 / 100 = 10**5; therefore the address size is 17 bits and the directory size is 2**17 * 4 = 512K (or 2**17 * 8 = 1M with 8-byte pointers). Access time is 2 (1 for the index + 1 for the data file).
With Hash file organization, the number of entries for the
directory is 10**7/10 = 10**6, therefore size of address is 20 bits
and size of directory is 2**20*4 = 4M. Access time is only 1!

Silberschatz, Korth and Sudarshan 12.80 Database System Concepts
Extendable Hashing vs. Other Schemes
Benefits of extendable hashing:
Hash performance does not degrade with growth of file
Minimal space overhead
Disadvantages of extendable hashing
Extra level of indirection to find desired record
Bucket address table may itself become very big (larger than
memory)
Need a tree structure to locate desired record in the structure!
Changing size of bucket address table is an expensive operation
Linear hashing is an alternative mechanism which avoids these
disadvantages at the possible cost of more bucket overflows
Silberschatz, Korth and Sudarshan 12.81 Database System Concepts
Linear Hashing
This is another dynamic hashing scheme, an alternative to Extendible Hashing.
LH handles the problem of long overflow chains without using a directory, and handles duplicates.
Idea: use a family of hash functions h0, h1, h2, ...
  h_i(key) = h(key) mod (2^i * N); N = initial # buckets
  h is some hash function (its range is not 0 to N−1 but a very large range, e.g. 0 .. 2^32)
  If N = 2^d0, for some d0, h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
  h_{i+1} doubles the range of h_i (similar to directory doubling).
Silberschatz, Korth and Sudarshan 12.82 Database System Concepts
Linear Hashing (Contd.)
Directory avoided in LH by using overflow pages, and choosing the bucket to split round-robin.
Splitting proceeds in 'rounds'. A round ends when all N_R initial (for round R) buckets are split. Buckets 0 to Next−1 have been split; buckets Next to N_R are yet to be split.
The current round number is Level.
Search: to find the bucket for data entry r, compute h_Level(r):
  If h_Level(r) is in the range Next to N_R, r belongs here.
  Else, r could belong to bucket h_Level(r) or to bucket h_Level(r) + N_R; must apply h_{Level+1}(r) to find out.
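A sketch of just this bucket-selection rule (n0 is the number of buckets the file started with; all names are illustrative):

```python
def lh_bucket(h, key, level, next_to_split, n0):
    """Return the bucket number in which a data entry with this key belongs."""
    n_r = n0 * (2 ** level)         # N_R: number of buckets at the start of round `level`
    b = h(key) % n_r                # h_Level(key)
    if b >= next_to_split:          # bucket not yet split in this round: entry belongs here
        return b
    return h(key) % (2 * n_r)       # already split: h_{Level+1} picks b or b + N_R
```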
Silberschatz, Korth and Sudarshan 12.83 Database System Concepts
Overview of LH File
In the middle of a round:
  Buckets that existed at the beginning of this round: this is the range of h_Level(search-key value).
  Next: the bucket to be split next.
  Buckets split in this round: if h_Level(search-key value) is in this range, must use h_{Level+1}(search-key value) to decide if the entry is in a 'split image' bucket.
  'Split image' buckets: created (through splitting of other buckets) in this round.
Silberschatz, Korth and Sudarshan 12.84 Database System Concepts
Linear Hashing (Contd.)
Insert: find the bucket by applying h_Level / h_{Level+1}:
  If the bucket to insert into is full:
    Add an overflow page and insert the data entry.
    (Maybe) split the Next bucket and increment Next.
Can choose any criterion to 'trigger' a split, usually (but not necessarily) when an overflow occurs.
Since buckets are split round-robin, long overflow chains don't develop!
Doubling of the directory in Extendible Hashing is similar; switching of hash functions is implicit in how the number of bits examined is increased.
Silberschatz, Korth and Sudarshan 12.85 Database System Concepts
Example of Linear Hashing
On a split, h_{Level+1} is used to re-distribute entries.
[Figure: a linear hashed file with Level=0 and N=4 primary pages (h0 uses the last 2 bits, h1 the last 3). Before the insert, Next=0 and the buckets hold 32*, 44*, 36* | 9*, 25*, 5* | 14*, 18*, 10*, 30* | 31*, 35*, 11*, 7*. Inserting an entry r with h(r)=43 lands in full bucket 11, so an overflow page holding 43* is added and the Next bucket (00) is split using h1: 32* stays in bucket 000 while 44*, 36* move to the new bucket 100; Next becomes 1.]
Silberschatz, Korth and Sudarshan 12.86 Database System Concepts
Example: End of a Round
[Figure: the same file later in round 0. Left: just before the end of the round (Level=0, Next=3), with an overflow page on bucket 11. Right: inserting 50* splits the last original bucket, the round ends, and the file continues with Level=1, Next=0 and 8 primary pages addressed by h1. Page size = 2K.]
Silberschatz, Korth and Sudarshan 12.87 Database System Concepts
Comparison of Ordered Indexing and Hashing
Cost of periodic re-organization
Relative frequency of insertions and deletions
Is it desirable to optimize average access time at the expense of
worst-case access time?
Expected type of queries:
Hashing is generally better at retrieving records having a specified
value of the key.
If range queries are common, ordered indices are to be preferred
Silberschatz, Korth and Sudarshan 12.88 Database System Concepts
Index Definition in SQL
Create an index
create index <index-name> on <relation-name> (<attribute-list>)
E.g.: create index b-index on branch(branch-name)
Use create unique index to indirectly specify and enforce the condition that the search key is a candidate key.
Not really required if SQL unique integrity constraint is supported
To drop an index
drop index <index-name>
Silberschatz, Korth and Sudarshan 12.89 Database System Concepts
Multiple-Key Access
Use multiple indices for certain types of queries.
Example:
  select account-number
  from account
  where branch-name = 'Perryridge' and balance = 1000
Possible strategies for processing the query using indices on single attributes:
1. Use the index on branch-name to find all accounts of the Perryridge branch; test balance = 1000.
2. Use the index on balance to find accounts with balances of $1000; test branch-name = 'Perryridge'.
3. Use the branch-name index to find pointers to all records pertaining to the Perryridge branch. Similarly use the index on balance. Take the intersection of both sets of pointers obtained.
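A sketch of strategy 3, with each secondary index reduced to a dictionary from key value to the set of record identifiers (RIDs) in its leaf bucket (illustrative data):

```python
def rids_matching_both(branch_index, balance_index, branch, balance):
    """Intersect the pointer sets obtained from the two single-attribute indices."""
    branch_rids = branch_index.get(branch, set())
    balance_rids = balance_index.get(balance, set())
    return branch_rids & balance_rids     # fetch only records satisfying both predicates

branch_index = {"Perryridge": {1, 4, 7}, "Brighton": {2}}
balance_index = {1000: {2, 4, 9}}
print(rids_matching_both(branch_index, balance_index, "Perryridge", 1000))   # {4}
```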
Silberschatz, Korth and Sudarshan 12.90 Database System Concepts
Indices on Multiple Attributes
Suppose we have an index on the combined search-key (branch-name, balance).
With the where clause
  where branch-name = 'Perryridge' and balance = 1000
the index on the combined search-key will fetch only records that satisfy both conditions. Using separate indices is less efficient: we may fetch many records (or pointers) that satisfy only one of the conditions.
Can also efficiently handle
  where branch-name = 'Perryridge' and balance < 1000
But cannot efficiently handle
  where branch-name < 'Perryridge' and balance = 1000
  (may fetch many records that satisfy the first but not the second condition).
Silberschatz, Korth and Sudarshan 12.91 Database System Concepts
Indices on Multiple Attributes
Example: find all objects within the following boundaries:
0 <= X <= 10
0 <= Y <= 20
0 <= Z <= 30
Using an Index?

Solution: Multi-dimensional indexes e.g. R-trees
How do we handle the symmetric case, i.e., several attributes with ranges?
Silberschatz, Korth and Sudarshan 12.92 Database System Concepts
Why Sort?
A classic problem in computer science!
Data requested in sorted order
e.g., find students in increasing gpa order
Sorting is first step in bulk loading B+ tree index.
Sorting useful for eliminating duplicate copies in a collection of records
(Why?)
Sort-merge join algorithm involves sorting.
Problem: sort 1GB of data with 1MB of RAM.
  Why not use virtual memory?
Silberschatz, Korth and Sudarshan 12.93 Database System Concepts
2-Way Sort: Requires 3 Buffers
Pass 1: Read a page, sort it, write it.
  only one buffer page is used
Pass 2, 3, ..., etc.:
  three buffer pages used.
[Figure: two input buffers and one output buffer in main memory; runs are read from disk, merged, and written back to disk.]
Silberschatz, Korth and Sudarshan 12.94 Database System Concepts
Two-Way External Merge Sort
Each pass we read and write every page in the file.
With N pages in the file, the number of passes is ⌈log2 N⌉ + 1.
So the total cost is: 2N (⌈log2 N⌉ + 1).
Idea: divide and conquer: sort subfiles and merge.
[Figure: sorting a 7-page file (3,4 | 6,2 | 9,4 | 8,7 | 5,6 | 3,1 | 2). Pass 0 produces sorted 1-page runs; Pass 1 merges them into 2-page runs; Pass 2 into 4-page runs; Pass 3 produces the single sorted run 1,2,2,3,3,4,4,5,6,6,7,8,9.]
Silberschatz, Korth and Sudarshan 12.95 Database System Concepts
General External Merge Sort
More than 3 buffer pages: how can we utilize them?
To sort a file with N pages using B buffer pages:
  Pass 0: use B buffer pages. Produce ⌈N/B⌉ sorted runs of B pages each.
  Pass 1, 2, ..., etc.: merge B−1 runs at a time.
[Figure: B main-memory buffers, B−1 used as input buffers and 1 as the output buffer, streaming runs between disk and memory.]
Silberschatz, Korth and Sudarshan 12.96 Database System Concepts
Cost of External Merge Sort
Number of passes: ⌈log_{B−1}(⌈N/B⌉)⌉ + 1
Cost = 2N * (# of passes)
E.g., with 5 buffer pages, to sort a 108-page file:
  Pass 0: ⌈108/5⌉ = 22 sorted runs of 5 pages each (last run is only 3 pages)
  Pass 1: ⌈22/4⌉ = 6 sorted runs of 20 pages each (last run is only 8 pages)
  Pass 2: 2 sorted runs, 80 pages and 28 pages
  Pass 3: sorted file of 108 pages
For example, when the final output write is not counted: 0 merge passes (sort fits in main memory) = 1N; 1 merge pass = 3N; 2 merge passes = 5N.

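The pass-count and cost formulas above, checked against the 108-page / 5-buffer example (a sketch that counts page I/Os; a real implementation would also guard against float rounding on exact powers):

```python
import math

def num_passes(n_pages, b_buffers):
    runs = math.ceil(n_pages / b_buffers)         # pass 0 produces ceil(N/B) sorted runs
    if runs <= 1:
        return 1
    return 1 + math.ceil(math.log(runs, b_buffers - 1))   # plus the (B-1)-way merge passes

def total_io(n_pages, b_buffers):
    return 2 * n_pages * num_passes(n_pages, b_buffers)   # read + write every page, each pass

print(num_passes(108, 5), total_io(108, 5))       # 4 passes, 864 page I/Os
```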