
Instructor's Manual

File Structures
An Object-Oriented Approach with C++
Chapters 9-12

Michael J. Folk
University of Illinois

Bill Zoellick
CAP Ventures

Greg Riccardi
Florida State University

Chapter 9
Multilevel Indexing and B-Trees
This chapter has been substantially modified from the 2nd edition of Folk and Zoellick. B-trees are presented as a multi-level indexing strategy, rather than as a tree-based indexing strategy.

The evolution of tree indexes


The second and third sections of this chapter present two precursors to the B-tree data structure. This material is not crucial to the understanding of B-trees, but may help students to understand how B-trees succeed where other structures do not.

How B-trees work


Based on the experiences we have had teaching students about B-trees, we feel that there are many different perspectives from which one can understand how B-trees work. The first involves understanding why B-trees succeed in reducing the worst-case search behavior: it is because they are broad and all leaves are on the same level. The presentation of the B-tree as an index of an existing file illustrates this without requiring an understanding of the insertion method. I suggest that you reference figure 9.15e as an example of a B-tree.

The second involves understanding how insertion and deletion operations operate on B-trees in order to guarantee that they remain balanced and grow in breadth rather than in height when possible. We can also see that the cost of insertion and deletion is limited by the height.

A third perspective involves understanding how the B-tree data structure and B-tree operations can be implemented in a programming language. This way of viewing B-trees is just as important as the other two because it forces the student to look carefully at what is really going on when operations are performed on B-trees. The text mixes these three perspectives on B-trees with the understanding that the reader will not only practice inserting and deleting items from a B-tree, but will also study an object-oriented presentation of insertion and deletion, and in so doing will become very comfortable with them.

Object-oriented B-trees
This chapter provides an opportunity for each student to assess his or her understanding of O-O methods. The presentation of the B-tree classes is brief (a total of 10 pages in Appendix I), and depends heavily on the IOBuffer and Index classes. All of the OO aspects of C++ are included: type hierarchies, virtual functions, interdependence of classes, template classes, etc. By this point in the book, students should be able to understand the meaning of the classes and how to use them.

It is highly appropriate to have the students implement a BtreeIndexedFile class, or to build a B-tree index for one of the classes from previous chapters.

Performance issues
Performance issues are covered in two ways. First, B-tree performance can be compared with that of other data structures and seen in most cases to be very good. Second, we can look at how B-tree performance can be improved by altering some of the ways we implement the basic data structure, and by using some imagination in processing B-trees. This second approach to improving performance gets a lot of attention in the later sections of the chapter. This is done in order to stress the very important principle that no data structure has to be implemented exactly according to its textbook definition. The textbook definition should be viewed as a starting point only.

Other index structures


We concur with the statement by Comer [1979] that the B-tree has become the de facto standard for implementing large indexes on secondary storage. There are other known structures that use similar strategies to keep the maximum number of accesses low (extendible hashing, for instance), but as far as we know none has proved to be competitive with B-trees for general use. Despite the B-tree's domination, it is important to stress that other index structures can, in certain circumstances, be superior. Small indexes, indexes that reside permanently in memory, and indexes that never change, often do not have performance demands that merit the overhead that comes with B-trees.

Answers to exercises: Chapter 9


Exercise 1.
a) The second. When an item is inserted into or deleted from a binary search tree, only local changes have to be made to the data structure. But similar changes to a simple index (i.e. one stored as an array) usually require rearranging a substantial part of the index.

b) If a search tree is balanced, the maximum search length is minimized. Otherwise some branches of the tree can become very long, making searches for certain items very slow.

c) An AVL tree can be kept balanced with a smaller amount of overhead.

d) Not paged:        ceiling[log2(1,000,000 + 1)] = 20
   15 keys per page:  ceiling[log16(1,000,000 + 1)] = 5
   511 keys per page: ceiling[log512(1,000,000 + 1)] = 3

e) Why is it difficult to create a tree-balancing algorithm that has only local effects? When changes are made in an AVL tree, a small number of pointer reassignments occur. These involve pointers to nodes, not to pages. If the reassignment of a pointer also involves a second, different page from the original one, it could involve moving more than just the single item pointed to on that page. It could involve moving all of the items on that page to another page, and rearranging that other page accordingly. And even then, what happens if the other page doesn't have enough space to accommodate the items that are moved to it? When page size increases, why does it become difficult to guarantee a minimum number of keys per page? Suppose a node is to be added to the bottom.

f) When a node of a B-tree overflows, it is split into two nodes and one key is promoted up to a higher level node. If the node is a root (there is no higher level node), then a new root is created and the tree grows up one level. When an item is inserted into a binary tree, a branch is added to a leaf of the tree and the item is inserted at the end of the branch, one level lower than the leaf.

g) The key to the problem has to do with paging. When data is to be pulled off secondary storage, paging can improve performance tremendously. When a search tree is kept in memory, there is no advantage to be gained from paging, so the disadvantages that go with paged binary trees are not incurred. As long as the tree can be kept reasonably well balanced (e.g. as an unpaged AVL tree), then it is as easy to search as the equivalent paged, multiway tree.
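For instructors who want to run the arithmetic in part (d), here is a minimal C++ sketch (the page sizes come from the answer above; the function name is ours):

    #include <cmath>
    #include <iostream>

    // A page holding k keys has k + 1 descendants, so the worst-case search
    // depth for N keys in a balanced, paged tree is ceiling(log_(k+1)(N + 1)).
    int WorstCaseDepth(long N, int keysPerPage) {
        return static_cast<int>(std::ceil(std::log(N + 1.0) / std::log(keysPerPage + 1.0)));
    }

    int main() {
        std::cout << WorstCaseDepth(1000000, 1)   << "\n";  // not paged:         20
        std::cout << WorstCaseDepth(1000000, 15)  << "\n";  // 15 keys per page:   5
        std::cout << WorstCaseDepth(1000000, 511) << "\n";  // 511 keys per page:  3
    }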


Exercise 2.
a) root:     J
   leaves:   C G | X

b) root:     E J U
   leaves:   A B C | G H I | N O S | X

c) root:     J
   level 2:  E H | U
   leaves:   A B C | F G | I | N O S | X

d) root:     J U
   level 2:  E H | O S | X
   leaves:   A B C | F G | I | L N | Q R | T | V W | Z

Exercise 3.
a) m = 256.

b) ceiling(m/2) = 128.

c) 0 = minimum number of descendants from the root. When the root is the only node in the tree, it is also a leaf and has no descendants.

d) 0 = minimum number of descendants from a leaf. A leaf has no descendants that are B-tree nodes. However, leaves may have pointers to data records, and there could be 256 of these.

e) 199 keys are on a nonleaf page with 200 descendants.

f) From the derivation on pages 365-366:
      d <= 1 + log_(ceiling(m/2))((N + 1)/2)
      d <= 1 + log_128(100,000/2)
      d <= 1 + 2.23
   That is, the maximum depth of the tree is 3.

Exercise 4.
The best case, or minimum depth, occurs when every node is completely full.

   Level                  Maximum number of descendants
   1 (root)               m
   2                      m x m, or m^2
   3                      m^3
   4                      m^4
   .                      .
   .                      .
   d                      m^d

We know that a tree with N keys has N descendants from its leaf level. Let's call the depth of the tree at the leaf level d. We can express the relationship between the N descendants and the maximum number of descendants from a tree of height d as

   N <= m^d

since we know the number of descendants from any tree cannot be more than the number for a best-case tree of that depth. Solving for d, we arrive at the expression

   d >= log_m(N).

This expression gives us a lower bound for the depth of a B-tree with N keys. For the tree described in the preceding question, d >= log_256(100,000) = 2.07. So, alas, the minimum depth of the tree is 3.
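A minimal sketch that evaluates both bounds (the lower bound derived here and the upper bound from Exercise 3), assuming the same m = 256 and N = 100,000:

    #include <cmath>
    #include <iostream>

    int main() {
        double N = 100000;   // number of keys
        double m = 256;      // order of the tree
        // Best case (every node full): d >= log_m(N).
        double minDepth = std::ceil(std::log(N) / std::log(m));
        // Worst case (every node minimally full): d <= 1 + log_ceil(m/2)((N + 1) / 2).
        double maxDepth = std::floor(1.0 + std::log((N + 1.0) / 2.0) / std::log(std::ceil(m / 2.0)));
        std::cout << "minimum depth: " << minDepth << "\n";   // prints 3
        std::cout << "maximum depth: " << maxDepth << "\n";   // prints 3
    }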

Exercise 5.
For simplicity, we have assumed that there is only enough buffer space to hold one node in primary memory. This is of course unlikely, but it lets us focus on the number of nodes that get visited. You might consider using these figures as a starting point in a discussion of ways to improve performance. For part (d) we need to know the number of records in the file as well as the number of keys per page. If we let N stand for the total number of records, and let m stand for the order of the tree, we can include these values in our formula. Recall that the root node is always in memory.
a) Retrieve a record
   Maximum: d. Since the root is always in memory, we have to go d-1 levels to a leaf, then from there one access to get the data record.
   Minimum: d. We always have to seek to the leaf level.

b) Add a record
   Maximum: 3d. If the insertion causes splits that propagate all the way up to the root, we have to make d-1 accesses to get to the leaf level, d accesses to write the changes to the existing nodes, d accesses to create the new nodes for the splits, and one access to add the data record to its file.
   Minimum: d + 1. If the insertion does not cause a split, we go d-1 levels to the leaf where the key is inserted, then one access to rewrite the index node, and one access to write the data record to the data file.

c) Delete a record
   Maximum: 4d - 3. We make d-1 accesses to find the key at the leaf level, then the concatenation propagates to the root, d-1 levels. At each level we read two sibling nodes and rewrite the concatenated result, for a total of 3(d-1) accesses. Finally, 1 access to mark the data record as deleted.
   Minimum: d + 2. No underflow occurs. d-1 accesses to search to the leaf, then 1 access to write the leaf node and 2 to mark the data record as deleted.

d) Retrieve all records in sorted order
   Maximum: N + N/(m/2) = N(1 + 2/m). N accesses to read the data records. In the worst case, index pages will be half full (m/2 keys per page), so N/(m/2) accesses will be required to read all of the index pages.
   Minimum: N + N/(m-1). N accesses to read the data records. In the best case, index pages will be full (m-1 keys per page), so N/(m-1) accesses will be required to read all of the index pages.
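To make these formulas concrete, the following sketch plugs in illustrative values (a height of d = 3 and order m = 256 for N = 100,000 records; these particular numbers are our assumption, not part of the exercise):

    #include <iostream>

    int main() {
        int d = 3, m = 256;   // illustrative height and order
        long N = 100000;      // illustrative number of records
        std::cout << "retrieve: best " << d     << ", worst " << d         << "\n";
        std::cout << "add:      best " << d + 1 << ", worst " << 3 * d     << "\n";
        std::cout << "delete:   best " << d + 2 << ", worst " << 4 * d - 3 << "\n";
        std::cout << "sorted:   best " << N + N / (m - 1)
                  << ", worst " << N + N / (m / 2) << "\n";
    }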

Exercise 6.
Changes in the trees are shown below.

a) N is deleted from the leaf node that held N O P:

      P Q Y
      M P
      J K L M    O P

b) P is deleted from the leaf node that held O P. This causes an underflow, so a key is borrowed from J K L M and the 2nd level node is fixed:

      P Q Y
      L P
      J K L    M O P

c) Q is deleted from the leaf node that held Q R S T:

      P Q Y
      T W Z
      R S T    U V W    Y Z

d) Y is deleted from the leaf node that held Y Z. Underflow occurs, so two nodes can be merged:

      P Q Y
      T Z
      R S T    U V W Z

Exercise 7.

Only a branch has to be full in order for an insertion to propagate all the way to the root and cause the root to split.

Exercise 8.
Normally redistribution is preferable because it is guaranteed to have only local effects.

Exercise 9.
Differences between a B* tree and a B-tree:

A B* tree page (except the root) has at least (2m-1)/3 descendants, whereas a B-tree page has at least ceiling(m/2). In other words, all B* tree nodes are guaranteed to be at least about 2/3 full, but B-tree nodes are guaranteed only to be at least 1/2 full. B* trees generally result in better storage utilization than B-trees. B* tree processing is more complicated than B-tree processing because the root has to be treated as a special case whenever it must be split. Minimum depth is not affected. Minimum depth is guaranteed when either tree is completely full, and when they are completely full they are identical.

Exercise 10.
From the Key Terms: A virtual B-tree is one in which several pages are kept in memory in anticipation of the possibility that one or more of them will be needed by a later access. It is possible to average fewer than one disk access because in some cases all pages needed to search for a key may already be in memory, so no disk accesses are required. If this happens often enough, the average number of accesses will be less than one.

Exercise 11.
If the data are stored with the key, there will be less space for keys and descendant pointers, so the order of the B-tree will be less. This increases the likelihood that the tree will be taller and thinner. On the positive side, if data is stored with the key there is no need for an extra access to retrieve the data record. So, if by storing data with key the tree height is increased by one or fewer levels, it is a good strategy.

Exercise 12.
There is no clear definition of what the middle key is. It might be the median key, or it might be a key whose position among the other keys is such that approximately half of the characters used in keys come before it and half come after it, or it might be given some other definition.

If a bias is built in toward promotion of shorter keys, there is a better chance that the height of the tree will be reduced. But this may result in a lack of balance among the nodes that are split, possibly increasing the likelihood of having to split, redistribute or concatenate when insertions and deletions occur later on.

Chapter 10
Indexed Sequential File Access and Prefix B+ Trees

Starting with the idea of a sequence set


When we began writing this chapter we took the obvious approach: we described indexed sequential file organizations by describing the connections between an index and a set of records that somehow need to form a sequence. As we reviewed the results, it became clear that we unnecessarily had deviated too much from the philosophy of the book. We were not looking at indexed sequential file structures (specifically B+ trees) in terms of the conceptual components that make them up, and as a consequence some very important, subtle concepts had become obscure and in danger of being missed by many readers.

Hence, after a brief description of indexed sequential access (Section 10.1), the focus of the text dwells for several pages (Section 10.2) on what is new about indexed sequential file structures: the sequence set. This serves not only to help give the reader a sound understanding of the special properties of a sequence set, but it also helps point out ways in which a sequence set is different from a corresponding index set. We have found it to be very common for students not to understand these differences, and that this approach helps. A side benefit of this approach is that it isolates the discussion of certain conceptual tools that can be used in other completely different contexts -- tools such as page splitting, sequential linking of blocks, and partial filling of data blocks.

Adding an index set


In Sections 10.3 and 10.4, an index set is added to the sequence set. But once again we want both to keep things simple and to underscore the fact that the structure of one part of the file structure (the sequence set) does not necessitate any particular structure for the other (the index set). So the index set that is described is just a simple index, properly adapted to separate blocks in the sequence set. The idea of a separator is a new one, so we give it a section all its own. The particular type of separator introduced here should be treated as an exemplar of the class of all types of separators that will do the job. Depending on special characteristics of the data and our own cleverness, there are often opportunities to effect substantial performance gains by intelligent and imaginative choice of separators.

The discussion of separators might also be used as a starting point for discussing an important topic related to file structures that we do not address in the text: data compression. With the explosive growth of information storage media and data communication, data compression has become and should continue to be very important. We regret that our text does not cover this topic.


The simple prefix B+ tree


The remainder of the chapter is devoted primarily to filling in details about simple prefix B+ trees. Following the approach and philosophy used in earlier chapters, there is emphasis on the basic operations, on implementation, and on configuring the system environment in which processing occurs.

Commercial products
Except for IBM, we know of no commercial developer of large indexed sequential file systems who has been willing to disclose very much useful information about how their system works. This is understandable but for our purposes unfortunate. We are able to tell that virtually all major isam implementations are based on the B+ tree approach: an index set and sequence set that change their size through block splitting and concatenation and/or redistribution.

Despite the dearth of information about commercial products, it is very useful for most students to see how real products operate. Many students aspire to be applications programmers, and they really do want to see how all this relates to what they may be doing someday. We do this in several ways:

1) We give one or two lectures on how VSAM is constructed. Several good sources of information are described in the Further Readings on page 443 of the text.

2) We provide a lab exercise in which students build a small indexed sequential file using a file system on one of our mainframes. We use either VSAM on our IBM mainframe, or RMS's interactive file creation facility on our VAX. Similar types of exercises can be carried out in certain COBOL and PL/1 programming environments.

3) We have students with a special interest in applications programming (or grad students who have to do extra work to get graduate credit) report to the class on one or more commercial systems.

Another thing that we like to do in one or more of these ways is provide some introductory material on databases. It's a good way to finish the last few days of the course, and points the way to the next important files-related course that many students will take.


Answers to exercises: Chapter 10


Exercise 1.
This is a discussion question that is meant to elicit a review of the range of file structures that we have considered so far. An example of a sequential access only file might be an unindexed variable length record file. A B-tree indexed file is an example of a file that is generally designed for direct access only, although there is nothing to keep users from accessing the data file sequentially if they know how. B+ tree files permit indexed sequential access.

Exercise 2.
The major advantage of a B-tree is that it is simpler and easier to manage. If sequential access is not needed or is rarely needed, then we might as well use the simpler structure. If no sequential access is required, the extra work required to implement a B+ tree is not useful.

Exercise 3.
After adding DOVER and EARNEST:
Block 1:  ADAMS . . . BAIRD . . . BIXBY . . . BOONE . . .
Block 2:  BYNUM . . . CARSON . . . CARTER . . .
Block 3:  DENVER . . . DOVER . . . EARNEST . . . ELLIS . . .
Block 4:  COLE . . . DAVIS

When we then delete DAVIS, underflow occurs in Block 4. We have to decide whether to concatenate or redistribute. We cannot concatenate with Block 4's successor (Block 3) because it is already full. Block 4's predecessor (Block 2) has room for one more name, so we could concatenate with it, but this would give us two completely full blocks. Or, we could redistribute the keys between Blocks 2 and 4 or between Blocks 3 and 4. Here is what the sequence set would look like if we redistribute by moving DENVER from Block 3 back to Block 4:
Block 1:  ADAMS . . . BAIRD . . . BIXBY . . . BOONE . . .
Block 2:  BYNUM . . . CARSON . . . CARTER . . .
Block 3:  DOVER . . . EARNEST . . . ELLIS . . .
Block 4:  COLE . . . DENVER


Exercise 4.
From the discussion in Section 10.2.2, the following considerations affect our choice of a block size:

- We want to be able to work with as much of our sequence set in memory as possible with a single access. So we want our blocks to be as large as possible, subject to at least the following two constraints.
- We want to be able to hold enough blocks in memory to allow us to perform operations like splitting and redistribution efficiently. Blocks must be small enough to permit this.
- We want our blocks to be small enough that the time it takes to transmit a block is not excessive. We especially do not want blocks to be so big that an extra seek must occur when some blocks are read.

If we know something about expected patterns of access:

- If a large proportion of the file accesses will be random, blocks should be relatively short, because random accesses normally use only one record from a block.
- If most of the file accesses are sequential, blocks should be relatively long, because more records will be processed per access.

If sectors and clusters are used:

- No block should be shorter than a sector, since a sector is the minimum amount that must be transferred in a single access.
- Block sizes normally should be integral multiples of sector sizes to avoid having to load sectors with little useful data in them.
- Since a cluster is a set of contiguous sectors, it can be read without incurring an extra seek. So the size of a cluster suggests a good block size when larger blocks are desired.

Exercise 5.
The best way to answer these questions is to look at the reasons for developing the B-tree and B+ tree structures to replace the simple index structures developed earlier. Regarding the use of a simple index:

- We find information in a simple index by binary searching, which can be expensive if the index is too large to keep in memory.
- If a simple index is volatile, incurring many additions and deletions, it can be very expensive to keep it sorted, especially if the index is too large to sort in memory.

So, if the index is small enough to keep in memory, and if it undergoes few changes, a simple index might suffice.


Regarding the use of a balanced binary tree rather than a B-tree for the index, the problem of keeping the file sorted is solved, but the cost of doing a binary search remains high. The solution suggested for this problem is paging. Paging can be used with binary trees, but when paging is used it can become expensive to keep the tree balanced when insertions and deletions occur. So, if the index is either small enough to be kept in memory (avoiding the necessity for paging) or if it undergoes few changes, an index in the form of a binary tree might suffice.

Exercise 6.
In the case of a B-tree, the separators must be keys because they identify specific individual records by key. In a B+ tree, separators merely provide a guide to the block containing a desired record whose key is within some range.

Exercise 7.
In the case of a B-tree, a block split results in the promotion of a key and associated data. A sequence set is not a tree but a linked list, so when a sequence set block is split we just rearrange the links to maintain the sorted list structure. Similarly, when underflow occurs in a B-tree, adjustments have to be made to the parent node and possibly through higher levels in the tree. In a sequence set, only adjacent nodes and their links are affected. Of course, when the sequence set and B-tree index are put together, any change in the sequence set can result in changes in the index set, which in turn can require a lot of work.

Exercise 8.
It need not be affected at all, since it is still a perfectly valid separator.

Exercise 9.
a) A shortest separator must be found between FINGER and FINNEY. Using the scheme in the text, we choose FINN, which then gets inserted into the leaf node in the index set. If we assume that the leaf node is too small to hold the three separators F, FINN, and FOLKS, we promote FINN to the root. So the tree looks like this:
   index set:      E FINN
                   BO CAM      F      FOLKS

   sequence set:   EMBRY-EVANS (block 4)   FABER-FINGER (block 5)   FINNEY-FOLK (block 8)   FOLKS-GADDIS (block 6)

b) After the concatenation, Block 5 is no longer needed and can be placed on an avail list. Consequently, the separator F is no longer needed. Removing F from its node in the index set forces a concatenation of index set nodes, bringing FINN back down from the root.
   index set:      E
                   BO CAM      FINN FOLKS

   sequence set:   EMBRY-FINGER (block 4)   FINNEY-FOLK (block 8)   FOLKS-GADDIS (block 6)

c) Given the tree shown in part (a), suppose we decide to redistribute by moving the name between node 5 and node 4. Suppose that the name stored after FABER in node 5 is FARCE. We move FABER over to the end of node 4, then find a new separator between FABER and FARCE, say FAR. FAR replaces F in the parent. The new tree looks like this:
   index set:      E FINN
                   BO CAM      FAR      FOLKS

   sequence set:   EMBRY-FABER (block 4)   FARCE-FINGER (block 5)   FINNEY-FOLK (block 8)   FOLKS-GADDIS (block 6)

Exercise 10.
Same block size for index set and sequence set blocks:

- The choice of an index set block size is governed by many of the same performance considerations, so the block size that is best for the sequence set block is usually best for the index set.
- A common block size makes it easier to implement buffering schemes that improve performance.

Same file for sequence set and index set:

- It avoids seeking between two separate files while accessing the simple prefix B+ tree.

Exercise 11.
Conceptual view:

   Concatenated separators:   AbArchAstronBBea
   Index to separators:       00 02 06 12 13

More detailed view:

   5                      Separator count
   16                     Total length of separators
   AbArchAstronBBea       Separators
   00 02 06 12 13         Index to separators
   B00 B01 B02 B03 B04    Relative block numbers

Exercise 12.
(This question should prompt the reader to think about the type of procedure that is carried out in Exercises 20 and 21.) Since the file is sorted, we know the exact order in which the keys will go in the sequence set. We can simply place them in the sequence set in that order, without searching down through the tree every time a new key is entered. Once we have the sequence set built (or, alternatively, while we are building the sequence set) we can identify the separators between pages and place each separator in a parent node. Every time a parent node overfills, we can create a new parent node to take the next set of separators, and promote a separator between parent nodes. This process can be repeated until the entire index is built. (A small sketch of this loading process appears after the list of advantages below.) Some advantages of this approach:

- It saves an enormous amount of time. Instead of searching down through the tree for every key to find a place for it, a process that takes several random accesses, we place the keys and separators into the file in a sequential manner.
- We are free to make each index set node and sequence set page as full as we like, so we have complete control over the percentage of space utilization by every part of the tree. If we expect the file to be very volatile, we might want to leave a lot of free space in every node in order to avoid excessive splitting, concatenation, and redistribution when the file is in use. If the file will not change at all, we might benefit from filling every node completely.
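Under simplifying assumptions (keys already sorted in memory, blocks modeled as vectors with an assumed capacity of four entries, and the first key of each block standing in for a true shortest separator), the loading process sketched above might look like this:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    const std::size_t kCapacity = 4;   // keys or separators per block (assumed)

    // Pack a sorted list of strings into consecutive blocks, left to right,
    // filling each block as full as we like (here: completely).
    std::vector<std::vector<std::string>> PackIntoBlocks(
            const std::vector<std::string>& sorted) {
        std::vector<std::vector<std::string>> blocks;
        for (std::size_t i = 0; i < sorted.size(); i += kCapacity) {
            std::size_t end = std::min(i + kCapacity, sorted.size());
            blocks.emplace_back(sorted.begin() + i, sorted.begin() + end);
        }
        return blocks;
    }

    int main() {
        std::vector<std::string> keys = {"ADAMS", "BAIRD", "BIXBY", "BOONE",
                                         "BYNUM", "CARSON", "CARTER", "COLE",
                                         "DAVIS", "DENVER", "DOVER", "ELLIS"};
        // Pass 1: the sequence set is just the sorted keys packed into blocks.
        auto level = PackIntoBlocks(keys);
        int levels = 1;
        // Passes 2..k: each index level holds one separator per block below it;
        // repeat until a single root block remains.
        while (level.size() > 1) {
            std::vector<std::string> separators;
            for (const auto& block : level) separators.push_back(block.front());
            level = PackIntoBlocks(separators);
            levels++;
        }
        std::cout << "built an index with " << levels << " levels\n";
    }

The fill factor hidden in PackIntoBlocks is exactly where the free-space decision described in the second advantage above would be made.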

Exercise 13.
The result is similar to the one produced in Fig. 10.17 in that a new index block is created that contains no separators. Also, a separator between this index block and its left sibling goes into the root. This separator (ITEMI) is just the shortest separator between the new sequence set block and its left sibling. In the figure, additions are shown in bold face.
[Figure: the modified tree. The root block's separators are now CAT DR ITEMI (index 00 03 05), ITEMI being the new separator between the new index block and its left sibling; the new index block contains no separators; the other index blocks (ALW ASP BET . . ., CL COS DE . . ., EF H IG . . .) are unchanged; the sequence set blocks include ACCESS-ALSO, ALWAYS-ASK, . . ., IGNORE-ITEM, and the new block ITEMIZE-JAR.]


Exercise 14.
The sequence set remains the same, but the index set looks like this:
   Root block:     CATCHDRUM            00 05

   Index blocks:   ALWAYSASPECTBETTER   00 06 12
                   CLASSCOSTDELETE      00 06 10
                   EFFORTHEADIGNORE     00 06 10

The B+ tree has the same number of nodes as the simple prefix B+ tree, but each node has to be much bigger to accommodate the full keys. If the node sizes were to remain the same, the simple prefix B+ tree would be much broader. See pages 431-434 for a discussion of the tradeoffs between the two types of trees.

Exercise 15.
a) When we use variable length separators (or variable length keys, for that matter) the order of a B-tree is no longer the maximum number of descendants from every node, because the maximum number of descendants from a node depends on the sizes of the separators within the node. When the separators in a node are relatively short, there will be more descendants from the node than when the separators tend to be longer.

b) Instead of counting keys to determine when overflow or underflow occur, we need some other measures. We could assume that overflow occurs when there simply is not enough room to insert a new separator and its corresponding index entry and descendant reference. We might assume that underflow occurs when 50% of the available space in a node is unused.

c) In the case of a B-tree with fixed length keys we had a fixed value m that gave us the order of the tree. The value m was just the maximum number of descendants that a node could have. We were able to use m to estimate tree height, maximum number of accesses, and space requirements. Since separators are variable in length we need some value other than m to give a similar measure of order. We could, for instance, estimate the average separator size from a sample of the sequence set data and use this to arrive at an estimated order for the tree.

Exercise 16.
In all cases except (e) the values for the B-tree are the same as those given for Exercise 5 in Chapter 9. See the discussion in Chapter 9 for the reasoning behind these. The values for the B+ tree and the simple prefix B+ tree are the same in all cases, so no column is given for the simple prefix B+ tree. It should be noted, however, that the value of h might be smaller for the simple prefix B+ tree, and also that there is likely to be more information in a simple prefix B+ tree node. Both of these should result in better performance when processing a simple prefix B+ tree.

Also note that in many cases, especially the worst cases, the number of accesses for the B+ tree and the simple prefix B+ tree are the same as those for the B-tree. The following assume that the root of the B+ tree is in memory.

a) Retrieve
   worst:  B-tree: h.     B+ tree: h.
   best:   B-tree: h.     B+ tree: h (go to the bottom of the index (h-1), then get the node from the sequence set (1)).

b) Insert
   worst:  B-tree: 3h.    B+ tree: 3h.
   best:   B-tree: h+1.   B+ tree: h+1.

c) Delete
   worst:  B-tree: 4h-3.  B+ tree: 4h-3.
   best:   B-tree: h+1.   B+ tree: h+1 (get the sequence set block holding the record (h), delete the record from the block, then rewrite the block (1)).

d) Process a file of n keys in sequence (unbuffered)
   worst:  B-tree: 2n.    B+ tree: n/(m/2); traverse the sequence set.
   best:   B-tree: 2n.    B+ tree: n/k; traverse the sequence set.

e) Process a file of n keys in sequence (buffered)
   worst:  B-tree: n + n/(k/2) (each index node needs to be read only once, and there are n/(k/2) of these).   B+ tree: n/(k/2); traverse the sequence set.
   best:   B-tree: n + n/k (same idea, but there are only n/k index nodes).   B+ tree: n/k; traverse the sequence set.

Exercise 17.
See the readings for sources.

Exercise 18.
Compared to B+ tree organizations like VSAM, the obsolete ISAM organization is very inflexible. Perhaps the one situation in which ISAM is preferable occurs when volatility is very low. Since ISAM makes no attempt to accommodate dynamic growth, data files can be stored as compactly as possible and hence can lead to very efficient retrieval. Also the physical and logical organizations of ISAM files are very close. This has an advantage and a disadvantage. The advantage is that one can take maximum advantage of characteristics of the disk drive that is being used. The disadvantage is that we are less free to impose differing logical organizations on files, depending on our needs. B+ tree vs. ISAM organizations:


sequential access: As long as locality can be maintained in the sequence set, B+ trees perform well. After many insertions and deletions, or even after initial loading if locality has not been considered, sequential access could become inefficient. After initial loading, sequential access in an ISAM file will be very efficient, since data records are stored in sequence on consecutive tracks and cylinders. After many insertions and deletions, there will be increasingly many side trips to overflow areas. ISAM stores records unblocked in overflow areas, so each such side trip takes care of only one record.

direct access: Direct access in B+ trees is always consistently good. Direct access to ISAM records is very good after initial loading, but deteriorates as the file's contents change and chains of records are put into overflow areas.

additions and deletions: B+ trees perform consistently well, although among fuller trees splitting occurs more often, and among emptier trees concatenation and redistribution occur more often. Again, ISAM files perform better when little change has taken place. As overflow chains grow, additions and deletions become increasingly costly.


Chapter 11
Hashing
Many students will have had previous exposure to hashing. Our experience has been that they can read with good understanding most of the basic material in the chapter. They know what hashing is and they know something about collisions. In fact, in our lectures we probably spend more time presenting or discussing material from the exercises than we do on these basics. If students understand the basics of hashing, it is likely that you can skip this chapter and concentrate on the extended hashing strategy presented in Chapter 12.

Distributions
Many students do have trouble understanding some of the material on distributions. It is particularly important that they learn the difference between a random distribution and a uniform distribution, something that is not obvious to many of them. The text emphasizes the use of the Poisson function for analyzing hashing performance, but students need not have a rigorous understanding of the function in order to profit from its use. We do recommend that students be encouraged to apply the Poisson distribution in the ways that we do in the text, because it helps them to understand better the mechanisms at work when hashing is used. It may also be worthwhile indicating how similar analyses can be applied to understanding performance for other file structures. One subtle point that is difficult for some to catch without special emphasis is that the Poisson analysis is based on the ratio of the number of records to the number of addresses, and not on packing density.

Buckets
The mathematical approach is particularly useful in understanding the advantage of using buckets. It is not immediately obvious to most persons that one can improve performance merely by dividing an address space into slots that are fewer but bigger. (We seem to be getting something for nothing, always a disconcerting prospect.) The mathematical analysis not only shows numerically that this is true, but also gives some insight into why. Unfortunately, the mathematics behind the computation of average search length seemed beyond the scope of the text, so we had to rely on reference to the literature for our results.

Deletions
As with other file structures, record deletion seems more complicated than addition. We emphasize the use of tombstones, but we find that students enjoy trying to find other approaches.


Hanson [1982, page 202] has some interesting things to say about different file housekeeping strategies. (In fact, Hanson's whole treatment of direct access files makes fascinating reading. It is loaded with interesting analytical and experimental results that make good material for lectures and readings.)

Other collision resolution techniques


Section 11.8 covers some very important approaches to hashing files, but not in much detail. It is well worth the time to discuss and lecture on these approaches, for they help address important shortcomings related to hashing. Extendible hashing seems to be gaining particular importance. One very satisfying thing about scatter tables and some forms of extendible hashing methods is that they harken back to important conceptual tools associated with indexing. If there is time, we introduce students to trie structures and digital search methods at this point. (See Knuth [1973], Section 6.3.)

Patterns of record access


Section 11.9, though brief, can be the basis of some good class discussion. Consider discussing Exercise 11 when you cover this topic.


Answers to exercises: Chapter 11

Exercise 1.


a) hash("Jacobs", 101) = 12:

      J    a    c    o    b    s    (padded with blanks)
      74   97   99   111  98   115  32 32 32 32 32 32

      7497  + 10011  ->  17508    mod 19937  ->  17508
      17508 +  9915  ->  27423    mod 19937  ->   7486
      7486  +  3232  ->  10718    mod 19937  ->  10718
      10718 +  3232  ->  13950    mod 19937  ->  13950
      13950 +  3232  ->  17182    mod 19937  ->  17182

      17182 mod 101 = 12

b) They just have to have the same hash value.

c) Some sort of multiple precision arithmetic might be suggested.
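A sketch of the fold-and-add scheme traced above, assuming keys are padded with blanks to 12 characters as in the trace (the function name is ours, not the text's):

    #include <iostream>
    #include <string>

    // Fold successive character pairs into four-digit numbers, add them with
    // an intermediate mod by the prime 19937, then take the sum mod the table size.
    int FoldAndAddHash(std::string key, int tableSize) {
        key.resize(12, ' ');                       // pad with blanks to 12 characters
        int sum = 0;
        for (std::size_t i = 0; i < key.size(); i += 2) {
            int pair = key[i] * 100 + key[i + 1];  // e.g. "Ja" -> 74*100 + 97 = 7497
            sum = (sum + pair) % 19937;            // keep the running sum small
        }
        return sum % tableSize;                    // home address
    }

    int main() {
        std::cout << FoldAndAddHash("Jacobs", 101) << "\n";   // prints 12
    }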

Exercise 2.
The purpose of this question is to help students understand the relationships among the different quantities involved that describe a file environment in which hashing occurs.

a) 26^5 (slightly less than 12 million)
b) n < r
c) r < M
d) r = n = M

Exercise 3.
The purpose of this question is to help the student see how the numbers that describe three types of distribution (random, worse than random, approximately uniform) relate to the basic nature of the distributions.

a) Function C's distribution compares best to the Poisson.

b) Function B. It has the largest number of single-key addresses, and all other addresses that have keys have only two keys. c) Function A. Too many addresses have no keys assigned, and very few have one or two keys. This leaves a relatively large proportion of the addresses with more than two keys assigned. d) Function B. We want our distribution to be as close to uniform as possible because it results in the fewest collisions.


Exercise 4.
We can think of the 365 days in a year as corresponding to 365 addresses, and the 23 people's randomly distributed birthdays as 23 randomly distributed record addresses. The event "two persons have the same birthday" then corresponds to the event "two keys hash to the same address". The birthday paradox then illustrates the fact that even if there are far more addresses than keys, the likelihood of a collision occurring from a random distribution of keys is very high.

Exercise 5.
N = 10,000    r = 8,000

a) 8,000/10,000 = 0.8.
b) N x p(0) = 10,000 x [(0.8)^0 e^-0.8]/0! = 10,000 x 0.4493 = 4,493 addresses.
c) N x p(1) = 10,000 x [(0.8)^1 e^-0.8]/1! = 3,594 addresses.
d) N x [p(2) + p(3) + p(4) + . . .] = 10,000 x [0.1438 + 0.0383 + 0.0077 + 0.0012 + 0.0002] = 1,912 addresses.
e) N x [p(2) + 2p(3) + 3p(4) + . . .] = 10,000 x [0.1438 + 2 x 0.0383 + 3 x 0.0077 + 4 x 0.0012 + 5 x 0.0002] = 2,493 overflow records.
f) 2,493 / 8,000 = 31%.

Exercise 6.
r = 8,000

a) b = 2, N = 5,000, r/N = 1.6
   N x [p(3) + 2p(4) + 3p(5) + 4p(6) + . . .]
      = 5,000 x [.13783 + 2 x .05513 + 3 x .01764 + 4 x .00470 + 5 x .00108 + 6 x .00022 + 7 x .00003]
      = 1,635 overflow records.

b) b = 10, N = 1,000, r/N = 8
   N x [p(11) + 2p(12) + 3p(13) + 4p(14) + . . .]
      = 1,000 x [.07219 + 2 x .04813 + 3 x .02962 + 4 x .01692 + 5 x .00903 + 6 x .00451 + 7 x .00212 + 8 x .00094 + 9 x .00040 + 10 x .00016 + 11 x .00006 + 12 x .00002]
      = 426 overflow records.
Exercise 7.
Table of Poisson values for different values of r/N:

            r/N = 0.1    0.5      1        2        5        10
   p(0)     0.9048     0.6065   0.3679   0.1353   0.0067
   p(1)     0.0905     0.3033   0.3679   0.2707   0.0337   0.0005
   p(2)     0.0045     0.0758   0.1839   0.2707   0.0842   0.0023
   p(3)     0.0002     0.0126   0.0613   0.1804   0.1404   0.0076
   p(4)                0.0016   0.0153   0.0902   0.1755   0.0189
   p(5)                0.0002   0.0031   0.0361   0.1755   0.0378
   p(6)                         0.0005   0.0120   0.1462   0.0631
   p(7)                                  0.0034   0.1044   0.0901
   p(8)                                  0.0009   0.0653   0.1126
   p(9)                                  0.0002   0.0363   0.1251
   p(10)                                          0.0181   0.1251
   p(11)                                          0.0082   0.1137
   p(12)                                          0.0034   0.0948
   p(13)                                          0.0013   0.0729
   p(14)                                          0.0005   0.0521
   p(15)                                          0.0002   0.0347
   p(16)                                                   0.0217
   p(17)                                                   0.0128
   p(18)                                                   0.0071
   p(19)                                                   0.0037
   p(20)                                                   0.0019
   p(21)                                                   0.0009
   p(22)                                                   0.0004
   p(23)                                                   0.0002

This table is a very useful source of discussion. Things to note:

- Increasing values of r/N lead to increasing numbers of synonyms.
- For smaller values of r/N the likelihood that a given address will be selected for many keys is relatively small.


- For larger values of r/N, the likelihood that an address will not be chosen becomes very small. For r/N = 10, we can expect less than one address in ten thousand to be empty.
- r/N is a ratio of keys to addresses and should not be confused with packing density.
- One normally assumes that as r/N gets larger, we are increasing bucket size as well. That is why the last column, even though it shows many more synonyms than the other columns, will ultimately show better performance.
- It may not be immediately obvious that the increasing bucket size compensates sufficiently for the increasing number of synonyms. That is why it is useful to perform the calculations on expected number of overflow records (Exercise 6).

Exercise 8.
a) It has the effect of using a bucket that is the size of a track, yet the entire track does not have to be transmitted to RAM because the desired record within the bucket can be isolated by the drive itself. Only the desired record has to be transmitted. b) The at-the-track search is accomplished by looking at the key in the key subblock that is the key to the last record in the corresponding data block. Since keys are not sorted, it is not possible to tell if a given key is in a block unless it is the last key. So the block size has to be 1. The disadvantage of this is that there have to be count and key subblocks for every record, adding a tremendous amount of overhead to a file. (The minimum amount of overhead required on an IBM 3380 is more than 700 bytes per block. For a 100-byte record this represents a seven-fold increase in file size due to subblock overhead.)

Exercise 9.
Several common approaches to doing this are described in Sections 11.8.3 - 11.8.5.

Exercise 10.
a) After adding Evans:

      Address   Key       Home Address   Search Length
      0         Alan      0              1
      1         Dean      0              2
      2         Bates     2              1
      3         Evans     1              3
      4         Cole      4              1
      5         / / / / / /
      6         / / / / / /

   After deleting Bates and Cole:

      Address   Key       Home Address   Search Length
      0         Alan      0              1
      1         Dean      0              2
      2         ######
      3         Evans     1              3
      4         / / / / / /   (No tombstone needed.)
      5         / / / / / /
      6         / / / / / /

   After deleting Bates, Evans could have been moved closer to its home address. This would have been expensive, however, as it would have required us to check that the subsequent opening of Evans' old slot (address 3) did not break an overflow chain. Instead, a tombstone was used to fill the slot that Evans would have been moved into.

   After adding Finch and Gates, deleting Alan, then adding Hart:

      Address   Key       Home Address   Search Length
      0         #####
      1         Dean      0              2
      2         Finch     0              3
      3         Evans     1              3
      4         Gates     2              3
      5         Hart      3              3
      6         / / / / / /

   Due to the use of tombstones, Dean, Finch, Evans, and Gates are all further from their home addresses than they need to be. The average search length is (2 + 3 + 3 + 3 + 3)/5 = 2.8.

   After reloading the file in the order Dean, Evans, Finch, Gates, Hart:

      Address   Key       Home Address   Search Length
      0         Dean      0              1
      1         Evans     1              1
      2         Finch     0              3
      3         Gates     2              2
      4         Hart      3              2
      5         / / / / / /
      6         / / / / / /

   The effect of doing this is that the average search length is decreased to (1 + 1 + 3 + 2 + 2)/5 = 1.8.

b) On the first pass, we could load Dean, Evans, Gates and Hart:

      Address   Key       Home Address   Search Length
      0         Dean      0              1
      1         Evans     1              1
      2         Gates     2              1
      3         Hart      3              1
      4         / / / / / /
      5         / / / / / /
      6         / / / / / /

   On the second pass, we load Finch:

      Address   Key       Home Address   Search Length
      0         Dean      0              1
      1         Evans     1              1
      2         Gates     2              1
      3         Hart      3              1
      4         Finch     0              5
      5         / / / / / /
      6         / / / / / /

   The effect: more keys are in their home addresses. Note, however, that the average search length is still (1 + 1 + 1 + 1 + 5)/5 = 1.8. One hopes that Finch will not need to be accessed an inordinate proportion of the time.

Exercise 11.
This exercise demonstrates dramatically the value of paying attention to patterns of access when loading records. We find that the computations involved become easier for students to understand if they assume a specific number of records and addresses. If we let the number of records be 8000 and the number of addresses be 2000, then the following explanation works pretty well.

Let
   r  = number of records = 8000
   ra = number of active records = .2r = 1600
   N  = number of addresses = 2000
   b  = bucket size = 5

When fully loaded the packing density = r/bN = 8000/10,000 = .8. When 20% loaded:

- the packing density = ra/bN = 1600/10,000 = .16.
- the ratio of records to addresses = 1600/2000 = .8.
- the number of overflow records = N x [p(6) + 2p(7) + 3p(8) + 4p(9) + . . .] (with p(x) based on ra/N = .8).
- the number of overflow records is 2000 x [p(6) + 2p(7) + . . .] = 2000 x [.00016 + .00002 + . . .] = .36.
- the percentage of the 1600 active records that would be overflow records is 100 x .36 / 1600 = .0225%.

So about two out of every ten thousand of the most active records will not be stored in their home address. All others should be retrievable in only one disk access.


Exercise 12.
This is a good exercise for class discussion. It would be very useful to have a look at Knuth's (Vol. 3, p. 534-539) coverage of the question beforehand, however, because he brings out some subtle ideas and generates some surprising results.

Our first thoughts about the question probably are that progressive overflow will be pretty bad, especially when a file becomes fairly full, because clusters of synonyms mixed with nonsynonyms can cause very long lists of contiguous keys. This is indeed the case if the bucket size is small. If the bucket size is large, however, there will be very few completely full buckets, so there will be few occasions in which more than two buckets have to be accessed.

When we use chaining into a separate overflow area, the results are much better -- for small bucket sizes -- because only synonyms are considered. Whereas progressive overflow mixes synonyms with non-synonyms, chaining requires only that we search through synonyms.

Knuth's analysis turns up some surprising results for larger bucket sizes. Since very little overflow occurs when buckets are large, he assumes that chained overflow records are not stored in buckets but linked together, so that the (b+k)th record of a list requires k+1 accesses. The result of this is that when packing density is high enough and buckets are very large, progressive overflow actually performs better than chaining into a separate overflow area.

There are also other, less tangible factors to consider, including the following:

- Progressive overflow is simpler to implement.
- Chaining uses extra space for link fields. For small records this space might be better used to lower packing density.
- Chaining into a separate overflow area can lead to problems of locality -- seeking to different cylinders for overflow records -- unless space is somehow allocated on the home bucket's cylinder for overflow records.

Exercise 13.
The factors that have to be considered fall into the general areas of space and time considerations.

Space considerations.

HASHING:
* packing density
* type of overflow method used
* bucket size
* space used by the index if a scatter table or other type of index is used
* record key or separator size and pointer size, as both affect the resulting index size

INDEXED SEQUENTIAL (we'll just look at B+ trees):


* space utilization within nodes (in a sense equivalent to packing density for hashed files)

Of course, something to consider that is just as important as the factors that cause extra space to be used is the cost of space. If space is cheap, then there may be little need to take its cost into consideration.

Time considerations. Here we have to consider two types of access, random and sequential. If all accesses are to be direct, hashing should almost certainly be chosen. Let's first look at the factors that affect random access performance.

Random access.

HASHING:
* search length (both for successful and unsuccessful searches)
* locality, or proximity of overflow records to home records (in order to know how much seeking might be needed)
* if an indirect method is used, such as a scatter table, whether or not (or how much of) the table can be kept in RAM

INDEXED SEQUENTIAL:
* height of the index
* placement of the index: if the index can be kept in RAM, isam will be about as fast as hashing for direct access; if a buffering strategy is used, the effectiveness of the buffering strategy needs to be analyzed
* locality, or proximity of index set and sequence set blocks

Sequential access. It would be useful here to know how frequently sequential access was likely to occur, and also how much of the file would be affected (the hit rate). If the entire file is to be accessed often, and must be accessed in some lexical order, there is no question that the indexed sequential method will be superior. But if the hit rate is usually relatively small, hashing may be as good or better. Hanson [1982] illustrates this nicely by estimating the average time required to locate various sized sets of records in sequential mode from a large (400,000 record) file. In his example, 100 records are accessed in an average of 176 msec each if the file is organized as an indexed sequential file, and only 59 msec each if hashing is used. As the hit rate grows, both organizations perform better, but the indexed sequential method improves more rapidly. At 7500 records, they are even (10.6 msec), and at 10,000 records the indexed sequential method is slightly better (10.1 vs. 10.5).

If some sort of cosequential process is involved, it may be worth sorting the file before processing. In the text we mention the possibility of considering the file sorted by address. Keys from a transaction file could be similarly sorted, allowing a cosequential process to be performed between it and the main file. Hanson briefly addresses this approach also, and indicates again that for smaller numbers of records hashing can perform better than indexed sequential methods.

Summary of factors affecting sequential access:

HASHING:
* number of records to be accessed
* whether a cosequential process is involved
* if the hit rate is so small that a sequence of direct accesses is performed: the same factors that were listed under direct access

INDEXED SEQUENTIAL:
* the hit rate
* whether a cosequential process (hence requiring sorting of input files) is to be performed

Exercise 14.
The approach discussed here provides a good example of the principle that if we are willing to question every assumption that we make about a file handling technique or file organization, we leave open the possibility for enormous improvements in performance. This is what, in our opinion, makes designing fast flexible file structures one of the most interesting and satisfying activities in computer science.


Chapter 12
Extendible Hashing

A case study in computer science and file structures


As we have said before, we abhor the cookbook approach to teaching file structures. We are committed to showing students that there are only a handful of basic problems in file structure design, and a handful of basic techniques for solving them. Understanding the problems and mastering the basic techniques allows programmers to develop a tremendous number of unique, interesting implementations that solve specific instances of the general problems. The developments leading to extendible hashing, dynamic hashing, linear hashing, and other approaches to creating self-adjusting hashing schemes are an instructive case study of these core concepts. The general problem was finding a way to modify O(1) address transformation techniques so that the address space could grow and shrink as the number of keys increased and decreased. Since tries use only as much of a key as necessary to uniquely identify an address, they were a logical starting point for solutions to this problem. It is useful for the students to see that all current approaches to extendible hashing are derived, one way or another, from the notion of tries and an extendible address. Section 12.1 sets up this background. Many instructors may want to strengthen and emphasize this theme. The key distinction is between access methods that work well for static files and ones that work for dynamic files. For example, it is interesting to compare sequential access to an array of records and access to a linked list. The linked list permits us to deal with dynamic sequences: it is the dynamic approach to O(n) access. Similarly, AVL trees provide a dynamic, self-adjusting approach to O(log n) access, whereas plain binary trees are more appropriate for static files. We must, of course, distinguish between approaches that work well in memory and those that are appropriate for file structures. B-trees and their derivatives are an excellent file structure mechanism for O(log n) access. The development of extendible hashing techniques was a response to the need for dynamic O(1) access for files.

The basics of extendible hashing


We have chosen to focus first on extendible hashing as described by Fagin, Nievergelt, Pippenger, and Strong (1979), since it is an approach that grows almost intuitively from a consideration of tries. Once students have a grasp of what is happening in this approach to extendible hashing we are well positioned to show them alternative ways of accomplishing much the same thing.

Section 12.2 shows how tries make use of more addressing information as we have more keys to distinguish, and then shows how we can flatten the trie into a directory. This is the essence of extendible hashing. This section also introduces the basic, periodic growth pattern associated with extendible hashing: a doubling of the directory, followed by a period of filling in the new address space, followed by another doubling.

Once the student has an overview of how extendible hashing works, we turn to implementing the process, starting first with the addition of keys and then turning to deletion. The pseudocode and associated description form the bulk of the chapter. We have chosen a relatively simple, straightforward implementation that does not make any use of overflow or splitting control. This simplicity makes it easier for the student to see what is going on; it also opens interesting possibilities for students to improve upon what we have done. A number of the exercises at the end of the chapter suggest ways to explore such improvements.

Extensions: a basis for student projects


The sections on implementation are followed by a brief description of extendible hashing performance. Instructors interested in a more mathematical treatment of performance should consult the sources we cite, particularly Flajolet (1983) and Mendelson (1982).

The discussion of performance leads naturally to a consideration of alternative approaches. The most interesting of these is Litwin's linear hashing, which does away entirely with the directory. Linear hashing, which makes use of overflow buckets, leads to a discussion of how we might make limited use of overflow to control splitting and improve performance for all forms of extendible hashing. This is another fruitful area for students to explore through research assignments and programming exercises. Michel Scholl's "New File Organizations Based on Dynamic Hashing" (1981) will be a good starting point for a number of student projects. Another good set of papers for students to explore is Larson's work on partial expansions and linear probing.


Answers to exercises: Chapter 12


Exercise 1.
This is a review question in the purest sense -- students should be able to satisfy themselves that they can repeat these basic distinctions. If they find that they cannot, the answer to this question is in the Chapter Summary, and appears again in the list of Key Terms.

Exercise 2.
This is not a very attractive idea, but the question does test the student's understanding of the directory expansion mechanism. Increasing the radix means that when the directory must expand, it is by a larger multiple. Suppose, for example, that the radix is three. Each time we needed to increase the size of the directory, we would multiply the number of directory elements by three. It is reasonable to assume that this larger expansion of the directory would result in a longer time between expansions, providing some amelioration for the initial waste of space due to the larger directory expansion. However, this would not be the case for distributions that happen to hash into one area of the directory. Another consequence of using a radix other than two is that we could no longer implement as much of the system through bit operations, resulting in some performance loss.

Exercise 3.
The key phrase in this question is "required number of low-order bits in the same left-to-right order." This question is not concerned with the issue of whether to use low-order bits or high-order bits -- the text deals with that directly. Here the issue is one of sequence -- why not just use the low-order bits in the order that we find them? As the question suggests, the answer revolves around the matter of extending the address. Suppose we are using the lowest four bits of an address. When we need to extend the address to five bits, we want the new bit to distinguish between two buddy buckets. In other words, we want the new bit to be the lowest bit of the directory address. If we used the low-order bits in the order that they actually occur in the hash value, the additional, fifth bit would be the highest bit, distinguishing between the first and second halves of the tree rather than between adjacent buckets. In short, we reverse the order of the low-end bits so that as we add what are, in fact, higher-order bits of the hash value, they are effectively added at the low end of the directory address.
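A small worked example (our own, using an arbitrary hash value) may help make this concrete. Suppose the low five bits of a hashed key are 10011, writing bit b1 at the low-order end. Using four bits in reversed, left-to-right order gives the directory address b1 b2 b3 b4 = 1100 (decimal 12). Extending to five bits gives b1 b2 b3 b4 b5 = 11001 (decimal 25): the old address 1100 has simply grown into the adjacent pair 11000 and 11001, which are buddies. Had we used the four bits in their natural order, the depth-4 address would have been 0011 (decimal 3) and the depth-5 address 10011 (decimal 19) -- the new bit lands at the high end and sends us to the other half of the directory.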

Exercise 4.
Shifting to the left can be done by multiplying by two; shifting to the right is dividing by two. The bitwise AND with a mask of 1 can be replaced with a modulo 2 operation. The OR operation is the same as adding the 1 or 0 resulting from the modulo 2 operation.
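To let students check the equivalence for themselves, here is a minimal, self-contained sketch -- not the textbook's MakeAddress code -- of the bit-reversal loop written once with bit operations and once with the arithmetic substitutes described above. The function and parameter names are ours, and we assume the hash value is non-negative.

int MakeAddress (int hashVal, int depth)
// form a directory address by reversing the low-order depth bits of hashVal,
// using shift, AND, and OR
{
   int retval = 0;
   for (int j = 0; j < depth; j++)
   {
      int lowbit = hashVal & 1;        // mask off the lowest bit
      hashVal = hashVal >> 1;          // shift right to discard it
      retval = (retval << 1) | lowbit; // shift left and OR the bit in
   }
   return retval;
}

int MakeAddressArithmetic (int hashVal, int depth)
// the same computation using only multiplication, division, modulo, and addition
{
   int retval = 0;
   for (int j = 0; j < depth; j++)
   {
      retval = retval * 2 + hashVal % 2; // "shift left," then add the lowest bit
      hashVal = hashVal / 2;             // "shift right"
   }
   return retval;
}

Calling either version with hashVal = 19 (binary 10011) and depth = 4 returns 12 (binary 1100), which matches the worked example under exercise 3 above.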


Exercise 5.
The redistribution of keys after a split is implemented in Bucket::Redistribute (p. 701). For each key in the current bucket (this), the directory is searched to find the address of the bucket to which the key now hashes (bucketAddr). Since the new bucket has already been installed in the directory, this address is either the address of the current bucket (bucketAddr == BucketAddr) or the address of the new bucket (bucketAddr != BucketAddr). In the latter case, the key is removed from the current bucket and inserted into the new bucket.
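For reference, here is a rough, self-contained sketch of the shape of such a redistribution loop. It is not a reproduction of the code on page 701; the SimpleBucket structure and the dirFind parameter are simplified stand-ins we have invented for the textbook's Bucket class and its directory lookup.

#include <string>
#include <vector>

// Hypothetical, simplified structure used only for this illustration
struct SimpleBucket
{
   int bucketAddr;                    // directory address of this bucket
   std::vector<std::string> keys;     // keys stored in the bucket
   std::vector<int> recAddrs;         // record addresses paired with the keys
};

// dirFind stands in for the directory lookup that returns the bucket address
// to which a key hashes, after the new bucket has been installed
void Redistribute (SimpleBucket & oldBucket, SimpleBucket & newBucket,
                   int (*dirFind) (const std::string &))
{
   // walk backwards so that removing an element does not disturb the
   // positions of keys not yet examined
   for (int i = (int) oldBucket.keys.size () - 1; i >= 0; i--)
   {
      if (dirFind (oldBucket.keys[i]) != oldBucket.bucketAddr)
      {
         // the key now hashes to the new bucket: move it there
         newBucket.keys.push_back (oldBucket.keys[i]);
         newBucket.recAddrs.push_back (oldBucket.recAddrs[i]);
         oldBucket.keys.erase (oldBucket.keys.begin () + i);
         oldBucket.recAddrs.erase (oldBucket.recAddrs.begin () + i);
      }
   }
}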

Exercise 6.
If you think through the approach described in exercise 5, you see that it is possible (though not likely for large bucket sizes) for all of the keys to hash to the directory address that points to one of the split buckets, with no keys hashing to the address of the buddy bucket. If the new key that precipitated the split in the first place also hashes to this same bucket, we have an overflow situation once again, despite the fact that we just added a new bucket. This will not cause any great difficulty: Bucket::Insert, working through a recursively called Directory::Insert, will simply call Bucket::Split again (and again, if necessary) until it can add the key.

Exercise 7.
We just described one of them in answering question 6: we allocate a new bucket during a split, but none of the keys in the original bucket ends up hashing to the address for the new one. We can also end up with empty buckets during the deletion operations described in this chapter. We can combine a bucket with another only when it has a buddy bucket. Consequently, even if we delete the last key from a bucket, we will not necessarily be able to combine it with another and free its space -- we would, instead, have an empty bucket.

Exercise 8.
This is another question that tests fundamental understanding. We can collapse a directory if and only if each successive pair of directory addresses is redundant, with both cells pointing to the same bucket. If this condition is met, the directory is twice as big as it needs to be and can be collapsed. Bucket::TryCombine looks to see if buckets can be combined; Directory::Collapse checks to see if the directory can be collapsed.
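As a concrete illustration, here is a small self-contained sketch of that test. It is not the textbook's Directory::Collapse; the names are ours, and we assume the directory cell addresses are held in an array whose size is a power of two.

int Collapsible (const int bucketAddrs[], int numCells)
// return 1 if every even/odd pair of directory cells points to the same bucket,
// so that the directory could be cut in half
{
   if (numCells < 2) return 0;       // a one-cell directory cannot shrink
   for (int i = 0; i < numCells; i += 2)
      if (bucketAddrs[i] != bucketAddrs[i + 1])
         return 0;                   // this pair points to two different buckets
   return 1;                         // every pair is redundant
}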

Exercise 9.
Early in the chapter we show how the directory could be formed by collapsing a completely full binary tree. If you think of the directory as a tree, it is easy to see that only those buckets that are at the full depth of the directory can be leaves of the tree. Having a buddy implies having a sibling at the leaf level. So, only buckets that are at the full address depth of the directory can have buddies.
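The same observation can be expressed in a few lines of code. The sketch below is ours, not the textbook's Bucket::FindBuddy, and the names are assumed for illustration only.

int BuddyAddress (int bucketAddr, int bucketDepth, int directoryDepth)
// return the directory address of the buddy bucket, or -1 if there is none
{
   if (directoryDepth == 0)
      return -1;                // a one-cell directory has no buddies at all
   if (bucketDepth < directoryDepth)
      return -1;                // bucket is not at full address depth: no buddy
   return bucketAddr ^ 1;       // buddies differ only in the low-order bit
}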


Exercise 10.
This is an extension of exercise 7. Our answer to exercise 7 describes the situations that can create empty buckets. The root cause underlying both of these situations is that the algorithms described in this chapter assume that each cell in the directory points to some bucket. It may be that many cells point to the same bucket, but, given a cell, you have a bucket address. By changing the allocation rules so that directory entries can contain null pointers, with the understanding that space for a bucket is allocated only when it is needed, and is released as soon as the bucket is empty, it would be possible to do away with empty buckets. An interesting follow-on question is whether the students think that making such modifications would be worthwhile, or whether they would add to the complexity of the code merely to handle a space inefficiency that would occur rarely, if ever.

Exercise 11.
This is a good follow-on to exercise 10, and is really a much more interesting question. If the student pursues this problem, he or she will explore the same thinking that leads to work on deferred splitting, use of overflow buckets, and so on. Here are some possible approaches to avoiding nearly empty buckets -- there are more, and good new ideas may be publishable.
- The directory serves the important function of inserting a level of indirection between the hash addresses and the buckets themselves. Why do all buckets have to be the same size? Why not start out with relatively small buckets, and then expand them for a while before splitting?
- In section 12.6.3 we mention Veklerov's suggestion of using buddy buckets as overflow areas. This approach increases storage utilization, and should therefore tend to fill up nearly empty buckets.
- In general, approaches using overflow buckets, particularly if the overflow buckets are smaller than the standard buckets, or are shared by several buckets, have the potential for decreasing the number of nearly empty buckets and for increasing storage utilization.
Students should think about how deletion might be modified to get rid of nearly empty buckets.

Exercise 12.
This question checks for basic understanding of what is happening in linear hashing. The key phrase in this question is "assuming an uncontrolled splitting implementation" -- that makes this a pretty simple question, since the choice of bucket size does not have an effect on the timing of the splitting. Consequently, small overflow buckets tend to improve space utilization while making access time longer, and large overflow buckets tend to improve access time at a cost of decreased space utilization. Answers that suggest more complexity than this indicate that the student has either read the question carelessly or does not understand linear hashing with uncontrolled splitting. This is a good warm-up for exam question #5 for this chapter, which demands more sophisticated analysis of this problem. It is also a good question to assign along with exercise 15, below.


Exercise 13.
Because an unsuccessful search must always proceed to the end of any overflow chain lying on the search path, whereas a successful search can stop as soon as it finds the key.

Exercise 14.
This question will challenge many students. Answering it involves going to the library and finding some conference proceedings, reading through some reasonably technical papers, and then demonstrating understanding of what is in those papers. The key paper here is Larson's "Linear hashing with partial expansions," which appeared in the Proceedings of the 6th Conference on Very Large Databases (Montreal, Canada, October 1-3, 1980). These proceedings have been published by ACM/IEEE. Larson does provide an Algol-like pseudocode description of the addressing scheme and of the algorithm itself, but turning Larson's description into a pseudocode description of the entire procedure will require that the students really think through the process. Allow a week or more for this assignment, and consider putting several copies of Larson's paper on reserve status in the library, perhaps along with a few copies of Enbody and Du's description from Computing Surveys.

Exercise 15.
This question moves beyond the simple issues addressed in question 12 because it deals with controlled, rather than uncontrolled, splitting. We intentionally omitted any mention of the control mechanism in the question so that the student can discover on his or her own that the choice of overflow bucket size is often linked to the control mechanism. This linkage might take the form of deferring splitting until we have filled a single overflow bucket, of tying splitting to overall space utilization, or of some other mechanism that ensures a certain level of utilization while still avoiding long chains of overflow buckets. Enbody and Du (1988) include a good discussion of the general issues revolving around controlled splitting and overflow bucket size, and Larson (1982) develops a performance analysis and tables that show the way that overflow bucket size interacts with search length and storage capacity. Litwin (1980) includes a number of graphs that reflect the same trade-offs. In brief, the issues that the students should address in their answers include the following:

- Calculations of space utilization need to include the space consumed by overflow buckets. Small overflow buckets tend to keep utilization high, but can produce longer overflow chains.
- If splitting is deferred until a certain overall space utilization is reached, as is often the case in linear hashing, use of larger overflow buckets will defer splitting longer.
- Use of large overflow buckets tends to increase the probability that, even after splitting, there will still be overflow records. Conversely, using smaller buckets makes it more likely that the split will succeed in placing all records back in primary buckets. Consider an overflow bucket that is half the size of a primary bucket. A split that divides all the records from the initial primary bucket and the overflow bucket between two primary buckets requires only 75% utilization over these buckets, which means that it is likely that all the records will fit without overflow (a brief calculation follows this list).
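To make the 75% figure concrete, here is our own arithmetic with an assumed bucket size: suppose a primary bucket holds b = 8 records and the overflow bucket holds b/2 = 4. At the moment of the split there are at most 8 + 4 = 12 records to redistribute, and the two resulting primary buckets have a combined capacity of 2 x 8 = 16 slots. The records therefore require only 12/16 = 75% of the available space, so it is likely that the split places every record back in a primary bucket.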

For approaches that use directories, the space utilization costs associated with using larger overflow buckets are offset somewhat by savings in the size of the directory. The question also asks the student to compare the use of smaller overflow buckets with sharing overflow buckets. Superficially, these appear to be two ways of accomplishing the same thing. Students who think about the problem more carefully, however, will see that sharing buckets allows us to cut down on the number of disk accesses, particularly in a system where we buffer some number of the buckets in memory.
