
# Trees

## Red-Black Tree
1. Red Rule: A red child must have a black father.
2. Black Rule: All paths to external nodes pass through the same number of black nodes.
3. All the leaves are black, and the sky is grey.
Rotations are terminal cases: only one happens per fixup.
Height: log n ≤ h ≤ 2 log n
Limit of rotations: 2 per insert.
If we have a series of insert-delete operations for which the insertion point is known, the amortized cost of each action is O(1), i.e. O(n) for the whole series.
Bound of ratios between two branches L, R: S(R) ≤ (S(L))²
Completely isomorphic to 2-4 Trees.

## Fibonacci Heap
Maximum degree: D(n) ≤ log_ϕ n; ϕ = (1 + √5)/2
Minimum size of degree k: s_k ≥ F_{k+2}
Marking: Every node which lost one child is marked.
Cascading Cut: Cut every marked node climbing upwards. This keeps the amortized time of deleteMin at O(log n); otherwise it is O(√n).
Proof of the ϕ^k bound:
1. All subtrees of a junction x, sorted by order of insertion, have degree D[s_i] ≥ i − 2. (Proof: when x's i-th subtree was added, D[x] was i − 1, so the subtree's degree was too; since then it could lose only one child, so it is at least i − 2.)
2. F_{k+2} = 1 + Σ_{i=0}^{k} F_i; F_{k+2} ≥ ϕ^k
3. If x is a node and k = deg[x], then S_x ≥ F_{k+2} ≥ ϕ^k. (Proof: assume induction after the base cases; then s_k = 2 + Σ_{i=2}^{k} s_{i−2} ≥ 2 + Σ_{i=2}^{k} F_i = 1 + Σ_{i=0}^{k} F_i = F_{k+2}.)

## B-Tree
d defines the minimum number of keys on a node.
Height: h ≈ log_d n
1. Every node has at most d children and at least d/2 children (root excluded).
2. The root has at least 2 children if it isn't a leaf.
3. A non-leaf node with k children contains k − 1 keys.
4. On B+ trees, leaves appear at the same level.
5. Nodes at each level form linked lists.
d is optimized for the HDD/cache block size.
Insert: Add at the insertion point. If the node gets too large, split. O(log_d n) ≤ O(log n)
Split: The middle key of the node (low median) moves up into the father node. O(d)
Delete: If the key is not in a leaf, switch with succ/pred. Delete, and deal with the short node v:
1. If v is the root, discard; terminate.
2. If v has a non-short sibling, steal from it; terminate.
3. Fuse v with its sibling; repeat with p ← p[v].

# Structures
Median Heap: one min-heap and one max-heap with ∀x ∈ min, y ∈ max : x > y; the median is then at the root of one of the heaps.

# Sorting

## Comparables
QuickSort: O(n log n) expected, O(n²) worst case; in-place. Partition: pick a pivot and rearrange so that smaller elements precede it.
BubbleSort, InsertionSort: O(n²); in-place.
HeapSort: O(n log n); in-place.
MergeSort: O(n log n); needs auxiliary memory.
SelectionSort (O(n²), in-place): Traverse n slots keeping score of the maximum. Swap it with A[n]. Repeat with A[n − 1].
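The SelectionSort routine described above can be sketched as follows (a minimal sketch, using 0-based Python lists rather than the 1-based A[n] notation):

```python
def selection_sort(a):
    """In-place selection sort: O(n^2) comparisons, O(1) extra space."""
    # Shrink the unsorted prefix a[0..end] from the right, as described:
    # find the maximum in the prefix, swap it into the last slot, repeat.
    for end in range(len(a) - 1, 0, -1):
        max_i = 0
        for i in range(1, end + 1):
            if a[i] > a[max_i]:
                max_i = i
        a[end], a[max_i] = a[max_i], a[end]
    return a
```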
## Linear Time
BucketSort Θ(n): If the range is known, make the appropriate number of buckets, then:
1. Scatter: Go over the original array, putting each object in its bucket.
2. Sort each non-empty bucket (recursively or otherwise).
3. Gather: Visit the buckets in order and put all elements back into the original array.

CountSort Θ(n):
1. Given an array A bounded in the discrete range C, initialize an array of that size.
2. Passing through A, increment the counter of every occurrence of a number i in its proper slot of C.
3. Passing through C, write the number represented by i into A a total of C[i] times.

RadixSort Θ(n):
1. Take the least significant digit.
2. Group the keys based on that digit, but otherwise keep the original order of keys. (This is what makes LSD radix sort a stable sort.)
3. Repeat the grouping process with each more significant digit.

# Traversals
Traverse(t):
    if t == null then return
    → print(t) // pre-order
    Traverse(t.left)
    → (OR) print(t) // in-order
    Traverse(t.right)
    → (OR) print(t) // post-order

# Heaps

| | Binary | Binomial | Fibonacci |
| --- | --- | --- | --- |
| findMin | Θ(1) | Θ(1) | Θ(1) |
| deleteMin | Θ(log n) | Θ(log n) | O(log n) |
| insert | Θ(log n) | O(log n) | Θ(1) |
| decreaseKey | Θ(log n) | Θ(log n) | Θ(1) |
| meld | Θ(n) | Θ(log n) | Θ(1) |

## Binary
Melding: If the heap is represented by an array, concatenate the two arrays and Heapify. O(n)
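The array meld above can be sketched with Python's heapq module (a binary min-heap on a list; heapify is the O(n) bottom-up build):

```python
import heapq

def meld(h1, h2):
    """Meld two array-based binary min-heaps in O(n):
    concatenate the arrays, then rebuild the heap bottom-up."""
    merged = h1 + h2          # O(n) concatenation
    heapq.heapify(merged)     # O(n) bottom-up heapify
    return merged
```

Popping the melded heap with heappop then yields all elements in sorted order.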

## Binomial
Melding: Unify trees by rank, like binary summation. O(log n)

# Selection
QuickSelect: O(n) expected, O(n²) worst case.
5-tuple Select (median of medians): O(n) worst case.

# Union-Find
Union by Rank: The larger tree remains the master tree in every union.
Path Compression: every find operation first finds the master root, then repeats its walk to change the subroots.

# Hashing
Universal Family: a family of mappings H, each h ∈ H with h : U → [m]. H is universal iff ∀k₁ ≠ k₂ ∈ U : Pr_{h∈H}[h(k₁) = h(k₂)] ≤ 1/m.
Example: If U = [p] = {0, 1, ..., p − 1}, then H_{p,m} = {h_{a,b} | 1 ≤ a ≤ p − 1; 0 ≤ b ≤ p − 1}, where h_{a,b}(k) = ((ak + b) mod p) mod m.
Linear Probing: search in incremental order through the table until a free slot is found.
Open Addressing: Use h₁(x) to hash and h₂(x) to permute. No pointers.
Open Hashing: keys that collide are chained outside the table.
Perfect Hash: When one function clashes, try another. O(∞).
Load Factor α: the length of a possible collision chain. With n stored keys, α = n/m.

# Recursion
Master Theorem: for T(n) = aT(n/b) + f(n), with a ≥ 1, b > 1, ε > 0:
1. If f(n) = O(n^(log_b a − ε)), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a) · log^k n) for some k ≥ 0, then T(n) = Θ(n^(log_b a) · log^(k+1) n).
3. If f(n) = Ω(n^(log_b a + ε)) and a·f(n/b) ≤ c·f(n) for some c < 1, then T(n) = Θ(f(n)).
Building a recursion tree: build one tree for running times (in T(αn)) and one for f(n).
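As an illustrative sanity check (not from the source): for the mergesort-style recurrence T(n) = 2T(n/2) + n, case 2 with k = 0 gives T(n) = Θ(n log n), and unrolling the recurrence for powers of two confirms the closed form T(2^j) = 2^j (j + 1):

```python
def T(n):
    """Directly unroll T(n) = 2*T(n/2) + n with T(1) = 1 (n a power of two)."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n = 2^j the closed form is n*(log2(n) + 1), matching Theta(n log n).
for j in range(11):
    n = 2 ** j
    assert T(n) == n * (j + 1)
```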

# Orders of Growth
f = O(g): lim sup_{x→∞} f/g < ∞    f = o(g): f/g → 0
f = Θ(g): lim_{x→∞} f/g ∈ R⁺
f = Ω(g): lim inf_{x→∞} f/g > 0    f = ω(g): f/g → ∞

# Performance
Chaining:

| | E[X] | Worst Case |
| --- | --- | --- |
| Successful Search/Delete | (1 + α)/2 | n |
| Failed Search/Verified Insert | 1 + α | n |
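A minimal chaining sketch (illustrative, not from the source): each cell holds a list, and with load factor α = n/m a search scans one bucket of expected length α.

```python
class ChainingHashTable:
    def __init__(self, m):
        self.buckets = [[] for _ in range(m)]  # one chain per cell

    def insert(self, key):
        b = self.buckets[hash(key) % len(self.buckets)]
        if key not in b:          # "verified insert": 1 + alpha expected probes
            b.append(key)

    def contains(self, key):
        return key in self.buckets[hash(key) % len(self.buckets)]
```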
# Amortized Analysis
Potential Method: Set Φ to examine a parameter of the data structure D_i, where i indexes the state of the structure. If c_i is the actual cost of action i, then ĉ_i = c_i + Φ(D_i) − Φ(D_{i−1}).
The total amortized cost is then Σ_{i=1}^{n} ĉ_i = Σ_{i=1}^{n} (c_i + Φ(D_i) − Φ(D_{i−1})) = Σ_{i=1}^{n} c_i + Φ(D_n) − Φ(D_0).
Deterministic algorithm: Always predictable.
Stirling's Approximation: n! ∼ √(2πn) (n/e)^n ⇒ log n! ∼ n log n − n

# Probing
Linear: h(k, i) = (h₀(k) + i) mod m
Quadratic: h(k, i) = (h₀(k) + c₁i + c₂i²) mod m
Double: h(k, i) = (h₁(k) + i·h₂(k)) mod m

| E[X] | Unsuccessful Search | Successful Search |
| --- | --- | --- |
| Uniform Probing | 1/(1 − α) | (1/α) ln(1/(1 − α)) |
| Linear Probing | (1 + (1/(1 − α))²)/2 | (1 + 1/(1 − α))/2 |

So Linear Probing is slightly worse, but better for cache.
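A linear-probing sketch of h(k, i) = (h₀(k) + i) mod m (illustrative; deletion is omitted, and the table is assumed to never fill up):

```python
def probe_insert(table, key):
    """Insert with linear probing: h(k, i) = (h0(k) + i) mod m."""
    m = len(table)
    for i in range(m):
        j = (hash(key) + i) % m
        if table[j] is None or table[j] == key:
            table[j] = key
            return j
    raise RuntimeError("table is full")

def probe_search(table, key):
    """Return the slot holding key, or None once a free slot is hit."""
    m = len(table)
    for i in range(m):
        j = (hash(key) + i) % m
        if table[j] is None:
            return None
        if table[j] == key:
            return j
    return None
```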

Collision Expectancy: P[X ≤ 2E[X]] ≥ 1/2 (Markov). So:
1. If m = n then E[|Col|] < n/2, so P[|Col| < n] ≥ 1/2.
2. If m = n² then E[|Col|] < 1/2, so P[|Col| < 1] ≥ 1/2, and with |Col| < 1 there are no collisions.
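These expectations follow from E[|Col|] = C(n, 2)/m for a universal family; a quick numeric check (illustrative):

```python
from math import comb

def expected_collisions(n, m):
    """E[|Col|] = C(n, 2) / m for n keys hashed by a universal family into m cells."""
    return comb(n, 2) / m

n = 100
assert expected_collisions(n, n) < n / 2        # m = n   case
assert expected_collisions(n, n * n) < 0.5      # m = n^2 case
```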

# Two-Level Hashing
 
Computing the number of collisions per level: Σ_{i=0}^{n−1} (n_i choose 2) = |Col|
1. Choose m = n and h such that |Col| < n.
2. Store the n_i elements hashed to cell i in a small table of size n_i², using a perfect hash function h_i.
Random algorithm for constructing a perfect two-level hash table:
1. Choose a random h from H(n) and compute the number of collisions. If there are more than n collisions, repeat.
2. For each cell i, if n_i > 1, choose a random hash function from H(n_i²). If there are any collisions, repeat.
Expected construction time: O(n). Worst-case search time: O(1).
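The construction above can be sketched as follows (illustrative; the prime P, the integer-key assumption, and the helper names are mine, not from the source):

```python
import random

P = (1 << 31) - 1  # an assumed prime larger than any key

def rand_hash(m):
    """Draw h(k) = ((a*k + b) mod P) mod m from the universal family."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda k: ((a * k + b) % P) % m

def build_two_level(keys):
    n = len(keys)
    while True:  # level 1: retry until |Col| < n (expected O(1) tries)
        h = rand_hash(n)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[h(k)].append(k)
        if sum(len(b) * (len(b) - 1) // 2 for b in buckets) < n:
            break
    tables = []
    for b in buckets:  # level 2: tables of size n_i^2, retry until injective
        m2 = len(b) ** 2
        while True:
            h2 = rand_hash(m2) if m2 else None
            slots = [None] * m2
            ok = True
            for k in b:
                j = h2(k)
                if slots[j] is not None:
                    ok = False
                    break
                slots[j] = k
            if ok:
                break
        tables.append((h2, slots))
    return h, tables

def lookup(struct, k):
    """O(1) worst case: one level-1 hash, one level-2 hash, one compare."""
    h, tables = struct
    h2, slots = tables[h(k)]
    return h2 is not None and slots[h2(k)] == k
```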

Union-Find complexities (amortized):

| MakeSet(x) | Union(x, y) | Find(x) |
| --- | --- | --- |
| O(1) | O(1) | O(α(n)) |
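Union by rank with path compression, as described above, can be sketched as (illustrative):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n          # upper bound on tree height

    def find(self, x):
        # First walk: find the master root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Second walk: repoint every node on the path at the root.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: the larger tree remains the master tree.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```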