UNIT II: DIVIDE AND CONQUER, GREEDY METHOD

Introduction: Divide-and-conquer (DAC) algorithms work according to the following plan:
1. The problem instance is divided into several smaller instances of the same problem.
2. The smaller instances are solved, and the solutions thus obtained are combined to get a solution to the original problem.
DAC is similar to the top-down approach, but the top-down approach is a design methodology (procedure), whereas DAC is a plan or policy to solve the given problem. The top-down approach deals only with dividing a program into modules, whereas DAC divides the size of the input. In the DAC approach the problem is divided into sub levels, where it is abstract. This process continues until we reach a concrete level, where we can conquer (obtain a solution directly). The sub solutions are then combined to get a solution for the original problem. A problem containing n inputs is divided into k distinct sets, yielding k sub problems; these sub problems are solved and the sub solutions are combined to form the overall solution.
(Figure: a problem of size n is split into two sub problems of size n/2 each.)
N. David Raju
If the input size is large, split the range into two data sets (p, m) and (m + 1, q); repeat the process until each sub problem is small enough to solve directly.
T(n) = g(n),             if n is small
T(n) = 2T(n/2) + f(n),   otherwise

where f(n) is the time taken to divide the problem and combine the sub solutions, and g(n) is the time taken to conquer a small instance directly.

Let n = 2^m and f(n) = k.n. Dividing both sides by 2^m:

T(2^m)/2^m = T(2^(m-1))/2^(m-1) + k

With k = 1, applying the same step repeatedly:

T(2^m)/2^m = T(2^(m-1))/2^(m-1) + 1
           = T(2^(m-2))/2^(m-2) + 1 + 1
           ...
           = T(2^(m-m))/2^(m-m) + m
           = T(1) + m = m + 1

Hence T(2^m) = 2^m (m + 1), i.e. T(n) = n(log n + 1) = O(n log n).
Example (1): Find the time complexity of DAC
a) if g(n) = O(1) and f(n) = O(n)
b) if g(n) = O(1) and f(n) = O(1)

a) This is the same as the case above with f(n) = k.n, so T(n) = O(n log n).

b) T(n) = 2T(n/2) + f(n)
        = 2T(n/2) + 1
        = 2(2T(n/4) + 1) + 1 = 4T(n/4) + 2 + 1
        = 2^k T(n/2^k) + 2^(k-1) + ... + 2 + 1
        = n T(1) + (2^k - 1)          (n = 2^k)
        = n + n - 1 = 2n - 1 = O(n)
Binary search using DAC:
Problem: To search whether an element x is present in the given list. If x is present, the value j must be displayed such that a_j = x.
Algorithm:
Step 1: DAC suggests breaking the input of n elements I = (a1, a2, ..., an) into three data sets I1 = (a1, ..., a(k-1)), I2 = (ak) and I3 = (a(k+1), ..., an).
Step 2: If x = ak, there is no need to search I1 or I3.
Step 3: If x < ak, only I1 is searched.
Step 4: If x > ak, only I3 is searched.
T(n) = g(n),             if n = 1
T(n) = T(n/2) + g(n),    if n > 1

Expanding:
T(n) = T(n/2) + g(n)
     = T(n/2^2) + 2g(n)
     = T(n/2^k) + k.g(n)

If n = 2^k and g(n) = 1:
T(n) = T(1) + k = k + 1 = O(log n)
This is valid when n is a power of 2.

Example (1): Take the instances 12 14 18 22 24 38
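The search described above can be sketched in Python; this is an iterative version rather than the recursive control abstraction, using the instances from Example (1):

```python
def binary_search(a, x):
    """Search sorted list a for x; return index j with a[j] == x, or -1."""
    p, q = 0, len(a) - 1
    while p <= q:
        mid = (p + q) // 2          # a[mid] plays the role of a_k
        if a[mid] == x:
            return mid
        elif a[mid] > x:
            q = mid - 1             # search only the left part (I1)
        else:
            p = mid + 1             # search only the right part (I3)
    return -1

print(binary_search([12, 14, 18, 22, 24, 38], 22))  # 3
```

Each iteration halves the range, matching the T(n) = T(n/2) + g(n) recurrence.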
Theorem 2: Prove that the average successful search time S equals the average unsuccessful search time U as n approaches infinity.

Savg = (total internal path length)/n + 1
Uavg = (total external path length)/(n + 1)

Since the external path length E exceeds the internal path length I by 2n (E = I + 2n):

Savg = (Uavg(n + 1) - 2n)/n + 1

As n approaches infinity, Savg approaches Uavg, i.e. lim Savg = lim Uavg.
Recursion tree of MAXMIN on 9 elements; each node shows (i, j, max, min):

(1, 9, 60, -8)
├── (1, 5, 22, -8)
│   ├── (1, 3, 22, -5)
│   │   ├── (1, 2, 22, 13)
│   │   └── (3, 3, -5, -5)
│   └── (4, 5, 15, -8)
└── (6, 9, 60, 17)
    ├── (6, 7, 60, 17)
    └── (8, 9, 47, 31)
Algorithm 1:
Procedure MaxMin(A[1..n])
{
  max = min = A[1]
  for i = 2 to n do
    if A[i] > max then max = A[i] endif
    if A[i] < min then min = A[i] endif
  endfor
}
This algorithm requires (n - 1) comparisons for finding the largest number and (n - 1) comparisons for the smallest.
T(n) = 2(n - 1)

Algorithm 2: If the above algorithm is modified as
  if A[i] > max then max = A[i]
  else if A[i] < min then min = A[i]
the best case occurs when the elements are in increasing order, requiring (n - 1) comparisons; the worst case occurs when the elements are in decreasing order, requiring 2(n - 1) comparisons.

Average number of comparisons = ((n - 1) + 2(n - 1))/2 = 3(n - 1)/2

ALGORITHM BASED ON DAC
Step 1: I = (a[1], a[2], ..., a[n]) is divided into two halves I1 and I2.
Step 2: Max(I) = larger of (Max(I1), Max(I2)); Min(I) = smaller of (Min(I1), Min(I2)).

Procedure MAXMIN(i, j, Max, Min)
{
  if i = j then Max = Min = A[i]
  else if j - i = 1 then
    if A[i] < A[j] then Max = A[j]; Min = A[i]
    else Max = A[i]; Min = A[j]
  else
  {
    p = (i + j)/2
    MAXMIN(i, p, hmax, hmin)
    MAXMIN(p + 1, j, lmax, lmin)
    Max = larger of (hmax, lmax)
    Min = smaller of (hmin, lmin)
  }
}
Time complexity:
T(n) = 0,               n = 1
T(n) = 1,               n = 2
T(n) = 2T(n/2) + 2,     n > 2

T(n) = 2T(n/2) + 2
     = 2{2T(n/4) + 2} + 2
     = 2^2 T(n/2^2) + 2^2 + 2
     = 2^(k-1) T(n/2^(k-1)) + 2^(k-1) + ... + 2^2 + 2
     = 2^(k-1) T(2) + (2^k - 2)       (n = 2^k)
     = n/2 + n - 2 = 3n/2 - 2
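The MAXMIN procedure above can be sketched in Python; this version is 0-indexed and returns the (max, min) pair instead of using reference parameters:

```python
def max_min(a, i, j):
    """Recursive MaxMin on a[i..j] inclusive; returns (max, min)."""
    if i == j:                                  # one element: no comparison
        return a[i], a[i]
    if j == i + 1:                              # two elements: one comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    p = (i + j) // 2                            # split in the middle
    hmax, hmin = max_min(a, i, p)
    lmax, lmin = max_min(a, p + 1, j)
    return max(hmax, lmax), min(hmin, lmin)     # two combine comparisons

# the 9-element instance from the recursion-tree figure
print(max_min([22, 13, -5, 15, -8, 60, 17, 47, 31], 0, 8))  # (60, -8)
```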
Theorem 3: Prove that T(n) = O(n^(log m)) if T(n) = mT(n/2) + an^2, where m > 4.
Ans:
T(n) = mT(n/2) + an^2
     = m[mT(n/4) + a(n/2)^2] + an^2 = m^2 T(n/2^2) + (m/4)an^2 + an^2
     = m^k T(n/2^k) + an^2 [1 + m/4 + (m/4)^2 + ... + (m/4)^(k-1)]     (n = 2^k)
     = m^k + an^2 . ((m/4)^k - 1)/((m/4) - 1)

Since n^2 (m/4)^k = 4^k (m/4)^k = m^k, the second term is O(m^k) when m > 4, and therefore T(n) = O(m^k).
Now m^k = m^(log n), and O(m^(log n)) = O(n^(log m)) since x^(log y) = y^(log x).

1. Merge Sort:
Problem: Sorting the given list of elements in ascending or descending order.
Analysis:
Step 1: Given a sequence of n elements A(1), A(2), ..., A(n), split them into two sets A(1)...A(n/2) and A(n/2 + 1)...A(n).
Step 2: Splitting is carried on until each set contains only one element.
Step 3: Each set is individually sorted and merged to produce a single sorted sequence.
Step 4: Step 3 is carried on until a single sequence of n elements is produced.
Example: Consider the 10 instances 310, 285, 179, 652, 351, 423, 861, 254, 450, 520.
For the first recursive call the left half is sorted: (310) and (285) are merged, then (285, 310) and (179) are merged, and finally (179, 285, 310) and the merged pair (351, 652) are combined to give 179, 285, 310, 351, 652.
Similarly, the second recursive call sorts the right half to give 254, 423, 450, 520, 861.
Finally, merging these two lists produces 179, 254, 285, 310, 351, 423, 450, 520, 652, 861.
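The splitting and merging steps can be sketched in Python; this minimal version returns a new sorted list rather than sorting in place:

```python
def merge_sort(a):
    """Sort list a in ascending order by divide and conquer."""
    if len(a) <= 1:                     # a single element is already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([310, 285, 179, 652, 351, 423, 861, 254, 450, 520]))
# [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```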
T(n) = a,                n = 1
T(n) = 2T(n/2) + cn,     n > 1

T(n) = 2T(n/2) + cn
     = 2(2T(n/4) + cn/2) + cn = 4T(n/4) + 2cn
     = 2^k T(n/2^k) + kcn        (n = 2^k)
     = na + cn log n = O(n log n)

QUICK SORT
Problem: Arranging all the elements in a list in either ascending or descending order.
Analysis: The file A(1..n) is divided into sub files so that the sorted sub files do not need merging. This can be achieved by rearranging the elements in A(1..n) such that every element in the first sub file is no larger than every element in the second.
Time complexity: Assume the file size n is 2^m, m = log2 n, and that the pivot always lands exactly in the middle of the array. In the first pass (n - 1) comparisons are made; in the second pass the two sub files need 2(n/2 - 1) comparisons, and so on. The total number of comparisons in this average case is
(n - 1) + 2(n/2 - 1) + 4(n/4 - 1) + ... + 2^(m-1)(n/2^(m-1) - 1) <= mn
Therefore T(n) = O(n log n).
In the worst case, e.g. when the original file is already sorted, each pass splits the file into sub files of sizes 0 and n - 1.
Total comparisons = (n - 1) + (n - 2) + (n - 3) + ... + 1, which is about n^2/2 = O(n^2).
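A minimal in-place sketch of quicksort, using the first element of each range as the pivot as in the analysis above:

```python
def quick_sort(a, p=0, q=None):
    """In-place quicksort of a[p..q]; a[p] is the pivot of each partition."""
    if q is None:
        q = len(a) - 1
    if p >= q:
        return a
    pivot, i = a[p], p
    for j in range(p + 1, q + 1):     # move elements < pivot before it
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[p], a[i] = a[i], a[p]           # pivot reaches its final position i
    quick_sort(a, p, i - 1)           # sort left sub file
    quick_sort(a, i + 1, q)           # sort right sub file
    return a

print(quick_sort([310, 285, 179, 652, 351, 423, 861, 254, 450, 520]))
# [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```

On an already sorted input the left sub file is always empty, giving the O(n^2) worst case derived above.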
Matrix multiplication: Partition the n x n matrices A and B into four n/2 x n/2 sub matrices:

A = | A11 A12 |     B = | B11 B12 |
    | A21 A22 |         | B21 B22 |

Then
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

To find AB this way we need 8 multiplications and 4 additions of n/2 x n/2 matrices; two such matrices can be added in time cn^2 for a constant c.

Strassen's matrix multiplication
Strassen's equations are:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

Only 7 multiplications of n/2 x n/2 matrices are needed.
T(n) = b,                  n <= 2
T(n) = 8T(n/2) + cn^2,     n > 2
T(n) = b,                  n <= 2
T(n) = 7T(n/2) + an^2,     n > 2

where a and b are constants. Working with this formula, we get
T(n) = an^2 [1 + 7/4 + (7/4)^2 + ... + (7/4)^(k-1)] + 7^k T(1)
     <= cn^2 (7/4)^(log n) + 7^(log n), c a constant
     = O(n^(log 7)) which is about O(n^2.81).
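Strassen's equations can be sketched directly; this pure-Python version (lists of lists, n a power of 2, with illustrative helper names madd/msub) shows the recursion, not an efficient implementation:

```python
def madd(X, Y):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]

def msub(X, Y):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]

def strassen(A, B):
    """Strassen multiplication of n x n matrices, n a power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split both matrices into quadrants
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # the 7 recursive products
    P = strassen(madd(A11, A22), madd(B11, B22))
    Q = strassen(madd(A21, A22), B11)
    R = strassen(A11, msub(B12, B22))
    S = strassen(A22, msub(B21, B11))
    T = strassen(madd(A11, A12), B22)
    U = strassen(msub(A21, A11), madd(B11, B12))
    V = strassen(msub(A12, A22), madd(B21, B22))
    C11 = madd(msub(madd(P, S), T), V)
    C12 = madd(R, T)
    C21 = madd(Q, S)
    C22 = madd(msub(madd(P, R), Q), U)
    # reassemble the quadrants
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```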
Chapter 3
Greedy Method
If all the programs are retrieved with equal probabilities, the mean retrieval time is

MRT = (1/n) * sum over 1 <= j <= n of t_j,   where t_j = sum over 1 <= k <= j of l_(i_k)

To minimize the MRT, we must store the programs so that their lengths are in non-decreasing order: l1 <= l2 <= ... <= ln.
Example 1: There are 3 programs of lengths l = (5, 10, 3).
Order 1, 2, 3: 5 + 15 + 18 = 38
Order 1, 3, 2: 5 + 8 + 18 = 31
Order 3, 1, 2: 3 + 8 + 18 = 29, the best case: retrieval time is minimized when the program of least length is stored next.
Time complexity: If the programs are already in order, there is only one loop in the algorithm, so the time complexity is O(n). If we include the sorting, which requires O(n log n), T(n) = O(n log n) + O(n) = O(n log n).
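The greedy claim can be checked by brute force on the 3-program instance; the helper name mrt is illustrative:

```python
from itertools import permutations

def mrt(lengths):
    """Total retrieval time divided by n for programs stored in this order."""
    total, prefix = 0, 0
    for l in lengths:
        prefix += l          # time to reach the end of this program
        total += prefix
    return total / len(lengths)

# trying every order confirms non-decreasing length is best
best = min(permutations([5, 10, 3]), key=mrt)
print(best)   # (3, 5, 10), total retrieval time 29
```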
Problem 2:
arranged such that the ratios of frequency to length are in non-increasing order:

f1/l1 >= f2/l2 >= ... >= fn/ln

Knapsack problem
Given n objects and a knapsack, object i has a weight wi and the knapsack has a capacity M. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, a profit pi.xi is earned.
Problem: To fill the knapsack so that the total profit is maximum, satisfying two conditions:
1) Objective:    sum over 1 <= i <= n of pi xi = maximum
2) Feasibility:  sum over 1 <= i <= n of wi xi <= M

Types of knapsack:
1. Normal knapsack: a fraction of the last object is included to maximize the profit, i.e. 0 <= xi <= 1, so that sum of wi xi = M.
2. 0/1 knapsack: the last object is selected only if it can be inserted completely, otherwise it is not selected, i.e. xi = 0 or 1, so that sum of wi xi <= M.
Control abstraction for normal knapsack: Let P(1:n), W(1:n) contain the profits and weights of the n objects, ordered so that

pi/wi >= p(i+1)/w(i+1)
x[2] = 1 x[4] = 1
X = (1, 1, 1, 1, 1, 2/3, 0). Rearranging into the original order, X = (1, 2/3, 1, 0, 1, 1, 1).
The solution vector for the 0/1 knapsack is X = (1, 0, 1, 0, 1, 1, 1).
Total profit sum of pi xi = 10 + 15 + 0 + 6 + 18 + 3 = 52
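A sketch of the greedy method for the normal (fractional) knapsack. The instance below (p = (25, 24, 15), w = (18, 15, 10), M = 20) is a standard illustration, not the worked example above:

```python
def greedy_knapsack(profits, weights, capacity):
    """Fractional knapsack: take objects in non-increasing p/w order."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    for i in order:
        if weights[i] <= capacity:       # object fits completely
            x[i] = 1.0
            capacity -= weights[i]
        else:                            # take a fraction of the last object
            x[i] = capacity / weights[i]
            break
    return x, sum(p * xi for p, xi in zip(profits, x))

x, profit = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, profit)   # [0.0, 1.0, 0.5] 31.5
```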
Theorem: If p1/w1 >= p2/w2 >= ... >= pn/wn, then greedy knapsack generates an optimal solution.
Proof: Let x = (x1, ..., xn) be the greedy solution vector and let j be the least index such that xj is not 1. Then
  xi = 1 for 1 <= i < j,
  0 <= xj < 1,
  xi = 0 for j < i <= n,
and sum of wi xi = M.
Let y = (y1, ..., yn) be an optimal solution with sum of wi yi = M. Let k be the least index such that yk differs from xk; then yk < xk:
1) if k < j then xk = 1, so yk < xk;
2) if k = j then, since xi = yi for i < j and the total weight is M in both solutions, yk > xk would violate feasibility, so yk < xk;
3) k > j is impossible, since the total weight would then exceed M.
Now increase yk to xk and decrease as many of (y(k+1), ..., yn) as necessary, keeping the total weight unchanged. This yields a new feasible solution z = (z1, ..., zn) with zi = xi for 1 <= i <= k and sum over k < i <= n of wi(yi - zi) = wk(zk - yk). Then, since pi/wi <= pk/wk for i > k,

sum of pi zi = sum of pi yi + (zk - yk) wk (pk/wk) - sum over k < i <= n of (yi - zi) wi (pi/wi)
            >= sum of pi yi + [(zk - yk) wk - sum over k < i <= n of (yi - zi) wi] (pk/wk)
            = sum of pi yi

So z is at least as good as y; repeating the argument transforms y into x without decreasing the profit, hence the greedy solution x is optimal.

Job Sequencing: Let there be n jobs; for any job i, a profit pi is earned if it is completed by its deadline di.
Objective: Obtain a feasible solution J = (j1, j2, ...) such that the total profit sum over i in J of pi is maximum. The set of jobs J is feasible if each job in J can be completed by its deadline.
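The greedy job-sequencing idea (take jobs in non-increasing profit order, placing each in the latest free time slot on or before its deadline) can be sketched as follows; the instance at the bottom is hypothetical, chosen only to illustrate the slot filling:

```python
def job_sequencing(jobs):
    """jobs = [(profit, deadline)], each job takes one unit of time.
    Returns the job indices in their scheduled time order."""
    order = sorted(range(len(jobs)), key=lambda i: -jobs[i][0])
    max_d = max(d for _, d in jobs)
    slot = [None] * (max_d + 1)          # slot[t] = job run at time t
    for i in order:                      # most profitable job first
        d = jobs[i][1]
        for t in range(d, 0, -1):        # latest free slot <= deadline
            if slot[t] is None:
                slot[t] = i
                break                    # job i scheduled; else dropped
    return [j for j in slot if j is not None]

# hypothetical instance: (profit, deadline) pairs
print(job_sequencing([(60, 2), (100, 1), (20, 3), (40, 2)]))  # [1, 0, 2]
```

Here job 3 (profit 40, deadline 2) is dropped because both slots before its deadline are taken by more profitable jobs.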
The optimal solution is J = {1, 2, 4}. Try all possible permutations; if the jobs in J can be processed in some permutation without violating a deadline, J is feasible. If J is a set of k jobs and s = i1, i2, ..., ik is such that d(i1) <= d(i2) <= ... <= d(ik), then J is a feasible solution.

Optimal Merge Pattern: Merging an n-record file and an m-record file requires n + m record moves. At each step, merge the two smallest files. A two-way merge pattern is represented by a merge tree. Leaf nodes (squares) represent the given files; a file obtained by merging files is a parent file (circle). The number in each node is the length of the file, i.e. its number of records.
Problem: To merge n sorted files at the cost of minimum record moves.
Objective: If di is the distance from the root to the external node for file fi, and qi is the length of fi, then the total number of record moves in a tree equals

sum over i = 1 to n of di qi

known as the weighted external path length.
Example:
Given files of sizes 2, 3, 5, 7, 9, 13, the merge tree repeatedly combines the two smallest files: 2 + 3 = 5; then 5 + 5 = 10; then 7 + 9 = 16; then 10 + 13 = 23; finally 23 + 16 = 39 at the root. The total number of record moves is 5 + 10 + 16 + 23 + 39 = 93.
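The merge cost can be computed with a min-heap that always extracts the two smallest files, matching the tree built above:

```python
import heapq

def optimal_merge_cost(sizes):
    """Total record moves when the two smallest files are merged each step."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b                  # merging moves a + b records
        heapq.heappush(heap, a + b)     # merged file goes back into the pool
    return total

print(optimal_merge_cost([2, 3, 5, 7, 9, 13]))  # 93
```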
Algorithm: Procedure Tree(l, n) For I = l to n
The edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j) is minimum among all edges (k, l) such that k is in the tree and l is not.
(Figure: a 5-vertex example graph with edge costs including 20, 30, 35, 40 and 45.)
Example: the edges selected, in order, with their costs (the partial spanning trees drawn at each step are not reproduced):

Edge     Cost
(1, 2)   10
(1, 4)   30
(4, 5)   20
(5, 3)   35

Algorithm:
Procedure PRIM(E, COST, n, T, mincost)
{
  (k, l) = edge with minimum cost
  mincost = COST(k, l)
  (T(1, 1), T(1, 2)) = (k, l)
  for i = 1 to n do
    if COST(i, l) < COST(i, k) then NEAR(i) = l
    else NEAR(i) = k
  endfor
  NEAR(k) = NEAR(l) = 0
  for j = 2 to n - 1 do
    choose vertex v with NEAR(v) != 0 such that COST(v, NEAR(v)) is minimum
    (T(j, 1), T(j, 2)) = (v, NEAR(v))
    mincost = mincost + COST(v, NEAR(v))
    NEAR(v) = 0
    for k = 1 to n do
      if NEAR(k) != 0 and COST(k, NEAR(k)) > COST(k, v) then NEAR(k) = v
    endfor
  endfor
}
Time complexity: The total time required by the algorithm is O(n^2).
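A 0-indexed Python sketch of the O(n^2) scheme above; the adjacency matrix uses float('inf') for absent edges, and the 4-vertex instance is illustrative:

```python
def prim(n, cost):
    """Prim's MST on n vertices; cost is a symmetric n x n matrix."""
    in_tree = [False] * n
    near = [0] * n                      # nearest tree vertex for each j
    in_tree[0] = True                   # start the tree at vertex 0
    edges, total = [], 0
    for _ in range(n - 1):
        # cheapest edge (near[j], j) with j outside the tree
        j = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: cost[v][near[v]])
        edges.append((near[j], j))
        total += cost[j][near[j]]
        in_tree[j] = True
        for k in range(n):              # update nearest tree vertex
            if not in_tree[k] and cost[k][j] < cost[k][near[k]]:
                near[k] = j
    return edges, total

INF = float('inf')
cost = [[INF, 10, 6, 5],
        [10, INF, INF, 15],
        [6, INF, INF, 4],
        [5, 15, 4, INF]]
print(prim(4, cost))   # tree edges with total cost 19
```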
(Figure: a weighted graph on vertices 1-6 with edge costs including 15, 20, 25, 30, 35, 40, 45 and 55, used in the example below.)
The edges are considered in non-decreasing order of cost (the intermediate forests drawn at each step are not reproduced):

Edge     Cost   Action
(1, 2)   10     accepted
(3, 4)   15     accepted
(4, 6)   20     accepted
(2, 6)   25     accepted
(1, 4)   30     rejected (creates a cycle)
(3, 5)   35     accepted

The resulting spanning tree has cost 10 + 15 + 20 + 25 + 35 = 105.
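The edge-selection process above is Kruskal's method; a sketch with union-find cycle detection, vertices renumbered 0-5:

```python
def kruskal(n, edges):
    """Kruskal's MST: edges = [(cost, u, v)], vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                            # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, total = [], 0
    for c, u, v in sorted(edges):           # non-decreasing cost order
        ru, rv = find(u), find(v)
        if ru != rv:                        # accept: connects two components
            parent[ru] = rv
            tree.append((u, v))
            total += c
        # else reject: u and v already connected, edge would form a cycle
    return tree, total

# edges of the worked example, vertices renumbered 0-5
edges = [(10, 0, 1), (15, 2, 3), (20, 3, 5), (25, 1, 5),
         (30, 0, 3), (35, 2, 4)]
print(kruskal(6, edges))   # five tree edges, total cost 105
```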
Single source shortest paths
Graphs may be used to represent a highway structure in which vertices represent cities and edges represent sections of highway. The edges are assigned weights, which might be distances or times. The problem of single source shortest paths is to find the shortest path between a given source and each destination. In the problem, a directed graph G = (V, E), a weighting function C(e) for the edges of G, and a source vertex v0 are given. We must find the shortest paths from v0 to all remaining vertices of G.
Greedy algorithm for single source shortest paths
Step 1: The shortest paths are built one by one, so that the problem becomes a multistage solution problem.
Step 2: For optimization, at every stage the sum of the lengths of all paths generated so far is minimum. The greedy way to generate the shortest paths from v0 to the remaining vertices is to generate these paths in non-decreasing order of path length.
(Figure: a directed graph on 8 vertices with edge costs including 250, 300, 800, 900, 1000, 1200, 1400, 1500 and 1700, together with its cost adjacency matrix; vertex 5 is the source in the analysis below.)
Analysis of SHORTEST PATHS algorithm (source vertex 5):

S                  Vertex     Distance to vertex
                   selected   1     2     3     4     5   6    7     8
{5}                6          inf   inf   inf   1500  0   250  inf   inf
{5,6}              7          inf   inf   inf   1250  0   250  1150  1650
{5,6,7}            4          inf   inf   inf   1250  0   250  1150  1650
{5,6,7,4}          8          inf   inf   2450  1250  0   250  1150  1650
{5,6,7,4,8}        3          3350  inf   2450  1250  0   250  1150  1650
{5,6,7,4,8,3}      2          3350  3250  2450  1250  0   250  1150  1650
The edges on the shortest paths from a vertex v to all remaining vertices in a graph G form a spanning tree, called the shortest-path spanning tree.
Time complexity: The algorithm must examine each edge in the graph, since any of the edges could be on a shortest path; therefore the minimum possible time is O(n) when the graph has n edges.
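A heap-based Python sketch of the single-source method (Dijkstra's algorithm). The 4-vertex adjacency list below is illustrative; the text's example uses an 8-vertex cost matrix:

```python
import heapq

def dijkstra(n, adj, src):
    """Shortest path lengths from src; adj[u] = list of (v, weight)."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)      # closest unfinished vertex
        if d > dist[u]:
            continue                    # stale heap entry, skip
        for v, w in adj[u]:
            if d + w < dist[v]:         # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# illustrative directed graph on vertices 0..3
adj = [[(1, 1), (2, 4)], [(2, 2), (3, 5)], [(3, 1)], []]
print(dijkstra(4, adj, 0))   # [0, 1, 3, 4]
```

Each vertex is selected in non-decreasing order of its final distance, exactly the greedy order described above.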
b) T(n) = 2T(n/2) + c, where c is a constant
        = 2{2T(n/4) + c} + c              (since f(n) = O(1))
        = 2^2 T(n/2^2) + 2c + c
        = ...
        = 2^m T(n/2^m) + (2^(m-1) + 2^(m-2) + ... + 1)c
        = 2^m T(n/2^m) + (2^m - 1)c
Let 2^m = n, m = log n:
T(n) = nT(1) + c(n - 1) = nc + c(n - 1) = O(n)
2. If k is a non-negative constant, solve
T(n) = k,               n = 1
T(n) = 3T(n/2) + kn,    n > 1
for n a power of 2, and show that T(n) = 3k n^(log 3) - 2kn.
ANS:
T(n) = 3T(n/2) + kn
     = 3[3T(n/4) + kn/2] + kn = 3^2 T(n/2^2) + (3/2)kn + kn
     = 3^3 T(n/2^3) + (9/4)kn + (3/2)kn + kn
     = ...
     = 3^m T(n/2^m) + kn[(3/2)^(m-1) + ... + (3/2) + 1]
     = 3^m T(1) + kn . ((3/2)^m - 1)/((3/2) - 1)
Let 2^m = n, m = log n; then n(3/2)^m = 3^m, so
     = 3^m k + 2k(3^m - 2^m)
     = 3k.3^m - 2kn = 3k n^(log 3) - 2kn

3. Solve the following recurrence relation:
T(n) = k,               n = 1
T(n) = aT(n/b) + f(n),  n > 1
for the following choices of a, b and f(n):
a) a = 1, b = 2 and f(n) = O(n)
b) a = 5, b = 4 and f(n) = O(n^2)
ANS:
a) T(n) = T(n/2) + cn                    (since f(n) = O(n))
        = T(n/4) + cn/2 + cn = T(n/2^2) + (3/2)cn
        = ...
        = T(n/2^m) + cn(1 + 1/2 + 1/2^2 + ... + 1/2^(m-1))
Let 2^m = n, m = log n:
T(n) = T(1) + 2cn(1 - 1/n) = T(1) + 2c(n - 1) = O(n)
b) T(n) = 5T(n/4) + cn^2
        = 5^2 T(n/4^2) + (5/16)cn^2 + cn^2
        = ...
        = 5^m T(n/4^m) + cn^2 [1 + 5/16 + ... + (5/16)^(m-1)]
Let 4^m = n, m = log4 n:
T(n) = 5^(log n) T(1) + (16/11) cn^2 (1 - (5/16)^m)
     = 5^(log n) + (16/11) cn^2 (1 - 5^(log n)/n^2)
     = O(n^2)
3.
Write the algorithm for trinary search and derive the expression for its time complexity
TRINARY SEARCH
Procedure TRISRCH(p, q)
{
  a = p + (q - p)/3
  b = p + (2*(q - p))/3
  if A[a] = x then return(a);
  else if A[b] = x then return(b);
  else if A[a] > x then TRISRCH(p, a - 1);
  else if A[b] > x then TRISRCH(a + 1, b - 1);
  else TRISRCH(b + 1, q);
}
Its recurrence is
T(n) = g(n),             n = 1
T(n) = T(n/3) + f(n),    n > 1
Let f(n) = 1:
T(n) = T(n/3) + 1
     = T(n/3^2) + 2
     = T(n/3^k) + k
     = k + 1             (n = 3^k, k = log3 n)
T(n) = O(log3 n)

4. Give an algorithm for binary search so that the two halves are unequal. Derive the expression for its time complexity.
Binary Search (split the input into two sets of 1/3rd and 2/3rd):
Procedure BINSRCH(p, q)
{
  mid = p + (q - p)/3
  if A(mid) = x then return(mid);
  else if A(mid) > x then BINSRCH(p, mid - 1);
  else BINSRCH(mid + 1, q);
}
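The trinary search above can be sketched in Python (recursive, 0-indexed; the name trisearch is illustrative):

```python
def trisearch(a, x, p=None, q=None):
    """Search sorted list a[p..q] for x with two split points; -1 if absent."""
    if p is None:
        p, q = 0, len(a) - 1
    if p > q:
        return -1
    third = (q - p) // 3
    i, j = p + third, q - third        # the two split points
    if a[i] == x:
        return i
    if a[j] == x:
        return j
    if x < a[i]:
        return trisearch(a, x, p, i - 1)       # first third
    if x < a[j]:
        return trisearch(a, x, i + 1, j - 1)   # middle third
    return trisearch(a, x, j + 1, q)           # last third

print(trisearch([12, 14, 18, 22, 24, 38], 22))  # 3
```

Each call discards at least two-thirds of the range, matching the T(n) = T(n/3) + 1 analysis.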
T(n) = g(n),                                   if n = 1
T(n) = T(n/3) + g(n) or T(2n/3) + g(n),        if n > 1

(best case: the element lies in the smaller third; worst case: in the larger two-thirds.)

Writing both cases as T(n) = T(cn/3) + g(n) with c = 1 (best) or c = 2 (worst), and g(n) = 1:
T(n) = T(c^k n/3^k) + k
Best case (c = 1), n = 3^k: T(n) = T(1) + log3 n = O(log n)
Worst case (c = 2): n(2/3)^k = 1 gives k = log n to the base 3/2, so T(n) = O(log n) as well, with a larger constant.

5. Give the algorithm for finding min-max elements when the given list is divided into 3 subsets. Derive the expression for its time complexity.
MAXMIN (each leaf node holds up to 3 elements):
if i = j then max = min = a[i]
else if j - i = 1 then
  if a[i] > a[j] then max = a[i]; min = a[j]
  else max = a[j]; min = a[i]
else if j - i = 2 then
  compare a[i], a[i+1], a[i+2] directly: three comparisons suffice to set max and min
else
  split the range into thirds, recur on each, and combine the three (max, min) pairs
Time Complexity:
With n = 3.2^k, the recursion tree has 2^k leaves of 3 elements each and 2^k - 1 internal nodes. Each leaf costs 3 comparisons and each internal combine costs 2, so
T(n) = 3.2^k + 2(2^k - 1) = 3.2^k + 2^(k+1) - 2 = n + 2n/3 - 2 = 5n/3 - 2 = O(n)

6. Why does the greedy method on the 0/1 knapsack not necessarily yield an optimal solution?
Solution: If w[i] > c for some i, where c is the remaining capacity, object i is skipped entirely. In other words, the capacity remaining at stage i - 1 might have been better utilized, e.g. so that by including w[i], c becomes 0 or minimal. Since the greedy choice can strand unused capacity, the solution obtained is not necessarily optimal.
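The failure of greedy on the 0/1 knapsack can be seen on a small hypothetical instance; greedy by profit/weight ratio picks the high-ratio objects first and strands capacity:

```python
def greedy_01(profits, weights, capacity):
    """Greedy by p/w ratio with each x restricted to 0 or 1 - not optimal."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0
    for i in order:
        if weights[i] <= capacity:     # whole objects only
            capacity -= weights[i]
            total += profits[i]
    return total

# hypothetical instance: p = (60, 100, 120), w = (10, 20, 30), M = 50
print(greedy_01([60, 100, 120], [10, 20, 30], 50))   # 160
# greedy takes the objects of weight 10 and 20 (ratios 6 and 5) for profit
# 160, leaving 20 units unused; taking the objects of weight 20 and 30
# instead fills the knapsack exactly and earns 220.
```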