Topics Covered: Difference between Dynamic Programming and the Divide-and-Conquer Strategy, Shortest Path Problems in Graphs, Longest Common Subsequence, Traveling Salesman Problem, Matrix Multiplication
Introduction:
Dynamic Programming is an algorithm design technique for optimization problems, which typically ask to minimize or maximize some quantity. It solves a problem by combining the solutions to subproblems that contain common sub-sub-problems.
DP can be applied when the solution of a problem includes solutions to its subproblems. We first find a recursive formula for the solution. We can then recursively solve the subproblems, starting from the trivial case, and save their solutions in memory. In the end we obtain the solution of the whole problem.
Steps to Designing a Dynamic Programming Algorithm:
1. Characterize the optimal sub-structure.
2. Recursively define the value of an optimal solution.
3. Compute the value bottom up.
4. (If needed) Construct an optimal solution.
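As a minimal illustration of steps 2 and 3 (a hypothetical example, not taken from these notes), the Fibonacci numbers can be computed bottom up from their recursive definition:

```python
# Hypothetical illustration: F(n) = F(n-1) + F(n-2) (step 2),
# computed bottom up from the trivial cases F(0) = 0, F(1) = 1 (step 3).

def fib(n):
    table = [0, 1]                      # solutions to the trivial subproblems
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib(10))  # 55
```

Each subproblem F(i) is solved once and saved in the table, so the whole computation is linear rather than exponential.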
Examples: Quicksort, Mergesort, Binary Search. Divide-and-conquer algorithms can be thought of as top-down algorithms.
Divide and conquer can be applied to any kind of problem, not only optimization problems; its subproblems are solved independently, so only one decision sequence is ever generated.
Bellman-Ford Algorithm Concept: Initialize the d and π values of all vertices in the graph. Make |V| - 1 passes over the edges of the graph; each pass consists of relaxing each edge of the graph once. After the passes are complete, check for a negative-weight cycle and return the appropriate boolean value.
Example:
Example (Cont):
In this particular example, each pass relaxes the edges in the order (t, x), (t, y), (t, z), (x, t), (y, x), (y, z), (z, x), (z, s), (s, t), (s, y). (a) The situation just before the first pass over the edges. (b)-(e) The situation after each successive pass over the edges. The d and π values in part (e) are the final values. The Bellman-Ford algorithm returns TRUE in this example.
Bellman-Ford algorithm :
BELLMAN-FORD(G, w, s)
1  INITIALIZE-SINGLE-SOURCE(G, s)
2  for i ← 1 to |V[G]| - 1
3      do for each edge (u, v) ∈ E[G]
4          do RELAX(u, v, w)
5  for each edge (u, v) ∈ E[G]
6      do if d[v] > d[u] + w(u, v)
7          then return FALSE
8  return TRUE
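The pseudocode above can be sketched as a runnable function. The graph below reproduces the standard five-vertex textbook example that the relaxation order in the figure refers to (the weights are assumed from that example):

```python
# A sketch of BELLMAN-FORD: |V| - 1 relaxation passes, then a
# negative-weight-cycle check. Edges are (u, v, weight) triples.

def bellman_ford(vertices, edges, source):
    d = {v: float("inf") for v in vertices}   # INITIALIZE-SINGLE-SOURCE
    pi = {v: None for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):        # |V| - 1 passes
        for u, v, w in edges:                 # relax each edge once per pass
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                     # negative-weight cycle check
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

V = ["s", "t", "x", "y", "z"]
E = [("t", "x", 5), ("t", "y", 8), ("t", "z", -4), ("x", "t", -2),
    ("y", "x", -3), ("y", "z", 9), ("z", "x", 7), ("z", "s", 2),
    ("s", "t", 6), ("s", "y", 7)]
ok, d, pi = bellman_ford(V, E, "s")
print(ok, d)  # True, with d["z"] = -2
```

The boolean result corresponds to the TRUE/FALSE return values in lines 7-8 of the pseudocode.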
Definitions:
Multistage Graph G(V, E): a directed graph in which the vertices are partitioned into k disjoint sets Vi, 1 ≤ i ≤ k.
If <u, v> ∈ E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k. |V1| = |Vk| = 1; s (the source) is in V1 and t (the sink) is in Vk. c(i, j) is the cost of edge <i, j>. Multistage graph problem: find a minimum-cost path from s to t of the multistage graph.
[Figure: an example multistage graph with source S, sink T, and intermediate vertices A, B, C and D, E, F.]
The greedy method cannot be applied to this case: it picks (S, A, D, T) with cost 1 + 4 + 18 = 23. The real shortest path is (S, C, F, T) with cost 5 + 2 + 2 = 9.
Dynamic programming (backward reasoning):
d(D, T) = 18, d(E, T) = 13, d(F, T) = 2.
d(A, T) = min{4 + d(D, T), 11 + d(E, T)} = min{4 + 18, 11 + 13} = 22.
d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)} = min{9 + 18, 5 + 13, 16 + 2} = 18.
d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4.
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)} = min{1 + 22, 2 + 18, 5 + 4} = 9.
The above way of reasoning is called backward reasoning.
Forward reasoning:
d(S, A) = 1, d(S, B) = 2, d(S, C) = 5.
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)} = min{1 + 4, 2 + 9} = 5.
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)} = min{1 + 11, 2 + 5} = 7.
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)} = min{2 + 16, 5 + 2} = 7.
d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)} = min{5 + 18, 7 + 13, 7 + 2} = 9.
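The backward-reasoning computation can be sketched as a short memoized recursion. The edge costs below are read off the computations in the example; the function and variable names are illustrative:

```python
# Backward reasoning on the example multistage graph:
# d(u, T) = min over successors v of c(u, v) + d(v, T).
from functools import lru_cache

edges = {
    "S": {"A": 1, "B": 2, "C": 5},
    "A": {"D": 4, "E": 11},
    "B": {"D": 9, "E": 5, "F": 16},
    "C": {"F": 2},
    "D": {"T": 18}, "E": {"T": 13}, "F": {"T": 2},
    "T": {},
}

@lru_cache(maxsize=None)
def d(u):
    # Distance from u to the sink T; each subproblem is solved once.
    if u == "T":
        return 0
    return min(c + d(v) for v, c in edges[u].items())

print(d("S"))  # 9, via S -> C -> F -> T
```

Because of the memoization, each vertex's distance is computed exactly once, just as in the hand computation above.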
Floyd-Warshall Method:
The algorithm considers the "intermediate" vertices of a shortest path, where an intermediate vertex of a simple path p = <v1, v2, ..., vl> is any vertex of p other than v1 or vl. Let dij(k) be the weight of a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, ..., k}. Then dij(0) = wij and, for k ≥ 1, dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)); the matrix D(n) gives the final shortest-path weights.
Example:
Distance Matrices:
Predecessor Matrices:
Floyd-Warshall Algorithm:
FLOYD-WARSHALL(W)
1  n ← rows[W]
2  D(0) ← W
3  for k ← 1 to n
4      do for i ← 1 to n
5          do for j ← 1 to n
6              do dij(k) ← min(dij(k-1), dik(k-1) + dkj(k-1))
7  return D(n)
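The triple loop above can be sketched directly on an adjacency matrix. The small three-vertex matrix is an illustrative example, not the one from the slides:

```python
# A sketch of FLOYD-WARSHALL. W[i][j] is the edge weight from i to j,
# float('inf') if there is no edge, and 0 on the diagonal.

def floyd_warshall(W):
    n = len(W)
    D = [row[:] for row in W]            # D(0) = W
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float("inf")
W = [[0, 3, INF],
     [INF, 0, 2],
     [7, INF, 0]]
print(floyd_warshall(W))  # [[0, 3, 5], [9, 0, 2], [7, 10, 0]]
```

Updating D in place per value of k is safe because row k and column k never change during iteration k, so D(k-1) and D(k) agree on the entries the recurrence reads.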
Subsequence:
Given a sequence X = <x1, x2, ..., xm>, another sequence Z = <z1, z2, ..., zk> is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have xij = zj. For example, Z = <B, C, D, B> is a subsequence of X = <A, B, C, B, D, A, B> with corresponding index sequence <2, 3, 5, 7>.
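The definition can be checked with a simple left-to-right scan (an illustrative helper, not part of the notes):

```python
# Z is a subsequence of X iff the symbols of Z appear in X in order.
# Consuming a single iterator over X enforces the strictly increasing
# index sequence from the definition.

def is_subsequence(Z, X):
    it = iter(X)
    return all(z in it for z in Z)  # each 'in' scans X forward from the last match

print(is_subsequence("BCDB", "ABCBDAB"))  # True, indices 2, 3, 5, 7
```

Membership tests on the shared iterator advance it past each match, so earlier positions of X can never be reused.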
Common Subsequence:
Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y. For example, if X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>, the sequence <B, C, A> is a common subsequence of X and Y.
Example:
S1: a b c d a c e
S2: b a d c a b e
Length of the LCS = 4
If xi = yj: Zk is Zk-1 followed by zk = yj = xi, where Zk-1 is an LCS of Xi-1 and Yj-1, and LenLCS(i, j) = LenLCS(i-1, j-1) + 1.
Otherwise, either Zk is an LCS of Xi and Yj-1,
or Zk is an LCS of Xi-1 and Yj.
LCS-LENGTH(X, Y)
 1  m ← length[X]
 2  n ← length[Y]
 3  for i ← 1 to m
 4      do c[i, 0] ← 0
 5  for j ← 0 to n
 6      do c[0, j] ← 0
 7  for i ← 1 to m
 8      do for j ← 1 to n
 9          do if xi = yj
10              then c[i, j] ← c[i - 1, j - 1] + 1
11                   b[i, j] ← "↖"
12              else if c[i - 1, j] ≥ c[i, j - 1]
13                  then c[i, j] ← c[i - 1, j]
14                       b[i, j] ← "↑"
15                  else c[i, j] ← c[i, j - 1]
16                       b[i, j] ← "←"
17  return c and b
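The table-filling pseudocode can be sketched in a few lines, together with the walk-back that constructs an optimal solution (the b table of arrows is not needed, since the direction can be recomputed from c):

```python
# A sketch of LCS-LENGTH plus reconstruction. Returns the length of
# an LCS of X and Y and one LCS recovered by walking back through c.

def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # c[i][j] = LenLCS(i, j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1   # the "diagonal" case
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back through c to construct an optimal solution (step 4).
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # length 4
```

The two sequences here are the X and Y from the common-subsequence example above; there can be several distinct LCSs of the same length, and the walk-back returns one of them.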