
Dynamic Programming

Topics Covered: Difference between Dynamic Programming and the Divide and Conquer Strategy, Shortest Path Problems in Graphs, Longest Common Subsequence, Traveling Salesman Problem, Matrix Multiplication

Introduction:
Dynamic Programming is an algorithm design technique for optimization problems, typically involving minimizing or maximizing some value. It solves problems by combining the solutions to subproblems that share common sub-subproblems.

DP can be applied when the solution of a problem includes solutions to subproblems. We need to find a recursive formula for the solution. We can then solve the subproblems recursively, starting from the trivial cases, and save their solutions in memory. In the end we get the solution of the whole problem.

Steps to Designing a Dynamic Programming Algorithm:
1. Characterize the optimal substructure.
2. Recursively define the value of an optimal solution.
3. Compute the value bottom-up.
4. (If needed) Construct an optimal solution.
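As a minimal illustration of these steps (this example is not from the slides), consider computing Fibonacci numbers: the recurrence fib(k) = fib(k-1) + fib(k-2) defines the value of a solution, and the value is then computed bottom-up from the trivial cases.

```python
# Illustration (not from the slides): the nth Fibonacci number,
# computed bottom-up from the trivial cases fib(0) and fib(1).
def fib(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # fib(k) = fib(k-1) + fib(k-2)
    return curr

print(fib(10))  # 55
```

Because each subproblem's solution is saved as the loop advances, every subproblem is solved exactly once.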

Difference Between Dynamic Programming and Divide & Conquer:


Divide-and-conquer algorithms split a problem into separate subproblems, solve the subproblems, and combine the results into a solution to the original problem. They can be thought of as top-down algorithms. Examples: Quicksort, Mergesort, Binary Search.

Dynamic programming splits a problem into subproblems, some of which are common, solves the subproblems, and combines the results into a solution to the original problem. It can be thought of as bottom-up. Examples: Matrix Chain Multiplication, Longest Common Subsequence.

Difference Between Dynamic Programming and Divide & Conquer (Cont):


In divide and conquer, the subproblems are independent, the solutions are comparatively simple, only one decision sequence is ever generated, and the technique can be used for many kinds of problems.

In dynamic programming, the subproblems are not independent, the solutions can be quite complex and tricky, many decision sequences may be generated, and the technique is generally used for optimization problems.

Shortest Path Problems in Graphs


Includes:
Single-Source Shortest Path Problem (Dynamic Programming Approach: Bellman-Ford Algorithm)
Multistage Graph Problem
All-Pairs Shortest Path Problem

Single Source Shortest Path Problem:


Dynamic Programming Approach (Bellman-Ford Algorithm): The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights may be negative. If there is a negative-weight cycle reachable from the source vertex, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights.

Bellman-Ford Algorithm Concept: Initialize the d and π values of all vertices in the graph. Make |V| - 1 passes over the edges of the graph; each pass consists of relaxing each edge of the graph once. After all passes are complete, check for a negative-weight cycle and return the appropriate boolean value.

Example:

Example (Cont):
In this particular example, each pass relaxes the edges in the order (t, x), (t, y), (t, z), (x, t), (y, x), (y, z), (z, x), (z, s), (s, t), (s, y). (a) The situation just before the first pass over the edges. (b)-(e) The situation after each successive pass over the edges. The d and π values in part (e) are the final values. The Bellman-Ford algorithm returns TRUE in this example.

Bellman-Ford algorithm :
BELLMAN-FORD(G, w, s)
  INITIALIZE-SINGLE-SOURCE(G, s)
  for i ← 1 to |V[G]| - 1
      do for each edge (u, v) ∈ E[G]
             do RELAX(u, v, w)
  for each edge (u, v) ∈ E[G]
      do if d[v] > d[u] + w(u, v)
             then return FALSE
  return TRUE
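The pseudocode above can be sketched in Python as follows; the three-vertex graph used at the bottom is a hypothetical example, not one from the slides.

```python
# A sketch of the BELLMAN-FORD pseudocode above in Python.
def bellman_ford(vertices, edges, source):
    """Return (dist, ok); ok is False when a negative-weight cycle
    is reachable from source."""
    INF = float("inf")
    dist = {v: INF for v in vertices}   # INITIALIZE-SINGLE-SOURCE
    dist[source] = 0
    for _ in range(len(vertices) - 1):  # |V| - 1 passes
        for u, v, w in edges:           # relax each edge once per pass
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:               # final pass: negative-cycle check
        if dist[u] + w < dist[v]:
            return dist, False
    return dist, True

# Hypothetical example graph: s -> a (4), s -> b (5), a -> b (-2).
dist, ok = bellman_ford(['s', 'a', 'b'],
                        [('s', 'a', 4), ('s', 'b', 5), ('a', 'b', -2)], 's')
print(dist['b'], ok)  # 2 True  (shortest path s -> a -> b)
```

Note that the negative edge a → b is handled correctly: the final estimate for b is 2, not the direct-edge weight 5.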

Multistage Graph Problem

Definitions:
Multistage Graph G(V, E): a directed graph in which the vertices are partitioned into k disjoint sets Vi, 1 ≤ i ≤ k.

If <u, v> ∈ E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k. |V1| = |Vk| = 1; the single vertex s (source) is in V1 and t (sink) is in Vk. c(i, j) is the cost of edge <i, j>.

Multistage graph problem: find a minimum-cost path from s to t of the multistage graph.

Shortest path in Multistage Graph: To find a shortest path in a multistage graph, first consider the greedy method.

[Figure: a small multistage example graph.]

Applying the greedy method, the shortest path from S to T is 1 + 2 + 5 = 8.

Shortest path in Multistage Graph (Cont):

[Figure: a multistage example graph with edge costs S-A = 1, S-B = 2, S-C = 5, A-D = 4, A-E = 11, B-D = 9, B-E = 5, B-F = 16, C-F = 2, D-T = 18, E-T = 13, F-T = 2.]

The greedy method cannot be applied to this case: it chooses (S, A, D, T) with cost 1 + 4 + 18 = 23, while the real shortest path is (S, C, F, T) with cost 5 + 2 + 2 = 9.

Dynamic programming approach


Dynamic programming (Forward Approach):

d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}

d(A, T) = min{4 + d(D, T), 11 + d(E, T)} = min{4 + 18, 11 + 13} = 22
d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)} = min{9 + 18, 5 + 13, 16 + 2} = 18
d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)} = min{1 + 22, 2 + 18, 5 + 4} = 9

The above way of reasoning is called backward reasoning.

Backward approach (forward reasoning):

d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)} = min{1 + 4, 2 + 9} = 5
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)} = min{1 + 11, 2 + 5} = 7
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)} = min{2 + 16, 5 + 2} = 7
d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)} = min{5 + 18, 7 + 13, 7 + 2} = 9
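The forward-approach recurrence can be sketched in Python; the successor lists below transcribe the edge costs used in the worked example.

```python
# Forward-approach sketch: d(v) = min cost from v to the sink T.
# Edge costs transcribed from the worked example above.
from functools import lru_cache

succ = {
    'S': [('A', 1), ('B', 2), ('C', 5)],
    'A': [('D', 4), ('E', 11)],
    'B': [('D', 9), ('E', 5), ('F', 16)],
    'C': [('F', 2)],
    'D': [('T', 18)],
    'E': [('T', 13)],
    'F': [('T', 2)],
}

@lru_cache(maxsize=None)
def d(v):
    # d(v) = min over edges (v, u) of cost(v, u) + d(u)
    if v == 'T':
        return 0
    return min(c + d(u) for u, c in succ[v])

print(d('S'))  # 9  (the path S -> C -> F -> T)
```

Memoizing d with lru_cache means each vertex's subproblem is solved only once, exactly as in the hand computation above.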

All Pair Shortest Path Problem


(Floyd-Warshall Algorithm)

Floyd-Warshall Method:

[Figure: a path from vertex i to vertex j whose intermediate vertices are drawn from the set {1, 2, ..., k-1}, with the remaining vertices k+1, ..., n outside the path.]

Floyd-Warshall Algorithm (example):

[Figures: a worked example on a five-vertex directed graph whose edge weights include negative values (e.g. -4 and -5); successive slides trace the shortest-path estimates as each vertex in turn is allowed as an intermediate vertex.]

Floyd-Warshall Method: The algorithm considers the "intermediate" vertices of a shortest path, where an intermediate vertex of a simple path p = v1, v2,..., vl is any vertex of p other than v1 or vl.

Floyd-Warshall Method (Cont):


Let d_ij^(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, ..., k}. When k = 0, a path from vertex i to vertex j with no intermediate vertex numbered higher than 0 has no intermediate vertices at all. Such a path has at most one edge, and hence d_ij^(0) = w_ij. A recursive definition following the above discussion is given by

d_ij^(k) = w_ij                                             if k = 0
d_ij^(k) = min(d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1))         if k ≥ 1

Example:

[Figures: the sequence of distance matrices D(0), D(1), ..., D(n) and the corresponding predecessor matrices computed by the algorithm on the example graph.]

Floyd-Warshall Algorithm:
FLOYD-WARSHALL(W)
  n ← rows[W]
  D(0) ← W
  for k ← 1 to n
      do for i ← 1 to n
             do for j ← 1 to n
                    do d_ij^(k) ← min(d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1))
  return D(n)

Time Complexity: The algorithm runs in Θ(n³) time.
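The pseudocode can be sketched in Python as follows, updating a single matrix in place (a standard space-saving variant of the D(0), ..., D(n) sequence above). The three-vertex graph used below is a hypothetical example.

```python
# A sketch of FLOYD-WARSHALL(W) in Python.
def floyd_warshall(W):
    """W: n x n weight matrix, with 0 on the diagonal and
    float('inf') where there is no edge."""
    n = len(W)
    D = [row[:] for row in W]  # D(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # d_ij(k) = min(d_ij(k-1), d_ik(k-1) + d_kj(k-1))
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Hypothetical graph: 0 -> 1 (3), 1 -> 2 (1), 2 -> 0 (2).
INF = float('inf')
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(W))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

The three nested loops over k, i, and j make the Θ(n³) running time immediate.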

Longest Common Subsequence Problem

Subsequence:
Given a sequence X = <x1, x2, ..., xm>, another sequence Z = <z1, z2, ..., zk> is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have x_ij = zj. For example, Z = <B, C, D, B> is a subsequence of X = <A, B, C, B, D, A, B> with corresponding index sequence <2, 3, 5, 7>.

Common Subsequence:
Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y. For example, if X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>, the sequence <B, C, A> is a common subsequence of both X and Y.

Example:

S1: a b c d a c e
S2: b a d c a b e

Length of LCS = 4 (for example, <a, c, a, e>)

Longest Common Subsequence problem:


In the longest-common-subsequence problem, we are given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y.

Optimal Substructure of LCS problem:


If Xi and Yj end with the same character (xi = yj), the LCS must include that character; if it did not, we could get a longer LCS by appending the common character. If Xi and Yj do not end with the same character, there are two possibilities: either the LCS does not end with xi, or it does not end with yj. Let Zk denote an LCS of Xi and Yj.

Optimal Substructure of LCS problem (Cont):


Case 1: Xi and Yj end with xi = yj.

[Figure: X = <x1, x2, ..., xi-1, xi>, Y = <y1, y2, ..., yj-1, yj = xi>, and Z = <z1, z2, ..., zk-1, zk = yj = xi>, aligned at their common last character.]

Zk is Zk-1 followed by zk = yj = xi, where Zk-1 is an LCS of Xi-1 and Yj-1, and LenLCS(i, j) = LenLCS(i-1, j-1) + 1.

Optimal Substructure of LCS problem (Cont):

Case 2: Xi and Yj end with xi ≠ yj.

[Figure: the two subcases, with Z aligned so that zk ≠ yj in one and zk ≠ xi in the other.]

Either Zk is an LCS of Xi and Yj-1, or Zk is an LCS of Xi-1 and Yj, so

LenLCS(i, j) = max{LenLCS(i, j-1), LenLCS(i-1, j)}

Optimal Substructure of LCS problem (Cont):


Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

LCS-LENGTH(X, Y)
  m ← length[X]
  n ← length[Y]
  for i ← 1 to m
      do c[i, 0] ← 0
  for j ← 0 to n
      do c[0, j] ← 0
  for i ← 1 to m
      do for j ← 1 to n
             do if xi = yj
                    then c[i, j] ← c[i-1, j-1] + 1
                         b[i, j] ← "↖"
                    else if c[i-1, j] ≥ c[i, j-1]
                             then c[i, j] ← c[i-1, j]
                                  b[i, j] ← "↑"
                             else c[i, j] ← c[i, j-1]
                                  b[i, j] ← "←"
  return c and b
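LCS-LENGTH can be sketched in Python as follows, keeping only the c table (an LCS itself can be recovered from c, or from the b table of the pseudocode).

```python
# A sketch of LCS-LENGTH in Python, computing only the length table c.
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of X[:i] and Y[:j]; row 0 and column 0 stay 0
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1        # case xi = yj
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # case xi != yj
    return c[m][n]

print(lcs_length("abcdace", "badcabe"))  # 4, matching the example above
```

The two strings are the S1 and S2 of the earlier example; the table uses Θ(mn) time and space.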

Traveling Salesman Problem
