
Lecture 07

Dynamic Programming

CSE373: Design and Analysis of Algorithms

Fibonacci Numbers

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …

Computing the nth Fibonacci number recursively:
• F(n) = F(n-1) + F(n-2)
• F(0) = 0
• F(1) = 1
• Top-down approach
int Fib(int n)
{
    if (n <= 1)
        return n;
    else
        return Fib(n - 1) + Fib(n - 2);
}

[Recursion tree: F(n) splits into F(n-1) and F(n-2), which in turn split into F(n-2), F(n-3) and F(n-3), F(n-4), and so on.]
Fibonacci Numbers

Why is the top-down approach so inefficient?
• Recomputes many sub-problems.

[Recursion tree for Fib(5): Fib(5) calls Fib(4) and Fib(3); Fib(3) is computed twice, Fib(2) three times, and Fib(1) five times.]
Fibonacci Numbers

[Recursion tree for F(9): F(9) calls F(8) and F(7), and so on. The leftmost path, which always recurses on n-1, has height h = n; the rightmost path, which always recurses on n-2, has height h = n/2.]

Since the tree is complete down to depth n/2 and never deeper than n, the time complexity is between 2^(n/2) and 2^n.


Memoized Version (Top-Down Approach)

F[0] = 0
F[1] = 1
for i = 2 to n
    F[i] = -1

function fib(n)
    if F[n] != -1
        return F[n]
    else
        F[n] = fib(n-1) + fib(n-2)
        return F[n]
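A runnable C sketch of the same memoized scheme; the array bound MAXN, the 64-bit result type, and the name fib_memo are assumptions for illustration:

#include <stdio.h>

#define MAXN 1000                 /* assumed upper bound on n */

long long F[MAXN];                /* F[i] == -1 means "not computed yet" */

long long fib_memo(int n)
{
    if (F[n] != -1)               /* subproblem already solved: reuse it */
        return F[n];
    F[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return F[n];
}

int main(void)
{
    int n = 40;
    F[0] = 0; F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = -1;                /* mark larger subproblems as unsolved */
    printf("%lld\n", fib_memo(n));
    return 0;
}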
Iterative Version (Bottom-Up Approach)

function fib(n)
    F[0] = 0; F[1] = 1;
    for i = 2 to n
        F[i] = F[i-1] + F[i-2];
    return F[n];
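A corresponding C sketch of the bottom-up version; the fixed array bound and the name fib_bottom_up are illustrative assumptions:

/* Bottom-up Fibonacci: fill the table from the smallest subproblems up. */
long long fib_bottom_up(int n)
{
    long long F[100];                 /* assumed bound: 0 <= n < 100 */
    F[0] = 0;
    F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];   /* each subproblem solved exactly once */
    return F[n];
}

Since only the last two values are ever read, the table can be reduced to two variables if space matters.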

Bottom up progression
Fibonacci Numbers
• Computing the nth Fibonacci number using a
bottom-up approach:
• F(0) = 0
• F(1) = 1
• F(2) = 1+0 = 1
• …
• F(n-2) = F(n-3) + F(n-4)
• F(n-1) = F(n-2) + F(n-3)
• F(n) = F(n-1) + F(n-2)

[Table filled left to right: 0, 1, 1, . . ., F(n-2), F(n-1), F(n)]
Summary

Problem with the recursive Fib algorithm:
• Each subproblem is solved many times!


Solution: avoid solving the same subproblem more than once
(1) Pre-compute all subproblems that may be needed later
(2) Compute on demand, but memoize the solution to avoid recomputing it


Can you always speed up a recursive algorithm by making it
an iterative algorithm?
• E.g., merge sort
• No, since there is no overlap between the two sub-problems: nothing is recomputed, so there is nothing to save
Dynamic Programming

Dynamic Programming is an algorithm design
technique for optimization problems: often
minimizing or maximizing.

Like divide and conquer, DP solves problems by
combining solutions to sub-problems.

Unlike divide and conquer, sub-problems are not
independent.
• Sub-problems may share sub-sub-problems.
Dynamic Programming

The term Dynamic Programming comes from Control
Theory, not computer science. Programming refers
to the use of tables (arrays) to construct a solution.

In dynamic programming we usually reduce time by
increasing the amount of space

We solve the problem by solving sub-problems of
increasing size and saving each optimal solution in a
table (usually).

The table is then used for finding the optimal solution
to larger problems.

Time is saved since each sub-problem is solved only
once.
Algorithm Design
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a
bottom up fashion.
4. Construct an optimal solution from computed
information.
Example: Rod Cutting

You are given a rod of length n ≥ 0 (n in inches)

A rod of length i inches will be sold for pi dollars

Cutting is free (simplifying assumption)

Problem: given a table of prices pi determine the
maximum revenue rn obtainable by cutting up the
rod and selling the pieces.
Length i 1 2 3 4 5 6 7 8 9 10
Price pi 1 5 8 9 10 17 17 20 24 30
Example: Rod Cutting

Greedy approach:

Select the piece length that has the maximum price/length ratio

Reduce the problem to a smaller sub-problem

Does it work?

Use the greedy approach for length = 4 (a code sketch of this heuristic follows the table below)

Greedy solution is {3, 1} with revenue 9

Optimal solution is {2, 2} with revenue 10
Length i 1 2 3 4 5 6 7 8 9 10
Price pi 1 5 8 9 10 17 17 20 24 30
Pi / i 1 2.5 2.67 2.25 2 2.83 2.43 2.5 2.67 3
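A hedged C sketch of this greedy heuristic; the function name greedy_cut and the p[1..n] price layout are assumptions for illustration:

/* Greedy heuristic: repeatedly take the piece length (not exceeding the
   remaining rod) with the best price/length ratio.  For n = 4 this picks
   3 then 1, giving revenue 9, which is worse than the optimal 10. */
int greedy_cut(const int p[], int n)
{
    int revenue = 0;
    while (n > 0) {
        int best = 1;
        for (int i = 2; i <= n; i++)
            if ((double)p[i] / i > (double)p[best] / best)
                best = i;
        revenue += p[best];
        n -= best;
    }
    return revenue;
}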
Example: Rod Cutting

Brute force Approach:

Determine revenues from all possible ways to cut the rod into pieces.


How many different ways can we cut a rod of length n?
• At each of the n-1 positions between consecutive inches we either cut or do not cut.
• For a rod of length 4:

2^(4-1) = 2^3 = 8

For a rod of length n: 2^(n-1). Exponential: we cannot try all possibilities.
Example: Rod Cutting

Recursive solution:

r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, …, r_{n-1} + r_1)

A slightly different way of stating the same recursion, which
avoids repeating some computations, is

r_n = max_{1 ≤ i ≤ n} (p_i + r_{n-i})

And this latter relation can be implemented as a simple top-
down recursive procedure:
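A minimal C sketch of this top-down recursion; the p[1..n] array layout, the name cut_rod, and the INT_MIN sentinel are assumptions for illustration:

#include <limits.h>

/* Plain top-down rod cutting: r_n = max over i of (p[i] + cut_rod(p, n-i)).
   No memoization, so the running time is exponential in n. */
int cut_rod(const int p[], int n)
{
    if (n == 0)
        return 0;
    int q = INT_MIN;
    for (int i = 1; i <= n; i++) {
        int candidate = p[i] + cut_rod(p, n - i);
        if (candidate > q)
            q = candidate;
    }
    return q;
}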


Example: Rod Cutting
Let's call Cut-Rod(p, 4) to see the effects on a simple case:

The recursion leads to recomputing the same values (overlapping subproblems) – how many calls are made?
The number of nodes in the recursion tree for a rod of size n is:

T(0) = 1,   T(n) = 1 + Σ_{j=0}^{n-1} T(j) = 2^n,   for n ≥ 1.
Example: Rod Cutting
Top-down with memoization:
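A hedged C sketch of the memoized (top-down) variant; the cache array r[], its -1 sentinel, and the function name are illustrative assumptions:

#include <limits.h>

/* Memoized top-down rod cutting: r[0..n] caches solved subproblems.
   The caller must set r[i] = -1 for i = 0..n before the first call. */
int cut_rod_memo(const int p[], int n, int r[])
{
    if (r[n] >= 0)                        /* already solved: reuse it */
        return r[n];
    int q;
    if (n == 0) {
        q = 0;
    } else {
        q = INT_MIN;
        for (int i = 1; i <= n; i++) {    /* try every first piece of length i */
            int candidate = p[i] + cut_rod_memo(p, n - i, r);
            if (candidate > q)
                q = candidate;
        }
    }
    r[n] = q;                             /* memoize before returning */
    return q;
}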
Example: Rod Cutting
Bottom-up:
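A hedged C sketch of the bottom-up version; names and the explicit result array are assumptions for illustration:

#include <limits.h>

/* Bottom-up rod cutting: solve subproblems of increasing size j,
   storing each optimal revenue in r[j] so it is computed only once. */
int cut_rod_bottom_up(const int p[], int n, int r[])
{
    r[0] = 0;
    for (int j = 1; j <= n; j++) {
        int q = INT_MIN;
        for (int i = 1; i <= j; i++)      /* best first piece of length i */
            if (p[i] + r[j - i] > q)
                q = p[i] + r[j - i];
        r[j] = q;
    }
    return r[n];
}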

Whether we solve the problem in a top-down or bottom-up manner, the
asymptotic time is Θ(n^2); the major difference is recursive calls as
compared to loop iterations.
Example: Rod Cutting
Length i 0 1 2 3 4 5 6 7 8 9 10
Price pi 0 1 5 8 9 10 17 17 20 24 30
Rev Ri 0 1 5 8 10 13 17 18 22 25 30
We begin by constructing (by hand) the optimal solutions for i = 1, …, 10:
r1 = 1 (no cuts)
r2 = 5 (no cuts)
r3 = 8 (no cuts)
r4 = 10 (2 + 2)
r5 = 13 (2 + 3)
r6 = 17 (no cuts)
r7 = 18 (2 + 2 + 3)
r8 = 22 (2 + 6)
r9 = 25 (3 + 6)
r10 = 30 (no cuts)
Reconstructing a Solution

Length i 0 1 2 3 4 5 6 7 8 9 10
Price pi 0 1 5 8 9 10 17 17 20 24 30
Rev Ri 0 1 5 8 10 13 17 18 22 25 30
First piece s_i 0 1 2 3 2 2 6 1 2 3 10

Here s_i records the size of the first piece in an optimal cut of a rod of length i; repeatedly printing s_i and reducing i by s_i reconstructs the whole solution.
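A hedged C sketch of bottom-up rod cutting extended to record and print the cuts; array bounds and function names are illustrative assumptions:

#include <stdio.h>
#include <limits.h>

/* Bottom-up rod cutting that also records, in s[j], the size of the first
   piece in an optimal cut of a rod of length j, then prints those pieces. */
void cut_rod_print_solution(const int p[], int n)
{
    int r[64], s[64];                     /* assumed bound: n < 64 */
    r[0] = 0;
    for (int j = 1; j <= n; j++) {
        int q = INT_MIN;
        for (int i = 1; i <= j; i++) {
            if (p[i] + r[j - i] > q) {
                q = p[i] + r[j - i];
                s[j] = i;                 /* remember the first piece */
            }
        }
        r[j] = q;
    }
    printf("max revenue for length %d: %d\ncuts:", n, r[n]);
    while (n > 0) {                       /* walk the s[] table */
        printf(" %d", s[n]);
        n -= s[n];
    }
    printf("\n");
}

Ties may resolve to a different but equally optimal decomposition than the one listed above, e.g. {1, 6} instead of {2, 2, 3} for length 7, both giving revenue 18.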