
Sanders/van Stee: Approximations- und Online-Algorithmen 1

The Knapsack Problem


[Figure: a knapsack of capacity W and items of sizes 10, 15, 20, 20]

n items with weight w_i ∈ ℕ and profit p_i ∈ ℕ

Choose a subset x of items

Capacity constraint: ∑_{i∈x} w_i ≤ W
wlog assume ∑_i w_i > W and ∀i : w_i < W

Maximize profit ∑_{i∈x} p_i
Reminder?: Linear Programming

Definition 1. A linear program with n variables and m constraints is specified by the following minimization problem:

Cost function f(x) = c · x
(c is called the cost vector)

m constraints of the form a_i · x ⋈_i b_i, where ⋈_i ∈ {≤, ≥, =} and a_i ∈ ℝⁿ. We have

L = {x ∈ ℝⁿ : ∀1 ≤ i ≤ m : x_i ≥ 0 ∧ a_i · x ⋈_i b_i}.

Let a_ij denote the j-th component of vector a_i.
Complexity

Theorem 1. A linear program can be solved in polynomial time.

Worst-case bounds are rather high.

The algorithm used in practice (the simplex algorithm) might take exponential worst-case time.

Reuse is not only possible but almost necessary.


Integer Linear Programming

ILP (Integer Linear Program): a linear program with the additional constraint that all x_i ∈ ℤ.

Linear relaxation: remove the integrality constraints from an ILP.
Example: The Knapsack Problem

maximize p · x
subject to w · x ≤ W, x_i ∈ {0, 1} for 1 ≤ i ≤ n.

x_i = 1 iff item i is put into the knapsack.

0/1 variables are typical for ILPs.
Linear relaxation for the knapsack problem

maximize p · x
subject to w · x ≤ W, 0 ≤ x_i ≤ 1 for 1 ≤ i ≤ n.

We allow items to be picked fractionally:
x_1 = 1/3 means that 1/3 of item 1 is put into the knapsack.
This makes the problem much easier. How would you solve it?

How to Cope with ILPs

− Solving ILPs is NP-hard
+ Powerful modeling language
+ There are generic methods that sometimes work well
+ Many ways to get approximate solutions
+ The solution of the linear relaxation helps; for example, sometimes we can simply round.
Linear Time Algorithm for the Linear Relaxation of Knapsack

Classify elements by profit density p_i/w_i into B, {k}, S such that

∀i ∈ B, j ∈ S : p_i/w_i ≥ p_k/w_k ≥ p_j/w_j, and
∑_{i∈B} w_i ≤ W but w_k + ∑_{i∈B} w_i > W.

Set x_i = 1 if i ∈ B,
    x_k = (W − ∑_{i∈B} w_i) / w_k,
    x_i = 0 if i ∈ S.

[Figure: knapsack of capacity W holding B = {10, 15} completely and part of the critical item k = 20; S = {20} stays out]
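The classification above is just a greedy scan in order of decreasing profit density. A minimal sketch (sorting-based, so O(n log n); the true linear-time version would use weighted median selection instead of sorting; the function name is mine):

```python
def fractional_knapsack(p, w, W):
    """Optimal solution x of the LP relaxation: x_i = 1 on the dense
    prefix B, a fraction of the critical item k, 0 on the rest S."""
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * len(p)
    cap = W
    for i in order:
        if w[i] <= cap:      # item fits entirely: i belongs to B
            x[i] = 1.0
            cap -= w[i]
        else:                # critical item k, taken fractionally
            x[i] = cap / w[i]
            break
    return x
```

On p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5 this packs the two densest items fully and 2/3 of the critical one.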
x_i = 1 if i ∈ B,  x_k = (W − ∑_{i∈B} w_i) / w_k,  x_i = 0 if i ∈ S.

Lemma 2. x is the optimal solution of the linear relaxation.

Proof. Let x* denote an optimal solution.

w · x* = W: otherwise increase some x*_i.
∀i ∈ B : x*_i = 1: otherwise increase x*_i and decrease some x*_j for j ∈ {k} ∪ S; this cannot decrease the profit, since i has at least the profit density of j.
∀j ∈ S : x*_j = 0: otherwise decrease x*_j and increase x*_k.

This only leaves x*_k = (W − ∑_{i∈B} w_i) / w_k, i.e., x* = x. □

Lemma 3. For the optimal solution x of the linear relaxation:

opt ≤ ∑_i x_i p_i ≤ 2 · opt

Proof. The relaxation can only improve on the integral optimum, hence opt ≤ ∑_i x_i p_i. We have ∑_{i∈B} p_i ≤ opt (B is a feasible integral solution). Furthermore, since w_k < W, also p_k ≤ opt. We get

∑_i x_i p_i ≤ ∑_{i∈B} p_i + p_k ≤ opt + opt = 2 · opt. □

Two-approximation of Knapsack

x_i = 1 if i ∈ B,  x_k = (W − ∑_{i∈B} w_i) / w_k,  x_i = 0 if i ∈ S.

[Figure: as before, B = {10, 15}, critical item k = 20, S = {20}]

Exercise: Prove that either B or {k} is a 2-approximation of the (nonrelaxed) knapsack problem.
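Without giving the proof away, the candidate algorithm from the exercise can be sketched as: run the greedy classification and return the better of B and {k} (function name mine):

```python
def knapsack_2approx(p, w, W):
    """Return the better of the greedy prefix B and the critical
    item {k}, as a list of item indices."""
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    B, cap, k = [], W, None
    for i in order:
        if w[i] <= cap:
            B.append(i)
            cap -= w[i]
        else:
            k = i            # critical item: the first that no longer fits
            break
    if k is None or sum(p[i] for i in B) >= p[k]:
        return B
    return [k]
```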
Dynamic Programming: Building it Piece By Piece

Principle of optimality:
An optimal solution can be viewed as constructed of optimal solutions for subproblems.
Solutions with the same objective value are interchangeable.

Example: shortest paths
Any subpath of a shortest path is a shortest path.
Shortest subpaths are interchangeable.

[Figure: path s → u → v → t]
Dynamic Programming by Capacity for the Knapsack Problem

Define
P(i, C) = optimal profit from items 1, …, i using capacity ≤ C.

Lemma 4. ∀1 ≤ i ≤ n : P(i, C) = max(P(i−1, C), P(i−1, C − w_i) + p_i)
Proof.

P(i, C) ≥ P(i−1, C): set x_i = 0 and use the optimal subsolution.
P(i, C) ≥ P(i−1, C − w_i) + p_i: set x_i = 1 and use the optimal subsolution for capacity C − w_i.
Therefore P(i, C) ≥ max(P(i−1, C), P(i−1, C − w_i) + p_i).

It remains to show P(i, C) ≤ max(P(i−1, C), P(i−1, C − w_i) + p_i). Assume the contrary: there is an x that is optimal for the subproblem such that

P(i−1, C) < p · x ∧ P(i−1, C − w_i) + p_i < p · x.

Case x_i = 0: x is also feasible for P(i−1, C). Hence P(i−1, C) ≥ p · x. Contradiction.
Case x_i = 1: setting x_i = 0 yields a feasible solution x′ for P(i−1, C − w_i) with profit p · x′ = p · x − p_i. Adding p_i gives P(i−1, C − w_i) + p_i ≥ p · x. Contradiction. □

Computing P(i, C) bottom up:

Procedure knapsack(p, w, n, W)
  array P[0 … W] = [0, …, 0]
  bitarray decision[1 … n, 0 … W] = [(0, …, 0), …, (0, …, 0)]
  for i := 1 to n do
    // invariant: ∀C ∈ {0, …, W} : P[C] = P(i − 1, C)
    for C := W downto w_i do
      if P[C − w_i] + p_i > P[C] then
        P[C] := P[C − w_i] + p_i
        decision[i, C] := 1
Recovering a Solution

C := W
array x[1 … n]
for i := n downto 1 do
  x[i] := decision[i, C]
  if x[i] = 1 then C := C − w_i
return x

Analysis:

Time: O(nW) (pseudo-polynomial)
Space: W + O(n) words plus W · n bits.
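The two pseudocode slides translate almost line by line into a runnable sketch (0-based indexing; function name mine):

```python
def knapsack_dp(p, w, W):
    n = len(p)
    P = [0] * (W + 1)                       # P[C] = P(i, C) after round i
    decision = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for C in range(W, w[i - 1] - 1, -1):
            if P[C - w[i - 1]] + p[i - 1] > P[C]:
                P[C] = P[C - w[i - 1]] + p[i - 1]
                decision[i][C] = 1
    x, C = [0] * n, W                       # recover a solution
    for i in range(n, 0, -1):
        x[i - 1] = decision[i][C]
        if x[i - 1]:
            C -= w[i - 1]
    return P[W], x
```

On the instance p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5 this returns profit 35 with x = (0, 1, 1, 0), matching the example table.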



Example: A Knapsack Instance

maximize (10, 20, 15, 20) · x
subject to (1, 3, 2, 4) · x ≤ 5

P(i, C), (decision[i, C]):

i\C   0      1       2       3       4       5
0     0      0       0       0       0       0
1     0,(0)  10,(1)  10,(1)  10,(1)  10,(1)  10,(1)
2     0,(0)  10,(0)  10,(0)  20,(1)  30,(1)  30,(1)
3     0,(0)  10,(0)  15,(1)  25,(1)  30,(0)  35,(1)
4     0,(0)  10,(0)  15,(0)  25,(0)  30,(0)  35,(0)
Dynamic Programming by Profit for the Knapsack Problem

Define
C(i, P) = smallest capacity from items 1, …, i giving profit P.

Lemma 5. ∀1 ≤ i ≤ n : C(i, P) = min(C(i−1, P), C(i−1, P − p_i) + w_i)
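A sketch of the profit-indexed table (C[P] = ∞ marks unreachable profits; Phat is any upper bound on the optimal profit; the function name is mine):

```python
import math

def knapsack_dp_by_profit(p, w, W, Phat):
    C = [math.inf] * (Phat + 1)   # C[P] = smallest capacity giving profit P
    C[0] = 0
    for i in range(len(p)):
        for P in range(Phat, p[i] - 1, -1):
            C[P] = min(C[P], C[P - p[i]] + w[i])
    # best profit whose capacity fits
    return max(P for P in range(Phat + 1) if C[P] <= W)
```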
Dynamic Programming by Profit

Let P̂ := ⌊p · x⌋ where x is the optimal solution of the linear relaxation. Then P̂ is an upper bound on the optimal (integral) profit.

Time: O(nP̂) (pseudo-polynomial)
Space: P̂ + O(n) words plus P̂ · n bits.
A Faster Algorithm

Dynamic programs are only pseudo-polynomial.
A polynomial-time solution is not possible (unless P = NP), because this problem is NP-hard.
However, it would be possible if the numbers in the input were small (i.e., polynomial in n).
To get a good approximation in polynomial time, we are going to ignore the least significant bits of the input.
Fully Polynomial Time Approximation Scheme

Algorithm A is a (fully) polynomial-time approximation scheme for a minimization / maximization problem if:

Input: instance I, error parameter ε
Output quality: f(x) ≤ (1 + ε) · opt for minimization, f(x) ≥ (1 − ε) · opt for maximization
Time: polynomial in |I| (and in 1/ε for the fully polynomial case)
Example Bounds

PTAS              FPTAS
n + 2^(1/ε)       n + 1/ε²
n^(log(1/ε))      n + 1/ε⁴
n^(1/ε)           n/ε
n^(42/ε³)         ⋮
n + 2^(1000/ε)    ⋮
FPTAS for Knapsack

P := max_i p_i            // maximum profit
K := εP/n                 // scaling factor
p′_i := ⌊p_i/K⌋           // scale profits
x′ := dynamicProgrammingByProfit(p′, w, W)
output x′
[Figure: knapsack of capacity W with items of profit 11, 16, 20, 21]

Example: ε = 1/3, n = 4, P = 20 ⇒ K = 5/3
p = (11, 20, 16, 21) ⇒ p′ = (6, 12, 9, 12)
(equivalent to p′ = (2, 4, 3, 4))
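A hedged, runnable sketch of the scheme (it tracks item sets inside the DP for simplicity, which costs extra space compared to the decision-bit version on the slides; the function name is mine):

```python
import math

def knapsack_fptas(p, w, W, eps):
    n = len(p)
    K = eps * max(p) / n                         # scaling factor
    ps = [math.floor(pi / K) for pi in p]        # scaled profits
    Phat = sum(ps)                               # crude upper bound
    # C[q] = (weight, item set) of the lightest solution with scaled profit q
    C = [(math.inf, frozenset())] * (Phat + 1)
    C[0] = (0, frozenset())
    for i in range(n):
        for q in range(Phat, ps[i] - 1, -1):
            wt, s = C[q - ps[i]]
            if wt + w[i] < C[q][0]:
                C[q] = (wt + w[i], s | {i})
    best = max(q for q in range(Phat + 1) if C[q][0] <= W)
    return C[best][1]
```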
Lemma 6. p · x′ ≥ (1 − ε) · opt.

Proof. Consider the optimal solution x*. Since ⌊p_i/K⌋ > p_i/K − 1,

p · x* − K p′ · x* = ∑_{i∈x*} (p_i − K⌊p_i/K⌋) ≤ ∑_{i∈x*} (p_i − K(p_i/K − 1)) = |x*| · K ≤ nK,

i.e., K p′ · x* ≥ p · x* − nK. Furthermore, since x′ is optimal for the scaled profits,

K p′ · x* ≤ K p′ · x′ = ∑_{i∈x′} K⌊p_i/K⌋ ≤ ∑_{i∈x′} K · (p_i/K) = p · x′.

Hence, using nK = εP and P ≤ opt (the most profitable item alone fits into the knapsack),

p · x′ ≥ K p′ · x* ≥ p · x* − nK = opt − εP ≥ (1 − ε) · opt. □

Lemma 7. The running time is O(n³/ε).

Proof. The running time of the dynamic programming dominates. Recall that this is O(nP̂′) where P̂′ = ⌊p′ · x⌋. We have

nP̂′ ≤ n · (n · max_i p′_i) = n² · ⌊P/K⌋ = n² · ⌊n/ε⌋ ≤ n³/ε. □
A Faster FPTAS for Knapsack

Simplifying assumptions:

1/ε ∈ ℕ: otherwise set ε := 1/⌈1/ε⌉.
An upper bound P̂ on opt is known: use the linear relaxation.
min_i p_i ≥ εP̂: treat small profits separately; for those items greedy works well. (Costs a factor O(log(1/ε)) time.)
A Faster FPTAS for Knapsack

M := 1/ε²;  K := ε²P̂ = P̂/M
p′_i := ⌊p_i / K⌋   // p′_i ∈ {1/ε, …, M}
The value of the optimal solution was ≤ P̂ and is now ≤ M.
C_j := {i ∈ 1..n : p′_i = j}
Remove all but the ⌊M/j⌋ lightest (smallest) items from each C_j.
Do dynamic programming on the remaining items.

Lemma 8. p · x′ ≥ (1 − ε) · opt.

Proof. Similar as before; note that |x| ≤ 1/ε for any solution (each remaining item has scaled profit at least 1/ε, and any solution has scaled profit at most M = 1/ε²).
Lemma 9. Running time O(n + Poly(1/ε)).

Proof.
preprocessing time: O(n)
profit values: M = 1/ε²
remaining items: ∑_{j=1/ε}^{M} ⌊M/j⌋ ≤ M ln M = O(log(1/ε)/ε²)
time for dynamic programming: O((#items) · M) = O(log(1/ε)/ε⁴)
The Best Known FPTAS

[Kellerer, Pferschy 04]

O(min{n · log(1/ε) + log²(1/ε)/ε³, …})

Fewer buckets C_j (nonuniform)
Sophisticated dynamic programming
Optimal Algorithm for the Knapsack Problem

The best algorithms work in near-linear time for almost all inputs! Both in a probabilistic and in a practical sense.

[Beier, Vöcking: An Experimental Study of Random Knapsack Problems. European Symposium on Algorithms, 2004]
[Kellerer, Pferschy, Pisinger: Knapsack Problems. Springer, 2004]

Main additional tricks:
reduce to core items with good profit density,
Horowitz-Sahni decomposition for dynamic programming
A Bicriteria View on Knapsack

[Figure: the Pareto curve of total profit P versus total weight C for the example items; dominated solutions are marked]

n items with weight w_i ∈ ℕ and profit p_i ∈ ℕ
Choose a subset x of items
Minimize total weight ∑_{i∈x} w_i
Maximize total profit ∑_{i∈x} p_i

Problem: How should we model the tradeoff?
Pareto Optimal Solutions

[Vilfredo Federico Pareto (born Wilfried Fritz), * 15 July 1848 in Paris, † 19 August 1923 in Céligny]

Solution x dominates solution x′ iff

p · x ≥ p · x′ ∧ w · x ≤ w · x′

and one of the inequalities is strict.

Solution x is Pareto optimal if

∄x′ : x′ dominates x

Natural question: Find all Pareto optimal solutions.

In General
d objectives

various problems

various objective functions

arbitrary mix of minimization and maximization


Enumerating only Pareto Optimal Solutions

[Nemhauser, Ullmann 69]

L := ⟨(0, 0)⟩  // invariant: L is sorted by weight and profit
for i := 1 to n do
  L′ := ⟨(w + w_i, p + p_i) : (w, p) ∈ L⟩  // time O(|L|)
  L := merge(L, L′)                         // time O(|L|)
  scan L and eliminate dominated solutions  // time O(|L|)

Now we can easily look up optimal solutions for various constraints on C or P.
We can prune L if a constraint is known beforehand.

Example

Items: (1, 10), (3, 20), (2, 15), (4, 20); prune at W = 5

L = ⟨(0,0)⟩
(1,10) → L′ = ⟨(1,10)⟩
merge  → L = ⟨(0,0), (1,10)⟩
(3,20) → L′ = ⟨(3,20), (4,30)⟩
merge  → L = ⟨(0,0), (1,10), (3,20), (4,30)⟩
(2,15) → L′ = ⟨(2,15), (3,25), (5,35)⟩
merge  → L = ⟨(0,0), (1,10), (2,15), (3,25), (4,30), (5,35)⟩
(4,20) → L′ = ⟨(4,20), (5,30)⟩
merge  → L = ⟨(0,0), (1,10), (2,15), (3,25), (4,30), (5,35)⟩
Horowitz-Sahni Decomposition

Partition the items into two sets A and B
Find all Pareto optimal solutions for A: L_A
Find all Pareto optimal solutions for B: L_B
The overall optimum is a combination of solutions from L_A and L_B and can be found in time O(|L_A| + |L_B|)
|L_A| ≤ 2^(n/2)

Question: What is the problem in generalizing to three (or more) subsets?
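One way the O(|L_A| + |L_B|) combination step can look: scan L_A by increasing weight while a pointer into L_B only moves backwards. This is a sketch under the assumption that both lists are Pareto optimal and sorted by weight (so profit increases with weight, and the pointer always rests on the most profitable entry of L_B that still fits):

```python
def combine(LA, LB, W):
    """Best total profit within capacity W using one (weight, profit)
    entry from each Pareto list."""
    best, j = 0, len(LB) - 1
    for wA, pA in LA:                # L_A in increasing weight
        if wA > W:
            break
        while j >= 0 and LB[j][0] > W - wA:
            j -= 1                   # pointer into L_B only moves left
        if j < 0:
            break                    # nothing in L_B fits anymore
        best = max(best, pA + LB[j][1])
    return best
```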
