Overview
LP Duality
Some Duality Theory
Approximation using LP Duality
Primal-Dual Algorithms
Examples
Part 1 - LP Duality
Lower Bound

Question: How can we find a good lower bound for the optimum of

    min  7x1 + x2 + 5x3
    s.t. x1 - x2 + 3x3 ≥ 10
         5x1 + 2x2 - x3 ≥ 6
         x1, x2, x3 ≥ 0

A lower bound:  7x1 + x2 + 5x3 ≥ x1 - x2 + 3x3 ≥ 10
A better lower bound:  7x1 + x2 + 5x3 ≥ (x1 - x2 + 3x3) + (5x1 + 2x2 - x3) ≥ 10 + 6 = 16
Lower Bound

Assign a non-negative coefficient y_i to every primal inequality such that

    y1·(x1 - x2 + 3x3) + y2·(5x1 + 2x2 - x3) ≤ 7x1 + x2 + 5x3

(comparing coefficients term by term). The resulting lower bound is 10y1 + 6y2.
LP Duality

The problem of finding the best lower bound can be formulated as a linear program.

LP Duality

(P)  min  Σ_{j=1}^n c_j x_j
     s.t. Σ_{j=1}^n a_ij x_j ≥ b_i   ∀i
          x_j ≥ 0                    ∀j

(D)  max  Σ_{i=1}^m b_i y_i
     s.t. Σ_{i=1}^m a_ij y_i ≤ c_j   ∀j
          y_i ≥ 0                    ∀i

Or, in a compact formulation:

(P)  min  c^T x        (D)  max  b^T y
     s.t. Ax ≥ b            s.t. A^T y ≤ c
          x ≥ 0                  y ≥ 0

For every feasible x and y: c^T x ≥ b^T y. Thus, Opt(Primal) ≥ Opt(Dual).
The dual of the dual is the primal.
Example

Primal:
    min  7x1 + x2 + 5x3
    s.t. x1 - x2 + 3x3 ≥ 10
         5x1 + 2x2 - x3 ≥ 6
         x1, x2, x3 ≥ 0

Dual:
    max  10y1 + 6y2
    s.t. y1 + 5y2 ≤ 7
         -y1 + 2y2 ≤ 1
         3y1 - y2 ≤ 5
         y1, y2 ≥ 0

x = (7/4, 0, 11/4) and y = (2, 1) are feasible solutions, and

    7·(7/4) + 0 + 5·(11/4) = 26 = 10·2 + 6·1.

Since the primal and dual objective values are equal, weak duality implies that x and y are both optimal.
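The optimality claim can be checked by plain arithmetic; a minimal sketch in Python (exact rationals, no LP solver assumed):

```python
from fractions import Fraction as F

# Primal: min 7x1 + x2 + 5x3  s.t.  x1 - x2 + 3x3 >= 10,  5x1 + 2x2 - x3 >= 6,  x >= 0
x1, x2, x3 = F(7, 4), F(0), F(11, 4)
# Dual:   max 10y1 + 6y2      s.t.  y1 + 5y2 <= 7,  -y1 + 2y2 <= 1,  3y1 - y2 <= 5,  y >= 0
y1, y2 = F(2), F(1)

# Feasibility of both solutions.
assert x1 - x2 + 3*x3 >= 10 and 5*x1 + 2*x2 - x3 >= 6 and min(x1, x2, x3) >= 0
assert y1 + 5*y2 <= 7 and -y1 + 2*y2 <= 1 and 3*y1 - y2 <= 5 and min(y1, y2) >= 0

primal_obj = 7*x1 + x2 + 5*x3   # 49/4 + 55/4 = 26
dual_obj = 10*y1 + 6*y2         # 20 + 6 = 26

# Equal objectives plus weak duality imply both are optimal.
assert primal_obj == dual_obj == 26
```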
Example: Max-Flow

Primal (path formulation):
    max  Σ_i f_i
    s.t. Σ_{i : e∈P_i} f_i ≤ c_e     ∀e ∈ E
         f_i ≥ 0                     ∀P_i ∈ Path(s, t)

Dual (fractional cut):
    min  Σ_{e∈E} c_e d_e
    s.t. Σ_{e∈P_i} d_e ≥ 1           ∀P_i ∈ Path(s, t)
         d_e ≥ 0                     ∀e ∈ E

Here f_i is the flow pushed along path P_i, and d_e is a fractional "length" on edge e; weak duality says every flow value is at most the capacity of every fractional cut.
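Weak duality for this primal-dual pair can be sanity-checked on a tiny network; the graph, paths, and capacities below are made-up illustration data:

```python
# Toy network: s -> a -> t and s -> b -> t, unit capacities.
cap = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 1, ("b", "t"): 1}
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]

f = [1, 1]                                       # primal: one unit on each path
d = {e: (1 if e[0] == "s" else 0) for e in cap}  # dual: "cut" the two source edges

# Primal feasibility: total flow through each edge respects its capacity.
for e in cap:
    assert sum(fi for fi, p in zip(f, paths) if e in p) <= cap[e]
# Dual feasibility: every s-t path has total length >= 1.
for p in paths:
    assert sum(d[e] for e in p) >= 1

flow_value = sum(f)                           # value of the flow: 2
cut_value = sum(cap[e] * d[e] for e in cap)   # capacity of the cut: 2
assert flow_value <= cut_value                # weak duality
```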
LP-duality Theorem

Theorem: Opt(primal) is finite ⟺ Opt(dual) is finite.
If x and y are optimal, then

    Σ_{j=1}^n c_j x_j = Σ_{i=1}^m b_i y_i
Weak Duality

For any feasible x and y:

    Σ_{j=1}^n c_j x_j ≥ Σ_{i=1}^m b_i y_i

Proof:

    Σ_{j=1}^n c_j x_j ≥ Σ_{j=1}^n (Σ_{i=1}^m a_ij y_i)·x_j = Σ_{i=1}^m (Σ_{j=1}^n a_ij x_j)·y_i ≥ Σ_{i=1}^m b_i y_i

The first inequality uses dual feasibility (Σ_i a_ij y_i ≤ c_j) and x ≥ 0; the last uses primal feasibility (Σ_j a_ij x_j ≥ b_i) and y ≥ 0.
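The inequality chain is easy to trace numerically; a minimal sketch on the running example (the feasible points x and y are chosen arbitrarily for illustration):

```python
# Check the weak-duality chain
#   c.x >= sum_j (sum_i a_ij y_i) x_j = sum_i (sum_j a_ij x_j) y_i >= b.y
A = [[1, -1, 3],     # row i = coefficients of primal constraint i
     [5,  2, -1]]
b = [10, 6]
c = [7, 1, 5]
x = [3.0, 0.0, 4.0]  # some primal-feasible point
y = [1.0, 1.0]       # some dual-feasible point

n, m = len(c), len(b)
assert all(sum(A[i][j]*x[j] for j in range(n)) >= b[i] for i in range(m))  # Ax >= b
assert all(sum(A[i][j]*y[i] for i in range(m)) <= c[j] for j in range(n))  # A^T y <= c

lhs = sum(c[j]*x[j] for j in range(n))                                  # c.x = 41
mid = sum(sum(A[i][j]*y[i] for i in range(m)) * x[j] for j in range(n))
mid2 = sum(sum(A[i][j]*x[j] for j in range(n)) * y[i] for i in range(m))
rhs = sum(b[i]*y[i] for i in range(m))                                  # b.y = 16
assert lhs >= mid and abs(mid - mid2) < 1e-9 and mid2 >= rhs
```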
If x and y are both optimal, then by the LP-duality Theorem the chain above holds with equality:

    Σ_{j=1}^n c_j x_j = Σ_{j=1}^n (Σ_{i=1}^m a_ij y_i)·x_j = Σ_{i=1}^m (Σ_{j=1}^n a_ij x_j)·y_i = Σ_{i=1}^m b_i y_i

Thus, Σ_{j=1}^n (c_j - Σ_{i=1}^m a_ij y_i)·x_j = 0. Since every term is non-negative (x_j ≥ 0 and c_j - Σ_i a_ij y_i ≥ 0), each term must be zero:

    x_j > 0  ⟹  Σ_{i=1}^m a_ij y_i = c_j

Similar arguments work for the dual conditions. For the other direction, read the proof upwards.
LP Duality - Summary

(P)  min  Σ_{j=1}^n c_j x_j
     s.t. Σ_{j=1}^n a_ij x_j ≥ b_i   ∀i
          x_j ≥ 0                    ∀j

(D)  max  Σ_{i=1}^m b_i y_i
     s.t. Σ_{i=1}^m a_ij y_i ≤ c_j   ∀j
          y_i ≥ 0                    ∀i

Weak Duality Theorem: b^T y ≤ c^T x

Complementary Slackness Conditions: x and y are optimal ⟺
    Primal: ∀j, x_j > 0 ⟹ Σ_{i=1}^m a_ij y_i = c_j
    Dual:   ∀i, y_i > 0 ⟹ Σ_{j=1}^n a_ij x_j = b_i
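For the earlier example, x = (7/4, 0, 11/4) and y = (2, 1), the complementary slackness conditions can be verified directly; a small sketch:

```python
from fractions import Fraction as F

A = [[1, -1, 3], [5, 2, -1]]
b = [10, 6]
c = [7, 1, 5]
x = [F(7, 4), F(0), F(11, 4)]
y = [F(2), F(1)]

# Primal conditions: x_j > 0  =>  dual constraint j is tight.
for j in range(3):
    if x[j] > 0:
        assert sum(A[i][j] * y[i] for i in range(2)) == c[j]

# Dual conditions: y_i > 0  =>  primal constraint i is tight.
for i in range(2):
    if y[i] > 0:
        assert sum(A[i][j] * x[j] for j in range(3)) == b[i]
```

All four conditions hold (x2 = 0, so its dual constraint need not be tight), certifying optimality of both solutions.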
Integer Programming

    min  c^T x
    s.t. Ax ≥ b
         x ≥ 0
         x ∈ N^n

NP-hard. Approaches: Branch and Bound; LP relaxation (drop the constraint x ∈ N^n).

Observation: Opt(LP) ≤ Opt(IP).
Definition: Integrality Gap = Opt(IP)/Opt(LP)
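A classic illustration of the gap is unit-weight vertex cover on a triangle: the fractional point x = (1/2, 1/2, 1/2) is LP-feasible with value 3/2, while any integral cover needs 2 vertices. A sketch (brute-force IP, fractional value bounded by exhibiting the feasible point):

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2)]  # triangle

# IP optimum by brute force: smallest vertex set covering all edges.
ip_opt = min(
    len(U)
    for k in range(4)
    for U in combinations(range(3), k)
    if all(u in U or v in U for (u, v) in edges)
)

# x_u = 1/2 for every vertex satisfies x_u + x_v >= 1 on each edge,
# so Opt(LP) <= 3/2; hence the integrality gap is at least 2/(3/2) = 4/3.
lp_upper = 3 * 0.5
assert ip_opt == 2
assert ip_opt / lp_upper >= 4 / 3
```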
Approximation using LP Duality

(IP) min  Σ_{j=1}^n c_j x_j
     s.t. Σ_{j=1}^n a_ij x_j ≥ b_i   ∀i
          x_j ∈ {0, 1}               ∀j

Idea: Find an integral primal solution x and a feasible dual solution y such that:

    c^T x ≤ r · b^T y

By the Weak Duality Theorem:

    c^T x ≤ r·b^T y ≤ r·Opt(LP) ≤ r·Opt(IP)

so x is an r-approximation.
Question: How do we find such solutions?
Relaxed Complementary Slackness

Suppose x (integral, primal feasible) and y (dual feasible) satisfy:

    Primal:       x_j > 0  ⟹  Σ_{i=1}^m a_ij y_i = c_j
    Relaxed Dual: y_i > 0  ⟹  Σ_{j=1}^n a_ij x_j ≤ r·b_i

Then:

    Σ_{j=1}^n c_j x_j = Σ_{j=1}^n (Σ_{i=1}^m a_ij y_i)·x_j = Σ_{i=1}^m (Σ_{j=1}^n a_ij x_j)·y_i ≤(RD) r·Σ_{i=1}^m b_i y_i

so x is an r-approximation. (The step marked (RD) uses the Relaxed Dual condition.)
Vertex Cover

Definition:
    Instance: An undirected graph G = (V, E), a weight function c : V → R+
    Solution: U ⊆ V such that every edge has at least one endpoint in U
    Measure: Σ_{u∈U} c(u)
Vertex Cover

x_u = 1 iff u is taken into the cover.

IP formulation:
    (VC) min  Σ_{u∈V} c(u)·x_u
         s.t. x_u + x_v ≥ 1    ∀(u, v) ∈ E
              x_u ∈ {0, 1}     ∀u ∈ V

(LP-relaxation: replace x_u ∈ {0, 1} by x_u ≥ 0.)

The Dual:
    max  Σ_{e∈E} y_e
    s.t. Σ_{e : u∈e} y_e ≤ c(u)    ∀u ∈ V
         y_e ≥ 0                   ∀e ∈ E
2-approximation Algorithm

1. U ← ∅; y ← 0
2. For each edge e = (u, v) ∈ E:
3.     y_e ← min{ c(u) - Σ_{e' : u∈e'} y_{e'},  c(v) - Σ_{e' : v∈e'} y_{e'} }
4.     U ← U ∪ argmin{ c(u) - Σ_{e' : u∈e'} y_{e'},  c(v) - Σ_{e' : v∈e'} y_{e'} }
5. Output U

Remark: We can construct U at the end.
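The algorithm above is short to implement; a minimal sketch (the graph and weights are a made-up instance, and the optimum is brute-forced only to check the guarantee):

```python
from itertools import combinations

def primal_dual_vc(vertices, edges, c):
    """Primal-dual 2-approximation for weighted vertex cover: raise y_e on
    each uncovered edge until an endpoint's dual constraint becomes tight,
    then add that endpoint to the cover."""
    paid = {u: 0 for u in vertices}   # paid[u] = sum of y_e over edges at u
    U = set()
    for (u, v) in edges:
        if u in U or v in U:          # edge already covered, y_e stays 0
            continue
        ye = min(c[u] - paid[u], c[v] - paid[v])
        paid[u] += ye
        paid[v] += ye
        U.add(u if paid[u] == c[u] else v)
    return U

# Hypothetical instance: a 4-cycle with vertex weights.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
c = {0: 2, 1: 3, 2: 1, 3: 4}
U = primal_dual_vc(vertices, edges, c)

# Brute-force optimum for comparison.
opt = min(sum(c[u] for u in S)
          for k in range(len(vertices) + 1)
          for S in combinations(vertices, k)
          if all(u in S or v in S for (u, v) in edges))

assert all(u in U or v in U for (u, v) in edges)  # U is a cover
assert sum(c[u] for u in U) <= 2 * opt            # 2-approximation
```

On this instance the bound is tight: the algorithm pays 6 while the optimum cover {0, 2} costs 3.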
Analysis:

    Σ_{u∈U} c(u) = Σ_{u∈U} Σ_{e : u∈e} y_e = Σ_{e∈E} |e ∩ U|·y_e ≤ 2·Σ_{e∈E} y_e

Thus, x is a 2-approximation. x and y satisfy the relaxed complementary slackness conditions with r = 2:

    Primal:       x_u = 1  ⟹  Σ_{e : u∈e} y_e = c(u)
    Relaxed Dual: y_e > 0  ⟹  1 ≤ x_u + x_v ≤ 2
Generalized Vertex Cover

GVC is a version of VC in which you also pay for uncovered edges:

    cost(U) = Σ_{u∈U} c(u) + Σ_{e : e∩U=∅} c(e)

When c(e) = ∞ for every e ∈ E we get VC.
(GVC)
    min  Σ_{u∈V} c(u)·x_u + Σ_{e∈E} c(e)·x_e
    s.t. x_u + x_v + x_e ≥ 1    ∀e = (u, v) ∈ E
         x_u ∈ {0, 1}           ∀u ∈ V
         x_e ∈ {0, 1}           ∀e ∈ E

(LP-relaxation: x_u ≥ 0, x_e ≥ 0)

The Dual:
    max  Σ_{e∈E} y_e
    s.t. Σ_{e : u∈e} y_e ≤ c(u)    ∀u ∈ V
         y_e ≤ c(e)                ∀e ∈ E
         y_e ≥ 0                   ∀e ∈ E
2-approximation Algorithm

Observation: For any minimal solution x:  1 ≤ x_u + x_v + x_e ≤ 2

Algorithm:
1. U ← ∅; y ← 0
2. For each edge e = (u, v) ∈ E:
3.     y_e ← min{ c(u) - Σ_{e' : u∈e'} y_{e'},  c(v) - Σ_{e' : v∈e'} y_{e'},  c(e) }
4. Output U = { u : c(u) = Σ_{e : u∈e} y_e }

Remarks: U need not be minimal, but the Observation still holds. We implicitly pay for uncovered edges.
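A sketch of this variant, on a hypothetical instance where a cheap edge penalty makes leaving an edge uncovered attractive (all names and weights below are illustration data):

```python
def primal_dual_gvc(vertices, edges, c_vertex, c_edge):
    """Primal-dual 2-approximation sketch for generalized vertex cover:
    raise y_e until a vertex constraint or the edge's own bound
    y_e <= c(e) becomes tight; output all tight vertices."""
    paid = {u: 0 for u in vertices}
    y = {}
    for e in edges:
        u, v = e
        y[e] = min(c_vertex[u] - paid[u], c_vertex[v] - paid[v], c_edge[e])
        paid[u] += y[e]
        paid[v] += y[e]
    U = {u for u in vertices if paid[u] == c_vertex[u]}
    cost = (sum(c_vertex[u] for u in U)
            + sum(c_edge[e] for e in edges if not (e[0] in U or e[1] in U)))
    return U, cost, y

# Hypothetical instance: a path on three vertices.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2)]
c_vertex = {0: 5, 1: 4, 2: 5}
c_edge = {(0, 1): 1, (1, 2): 3}
U, cost, y = primal_dual_gvc(vertices, edges, c_vertex, c_edge)
assert cost <= 2 * sum(y.values())   # within 2x the dual lower bound
```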
Analysis:

    Σ_{u∈U} c(u) + Σ_{e : e∩U=∅} c(e)
  = Σ_{u∈U} Σ_{e : u∈e} y_e + Σ_{e : e∩U=∅} y_e     (tight vertices; an uncovered edge has y_e = c(e))
  = Σ_{e∈E} |e ∩ U|·y_e + Σ_{e : e∩U=∅} y_e
  ≤(RD) 2·Σ_{e∈E} y_e

The last inequality holds since each edge contributes at most twice: |e ∩ U| ≤ 2, and if e ∩ U = ∅ the edge contributes y_e once. The relaxed complementary slackness conditions hold with r = 2:

    Primal:       x_u = 1  ⟹  Σ_{e : u∈e} y_e = c(u)
                  x_e = 1  ⟹  y_e = c(e)
    Relaxed Dual: y_e > 0  ⟹  1 ≤ x_u + x_v + x_e ≤ 2
(P)  min  Σ_{t∈V∪E} c(t)·x_t
     s.t. x_u + x_v + x_e ≥ 1    ∀e = (u, v) ∈ E
          x_u ≥ 0, x_e ≥ 0

(D)  max  Σ_{e∈E} y_e
     s.t. Σ_{e : u∈e} y_e ≤ c(u)    ∀u ∈ V
          y_e ≤ c(e)                ∀e ∈ E
          y_e ≥ 0                   ∀e ∈ E
Generalized Hitting Set

The cost of a solution U is

    Σ_{u∈U} c(u) + Σ_{s : s∩U=∅} c(s)

Remark: We must pay for sets that are not hit by U.
Example: U = {1, 2, 3, 4, 5}, S = {{1, 2, 3}, {2, 4}, {3, 5}},
∀u ∈ U: c(u) = 3;  ∀s ∈ S: c(s) = 2.
    min  Σ_{u∈U} c(u)·x_u + Σ_{s∈S} c(s)·x_s
    s.t. Σ_{u∈s} x_u + x_s ≥ 1    ∀s ∈ S
         x_u, x_s ∈ {0, 1}        ∀u, s

(LP-relaxation: x_u, x_s ≥ 0)

The Dual:
    max  Σ_{s∈S} y_s
    s.t. Σ_{s : u∈s} y_s ≤ c(u)    ∀u ∈ U
         y_s ≤ c(s)                ∀s ∈ S
         y_s ≥ 0                   ∀s ∈ S
Δ-approximation Algorithm

Let Δ = max_{s∈S} |s|.
Observation: For any minimal solution x:  1 ≤ Σ_{u∈s} x_u + x_s ≤ Δ

Algorithm:
1. U ← ∅; y ← 0
2. For each set s ∈ S:
3.     y_s ← min( { c(u) - Σ_{s' : u∈s'} y_{s'} : u ∈ s } ∪ { c(s) } )
4. Return U = { u : Σ_{s : u∈s} y_s = c(u) }
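A sketch of the algorithm, run on the slides' example instance (c(u) = 3 for every element, c(s) = 2 for every set); the function name and bookkeeping are my own:

```python
def primal_dual_hitting_set(ground, sets, c_elem, c_set):
    """Primal-dual sketch for generalized hitting set: raise y_s until an
    element constraint or the set's own bound y_s <= c(s) becomes tight,
    then return the tight elements. Approximation factor: max set size."""
    paid = {u: 0 for u in ground}   # paid[u] = sum of y_s over sets containing u
    y = {}
    for i, s in enumerate(sets):
        y[i] = min([c_elem[u] - paid[u] for u in s] + [c_set[i]])
        for u in s:
            paid[u] += y[i]
    U = {u for u in ground if paid[u] == c_elem[u]}
    cost = (sum(c_elem[u] for u in U)
            + sum(c_set[i] for i, s in enumerate(sets) if not (set(s) & U)))
    return U, cost, y

ground = [1, 2, 3, 4, 5]
sets = [[1, 2, 3], [2, 4], [3, 5]]
c_elem = {u: 3 for u in ground}
c_set = {i: 2 for i in range(len(sets))}
delta = max(len(s) for s in sets)   # here delta = 3
U, cost, y = primal_dual_hitting_set(ground, sets, c_elem, c_set)
assert cost <= delta * sum(y.values())   # within delta times the dual bound
```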
Analysis:

    Σ_{u∈U} c(u) + Σ_{s : s∩U=∅} c(s)
  = Σ_{u∈U} Σ_{s : u∈s} y_s + Σ_{s : s∩U=∅} y_s     (tight elements; an unhit set has y_s = c(s))
  = Σ_{s∈S} |s ∩ U|·y_s + Σ_{s : s∩U=∅} y_s
  ≤(RD) Δ·Σ_{s∈S} y_s

The relaxed complementary slackness conditions hold with r = Δ:

    Primal:       x_u = 1  ⟹  Σ_{s : u∈s} y_s = c(u)
                  x_s = 1  ⟹  y_s = c(s)
    Relaxed Dual: y_s > 0  ⟹  1 ≤ Σ_{u∈s} x_u + x_s ≤ Δ
(P)  min  Σ_{t∈U∪S} c(t)·x_t
     s.t. Σ_{u∈s} x_u + x_s ≥ 1    ∀s ∈ S
          x_u, x_s ≥ 0

(D)  max  Σ_{s∈S} y_s
     s.t. Σ_{s : u∈s} y_s ≤ c(u)    ∀u ∈ U
          y_s ≤ c(s)                ∀s ∈ S
          y_s ≥ 0                   ∀s ∈ S
Primal-Dual Algorithms

(P)  min  Σ_{j=1}^n c_j x_j
     s.t. Σ_{j=1}^n a_ij x_j ≥ b_i   ∀i
          x_j ≥ 0                    ∀j

(D)  max  Σ_{i=1}^m b_i y_i
     s.t. Σ_{i=1}^m a_ij y_i ≤ c_j   ∀j
          y_i ≥ 0                    ∀i

An advancement step: find a vector Δ = (Δ_1, ..., Δ_m) such that:
    y + Δ is feasible
    y + Δ is better than y, i.e., b^T(y + Δ) - b^T y = b^T Δ ≥ 0
    For any i, if Δ_i > 0, then for any solution x: Σ_{j=1}^n a_ij x_j ≤ r·b_i