
Lagrangean Relaxation: A Short Course

Monique GUIGNARD The Wharton School University of Pennsylvania

VII CLAIO Santiago, Chile July 1994

FRANCORO Mons, Belgium June 1995

INFORMS, Dallas October 1997

2/8/2007

latest revision 8/98

LAGRANGEAN RELAXATION

Notation
Definition of Relaxations for Optimization Problems
Lagrangean Relaxation For Linear Integer Problems
  Geometric Interpretation
  Problem Splitting Tricks
  Integer Linearization Principle
Characteristics of the Lagrangean Function
Primal and Dual Methods to Solve Relaxation Duals
  Subgradient Optimization
  Constraint Generation Method
  Two-Phase Hybrid Method
  Primal Relaxation Method
Extensions
  Lagrangean Decomposition
  Lagrangean Substitution
  Primal Relaxation For Nonlinear Integer Problems
Lagrangean Heuristics
Two Examples
Conclusion


Notation

If (P) is an optimization problem, we use the following notation:

FS(P)   the set of feasible solutions of (P)
OS(P)   the set of optimal solutions of (P)
v(P)    the optimal value of (P)
Max     either Maximize (problem) or Maximum (value) (see context)
Min     either Minimize (problem) or Minimum (value) (see context)


Definition of Relaxations for Optimization Problems


(P)   Max { f(x) | x ∈ X }
(RP)  Max { g(x) | x ∈ Y }

[Figure: Y contains X, and g(x) ≥ f(x) on X, so v(RP) ≥ v(P).]

(RP) is a relaxation of (P) if
(i)  Y ⊇ X, and
(ii) ∀x ∈ X, g(x) ≥ f(x).
It follows that v(RP) ≥ v(P).
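As a concrete illustration (not from the course), the LP relaxation of a tiny 0-1 knapsack is a relaxation in exactly this sense, with g = f and Y = [0,1]³ ⊇ X; the data below are illustrative:

```python
# A minimal sketch: the LP relaxation of a tiny 0-1 knapsack as an
# instance of (RP), with g = f and Y = [0,1]^3 containing
# X = { x in {0,1}^3 | w·x <= b }.
from itertools import product

f = [6, 5, 4]                  # objective coefficients
w = [3, 2, 2]                  # knapsack weights
b = 3                          # capacity

# (P): Max { f·x | x in X }, solved by enumeration
vP = max(sum(fi * xi for fi, xi in zip(f, x))
         for x in product([0, 1], repeat=3)
         if sum(wi * xi for wi, xi in zip(w, x)) <= b)

# (RP): same objective over Y -- for a single-constraint continuous
# knapsack, greedy filling by profit/weight ratio is optimal
order = sorted(range(3), key=lambda i: f[i] / w[i], reverse=True)
cap, vRP = b, 0.0
for i in order:
    take = min(1.0, cap / w[i])
    vRP += take * f[i]
    cap -= take * w[i]

print(vP, vRP)                 # 6 7.0 : indeed v(RP) >= v(P)
```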


Lagrangean Relaxation For Linear Integer Problems

(Held and Karp 1970)

(P)  Max_x { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

where Ax ≤ b are the complicating constraints, Cx ≤ d the OK constraints, and x ∈ X the integer constraints.

If one knows how to solve Max_x { fx | Cx ≤ d, x ∈ X }, one can construct a Lagrangean relaxation of (P): let λ ≥ 0 be a vector of multipliers, then let (LRλ) be the problem

(LRλ)  Max_x { fx + λ(b − Ax) | Cx ≤ d, x ∈ X }.

(LRλ) is a relaxation of (P):
(i)  FS(LRλ) ⊇ FS(P), and
(ii) ∀x ∈ FS(P), fx + λ(b − Ax) ≥ fx,
therefore v(LRλ) ≥ v(P), for all λ ≥ 0.

For any λ1, λ2 ≥ 0:  v(P) ≤ Min_{λ≥0} v(LRλ) ≤ v(LRλ1), v(LRλ2).

(LR)  Min_{λ≥0} v(LRλ)  is called the Lagrangean dual.
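A minimal sketch (illustrative data, not from the course): evaluating v(LRλ) for one fixed λ ≥ 0 by enumerating the kept set { x ∈ X | Cx ≤ d } and comparing with v(P):

```python
# Dualize Ax <= b, keep Cx <= d and x in {0,1}^3; any lam >= 0 yields
# a valid upper bound v(LR_lam) >= v(P).
import numpy as np
from itertools import product

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])   # dualized: Ax <= b
C = np.array([[1.0, 1.0, 1.0]]); d = np.array([2.0])   # kept:     Cx <= d
X = [np.array(x, float) for x in product([0, 1], repeat=3)]
kept = [x for x in X if np.all(C @ x <= d)]

vP = max(f @ x for x in kept if np.all(A @ x <= b))    # v(P) by enumeration

lam = np.array([1.0])                                  # any lam >= 0 works
vLR = max(f @ x + lam @ (b - A @ x) for x in kept)     # v(LR_lam)
print(vP, vLR)                                         # 9.0 10.0
```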


Geometric Interpretation

(Geoffrion 1974)

The Lagrangean dual (LR) is equivalent to the primal relaxation (PR), i.e.

(PR)  Max_x { fx | Ax ≤ b, x ∈ Co{ x ∈ X | Cx ≤ d } },

in the sense that v(LR) = v(PR)
(proof based on LP duality).

[Figure: the relaxed region {x | Ax ≤ b} intersected with the kept region Co{x ∈ X | Cx ≤ d} ⊆ {x | Cx ≤ d}; in general v(P) ≤ v(PR) ≤ v(LP).]

If Co{x ∈ X | Cx ≤ d} = {x | Cx ≤ d}, then v(P) ≤ v(PR) = v(LR) = v(LP). One says that (LRλ) has the Integrality Property, and in that case the Lagrangean relaxation bound is equal to the LP bound.

If Co{x ∈ X | Cx ≤ d} ⊊ {x | Cx ≤ d}, then v(P) ≤ v(PR) = v(LR) ≤ v(LP), and the Lagrangean bound can be strictly better than the LP bound.


Problem Splitting Tricks


(1) Isolate a well-known subproblem and dualize the other constraints:

(P)  Max_x { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

RELAX the complicating constraints Ax ≤ b; KEEP the well-known subproblem Cx ≤ d and the integer constraints x ∈ X.

(2) Dualize linking constraints:

Max_{x,y} { fx + gy | Ax ≤ b, x ∈ X  (KEEP 1);  Cy ≤ d, y ∈ Y  (KEEP 2);  Ex + Fy ≤ h  (RELAX) }

(3) If there are two interesting subproblems with common variables, split these variables first and then dualize the copy constraint:

(P)  Max_x { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

is equivalent to

(P')  Max_{x,y} { fx | Ax ≤ b, x ∈ X  (KEEP 1);  Cy ≤ d, y ∈ X  (KEEP 2);  x = y  (RELAX) }.


Integer Linearization Principle

(Geoffrion 1974; Geoffrion and McBride 1978)

(LRλ)  Max { fx + gy | A_i x_i ≤ p_i y_i, all i;  By ≤ b;  x ∈ X;  y_i = 0 or 1, all i }

where the constraints A_i x_i ≤ p_i y_i separate, one per i, and By ≤ b is over y only. X = Π_i X_i may contain some integrality requirement; x_i may be a vector.

(i) Ignore at first the constraints over integer variables only, By ≤ b.

(ii) The problem separates into one problem for each i:
(LR_i)  Max { f_i x_i + g_i y_i | A_i x_i ≤ p_i y_i, x_i ∈ X_i, y_i = 0 or 1 },
where y_i plays the role of a 0-1 parameter:
for y_i = 0, x_i = 0 and f_i x_i + g_i y_i = 0;
for y_i = 1, solve (LR_i | y_i = 1):  v_i = Max { f_i x_i + g_i | A_i x_i ≤ p_i, x_i ∈ X_i }.
v_i is the contribution of y_i = 1 to the objective function.

(iii) Replace v(LRλ) by v(PLλ), where (PLλ) is
(PLλ)  Max { Σ_i v_i y_i | By ≤ b, y_i = 0 or 1, all i }.

This process makes use of the integrality of the variables y_i, and therefore even in cases where (LR_i | y_i = 1) and (PLλ) have the Integrality Property, it is possible to have
v(LR) = Min_λ v(PLλ) = Min_λ v(LRλ) < v(LP).


[Figure: v(LR_i) as a function of y_i, relaxed over 0 ≤ y_i ≤ 1 versus restricted to y_i = 0 or 1.]

Example: consider the capacitated p-median problem:

Min_{x,y}  Σ_i Σ_j c_ij x_ij + Σ_i f_i y_i
s.t.  Σ_i x_ij = 1, all j              (dualize, with multipliers u_j)
      Σ_j d_j x_ij ≤ a_i y_i, all i    (kept: gives v_i via (LR_i,u))
      x_ij ≤ y_i, all i, j             (kept: gives v_i via (LR_i,u))
      Σ_i y_i ≤ p                      (ignore temporarily)
      x_ij ≥ 0, y_i = 0 or 1, all i, j.

One computes a strong bound v(LR) = Max_u v(LR_u) = Max_u v(PL_u), where

v(PL_u) = Min_y { Σ_i v_i y_i | Σ_i y_i ≤ p, y_i = 0 or 1, all i } − Σ_j u_j
(a trivial knapsack problem in y), and

v_i = v(LR_i,u | y_i = 1) = Min_x { Σ_j (c_ij + u_j) x_ij + f_i | Σ_j d_j x_ij ≤ a_i, 0 ≤ x_ij ≤ 1 }
(a continuous knapsack problem in x).
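A minimal sketch of one bound evaluation v(LR_u) for this relaxation; the function name, the random data, and the choice of u are illustrative assumptions, not from the course:

```python
# One continuous knapsack per plant i, then the trivial knapsack in y:
# open the (at most p) plants with the most negative v_i.
import numpy as np

def lagrangean_bound(c, f, d, a, p, u):
    """v(LR_u) for the capacitated p-median relaxation above (min problem)."""
    m, n = c.shape                       # m plants, n customers
    v = np.empty(m)
    for i in range(m):
        cost = c[i] + u                  # reduced costs (c_ij + u_j)
        order = np.argsort(cost / d)     # most negative per unit demand first
        cap, val = a[i], f[i]
        for j in order:
            if cost[j] >= 0 or cap <= 0:
                break                    # only negative costs can help
            x = min(1.0, cap / d[j])     # fractional shipment allowed
            val += x * cost[j]
            cap -= x * d[j]
        v[i] = val
    opened = np.sort(v)[:p]              # trivial knapsack: p best plants
    return opened[opened < 0].sum() - u.sum()

rng = np.random.default_rng(0)           # illustrative instance
m, n = 5, 12
c = rng.uniform(1.0, 10.0, (m, n))
f = rng.uniform(5.0, 15.0, m)
d = rng.uniform(1.0, 4.0, n)
a = np.full(m, d.sum() / 2)
u = np.full(n, -8.0)                     # multipliers for sum_i x_ij = 1
print(lagrangean_bound(c, f, d, a, p=3, u=u))   # a valid lower bound on v(P)
```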


Characteristics of the Lagrangean Function

For problem

(P)  Max_x { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

with Ax ≤ b the complicating constraints, Cx ≤ d the kept constraints, and x ∈ X the integer constraints, one constructs a Lagrangean relaxation

(LRλ)  Max_x { fx + λ(b − Ax) | Cx ≤ d, x ∈ X }  (assume one can solve it)

and the corresponding Lagrangean dual (LR)  Min_{λ≥0} v(LRλ).

The Lagrangean function is z(λ) = v(LRλ). It is an implicit function of λ.

Let { x ∈ X | Cx ≤ d } = { x^1, x^2, ..., x^K }; then

v(LRλ) = Max_x { fx + λ(b − Ax) | Cx ≤ d, x ∈ X } = Max_{k=1,...,K} { fx^k + λ(b − Ax^k) }

and z(λ) is the upper envelope of a finite family of linear functions of λ, and is therefore a convex function of λ.


(characteristics of the Lagrangean function, cont 2)

[Figure: z(λ) as the upper envelope of the linear functions z = fx^k + λ(b − Ax^k), k = 1, ..., K, with intercepts fx^1, fx^2, ..., fx^k.]

(LR)  Min_{λ≥0} v(LRλ) = Min_{λ≥0} z(λ) = Min_{λ≥0} Max_{k=1,...,K} { fx^k + λ(b − Ax^k) }
      = Min_{λ≥0, η} { η | η ≥ fx^k + λ(b − Ax^k), k = 1, ..., K }.

v(LR) is the minimum of a piecewise linear convex function, known only implicitly. The function z(λ) has breakpoints where it is not differentiable. Its level sets C(η) = { λ ≥ 0 | z(λ) ≤ η }, η a scalar, are convex polyhedral sets.
(characteristics of the Lagrangean function, cont. 3)


One is looking for the smallest possible value of η for which C(η) is nonempty. Let λ* be a minimizer of z(λ), and let η* = z(λ*).

[Figure: in the space of λ, the level set C(η_k), the level hyperplane H_k, the subgradient direction −s^k = −(b − Ax^k), and the region where x^k is optimal for (LRλ).]

Let λ^k be a current "guess" at λ*, and let η_k = z(λ^k). Let H_k = { λ | fx^k + λ(b − Ax^k) = η_k } be a level hyperplane passing through λ^k. H_k defines part of the boundary of C(η_k). If z(λ) is differentiable at λ^k, it has a gradient ∇z(λ^k) = (b − Ax^k)^t ⊥ H_k. If z(λ) is nondifferentiable at λ^k, it has a subgradient s^k = (b − Ax^k)^t ⊥ H_k.


Primal and Dual Methods to Solve Relaxation Duals

1. Subgradient Method

(Held and Karp, 1970)

This is an iterative method in which steps are taken along the negative of a subgradient of z(λ). Let λ^k be the current iterate, and x^k an optimal solution of (LRλ^k). Then s^k = (b − Ax^k)^t is a subgradient of z(λ) at λ^k. If λ* is an optimal solution of (LR), with η* = z(λ*), let λ'^{k+1} be the projection of λ^k onto the hyperplane H* parallel to H_k: H* = { λ | fx^k + λ(b − Ax^k) = η* }. s^k is perpendicular to both H_k and H*, therefore λ'^{k+1} − λ^k is a negative multiple of s^k:

[Figure: the step from λ^k along −s^k = −(b − Ax^k) to λ'^{k+1}, the projection of λ^k onto H* = { λ | fx^k + λ(b − Ax^k) = η* }, with level set C(η_k) and hyperplane H_k = { λ | fx^k + λ(b − Ax^k) = η_k }, the region where x^k is optimal for (LRλ).]


(subgradient method, cont. 2)


λ'^{k+1} − λ^k = −θ s^k, with θ ≥ 0. Also, λ'^{k+1} belongs to H*:
      fx^k + λ'^{k+1}(b − Ax^k) = η*,
therefore
      fx^k + (λ^k − θ s^k)(b − Ax^k) = η_k − θ s^k·(s^k)^t = η*
and θ = (η_k − η*) / ||s^k||², so that
      λ'^{k+1} = λ^k + s^k (η* − η_k) / ||s^k||².
Then the next iterate λ^{k+1} is the projection of λ'^{k+1} onto the nonnegative region: λ^{k+1} = Max(0, λ'^{k+1}).

Remarks.

1. A subgradient is not necessarily a direction of improvement, as seen in the picture (one immediately leaves the current level set). Yet the method will converge, basically because λ^{k+1} is closer to λ* than λ^k.

2. This formula unfortunately uses the unknown optimal value η* of (LR). If one uses an estimate of that value, then one may be taking either too small or too large a multiple of −s^k. If too small, one may be making steps that are too short, and convergence will be slow. If too big, one is actually projecting onto a hyperplane that is too far away from λ^k, possibly beyond λ*. If one sees that the objective function values do not improve for a certain number of iterations, one should suspect that η* has been underestimated (minimization problem) and one should reduce the difference η_k − η*, multiplying it by a factor ε_k less than 1. One uses the revised formula:
      λ^{k+1} = λ^k + s^k ε_k (η* − η_k) / ||s^k||²,
where ε_k is reduced when there is no improvement for too long.
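To make the iteration concrete, here is a minimal sketch on a tiny illustrative instance. Since η* is normally unknown, the target value used below (the known optimum of this toy instance) is an assumption standing in for an estimate:

```python
# Subgradient optimization of the toy Lagrangean dual used earlier:
# dualized row Ax <= b, kept set { x in {0,1}^3 | Cx <= d }.
import numpy as np
from itertools import product

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])
d = np.array([2.0])
kept = [np.array(x, float) for x in product([0, 1], repeat=3)
        if sum(x) <= d[0]]                       # { x in X | Cx <= d }

def solve_LR(lam):
    """Solve (LR_lam) by enumerating the kept set; returns (z(lam), x^k)."""
    vals = [f @ x + lam @ (b - A @ x) for x in kept]
    k = int(np.argmax(vals))
    return vals[k], kept[k]

lam, eps, target = np.zeros(1), 1.0, 9.0         # target: estimate of eta*
best = np.inf                                    # best dual bound so far
for it in range(60):
    z, x = solve_LR(lam)
    best = min(best, z)
    s = b - A @ x                                # subgradient s^k
    if s @ s < 1e-12:
        break                                    # x feasible, z = f x: done
    lam = np.maximum(0.0, lam + eps * (target - z) * s / (s @ s))
    if (it + 1) % 20 == 0:
        eps *= 0.5                               # shrink eps_k when stalling
print(best)                                      # converges to v(LR) = 9.0
```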


2. Constraint Generation Method

(LR)  Min_{λ≥0} v(LRλ) = Min_{λ≥0} z(λ) = Min_{λ≥0} Max_{k=1,...,K} { fx^k + λ(b − Ax^k) }
      = Min_{λ≥0, η} { η | η ≥ fx^k + λ(b − Ax^k), k = 1, ..., K }.

[Figure: the cuts η = fx^k + λ(b − Ax^k) for x^1, x^2, ..., x^k, and the next iterate λ^{k+1} with value z(λ^{k+1}).]

At each iteration, one generates one or more cuts of the form η ≥ fx^k + λ(b − Ax^k) by solving the Lagrangean subproblem (LRλ^k), whose solution is x^k. These cuts are added to those generated in previous iterations to form the current LP master problem

(MP_k)  Min_{λ≥0, η} { η | η ≥ fx^h + λ(b − Ax^h), h = 1, ..., k },

whose solution is the next iterate λ^{k+1}. The process terminates when v(MP_k) = z(λ^{k+1}). This value is the optimal value of (LR).
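A minimal sketch of the scheme on the same tiny instance, with (MP_k) solved by scipy.optimize.linprog; the data and the artificial box 0 ≤ λ ≤ 10³ (which keeps the first masters bounded) are implementation assumptions, not part of the course:

```python
# Constraint generation for the toy Lagrangean dual.
import numpy as np
from itertools import product
from scipy.optimize import linprog

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])
kept = [np.array(x, float) for x in product([0, 1], repeat=3)
        if sum(x) <= 2]                          # { x in X | Cx <= d }
m = len(b)

def solve_LR(lam):
    vals = [f @ x + lam @ (b - A @ x) for x in kept]
    k = int(np.argmax(vals))
    return vals[k], kept[k]

rows, rhs = [], []                               # cuts of (MP_k)
lam, best_z = np.zeros(m), np.inf
for it in range(50):
    z, x = solve_LR(lam)                         # cut: eta >= f x + lam (b - A x)
    best_z = min(best_z, z)
    rows.append(np.append(b - A @ x, -1.0))      # (b - A x)·lam - eta <= -f x
    rhs.append(-(f @ x))
    res = linprog(np.append(np.zeros(m), 1.0),   # minimize eta
                  A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, 1e3)] * m + [(None, None)])
    lam, v_mp = res.x[:m], res.x[m]
    if best_z - v_mp < 1e-9:                     # v(MP_k) = z(lam): optimal
        break
print(v_mp)                                      # optimal value of (LR)
```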


3. Two-Phase Hybrid Method (Guignard, Zhu 1994)


This combines the subgradient method in a first phase with constraint generation in a second phase. The multipliers are first adjusted according to the subgradient formula, and at the same time constraints corresponding to all known solutions of the Lagrangean subproblems are added to the LP master problem. The value of the LP master problem is taken as the current estimate of the optimum of the Lagrangean dual. The estimate gets more and more accurate as iterations go by, so there is no need for any step adjustment; in other words, the scalar ε_k is kept equal to 1 at all iterations. The Lagrangean bound and the value of the master problem provide a bracket on the dual optimum, and this provides a convergence test, as in the pure constraint generation method.

Contrary to what happens when the multipliers are adjusted according to the master problem, there is no guarantee that the new multipliers give an improving value for the master problem. One must therefore make sure that the process does not cycle. This can be done simply as follows: one checks whether the constraints generated from the Lagrangean solutions differ from iteration to iteration. If constraints get repeated, the master problem cannot improve. After the same cut has been generated a given number of times (say, 5 times), one switches to a pure constraint generation phase (see the sketch below).

This process is accelerated if one generates as many Lagrangean solutions as possible. For instance, if a heuristic phase follows the solution of the Lagrangean subproblem at any iteration, and if a feasible integer solution is produced, such a solution is a fortiori feasible for the Lagrangean subproblem, and the corresponding cut is added to the master problem.

The method has been tested on a variety of problems and/or relaxation schemes for which the traditional subgradient method was known to behave poorly: generalized assignment problems of large size and of type C; integrated forest management problems (IRPM); biknapsack problems; Lagrangean decomposition and/or substitution... and convergence was always achieved quickly. The method appears to be quite robust.
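Below is a rough, self-contained sketch of the two-phase logic on the same tiny instance; all data, the 0 ≤ λ ≤ 10³ box on the master, and the tolerances are assumptions, not from the course. Phase 1 takes subgradient steps with target v(MP_k) and ε_k = 1; once the same Lagrangean solution recurs 5 times, it switches to pure constraint generation:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])
kept = [np.array(x, float) for x in product([0, 1], repeat=3)
        if sum(x) <= 2]
m = len(b)

def solve_LR(lam):
    vals = [f @ x + lam @ (b - A @ x) for x in kept]
    k = int(np.argmax(vals))
    return vals[k], kept[k]

rows, rhs, seen = [], [], {}
lam, best_z, phase2 = np.zeros(m), np.inf, False
for it in range(100):
    z, x = solve_LR(lam)
    best_z = min(best_z, z)
    key = tuple(x)
    seen[key] = seen.get(key, 0) + 1
    if seen[key] == 1:                          # new cut for the master
        rows.append(np.append(b - A @ x, -1.0))
        rhs.append(-(f @ x))
    elif seen[key] >= 5:
        phase2 = True                           # cuts repeat: switch phase
    res = linprog(np.append(np.zeros(m), 1.0),
                  A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, 1e3)] * m + [(None, None)])
    lam_mp, v_mp = res.x[:m], res.x[m]
    if best_z - v_mp < 1e-9:
        break                                   # bracket closed: optimum
    s = b - A @ x
    if phase2 or s @ s < 1e-12:
        lam = lam_mp                            # phase 2: master iterate
    else:                                       # phase 1: subgradient step
        lam = np.maximum(0.0, lam + (v_mp - z) * s / (s @ s))  # eps_k = 1
print(v_mp)
```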


4. Primal Relaxation Method (Michelon and Maculan 1992)


One can solve the Lagrangean relaxation (LR) of a linear integer programming problem

(P)  Max_x { fx | Ax = b, Cx ≤ d, x ∈ X }

by means of its primal equivalent relaxation (PR):

(PR)  Max_x { fx | Ax = b, x ∈ Co{ x ∈ X | Cx ≤ d } }.

For μ large enough, (PR) is equivalent to the penalized problem

(PP)  Max_x { φ(x) = fx − μ ||Ax − b||² | x ∈ Co{ x ∈ X | Cx ≤ d } }.

The constraint set is a polyhedron, and the objective function is concave. One can use a linearization method such as Frank and Wolfe (slow convergence) or Simplicial Decomposition (faster convergence, described below). At each iteration,

(1) one has a current iterate x^k. One linearizes the penalized objective function at x^k and solves the linearized LP
(PL_k)  Max_x { ∇φ(x^k)·x | x ∈ Co{ x ∈ X | Cx ≤ d } },
but because the objective function is linear, (PL_k) can also be written as
(PL_k)  Max_x { ∇φ(x^k)·x | x ∈ X, Cx ≤ d },
i.e. a Lagrangean subproblem, whose solution is called y^k.

(Primal Relaxation Method, cont.)


(2) one solves a continuous nonlinear problem
(NLP_k)  Max_x { φ(x) = fx − μ ||Ax − b||² | x ∈ Co{ y^1, y^2, ..., y^k } },
whose solution is called x^{k+1}.

[Figure: simplicial decomposition — iterates x^k in Co{x ∈ X | Cx ≤ d}, generated extreme points y^1, y^2, ..., y^{k-1} of {x ∈ X | Cx ≤ d}, and the gradient direction ∇φ(x^k).]

After the algorithm converges to a point x*, one checks whether the penalty term ||Ax* − b||² is acceptably small. If not, one increases μ and continues the optimization.
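As a concrete illustration, here is a minimal sketch of the simplicial decomposition loop for (PP) on toy data where Co{ x ∈ X | Cx ≤ d } is simply the unit cube (an assumption that makes the linearized step a plain enumeration); (NLP_k) is solved over simplex weights with scipy:

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])
mu = 10.0                                        # penalty weight
X = [np.array(x, float) for x in product([0, 1], repeat=3)]

def phi(x):  return f @ x - mu * np.sum((A @ x - b) ** 2)
def grad(x): return f - 2.0 * mu * (A.T @ (A @ x - b))

xk, Y = np.zeros(3), []
for it in range(30):
    g = grad(xk)
    yk = max(X, key=lambda x: g @ x)             # (PL_k): Lagrangean subpb
    if g @ yk - g @ xk < 1e-8:
        break                                    # xk maximizes phi over hull
    Y.append(yk)
    Ymat = np.array(Y)                           # (NLP_k): phi over Co{y^h},
    res = minimize(lambda w: -phi(Ymat.T @ w),   # x = sum_h w_h y^h
                   np.full(len(Y), 1.0 / len(Y)),
                   bounds=[(0, 1)] * len(Y),
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    xk = Ymat.T @ res.x
print(xk, phi(xk), abs(A @ xk - b))              # penalty residual check
```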


Extensions of Lagrangean relaxation:

Lagrangean Decomposition and Substitution


Purpose:
(1) induce decomposition of the problem into independent subproblems;
(2) capture different structural characteristics of the problem;
(3) obtain stronger bounds than by standard Lagrangean relaxation schemes.

How?
(1) identify the parts of the problem that should be split;
(2) replace variables in each part by copies, or substitute new expressions;
(3) dualize the copy or substitution constraints.

Remark: It is not necessary that each resulting subproblem be of a special type. Yet it should be much less complex or much smaller than the overall problem so that it is solvable by existing (commercial) software.


(i) Lagrangean Decomposition


(automatic control of processes, in the 70s; for instance Soenen 1977)

(P)  Max { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

is equivalent to

(P')  Max { fx | Cx ≤ d, x ∈ X;  Ay ≤ b, y ∈ X;  x = y }.

Dualizing the copy constraint x = y with multipliers u gives

(LD_u)  Max { fx + u(y − x) | Cx ≤ d, x ∈ X;  Ay ≤ b, y ∈ X }
      = Max { (f − u)x | Cx ≤ d, x ∈ X } + Max { uy | Ay ≤ b, y ∈ X }.

(LD)  Min_u v(LD_u)  is the Lagrangean Decomposition dual.
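A minimal sketch (illustrative data) of one evaluation of v(LD_u): once x = y is dualized, the two subproblems share no variables, so each is optimized independently over its own copy of X:

```python
import numpy as np
from itertools import product

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])   # kept with copy y
C = np.array([[1.0, 1.0, 1.0]]); d = np.array([2.0])   # kept with x
X = [np.array(v, float) for v in product([0, 1], repeat=3)]

def v_LD(u):
    # Max {(f-u)x | Cx <= d, x in X}  +  Max {uy | Ay <= b, y in X}
    vx = max((f - u) @ x for x in X if np.all(C @ x <= d))
    vy = max(u @ y for y in X if np.all(A @ y <= b))
    return vx + vy

u = f / 2                      # any u gives a valid upper bound on v(P)
print(v_LD(u))                 # v(P) <= v(LD) <= v(LD_u)
```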


Geometric Interpretation

(Guignard, Kim 1987)

[Figure: the objective direction f over the regions {x | Ax ≤ b}, {x | Cx ≤ d}, Co{x ∈ X | Cx ≤ d} and Co{x ∈ X | Ax ≤ b}; v(LD) is attained over the intersection of the two hulls, v(LR) over Co{x ∈ X | Cx ≤ d} ∩ {x | Ax ≤ b}.]

It follows that:

(1) v(LD) is always at least as good a bound as v(LR).

(2) If the second subproblem has the Integrality Property, i.e.
{x | Ax ≤ b} = Co{x ∈ X | Ax ≤ b},
then (LD) is equivalent to (LR): v(LD) = v(LR).

(3) If both subproblems have the Integrality Property, i.e.
{x | Ax ≤ b} = Co{x ∈ X | Ax ≤ b} and {x | Cx ≤ d} = Co{x ∈ X | Cx ≤ d},
then (LD) is equivalent to (LP): v(LD) = v(LP).

(4) If neither subproblem has the Integrality Property, then (LD) can be strictly better than (LR).


(ii) Lagrangean Substitution


(Reinoso and Maculan 1988; Guignard 1991)

(P)  Max { fx | Ax ≤ b, Cx ≤ d, x ∈ X }

is equivalent to

(P')  Max { fx | φ(y) ≤ b;  Cx ≤ d, x ∈ X;  y ∈ Y;  Ax = φ(y) }.

Dualizing the substitution constraint Ax = φ(y) with multipliers u gives

(LS_u)  Max { fx + u[φ(y) − Ax] | φ(y) ≤ b;  Cx ≤ d, x ∈ X;  y ∈ Y }
      = Max { (f − uA)x | Cx ≤ d, x ∈ X } + Max { uφ(y) | φ(y) ≤ b, y ∈ Y }.

(LS)  Min_u v(LS_u)  is the Lagrangean Substitution dual.

Y should be such that ∀x ∈ X, ∃y ∈ Y: Ax = φ(y), i.e. such that (P') is not more constrained than (P). Lagrangean substitution bounds are not always comparable to (LR) or (LD) bounds. Also, one will often use a combination of schemes simultaneously.


(Lagrangean substitution, cont.)

Example 1: Capacitated Plant Location Problem


(CPLP)  Min_{x,y}  Σ_i Σ_j c_ij x_ij + Σ_i f_i y_i
s.t.  Σ_i x_ij = 1, all j            (D)  meet 100% of customer demand
      x_ij ≤ y_i, all i, j           (B)  ship nothing if plant is closed
      Σ_i a_i y_i ≥ Σ_j d_j          (T)  enough plants to meet total demand
      Σ_j d_j x_ij ≤ a_i y_i, all i  (C)  ship no more than plant capacity
      x_ij ≥ 0, y_i = 0 or 1, all i, j.

Three best Lagrangean schemes:

(LR) (Geoffrion and McBride 1978; Guignard and Ryu 1992): dualize (D), then use the integer linearization property. Subproblems are solved as one continuous knapsack problem per plant plus one 0-1 knapsack over all plants. Strong bound, small computational cost.

(LD) (Guignard and Kim 1987): copy x_ij = x'_ij and y_i = y'_i in (C), duplicate (T), and split into {(D), (B), (T)}, an APLP (Thizy 1993, Ryu 1993), and {(B), (T), (C)}, treated like (LR). Strong bounds, but expensive.

(LS) (Chen and Guignard 1992): copy Σ_j d_j x_ij = Σ_j d_j x'_ij and y_i = y'_i in (C). Same split as (LD). Same bound, fewer multipliers, less expensive.


Example 2: Hydroelectric power management


(Guignard, Yan 1994)

[Figure: power plants k and k+1 along the river; the low water level of reservoir k equals the high water level of reservoir k+1.]

A series of power plants are located along a river, separated by reservoirs and falls. Managing the power generation requires decisions concerning water releases at each plant in each time period. This results in a large complex mixed-integer program. A possible decomposition of the problem consists in "cutting" each reservoir in half, i.e. "splitting" the water level variable in each reservoir, and dualizing the copy constraint: high water level in k+1 = low water level in k. This Lagrangean decomposition produces one power management problem per power plant. This subproblem does not have a special structure, but it is much simpler and smaller than the original problem and is readily solvable by commercial software. It does not have the Integrality Property. The resulting bound is much stronger than the LP bound.

(iii) Primal Relaxation For Nonlinear Integer Problems

For a nonlinear integer programming problem

(P)  Max_x { f(x) | Ax = b, Cx ≤ d, x ∈ X }

with a nonlinear concave objective function, one can construct a primal relaxation (PR) similar to the primal equivalent relaxation of the linear case:

(PR)  Max_x { f(x) | Ax = b, x ∈ Co{ x ∈ X | Cx ≤ d } }.

As μ goes to infinity, (PR) becomes equivalent to the penalized problem

(PP)  Max_x { φ(x) = f(x) − μ ||Ax − b||² | x ∈ Co{ x ∈ X | Cx ≤ d } }.

The constraint set is a polyhedron, and the objective function is still concave. One can use a linearization method such as Simplicial Decomposition. The solution method is identical to that of the linear case; it is repeated here for the sake of completeness. At each iteration,

(1) one has a current iterate x^k. One linearizes the penalized objective function at x^k and solves the linearized LP
(PL_k)  Max_x { ∇φ(x^k)·x | x ∈ Co{ x ∈ X | Cx ≤ d } },
but because the objective function is linear, (PL_k) can also be written as
(PL_k)  Max_x { ∇φ(x^k)·x | x ∈ X, Cx ≤ d },
i.e. a Lagrangean subproblem, whose solution is called y^k.


(Primal Relaxation Method, cont.)

(2) one solves a continuous nonlinear problem
(NLP_k)  Max_x { φ(x) = f(x) − μ ||Ax − b||² | x ∈ Co{ y^1, y^2, ..., y^k } },
whose solution is called x^{k+1}.

[Figure: same simplicial decomposition picture as in the linear case.]

After the algorithm converges to a point x*, one checks whether the penalty term ||Ax* − b||² is acceptably small. If not, one increases μ and continues the optimization. The bound obtained this way is superior to the continuous relaxation bound: v(PR) ≤ v(CR),


since the feasible sets are nested:
FS(PR) = { x | Ax = b, x ∈ Co{ x ∈ X | Cx ≤ d } } ⊆ FS(CR) = { x | Ax = b, Cx ≤ d, x ∈ Co(X) }.

One can similarly define primal decompositions and substitutions.

Application: CPLP with nonlinear transportation costs


(Ahn and Guignard 1995)

Min_{x,y}  c(x) + Σ_i f_i y_i
s.t.  Σ_i x_ij = 1, all j            (D)  meet 100% of customer demand
      x_ij ≤ y_i, all i, j           (B)  ship nothing if plant is closed
      Σ_i a_i y_i ≥ Σ_j d_j          (T)  enough plants to meet total demand
      Σ_j d_j x_ij ≤ a_i y_i, all i  (C)  ship no more than plant capacity
      x_ij ≥ 0, y_i = 0 or 1, all i, j,
where c(x) is a convex function of x.

In the numerical implementation the following expressions were used:
c(x) = Σ_ij c_ij x_ij + Σ_ij q_ij x_ij²   and   c(x) = Σ_ij c_ij x_ij + (Σ_ij q_ij x_ij)².

The three best primal relaxation schemes are, as in the linear case:
(LR), where one penalizes constraint (D);
(LD), where one penalizes the copy constraints x_ij = x'_ij and y_i = y'_i in order to split the problem into {(D), (B), (T)} and {(B), (T), (C)};
(LS), where one penalizes the copy constraints Σ_j d_j x_ij = Σ_j d_j x'_ij and y_i = y'_i in order to split the problem into {(D), (B), (T)} and {(B), (T), (C)}.

The method does not converge well for (LD), but easily yields bounds for (LR) and (LS). (LS) is somewhat more expensive, because more problems have to be solved for the same number of iterations, yet the bound it yields so clearly dominates the (LR) bound that it is probably worth the extra computational expense. This is not the case when the objective function is linear: (LR) and (LD) (or (LS)) yield bounds of comparable quality in that case.


Lagrangean Heuristics
Lagrangean relaxation not only provides stronger bounds than LP relaxation, it also generates nearly feasible integer solutions: the Lagrangean subproblem solutions typically violate some, but not all, of the dualized constraints. Depending on the actual problem, one may choose to try to get feasible solutions in different ways:

(1) by modifying the solution to correct its infeasibilities while keeping the objective function deterioration small. Example: in production scheduling, if one relaxes the demand constraints, one may try to change production (down or up) so as to meet the demand (see the sketch after this list).

(2) by fixing (at 1 or 0) some of the meaningful decision variables and solving optimally the remaining (hopefully much smaller) problem. Example: in CPLP, for a dense problem, the {(C), (B), (T)} subproblem opens enough plants to satisfy total demand, and ships no more than the availability of a plant if it is open. One can fix the "open" plants open, and solve the remaining problem over the plants closed in the Lagrangean solution and the shipments. Afterwards, one may still be able to modify the solution by closing unused plants.
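A minimal self-contained sketch of idea (1) on the toy knapsack from the earlier sketches: the Lagrangean solution may violate the dualized capacity row, and a greedy drop pass restores feasibility. The data and the repair rule are illustrative assumptions, not from the course:

```python
import numpy as np

f = np.array([6.0, 5.0, 4.0])
A = np.array([[3.0, 2.0, 2.0]]); b = np.array([4.0])

x = np.array([1.0, 1.0, 1.0])          # Lagrangean solution at lam = 0
while np.any(A @ x > b):               # infeasible for the dualized row
    packed = np.flatnonzero(x > 0)
    # drop the packed item with the worst profit-to-weight ratio
    worst = packed[np.argmin(f[packed] / A[0, packed])]
    x[worst] = 0.0
print(x, f @ x)                        # feasible solution: a primal bound
```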


Conclusion
Lagrangean relaxation is a powerful family of tools for approximately solving integer programming problems. It provides
(*) stronger bounds than LP relaxation when the subproblems do not have the Integrality Property, and
(**) good starting points for heuristic search.

The availability of powerful interfaces (GAMS, AMPL, ...) and of flexible IP packages makes it possible for the user to try various schemes and to implement and test them. It is not necessary to have special structures embedded in a problem to try Lagrangean schemes: if it is possible to decompose the problem structurally into meaningful components and to split them through constraint dualization, possibly after having introduced new variable expressions, it is probably worth trying. Finally, solutions to one or more of the Lagrangean subproblems might lend themselves to interchanges that re-establish feasibility.

Lagrangean bounds and Lagrangean heuristics provide the analyst with brackets around the optimal integer value. These are usually much tighter than with LP-based bounds and heuristics.

