
Chapter 1

LINEAR PROGRAMMING
1.1. Optimal use of limited resources

A practical optimization problem which appears frequently in the management activity of every organization is the following: we have several limited resources, such as materials, labor, financial resources etc., and using these resources we can perform several activities. The optimization problem must comply with an optimality (maximization or minimization) criterion and consists in determining the levels of these activities, constrained by the available limited resources. Let us denote by i, 1 ≤ i ≤ m, the resource type and by b_i the available level of resource of type i. Denote by j, 1 ≤ j ≤ n, the type of activity and by x_j the (unknown) level at which this activity will be performed. Finally, denote by a_ij the quantity of resource of type i, 1 ≤ i ≤ m, required for the production of one unit of product of type j, 1 ≤ j ≤ n (in general, activity of type j). We suppose here that a_ij depends only on the resource type and the activity type, and does not depend on the level at which the activity is performed. Using the introduced notations we can express the total quantity of resource of the i-th type used in the production phase: a_i1 x_1 + a_i2 x_2 + ... + a_in x_n. Because it is not allowed to use more of the i-th resource than the available quantity b_i, the following constraints must be satisfied:
∑_{j=1}^{n} a_ij x_j ≤ b_i,   1 ≤ i ≤ m.      (1.1)


Because x_j represents the level at which the j-th activity is performed, the following constraints must also be satisfied: x_j ≥ 0, 1 ≤ j ≤ n. (1.2)

Inequalities (1.1) are called the constraints of the problem and (1.2) are called the non-negativity conditions of the problem. The system of inequalities can have an infinite number of solutions, a unique solution or no solution (incompatible system). In many cases, when the practical problem is well defined, the system (1.1)-(1.2) has an infinite number of solutions. Thus it is possible to organize the production process for the products of type j, 1 ≤ j ≤ n, in several ways which respect the constraints (1.1) on the use of the limited resources. The plan is adopted using some economic criterion such as: revenue, labor time, production costs etc. Let us assume that we can integrate all these economic criteria into a single criterion which allows us to adopt the decision. Moreover, we shall assume that this criterion is linear. An example of such a criterion is obtained in the following way. If we denote by c_j the revenue per unit obtained from the activity of type j, 1 ≤ j ≤ n, then the total revenue is:
∑_{j=1}^{n} c_j x_j.      (1.3)

The problem which must be solved is to find the solutions of the system of linear inequalities (1.1), (1.2) which give the maximum value of the total revenue (1.3). In other words, from a mathematical point of view, we need to solve the problem:

sup ∑_{j=1}^{n} c_j x_j
∑_{j=1}^{n} a_ij x_j ≤ b_i,   1 ≤ i ≤ m,
x_j ≥ 0,   1 ≤ j ≤ n,

which is called a linear programming problem (or linear program). The linear function which must be maximized is called the objective function, criterion function or efficiency function. The above problem can be solved with the primal simplex algorithm or the dual simplex algorithm. These algorithms are presented in every basic optimization book. Some software products, such as MAPLE, have specialized routines for solving linear optimization problems.
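As an illustration of how such a problem is handed to a solver in practice (the text mentions MAPLE's routines), here is a sketch in Python using SciPy's linprog, assuming SciPy is available; the revenue and resource data below are invented for the example, not taken from the text. Since linprog minimizes, the revenue coefficients are negated.

```python
from scipy.optimize import linprog

# Invented example: two activities, two resources.
# Maximize 40*x1 + 30*x2 subject to
#   x1 +   x2 <= 12   (resource 1)
# 2*x1 +   x2 <= 16   (resource 2)
#   x1, x2 >= 0
c = [-40, -30]            # linprog minimizes, so negate the revenues
A_ub = [[1, 1], [2, 1]]   # a_ij: resource i consumed per unit of activity j
b_ub = [12, 16]           # b_i: available quantity of resource i

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)    # optimal activity levels and maximum revenue
```

Here the maximum revenue is attained at the vertex (4, 8) of the feasible polyhedron, where both resources are fully used.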


1.2. Forms of linear programming problems

The standard form of a linear programming problem is:

min (max) c^T x
Ax = b
x ≥ 0.

Let us remark that a maximum problem can be transformed into a minimum problem using the formula max(f) = -min(-f). The canonical form of a linear programming problem is:

min c^T x
Ax ≥ b
x ≥ 0

or

max c^T x
Ax ≤ b
x ≥ 0.

MAPLE code for transforming a maximum problem into canonical form is:

> standardize(set of constraints);

or

> convert(set of constraints, stdle);

A constraint of a linear programming problem is called concordant if it is a "≥" inequality for a minimum problem and a "≤" inequality for a maximum problem. The mixed form of a linear programming problem contains constraints (sometimes we use the term restrictions) which are equations. Because, using elementary mathematical operations, every linear programming problem can be transformed into the canonical form of a minimum optimization problem, we shall work only with problems in canonical form.

Definition 1.2.1. Consider the linear programming problem in standard form. Then we define the set of programs by:

P = {x ∈ R^n | Ax = b, x ≥ 0}.

A minimum point x* of the objective function z = c^T x over the set of programs P is called an optimal solution, and the set of optimal solutions will be denoted by:

P* = {x* ∈ P | min_{x ∈ P} c^T x = c^T x*}.
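The transformations above (max → min by negating the objective, "≤" → "≥" by negating a constraint) can be sketched as a small helper; this is a plain-Python illustration of the idea, not the MAPLE standardize routine:

```python
def max_to_canonical_min(c, A_le, b_le):
    """Convert  max c^T x, A_le x <= b_le, x >= 0
    into the canonical minimum form  min c'^T x, A' x >= b', x >= 0,
    using max(f) = -min(-f) and multiplying each inequality by -1."""
    c_min = [-cj for cj in c]
    A_ge = [[-aij for aij in row] for row in A_le]
    b_ge = [-bi for bi in b_le]
    return c_min, A_ge, b_ge

# max(3x1 + 5x2) with 2x1 + x2 <= 16 becomes
# min(-3x1 - 5x2) with -2x1 - x2 >= -16
c_min, A_ge, b_ge = max_to_canonical_min([3, 5], [[2, 1]], [16])
print(c_min, A_ge, b_ge)
```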


A basic solution of the system Ax = b is a solution x ∈ R^n whose nonzero components correspond to linearly independent columns of A. If B is a basis formed with the columns a_{j1}, ..., a_{jm} of the matrix A, then the system of equations Ax = b can be written in the explicit form:

x_B = B^{-1} b - B^{-1} R x_R,

where R is the matrix obtained from A by eliminating the columns j_1, ..., j_m. If we denote:

B^{-1} b = x̄^B,   B^{-1} a_j = y_j^B,   1 ≤ j ≤ n,

it results:

x^B = x̄^B - ∑_{j ∈ R} y_j^B x_j,

or equivalently

x_i^B = x̄_i^B - ∑_{j ∈ R} y_ij^B x_j,   i ∈ B,

where B = {j_1, ..., j_m} and R = {1, ..., n} \ B. The basic solution which corresponds to the basis B is x^B = x̄^B and x^R = 0. It results that this basic solution is a program if:

B^{-1} b ≥ 0.

A basis B which meets the above inequality is called a primal feasible basis. In practice this kind of basis is determined using the artificial basis method. In many situations such a basis is available directly, the feasible basis being the identity (unit) matrix. We have the following theorem, called the fundamental theorem of linear programming.

Theorem 1.2.1. i) If the linear programming problem:

min (max) c^T x
Ax = b
x ≥ 0

has a program, then the problem has a basic program. ii) If the above problem has an optimum program, then it has an optimum basic program.

The fundamental algorithm for solving linear programming problems is called the primal simplex algorithm and it was proposed by George Dantzig in 1951. This algorithm is described in every fundamental book on Operational Research and is implemented in every dedicated software package.
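The notion of a basic solution can be illustrated by a small enumeration: for every invertible choice of basis columns B, we set x_B = B^{-1} b and the remaining components to zero, and the basic solution is a program exactly when B^{-1} b ≥ 0. A plain-Python sketch for a 2-row system (the matrix A and vector b below are invented for the example):

```python
from itertools import combinations

def solve2(B, rhs):
    """Solve a 2x2 system B x = rhs by Cramer's rule; None if B is singular."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if abs(det) < 1e-12:
        return None
    x1 = (rhs[0] * B[1][1] - B[0][1] * rhs[1]) / det
    x2 = (B[0][0] * rhs[1] - rhs[0] * B[1][0]) / det
    return [x1, x2]

def basic_solutions(A, b):
    """Enumerate the basic solutions of Ax = b for a 2-row matrix A."""
    m, n = len(A), len(A[0])
    out = []
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        xB = solve2(B, b)
        if xB is None:
            continue                      # columns not linearly independent
        x = [0.0] * n
        for idx, j in enumerate(cols):
            x[j] = xB[idx]
        feasible = all(v >= -1e-9 for v in x)   # program <=> B^{-1} b >= 0
        out.append((cols, x, feasible))
    return out

A = [[1, 0, 1],
     [0, 1, 1]]
b = [4, 3]
for cols, x, feasible in basic_solutions(A, b):
    print(cols, x, "program" if feasible else "not a program")
```

For this system two of the three basic solutions are programs; the third has a negative component, so its basis is not primal feasible.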


1.3. Simplex algorithm (Dantzig)

For solving linear programming problems we can use the simplex algorithm developed by Dantzig (1951). This algorithm systematically searches the set of basic programs of the linear programming problem in standard form, passing from one basic program to another basic program at least as good as the previous one. The algorithm also gives criteria for the situations in which the optimum is infinite or the set of programs is empty.

STEP 0. Transform the optimization problem into standard form:

inf c^T x
Ax = b,
x ≥ 0.

(If the problem has inequality constraints, we transform these inequalities by adding or subtracting slack variables y; a maximum problem is transformed into a minimum problem, etc.) Here A is a matrix with m rows and n columns for which rank(A) = m < n. Denote by z the objective function, thus z = c^T x.

STEP 1. We find a primal feasible basis B (available directly or obtained by the artificial basis method) and compute:

x̄^B = B^{-1} b,
z̄^B = c_B^T x̄^B,
y_j^B = B^{-1} a_j,   1 ≤ j ≤ n,
z_j^B - c_j,   1 ≤ j ≤ n.

These values are written in the simplex table (table 1.1) and we go to the next step. Let us denote by B the set of indexes j which determine the matrix B and by R = {1, ..., n} \ B. The initial simplex table has the form:
TABLE 1.1

      |      |        |  c_1          ...  c_j          ...  c_n
  c_B | B.V. | V.B.V. |  x_1          ...  x_j          ...  x_n
 -----+------+--------+-----------------------------------------------
      | x_B  |  x̄^B  |  y_1^B        ...  y_j^B        ...  y_n^B
      |  z   |  z̄^B  |  z_1^B - c_1  ...  z_j^B - c_j  ...  z_n^B - c_n

where we denoted by B.V. the set of basic variables and by V.B.V. the values of the basic variables (x̄^B).

STEP 2. If z_j^B - c_j ≤ 0 for all j ∈ R, we stop (STOP): x̄^B is an optimum program.

Otherwise we find the (non-empty) set:

R+ = {j ∈ R | z_j^B - c_j > 0}

and go to the next step.

STEP 3. If there exists j ∈ R+ for which y_j^B ≤ 0, we stop (STOP): the optimum of the problem is infinite. Otherwise, we find k ∈ R+ using the entering variable choice rule:

max_{j ∈ R+} (z_j^B - c_j) = z_k^B - c_k

and r ∈ B+ = {i ∈ B | y_ik^B > 0} using the leaving variable choice rule:

min_{i ∈ B+} (x̄_i^B / y_ik^B) = x̄_r^B / y_rk^B.

The element y_rk^B is called the pivot. Go to the next step.

STEP 4. Let us consider the basis B' obtained from B by replacing the column a_r with the column a_k; compute the values x̄^B', z̄^B', y_j^B', z_j^B' - c_j (using the rules for changing variables) and go to Step 2, replacing the basis B with the basis B'.

The computations can be simplified using the rules for simplex table transformation:
i) the elements of the pivot line are divided by the pivot value;
ii) the elements of the pivot column become zero, with the exception of the pivot, which becomes 1;
iii) the other values are transformed using the rectangle rule: imagine the rectangle whose diagonal is determined by the element under transformation y_ij^B and the pivot y_rk^B; the new value is obtained by dividing by the pivot the difference between the product y_ij^B y_rk^B of the elements placed on this diagonal and the product y_rj^B y_ik^B of the elements placed on the other diagonal of the rectangle.

Remarks. i) If at the end of the algorithm we have z_j^B - c_j < 0 for all j ∈ R, then the solution of the problem is unique. ii) During the execution of the algorithm it is possible to encounter the cycling phenomenon (passing to a basis which was already processed); there are several techniques to avoid this. iii) In the two-dimensional case we also have a geometrical interpretation of the solutions of a linear programming problem. The feasible domain can be:
- a convex polyhedron; in this case one of its vertices is the solution of the optimization problem. The solution may also be one side of the polyhedron, in which case we have many solutions;

- an unbounded domain; in this case the optimum may be infinite;
- the empty set; in this case the optimization problem has no solution.
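The table transformation rules i)-iii) of Step 4 can be sketched as a short function (plain Python; the tableau is a list of rows, and the indices r, k mark the pivot row and column):

```python
def pivot(table, r, k):
    """Apply one simplex pivot on table[r][k] using the rectangle rule."""
    m, n = len(table), len(table[0])
    p = table[r][k]
    new = [[0.0] * n for _ in range(m)]
    for j in range(n):                      # i) pivot row divided by the pivot
        new[r][j] = table[r][j] / p
    for i in range(m):
        if i == r:
            continue
        for j in range(n):
            if j == k:
                new[i][j] = 0.0             # ii) pivot column becomes 0
            else:                           # iii) rectangle rule
                new[i][j] = (table[i][j] * p - table[r][j] * table[i][k]) / p
    return new

T = [[2.0, 4.0],
     [3.0, 5.0]]
print(pivot(T, 0, 0))   # pivot on the element 2
```

For this small tableau, pivoting on the element 2 produces the rows [1, 2] and [0, -1], exactly as the rectangle rule prescribes.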

1.4. The dual of the linear programming problem

The primal problem deals with physical quantities. With all inputs available in limited quantities, and assuming the unit prices of all outputs are known, what quantities of outputs should be produced so as to maximize total revenue? The dual problem deals with economic values. With floor guarantees on all output unit prices, and assuming the available quantity of all inputs is known, what input unit pricing scheme should be set so as to minimize total expenditure? To each variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed by output type. To each inequality to satisfy in the primal space corresponds a variable in the dual space, both indexed by input type. The coefficients that bound the inequalities in the primal space are used to compute the objective in the dual space, input quantities in this example. The coefficients used to compute the objective in the primal space bound the inequalities in the dual space, output unit prices in this example. Both the primal and the dual problems make use of the same matrix. In the primal space, this matrix expresses the consumption of physical quantities of inputs necessary to produce set quantities of outputs. In the dual space, it expresses the creation of the economic values associated with the outputs from set input unit prices. Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to formulate the complementary slackness conditions.
Thus, if we have a linear programming problem, we can construct the dual problem using the following rules: a) the free terms of the primal problem become the coefficients of the objective function of the dual problem; b) the coefficients of the objective function of the primal problem become the free terms of the dual problem; c) the dual of a maximization (minimization) primal problem is a minimization (maximization) problem; d) the matrix of the constraints of the dual problem is the transpose of the matrix of the primal problem; e) dual (primal) variables associated with concordant primal (dual) constraints are subject to non-negativity constraints;

LINEAR PROGRAMMING

f) dual (primal) variables associated with non-concordant primal (dual) constraints are subject to non-positivity constraints; g) dual (primal) variables associated with primal (dual) constraints which are equations have no constraints regarding the sign. Let us remark that the dual of a problem in canonical form is also in canonical form. Here is the fundamental theorem of duality:

Theorem 1.4.1. Consider the couple of dual problems:

min c^T x          max b^T y
Ax ≥ b             A^T y ≤ c
x ≥ 0              y ≥ 0.

Then exactly one of the following situations holds: a) both problems have programs; in this case both problems have optimal programs and the optimal values of the objective functions are the same; b) one of the problems has programs and the other has no programs; in this case the problem which has programs has an infinite optimum; c) neither of the problems has programs.

The dual simplex algorithm solves the primal problem by means of the dual problem. Similarly, after solving the primal problem we can construct the solution of the dual problem. The MAPLE software has a procedure for the construction of the dual problem. The syntax is the following:

> dual(objective function, set of constraints, dual variable set);
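Case a) of Theorem 1.4.1 (equal optimal values) can be checked numerically on a small invented pair of canonical dual problems; a sketch with SciPy's linprog, assuming SciPy is available:

```python
from scipy.optimize import linprog

# Primal: min 2x1 + 3x2,  x1 + x2 >= 4,  x1 + 2x2 >= 6,  x >= 0
A = [[1, 1], [1, 2]]
b = [4, 6]
c = [2, 3]

# linprog uses <= constraints, so Ax >= b is written as -Ax <= -b
primal = linprog(c, A_ub=[[-a for a in row] for row in A],
                 b_ub=[-bi for bi in b], method="highs")

# Dual: max b^T y,  A^T y <= c,  y >= 0  (maximize by minimizing -b^T y)
At = [[A[i][j] for i in range(2)] for j in range(2)]
dual = linprog([-bi for bi in b], A_ub=At, b_ub=c, method="highs")

print(primal.fun, -dual.fun)   # the two optimal values coincide
```

For this pair both optima equal 10, in agreement with the theorem.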

1.5. Transportation problems

Assume that we have m warehouses and n shops. A homogeneous product is stored in the quantities a_i, 1 ≤ i ≤ m, in the warehouses and is requested by the shops in the quantities b_j, 1 ≤ j ≤ n. We assume that the following conditions are satisfied:

a_i ≥ 0, 1 ≤ i ≤ m,   b_j ≥ 0, 1 ≤ j ≤ n,
a_1 + ... + a_m = b_1 + ... + b_n.      (1.4)


In other words, the available quantities and the requested quantities are non-negative, and the total request equals the total availability. Such a problem is called a balanced transportation problem. The problem consists in planning the transportation from the warehouses to the shops with minimum transportation cost. Denote by x_ij the (unknown) quantity which will be supplied from warehouse i to shop j. The total quantity transported from warehouse i to all the shops must equal the availability of warehouse i, thus:
∑_{j=1}^{n} x_ij = a_i,   1 ≤ i ≤ m.      (1.5)

Similarly, the total quantity requested by shop j must equal the total quantity transported from all the warehouses to this shop, thus:
∑_{i=1}^{m} x_ij = b_j,   1 ≤ j ≤ n.      (1.6)

The transported quantities are non-negative: x_ij ≥ 0, 1 ≤ i ≤ m, 1 ≤ j ≤ n. (1.7)

The system (1.5)-(1.7), under the conditions (1.4), has an infinite number of solutions. For adopting a transportation plan we need an economic criterion, given by the transportation cost. Denote by c_ij the transportation cost per unit from warehouse i to shop j; we shall assume that the cost per unit does not depend on the total quantity transported from i to j. The transportation cost is:
∑_{i=1}^{m} ∑_{j=1}^{n} c_ij x_ij.

The transportation problem consists in determining the solution of the system (1.5)-(1.7) for which the total cost is minimum, thus:

inf ∑_{i=1}^{m} ∑_{j=1}^{n} c_ij x_ij,
∑_{j=1}^{n} x_ij = a_i,   1 ≤ i ≤ m,
∑_{i=1}^{m} x_ij = b_j,   1 ≤ j ≤ n,
x_ij ≥ 0,   1 ≤ i ≤ m, 1 ≤ j ≤ n.
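A minimal numerical sketch of a balanced transportation problem, with invented data (two warehouses, two shops), again assuming SciPy is available; the variables x11, x12, x21, x22 are flattened into one vector:

```python
from scipy.optimize import linprog

a = [30, 20]              # availabilities of the two warehouses
bq = [25, 25]             # requests of the two shops (balanced: 50 = 50)
cost = [1, 2, 3, 1]       # c11, c12, c21, c22: per-unit transportation costs

# Equality constraints (1.5) and (1.6) on x = (x11, x12, x21, x22)
A_eq = [[1, 1, 0, 0],     # x11 + x12 = a1
        [0, 0, 1, 1],     # x21 + x22 = a2
        [1, 0, 1, 0],     # x11 + x21 = b1
        [0, 1, 0, 1]]     # x12 + x22 = b2
b_eq = a + bq

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)     # optimal plan and minimum total cost
```

For this data the cheapest plan ships 25 units on route (1,1), 5 on (1,2) and 20 on (2,2), with total cost 55. Note that, because of the balance condition (1.4), one of the four equality constraints is redundant; the solver handles this automatically.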


1.6. Applications

Exercise 1.6.1. Find the dual of the linear programming problem:

min(2x_1 + 3x_2 + x_3),
x_1 + x_2 + 3x_4 ≥ 3,
2x_2 + 5x_3 + 4x_4 = 5,
x_1 + x_3 ≤ 2,
x_1, x_2 ≥ 0, x_3 arbitrary, x_4 ≤ 0.

Find the optimal solution of the primal problem.

Solution. Using the rules for the construction of the dual problem we get:

max(3u_1 + 5u_2 + 2u_3),
u_1 + u_3 ≤ 2,
u_1 + 2u_2 ≤ 3,
5u_2 + u_3 = 1,
3u_1 + 4u_2 ≥ 0,
u_1 ≥ 0, u_2 arbitrary, u_3 ≤ 0.

The solution of the primal problem can be found with the dual or primal simplex algorithm. MAPLE code for solving this problem is:

> with(simplex):
> objective := 2*x1 + 3*x2 + x3;
> constraints := {x1 + x2 + 3*x4 >= 3, 2*x2 + 5*x3 + 4*x4 = 5, x1 + x3 <= 2};
> minimize(objective, constraints union {x1 >= 0, x2 >= 0, x4 <= 0});

The solution given by the program is: x_1 = 0, x_2 = 15/2, x_3 = -2, x_4 = 0; the (optimum) value of the objective function is in this case 41/2.

Exercise 1.6.2. Solve the following linear programming problem and also solve its dual:

min(2x_1 + 3x_2 + x_3),
x_1 + x_2 + 3x_4 ≥ 3,
2x_2 + 5x_3 + 4x_4 = 5,
x_1 + x_3 ≤ 2,
x_1, x_2 ≥ 0, x_3 arbitrary, x_4 ≤ 0,

where λ is a real parameter.

Exercise 1.6.3. Solve the linear programming problem:

min(2x_1 + 3x_2 + x_3),
x_1 + x_2 + 3x_4 ≥ 3 + λ,
2x_2 + 5x_3 + 4x_4 = 5 - λ,
x_1 + x_3 ≤ 2 + 2λ,
x_1, x_2 ≥ 0, x_3 arbitrary, x_4 ≤ 0,

where λ is a real parameter. Also solve the dual problem.


Solution. Exercises 1.6.2-1.6.3 can be solved using the primal simplex algorithm, or using the post-optimization technique in the final simplex table of problem 1.6.1, with λ = 0.

Exercise 1.6.4. Using the simplex algorithm, solve the following linear programming problem:

max(2x_1 + x_2),
x_1 - x_2 ≤ 4,
3x_1 - x_2 ≤ 18,
x_1 + 2x_2 ≤ 6,
x_1, x_2 ≥ 0.

Exercise 1.6.5. Minimize 2x_1 + 3x_2 under the constraints:

x_1 - x_2 + x_3 = 1,
x_1 + x_2 - x_4 = 1,
x_1 - 2x_2 + x_5 = 1,
2x_1 + x_3 - x_4 = 2,
x_i ≥ 0, i = 1, ..., 5.

Solution. Applying the simplex algorithm we find the solution x*_1 = 1, x*_2 = 0, x*_3 = 0, x*_4 = 0, x*_5 = 0, and the minimum of the objective function z* = 2.

Exercise 1.6.6. Solve the following problem:

min(x_1 + 6x_2),
2x_1 + x_2 ≥ 3,
x_1 + 3x_2 ≥ 4,
x_1, x_2 ≥ 0.


Fig. 1.1: Feasibility domain and the objective function.

Solution. We shall solve the problem graphically. We represent the feasibility domain in the two-dimensional space. On the same graphic (Fig. 1.1) we represent the line x_1 + 6x_2 = 0. Now we draw lines parallel to x_2 = -(1/6)x_1 until we intersect the feasibility domain. The first line which intersects this domain gives us the minimum of the objective function, z_min = 4. The optimal solution is x*_1 = 4, x*_2 = 0.

Exercise 1.6.7. Find the minimum of the function 2x_1 + 3x_2 + x_3 with the constraints:

x_1 + x_2 + 3x_4 ≥ 3,
2x_2 + 5x_3 + 4x_4 = 5,
x_1 + x_3 ≤ 2,
x_1, x_2, x_3, x_4 ∈ {0, 1}.

Exercise 1.6.8. Solve the following linear programming problem:

min(2x_1 + 3x_2),
x_1 - x_2 + x_3 = 1,
x_1 + x_2 - x_4 = 1,
x_1 - 2x_2 + x_5 = 1,
2x_1 + x_3 - x_4 = 2,
x_i ≥ 0, 1 ≤ i ≤ 5.
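The graphical result for Exercise 1.6.6 above (z_min = 4, attained at the vertex (4, 0)) can be cross-checked numerically; a sketch with SciPy's linprog, assuming SciPy is available, with the "≥" rows negated into "≤" form:

```python
from scipy.optimize import linprog

# Exercise 1.6.6: min(x1 + 6x2),  2x1 + x2 >= 3,  x1 + 3x2 >= 4,  x >= 0
res = linprog([1, 6],
              A_ub=[[-2, -1], [-1, -3]],   # negated '>=' rows
              b_ub=[-3, -4],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)   # the vertex (4, 0) with z_min = 4
```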


Solution. For solving the linear programming problem by the simplex algorithm, we first solve the following problem:

min(x_6 + x_7 + x_8),
x_1 - x_2 + x_3 + x_6 = 1,
x_1 + x_2 - x_4 + x_7 = 1,
x_1 - 2x_2 + x_5 = 1,
2x_1 + x_3 - x_4 + x_8 = 2,
x_i ≥ 0, 1 ≤ i ≤ 8

(we have introduced the artificial variables x_6, x_7, x_8 in the constraints of the initial problem, with an exception for the third constraint, where the variable x_5 is already associated with a unit vector of the matrix of coefficients). The primal feasible basis is constructed from the columns of the matrix A corresponding to the variables x_6, x_7, x_5, x_8. The associated simplex table 1.2 is:
TABLE 1.2

 B.V. | V.B.V. | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8
 -----+--------+----+----+----+----+----+----+----+----
  x6  |   1    |  1 | -1 |  1 |  0 |  0 |  1 |  0 |  0
  x7  |   1    |  1 |  1 |  0 | -1 |  0 |  0 |  1 |  0
  x5  |   1    |  1 | -2 |  0 |  0 |  1 |  0 |  0 |  0
  x8  |   2    |  2 |  0 |  1 | -1 |  0 |  0 |  0 |  1
  z   |   4    |  4 |  0 |  2 | -2 |  0 |  0 |  0 |  0

Using the entering variable choice rule, the vector which enters the basis is a_1. The leaving variable choice rule indicates that we can eliminate any of the basis variables; we choose, for example, the variable x_6, thus the column a_6. We obtain the simplex table 1.3:
TABLE 1.3

 B.V. | V.B.V. | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8
 -----+--------+----+----+----+----+----+----+----+----
  x1  |   1    |  1 | -1 |  1 |  0 |  0 |  1 |  0 |  0
  x7  |   0    |  0 |  2 | -1 | -1 |  0 | -1 |  1 |  0
  x5  |   0    |  0 | -1 | -1 |  0 |  1 | -1 |  0 |  0
  x8  |   0    |  0 |  2 | -1 | -1 |  0 | -2 |  0 |  1
  z   |   0    |  0 |  4 | -2 | -2 |  0 | -4 |  0 |  0

From table 1.3 we see that the value of the objective function is 0. All the artificial variables are 0, but x_7 and x_8 are still basis variables. It is easy to see that the variable x_7 can be eliminated from the basis and replaced with x_2, because the pivot y_72^B = 2 is different from 0. After the computations we obtain the simplex table:

TABLE 1.4

 B.V. | V.B.V. | x1 | x2 |  x3  |  x4  | x5 |  x6  |  x7  | x8
 -----+--------+----+----+------+------+----+------+------+----
  x1  |   1    |  1 |  0 |  1/2 | -1/2 |  0 |  1/2 |  1/2 |  0
  x2  |   0    |  0 |  1 | -1/2 | -1/2 |  0 | -1/2 |  1/2 |  0
  x5  |   0    |  0 |  0 | -3/2 | -1/2 |  1 | -3/2 |  1/2 |  0
  x8  |   0    |  0 |  0 |   0  |   0  |  0 |  -1  |  -1  |  1
  z   |   0    |  0 |  0 |   0  |   0  |  0 |  -2  |  -2  |  0

The artificial variable x_8 cannot be eliminated from the basis variables, because all the values y_8j, 1 ≤ j ≤ 5, are equal to 0. This shows us that the fourth equation of the initial linear programming problem is a consequence of the other equations (the fourth equation is the sum of the first two equations, a situation that could have been observed from the beginning of the computations). In this case the fourth equation can be eliminated, together with the corresponding line of the simplex table. The remaining table does not contain any artificial variables in the basis, thus the first phase is finished. In the second phase we solve the initial linear programming problem, using as initial basis the basis obtained in table 1.4, after the elimination of the last row of the table. The corresponding simplex table is the following:
TABLE 1.5

 B.V. | V.B.V. | x1 | x2 |  x3  |  x4  | x5
 -----+--------+----+----+------+------+----
  x1  |   1    |  1 |  0 |  1/2 | -1/2 |  0
  x2  |   0    |  0 |  1 | -1/2 | -1/2 |  0
  x5  |   0    |  0 |  0 | -3/2 | -1/2 |  1
  z   |   2    |  0 |  0 | -1/2 | -5/2 |  0

Because z_j - c_j ≤ 0 for all j, 1 ≤ j ≤ 5, it results that we have obtained the optimal (degenerate) solution: x*_1 = 1, x*_2 = 0, x*_3 = 0, x*_4 = 0, x*_5 = 0, and the optimal value of the objective function is z* = 2.

Exercise 1.6.9. (diet problem) In a bakery there are two kinds of cakes: brownies, which cost 50 cents each, and mini-cheesecakes, which cost 80 cents each. The bakery is service-oriented and can sell a fraction of any item. The bakery requires three ounces of chocolate to make each brownie (no chocolate is needed in the cheesecakes). Two ounces of sugar are needed for each brownie and four ounces of sugar for each cheesecake. Finally, two ounces of cream cheese are needed for each brownie and five ounces for each cheesecake. A health-conscious snack consumer has decided that he needs at least six total ounces


of chocolate in his snack, along with ten ounces of sugar and eight ounces of cream cheese. The consumer wishes to optimize his purchase by finding the least expensive combination of brownies and cheesecakes that meets these requirements. The data is summarized in the following table:
TABLE 1.6

              | Chocolate | Sugar | Cream Cheese | Cost
 -------------+-----------+-------+--------------+------
 Brownie      |     3     |   2   |      2       |  50
 Cheesecake   |     0     |   4   |      5       |  80
 Requirements |     6     |  10   |      8       |
Find the optimal solution.

Solution. The problem that the consumer must solve is:

min(50x + 80y),
3x ≥ 6,
2x + 4y ≥ 10,
2x + 5y ≥ 8,
x, y ≥ 0,

where x and y represent the numbers of brownies and cheesecakes purchased, respectively. By applying the simplex method of the previous section, we find that the unique solution is (2, 3/2); thus the value of the objective function, computed for the optimal solution, is 220.

We now adopt the perspective of the wholesaler who supplies the baker with the chocolate, sugar, and cream cheese needed to make the goodies. The baker informs the supplier that he intends to purchase at least six ounces of chocolate, ten ounces of sugar, and eight ounces of cream cheese, to meet the consumer's minimum nutritional requirements. He also shows the supplier the other data from the table. The supplier now solves the following optimization problem: how can I set the prices per ounce of chocolate, sugar, and cream cheese so that the baker will buy from me, and so that I will maximize my revenue? The baker will buy only if the total cost of raw materials for brownies is below 50 cents; otherwise he runs the risk of making a loss if the consumer opts to buy brownies. This restriction imposes the following constraint on the prices:

3u_1 + 2u_2 + 2u_3 ≤ 50.

Similarly, he requires the cost of the raw materials for each cheesecake to be below 80 cents, leading to a second constraint:


4u_2 + 5u_3 ≤ 80.

Clearly, all the prices must be non-negative. Moreover, the revenue from the guaranteed sales is 6u_1 + 10u_2 + 8u_3. In summary, the problem that the supplier solves to maximize his guaranteed revenue from the consumer's snack is the following (dual problem):

max(6u_1 + 10u_2 + 8u_3),
3u_1 + 2u_2 + 2u_3 ≤ 50,
4u_2 + 5u_3 ≤ 80,
u_1, u_2, u_3 ≥ 0.

The solution of this problem is u* = (10/3, 20, 0); thus the value of the dual objective function is 220 (which is the same as the value of the objective function computed for the optimal solution of the primal problem).

Exercise 1.6.10. A gardener produces two types of mixtures for planting: gardening mixture and potting mixture. A package of gardening mixture requires 2 kg of soil, 1 kg of peat moss and 1 kg of fertilizer. A package of potting mixture requires 1 kg of soil, 2 kg of peat moss and 3 kg of fertilizer. The gardener has at most 16 kg of soil, 11 kg of peat moss and 15 kg of fertilizer. A package of gardening mixture sells for 3 EURO and a package of potting mixture sells for 5 EURO. How many packages of each type of mixture must the gardener produce to maximize the revenue?

Solution. The problem that the gardener must solve is:

max(3x + 5y),
2x + y ≤ 16,
x + 2y ≤ 11,
x + 3y ≤ 15,
x, y ≥ 0,

where x and y are the numbers of packages of gardening mixture and potting mixture, respectively. The gardener must produce 7 packages of gardening mixture and 2 packages of potting mixture; the revenue is 31 EURO.

Let us see the problem from the point of view of the supplier, who is informed by the gardener that he wants to sell a package of gardening mixture for at least 3 EURO and a package of potting mixture for at least 5 EURO. Also, the gardener


will inform the supplier that he intends to buy 16 kg of soil, 11 kg of peat moss and 15 kg of fertilizer. The supplier will set up the prices u, v and w for 1 kg of soil, 1 kg of peat moss and 1 kg of fertilizer, respectively, such that the function 16u + 11v + 15w is minimized (otherwise the gardener will buy from another supplier). Also, the restrictions regarding the cost of the gardening mixture and of the potting mixture must be imposed: 2u + v + w ≥ 3 and u + 2v + 3w ≥ 5, respectively. Thus, the problem which must be solved by the supplier is the dual of the gardener's problem:

min(16u + 11v + 15w),
2u + v + w ≥ 3,
u + 2v + 3w ≥ 5,
u, v, w ≥ 0.
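The gardener's problem and the supplier's dual can be checked numerically; a sketch with SciPy's linprog, assuming SciPy is available. Both optimal values should equal the 31 EURO found above:

```python
from scipy.optimize import linprog

# Gardener (primal): max 3x + 5y,  2x + y <= 16,  x + 2y <= 11,  x + 3y <= 15
primal = linprog([-3, -5],                 # maximize via minimizing the negative
                 A_ub=[[2, 1], [1, 2], [1, 3]],
                 b_ub=[16, 11, 15],
                 method="highs")

# Supplier (dual): min 16u + 11v + 15w,  2u + v + w >= 3,  u + 2v + 3w >= 5
dual = linprog([16, 11, 15],
               A_ub=[[-2, -1, -1], [-1, -2, -3]],   # negated '>=' rows
               b_ub=[-3, -5],
               method="highs")

print(primal.x, -primal.fun)   # (7, 2) packages and revenue 31 EURO
print(dual.x, dual.fun)        # dual optimum, also 31 EURO
```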
