If a Linear Programming (LP) problem involving several variables and constraints is to be solved by using the simplex method described in Chapter 3, it requires a large amount of computer storage and time. Some techniques, which require less computational time and storage space compared to the original simplex method, have been developed. Among these techniques, the revised simplex method is very popular. The principal difference between the original simplex method and the revised one is that in the former we transform all the elements of the simplex table, while in the latter we need to transform only the elements of an inverse matrix. Associated with every LP problem, another LP problem, called the dual, can be formulated. The solution of a given LP problem, in many cases, can be obtained by solving its dual in a much simpler manner.
As stated above, one of the difficulties in certain practical LP problems is that the number of variables and/or the number of constraints is so large that it exceeds the storage capacity of the available computer. If the LP problem has a special structure, a principle known as the decomposition principle can be used to solve the problem more efficiently. In many practical problems, one will be interested not only in finding the optimum solution to an LP problem, but also in finding how the optimum solution changes when some parameters of the problem, such as the cost coefficients, change. Hence sensitivity or postoptimality analysis becomes very important.
An important special class of LP problems, known as transportation problems, occurs often in practice. These problems can be solved by algorithms that are more efficient (for this class of problems) than the simplex method. Karmarkar's method is an interior method and has been shown to be superior to the simplex method of Dantzig for large problems. The quadratic programming problem is the best‐behaved nonlinear programming problem. It has a quadratic objective function and linear constraints and is convex (for minimization problems). Hence the quadratic programming problem can be solved by suitably modifying the linear programming techniques. All these topics are discussed in this chapter.
We notice that the simplex method requires the computing and recording of an entirely new tableau at each iteration. But much of the information contained in the tableau is not used; only the following items are needed.
1. The relative cost coefficients c̄_j of the nonbasic variables; the smallest of these, c̄_s, determines the variable x_s that has to be brought into the basis in the next iteration.
2. The updated column Ā_s corresponding to x_s and the values of the basic variables b̄_i have to be calculated. With this information, the variable x_r that has to be removed from the basis is found by computing the quantity
b̄_r / ā_rs = min over ā_is > 0 of { b̄_i / ā_is }
and a pivot operation is performed on ā_rs. Thus, only one nonbasic column of the current tableau is useful in finding x_r. Since most linear programming problems involve many more variables (columns) than constraints (rows), considerable effort and storage are wasted in dealing with the columns Ā_j for j ≠ s. Hence it would be more efficient if we could generate the modified cost coefficients c̄_j and the column Ā_s from the original problem data itself. The revised simplex method is used for this purpose; it makes use of the inverse of the current basis matrix in generating the required quantities.
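The quantities above can be generated directly from the basis inverse and the original data. A minimal numpy sketch (all data and variable names illustrative, not from the text):

```python
import numpy as np

# Revised-simplex pricing sketch for: minimize c^T x subject to A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

basis = [2, 3]                      # start with the slack columns as the basis
B_inv = np.linalg.inv(A[:, basis])  # in practice B^-1 is updated, not re-inverted

pi = c[basis] @ B_inv               # simplex multipliers, pi^T = c_B^T B^-1
c_bar = c - pi @ A                  # relative cost coefficients, Eq. (4.10)

s = int(np.argmin(c_bar))           # most negative c_bar -> entering variable x_s
A_s_bar = B_inv @ A[:, s]           # updated column, Eq. (4.9)
b_bar = B_inv @ b                   # current values of the basic variables

# The minimum-ratio test picks the leaving variable x_r.
ratios = np.where(A_s_bar > 0, b_bar / A_s_bar, np.inf)
r = int(np.argmin(ratios))
print(s, r)  # entering column index and pivot row
```

Only the single column Ā_s is formed; the columns Ā_j for j ≠ s are never computed, which is the point of the method.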
Theoretical Development. Although the revised simplex method is applicable for both phase I and phase II computations, the method is initially developed by considering linear programming in phase II for simplicity. Later, a step‐by‐step procedure is given to solve the general linear programming problem involving both phases I and II.
Let the given linear programming problem (phase II) be written in column form as
Minimize
subject to
where the jth column of the coefficient matrix A is given by
Assuming that the linear programming problem has a solution, let
be a basis matrix with
representing the corresponding vectors of basic variables and cost coefficients, respectively. If X B is feasible, we have
As in the regular simplex method, the objective function is included as the (m + 1)th equation and −f is treated as a permanent basic variable. The augmented system can be written as
where
Since B is a feasible basis for the system of Eq. (4.4), the matrix D defined by
D = | B 0 ; c_B^T 1 |
will be a feasible basis for the augmented system of Eq. (4.6). The inverse of D can be found to be
D⁻¹ = | B⁻¹ 0 ; −c_B^T B⁻¹ 1 |
Definition. The row vector
π^T = c_B^T B⁻¹, π = (π_1, π_2, …, π_m)^T
is called the vector of simplex multipliers relative to the f equation. If the computations correspond to phase I, two vectors of simplex multipliers, one relative to the f equation and the other relative to the w equation, are to be defined as
π^T = c_B^T B⁻¹ and σ^T = d_B^T B⁻¹
By premultiplying each column of Eq. (4.6) by D⁻¹, we obtain the following canonical system of equations:
where
From Eq. (4.8), the updated column can be identified as
Ā_j = B⁻¹ A_j (4.9)
and the modified cost coefficient as
c̄_j = c_j − π^T A_j (4.10)
Equations (4.9) and (4.10) can be used to perform a simplex iteration by generating Ā_j and c̄_j from the original problem data, A_j and c_j.
Once c̄_s and Ā_s are computed, the pivot element ā_rs can be identified by using Eqs. (4.1) and (4.2). In the next step, P_s is introduced into the basis and P_{j_r} is removed. This amounts to generating the inverse of the new basis matrix. The computational procedure can be seen by considering the matrix:
where e_i is an (m + 1)-dimensional unit vector with a one in the ith row. Premultiplication of the above matrix by D⁻¹ yields
By carrying out a pivot operation on ā_rs, this matrix transforms to
where all the elements of the vector β are, in general, nonzero and the second partition gives the desired matrix, the new D⁻¹. It can be seen that the first partition (matrix I) is included only to illustrate the transformation, and it can be dropped in actual computations. Thus, in practice, we write the (m + 1) × (m + 2) matrix
and carry out a pivot operation on ā_rs. The first m + 1 columns of the resulting matrix will give us the desired matrix, the new D⁻¹.
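The pivot-based update of the basis inverse can be sketched numerically. A toy 2 × 2 instance (all names and data illustrative): the working matrix is [B⁻¹ | Ā_s], and pivoting on row r of the last column leaves the new basis inverse in the first columns.

```python
import numpy as np

B_inv = np.eye(2)                        # current basis inverse (identity basis)
A_s_bar = np.array([1.0, 2.0])           # updated entering column B^-1 A_s
r = 1                                    # pivot row from the minimum-ratio test

# Form [B^-1 | A_s_bar] and pivot on row r of the last column.
T = np.hstack([B_inv, A_s_bar.reshape(-1, 1)])
T[r, :] /= T[r, -1]                      # scale the pivot row
for i in range(T.shape[0]):
    if i != r:
        T[i, :] -= T[i, -1] * T[r, :]    # eliminate the pivot column elsewhere

B_inv_new = T[:, :-1]                    # first columns give the new inverse

# Direct check: the new basis keeps column [1, 0] in position 0 and takes
# the entering column A_s = [1, 2] in position r = 1.
B_new = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
print(np.allclose(B_inv_new, np.linalg.inv(B_new)))  # True
```

The update costs O(m²) per iteration, against O(m³) for recomputing the inverse from scratch.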
Procedure. The detailed iterative procedure of the revised simplex method to solve a general linear programming problem is given by the following steps.
Here the constants b_i, i = 1 to m, are made nonnegative by changing, if necessary, the signs of all terms in the original equations before the addition of the artificial variables x_{n+i}, i = 1 to m. Since the original infeasibility form is given by
w = x_{n+1} + x_{n+2} + ⋯ + x_{n+m} (4.15)
the artificial variables can be eliminated from Eq. (4.15) by adding the first m equations of Eq. (4.14) and subtracting the result from Eq. (4.15). The resulting equation is shown as the last equation in Eq. (4.14), with
d_j = −∑_{i=1}^m a_ij and w_0 = ∑_{i=1}^m b_i
Equations (4.14) are written in tableau form as shown in Table 4.1.
In general, at the start of some cycle k (k = 0 to start with) we open a table similar to Table 4.2, as shown in Table 4.4. This can also be interpreted as composed of the inverse of the current basis, B −1 = [β ij ], two rows for the simplex multipliers π i and σ i , a column for the values of the basic variables in the basic solution, and a column for the variable x s . At the start of any cycle, all entries in the table, except the last column, are known.
and entered in a tabular form as shown in Table 4.3. For cycle 0, σ^T = 0 and hence d̄_j ≡ d_j.
If some d̄_j < 0, choose x_s as the variable to enter the basis in the next cycle in place of the present rth basic variable (r will be determined later) such that
d̄_s = min { d̄_j < 0 }
On the other hand, if the current cycle corresponds to phase II, find whether all c̄_j ≥ 0. If all c̄_j ≥ 0, the current basic feasible solution is also an optimal solution and hence terminate the process. If some c̄_j < 0, choose x_s to enter the basic set in the next cycle in place of the rth basic variable (r to be found later), such that
c̄_s = min { c̄_j < 0 }
Compute the updated column Ā_s = B⁻¹ A_s and enter it in the last column of Table 4.2 (if cycle 0) or Table 4.4 (if cycle k).
If all ā_is ≤ 0, the class of solutions x_s ≥ 0 arbitrary, x_ji = b̄_i − ā_is x_s if x_ji is a basic variable, and x_j = 0 if x_j is a nonbasic variable (j ≠ s), satisfies the original system and has the property that f → −∞ as x_s → ∞; that is, the objective function is unbounded.
Hence terminate the process. On the other hand, if some ā_is > 0, select the variable x_r that can be dropped in the next cycle as
b̄_r / ā_rs = min over ā_is > 0 of { b̄_i / ā_is }
In the case of a tie, choose r at random.
Table 4.1 Original System of Equations.
Admissible (original) variables | Artificial variables | Objective variables | |
x_1 | x_2 | ⋯ | x_j | ⋯ | x_n | x_n+1 | x_n+2 | ⋯ | x_n+m | −f | −w | Constant
a_11 | a_12 | ⋯ | a_1j | ⋯ | a_1n | 1 | | | | 0 | 0 | b_1
a_21 | a_22 | ⋯ | a_2j | ⋯ | a_2n | | 1 | | | 0 | 0 | b_2
⋮ | | | | | | | | ⋱ | | | | ⋮
a_m1 | a_m2 | ⋯ | a_mj | ⋯ | a_mn | | | | 1 | 0 | 0 | b_m
c_1 | c_2 | ⋯ | c_j | ⋯ | c_n | 0 | 0 | ⋯ | 0 | 1 | 0 | 0
d_1 | d_2 | ⋯ | d_j | ⋯ | d_n | 0 | 0 | ⋯ | 0 | 0 | 1 | −w_0
The identity columns under the artificial variables x_n+1, …, x_n+m form the initial basis.
Table 4.2 Table at the Beginning of Cycle 0.
Columns of the canonical form
Basic variables | x_n+1 | x_n+2 | ⋯ | x_n+r | ⋯ | x_n+m | −f | −w | Value of the basic variable | x_s^a
x_n+1 | 1 | | | | | | | | b_1 |
x_n+2 | | 1 | | | | | | | b_2 |
⋮ | | | | | | | | | ⋮ |
x_n+r | | | | 1 | | | | | b_r |
⋮ | | | | | | | | | ⋮ |
x_n+m | | | | | | 1 | | | b_m |
−f | 0 | 0 | ⋯ | 0 | ⋯ | 0 | 1 | | 0 |
−w | 0 | 0 | ⋯ | 0 | ⋯ | 0 | | 1 | −w_0 |
The identity block under the columns x_n+1, …, x_n+m is the inverse of the (initial) basis.
a This column is blank at the beginning of cycle 0 and filled up only at the end of cycle 0.
Table 4.3 Relative Cost Factor d̄_j or c̄_j.
Variable x_j
Cycle number | x_1 | x_2 | ⋯ | x_n | x_n+1 | x_n+2 | ⋯ | x_n+m
0 | d_1 | d_2 | ⋯ | d_n | 0 | 0 | ⋯ | 0
k | Use the values of σ_i (if phase I) or π_i (if phase II) of the current cycle and compute d̄_j = d_j − ∑_{i=1}^m σ_i a_ij or c̄_j = c_j − ∑_{i=1}^m π_i a_ij. Enter d̄_j or c̄_j in the row corresponding to the current cycle and choose the pivot column s such that d̄_s = min d̄_j (if phase I) or c̄_s = min c̄_j (if phase II).
Table 4.4 Table at the Beginning of Cycle k.
Columns of the original canonical form
Basic variable | x_n+1 ⋯ x_n+m | −f | −w | Value of the basic variable | x_s^a
x_j1 | β_11 ⋯ β_1m | | | b̄_1 |
⋮ | ⋮ | | | ⋮ |
x_jr | β_r1 ⋯ β_rm | | | b̄_r |
⋮ | ⋮ | | | ⋮ |
x_jm | β_m1 ⋯ β_mm | | | b̄_m |
−f | −π_1 ⋯ −π_m | 1 | | |
−w | −σ_1 ⋯ −σ_m | | 1 | |
The block [β_ij] under the columns x_n+1, …, x_n+m is the inverse of the current basis (β_ij = ā_{i,n+j}, the updated coefficient of the artificial variable x_{n+j}); the entries −π_j and −σ_j appear under the column x_{n+j} in the −f and −w rows, respectively.
a This column is blank at the start of cycle k and is filled up only at the end of cycle k.
Associated with every linear programming problem, called the primal, there is another linear programming problem called its dual. These two problems possess very interesting and closely related properties. If the optimal solution to any one is known, the optimal solution to the other can readily be obtained. In fact, it is immaterial which problem is designated the primal since the dual of a dual is the primal. Because of these properties, the solution of a linear programming problem can be obtained by solving either the primal or the dual, whichever is easier. This section deals with the primal–dual relations and their application in solving a given linear programming problem.
A nearly symmetric relation between a primal problem and its dual problem can be seen by considering the following system of linear inequalities (rather than equations).
Primal Problem
Dual Problem. As a definition, the dual problem can be formulated by transposing the rows and columns of Eq. (4.17) including the right‐hand side and the objective function, reversing the inequalities and maximizing instead of minimizing. Thus, by denoting the dual variables as y 1 , y 2 , …, y m , the dual problem becomes
Equations (4.17) and (4.18) are called symmetric primal–dual pairs and it is easy to see from these relations that the dual of the dual is the primal.
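The symmetric pair can be checked numerically. A small sketch with scipy (toy data, not from the text), confirming that the primal and dual optimal values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Symmetric primal-dual pair:
#   primal: min c^T x  s.t. A x >= b, x >= 0
#   dual:   max b^T y  s.t. A^T y <= c, y >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([2.0, 3.0])

# linprog uses A_ub x <= b_ub, so A x >= b becomes -A x <= -b.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# The dual max b^T y is solved as min -b^T y with A^T y <= c.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)  # equal optimal values (strong duality)
```

For this data both problems attain the value 6.8, at x = (1.6, 1.2) and y = (1.4, 0.2).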
Although the primal–dual relations of Section 4.3.1 are derived by considering a system of inequalities in nonnegative variables, it is always possible to obtain the primal–dual relations for a general system consisting of a mixture of equations, inequalities of the less-than or greater-than type, and variables that are nonnegative or unrestricted in sign, by reducing the system to an equivalent inequality system of the form of Eq. (4.17). The correspondence rules to be applied in deriving the general primal–dual relations are given in Table 4.11, and the primal–dual relations themselves are shown in Table 4.12.
Table 4.11 Correspondence Rules for Primal–Dual Relations.
Primal quantity | Corresponding dual quantity |
Objective function: Minimize c T X | Maximize Y T b |
Variable x i ≥ 0 | ith constraint Y T A i ≤ c i (inequality) |
Variable x i unrestricted in sign | ith constraint Y T A i = c i (equality) |
jth constraint, A j X = b j (equality) | jth variable y j unrestricted in sign |
jth constraint, A j X ≥ b j (inequality) | jth variable y j ≥ 0 |
Coefficient matrix A ≡ [A 1 … A m ] | Coefficient matrix A T ≡ [A 1 , …, A m ]T |
Right‐hand‐side vector b | Right‐hand‐side vector c |
Cost coefficients c | Cost coefficients b |
Table 4.12 Primal–Dual Relations.
Primal problem | Corresponding dual problem
Minimize f = ∑_{i=1}^n c_i x_i | Maximize ν = ∑_{j=1}^m b_j y_j
subject to | subject to
∑_{i=1}^n a_ji x_i = b_j, j = 1, 2, …, m*; | ∑_{j=1}^m a_ji y_j ≤ c_i, i = 1, 2, …, n*;
∑_{i=1}^n a_ji x_i ≥ b_j, j = m* + 1, …, m | ∑_{j=1}^m a_ji y_j = c_i, i = n* + 1, …, n
where | where
x_i ≥ 0, i = 1, 2, …, n*; | y_i ≥ 0, i = m* + 1, m* + 2, …, m;
and | and
x_i unrestricted in sign, i = n* + 1, n* + 2, …, n | y_i unrestricted in sign, i = 1, 2, …, m*
If m* = m and n* = n, the primal problem shown in Table 4.12 reduces to the standard form and the general primal–dual relations take the special form shown in Table 4.13. It is to be noted that the symmetric primal–dual relations, discussed in Section 4.3.1, can also be obtained as a special case of the general relations by setting m* = 0 and n* = n in the relations of Table 4.12.
Table 4.13 Primal–Dual Relations Where m * = m and n * = n.
Primal problem | Corresponding dual problem |
Minimize f = ∑_{i=1}^n c_i x_i | Maximize ν = ∑_{i=1}^m b_i y_i
subject to | subject to
∑_{j=1}^n a_ij x_j = b_i, i = 1, 2, …, m | ∑_{j=1}^m a_ji y_j ≤ c_i, i = 1, 2, …, n
where | where |
x i ≥ 0, i = 1, 2, …, n | y i is unrestricted in sign, i = 1, 2,⋯, m |
In matrix form | In matrix form |
Minimize f = c T X | Maximize ν = Y T b |
subject to | subject to |
AX = b | A T Y ≤ c |
where | where |
X ≥ 0 | Y is unrestricted in sign |
The following theorems are useful in developing a method for solving LP problems using dual relationships. The proofs of these theorems can be found in Ref. [10].
There exist a number of situations in which it is required to find the solution of a linear programming problem for a number of different right‐hand‐side vectors b (i). Similarly, in some cases, we may be interested in adding some more constraints to a linear programming problem for which the optimal solution is already known. When the problem has to be solved for different vectors b (i), one can always find the desired solution by applying the two phases of the simplex method separately for each vector b (i). However, this procedure will be inefficient since the vectors b (i) often do not differ greatly from one another. Hence the solution for one vector, say, b (1) may be close to the solution for some other vector, say, b (2). Thus a better strategy is to solve the linear programming problem for b (1) and obtain an optimal basis matrix B. If this basis happens to be feasible for all the right‐hand‐side vectors, that is, if
B⁻¹ b^{(i)} ≥ 0 for all i, then it will be optimal for all cases. On the other hand, if the basis B is not feasible for some of the right‐hand‐side vectors, that is, if some of the components of B⁻¹ b^{(r)} are negative for a particular vector b^{(r)},
then the vector of simplex multipliers
π^T = c_B^T B⁻¹
will form a dual feasible solution since the quantities
c̄_j = c_j − π^T A_j ≥ 0
are independent of the right‐hand‐side vector b^{(r)}. A similar situation exists when the problem has to be solved with additional constraints.
In both the situations discussed above, we have an infeasible basic (primal) solution whose associated dual solution is feasible. Several methods have been proposed, as variants of the regular simplex method, to solve a linear programming problem by starting from an infeasible solution to the primal. All these methods work in an iterative manner such that they force the solution to become feasible as well as optimal simultaneously at some stage. Among all the methods, the dual simplex method developed by Lemke [2] and the primal–dual method developed by Dantzig et al. [3] have been most widely used. Both these methods have the following important characteristics:
We shall consider only the dual simplex algorithm in this section.
Algorithm. As stated earlier, the dual simplex method requires the availability of a dual feasible solution that is not primal feasible to start with. It is the same as the simplex method applied to the dual problem but is developed such that it can make use of the same tableau as the primal method. Computationally, the dual simplex algorithm also involves a sequence of pivot operations, but with different rules (compared to the regular simplex method) for choosing the pivot element.
Let the problem to be solved be initially in canonical form with some of the b̄_i < 0, the relative cost coefficients corresponding to the basic variables c̄_j = 0, and all other c̄_j ≥ 0. Since some of the b̄_i are negative, the primal solution will be infeasible; and since all c̄_j ≥ 0, the corresponding dual solution will be feasible. Then the dual simplex method works according to the following iterative steps.
If all ā_rj ≥ 0 in the pivot row r, the primal will not have any feasible (optimal) solution.
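These pivot rules can be sketched on a small tableau. The toy problem below (not the text's example; all names illustrative) is min 2x_1 + 3x_2 subject to x_1 + x_2 ≥ 2, x_1 + 2x_2 ≥ 3, x ≥ 0; with slacks x_3, x_4 the starting basis is primal infeasible but dual feasible.

```python
import numpy as np

# Rows: -x1 - x2 + x3 = -2, -x1 - 2x2 + x4 = -3 (last column holds b_bar).
T = np.array([[-1.0, -1.0, 1.0, 0.0, -2.0],
              [-1.0, -2.0, 0.0, 1.0, -3.0]])
cbar = np.array([2.0, 3.0, 0.0, 0.0])   # relative cost coefficients (all >= 0)
basis = [2, 3]
f = 0.0

while (T[:, -1] < -1e-9).any():
    r = int(np.argmin(T[:, -1]))        # most negative b_bar picks the leaving row
    neg = T[r, :-1] < -1e-9
    if not neg.any():
        raise RuntimeError("primal has no feasible solution")
    ratios = np.where(neg, cbar / np.where(neg, -T[r, :-1], 1.0), np.inf)
    s = int(np.argmin(ratios))          # dual ratio test picks the entering column
    T[r, :] /= T[r, s]                  # pivot on (r, s)
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, s] * T[r, :]
    f += cbar[s] * T[r, -1]             # objective update (uses the old cbar[s])
    cbar = cbar - cbar[s] * T[r, :-1]
    basis[r] = s

print(basis, T[:, -1], f)               # optimum x1 = x2 = 1, f = 5
```

Note that every intermediate tableau is dual feasible (cbar stays nonnegative) while primal feasibility is achieved only at termination, the defining feature of the method.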
The following example is considered to illustrate the dual simplex method.
Some of the linear programming problems encountered in practice may be very large in terms of the number of variables and/or constraints. If the problem has some special structure, it is possible to obtain the solution by applying the decomposition principle developed by Dantzig and Wolfe [4]. In the decomposition method, the original problem is decomposed into small subproblems and these subproblems are then solved almost independently. The procedure, when applicable, has the advantage of making it possible to solve large‐scale problems that may otherwise be computationally very difficult or infeasible. As an example of a problem for which the decomposition principle can be applied, consider a company having two factories, producing three and two products, respectively. Each factory has its own internal resources for production, namely, workers and machines. The two factories are coupled by the fact that there is a shared resource that both use, for example, a raw material whose availability is limited. Let b_2 and b_3 be the maximum available internal resources for factory 1, and let b_4 and b_5 be the similar availabilities for factory 2. If the limitation on the common resource is b_1, the problem can be stated as follows:
subject to
where x i and y j are the quantities of the various products produced by the two factories (design variables) and the a ij are the quantities of resource i required to produce 1 unit of product j.
An important characteristic of the problem stated in Eq. (4.24) is that its constraints consist of two independent sets of inequalities. The first set consists of a coupling constraint involving all the design variables, and the second set consists of two groups of constraints, each group containing the design variables of that group only. This problem can be generalized as follows:
subject to
where
It can be noted that if the size of the matrix A_k is (r_0 × m_k) and that of B_k is (r_k × m_k), the problem has (r_0 + r_1 + ⋯ + r_p) constraints and (m_1 + m_2 + ⋯ + m_p) variables.
Since there are a large number of constraints in the problem stated in Eq. (4.25), it may not be computationally efficient to solve it by using the regular simplex method. However, the decomposition principle can be used to solve it in an efficient manner. The basic solution procedure using the decomposition principle is given by the following steps.
The subsidiary constraint set
B_k X_k = b_k (4.27)
represents r_k equality constraints. These constraints, along with the requirement X_k ≥ 0, define the set of feasible solutions of Eq. (4.27). Assuming that this set of feasible solutions is a bounded convex set, let s_k be the number of vertices of this set. By using the definition of convex combination of a set of points, any point X_k satisfying Eq. (4.27) can be represented as
X_k = μ_{k,1} X_k^{(1)} + μ_{k,2} X_k^{(2)} + ⋯ + μ_{k,s_k} X_k^{(s_k)}, μ_{k,i} ≥ 0, μ_{k,1} + μ_{k,2} + ⋯ + μ_{k,s_k} = 1 (4.28)
where X_k^{(1)}, X_k^{(2)}, …, X_k^{(s_k)} are the extreme points of the feasible set defined by Eq. (4.27). These extreme points can be found by solving Eq. (4.27).
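Eq. (4.28) can be illustrated numerically: given the vertices of a bounded feasible set, weights μ reproducing any feasible point can be found by an LP feasibility solve. The sketch below uses the vertices of the unit box (illustrative data, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
point = np.array([0.25, 0.6])           # any feasible (interior) point

# Find mu >= 0 with sum(mu) = 1 and sum(mu_i * V_i) = point (zero objective).
A_eq = np.vstack([vertices.T, np.ones(len(vertices))])
b_eq = np.append(point, 1.0)
res = linprog(np.zeros(len(vertices)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(vertices))

mu = res.x
print(mu, mu @ vertices)                # weights and the reconstructed point
```

In the decomposition method this substitution is done symbolically, turning each subproblem's variables X_k into convex-combination weights μ_{k,i} in the master problem.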
By substituting Eq. (4.28) into Eq. (4.25), it is possible to eliminate the subsidiary constraint sets from the original problem and obtain the following equivalent form:
Minimize ∑_{k=1}^p ∑_{i=1}^{s_k} (c_k^T X_k^{(i)}) μ_{k,i}
subject to
∑_{k=1}^p ∑_{i=1}^{s_k} (A_k X_k^{(i)}) μ_{k,i} = b_0; ∑_{i=1}^{s_k} μ_{k,i} = 1, k = 1, 2, …, p; μ_{k,i} ≥ 0 (4.32)
Since the extreme points are known from the solution of the set B k X k = b k , X k ≥ 0 , k = 1, 2, …, p, and since c k and A k , k = 1, 2, …, p, are known as problem data, the unknowns in Eq. (4.32) are μ j,i , i = 1, 2, …, s j ; j = 1, 2, …, p. Hence μ j,i will be the new decision variables of the modified problem stated in Eq. (4.32).
where
In most practical problems, we are interested not only in the optimal solution of the LP problem, but also in how the solution changes when the parameters of the problem change. The change in the parameters may be discrete or continuous. The study of the effect of discrete parameter changes on the optimal solution is called sensitivity analysis, while that of continuous changes is termed parametric programming. One way to determine the effects of changes in the parameters is to solve a series of new problems, one for each change made. This is, however, very inefficient from a computational point of view. Some techniques that take advantage of the properties of the simplex solution have been developed to perform sensitivity analysis. We study some of these techniques in this section. There are five basic types of parameter changes that affect the optimal solution:
In general, when a parameter is changed, it results in one of three cases:
Suppose that we have found the optimal solution to an LP problem. Let us now change the b_i to b_i + Δb_i so that the new problem differs from the original only on the right‐hand side. Our interest is to investigate the effect of changing b_i to b_i + Δb_i on the original optimum. We know that a basis is optimal if the relative cost coefficients corresponding to the nonbasic variables are nonnegative. By considering the procedure according to which the c̄_j are obtained, we can see that the values of c̄_j are not related to the b_i. The values of c̄_j depend only on the basis, on the coefficients of the constraint matrix, and on the original coefficients of the objective function. The relation is given in Eq. (4.10):
c̄_j = c_j − π^T A_j
Thus, changes in b_i will affect the values of the basic variables in the optimal solution, and the optimality of the basis will not be affected provided that the changes made in b_i do not render the basic solution infeasible. Thus, if the new basic solution remains feasible for the new right‐hand side, that is, if
X_B = B⁻¹ (b + Δb) ≥ 0 (4.34)
then the original optimal basis, B, also remains optimal for the new problem. Since the original solution, say,
X_B = (x_1, x_2, …, x_m)^T
is given by
X_B = B⁻¹ b (4.35)
Equation (4.34) can also be expressed as
∑_{j=1}^m β_ij (b_j + Δb_j) ≥ 0, i = 1, 2, …, m (4.36)
where
[β_ij] = B⁻¹
Hence the original optimal basis B remains optimal provided that the changes Δb_i made in b_i satisfy the inequalities (4.36). The change in the value of the ith optimal basic variable, Δx_i, due to the change in b_i is given by
ΔX_B = B⁻¹ Δb
that is,
Δx_i = ∑_{j=1}^m β_ij Δb_j, i = 1, 2, …, m
Finally, the change in the optimal value of the objective function (Δf) due to the change Δb_i can be obtained as
Δf = c_B^T ΔX_B = π^T Δb = ∑_{i=1}^m π_i Δb_i
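The relations above can be verified with a small numpy sketch (the basis, costs, and perturbation below are illustrative, not from the text):

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [1.0, 0.0]])           # optimal basis columns (assumed)
c_B = np.array([2.0, 3.0])           # their cost coefficients
b = np.array([4.0, 1.0])
db = np.array([0.5, -0.2])           # change in the right-hand side

B_inv = np.linalg.inv(B)
pi = c_B @ B_inv                     # simplex multipliers

x_B = B_inv @ b                      # original basic solution
dx = B_inv @ db                      # change in the basic variables
df = pi @ db                         # change in the optimal objective

feasible = (x_B + dx >= 0).all()     # basis remains optimal if still feasible
print(dx, df, feasible)
```

Note that df computed from the multipliers agrees with c_B·dx, so the objective change can be priced without re-solving.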
Suppose that the changes made in b i (Δb i ) are such that the inequality (4.34) is violated for some variables so that these variables become infeasible for the new right‐hand‐side vector. Our interest in this case will be to determine the new optimal solution. This can be done without reworking the problem from the beginning by proceeding according to the following steps:
The procedure outlined above saves considerable time and effort compared to reworking the problem from the beginning, provided that only a few variables become infeasible with the new right‐hand side. However, if the number of variables that become infeasible is not small, the procedure might require almost as much effort as reworking the problem from the beginning.
The problem here is to find the effect of changing the cost coefficients from c_j to c_j + Δc_j on the optimal solution obtained with c_j. The relative cost coefficients corresponding to the nonbasic variables, x_{m+1}, x_{m+2}, …, x_n, are given by Eq. (4.10):
c̄_j = c_j − π^T A_j = c_j − ∑_{i=1}^m π_i a_ij, j = m + 1, m + 2, …, n (4.40)
where the simplex multipliers π_i are related to the cost coefficients of the basic variables by the relation
π^T = c_B^T B⁻¹ (4.41)
that is,
π_i = ∑_{k=1}^m c_k β_ki, i = 1, 2, …, m
From Eqs. (4.40) and (4.41), we obtain
c̄_j = c_j − ∑_{i=1}^m (∑_{k=1}^m c_k β_ki) a_ij, j = m + 1, m + 2, …, n (4.42)
If the c_j are changed to c_j + Δc_j, the original optimal solution remains optimal provided that the new values of the relative cost coefficients, c̄′_j, satisfy the relation
c̄′_j = c̄_j + Δc_j − ∑_{i=1}^m Δπ_i a_ij ≥ 0, j = m + 1, m + 2, …, n (4.43)
where the Δπ_i denote the changes in the simplex multipliers caused by the changes in the cost coefficients of the basic variables, and the c̄_j indicate the values of the relative cost coefficients corresponding to the original optimal solution.
In particular, if changes are made only in the cost coefficients of the nonbasic variables, Eq. (4.43) reduces to
c̄_j + Δc_j ≥ 0, j = m + 1, m + 2, …, n
If Eq. (4.43) is satisfied, the changes made in c_j, Δc_j, will not affect the optimal basis and the values of the basic variables. The only change that occurs is in the optimal value of the objective function, according to
Δf = Δc_B^T X_B = ∑_{j=1}^m Δc_j x_j
and this change will be zero if only the c j of nonbasic variables are changed.
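A minimal numeric check of this condition, for the simpler case where only nonbasic cost coefficients are perturbed (multipliers, columns, and data below are illustrative):

```python
import numpy as np

A_nonbasic = np.array([[1.0, 0.0],
                       [2.0, 1.0]])      # columns of the nonbasic variables
c_nonbasic = np.array([4.0, 1.0])
pi = np.array([1.5, 0.5])                # simplex multipliers at the optimum

c_bar = c_nonbasic - pi @ A_nonbasic     # original relative cost coefficients
dc = np.array([-1.0, 0.2])               # proposed changes in the nonbasic c_j

# The basis stays optimal if the perturbed relative costs remain nonnegative.
still_optimal = (c_bar + dc >= 0).all()
print(c_bar, still_optimal)
```

Here c̄ = (1.5, 0.5) and c̄ + Δc = (0.5, 0.7) ≥ 0, so the old basis survives the change and the objective value is untouched.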
Suppose that Eq. (4.43) is violated for some of the nonbasic variables. Then it is possible to improve the value of the objective function by bringing any nonbasic variable that violates Eq. (4.43) into the basis, provided that it can be assigned a nonzero value. This can be done easily with the help of the previous optimal tableau. Since some of the c̄_j are negative, we start the optimization procedure again, using the old optimum as an initial feasible solution, and continue the iterative process until the new optimum is found. As in the case of changing the right‐hand side b_i, the effectiveness of this procedure depends on the number of violations of Eq. (4.43) by the new values c_j + Δc_j.
In some practical problems, it may become necessary to solve the optimization problem with a series of objective functions. This can be accomplished without reworking the entire problem for each new objective function. Assume that the optimum solution for the first objective function has been found by the regular procedure. Then consider the second objective function as obtained by changing the first one, and evaluate Eq. (4.43). If the resulting c̄_j ≥ 0, the old optimum remains optimal and one can proceed to the next objective function in the same manner. On the other hand, if one or more of the resulting c̄_j < 0, we can adopt the procedure outlined above and continue the iterative process using the old optimum as the starting feasible solution. After the optimum is found, we switch to the next objective function.
Suppose that the optimum solution of an LP problem with n variables x_1, x_2, …, x_n has been found and we want to examine the effect of adding some more variables x_{n+k}, k = 1, 2, …, on the optimum solution. Let the constraint coefficients and the cost coefficients corresponding to the new variables x_{n+k} be denoted by a_{i,n+k}, i = 1 to m, and c_{n+k}, respectively. If the new variables are treated as additional nonbasic variables in the old optimum solution, the corresponding relative cost coefficients are given by
c̄_{n+k} = c_{n+k} − ∑_{i=1}^m π_i a_{i,n+k}
where π_1, π_2, …, π_m are the simplex multipliers corresponding to the original optimum solution. The original optimum remains optimal for the new problem provided that c̄_{n+k} ≥ 0 for all k. However, if one or more c̄_{n+k} < 0, it pays to bring some of the new variables into the basis, provided that they can be assigned a nonzero value. To bring a new variable into the basis, we first have to transform the coefficients a_{i,n+k} into ā_{i,n+k} so that the columns of the new variables correspond to the canonical form of the old optimal basis. This can be done by using Eq. (4.9) as
Ā_{n+k} = B⁻¹ A_{n+k}
that is,
ā_{i,n+k} = ∑_{j=1}^m β_ij a_{j,n+k}, i = 1, 2, …, m
where B⁻¹ = [β_ij] is the inverse of the old optimal basis. The rules for bringing a new variable into the basis, finding a new basic feasible solution, testing this solution for optimality, and the subsequent procedure are the same as those of the regular simplex method.
Here the problem is to investigate the effect of changing the coefficient a_ij to a_ij + Δa_ij after finding the optimum solution with a_ij. There are two possibilities in this case. The first possibility occurs when all the coefficients a_ij in which changes are made belong to the columns of those variables that are nonbasic in the old optimal solution. In this case, the effect of changing a_ij on the optimal solution can be investigated by adopting the procedure outlined in the preceding section. The second possibility occurs when the changed coefficients a_ij correspond to a basic variable, say, x_j0, of the old optimal solution. The following procedure can be adopted to examine the effect of changing a_{i,j0} to a_{i,j0} + Δa_{i,j0}.
and cost coefficient
where Δc k = 0 for k = 1, 2, …, j 0 − 1, j 0 + 1, …, m and Δc j0 = N − c j0.
Suppose that we have solved a LP problem with m constraints and obtained the optimal solution. We want to examine the effect of adding some more inequality constraints on the original optimum solution. For this we evaluate the new constraints by substituting the old optimal solution and see whether they are satisfied. If they are satisfied, it means that the inclusion of the new constraints in the old problem would not have affected the old optimum solution, and hence the old optimal solution remains optimal for the new problem also. On the other hand, if one or more of the new constraints are not satisfied by the old optimal solution, we can solve the problem without reworking the entire problem by proceeding as follows.
This section deals with an important class of LP problems called the transportation problem. As the name indicates, a transportation problem is one in which the objective for minimization is the cost of transporting a certain commodity from a number of origins to a number of destinations. Although the transportation problem can be solved using the regular simplex method, its special structure offers a more convenient procedure for solving this type of problem. This procedure is based on the same theory as the simplex method, but it makes use of some shortcuts that yield a simpler computational scheme.
Suppose that there are m origins R_1, R_2, …, R_m (e.g. warehouses) and n destinations D_1, D_2, …, D_n (e.g. factories). Let a_i be the amount of a commodity available at origin i (i = 1, 2, …, m) and b_j be the amount required at destination j (j = 1, 2, …, n). Let c_ij be the cost per unit of transporting the commodity from origin i to destination j. The objective is to determine the amounts x_ij transported from origin i to destination j such that the total transportation cost is minimized. This problem can be formulated mathematically as
Minimize f = ∑_{i=1}^m ∑_{j=1}^n c_ij x_ij (4.52)
subject to
∑_{j=1}^n x_ij = a_i, i = 1, 2, …, m (4.53)
∑_{i=1}^m x_ij = b_j, j = 1, 2, …, n (4.54)
x_ij ≥ 0, i = 1, 2, …, m; j = 1, 2, …, n (4.55)
Clearly, this is an LP problem in mn variables and m + n equality constraints.
Equation (4.53) states that the total amount of the commodity transported from origin i to the various destinations must equal the amount available at origin i (i = 1, 2, …, m), while Eq. (4.54) states that the total amount of the commodity received by destination j from all the origins must equal the amount required at destination j (j = 1, 2, …, n). The nonnegativity conditions, Eq. (4.55), are added since negative values of any x_ij have no physical meaning. It is assumed that the total demand equals the total supply, that is,
∑_{i=1}^m a_i = ∑_{j=1}^n b_j (4.56)
Equation (4.56), called the consistency condition, must be satisfied if a solution is to exist. This can be seen easily since
∑_{i=1}^m a_i = ∑_{i=1}^m (∑_{j=1}^n x_ij) = ∑_{j=1}^n (∑_{i=1}^m x_ij) = ∑_{j=1}^n b_j
The problem stated in Eqs. (4.52)-(4.56) was originally formulated and solved by Hitchcock in 1941 [6]. This was also considered independently by Koopmans in 1947 [7]. Because of these early investigations the problem is sometimes called the Hitchcock–Koopmans transportation problem. The special structure of the transportation matrix can be seen by writing the equations in standard form:
We notice the following properties from Eq. (4.58):
These are the special properties of the transportation problem that allow development of the transportation technique. To facilitate the identification of a starting solution, the system of equations (4.58) is represented in the form of an array, called the transportation array, as shown in Figure 4.2. In all the techniques developed for solving the transportation problem, the calculations are made directly on the transportation array.
Computational Procedure. The solution of an LP problem, in general, requires a calculator or, if the problem is large, a high‐speed digital computer. On the other hand, the solution of a transportation problem can often be obtained with the use of a pencil and paper, since additions and subtractions are the only calculations required. The basic steps involved in the solution of a transportation problem are
The details of these steps are given in Ref. [10].
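As an illustration of the first of these steps, one common way to generate a starting basic feasible solution (a standard textbook device, not necessarily the procedure of Ref. [10]) is the northwest‐corner rule: allocate as much as possible to the top‐left cell of the transportation array, cross out the exhausted row or column, and repeat. A minimal sketch in Python, with invented supply and demand data:

```python
import numpy as np

def northwest_corner(supply, demand):
    """Starting basic feasible solution for a balanced transportation problem.
    Allocates greedily beginning at the top-left cell of the transportation array."""
    a, b = list(supply), list(demand)
    # consistency condition: total supply must equal total demand
    assert abs(sum(a) - sum(b)) < 1e-9, "total supply must equal total demand"
    m, n = len(a), len(b)
    x = np.zeros((m, n))
    i = j = 0
    while i < m and j < n:
        q = min(a[i], b[j])   # ship as much as possible through cell (i, j)
        x[i, j] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0:
            i += 1            # origin i exhausted: move down to the next row
        else:
            j += 1            # destination j satisfied: move right
    return x
```

For a balanced problem with m origins and n destinations, the rule produces m + n − 1 occupied cells, which is exactly the number of basic variables noted in the properties above.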
Karmarkar proposed a new method in 1984 for solving large‐scale linear programming problems very efficiently. The method is known as an interior method since it finds improved search directions strictly in the interior of the feasible space. This is in contrast with the simplex method, which searches along the boundary of the feasible space by moving from one feasible vertex to an adjacent one until the optimum point is found. For large LP problems, the number of vertices will be quite large and hence the simplex method would become very expensive in terms of computer time. Along with many other applications, Karmarkar's method has been applied to aircraft route scheduling problems. It was reported [19] that Karmarkar's method solved problems involving 150 000 design variables and 12 000 constraints in one hour while the simplex method required four hours for solving a smaller problem involving only 36 000 design variables and 10 000 constraints. In fact, it was found that Karmarkar's method is as much as 50 times faster than the simplex method for large problems.
Karmarkar's method is based on the following two observations:
It is well known that in many numerical problems, by changing the units of data or rescaling (e.g. using feet instead of inches), we may be able to reduce the numerical instability. In a similar manner, Karmarkar observed that the variables can be transformed (in a more general manner than ordinary rescaling) so that straight lines remain straight lines while angles and distances change for the feasible space.
Karmarkar's method requires the LP problem in the following form:
subject to
where X = {x 1 , x 2 ,…, x n }T , c = {c 1 ,c 2 ,…,c n }T, and [a] is an m × n matrix. In addition, an interior feasible starting solution to Eq. (4.59) must be known. Usually,
is chosen as the starting point. In addition, the optimum value of f must be zero for the problem. Thus
Although most LP problems may not be available in the form of Eq. (4.59) while satisfying the conditions of Eq. (4.60), it is possible to put any LP problem in a form that satisfies Eqs. (4.59) and (4.60) as indicated below.
Let the given LP problem be of the form
subject to
To convert this problem into the form of Eq. (4.59), we use the procedure suggested in Ref. [20] and define integers m and n such that X will be an (n − 3)‐component vector and [α] will be a matrix of order (m − 1) × (n − 3). We now define the vector {z 1 , z 2 ,···, z n−3}T as
where β is a constant chosen to have a sufficiently large value such that
for any feasible solution X (assuming that the solution is bounded). By using Eq. (4.62), the problem of Eq. (4.61) can be stated as follows:
subject to
We now define a new vector z as
and solve the following related problem instead of the problem in Eq. (4.64):
subject to
where e is an (m − 1)‐component vector whose elements are all equal to 1, z n−2 is a slack variable that absorbs the difference between 1 and the sum of the other variables, z n−1 is constrained to have a value of 1/n, and M is given a large value (corresponding to the artificial variable z n ) to force z n to zero when the problem stated in Eq. (4.61) has a feasible solution. Equations (4.65) are developed such that if z is a solution to these equations, X = β will be a solution to Eq. (4.61) if Eq. (4.61) has a feasible solution. Also, it can be verified that the interior point z = (1/n) e is a feasible solution to Eq. (4.65). Equation (4.65) can be seen to be the desired form of Eq. (4.61) except for a 1 on the right‐hand side. This can be eliminated by subtracting the last constraint from the next‐to‐last constraint, to obtain the required form:
subject to
Note: When Eqs. (4.66) are solved, if the value of the artificial variable z n > 0, the original problem in Eq. (4.61) is infeasible. On the other hand, if the value of the slack variable z n−2 = 0, the solution of the problem given by Eq. (4.61) is unbounded.
Starting from an interior feasible point X (1), Karmarkar's method finds a sequence of points X (2) , X (3) , ··· using the following iterative procedure:
Set the iteration number as k = 1.
where ε is a small number. If Eq. (4.67) is not satisfied, go to step 3.
where ||c|| is the length of the vector c, [I] the identity matrix of order n, [D(X (k))] an n × n matrix with all off‐diagonal entries equal to 0, and diagonal entries equal to the components of the vector X (k) as
[P] is an (m + 1) × n matrix whose first m rows are given by [a] [D(X (k))] and the last row is composed of 1's:
and the value of the parameter α is usually chosen as α = 1/4 to ensure convergence. Once Y (k+1) is found, the components of the new point X (k+1) are determined as
Set the new iteration number as k = k + 1 and go to step 2.
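The iterative procedure above can be sketched in Python as follows. This is a minimal illustration of one projective‐scaling update (with α = 1/4), not a production implementation; the small problem data at the end are invented for the example and are already in the required form, with optimum value zero and the interior starting point (1/n)e.

```python
import numpy as np

def karmarkar_step(A, c, x, alpha=0.25):
    """One iteration of Karmarkar's projective-scaling update for
    minimize c^T x subject to A x = 0, sum(x) = 1, x >= 0,
    assuming the current iterate x is strictly interior."""
    n = len(x)
    D = np.diag(x)                        # [D(X^(k))]: diagonal matrix of x
    P = np.vstack([A @ D, np.ones(n)])    # first m rows [a][D], last row of 1's
    Dc = D @ c
    # project the scaled cost vector onto the null space of [P]
    cp = Dc - P.T @ np.linalg.solve(P @ P.T, P @ Dc)
    r = 1.0 / np.sqrt(n * (n - 1))        # radius of the sphere inscribed in the simplex
    y = np.ones(n) / n - alpha * r * cp / np.linalg.norm(cp)
    x_new = D @ y                         # map back from the transformed space
    return x_new / x_new.sum()            # renormalize so the components sum to 1

# tiny illustrative problem: minimum value 0 attained at x = (0, 1/3, 2/3)
A = np.array([[1.0, -2.0, 1.0]])
c = np.array([1.0, 0.0, 0.0])
x = np.ones(3) / 3                        # interior starting point (1/n)e
for _ in range(200):
    x = karmarkar_step(A, c, x)
```

Each update stays strictly inside the feasible simplex (the step length αr is smaller than the inscribed‐sphere radius), which is the sense in which the method is an interior method.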
A quadratic programming problem can be stated as
subject to
where
In Eq. (4.72) the term X T DX /2 represents the quadratic part of the objective function, with D being a symmetric positive‐definite matrix. If D = 0, the problem reduces to an LP problem. The solution of the quadratic programming problem stated in Eqs. (4.72)-(4.74) can be obtained by using the Lagrange multiplier technique. By introducing the slack variables s i 2 , i = 1, 2, …, m, in Eq. (4.73) and the surplus variables t j 2 , j = 1, 2, ..., n, in Eq. (4.74), the quadratic programming problem can be written as (see Eq. (4.72)) subject to the equality constraints
where
The Lagrange function can be written as
The necessary conditions for the stationariness of L give
By defining a set of new variables Y i as
Equation (4.81) can be written as
Multiplying Eq. (4.79) by s i and Eq. (4.80) by t j , we obtain
Combining Eqs. (4.84) and (4.85), and Eqs. (4.82) and (4.86), we obtain
Thus, the necessary conditions can be summarized as follows:
One important feature of Eqs. (4.89)-(4.96) can be noticed. With the exception of Eqs. (4.95) and (4.96), the necessary conditions are linear functions of the variables x j , Y i , λ i , and θ j . Thus, the solution of the original quadratic programming problem can be obtained by finding a nonnegative solution to the set of m + n linear equations given by Eqs. (4.89) and (4.90) that also satisfies the m + n equations stated in Eqs. (4.95) and (4.96).
Since D is a positive‐definite matrix, f (X) will be a strictly convex function, and since the feasible space is convex (the constraints being linear), any local minimum of the problem will be the global minimum. Further, it can be seen that there are 2(n + m) variables and 2(n + m) equations in the necessary conditions stated in Eqs. (4.89)-(4.96). Hence the solution of these equations must be unique. Thus, the feasible solution satisfying all of Eqs. (4.89)-(4.96), if it exists, must give the optimum solution of the quadratic programming problem directly. The solution of the system of equations above can be obtained by using phase I of the simplex method. The only restriction here is that the satisfaction of the nonlinear relations, Eqs. (4.95) and (4.96), has to be maintained at all times. Since our objective is just to find a feasible solution to the set of Eqs. (4.89)-(4.96), there is no need for phase II computations. We shall follow the procedure developed by Wolfe [21] to apply phase I. This procedure involves the introduction of n nonnegative artificial variables z i into Eqs. (4.89) so that
Then we minimize
subject to the constraints
While solving this problem, we have to take care of the additional conditions
Thus, when deciding whether to introduce Y i into the basic solution, we first have to ensure that either λ i is not in the solution or λ i will be removed when Y i enters the basis. Similar care has to be taken regarding the variables θ j and x j . These additional checks are not very difficult to make during the solution procedure.
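The structure of the conditions can be illustrated with a small brute‐force sketch in Python: enumerate which member of each complementary pair (x j , θ j ) and (Y i , λ i ) is set to zero, solve the remaining linear equations, and accept the first pattern in which all variables come out nonnegative. This is only workable for very small problems and is not Wolfe's simplex‐based procedure; the function name and problem data below are invented for the illustration.

```python
import itertools
import numpy as np

def solve_qp_kkt(D, c, A, b):
    """Brute-force the necessary conditions of
    minimize c^T x + 0.5 x^T D x  subject to  A x <= b, x >= 0
    by enumerating complementarity patterns (x_j theta_j = 0, lam_i Y_i = 0).
    Unknown ordering: x (n), slacks Y (m), multipliers lam (m), theta (n)."""
    n, m = len(c), len(b)
    N = n + m
    M = np.zeros((N, 2 * N))
    rhs = np.concatenate([-c, b])
    M[:n, :n] = D                      # D x
    M[:n, n + m:n + 2 * m] = A.T       # + A^T lam
    M[:n, n + 2 * m:] = -np.eye(n)     # - theta        (= -c)
    M[n:, :n] = A                      # A x
    M[n:, n:n + m] = np.eye(m)         # + Y            (= b)
    for mask in itertools.product([0, 1], repeat=N):
        zero = []
        for k, bit in enumerate(mask):
            if k < n:                  # pair (x_k, theta_k)
                zero.append(n + 2 * m + k if bit == 0 else k)
            else:                      # pair (Y_i, lam_i)
                i = k - n
                zero.append(n + m + i if bit == 0 else n + i)
        free = [j for j in range(2 * N) if j not in zero]
        try:
            sol_free = np.linalg.solve(M[:, free], rhs)
        except np.linalg.LinAlgError:
            continue                   # singular pattern: skip it
        if np.all(sol_free >= -1e-9):  # all remaining variables nonnegative
            z = np.zeros(2 * N)
            z[free] = sol_free
            return z[:n]               # the optimum x
    return None
```

For example, for the (invented) problem minimize x1² + x2² − 4x1 − 6x2 subject to x1 + x2 ≤ 2, x ≥ 0 (D = 2I, c = (−4, −6)T), the enumeration recovers the optimum x = (0.5, 1.5) with λ = 3 and θ = 0, consistent with the complementarity conditions.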
The solution of different types of optimization problems using MATLAB is presented in Chapter 17. Specifically, the MATLAB solution of a linear programming problem using the interior point method is given in Example 17.3. Also, the MATLAB solution of a quadratic programming problem (given in Example 4.14) is presented as Example 17.4.
(a) | Karmarkar's method | Moves from one vertex to another |
(b) | Simplex method | Interior point algorithm |
(c) | Quadratic programming | Phase I computations not required |
(d) | Dual simplex method | Dantzig and Wolfe method |
(e) | Decomposition method | Wolfe's method |
(a) | x i is nonnegative / x i is unrestricted | ith constraint is of less‐than or equal‐to type / Maximization type |
(b) | ith constraint is of equality type | ith variable is unrestricted |
(c) | ith constraint is of greater‐than or equal‐to type | ith variable is nonnegative |
(d) | Minimization type | ith constraint is of equality type |
Solve LP problems 4.1–4.3 by the revised simplex method.
subject to
subject to
Solve this problem using the dual simplex method.
Quantity | Stream 1 (i = 1) | Stream 2 (i = 2) |
Capacity of reservoir i | 9 | 7 |
Available release from reservoir i | 9 | 6 |
Capacity of channel below reservoir i | 4 | 4 |
Actual release from reservoir i | x 1 | x 2 |
The capacity of the main channel below the confluence of the two streams is 5 units. If the benefit is equivalent to $2 × 10⁶ and $3 × 10⁶ per unit of water released from reservoirs 1 and 2, respectively, determine the releases x 1 and x 2 from the reservoirs to maximize the benefit. Solve this problem using duality theory.
subject to
subject to
has a feasible solution. Verify your result graphically.
subject to
subject to
subject to
| | Product A | Product B | Product C | Product D | Maximum quantity available |
| Copper (lb) | 4 | 9 | 7 | 10 | 6000 |
| Zinc (lb) | 2 | 1 | 3 | 20 | 4000 |
| Profit per unit ($) | 15 | 25 | 20 | 60 | |
Find the number of units of the various products to be produced for maximizing the profit.
subject to
Investigate the change in the optimum solution of Problem 29 when the following changes are made (a) by using sensitivity analysis and (b) by solving the new problem graphically:
subject to
subject to
subject to
subject to
by quadratic programming.