110 Vehicle Scheduling in Port Automation
5.2.2 Steps of NSA
The NSA maintains a feasible spanning tree structure at each iteration and successively transforms it into an improved spanning tree structure until it becomes optimal. The algorithm in Figure 5.2 specifies the steps of this method (Ahuja, Magnanti, and Orlin 1993; Kelly and O'Neill 1993).
Figure 5.2 shows the following four main steps in the algorithm:
Step 0: Create an initial or a basic feasible solution (BFS).
Step 1: Select an entering arc (which is appended to the spanning tree).
Step 2: Determine the leaving arc (which must be removed from the span-
ning tree).
Step 3: Pivoting (exchange the entering and leaving arcs).
Step 0: To create an initial or a basic feasible solution, the graph has to be connected, which corresponds to the MCF-AGV model in Chapter 4. In Line 3, creating an initial feasible spanning tree solution (see Definition 5.2) for every connected graph can be done in an easy way (Ahuja, Magnanti, and Orlin 1993). It is obtained by adding an artificial root node 0 to N and the artificial slack arcs (i, 0) and (0, i), respectively, to A. Each artificial slack arc has a lower bound of zero, an upper bound of infinity, and a sufficiently large cost coefficient. The initial basic tree consists of all artificial arcs; each original arc becomes nonbasic at its lower bound, and no arc becomes nonbasic at the upper bound. We examine each node j, other than 0, one by one. If b(j) ≥ 0, we include arc (j, 0) in T with a flow value of b(j). If b(j) < 0, we include arc (0, j) in T with a flow value of −b(j). The set L consists of the remaining arcs, and the set U is empty.
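The construction in Step 0 can be sketched in a few lines of code. This is a minimal illustration rather than the book's implementation; the function name, the dictionary layout, and the example supplies b(j) are assumptions:

```python
# Sketch of Step 0: attach every node to an artificial root node 0 via a
# slack arc, yielding an initial basic feasible spanning tree.
# Node supplies b(j) and all names here are illustrative assumptions.

def initial_bfs(supplies):
    """supplies: dict mapping node j -> b(j).
    Returns T: dict mapping each node j to its artificial basic arc and flow."""
    T = {}
    for j, b in supplies.items():
        if b >= 0:
            T[j] = ((j, 0), b)    # arc (j, 0) carries flow b(j)
        else:
            T[j] = ((0, j), -b)   # arc (0, j) carries flow -b(j)
    return T

# Example: node 1 supplies 5 units, node 2 demands 5 units, node 3 is balanced.
tree = initial_bfs({1: 5, 2: -5, 3: 0})
```

Every original arc then sits in L at its lower bound, and U starts out empty, exactly as described above.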
Step 1: At each iteration of the algorithm, an entering arc is selected by some pricing scheme (Kelly and O'Neill 1993). This arc is selected from the
1:  Algorithm Network simplex method
2:  Begin
3:    Create initial BFS; (T, L, U)            ← Step 0: Create a basic feasible solution
4:    (k, l) ← Entering arc ∈ {L + U}          ← Step 1: Select an entering arc
5:    While (k, l) <> NULL Do
6:      Find cycle W ← {T + (k, l)}
7:      θ ← Flow change
8:      (p, q) ← Leaving arc ∈ W               ← Step 2: Determine the leaving arc
9:      Update flow in W by θ                  ← Step 3: Exchange the entering and leaving arc
10:     Update BFS; Tree T
11:     Update node potentials
12:     (k, l) ← Entering arc ∈ {L + U}
13:   End while
14: End algorithm

Figure 5.2 Network simplex algorithm.
Network Simplex 111
nonbasic arcs (L + U). There are several schemes for selecting the entering arc, and these determine the speed of the algorithm. A literature review of these schemes is presented later in this chapter. An arc may be admitted to the basis to improve the objective function if it violates the optimality conditions. Thus, an arc (i, j) ∈ A satisfying one of the following conditions is admissible:

c_ij < 0 and f_ij = m_ij,  or  c_ij > 0 and f_ij = M_ij

If no admissible arc exists, then the current solution is optimal, and the algorithm terminates. Otherwise, Step 2 is performed.
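The two admissibility conditions translate directly into code. A small sketch, with illustrative names; the reduced cost, flow, and bounds of a nonbasic arc are passed in explicitly:

```python
# Sketch of the admissibility test for a nonbasic arc (i, j): a negative
# reduced cost at the lower bound, or a positive reduced cost at the upper
# bound, violates the optimality conditions, so the arc may enter the basis.

def is_admissible(reduced_cost, flow, lower, upper):
    at_lower = (flow == lower)
    at_upper = (flow == upper)
    return (reduced_cost < 0 and at_lower) or (reduced_cost > 0 and at_upper)

# An arc at its lower bound with negative reduced cost is admissible:
print(is_admissible(-2, 0, 0, 10))   # True
# The same reduced cost at the upper bound is not:
print(is_admissible(-2, 10, 0, 10))  # False
```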
Step 2: Appending the entering arc (k, l) to the spanning tree forms a unique cycle, W, with the arcs of the basis. In Line 6, the algorithm finds this cycle. In order to eliminate this cycle in the tree, one of its arcs must leave the basis. By augmenting flow in a negative cost augmenting cycle, the objective value of the solution can be improved. The cycle is eliminated when flow is augmented by a sufficient amount to force the flow in one or more arcs of the cycle to their upper or lower bounds. In Line 7, the flow change is determined by the following equation:

θ = Min {Δf_ij for all (i, j) ∈ W}

The leaving arc is selected based on cycle W and θ, in Line 8.
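As an illustration of this equation, Δf_ij can be computed per arc from its orientation in W: capacity minus current flow for an arc oriented with the cycle, current flow minus lower bound for an arc against it. A sketch under that standard interpretation; the tuple layout is an assumption:

```python
# Sketch of computing the flow change θ = min over arcs of the cycle W.
# Each cycle arc is summarized as (flow, lower, upper, forward), where
# forward=True means the arc is oriented with the direction of the cycle.

def flow_change(cycle_arcs):
    deltas = []
    for flow, lower, upper, forward in cycle_arcs:
        # Δf: room to increase on forward arcs, room to decrease on backward arcs
        deltas.append(upper - flow if forward else flow - lower)
    return min(deltas)

W = [(3, 0, 4, True),   # forward arc: can take 1 more unit
     (2, 0, 2, False),  # backward arc: can give up 2 units
     (0, 0, 5, True)]   # forward arc: can take 5 more units
theta = flow_change(W)  # min(1, 2, 5) = 1
```

The arc achieving this minimum is a blocking arc and becomes a candidate to leave the basis in Line 8.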
Step 3: In this step, the entering arc and the leaving arc are exchanged, and the new BFS is constructed. The construction of a new basis tree is called the pivot; it involves adjusting flows, forming the new spanning tree, and updating the node potentials accordingly in the spanning tree solution (T, L, U). These operations are performed in Lines 9, 10, and 11, respectively. We refer to cycle W (see Step 2) as the basis cycle. The algorithm sends the maximum possible amount of flow in the basis cycle without violating any of the lower and upper bound constraints on arcs. An arc that blocks further increase of flow in the basis cycle is called a blocking arc. The flow in every arc of the cycle W is increased or decreased by the amount θ depending on the orientation of the arc in relation to the orientation of the cycle. Generally, a basic arc is exchanged with a nonbasic arc. The algorithm drops a blocking arc, say (p,q), from T. This gives a new basis structure. Let T1 and T2 be the two subtrees formed by deleting arc (p,q) from the previous basis T, where T1 contains the root. In the new basis, potentials of all nodes in T1 remain unchanged and potentials of all nodes in T2 change by a constant amount. If (k,l) was a nonbasic arc at its lower bound in the previous basis structure, then the amount of change is an increase by c_kl; otherwise, it is a decrease by the amount c_kl.
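The potential update for T2 can be sketched as follows. The function and argument names are assumptions, and the sign convention follows the rule just stated: an increase by c_kl when the entering arc was at its lower bound, a decrease otherwise:

```python
# Sketch of the node-potential update after a pivot: nodes in the subtree
# T2 (the one not containing the root) all shift by the same constant,
# while nodes in T1 are left untouched.

def update_potentials(potentials, t2_nodes, reduced_cost_kl, entered_at_lower):
    """potentials: dict node -> potential, modified in place and returned."""
    shift = reduced_cost_kl if entered_at_lower else -reduced_cost_kl
    for node in t2_nodes:
        potentials[node] += shift
    return potentials

# Entering arc at its lower bound with reduced cost -2: T2 = {2, 3} shifts by -2,
# while node 1 (in T1) keeps its potential.
pi = update_potentials({1: 0, 2: -3, 3: -5}, {2, 3}, -2, True)
```

Because only one subtree is retraversed and no multiplication is involved, this update is far cheaper than the matrix operations of the ordinary simplex method.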
5.2.3 Difference between NSA and Original Simplex
These steps in the NSA can be compared with those in the original simplex algorithm (OSA) for solving linear programs in operations research. Note that the main difference is that the NSA is based on the graph and operations on the graph, while the OSA needs a matrix and matrix manipulations. Step 0 finds an initial solution in both algorithms. An initial basic spanning tree is created by adding the artificial node and arcs in the NSA. In a similar way, the OSA uses artificial variables to generate an initial basic solution. Steps 1 and 2 choose the entering and leaving arcs in the NSA, which is similar to choosing the entering and leaving variables in the OSA. Constructing a new spanning tree in the NSA and a new basic solution in the OSA is Step 3 for both algorithms. For this step, the OSA needs some matrix manipulation and inversion, whereas a new spanning tree can be easily constructed by operations on the graph without any multiplication or division. Both algorithms continue Steps 1–3 until they meet the optimality conditions. The matrix manipulations are considerably more expensive than the graph operations, which has a significant negative impact on the performance of the OSA.
5.2.4 Short Literature Review of Pricing Rules
There are different rules, called pricing schemes, for finding an entering arc for the basic solution. The performance of the NSA is affected by these schemes. A literature review of these schemes is given below.
The standard textbook by Ahuja, Magnanti, and Orlin (1993) provided a detailed account of the literature on these schemes. We now briefly review this literature. Bradley, Brown, and Graves (1977) used a dynamic queue containing the indices of so-called interesting nodes and admissible arcs. Their method is called the BBG queue pricing scheme. An interesting node is a node whose incident arcs have not been re-priced in recent iterations. At each iteration, the entering arc is selected from the queue. Another candidate list scheme has been described by Mulvey (1978). In the Mulvey scheme, there is a major and a minor loop to select the entering arc. A limited number of favorably priced entering arcs are collected by scanning the nonbasic arcs in a major iteration. In the minor iteration, the most favorably priced arc in the list is chosen to enter the basis. Grigoriadis (1986) described a very simple arc block pricing scheme based on dividing the arcs into a number of subsets of specified size. At each iteration, the entering arc is selected from a block with the most negative price. Only the arcs of one block are re-priced at any iteration. Taha (1987) suggested the most negative pricing scheme for the algorithm. At each iteration, all nonbasic arcs are re-priced, and the arc with the most negative price is selected as the entering arc. Kelly and O'Neill (1993) implemented a variation of the arc block pricing scheme, which is called arc sample. Instead of selecting the entering arc from among the required number of consecutive arcs, this method considers arcs at
constant intervals, called the skip factor, from throughout the entire arc set. Andrew (1993) studied practical implementations of MCF algorithms and claimed that his implementations worked very well over a wide range of problems.
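As one concrete example, the arc block pricing scheme of Grigoriadis described above might be sketched as follows. This is a simplified illustration, not any published implementation; the data layout (a list of current arc prices, negative meaning admissible) and the cyclic scanning order are assumptions:

```python
# Sketch of arc block pricing: the nonbasic arcs are divided into blocks of
# a specified size; blocks are scanned cyclically, and the first block that
# contains an admissible arc yields its most negative arc as the entering arc.

def block_pricing(prices, block_size, start):
    """prices: list of reduced prices for nonbasic arcs (negative = admissible).
    Returns (entering_index, next_start), or (None, start) if optimal."""
    n = len(prices)
    for b in range(0, n, block_size):
        lo = (start + b) % n
        block = [(prices[(lo + i) % n], (lo + i) % n)
                 for i in range(min(block_size, n))]
        best_price, best_idx = min(block)
        if best_price < 0:
            # resume scanning from the following block next iteration
            return best_idx, (lo + block_size) % n
    return None, start

# The block [5, -1, -4] already contains admissible arcs; index 2 is the
# most negative, so it enters, and scanning resumes at index 3 next time.
idx, nxt = block_pricing([5, -1, -4, 2, 3, 0], block_size=3, start=0)
```

Only one block is re-priced per iteration in the typical case, which is what makes the scheme cheap relative to full (Dantzig) pricing.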
István (2001–2003) reviewed a collection of known pricing schemes in the OSA. They are first improving candidate, the Dantzig rule, partial pricing, multiple pricing, and sectional pricing. These schemes can be applied to the NSA. First improving candidate chooses the first violating arc as the entering arc. It is cheap, but it usually leads to a very large number of iterations. In the Dantzig rule, all nonbasic arcs are checked (full pricing) and the one that violates the optimality condition the most is selected. This rule is quite expensive but overall is considerably better than the previous method. Partial pricing scans only a part of the nonbasic arcs, and the best candidate from this part is selected. In the next step, the next part is scanned, and so on. In multiple pricing, some of the most profitable candidates (in terms of the magnitude) are selected during one scanning pass. They are updated, and a suboptimization is performed involving the current basis and the selected candidates using the criterion of greatest improvement. Sectional pricing behaves as a kind of partial pricing, but in each iteration sections or clusters of arcs are considered.
In recent years, considerable research has been devoted to the NSA. Muramatsu (1999) used a primal-dual symmetric pivoting rule and proposed a new scheme, in which the algorithm can start from an arbitrary pair of primal and dual feasible spanning trees. Eppstein (1994) presented a clustering technique for partitioning trees and forests into smaller sub-trees or clusters. This technique has been used to improve the time bounds for optimal pivot selection in the primal NSA for the MCF problem. Löbel (2000) developed and implemented multiple pricing rules to select an entering arc, using a mixture of several sizes for the arc block. A general pricing scheme for the simplex method has been proposed by István (2001–2003). His pricing scheme is controlled by three parameters. He claimed that, with different settings of the parameters, it creates a large flexibility in pricing and is applicable to general algorithms and NSAs (István 2001–2003). Ahuja et al. (2002) developed an NSA with O(n) consecutive degenerate pivots. They presented an antistalling pivot rule based on the concept of a strongly feasible spanning tree, which is described in the following section. Their rule uses a negative cost augmenting cycle to identify a sequence of entering variables. Paparrizos, Samaras, and Sifaleras (2009) presented a new network exterior point simplex algorithm (NEPSA) for the minimum cost network flow problem. NEPSA belongs to a special simplex type category and is a modification of the classical NSA. The main idea of the algorithm is to compute two flows. One flow is basic but not always feasible, and the other is feasible but not always basic. The authors also presented a complete proof of correctness for the proposed algorithm. Moreover, the computational behavior of NEPSA is shown by an empirical study carried out for randomly generated sparse instances created by the well-known Grid Generator (GRIDGEN) network problem generator.
5.2.5 Strongly Feasible Spanning Tree
e denition of the strongly feasible solution for the NSA and a property are given
below.
Denition 5.3 (Ahuja, Magnanti, and Orlin 1993):
e basis structure (T, L, U) is strongly feasible if we can send a positive amount
of ow from any node to the root along arcs in the spanning tree without violat-
ing any of the ow bounds. An equivalent way of stating this property is that no
upward pointing arc of the spanning tree can be at its upper bound and no down-
ward pointing arc can be at its lower bound. An example of a strongly feasible basis
is given in Figure5.3. Note that the current ow and upper bound of every arc are
given on each arc in the gure. e lower bound of the arcs is zero.
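Definition 5.3 translates into a simple check over the tree arcs. A sketch, assuming each tree arc is summarized by its pointing direction (toward or away from the root), its flow, and its bounds:

```python
# Sketch of the strong-feasibility test from Definition 5.3: no upward
# pointing tree arc (child toward parent) may sit at its upper bound, and
# no downward pointing tree arc (parent toward child) may sit at its
# lower bound. The tuple layout is an illustrative assumption.

def strongly_feasible(tree_arcs):
    """tree_arcs: list of (points_up, flow, lower, upper) per spanning tree arc."""
    for points_up, flow, lower, upper in tree_arcs:
        if points_up and flow == upper:
            return False   # saturated upward arc blocks flow toward the root
        if not points_up and flow == lower:
            return False   # empty downward arc cannot carry flow toward the root
    return True

print(strongly_feasible([(True, 3, 0, 4), (False, 2, 0, 3)]))  # True
print(strongly_feasible([(True, 4, 0, 4)]))                    # False
```

Either violation would prevent sending a positive amount of flow from some node to the root, which is exactly the condition the definition rules out.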
The NSA can maintain a strongly feasible basis at every iteration. In order to do this, the initial basic solution, which was described in the previous section, should be strongly feasible. The algorithm also selects the leaving arc appropriately so that the next basis is strongly feasible. Suppose that the entering arc (k, l) is at its lower bound and node w is the first common predecessor of nodes k
[Figure 5.3 shows a spanning tree on ten nodes; each arc is labeled with its current flow and capacity in the form "flow (capacity)", and the entering arc (k, l), the leaving arc (p, q), and the cycle W are marked.]

Figure 5.3 Example of strongly feasible spanning tree. (Data from Ahuja, R.K. et al., Network Flows: Theory, Algorithms and Applications, Prentice Hall of India, 1993.)