Interior-Point Methods


The announcement by Karmarkar [23] in 1984 that he had developed a fast algorithm that generated iterates that lie in the interior of the feasible set (rather than on the boundary, as simplex methods do) opened up exciting new avenues for research in both the computational complexity and mathematical programming communities. Since then, there has been intense research into a variety of methods that maintain strict feasibility of all iterates, at least with respect to the inequality constraints. Although dwarfed in volume by simplex-based packages, interior-point products such as CPLEX/Barrier and OSL have emerged and have proven to be competitive with, and often superior to, the best simplex packages, especially on large problems.

Current information on interior-point software can be found at the home page for the book Primal-Dual Interior-Point Methods by Steve Wright (SIAM Publications, 1997). The NEOS Server includes a linear programming facility based on the interior-point code PCx. This facility allows users to solve linear programming problems remotely over the Internet.

We discuss only primal-dual interior-point algorithms. Recent research has shown the primal-dual class to be the most promising from a practical point of view, as well as the most amenable to theoretical analysis.

Writing the dual of the standard form linear programming problem $\min_x \{\, c^T x : Ax = b,\; x \ge 0 \,\}$ as

$$\max_{\lambda,\, s} \; b^T \lambda \quad \text{subject to} \quad A^T \lambda + s = c, \quad s \ge 0,$$

we see that the optimality conditions for $(x, \lambda, s)$ to be a primal-dual solution triplet are that

$$A^T \lambda + s = c, \qquad Ax = b, \qquad XSe = 0, \qquad (x, s) \ge 0,$$

where

$$X = \mathrm{diag}(x_1, \dots, x_n), \qquad S = \mathrm{diag}(s_1, \dots, s_n),$$

and $e = (1, 1, \dots, 1)^T$ is the vector of all ones. Interior-point algorithms generate iterates $(x^k, \lambda^k, s^k)$ such that $x^k > 0$ and $s^k > 0$. As $k \to \infty$, the equality-constraint violations $\|Ax^k - b\|$ and $\|A^T \lambda^k + s^k - c\|$ and the duality gap $(x^k)^T s^k$ are driven to zero, yielding a limiting point that solves the primal and dual linear programs.
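To make these quantities concrete, here is a minimal NumPy sketch (not taken from any particular code) that evaluates the three residuals and the duality measure for given dense data; all names are illustrative, and $x > 0$, $s > 0$ is assumed as it is for interior-point iterates.

```python
import numpy as np

def kkt_residuals(A, b, c, x, lam, s):
    """Residuals of the first three optimality conditions and the duality measure.

    All names (A, b, c, x, lam, s) are illustrative placeholders for dense data.
    """
    r_dual = c - A.T @ lam - s      # dual feasibility residual   c - A^T lambda - s
    r_primal = b - A @ x            # primal feasibility residual b - A x
    r_comp = x * s                  # complementarity products    X S e
    mu = x @ s / len(x)             # duality measure             x^T s / n
    return r_dual, r_primal, r_comp, mu
```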

Primal-dual methods can be thought of as a variant of Newton's method applied to the system of equations formed by the first three optimality conditions. Given the current iterate $(x^k, \lambda^k, s^k)$ and the damping parameter $\sigma_k \in [0, 1]$, the search direction $(\Delta x^k, \Delta \lambda^k, \Delta s^k)$ is generated by solving the linear system

$$\begin{bmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S^k & 0 & X^k \end{bmatrix} \begin{bmatrix} \Delta x^k \\ \Delta \lambda^k \\ \Delta s^k \end{bmatrix} = \begin{bmatrix} c - A^T \lambda^k - s^k \\ b - A x^k \\ -X^k S^k e + \sigma_k \mu_k e \end{bmatrix},$$

where

$$X^k = \mathrm{diag}(x^k), \qquad S^k = \mathrm{diag}(s^k), \qquad \mu_k = \frac{(x^k)^T s^k}{n}.$$

The new point is then obtained by setting

$$x^{k+1} = x^k + \alpha_P \, \Delta x^k, \qquad (\lambda^{k+1}, s^{k+1}) = (\lambda^k, s^k) + \alpha_D \, (\Delta \lambda^k, \Delta s^k),$$

where $\alpha_P$ and $\alpha_D$ are chosen to ensure that $x^{k+1} > 0$ and $s^{k+1} > 0$.
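The following dense-matrix sketch shows one such step in NumPy. It is illustrative only: the function name, the damping value sigma=0.1, and the step-length safeguard eta=0.995 are assumptions for the example, not part of the original text or of any particular solver, and production codes never assemble or solve this block system densely.

```python
import numpy as np

def primal_dual_step(A, b, c, x, lam, s, sigma=0.1, eta=0.995):
    """One damped Newton (primal-dual) step for small dense problems."""
    m, n = A.shape
    mu = x @ s / n
    # Assemble the 3x3 block system described in the text.
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
    ])
    rhs = np.concatenate([
        c - A.T @ lam - s,      # dual residual
        b - A @ x,              # primal residual
        -x * s + sigma * mu,    # perturbed complementarity
    ])
    d = np.linalg.solve(K, rhs)
    dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]

    # Step lengths alpha_P, alpha_D: a fraction eta of the longest steps
    # that keep x and s strictly positive, capped at 1.
    alpha_P = min(1.0, eta * min(-x[dx < 0] / dx[dx < 0], default=np.inf))
    alpha_D = min(1.0, eta * min(-s[ds < 0] / ds[ds < 0], default=np.inf))

    return x + alpha_P * dx, lam + alpha_D * dlam, s + alpha_D * ds
```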

When $\sigma_k = 0$, the search direction is the pure Newton search direction for the nonlinear system formed by the first three optimality conditions, and the resulting method is an affine scaling algorithm. The effect of choosing positive values of $\sigma_k$ is to orient the step away from the boundary of the nonnegative orthant defined by $x \ge 0$, $s \ge 0$, thus allowing longer step lengths $\alpha_P$, $\alpha_D$ to be taken. Path-following methods require $\alpha_P$ and $\alpha_D$ to be chosen so that $x^{k+1}$ and $s^{k+1}$ are not merely positive but also satisfy the centrality condition

$$x_i^{k+1} s_i^{k+1} \ge \gamma \, \mu_{k+1}, \qquad i = 1, \dots, n,$$

where

$$\mu_{k+1} = \frac{(x^{k+1})^T s^{k+1}}{n},$$

for some $\gamma \in (0, 1)$. The other requirement on $\alpha_P$ and $\alpha_D$ is that the decrease in $\mu_k$ should not outpace the improvement in feasibility (that is, the decrease in $\|Ax^k - b\|$ and $\|A^T \lambda^k + s^k - c\|$). Greater priority is placed on attaining feasibility than on closing the duality gap. It is possible to design path-following algorithms satisfying these requirements for which the sequence $\{\mu_k\}$ converges to zero at a linear rate. Further, the number of iterates required for convergence is a polynomial function of the size of the problem (typically, of order $n \log(1/\epsilon)$ or order $\sqrt{n}\,\log(1/\epsilon)$, where $\epsilon$ is the required accuracy). By allowing the damping parameter $\sigma_k$ to become small as the solution is approached, the method behaves more and more like a pure Newton method, and superlinear convergence can be achieved.
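The centrality test itself is a one-liner; the sketch below (with an illustrative value gamma=1e-3 that is not from the original text) shows the check a path-following code would apply before accepting trial step lengths, shrinking them until it passes.

```python
import numpy as np

def satisfies_centrality(x_new, s_new, gamma=1e-3):
    """Check x_i * s_i >= gamma * mu for all i at a trial point.

    gamma in (0, 1) is an illustrative choice; path-following methods also
    require that mu not decrease faster than the infeasibility norms.
    """
    mu_new = x_new @ s_new / len(x_new)
    return bool(np.all(x_new * s_new >= gamma * mu_new))
```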

The current generation of interior-point software is quite sophisticated. Most current codes are based on a predictor-corrector algorithm outlined in a 1993 paper by Mehrotra. Mehrotra's algorithm consists of the basic infeasible-interior-point approach described above, with a corrector component added to each step to increase the order of approximation to the nonlinear (complementarity) term in the KKT conditions. Implementations of the algorithm also incorporate other heuristics that are essential for robust behavior on many practical problems, including the choice of step length and centering parameter, handling of free variables (i.e., variables without explicit bounds), preprocessing to remove redundant information in the problem specification, determination of a starting point, and so on.
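One widely used piece of this machinery is the adaptive choice of the centering parameter: a predictor (affine-scaling, $\sigma = 0$) direction is computed first, and the centering parameter is set from the duality measure that predictor step would achieve. The sketch below is a hedged illustration of that heuristic only, not of Mehrotra's full predictor-corrector step; the cubic exponent is the choice commonly reported in implementations.

```python
def mehrotra_sigma(x, s, dx_aff, ds_aff, alpha_p_aff, alpha_d_aff):
    """Adaptive centering parameter from the affine-scaling predictor step.

    dx_aff, ds_aff are the predictor direction (sigma = 0); alpha_p_aff and
    alpha_d_aff are the longest feasible step lengths along it.
    """
    n = len(x)
    mu = x @ s / n
    mu_aff = ((x + alpha_p_aff * dx_aff) @ (s + alpha_d_aff * ds_aff)) / n
    # Small mu_aff / mu (good predictor progress) => little centering needed.
    return (mu_aff / mu) ** 3
```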

Another useful option in practical software is the ability to cross over to a simplex-like method once a good approximation to the solution has been found. This feature is present in the CPLEX Barrier code.

Most of the computational cost of an interior-point method is associated with the solution of the linear system that defines the search direction. Typically, block elimination is used to obtain a single linear system in $\Delta \lambda^k$ alone, specifically the normal-equations system

$$A \, (S^k)^{-1} X^k A^T \, \Delta \lambda^k = r^k,$$

where $r^k$ collects the residual terms left over from the eliminated blocks. The coefficient matrix $A (S^k)^{-1} X^k A^T$ is formed explicitly, except that dense columns of $A$ (which would cause $A (S^k)^{-1} X^k A^T$ to have many more nonzeros than $A$ itself) are handled separately, and columns that correspond to apparently nonbasic primal variables are eventually dropped.
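As a sketch of the elimination under the notation above (dense Cholesky standing in for the sparse factorizations that real codes use, and without any special handling of dense columns), the reduced system and the recovery of the full direction look like this; r_xs denotes the right-hand side of the complementarity rows, $-X^k S^k e + \sigma_k \mu_k e$.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def normal_equations_direction(A, x, s, r_dual, r_primal, r_xs):
    """Recover (dx, dlam, ds) via block elimination of the 3x3 system.

    Eliminating ds and dx leaves  A D2 A^T dlam = r_primal + A (D2 r_dual - r_xs / s),
    with D2 = X S^{-1} diagonal; A is assumed to have full row rank so that the
    coefficient matrix is symmetric positive definite.
    """
    d2 = x / s                                  # diagonal of D^2 = X S^{-1}
    M = (A * d2) @ A.T                          # A D^2 A^T, formed explicitly
    rhs = r_primal + A @ (d2 * r_dual - r_xs / s)
    dlam = cho_solve(cho_factor(M), rhs)
    ds = r_dual - A.T @ dlam                    # from the dual-feasibility rows
    dx = (r_xs - x * ds) / s                    # from the complementarity rows
    return dx, dlam, ds
```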

Interior-point algorithms have been included among the linear programming options in the OSL system from IBM. There is a primal option (based on a primal log barrier function), a primal-dual path-following strategy, and a primal-dual predictor-corrector algorithm. All these solvers have an option for switching over to a simplex solver when the interior-point method indicates that a solution is near at hand.


