Simplex algorithm
In mathematical optimization, the simplex algorithm, invented by George Dantzig, is a popular technique for the numerical solution of linear programming problems. An unrelated algorithm with a similar name is the Nelder–Mead method, also called the downhill simplex method, due to Nelder and Mead (1965); it is a numerical method for optimizing multidimensional unconstrained problems and belongs to the more general class of search algorithms.
Both use the concept of a simplex, a polytope that is the convex hull of N + 1 vertices in N dimensions: a line segment on a line, a triangle in a plane, a tetrahedron in three-dimensional space, and so on.
Description
A linear programming problem consists of a collection of linear inequalities on a number of real variables and a given linear functional (on these real variables) which is to be maximized or minimized. Further details are given in the linear programming article.
In geometric terms we are considering a closed convex polytope P, defined by intersecting a number of half-spaces in n-dimensional Euclidean space; each half-space is the region lying on one side of a hyperplane. If the objective is to maximize a linear functional L(x), consider the hyperplanes H(c) defined by L(x) = c; as c increases, these form a parallel family. If the problem is well-posed, we want to find the largest value of c such that H(c) intersects P (if there is no such largest value of c, the question is not a reasonable one for optimization as it stands). In this case we can show that the optimum value of c is attained on the boundary of P. Methods for finding this optimum point on P work in several ways: some attempt to improve a possible point by moving through the interior of P (so-called interior point methods); others start and remain on the boundary while searching for an optimum.
The simplex algorithm follows this latter methodology. The idea is to move along the facets of P in search of the optimum, from point to point. Note that, unless the optimum occurs on an edge or face that is parallel to H, the optimum will be unique and occur at a vertex of the polytope. If an optimum is found on an edge or face that is parallel to H, the optimum is not unique and can be obtained at any point on that edge or face. Since the simplex algorithm is concerned only with finding a single optimal point (even if other equally optimal points exist), it is possible to look solely at moves skirting the edge of a simplex, ignoring the interior. The algorithm specifies how this is to be done.
We start at some vertex of the polytope, and at every iteration we choose an adjacent vertex such that the value of the objective function does not decrease. If no such vertex exists, we have found a solution to the problem. But usually, such an adjacent vertex is nonunique, and a pivot rule must be specified to determine which vertex to pick. Various pivot rules exist.
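For a concrete illustration (the numbers are chosen for this example and are not from the original article), take $L(\mathbf{x}) = 3x_1 + 2x_2$ and let P be defined by $x_1 + x_2 \le 4$, $x_1 \le 3$, $x_1, x_2 \ge 0$. Then P is a quadrilateral with vertices $(0,0)$, $(3,0)$, $(3,1)$ and $(0,4)$, where L takes the values 0, 9, 11 and 8 respectively. Starting at $(0,0)$ and always moving along an edge to an adjacent vertex with a larger value of L gives the path $(0,0) \to (3,0) \to (3,1)$; at $(3,1)$ no adjacent vertex improves L, so the maximum $L = 11$ is attained at that vertex. The same small problem is reused in the code sketches below.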
In 1972, Klee and Minty gave an example of a linear programming problem in which the polytope P is a distortion of an n-dimensional cube. They showed that the simplex method as formulated by Dantzig visits all $2^n$ vertices before arriving at the optimal vertex. This shows that the worst-case complexity of the algorithm is exponential time. Similar examples have been found for other pivot rules. It is an open question whether there is a pivot rule with polynomial worst-case complexity.
Nevertheless, the simplex method is remarkably efficient in practice. Attempts to explain this employ the notion of average complexity or (recently) smoothed complexity.
Other algorithms for solving linear programming problems are described in the linear programming article.
Problem input
Consider the following linear programming problem:
- maximize $Z = \mathbf{c}^T \mathbf{x}$
- subject to the constraints $\mathbf{A}\mathbf{x} \le \mathbf{b},\ \mathbf{x} \ge 0$
The simplex algorithm requires the linear programming problem to be in augmented form. The problem can then be written in matrix form:
- Maximize Z in:
$$
\begin{bmatrix} 1 & -\mathbf{c}^T & 0 \\ 0 & \mathbf{A} & \mathbf{I} \end{bmatrix}
\begin{bmatrix} Z \\ \mathbf{x} \\ \mathbf{x}_s \end{bmatrix}
=
\begin{bmatrix} 0 \\ \mathbf{b} \end{bmatrix},
\qquad \mathbf{x},\ \mathbf{x}_s \ge 0
$$
where $\mathbf{x}$ are the variables from the standard form, $\mathbf{x}_s$ are the slack variables introduced in the augmentation process, $\mathbf{c}$ contains the optimization coefficients, $\mathbf{A}$ and $\mathbf{b}$ describe the system of constraint equations, and Z is the variable to be maximized.
The system is typically underdetermined, since the number of variables exceeds the number of equations. The difference between the number of variables and the number of equations gives us the degrees of freedom associated with the problem. Any solution, optimal or not, will therefore include a number of variables of arbitrary value. The simplex algorithm uses zero as this arbitrary value, and the number of variables with value zero equals the degrees of freedom.
Variables of nonzero value are called basic variables, and variables of value zero are called nonbasic variables in the simplex algorithm.
This form simplifies finding the initial basic feasible solution (BF), since all variables from the standard form can be chosen to be nonbasic (zero), while all new variables introduced in the augmented form are basic (nonzero), since their value can be trivially calculated ($\mathbf{x}_s = \mathbf{b}$ for them, since the augmented problem matrix is diagonal on its right half).
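A minimal NumPy sketch of this augmentation step, reusing the small numbers from the geometric example above (the data and variable names are illustrative, not from the article):

```python
import numpy as np

# Illustrative data (assumed): maximize c^T x subject to A x <= b, x >= 0.
c = np.array([3.0, 2.0])            # optimization coefficients
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])          # constraint matrix
b = np.array([4.0, 3.0])            # right-hand sides
m, n = A.shape

# Augmented form: one slack variable per constraint, so [A | I][x; x_s] = b.
A_aug = np.hstack([A, np.eye(m)])
c_aug = np.concatenate([c, np.zeros(m)])   # slack variables have zero cost

# Initial basic feasible solution: the original variables are nonbasic (zero),
# the slack variables are basic with x_s = b, because the right half of A_aug
# is the identity matrix.
x0 = np.concatenate([np.zeros(n), b])

print(A_aug)        # [[1. 1. 1. 0.]
                    #  [1. 0. 0. 1.]]
print(x0)           # [0. 0. 4. 3.]
print(c_aug @ x0)   # objective value at this vertex: 0.0
```

Here there are n + m = 4 variables and m = 2 equations, so two degrees of freedom: exactly two variables (the original ones) are held at zero in the initial solution.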
Revised simplex algorithm
Matrix form of the simplex algorithm
At any iteration of the simplex algorithm, the tableau will be of this form:
$$
\begin{bmatrix} 1 & \mathbf{c}_B^T \mathbf{B}^{-1}\mathbf{A} - \mathbf{c}^T & \mathbf{c}_B^T \mathbf{B}^{-1} \\ 0 & \mathbf{B}^{-1}\mathbf{A} & \mathbf{B}^{-1} \end{bmatrix}
\begin{bmatrix} Z \\ \mathbf{x} \\ \mathbf{x}_s \end{bmatrix}
=
\begin{bmatrix} \mathbf{c}_B^T \mathbf{B}^{-1}\mathbf{b} \\ \mathbf{B}^{-1}\mathbf{b} \end{bmatrix}
$$
where $\mathbf{c}_B$ are the coefficients of the basic variables in the c-matrix, and $\mathbf{B}$ consists of the columns of $[\mathbf{A}\ \ \mathbf{I}]$ corresponding to the basic variables.
It is worth noting that $\mathbf{B}$ and $\mathbf{c}_B$ are the only variables in this equation (except Z and $\mathbf{x}$, of course); everything else is constant throughout the algorithm.
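To see where this form comes from (a brief verification filling in a step the text leaves implicit): left-multiplying the constraint rows $\mathbf{A}\mathbf{x} + \mathbf{I}\mathbf{x}_s = \mathbf{b}$ by $\mathbf{B}^{-1}$ gives $\mathbf{B}^{-1}\mathbf{A}\mathbf{x} + \mathbf{B}^{-1}\mathbf{x}_s = \mathbf{B}^{-1}\mathbf{b}$, and adding $\mathbf{c}_B^T$ times these rows to the objective row $Z - \mathbf{c}^T\mathbf{x} = 0$ gives $Z + (\mathbf{c}_B^T\mathbf{B}^{-1}\mathbf{A} - \mathbf{c}^T)\mathbf{x} + \mathbf{c}_B^T\mathbf{B}^{-1}\mathbf{x}_s = \mathbf{c}_B^T\mathbf{B}^{-1}\mathbf{b}$, which is exactly the tableau above. For each basic column, $\mathbf{B}^{-1}$ times that column is a unit vector and its coefficient in the objective row is zero, so the basic variables appear diagonally in the tableau, as the algorithm below relies on.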
Algorithm
- Choose an initial BF as shown above
- This implies that B is the identity matrix, and $\mathbf{c}_B$ is a zero vector.
- While the solution is nonoptimal:
- Determine direction of highest gradient
- Choose the variable associated with the coefficient in $\mathbf{c}_B^T\mathbf{B}^{-1}\mathbf{A} - \mathbf{c}^T$ that is most negative. This nonbasic variable, which we call the entering basic variable, will be increased to help maximize the objective.
- Determine maximum step length
- Use the subequation $\mathbf{B}^{-1}\mathbf{A}\mathbf{x} = \mathbf{B}^{-1}\mathbf{b}$ to determine which basic variable reaches zero first when the entering basic variable is increased. This variable, which we call the leaving basic variable, then becomes nonbasic. This operation involves only a single division for each basic variable, since the existing basic variables occur diagonally in the tableau.
- Rewrite problem
- Modify $\mathbf{B}$ and $\mathbf{c}_B$ to account for the new basic variables. This will automatically make the tableau diagonal for the existing and new basic variables.
- Check for improvement
- Repeat the procedure until no further improvement is possible, meaning all the coefficients of $\mathbf{c}_B^T\mathbf{B}^{-1}\mathbf{A} - \mathbf{c}^T$ are nonnegative. The procedure is also terminated if the algorithm walks in a circle and revisits a previous state (cycling). A NumPy sketch of this loop is given after the list.
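A minimal, self-contained sketch of this loop in Python/NumPy, under the assumptions of the sections above (maximize $\mathbf{c}^T\mathbf{x}$ subject to $\mathbf{A}\mathbf{x} \le \mathbf{b}$, $\mathbf{x} \ge 0$ with $\mathbf{b} \ge 0$ so that the slack basis is feasible). The function name, tolerances, and iteration cap are illustrative choices, and no anti-cycling rule is implemented:

```python
import numpy as np

def simplex(c, A, b, max_iter=100):
    """Revised simplex sketch: maximize c^T x s.t. A x <= b, x >= 0, b >= 0."""
    m, n = A.shape
    A_aug = np.hstack([A, np.eye(m)])          # [A | I]
    c_aug = np.concatenate([c, np.zeros(m)])   # slack costs are zero
    basis = list(range(n, n + m))              # initial BF: slack variables are basic

    for _ in range(max_iter):
        B = A_aug[:, basis]
        c_B = c_aug[basis]
        B_inv = np.linalg.inv(B)               # a real solver would update a factorization
        x_B = B_inv @ b                        # current values of the basic variables

        # Objective-row coefficients c_B^T B^-1 [A|I] - c; a negative entry means
        # the corresponding nonbasic variable can still increase Z.
        reduced = c_B @ B_inv @ A_aug - c_aug
        entering = int(np.argmin(reduced))
        if reduced[entering] >= -1e-10:
            break                              # optimal: no negative coefficient left

        # Ratio test: which basic variable reaches zero first as the
        # entering variable is increased?
        d = B_inv @ A_aug[:, entering]
        ratios = np.where(d > 1e-10, x_B / np.where(d > 1e-10, d, 1.0), np.inf)
        leaving = int(np.argmin(ratios))
        if np.isinf(ratios[leaving]):
            raise ValueError("problem is unbounded")
        basis[leaving] = entering              # pivot: swap leaving and entering

    # Recover the full solution vector from the final basis.
    x = np.zeros(n + m)
    x[basis] = np.linalg.inv(A_aug[:, basis]) @ b
    return x[:n], float(c @ x[:n])

# With the illustrative data used earlier:
x_opt, z_opt = simplex(np.array([3.0, 2.0]),
                       np.array([[1.0, 1.0], [1.0, 0.0]]),
                       np.array([4.0, 3.0]))
print(x_opt, z_opt)   # [3. 1.] 11.0
```

On the small example this reproduces the vertex path described earlier, ending at $(3, 1)$ with objective value 11.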
References
- Greenberg, Harvey J.: Klee-Minty Polytope Shows Exponential Time Complexity of Simplex Method. University of Colorado at Denver, 1997.
- Frederick S. Hillier and Gerald J. Lieberman: Introduction to Operations Research, 8th edition. McGraw-Hill. ISBN 0-07-123828-X
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 29.3: The simplex algorithm, pp.790–804.
See also
- Nelder–Mead method
- Methods for solving problems with the simplex algorithm