Linear Programming Theory and Applications
1 Introduction to Linear Programming
Linear programming was developed during World War II, when a system with
which to maximize the efficiency of resources was of utmost importance. New
war-related projects demanded attention and spread resources thin. “Programming” was a military term that referred to activities such as planning schedules
efficiently or deploying men optimally. George Dantzig, a member of the U.S.
Air Force, developed the Simplex method of optimization in 1947 in order to
provide an efficient algorithm for solving programming problems that had linear
structures. Since then, experts from a variety of fields, especially mathematics
and economics, have developed the theory behind “linear programming” and
explored its applications [1].
This paper will cover the main concepts in linear programming, including
examples when appropriate. First, in Section 1 we will explore simple properties, basic definitions and theories of linear programs. In order to illustrate
some applications of linear programming, we will explain simplified “real-world”
examples in Section 2. Section 3 presents more definitions, concluding with the
statement of the General Representation Theorem (GRT). In Section 4 we outline the proof of the GRT, and in Section 5 we work through a few examples
related to it.
After learning the theory behind linear programs, we will focus on methods
of solving them. Section 6 introduces concepts necessary for developing the
Simplex algorithm, which we explain in Section 7. In Section 8, we explore
the Simplex further and learn how to deal with no initial basis in the Simplex
tableau. Next, Section 9 discusses cycling in Simplex tableaux and ways to
counter this phenomenon. We present an overview of sensitivity analysis in
Section 10. Finally, we put all of these concepts together in an extensive case
study in Section 11.
1.1 What is a linear program?
We can reduce the structure that characterizes linear programming problems
(perhaps after several manipulations) into the following form:
Minimize c1x1 + c2x2 + · · · + cnxn = z
Subject to a11x1 + a12x2 + · · · + a1nxn = b1
           a21x1 + a22x2 + · · · + a2nxn = b2
           ...
           am1x1 + am2x2 + · · · + amnxn = bm
           x1, x2, . . . , xn ≥ 0.
In linear programming, z, the expression being optimized, is called the objective function. The variables x1, x2, . . . , xn are called decision variables, and their
values are subject to m + 1 constraints (every line ending with a bi, plus the
nonnegativity constraint). A set of values x1, x2, . . . , xn satisfying all the constraints is
called a feasible point and the set of all such points is called the feasible region. The solution of the linear program must be a point (x1, x2, . . . , xn) in the
feasible region, or else not all the constraints would be satisfied.
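The definitions above can be sketched in a few lines of code: a point is feasible when it satisfies every equality constraint and nonnegativity, and the objective function assigns it a value z. This is a minimal illustration only; the coefficients below are made up and do not come from the paper.

```python
# Sketch of the standard-form definitions: z = c1*x1 + ... + cn*xn is the
# objective value of a point, and a point is feasible when it satisfies all
# m equality constraints plus nonnegativity.
def objective(c, x):
    """Objective value z = sum of c_i * x_i."""
    return sum(ci * xi for ci, xi in zip(c, x))

def is_feasible(A, b, x, tol=1e-9):
    """Check x >= 0 and A x = b row by row (within a tolerance)."""
    nonneg = all(xi >= -tol for xi in x)
    meets = all(abs(sum(aij * xj for aij, xj in zip(row, x)) - bi) <= tol
                for row, bi in zip(A, b))
    return nonneg and meets

# Hypothetical data (not from the text): two constraints, three variables.
c = [2, 3, 1]
A = [[1, 1, 1],
     [2, 0, 1]]
b = [6, 5]
x = [2, 3, 1]  # 2 + 3 + 1 = 6 and 2*2 + 1 = 5, all components nonnegative

print(is_feasible(A, b, x), objective(c, x))  # -> True 14
```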
The following example from Chapter 3 of Winston [3] illustrates that geometrically interpreting the feasible region is a useful tool for solving linear
programming problems with two decision variables. The linear program is:
Minimize 4x1 + x2 = z
Subject to 3x1 + x2 ≥ 10
           x1 + x2 ≥ 5
           x1 ≥ 3
           x1, x2 ≥ 0.
We plotted the system of inequalities as the shaded region in Figure 1. Since
all of the constraints are “greater than or equal to” constraints, the shaded
region above all three lines is the feasible region. The solution to this linear
program must lie within the shaded region.