Self Online Study - Mathematics - Linear Programming
Linear Inequations in Two Variables
Inequations of the form ax + by < c, ax + by > c, ax + by ≤ c, or ax + by ≥ c (where a and b are not both zero) are called linear inequations in two variables x and y.
The points (x, y) for which the inequation holds true constitute its solution set.
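As a quick sketch of this definition (using the form ax + by + c ≤ 0; the coefficients below are illustrative), membership of a point in the solution set is just a sign check:

```python
def in_solution_set(a, b, c, x, y):
    """Return True if (x, y) satisfies the linear inequation a*x + b*y + c <= 0."""
    return a * x + b * y + c <= 0

# Example: the inequation 2x + 3y - 6 <= 0
print(in_solution_set(2, 3, -6, 0, 0))   # (0, 0): 2*0 + 3*0 - 6 = -6, satisfied
print(in_solution_set(2, 3, -6, 3, 3))   # (3, 3): 6 + 9 - 6 = 9, not satisfied
```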
Graph of a Linear Inequation
First off, let me say that graphing linear inequalities is much easier than your book makes it look. Here’s how it works:
Think about how you’ve done inequalities on the number line. For instance, they’d ask you to graph something like x greater than 2. How did you do it? You would draw your number line, find the "equals" part (in this case, x = 2), mark this point with the appropriate notation (an open dot or a parenthesis, indicating that the point x = 2 wasn’t included in the solution), and then you’d shade everything to the right, because "greater than" meant "everything off to the right". The steps for linear inequalities are very much the same.
Linear programming is often a favorite topic for both professors and students. The ability to introduce LP using a graphical approach, the relative ease of the solution method, the widespread availability of LP software packages, and the wide range of applications make LP accessible even to students with relatively weak mathematical backgrounds. Additionally, LP provides an excellent opportunity to introduce the idea of "what-if" analysis, due to the powerful tools for post-optimality analysis developed for the LP model.
Linear Programming (LP) is a mathematical procedure for determining the optimal allocation of scarce resources. LP is a procedure that has found practical application in almost all facets of business, from advertising to production planning. Transportation, distribution, and aggregate production planning problems are the most typical objects of LP analysis. In the petroleum industry, for example, a data-processing manager at a large oil company recently estimated that from 5 to 10 percent of the firm’s computer time was devoted to the processing of LP and LP-like models.
Linear programming deals with a class of programming problems where both the objective function to be optimized and all relations among the variables corresponding to resources are linear. This problem was first formulated and solved in the late 1940s. Rarely has a new mathematical technique found such a wide range of practical business, commerce, and industrial applications and simultaneously received so thorough a theoretical development, in such a short period of time. Today, this theory is being successfully applied to problems of capital budgeting, design of diets, conservation of resources, games of strategy, economic growth prediction, and transportation systems. In very recent times, linear programming theory has also helped resolve and unify many outstanding applications.
It is important for the reader to appreciate, at the outset, that the "programming" in Linear Programming is of a different flavor than the "programming" in Computer Programming. In the former case, it means to plan and organize, as in "Get with the program!"; in the latter case, it means to write code for performing calculations. Training in one kind of programming has very little direct relevance to the other. In fact, the term "linear programming" was coined before the word "programming" became closely associated with computer software. This confusion is sometimes avoided by using the term linear optimization as a synonym for linear programming.
Any LP problem consists of an objective function and a set of constraints. In most cases, the constraints come from the environment in which you work to achieve your objective. When you try to achieve a desirable objective, you will find that the environment sets some constraints (i.e., difficulties, restrictions) on fulfilling that desire or objective. This is why religions such as Buddhism, among others, prescribe living an abstemious life. No desire, no pain. Can you take this advice with respect to your business objective?
Graph the solution to y ≤ 2x + 3.
Just as for number-line inequalities, the first step is to find the "equals" part. In this case, the "equals" part is the line y = 2x + 3. There are a couple ways you can graph this: you can use a T-chart, or you can graph from the y-intercept and the slope. Either way, you get a line that looks like this:
Now we're at the point where your book gets complicated, with talk of "test points" and such. When you did those one-variable inequalities (like x greater than 3), did you bother with "test points", or did you just shade one side or the other? Ignore the "test point" stuff, and look at the original inequality: y ≤ 2x + 3.
You've already graphed the "or equal to" part (it's just the line); now you're ready to do the "y less than" part. In other words, this is where you need to shade one side of the line or the other. Now think about it: If you need y LESS THAN the line, do you want ABOVE the line, or BELOW? Naturally, you want below the line. So shade it in:
And that’s all there is to it: the side you shaded is the "solution region" they want.
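The shading in the worked example can be reproduced with a tiny sketch: mark every grid point that satisfies y ≤ 2x + 3 (the grid ranges below are arbitrary choices for illustration).

```python
def shade_region(holds, xs, ys):
    """Return ASCII rows: '#' where the inequality test holds(x, y) is True, '.' elsewhere."""
    return ["".join("#" if holds(x, y) else "." for x in xs) for y in ys]

# Shade y <= 2x + 3 over a small grid; rows run from y = 6 down to y = -6.
rows = shade_region(lambda x, y: y <= 2 * x + 3,
                    xs=range(-6, 7, 2), ys=range(6, -7, -2))
for row in rows:
    print(row)
```

The block of `#` characters forms the region below (and on) the line, matching the shaded "solution region" described above.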
Linear Programming Formulation
We set up the problem by numbering each of the n cities as C1, ..., Cn; a tour is then just a permutation, say (k1, k2, ..., kn), of (1, ..., n). We interpret this as the tour which starts at Ck1, then goes to Ck2, and so on, running through the remaining cities up to and including Ckn before finally completing the tour back at Ck1. It is thus clear that we can permute the permutation cyclically without changing the tour. There is no loss of generality, then, in assuming that Ck1 = C1; it is then clear that there are (n - 1)! different tours. This also shows that solving the problem by exhaustive enumeration is infeasible unless there are very few cities. The problem can be formulated as a linear programming problem, as we now do. Introduce (integer) variables xij, where xij = 1 when Cj is the city immediately following Ci in the tour, and xij = 0 otherwise. Then, since exactly one city precedes Cj in the tour, we have, for each j, the sum of xij over all i ≠ j equal to 1.
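The (n - 1)! blow-up mentioned above is easy to see numerically, and for a tiny instance exhaustive enumeration is still possible. A minimal sketch (the 4-city distance matrix below is a made-up illustration, not data from the text):

```python
from itertools import permutations
from math import factorial

# Number of distinct tours once the starting city is fixed at C1.
for n in (5, 10, 15, 20):
    print(n, "cities ->", factorial(n - 1), "tours")

# Brute-force enumeration for a tiny hypothetical instance (cities 0..3).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]

def tour_length(perm, dist):
    """Length of the tour 0 -> perm[0] -> ... -> perm[-1] -> 0."""
    legs = ([dist[0][perm[0]]]
            + [dist[perm[i]][perm[i + 1]] for i in range(len(perm) - 1)]
            + [dist[perm[-1]][0]])
    return sum(legs)

best = min(permutations(range(1, 4)), key=lambda p: tour_length(p, dist))
print(best, tour_length(best, dist))
```

Fixing city 0 as the start mirrors the assumption Ck1 = C1 in the text; only (4 - 1)! = 6 permutations need to be checked here, but the count explodes for realistic n.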
LP Problem Formulation Process and Its Applications
To formulate an LP problem, I recommend using the following guidelines after reading the problem statement carefully a few times.
Any linear program consists of four parts: a set of decision variables, the parameters, the objective function, and a set of constraints. In formulating a given decision problem in mathematical form, you should practice understanding the problem (i.e., formulating a mental model) by carefully reading and re-reading the problem statement. While trying to understand the problem, ask yourself the following general questions:
1. What are the decision variables? That is, what are controllable inputs? Define the decision variables precisely, using descriptive names. Remember that the controllable inputs are also known as controllable activities, decision variables, and decision activities.
2. What are the parameters? That is, what are the uncontrollable inputs? These are usually the given constant numerical values. Define the parameters precisely, using descriptive names.
3. What is the objective? What is the objective function? Also, what does the owner of the problem want? How is the objective related to the decision variables? Is it a maximization or minimization problem? The objective represents the goal of the decision-maker.
4. What are the constraints? That is, what requirements must be met? Should I use inequality or equality type of constraint? What are the connections among variables? Write them out in words before putting them in mathematical form.
Note that the feasible region has little or nothing to do with the objective function (min or max). These two parts of any LP formulation come mostly from two distinct sources. The objective function is set up to fulfill the decision-maker’s desire (objective), whereas the constraints that shape the feasible region usually come from the decision-maker’s environment, which puts restrictions/conditions on achieving that objective.
The following is a very simple illustrative problem. However, the way we approach it is the same for a wide variety of decision-making problems, though the size and complexity may differ. The first example is a product-mix problem.
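Since the product-mix example itself is not reproduced here, a hypothetical instance (every number below is an illustrative assumption, not from the text) shows what the four parts of an LP formulation look like in code:

```python
# Hypothetical product-mix LP (all numbers are illustrative assumptions).
# Decision variables: x1 = tables made per week, x2 = chairs made per week.
# Parameters: unit profits and resource limits (given constants).
objective = {"x1": 5, "x2": 4}          # maximize profit 5*x1 + 4*x2

constraints = [
    {"x1": 6, "x2": 4, "limit": 24},    # 6*x1 + 4*x2 <= 24 carpentry hours
    {"x1": 1, "x2": 2, "limit": 6},     # 1*x1 + 2*x2 <= 6  painting hours
]

def feasible(x1, x2):
    """A production plan is feasible if it is non-negative and meets every constraint."""
    return (x1 >= 0 and x2 >= 0 and
            all(c["x1"] * x1 + c["x2"] * x2 <= c["limit"] for c in constraints))

def profit(x1, x2):
    return objective["x1"] * x1 + objective["x2"] * x2

print(feasible(3, 1.5), profit(3, 1.5))   # a feasible plan and its profit
print(feasible(5, 0))                     # infeasible: exceeds carpentry hours
```

The decision variables are the controllable inputs, the unit profits and hour limits are the parameters, and the `feasible` test encodes the constraint set.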
Graphical Solution Method
Procedure for Graphical Method of Solving LP Problems:
1. Is the problem an LP? Yes, if and only if:
All variables have a power of 1, and they are added or subtracted (not divided or multiplied together). The constraints must be of the following forms (≤, ≥, or =; that is, LP constraints are always closed), and the objective must be either maximization or minimization.
For example, the following problem is not an LP: Max X, subject to X < 1. This very simple problem has no solution, because the strict inequality leaves the feasible set open and the maximum is never attained.
2. Can I use the graphical method? Yes, if the number of decision variables is either 1 or 2.
3. Use graph paper. Graph each constraint one by one, by pretending that they are equalities (pretend all ≤ and ≥ are =), and then plot the line. Graph the straight line on a system of coordinates on the graph paper. A system of coordinates has two axes: a horizontal axis called the x-axis (abscissa), and a vertical axis called the y-axis (ordinate). The axes are numbered, usually from zero to the largest value expected for each variable.
4. As each line is created, divide the plane into three parts with respect to that line. To identify the feasible side for a particular constraint, pick a point on either side of the line and plug its coordinates into the constraint. If it satisfies the condition, that side is feasible; otherwise, the other side is feasible. For equality constraints, only the points on the line itself are feasible.
5. Throw away the sides that are not feasible.
After all the constraints are graphed, you should have a non-empty (convex) feasible region, unless the problem is infeasible.
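The test-point check in step 4 can be sketched as follows (the constraint 2x + 3y ≤ 12 and the test points are illustrative):

```python
def feasible_side(a, b, limit, point):
    """Step-4 sketch: plug a test point into the constraint a*x + b*y <= limit
    to decide which side of the boundary line a*x + b*y = limit is feasible."""
    x, y = point
    if a * x + b * y <= limit:
        return "the side containing {} is feasible".format(point)
    return "the side opposite {} is feasible".format(point)

# Test the origin, then a point on the other side, against 2x + 3y <= 12:
print(feasible_side(2, 3, 12, (0, 0)))
print(feasible_side(2, 3, 12, (9, 0)))
```

The origin is usually the most convenient test point, provided the boundary line does not pass through it.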
6. Create (at least) two iso-value lines from the objective function, by setting the objective function to any two distinct numbers. Graph the resulting lines. By moving these lines parallel, you will find the optimal corner (extreme point), if it does exist.
In general, if the feasible region is within the first quadrant of the coordinate system (i.e., if X1 ≥ 0 and X2 ≥ 0), then for maximization problems you move the iso-value objective function line parallel to itself away from the origin (0, 0), while keeping at least one common point with the feasible region. For minimization problems the opposite is true: you move the iso-value objective line parallel to itself closer to the origin, while keeping at least one common point with the feasible region. That common point provides the optimal solution.
Classification of the Feasible Points:
The feasible points of any non-empty LP feasible region can be classified as interior points, boundary points, or vertices, as shown in the following figure:
The point B in the above two-dimensional figure, for example, is a boundary point of the feasible set because every circle centered at B, however small, contains some points in the set and some points outside it. The point I is an interior point because the orange circle, and all smaller circles (as well as some larger ones), contain exclusively points in the set. The collection of boundary points belonging to one constraint is called a boundary line (segment), e.g., the line segment cd. The intersections of boundary lines (segments) are called vertices; if a vertex is feasible, it is called a corner point. In three-dimensional space and higher, the circles become spheres and hyper-spheres.
Note that the LP constraints provide the vertices and corner points. A vertex is the intersection of two lines or, in general, of n hyperplanes in LP problems with n decision variables. A corner point is a vertex that is also feasible.
If a linear program has a bounded optimal solution, then one of the corner points provides an optimal solution.
The proof of this claim follows from the results of the following two facts:
Fact No. 1: The feasible region of any linear program is always a convex set.
Since all of the constraints are linear, the feasible region (F.R.) is a polygon. Moreover, this polygon is a convex set. In any LP problem with more than two dimensions, the boundaries of the F.R. are parts of hyperplanes, and the F.R. in this case is called a polyhedron, which is also convex. A convex set is one in which, if you choose any two feasible points in it, all the points on the straight line segment joining them are also feasible. The proof that the F.R. of a linear program is always a convex set follows by contradiction. The following figures depict examples of the two types of sets: a non-convex set and a convex set.
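The segment test in the definition of convexity can be checked numerically. A minimal sketch, using a small illustrative feasible region (x + y ≤ 4, x ≥ 0, y ≥ 0; the region and sample points are assumptions for the example):

```python
def on_segment(p, q, t):
    """Point at fraction t along the straight line segment from p to q (0 <= t <= 1)."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def feasible(x, y):
    # Illustrative feasible region: x + y <= 4, x >= 0, y >= 0.
    return x >= 0 and y >= 0 and x + y <= 4

p, q = (0, 3), (4, 0)                    # two feasible points
checks = [feasible(*on_segment(p, q, k / 10)) for k in range(11)]
print(all(checks))                       # every sampled point on the segment is feasible
```

Sampling points along the segment is only a spot check, not a proof, but for a convex region every such check must succeed.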
The feasible region of any linear program is called a polyhedron; it is called a polytope if it is bounded.
Fact No. 2: The iso-value lines of a linear program's objective function are always straight, parallel lines.
This fact follows from the linearity of the objective function in any LP problem. The following figures depict the two typical kinds of iso-value objective functions.
Combining the above two facts, it follows that if a linear program has a non-empty, bounded feasible region, then the optimal solution is always attained at one of the corner points.
To overcome the deficiency of the graphical method, we will utilize this useful and practical conclusion in developing an algebraic method that is applicable to multi-dimensional LP problems.
The convexity of the feasible region makes LP problems easy to solve. Because of this property and the linearity of the objective function, an optimal solution always occurs at one of the vertices. Moreover, since the number of vertices is finite, one can find all feasible vertices and then evaluate the objective function at each of them to locate the optimal point.
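The vertex-evaluation recipe above can be sketched for two variables: intersect every pair of constraint boundary lines, keep the feasible intersections (the corner points), and take the best objective value. The constraints and objective below are illustrative assumptions.

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c, including non-negativity.
cons = [(6, 4, 24),      # 6x + 4y <= 24
        (1, 2, 6),       # x + 2y <= 6
        (-1, 0, 0),      # x >= 0
        (0, -1, 0)]      # y >= 0

def intersect(c1, c2):
    """Vertex: solve the 2x2 system given by two boundary lines (None if parallel)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def is_corner(pt):
    """A vertex is a corner point only if it satisfies every constraint."""
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in cons)

corners = [p for p in (intersect(c1, c2) for c1, c2 in combinations(cons, 2))
           if p is not None and is_corner(p)]
best = max(corners, key=lambda p: 5 * p[0] + 4 * p[1])   # maximize 5x + 4y
print(best, 5 * best[0] + 4 * best[1])
```

Enumerating all vertex pairs is fine in two dimensions, but the number of candidate vertices grows combinatorially with the problem size, which is why practical solvers search vertices more selectively.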
For nonlinear programs, the problem is much harder to solve, because the solution could be anywhere: inside the feasible region, on the boundary of the feasible region, or at a vertex.
Fortunately, most business optimization problems have linear constraints, which is why LP is so popular. There are well over 400 computer packages on the market today for solving LP problems. Most of them are based on vertex searching, that is, jumping from one vertex to a neighboring one in search of an optimal point.
You have already noticed that a graph of a system of inequalities and/or equalities is called the feasible region. The two representations, graphical and algebraic, are equivalent to each other: the coordinates of any point satisfying the constraints lie in the feasible region, and the coordinates of any point in the feasible region satisfy all the constraints.