Linear Programming
Optimization Problem: Problems which seek to maximize or minimize a numerical function of a finite number of variables subject to certain constraints are called optimization problems.
Programming Problem: Programming problems deal with determining the optimal allocation of limited resources to meet given objectives. The constraints on the limited resources are given by linear or non-linear inequalities or equations, and the given objective may be to maximize or minimize some function of a finite number of variables.
Linear Programming and Linear Programming Problem: Suppose we are given m linear inequalities or equations in n variables, and we wish to find non-negative values of these variables which satisfy the constraints and maximize or minimize some linear function of these variables (the objective function). This procedure is known as linear programming, and the problem so described is known as a linear programming problem.
Mathematically it can be described as follows: suppose we have m linear inequalities or equations in n unknown variables of the form
$\sum_{j=1}^{n} a_{ij} x_j \;\{\le, =, \ge\}\; b_i \qquad (i = 1, 2, \dots, m),$
where for each constraint one and only one of the signs $\le$, $=$, $\ge$ holds. Now we wish to find the non-negative values of $x_j$, $j = 1, 2, \dots, n$, which will satisfy the constraints and maximize or minimize the linear function (objective function)
$z = \sum_{j=1}^{n} c_j x_j .$
Here $a_{ij}$, $b_i$ and $c_j$ are known constants.
Applications:
(i) Linear programming is widely applied to business and economic problems.
(ii) It is also applied in government, military and industrial operations.
(iii) It is used extensively in development planning.
Objective Function: In a linear programming problem, the linear function
$z = \sum_{j=1}^{n} c_j x_j$
of the variables $x_j$, $j = 1, 2, \dots, n$, which is to be optimized is called the objective function. No constant term appears in an objective function; i.e. we cannot write an objective function of the form
$z = \sum_{j=1}^{n} c_j x_j + k .$
Example of a Linear Programming Problem:

Machine Type    Product 1   Product 2   Product 3   Product 4   Total Time Available Per Week
A               1.5         1           2.4         1           2000
B               1           5           1           3.5         8000
C               1.5         3           3.5         1           5000
Unit Profit     5.24        7.30        8.34        4.18
Suppose three types of machines A, B and C turn out four products 1, 2, 3, 4. The table above shows (i) the hours required on each machine type to produce one unit of each product, (ii) the total available machine hours per week, and (iii) the per-unit profit on the sale of each product.
Let $x_j$ (j = 1, 2, 3, 4) be the number of units of product j produced per week. Then we have the following linear constraints:
$1.5x_1 + x_2 + 2.4x_3 + x_4 \le 2000$ … (i)
$x_1 + 5x_2 + x_3 + 3.5x_4 \le 8000$ … (ii)
$1.5x_1 + 3x_2 + 3.5x_3 + x_4 \le 5000$ … (iii)
Since the amount of production cannot be negative, $x_j \ge 0$ (j = 1, 2, 3, 4). … (iv)
The weekly profit is
$z = 5.24x_1 + 7.3x_2 + 8.34x_3 + 4.18x_4 .$ … (v)
Now we wish to determine the values of the variables $x_j$ for which (i), (ii), (iii) and (iv) are satisfied and (v) is maximized.
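This formulation can be checked with an off-the-shelf solver. The sketch below feeds the table's data to SciPy's `linprog` (an assumption: SciPy is available; `linprog` minimizes, so the profit coefficients are negated):

```python
import numpy as np
from scipy.optimize import linprog

# Hours per unit on machine types A, B, C for products 1..4 (from the table)
A_ub = np.array([[1.5, 1.0, 2.4, 1.0],
                 [1.0, 5.0, 1.0, 3.5],
                 [1.5, 3.0, 3.5, 1.0]])
b_ub = np.array([2000.0, 8000.0, 5000.0])    # machine hours available per week
profit = np.array([5.24, 7.30, 8.34, 4.18])  # unit profits

# linprog minimizes, so maximizing profit means minimizing -profit
res = linprog(c=-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)      # optimal weekly production plan x1..x4
print(-res.fun)   # maximum weekly profit
```

The returned plan satisfies constraints (i)-(iv) and maximizes (v).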
Formulation of Linear Programming Problems
(1) Transportation Problem:
Suppose given amounts of a uniform product are available at each of a number of origins, say warehouses. We wish to send specified amounts of the product to each of a number of different destinations, say retail stores. We are interested in determining the minimum-cost routing from the warehouses to the retail stores.
Let us define
m = the number of warehouses
n = the number of retail stores
$x_{ij}$ = the amount of product shipped from the i-th warehouse to the j-th retail store.
Since negative amounts cannot be shipped, we have $x_{ij} \ge 0$ for all i, j.
$a_i$ = the total number of units of the product available for shipment at the i-th warehouse (i = 1, 2, …, m)
$b_j$ = the number of units of the product required at the j-th retail store.
Since we cannot supply more than the available amount of the product from the i-th warehouse to the different retail stores, we have
$x_{i1} + x_{i2} + \dots + x_{in} \le a_i ; \quad i = 1, 2, \dots, m.$
We must supply each retail store with the number of units desired; therefore
$x_{1j} + x_{2j} + \dots + x_{mj} = b_j ; \quad j = 1, 2, \dots, n.$
The total amount received at any retail store is the sum of the amounts received from each warehouse. The needs of the retail stores can be satisfied only if
$\sum_{i=1}^{m} a_i \ge \sum_{j=1}^{n} b_j .$
Let $c_{ij}$ be the per-unit cost of shipping from the i-th warehouse to the j-th retail store; then the total shipping cost is
$z = \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij} .$
Now we wish to determine the $x_{ij}$ which minimize this cost subject to the constraints
$x_{i1} + x_{i2} + \dots + x_{in} \le a_i ; \quad i = 1, 2, \dots, m,$
$x_{1j} + x_{2j} + \dots + x_{mj} = b_j ; \quad j = 1, 2, \dots, n.$
It is a linear programming problem in mn variables with (m + n) constraints.
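To make the formulation concrete, here is a small invented instance (2 warehouses, 3 stores; the supplies, demands and unit costs are hypothetical, chosen only for illustration) solved with SciPy's `linprog`, flattening the $x_{ij}$ row by row:

```python
import numpy as np
from scipy.optimize import linprog

m, n = 2, 3                          # warehouses, retail stores
a = np.array([40.0, 30.0])           # supply a_i (hypothetical)
b = np.array([20.0, 25.0, 25.0])     # demand b_j (hypothetical); sum(b) <= sum(a)
c = np.array([[4.0, 6.0, 8.0],       # c_ij: unit shipping cost (hypothetical)
              [5.0, 3.0, 7.0]])

# Supply rows: sum_j x_ij <= a_i
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0
# Demand columns: sum_i x_ij = b_j
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c=c.ravel(), A_ub=A_ub, b_ub=a, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * (m * n))
x = res.x.reshape(m, n)              # optimal shipping plan
print(x, res.fun)
```

For this data the minimum total cost works out to 350 (store 2 is served entirely by the cheaper warehouse 2).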
(2) The Diet Problem
Suppose we are given the nutrient content of a number of different foods, the minimum daily requirement of each nutrient, and the quantity of each nutrient contained in one ounce of each food. Since we know the cost per ounce of each food, the problem is to determine the diet that satisfies the minimum daily nutrient requirements at minimum cost.
Let us define
m = the number of nutrients
n = the number of foods
$a_{ij}$ = the quantity (mg) of the i-th nutrient per ounce (oz) of the j-th food
$b_i$ = the minimum daily requirement of the i-th nutrient
$c_j$ = the cost per ounce of the j-th food
$x_j$ = the quantity of the j-th food to be purchased
The total amount of the i-th nutrient contained in all the purchased foods cannot be less than the minimum daily requirement; therefore we have
$a_{i1}x_1 + a_{i2}x_2 + \dots + a_{in}x_n = \sum_{j=1}^{n} a_{ij} x_j \ge b_i .$
The total cost of all purchased foods is given by
$z = \sum_{j=1}^{n} c_j x_j .$
Now our problem is to minimize the cost
$z = \sum_{j=1}^{n} c_j x_j$
subject to the constraints
$\sum_{j=1}^{n} a_{ij} x_j \ge b_i$ and $x_j \ge 0 .$
This is a linear programming problem.
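A tiny invented instance of the diet problem (two nutrients, three foods; all numbers are hypothetical) shows how the $\ge$ constraints are handed to `linprog`, which expects $\le$ rows, by negating both sides:

```python
import numpy as np
from scipy.optimize import linprog

# a[i][j]: mg of nutrient i per oz of food j (hypothetical data)
a = np.array([[2.0, 1.0, 4.0],
              [1.0, 3.0, 2.0]])
b = np.array([10.0, 12.0])       # minimum daily requirement of each nutrient
c = np.array([0.5, 0.4, 0.9])    # cost per oz of each food

# sum_j a_ij x_j >= b_i  is the same as  -a x <= -b
res = linprog(c=c, A_ub=-a, b_ub=-b, bounds=[(0, None)] * 3)
print(res.x, res.fun)            # cheapest diet meeting the requirements
```

For this data the optimum buys none of food 1 and both requirements are met exactly (total cost 2.74).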
Feasible Solution:
Any set of values of the variables $x_j$ which satisfies the constraints
$\sum_{j=1}^{n} a_{ij} x_j \;\{\le, =, \ge\}\; b_i ,$
where $a_{ij}$ and $b_i$ are constants, is called a solution of the linear programming problem, and any solution which also satisfies the non-negativity restrictions, i.e. $x_j \ge 0$, is called a feasible solution.
Optimal Feasible Solution:
A linear programming problem has an infinite number of feasible solutions, and out of all these solutions we must find one which optimizes the objective function $z = \sum_{j=1}^{n} c_j x_j$; such a solution is called an optimal feasible solution.
In other words, any solution which satisfies the following conditions is called an optimal feasible solution:
(i) $\sum_{j=1}^{n} a_{ij} x_j \;\{\le, =, \ge\}\; b_i$
(ii) $x_j \ge 0$
(iii) it optimizes the objective function $z = \sum_{j=1}^{n} c_j x_j$.
Corner Point Feasible Solution:
A feasible solution which does not lie on the line segment connecting any other two feasible solutions is called a corner-point feasible solution.
Properties:
(i) If the linear programming problem has exactly one optimal solution, then it is a corner-point feasible solution.
(ii) If the problem has more than one optimal solution, then at least two of them are adjacent corner-point feasible solutions.
(iii) A linear programming problem has only a finite number of corner points.
(iv) If a corner-point feasible solution is better than all of its adjacent corner-point solutions, then it is better than all other feasible solutions.
Methods for Solving Linear Programming Problems
(1) Graphical Method
(2) Algebraic Method
(3) Simplex Method
Graphical Method:
The graphical method of solving a linear programming problem involves two basic steps.
Step 1: First we determine the feasible solution space.
We represent the values of the variable $x_1$ on the X-axis and the corresponding values of the variable $x_2$ on the Y-axis. Any point lying in the first quadrant satisfies $x_1 \ge 0$ and $x_2 \ge 0$. The easiest way of accounting for the remaining constraints is to replace the inequalities with equations and plot the resulting straight lines.
Next we consider the effect of the inequality. All the inequality does is divide the $(x_1, x_2)$-plane into the two half-planes that occur on either side of the plotted line: one side satisfies the inequality and the other does not. In the case of a $\le$ constraint of the kind considered here, any point lying on or below the line satisfies the inequality. A convenient procedure for determining the feasible side is to use the origin (0, 0) as a reference point.
Step 2: Next we determine the optimal solution.
Problem: Find the non-negative values of the variables $x_1$ and $x_2$ which satisfy the constraints
$3x_1 + 5x_2 \le 15$
$5x_1 + 2x_2 \le 10$
and which maximize the objective function $z = 5x_1 + 3x_2$.
Solution: We introduce an $x_1 x_2$ coordinate system. Any point lying in the first quadrant has $x_1, x_2 \ge 0$. Now we draw the straight lines $3x_1 + 5x_2 = 15$ and $5x_1 + 2x_2 = 10$ on the graph. Any point lying on or below the line $3x_1 + 5x_2 = 15$ satisfies $3x_1 + 5x_2 \le 15$; similarly, any point lying on or below the line $5x_1 + 2x_2 = 10$ satisfies the constraint $5x_1 + 2x_2 \le 10$.
[Figure: the feasible region bounded by the axes and the lines $3x_1 + 5x_2 = 15$ and $5x_1 + 2x_2 = 10$, with corner points O(0, 0), B(0, 3), A(1.053, 2.368) and C(2, 0); the objective line $z = 5x_1 + 3x_2$ is pushed outward until it last touches the region at A.]
So the region OBAC contains the set of points satisfying both constraints and the non-negativity restrictions, and the points in this region are the feasible solutions. Now we wish to find the line with the largest value of $z = 5x_1 + 3x_2$ which has at least one point in common with the region of feasible solutions. This line is drawn in the graph above, and it shows that the values of $x_1$ and $x_2$ at the point A are the required solution: approximately $x_1 = 1.053$ and $x_2 = 2.368$.
Now from the objective function we get the maximum value of z, which is given by
$z = 5 \times 1.053 + 3 \times 2.368 = 12.37 .$
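The corner point found graphically can be verified numerically: the exact intersection of the two constraint lines is $x_1 = 20/19 \approx 1.053$, $x_2 = 45/19 \approx 2.368$, giving $z = 235/19 \approx 12.37$. A quick check with SciPy's `linprog` (negating the costs, since it minimizes):

```python
from scipy.optimize import linprog

# maximize z = 5x1 + 3x2  ->  minimize -5x1 - 3x2
res = linprog(c=[-5, -3],
              A_ub=[[3, 5], [5, 2]], b_ub=[15, 10],
              bounds=[(0, None), (0, None)])
print(res.x)      # ≈ [1.053, 2.368]  (exactly 20/19, 45/19)
print(-res.fun)   # ≈ 12.37           (exactly 235/19)
```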
Algebraic Method: In LP problems the constraints are generally not all equations. Since equations are easier to handle than inequalities, a simple conversion is needed to turn the inequalities into equalities. Let us consider first the constraints having less-than-or-equal signs ($\le$). Any constraint of this category can be written as
$a_{h1}x_1 + a_{h2}x_2 + \dots + a_{hn}x_n \le b_h \qquad (1)$
Let us introduce a new variable $x_{n+h} \ge 0$ defined by
$x_{n+h} = b_h - \sum_{j=1}^{n} a_{hj} x_j \ge 0 ,$
which converts the inequality into the equality
$a_{h1}x_1 + a_{h2}x_2 + \dots + a_{hn}x_n + x_{n+h} = b_h \qquad (2)$
The new variable $x_{n+h}$ is the difference between the amount of the resource available and the amount actually used, and it is called a slack variable.
Next we consider the constraints having greater-than-or-equal signs ($\ge$). A typical inequality in this set can be written as
$a_{k1}x_1 + a_{k2}x_2 + \dots + a_{kn}x_n \ge b_k \qquad (3)$
Introducing a new variable $x_{n+k} \ge 0$, the inequality can be written as the equality
$a_{k1}x_1 + a_{k2}x_2 + \dots + a_{kn}x_n - x_{n+k} = b_k \qquad (4)$
Here the variable $x_{n+k}$ is called a surplus variable, because the difference between the resources used and the minimum amount required is a surplus.
Therefore, in the algebraic method for solving a linear programming problem, the LP problem with the original constraints can be transformed into an LP problem whose constraints form a system of simultaneous linear equations, by using slack and surplus variables.
Example: Consider the LP problem
Min $-x_1 - 3x_2$
s.t. $x_1 - 2x_2 \le 4$
$-x_1 + x_2 \ge 3$
$x_1, x_2 \ge 0$
Now, introducing two new variables $x_3$ and $x_4$, the problem can be written as
Min $-x_1 - 3x_2 + 0 \cdot x_3 + 0 \cdot x_4$
s.t.
$x_1 - 2x_2 + x_3 = 4$
$-x_1 + x_2 - x_4 = 3$
$x_1, x_2, x_3, x_4 \ge 0$
Here $x_3$ is the slack variable and $x_4$ is the surplus variable.
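The conversion can be done mechanically: append one new column per constraint, +1 for a $\le$ row (slack) and -1 for a $\ge$ row (surplus). A small sketch of this, applied to the example above:

```python
import numpy as np

def to_standard_form(A, signs):
    """Append slack (+1) / surplus (-1) columns; signs[i] is '<=' or '>='."""
    m, _ = A.shape
    S = np.zeros((m, m))
    for i, s in enumerate(signs):
        S[i, i] = 1.0 if s == '<=' else -1.0
    return np.hstack([A, S])

# The example: x1 - 2x2 <= 4,  -x1 + x2 >= 3
A = np.array([[1.0, -2.0],
              [-1.0, 1.0]])
A_std = to_standard_form(A, ['<=', '>='])
print(A_std)
# [[ 1. -2.  1.  0.]
#  [-1.  1.  0. -1.]]
```

Columns 3 and 4 are the slack $x_3$ and the (negated) surplus $x_4$ of the example.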
Effect of Introducing Slack and Surplus Variables
Suppose we have a linear programming problem $P_1$:
Optimize
$Z = c_1x_1 + c_2x_2 + \dots + c_nx_n \qquad (1)$
subject to the conditions
$a_{h1}x_1 + a_{h2}x_2 + \dots + a_{hn}x_n \;\{\le, =, \ge\}\; b_h \qquad (2)$
where one and only one of the signs in the brackets holds for each constraint.
The problem is converted into another linear programming problem $P_2$:
Optimize
$Z = c_1x_1 + c_2x_2 + \dots + c_nx_n + 0 \cdot x_{n+1} + \dots + 0 \cdot x_m \qquad (3)$
subject to the conditions
$AX = a_{h1}x_1 + a_{h2}x_2 + \dots + a_{hn}x_n \pm a_{h,n+1}x_{n+1} \pm \dots \pm a_{hm}x_m = b_h \qquad (4)$
where $A = (a_{ij})$ and $a_j$ (j = 1, 2, …, m) is the j-th column of A.
We claim that optimizing (3) subject to (4) with $x_j \ge 0$ is completely equivalent to optimizing (1) subject to (2) with $x_j \ge 0$.
To prove this, we first note that if we have any feasible solution of the original constraints, then our method of introducing slack or surplus variables yields a set of non-negative slack or surplus variables such that equation (4) is satisfied with all variables non-negative. Conversely, if we have a feasible solution of (4) with all variables non-negative, then its first n components yield a feasible solution of (2). Thus there exists a one-to-one correspondence between the feasible solutions of the original set of constraints and the feasible solutions of the system of simultaneous linear equations. Now if $X^* = (x_1^*, x_2^*, \dots, x_m^*) \ge 0$ is an optimal feasible solution of $P_2$, then the first n components of $X^*$, that is $(x_1^*, x_2^*, \dots, x_n^*)$, form an optimal solution of $P_1$; and by annexing the slack and surplus variables to any optimal solution of $P_1$ we obtain an optimal solution of $P_2$.
Therefore we may conclude that if slack and surplus variables having zero cost are introduced to convert the original set of constraints into a set of simultaneous linear equations, the resulting problem is equivalent to the original problem.
Existence of an Extreme Basic Feasible Solution: Reduction of any feasible solution to a basic feasible solution
Let us consider a linear programming problem with m linear equations in n unknowns:
$AX = b , \quad X \ge 0 ,$
which has at least one feasible solution. Without loss of generality suppose that Rank(A) = m, and let $X = (x_1, x_2, \dots, x_n)$ be a feasible solution. Further suppose that $x_1, x_2, \dots, x_p > 0$ and that $x_{p+1}, x_{p+2}, \dots, x_n = 0$, and let $a_1, a_2, \dots, a_p$ be the respective columns of A corresponding to the variables $x_1, x_2, \dots, x_p$. If $a_1, a_2, \dots, a_p$ are linearly independent then X is a basic feasible solution; in such a case $p \le m$. If p = m then, from the theory of systems of linear equations, the solution is a non-degenerate basic feasible solution. If p < m, the system has a degenerate basic feasible solution with (m − p) of the basic variables equal to zero.
If $a_1, a_2, \dots, a_p$ are dependent, then there exist scalars $\alpha_1, \alpha_2, \dots, \alpha_p$, with at least one positive $\alpha_j$, such that
$\sum_{j=1}^{p} \alpha_j a_j = 0 .$
Consider the point $X'$ with components
$x'_j = x_j - \theta_0 \alpha_j$ for j = 1, 2, …, p, and $x'_j = 0$ for j = p+1, p+2, …, n,
where
$\theta_0 = \min_{j=1,2,\dots,p} \left\{ \frac{x_j}{\alpha_j} : \alpha_j > 0 \right\} = \frac{x_k}{\alpha_k} > 0 .$
If $\alpha_j \le 0$, then $x'_j > 0$, since both $x_j$ and $\theta_0$ are positive. If $\alpha_j > 0$, then by the definition of $\theta_0$ we have $\theta_0 \le x_j / \alpha_j$, and thus $x'_j \ge 0$.
Furthermore
$x'_k = x_k - \theta_0 \alpha_k = x_k - \frac{x_k}{\alpha_k}\,\alpha_k = 0 .$
Hence $X'$ has at most (p − 1) positive components. Also,
$AX' = \sum_{j=1}^{n} a_j x'_j = \sum_{j=1}^{n} a_j (x_j - \theta_0 \alpha_j) = \sum_{j=1}^{n} a_j x_j - \theta_0 \sum_{j=1}^{p} \alpha_j a_j = b .$
Thus we have constructed a feasible solution $X'$ (since $AX' = b$ and $X' \ge 0$) with at most (p − 1) positive components. If the columns of A corresponding to these positive components are linearly independent then $X'$ is a basic feasible solution; otherwise the process is repeated. Eventually a basic feasible solution (BFS) will be obtained.
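The reduction argument above is constructive, so it can be sketched directly in code: repeatedly find a dependence $\alpha$ among the columns of the support, compute $\theta_0$, and drive at least one positive component to zero. This is a sketch under the stated assumptions (Ax = b feasible, x ≥ 0); `null_space` is SciPy's, and the starting point below is an arbitrary non-basic feasible solution for the constraint matrix of the worked example that follows.

```python
import numpy as np
from scipy.linalg import null_space

def reduce_to_bfs(A, b, x, tol=1e-9):
    """Reduce a feasible solution x (Ax = b, x >= 0) to a basic feasible solution."""
    x = np.asarray(x, dtype=float).copy()
    while True:
        support = np.flatnonzero(x > tol)
        N = null_space(A[:, support])
        if N.shape[1] == 0:           # supporting columns independent: x is a BFS
            return x
        alpha = N[:, 0]               # dependence: sum_j alpha_j a_j = 0
        if alpha.max() <= tol:        # ensure at least one alpha_j > 0
            alpha = -alpha
        pos = alpha > tol
        theta0 = np.min(x[support][pos] / alpha[pos])
        x[support] = x[support] - theta0 * alpha   # kills at least one component
        x[np.abs(x) < tol] = 0.0

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
x0 = np.array([1.0, 2.0, 3.0, 1.0])   # feasible (A @ x0 = b) but not basic
x_bfs = reduce_to_bfs(A, b, x0)
print(x_bfs)
```

Each pass removes at least one positive component, so the loop terminates in at most a few iterations.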
Example: Consider the following inequalities:
$x_1 + x_2 \le 6$
$x_2 \le 3$
$x_1, x_2 \ge 0$
Find the basic solutions, the basic feasible solutions and the extreme points.
Solution: By introducing slack variables $x_3$ and $x_4$, the problem is put into the following standard format:
$x_1 + x_2 + x_3 = 6$
$x_2 + x_4 = 3$
$x_1, x_2, x_3, x_4 \ge 0$
So the constraint matrix A is given by
$A = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} = (a_1, a_2, a_3, a_4), \qquad b = \begin{pmatrix} 6 \\ 3 \end{pmatrix}, \qquad \text{Rank}(A) = 2 .$
Therefore the basic solutions correspond to choosing a 2×2 basis B. The following are the possible ways of extracting B out of A:
(i) $B = (a_1, a_2) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, $B^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$, $x_B = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = B^{-1}b = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 6 \\ 3 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}$, $x_N = \begin{pmatrix} x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
(ii) $B = (a_1, a_3) = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$; since |B| = 0 it is not possible to find $B^{-1}$, and hence there is no basic solution $x_B$ for this choice.
(iii) $B = (a_1, a_4) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$; $B^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$; $x_B = \begin{pmatrix} x_1 \\ x_4 \end{pmatrix} = B^{-1}b = \begin{pmatrix} 6 \\ 3 \end{pmatrix}$, $x_N = \begin{pmatrix} x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
(iv) $B = (a_2, a_3) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$; $B^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}$; $x_B = \begin{pmatrix} x_2 \\ x_3 \end{pmatrix} = B^{-1}b = \begin{pmatrix} 3 \\ 3 \end{pmatrix}$, $x_N = \begin{pmatrix} x_1 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
(v) $B = (a_2, a_4) = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$; $B^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$; $x_B = \begin{pmatrix} x_2 \\ x_4 \end{pmatrix} = B^{-1}b = \begin{pmatrix} 6 \\ -3 \end{pmatrix}$, $x_N = \begin{pmatrix} x_1 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
(vi) $B = (a_3, a_4) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$; $B^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$; $x_B = \begin{pmatrix} x_3 \\ x_4 \end{pmatrix} = B^{-1}b = \begin{pmatrix} 6 \\ 3 \end{pmatrix}$, $x_N = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
Hence we have the following five basic solutions:
$x^{(1)} = \begin{pmatrix} 3 \\ 3 \\ 0 \\ 0 \end{pmatrix}; \quad x^{(2)} = \begin{pmatrix} 6 \\ 0 \\ 0 \\ 3 \end{pmatrix}; \quad x^{(3)} = \begin{pmatrix} 0 \\ 3 \\ 3 \\ 0 \end{pmatrix}; \quad x^{(4)} = \begin{pmatrix} 0 \\ 6 \\ 0 \\ -3 \end{pmatrix}; \quad x^{(5)} = \begin{pmatrix} 0 \\ 0 \\ 6 \\ 3 \end{pmatrix}$
All of these except $x^{(4)}$ are BFS; $x^{(4)}$ violates the non-negativity restrictions. The BFS belong to a four-dimensional space; projecting them onto the $(x_1, x_2)$ space gives rise to the following four points:
$\begin{pmatrix} 3 \\ 3 \end{pmatrix}, \quad \begin{pmatrix} 6 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and (6, 0), which are the same as the BFS. Therefore the extreme points are precisely the basic feasible solutions. The number of BFS is 4, which is less than the 6 possible choices of basis.
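The six basis choices can be enumerated mechanically, reproducing the counts above (5 basic solutions from 6 candidate bases, of which 4 are feasible). A sketch with NumPy:

```python
import itertools
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])

basic, bfs = [], []
for cols in itertools.combinations(range(4), 2):   # all 2x2 bases B out of A
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:              # singular: no basic solution
        continue
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)          # x_B = B^{-1} b, x_N = 0
    basic.append(x)
    if np.all(x >= -1e-9):                         # feasible iff non-negative
        bfs.append(x)

print(len(basic), len(bfs))   # 5 basic solutions, 4 of them feasible
```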
The Simplex Method:
General Mathematical Formulation of Linear Programming
Let us define the objective function which is to be optimized:
$z = c_1x_1 + c_2x_2 + \dots + c_nx_n$
We have to find the values of the decision variables $x_1, x_2, \dots, x_n$ on the basis of the following m constraints:
$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \;\{\le, =, \ge\}\; b_1$
$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \;\{\le, =, \ge\}\; b_2$
$\vdots$
$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \;\{\le, =, \ge\}\; b_m$
and $x_j \ge 0$; j = 1, 2, …, n.
The above formulation can be written in the following compact form by using the summation sign:
Optimize (maximize or minimize)
$z = \sum_{j=1}^{n} c_j x_j$
subject to the conditions
$\sum_{j=1}^{n} a_{ij} x_j \;\{\le, =, \ge\}\; b_i ; \quad i = 1, 2, \dots, m$
and $x_j \ge 0$; j = 1, 2, …, n.
The constants $c_j$, j = 1, 2, …, n, are called the cost coefficients; the constants $b_i$, i = 1, 2, …, m, are called the stipulations; and the constants $a_{ij}$, i = 1, 2, …, m, j = 1, 2, …, n, are called the structural coefficients. In matrix notation the above equations can be written as
Optimize $z = CX$
subject to the conditions
$AX \;\{\le, =, \ge\}\; B$
where
$C = (c_1 \; c_2 \; \dots \; c_n)_{1 \times n} ; \qquad X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}_{n \times 1} ; \qquad A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}_{m \times n} ; \qquad B = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}_{m \times 1}$
Here A is called the coefficient matrix, X the decision vector, B the requirement vector and C the cost vector of the linear programming problem.
The Standard Form of the LP Problem
The use of basic solutions to solve the general LP model requires putting the problem in standard form. The following are the characteristics of the standard form:
(i) All the constraints are expressed in the form of equations, except the non-negativity restrictions on the decision variables, which remain inequalities.
(ii) The right-hand side of each constraint equation is non-negative.
(iii) All the decision variables are non-negative.
(iv) The objective function may be of the maximization or the minimization type.
Conversion of Inequalities into Equations:
An inequality constraint of the type ($\le$, $\ge$) can be converted into an equation by adding a variable to, or subtracting a variable from, the left-hand side of the constraint. These new variables are called the slack variables, or simply slacks. They are added if the constraint is of the $\le$ type and subtracted if it is of the $\ge$ type. Since in the $\ge$ case the subtracted variable represents the surplus of the left-hand side over the right-hand side, it is commonly known as a surplus variable and is in fact a negative slack.
For example,
$x_1 + x_2 \le b_1$
is equivalent to
$x_1 + x_2 + s_1 = b_1 ,$
and
$x_1 + x_2 \ge b_2$
is equivalent to
$x_1 + x_2 - s_1 = b_2 .$
The general LP problem discussed above can be expressed in the following standard form:
Optimize
$z = \sum_{j=1}^{n} c_j x_j$
subject to the conditions
$\sum_{j=1}^{n} a_{ij} x_j \pm s_i = b_i ; \quad i = 1, 2, \dots, m ,$
$x_j \ge 0 ; \quad j = 1, 2, \dots, n ,$
and
$s_i \ge 0 ; \quad i = 1, 2, \dots, m .$
In matrix notation, the general LP problem can be written in the following standard form:
Optimize $z = CX$
subject to the conditions
$AX \pm S = B , \quad X \ge 0 , \quad S \ge 0 .$
Example: Express the following LP problem in standard form:
Maximize $z = 3x_1 + 2x_2$
subject to the conditions
$2x_1 + x_2 \le 2$
$3x_1 + 4x_2 \ge 12$
$x_1, x_2 \ge 0$
Solution: Introducing a slack and a surplus variable, the problem can be expressed in standard form as follows:
Maximize $z = 3x_1 + 2x_2$
subject to the conditions
$2x_1 + x_2 + s_1 = 2$
$3x_1 + 4x_2 - s_2 = 12$
$x_1, x_2, s_1, s_2 \ge 0 .$
Conversion of an Unrestricted Variable into Non-negative Variables
An unrestricted variable $x_j$ can be expressed in terms of two non-negative variables by using the substitution
$x_j = x_j^+ - x_j^- ; \qquad x_j^+, x_j^- \ge 0 .$
For example, if $x_j = -10$, then $x_j^+ = 0$ and $x_j^- = 10$; if $x_j = 10$, then $x_j^+ = 10$ and $x_j^- = 0$.
The substitution is effected in all constraints and in the objective function. After solving the problem in terms of $x_j^+$ and $x_j^-$, the value of the original variable $x_j$ is then determined through back substitution.
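As a quick sanity check of the substitution, the two formulations of a toy problem (minimize x subject to x ≥ -5, x unrestricted; the instance is invented for illustration) give the same optimum: once with a free bound, once with the split $x = x^+ - x^-$ and back substitution.

```python
from scipy.optimize import linprog

# Directly, with x unrestricted in sign:
direct = linprog(c=[1], A_ub=[[-1]], b_ub=[5], bounds=[(None, None)])

# Via the substitution x = x_plus - x_minus, with x_plus, x_minus >= 0:
split = linprog(c=[1, -1],                 # x = x_plus - x_minus in the objective
                A_ub=[[-1, 1]], b_ub=[5],  # -(x_plus - x_minus) <= 5
                bounds=[(0, None), (0, None)])
x_back = split.x[0] - split.x[1]           # back substitution

print(direct.fun, x_back)                  # both give -5
```

Note that `linprog` assumes non-negative variables by default, which is exactly why either the explicit free bound or the split is needed.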
Example: Express the following linear programming problem in standard form:
Maximize $z = 3x_1 + 2x_2 + 5x_3$
subject to
$2x_1 - 3x_2 \le 3$
$x_1 + 2x_2 + 3x_3 \ge 5$
$3x_1 + 2x_3 \le 2$
Solution: Here $x_1$ and $x_2$ are restricted to be non-negative while $x_3$ is unrestricted. Let us express it as $x_3 = x_3^+ - x_3^-$ where $x_3^+ \ge 0$ and $x_3^- \ge 0$. Now, introducing slack and surplus variables, the problem can be written in standard form as follows:
Maximize $z = 3x_1 + 2x_2 + 5(x_3^+ - x_3^-)$
subject to the conditions
$2x_1 - 3x_2 + s_1 = 3$
$x_1 + 2x_2 + 3x_3^+ - 3x_3^- - s_2 = 5$
$3x_1 + 2x_3^+ - 2x_3^- + s_3 = 2$
$x_1, x_2, x_3^+, x_3^-, s_1, s_2, s_3 \ge 0 .$
Conversion of Maximization to Minimization:
The maximization of a function $f(x_1, x_2, \dots, x_n)$ is equivalent to the minimization of $-f(x_1, x_2, \dots, x_n)$ in the sense that both problems yield the same optimal values of $x_1, x_2, \dots, x_n$.
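A small numerical illustration of this equivalence (the quadratic is an arbitrary stand-in for an objective function): the maximizer of f and the minimizer of -f coincide, and the optimal values differ only in sign.

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)
f = lambda x: -(x - 2.0) ** 2 + 1.0    # an arbitrary objective, peak at x = 2

i_max = int(np.argmax(f(xs)))          # maximizer of f
i_min = int(np.argmin(-f(xs)))         # minimizer of -f: the same grid point
print(xs[i_max], xs[i_min])            # same point: 2.0
print(f(xs)[i_max], np.min(-f(xs)))    # optimal values: 1.0 and -1.0
```

This is why a minimization routine such as SciPy's `linprog` can solve maximization problems: negate the cost vector, solve, and negate the reported optimum.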
 
আয়কর পরিপত্র ২০২৩-২৪
আয়কর পরিপত্র ২০২৩-২৪ আয়কর পরিপত্র ২০২৩-২৪
আয়কর পরিপত্র ২০২৩-২৪
 
সর্বজনীন পেনশন স্কীম বিধিমালা সংক্রান্ত গেজেট (আগস্ট ২০২৩)
সর্বজনীন পেনশন স্কীম বিধিমালা সংক্রান্ত গেজেট (আগস্ট ২০২৩) সর্বজনীন পেনশন স্কীম বিধিমালা সংক্রান্ত গেজেট (আগস্ট ২০২৩)
সর্বজনীন পেনশন স্কীম বিধিমালা সংক্রান্ত গেজেট (আগস্ট ২০২৩)
 
VAT Deduction at Source
VAT Deduction at SourceVAT Deduction at Source
VAT Deduction at Source
 
জীবনকে কয়েক ধাপ এগিয়ে নিতে চাইলে
জীবনকে কয়েক ধাপ এগিয়ে নিতে চাইলে জীবনকে কয়েক ধাপ এগিয়ে নিতে চাইলে
জীবনকে কয়েক ধাপ এগিয়ে নিতে চাইলে
 
Jun-2023 এস.আর.ও. নং- ১৯৫-২০১-আইন-২০২৩ Customs Act 1969
Jun-2023 এস.আর.ও. নং- ১৯৫-২০১-আইন-২০২৩ Customs Act 1969Jun-2023 এস.আর.ও. নং- ১৯৫-২০১-আইন-২০২৩ Customs Act 1969
Jun-2023 এস.আর.ও. নং- ১৯৫-২০১-আইন-২০২৩ Customs Act 1969
 
TDS Tax Deducted at Source Rule 2023 এস.আর.ও. নং ২০৬-২১০-আইন-২০২৩
TDS Tax Deducted at Source Rule 2023 এস.আর.ও. নং ২০৬-২১০-আইন-২০২৩TDS Tax Deducted at Source Rule 2023 এস.আর.ও. নং ২০৬-২১০-আইন-২০২৩
TDS Tax Deducted at Source Rule 2023 এস.আর.ও. নং ২০৬-২১০-আইন-২০২৩
 
২০২৩ সনের ১৩ নং আইন ব্যাংক- কোম্পানি (সংশোধন) আইন, ২০২৩
২০২৩ সনের ১৩ নং আইন ব্যাংক- কোম্পানি (সংশোধন) আইন, ২০২৩২০২৩ সনের ১৩ নং আইন ব্যাংক- কোম্পানি (সংশোধন) আইন, ২০২৩
২০২৩ সনের ১৩ নং আইন ব্যাংক- কোম্পানি (সংশোধন) আইন, ২০২৩
 
২০২৩ সনের ২২ নং আইন বাংলাদেশ শিল্প-নকশা আইন, ২০২৩
২০২৩ সনের ২২ নং আইন বাংলাদেশ শিল্প-নকশা আইন, ২০২৩২০২৩ সনের ২২ নং আইন বাংলাদেশ শিল্প-নকশা আইন, ২০২৩
২০২৩ সনের ২২ নং আইন বাংলাদেশ শিল্প-নকশা আইন, ২০২৩
 
২০২৩ সনের ২০ নং আইন এজেন্সি টু ইনোভেট (এটুআই) আইন, ২০২৩
২০২৩ সনের ২০ নং আইন এজেন্সি টু ইনোভেট (এটুআই) আইন, ২০২৩২০২৩ সনের ২০ নং আইন এজেন্সি টু ইনোভেট (এটুআই) আইন, ২০২৩
২০২৩ সনের ২০ নং আইন এজেন্সি টু ইনোভেট (এটুআই) আইন, ২০২৩
 
২০২৩ সনের ১৯ নং আইন বাংলাদেশ সরকারি-বেসরকারি অংশীদারিত্ব (সংশোধন) আইন, ২০২৩
২০২৩ সনের ১৯ নং আইন বাংলাদেশ সরকারি-বেসরকারি অংশীদারিত্ব (সংশোধন) আইন, ২০২৩২০২৩ সনের ১৯ নং আইন বাংলাদেশ সরকারি-বেসরকারি অংশীদারিত্ব (সংশোধন) আইন, ২০২৩
২০২৩ সনের ১৯ নং আইন বাংলাদেশ সরকারি-বেসরকারি অংশীদারিত্ব (সংশোধন) আইন, ২০২৩
 
এস.আর.ও নং ২২৪আইন-আয়কর-২০২৩
এস.আর.ও নং ২২৪আইন-আয়কর-২০২৩এস.আর.ও নং ২২৪আইন-আয়কর-২০২৩
এস.আর.ও নং ২২৪আইন-আয়কর-২০২৩
 
Govt Employee Taxation Rules এস.আর.ও নং ২২৫-আইন-আয়কর-৭-২০২৩.pdf
Govt Employee Taxation Rules এস.আর.ও নং ২২৫-আইন-আয়কর-৭-২০২৩.pdfGovt Employee Taxation Rules এস.আর.ও নং ২২৫-আইন-আয়কর-৭-২০২৩.pdf
Govt Employee Taxation Rules এস.আর.ও নং ২২৫-আইন-আয়কর-৭-২০২৩.pdf
 
TDS Rules, 2023 উৎসে কর বিধিমালা, ২০২৩
TDS Rules, 2023 উৎসে কর বিধিমালা, ২০২৩ TDS Rules, 2023 উৎসে কর বিধিমালা, ২০২৩
TDS Rules, 2023 উৎসে কর বিধিমালা, ২০২৩
 
২০২৩-২৪ অর্থবছরে ভ্যাট হার
২০২৩-২৪ অর্থবছরে ভ্যাট হার২০২৩-২৪ অর্থবছরে ভ্যাট হার
২০২৩-২৪ অর্থবছরে ভ্যাট হার
 
TDS on ITA 2023
TDS on ITA 2023  TDS on ITA 2023
TDS on ITA 2023
 
Mapping of ITA 2023 with ITO 1984
Mapping of ITA 2023 with ITO 1984Mapping of ITA 2023 with ITO 1984
Mapping of ITA 2023 with ITO 1984
 

Último

Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 

Último (20)

How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptx
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 

Linear Programming Problem

z = \sum_{j=1}^{n} c_j x_j + k.

Example of a Linear Programming Problem:

Machine Type | Product 1 | Product 2 | Product 3 | Product 4 | Total Time Available Per Week
A            | 1.5       | 1         | 2.4       | 1         | 2000
B            | 1         | 5         | 1         | 3.5       | 8000
C            | 1.5       | 3         | 3.5       | 1         | 5000
Unit Profit  | 5.24      | 7.30      | 8.34      | 4.18      |
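Jumping ahead slightly, the data in this table can be encoded directly and any candidate weekly production plan checked against the machine-hour limits. A minimal illustrative sketch in Python (the plan tested is arbitrary, not the optimum):

```python
# Machine-shop example from the text: three machine-hour constraints and
# profit z = 5.24 x1 + 7.30 x2 + 8.34 x3 + 4.18 x4.
A = [[1.5, 1.0, 2.4, 1.0],    # machine A hours per unit of each product
     [1.0, 5.0, 1.0, 3.5],    # machine B hours per unit
     [1.5, 3.0, 3.5, 1.0]]    # machine C hours per unit
b = [2000, 8000, 5000]        # machine hours available per week
c = [5.24, 7.30, 8.34, 4.18]  # unit profits

def is_feasible(x):
    """True if plan x is non-negative and uses no more hours than available."""
    if any(xj < 0 for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def profit(x):
    """Weekly profit z of plan x."""
    return sum(cj * xj for cj, xj in zip(c, x))
```

For instance, producing 1000 units of product 1 and nothing else uses 1500, 1000 and 1500 hours on machines A, B and C, so it is feasible and earns 5240 per week.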
Suppose three types of machines A, B and C turn out four products 1, 2, 3 and 4. The table above shows (i) the hours required on each machine type to produce one unit of each product, (ii) the total machine hours available per week, and (iii) the per-unit profit on the sale of each product. Let x_j (j = 1, 2, 3, 4) be the number of units of product j produced per week. Then we have the following linear constraints:

1.5x_1 + x_2 + 2.4x_3 + x_4 \le 2000
x_1 + 5x_2 + x_3 + 3.5x_4 \le 8000
1.5x_1 + 3x_2 + 3.5x_3 + x_4 \le 5000

Since the amount of production cannot be negative, x_j \ge 0 (j = 1, 2, 3, 4). The weekly profit is given by

z = 5.24x_1 + 7.30x_2 + 8.34x_3 + 4.18x_4.

Now we wish to determine the values of the variables x_j for which all the constraints are satisfied and z is maximized.

Formulation of Linear Programming Problems

(1) Transportation Problem: Suppose given amounts of a uniform product are available at each of a number of origins, say warehouses, and we wish to send specified amounts of the product to each of a number of different destinations, say retail stores. We are interested in determining the minimum-cost routing from the warehouses to the retail stores. Let us define

m = the number of warehouses
n = the number of retail stores
x_{ij} = the amount of product shipped from the ith warehouse to the jth retail store.

Since negative amounts cannot be shipped, x_{ij} \ge 0 for all i, j. Further let

a_i = the total number of units of the product available for shipment at the ith warehouse (i = 1, 2, ..., m)
b_j = the number of units of the product required at the jth retail store.

Since we cannot supply more than the available amount of the product from the ith warehouse to the different retail stores, we have

x_{i1} + x_{i2} + ... + x_{in} \le a_i,   i = 1, 2, ..., m.

We must supply each retail store with the number of units desired, therefore

x_{1j} + x_{2j} + ... + x_{mj} = b_j,   j = 1, 2, ..., n.

The total amount received at any retail store is the sum of the amounts received from each warehouse.
The needs of the retail stores can be satisfied only if

\sum_{i=1}^{m} a_i \ge \sum_{j=1}^{n} b_j.
Let us define c_{ij} as the per-unit cost of shipping from the ith warehouse to the jth retail store. Then the total shipping cost is

z = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}.

Now we wish to determine the x_{ij} which minimize this cost subject to the constraints

x_{i1} + x_{i2} + ... + x_{in} \le a_i,   i = 1, 2, ..., m
x_{1j} + x_{2j} + ... + x_{mj} = b_j,   j = 1, 2, ..., n
x_{ij} \ge 0.

It is a linear programming problem in mn variables with (m + n) constraints.

(2) The Diet Problem: Suppose we are given the nutrient content of a number of different foods, the minimum daily requirement for each nutrient, and the quantity of each nutrient contained in one ounce of each food being considered. Since we know the cost per ounce of each food, the problem is to determine a diet that satisfies the minimum daily requirements of the nutrients and does so at minimum cost. Let us define

m = the number of nutrients
n = the number of foods
a_{ij} = the quantity (mg) of the ith nutrient per ounce (oz) of the jth food
b_i = the minimum daily requirement of the ith nutrient
c_j = the cost per ounce of the jth food
x_j = the quantity of the jth food to be purchased.

The total amount of the ith nutrient contained in all the purchased foods cannot be less than the minimum daily requirement, therefore

a_{i1}x_1 + a_{i2}x_2 + ... + a_{in}x_n = \sum_{j=1}^{n} a_{ij} x_j \ge b_i,   i = 1, 2, ..., m.

The total cost of all purchased foods is given by z = \sum_{j=1}^{n} c_j x_j. Now our problem is to minimize z = \sum_{j=1}^{n} c_j x_j subject to the constraints \sum_{j=1}^{n} a_{ij} x_j \ge b_i and x_j \ge 0. This is a linear programming problem.
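The diet formulation can be exercised on a tiny instance. A minimal sketch with made-up nutrient data (two nutrients, three foods; all numbers are illustrative, not from the text):

```python
# Diet-problem sketch with illustrative data.
# a[i][j] = mg of nutrient i per oz of food j; b[i] = minimum daily mg;
# cost[j] = cost per oz of food j; x[j] = oz of food j purchased.
a = [[2.0, 0.0, 4.0],
     [1.0, 3.0, 1.0]]
b = [8.0, 6.0]
cost = [0.10, 0.15, 0.25]

def meets_requirements(x):
    """True if every nutrient's minimum daily requirement is met by diet x."""
    return all(sum(a[i][j] * x[j] for j in range(len(x))) >= b[i]
               for i in range(len(b)))

def total_cost(x):
    """Total cost of the purchased foods in diet x."""
    return sum(cj * xj for cj, xj in zip(cost, x))
```

For example, the diet x = (4, 2, 0) supplies 8 mg and 10 mg of the two nutrients, meeting both minimums at a cost of 0.70; the LP would seek the cheapest such diet.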
Feasible Solution: Any set of values of the variables x_j which satisfies the constraints

\sum_{j=1}^{n} a_{ij} x_j \{\le, =, \ge\} b_i,

where the a_{ij} and b_i are constants, is called a solution to the linear programming problem, and any solution which also satisfies the non-negativity restrictions x_j \ge 0 is called a feasible solution.

Optimal Feasible Solution: In a linear programming problem there are, in general, infinitely many feasible solutions, and out of all of them we must find one which optimizes the objective function z = \sum_{j=1}^{n} c_j x_j; such a solution is called an optimal feasible solution. In other words, any solution which satisfies the following conditions is an optimal feasible solution:

(i) \sum_{j=1}^{n} a_{ij} x_j \{\le, =, \ge\} b_i
(ii) x_j \ge 0
(iii) it optimizes the objective function z = \sum_{j=1}^{n} c_j x_j.

Corner Point Feasible Solution: A feasible solution which does not lie on the line segment connecting any other two feasible solutions is called a corner point feasible solution.

Properties:
(i) If there is exactly one optimal solution of the linear programming problem, then it is a corner point feasible solution.
(ii) If there is more than one optimal solution of the given problem, then at least two of them are adjacent corner point feasible solutions.
(iii) In a linear programming problem there are only a finite number of corner points.
(iv) If a corner point feasible solution is better than all of its adjacent corner point feasible solutions, then it is better than all other feasible solutions.

Methods for Solving Linear Programming Problems
(1) Graphical Method
(2) Algebraic Method
(3) Simplex Method

Graphical Method: The graphical method for solving a linear programming problem involves two basic steps.

Step 1: First we determine the feasible solution space. We represent the values of the variable x_1 on the X axis and the corresponding values of the variable x_2 on the Y axis. Any point lying in the first quadrant satisfies x_1 \ge 0 and x_2 \ge 0.
The easiest way of accounting for the remaining constraints is to replace the inequalities with equations and plot the resulting straight lines. An example is worked below.
Next we consider the effect of an inequality. All an inequality does is divide the (x_1, x_2)-plane into two half-planes, one on each side of the plotted line: one side satisfies the inequality and the other does not. For a \le constraint of this kind, any point lying on or below the line satisfies the inequality. A convenient procedure to determine the feasible side is to test the origin (0, 0) as a reference point.

Step 2: In the second step we determine the optimal solution.

Problem: Find the non-negative values of the variables x_1 and x_2 which satisfy the constraints

3x_1 + 5x_2 \le 15
5x_1 + 2x_2 \le 10

and which maximize the objective function z = 5x_1 + 3x_2.

Solution: We introduce an x_1 x_2 coordinate system. Any point lying in the first quadrant has x_1, x_2 \ge 0. Now we draw the straight lines 3x_1 + 5x_2 = 15 and 5x_1 + 2x_2 = 10 on the graph. Any point lying on or below the line 3x_1 + 5x_2 = 15 satisfies 3x_1 + 5x_2 \le 15; similarly, any point lying on or below the line 5x_1 + 2x_2 = 10 satisfies 5x_1 + 2x_2 \le 10.

[Figure: the lines 3x_1 + 5x_2 = 15 and 5x_1 + 2x_2 = 10 drawn in the first quadrant, intersecting at A(1.053, 2.368); the feasible region has corner points O(0, 0), B(0, 3), A(1.053, 2.368) and C(2, 0).]

The region OBAC contains the set of points satisfying both constraints and the non-negativity restrictions, so the points in this region are the feasible solutions. Now we wish to find the line of the family z = 5x_1 + 3x_2 with the largest value of z which has at least one point in common with the region of feasible solutions. As the graph shows, the values of x_1 and x_2 at the point A give the required solution: approximately x_1 = 1.053 and x_2 = 2.368. From the objective function we get the maximum value of z, which is

z = 5(1.053) + 3(2.368) = 12.37.
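The coordinates of the optimal point A can be recovered exactly by solving the two boundary equations simultaneously. A small sketch using Cramer's rule (helper name is mine):

```python
# Solve 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10 simultaneously, then evaluate
# the objective z = 5x1 + 3x2 at the intersection point A.
def intersect(a1, b1, c1, a2, b2, c2):
    """Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2 (Cramer's rule)."""
    det = a1 * b2 - a2 * b1          # assumed non-zero: lines not parallel
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

x1, x2 = intersect(3, 5, 15, 5, 2, 10)   # the point A
z = 5 * x1 + 3 * x2
```

This gives x_1 = 20/19 ≈ 1.053, x_2 = 45/19 ≈ 2.368 and z = 235/19 ≈ 12.37, matching the rounded values read off the graph.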
Algebraic Method: In LP problems the constraints are generally not all equations. Since equations are easier to handle than inequalities, a simple conversion is used to turn the inequalities into equalities.

Consider first the constraints with "less than or equal" signs (\le). Any constraint of this category can be written as

a_{h1}x_1 + a_{h2}x_2 + ... + a_{hn}x_n \le b_h.   (1)

Let us introduce a new variable x_{n+h} \ge 0 defined by

x_{n+h} = b_h - \sum_{j=1}^{n} a_{hj} x_j \ge 0,

which converts the inequality into the equality

a_{h1}x_1 + a_{h2}x_2 + ... + a_{hn}x_n + x_{n+h} = b_h.   (2)

The new variable x_{n+h} is the difference between the amount of the resource available and the amount actually used; it is called a slack variable.

Next we consider the constraints with "greater than or equal" signs (\ge). A typical inequality in this set can be written as

a_{k1}x_1 + a_{k2}x_2 + ... + a_{kn}x_n \ge b_k.   (3)

Introducing a new variable x_{n+k} \ge 0, the inequality can be written as the equality

a_{k1}x_1 + a_{k2}x_2 + ... + a_{kn}x_n - x_{n+k} = b_k.   (4)

Here x_{n+k} is called a surplus variable, because the difference between the resources used and the minimum amount required is a surplus.

Therefore, in the algebraic method for solving a linear programming problem, the LP problem with the original constraints can be transformed into an LP problem whose constraints form a system of simultaneous linear equations, by using slack and surplus variables.

Example: Consider the LP problem

Min  -x_1 - 3x_2
s.t.  x_1 - 2x_2 \le 4
      -x_1 + x_2 \ge 3
      x_1, x_2 \ge 0.

Introducing two new variables x_3 and x_4, the problem can be written as

Min  -x_1 - 3x_2 + 0x_3 + 0x_4
s.t.  x_1 - 2x_2 + x_3 = 4
      -x_1 + x_2 - x_4 = 3
      x_1, x_2, x_3, x_4 \ge 0.

Here x_3 is a slack variable and x_4 is a surplus variable.
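The conversion rule can be sketched mechanically: a \le row gets a +1 coefficient for its new slack variable, a \ge row a -1 coefficient for its surplus variable. A minimal illustrative helper (the function name is mine; the second constraint of the example is taken as the \ge type, as its labelling with a surplus variable suggests):

```python
# Convert an inequality constraint into an equation row by appending a
# slack (+1) or surplus (-1) variable, as described in the text.
def to_equation(coeffs, sense, n_total, new_var_index):
    """Coefficient row of the resulting equation over n_total variables."""
    row = list(coeffs) + [0.0] * (n_total - len(coeffs))
    row[new_var_index] = 1.0 if sense == "<=" else -1.0
    return row

# The example from the text: x1 - 2x2 <= 4 and -x1 + x2 >= 3,
# introducing x3 (slack) and x4 (surplus) over 4 variables in total.
eq1 = to_equation([1, -2], "<=", 4, 2)   #  x1 - 2x2 + x3      = 4
eq2 = to_equation([-1, 1], ">=", 4, 3)   # -x1 +  x2      - x4 = 3
```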
Effect of Introducing Slack and Surplus Variables: Suppose we have a linear programming problem P_1:

Optimize  Z = c_1x_1 + c_2x_2 + ... + c_nx_n   (1)

subject to the conditions

a_{h1}x_1 + a_{h2}x_2 + ... + a_{hn}x_n \{\le, =, \ge\} b_h,   (2)

where one and only one of the signs in the braces holds for each constraint. The problem is converted into another linear programming problem P_2:

Optimize  Z = c_1x_1 + c_2x_2 + ... + c_nx_n + 0 \cdot x_{n+1} + ... + 0 \cdot x_{n+m}   (3)
subject to the conditions

AX = b,  that is,  a_{h1}x_1 + a_{h2}x_2 + ... + a_{hn}x_n \pm x_{n+h} = b_h,   (4)

where A = (a_{ij}) and a_j (j = 1, 2, ..., n+m) is the jth column of A.

We claim that optimizing (3) subject to (4) with x_j \ge 0 is completely equivalent to optimizing (1) subject to (2) with x_j \ge 0. To prove this, we first note that if we have any feasible solution to the original constraints, then our method of introducing slack or surplus variables yields a set of non-negative slack or surplus variables such that equation (4) is satisfied with all variables non-negative. Conversely, if we have a feasible solution to (4) with all variables non-negative, then its first n components yield a feasible solution to (2). Thus there is a one-to-one correspondence between the feasible solutions of the original set of constraints and the feasible solutions of the system of simultaneous linear equations.

Now if X* = (x_1*, x_2*, ..., x_{n+m}*) \ge 0 is an optimal feasible solution to P_2, then the first n components of X*, that is (x_1*, x_2*, ..., x_n*), form an optimal solution to P_1; conversely, by annexing the slack and surplus variables to any optimal solution to P_1 we obtain an optimal solution to P_2. Therefore we may conclude that if slack and surplus variables having zero cost are introduced to convert the original set of constraints into a system of simultaneous linear equations, the resulting problem is equivalent to the original problem.

Existence of Extreme Basic Feasible Solutions (reduction of any feasible solution to a basic feasible solution): Let us consider a linear programming problem with m linear equations in n unknowns,

AX = b,  X \ge 0,

which has at least one feasible solution. Without loss of generality suppose that Rank(A) = m, and let X = (x_1, x_2, ..., x_n) be a feasible solution. Further suppose that x_1, x_2, ..., x_p > 0 and x_{p+1}, x_{p+2}, ..., x_n = 0.
Let a_1, a_2, ..., a_p be the respective columns of A corresponding to the variables x_1, x_2, ..., x_p. If a_1, a_2, ..., a_p are linearly independent, then X is a basic feasible solution; in this case p \le m. If p = m, then by the theory of systems of linear equations the solution is a non-degenerate basic feasible solution. If p < m, the system has a degenerate basic feasible solution with (m - p) of the basic variables equal to zero.

If a_1, a_2, ..., a_p are linearly dependent, then there exist scalars \lambda_1, \lambda_2, ..., \lambda_p, with at least one \lambda_j positive,
  • 8. Page | 8 j such that 1 0 p j j j a    Considering the following point X with j 0 j x ; 1,2,...., x = 0; 1, 2,..... j j p j p p n         where j k o j=1,2,....,p k x x =Minimum ; 0 = >0j j             If j 0  , then jx >0 , since both jx and 0 are positive. If 0j  , then by the definition of 0 we have j o j 0 x x j j        . Thus jx >0 Furthermore k k k 0 k k x x = x - =x - =0k k     . Hence x has at most (p-1) positive components. Also, n j j j=1 n j j 0 j=1 n n j j 0 j j=1 j=1 Ax = a x = a (x ) = a x a = b j j             Thus we have a constructed feasible solution x since Ax =b , x 0  with at most (p-1) positive components. If the columns of A corresponding to these positive components are linearly independent then x is basic feasible solution. Otherwise the process is repeated. Eventually a basic feasible solution (BFS) will be obtained. Example: Consider the following inequalities 1 2 2 1 2 x +x 6 x 3 x , x 0    Find basic solution, BFS and extreme points. Solution. By introducing slack variables 3 4x and x , the problem is put into the following standard format 1 2 3 2 4 1 2 3 4 x +x x =6 x x =3 x , x ,x ,x 0    So, the constraint matrix A is given by; 1 1 1 0 A = 0 1 0 1       = 1 2 3 4(a , a , a , a ) , 6 b= 3       Rank(A) = 2
Therefore the basic solutions correspond to choosing a 2 \times 2 basis B. The following are the possible ways of extracting B from A:

(i) B = (a_1, a_2) = [1 1; 0 1],  B^{-1} = [1 -1; 0 1],  x_B = B^{-1}b = [3; 3], so x = (3, 3, 0, 0).

(ii) B = (a_1, a_3) = [1 1; 0 0]. Since |B| = 0, it is not possible to find B^{-1}, and hence no x_B exists.

(iii) B = (a_1, a_4) = [1 0; 0 1],  B^{-1} = [1 0; 0 1],  x_B = (x_1, x_4) = B^{-1}b = [6; 3], so x = (6, 0, 0, 3).

(iv) B = (a_2, a_3) = [1 1; 1 0],  B^{-1} = [0 1; 1 -1],  x_B = (x_2, x_3) = B^{-1}b = [3; 3], so x = (0, 3, 3, 0).

(v) B = (a_2, a_4) = [1 0; 1 1],  B^{-1} = [1 0; -1 1],  x_B = (x_2, x_4) = B^{-1}b = [6; -3], so x = (0, 6, 0, -3).

(vi) B = (a_3, a_4) = [1 0; 0 1],  B^{-1} = [1 0; 0 1],  x_B = (x_3, x_4) = B^{-1}b = [6; 3], so x = (0, 0, 6, 3).

Hence we have the following five basic solutions:

x^1 = (3, 3, 0, 0),  x^2 = (6, 0, 0, 3),  x^3 = (0, 3, 3, 0),  x^4 = (0, 6, 0, -3),  x^5 = (0, 0, 6, 3),

of which all except x^4 are basic feasible solutions, since x^4 violates the non-negativity restrictions. The basic feasible solutions belong to a four-dimensional space; projecting them into the (x_1, x_2) space gives rise to the four points

(3, 3), (6, 0), (0, 3), (0, 0).

From the graphical representation, the extreme points of the feasible region are (0, 0), (0, 3), (3, 3) and (6, 0), which are the same as the BFSs. Therefore the extreme points are precisely the basic feasible solutions. The number of BFSs is 4, which is less than the 6 possible choices of basis.
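The basis enumeration above can be reproduced programmatically. A minimal sketch that tries every pair of columns of A, skips singular bases, and classifies the resulting basic solutions (2x2 inverses are done via Cramer's rule):

```python
from itertools import combinations

# Columns a1..a4 of A for the system x1 + x2 + x3 = 6, x2 + x4 = 3.
cols = [(1, 0), (1, 1), (1, 0), (0, 1)]
b = (6, 3)

basic, feasible = [], []
for i, j in combinations(range(4), 2):
    (a, c), (p, q) = cols[i], cols[j]     # basis B = [a p; c q]
    det = a * q - p * c
    if det == 0:                          # singular basis, e.g. (a1, a3)
        continue
    xi = (b[0] * q - p * b[1]) / det      # Cramer's rule for B x_B = b
    xj = (a * b[1] - b[0] * c) / det
    x = [0.0] * 4
    x[i], x[j] = xi, xj                   # non-basic variables stay 0
    basic.append(x)
    if xi >= 0 and xj >= 0:
        feasible.append(x)
```

Running this yields the five basic solutions of the text, of which four are feasible; the infeasible one is (0, 6, 0, -3).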
The Simplex Method

General Mathematical Formulation of Linear Programming: Let us define the objective function to be optimized,

z = c_1x_1 + c_2x_2 + ... + c_nx_n.

We have to find the values of the decision variables x_1, x_2, ..., x_n subject to the following m constraints:
a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n \{\le, =, \ge\} b_1
a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n \{\le, =, \ge\} b_2
...
a_{m1}x_1 + a_{m2}x_2 + ... + a_{mn}x_n \{\le, =, \ge\} b_m

and x_j \ge 0, j = 1, 2, ..., n.

The above formulation can be written in the following compact form using the summation sign:

Optimize (maximize or minimize)  z = \sum_{j=1}^{n} c_j x_j

subject to the conditions

\sum_{j=1}^{n} a_{ij} x_j \{\le, =, \ge\} b_i,  i = 1, 2, ..., m
x_j \ge 0,  j = 1, 2, ..., n.

The constants c_j (j = 1, 2, ..., n) are called the cost coefficients, the constants b_i (i = 1, 2, ..., m) are called the stipulations, and the constants a_{ij} (i = 1, 2, ..., m; j = 1, 2, ..., n) are called the structural coefficients.

In matrix notation the above equations can be written as

Optimize  z = CX
subject to  AX \{\le, =, \ge\} B,

where C = (c_1, c_2, ..., c_n) is a 1 \times n row vector, X = (x_1, x_2, ..., x_n)^T is an n \times 1 column vector, A = (a_{ij}) is the m \times n coefficient matrix, and B = (b_1, b_2, ..., b_m)^T is an m \times 1 column vector. A is called the coefficient matrix, X the decision vector, B the requirement vector and C the cost vector of the linear programming problem.

The Standard Form of an LP Problem: The use of basic solutions to solve general LP models requires putting the problem in standard form. The following are the characteristics of the standard form:

(i) All the constraints are expressed in the form of equations, except the non-negativity restrictions on the decision variables, which remain inequalities.
(ii) The right-hand side of each constraint equation is non-negative.
(iii) All the decision variables are non-negative.
(iv) The objective function may be of the maximization or the minimization type.

Conversion of Inequalities into Equations: An inequality constraint of the type (\le, \ge) can be converted into an equation by adding a variable to, or subtracting a variable from, the left-hand side of the constraint. These new variables are called slack variables, or simply slacks. They are added if the constraints are of the \le type and subtracted if the constraints are of the \ge type. Since in the \ge case the subtracted variable represents the surplus of the left-hand side over the right-hand side, it is commonly known as a surplus variable, and is in fact a negative slack. For example,

x_1 + x_2 \le b_1  is equivalent to  x_1 + x_2 + s_1 = b_1,

and

x_1 + x_2 \ge b_2  is equivalent to  x_1 + x_2 - s_2 = b_2.

The general LP problem discussed above can thus be expressed in the following standard form:

Optimize  z = \sum_{j=1}^{n} c_j x_j

subject to the conditions

\sum_{j=1}^{n} a_{ij} x_j \pm s_i = b_i,  i = 1, 2, ..., m
x_j \ge 0,  j = 1, 2, ..., n
s_i \ge 0,  i = 1, 2, ..., m.

In matrix notation, the general LP problem can be written in the standard form

Optimize  z = CX
subject to  AX \pm S = B,  X \ge 0,  S \ge 0.

Example: Express the following LP problem in standard form.

Maximize  z = 3x_1 + 2x_2
subject to the conditions
2x_1 + x_2 \ge 2
3x_1 + 4x_2 \le 12
x_1, x_2 \ge 0.

Solution: Introducing a surplus variable s_1 and a slack variable s_2, the problem can be expressed in the standard form given below.

Maximize  z = 3x_1 + 2x_2
subject to the conditions:
2x_1 + x_2 - s_1 = 2
3x_1 + 4x_2 + s_2 = 12
x_1, x_2, s_1, s_2 \ge 0.

Conversion of an Unrestricted Variable into Non-negative Variables: An unrestricted variable x_j can be expressed in terms of two non-negative variables by using the substitution

x_j = x_j^+ - x_j^-,  x_j^+, x_j^- \ge 0.

For example, if x_j = -10, then x_j^+ = 0 and x_j^- = 10; if x_j = 10, then x_j^+ = 10 and x_j^- = 0. The substitution is carried out in all the constraints and in the objective function. After solving the problem in terms of x_j^+ and x_j^-, the value of the original variable x_j is determined through back substitution.

Example: Express the following linear programming problem in standard form.

Maximize  z = 3x_1 + 2x_2 + 5x_3
subject to
2x_1 - 3x_2 \le 3
x_1 + 2x_2 + 3x_3 \ge 5
3x_1 + 2x_3 \le 2.

Solution: Here x_1 and x_2 are restricted to be non-negative while x_3 is unrestricted. Let us express x_3 as x_3 = x_3^+ - x_3^-, where x_3^+ \ge 0 and x_3^- \ge 0. Now, introducing slack and surplus variables, the problem can be written in the standard form:

Maximize  z = 3x_1 + 2x_2 + 5(x_3^+ - x_3^-)
subject to the conditions

2x_1 - 3x_2 + s_1 = 3
x_1 + 2x_2 + 3x_3^+ - 3x_3^- - s_2 = 5
3x_1 + 2x_3^+ - 2x_3^- + s_3 = 2
x_1, x_2, x_3^+, x_3^-, s_1, s_2, s_3 \ge 0.

Conversion of Maximization to Minimization: The maximization of a function f(x_1, x_2, ..., x_n) is equivalent to the minimization of -f(x_1, x_2, ..., x_n), in the sense that both problems yield the same optimal values of x_1, x_2, ..., x_n.
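The two substitutions on this page can be illustrated with a tiny sketch (the helper names are mine, not from the text):

```python
# Sketch of the two conversions above: splitting an unrestricted variable
# as x = x_plus - x_minus with both parts non-negative, and replacing
# maximization of f by minimization of -f over the same feasible set.
def split_unrestricted(x):
    """Return (x_plus, x_minus), both >= 0, with x = x_plus - x_minus."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

def argmax_via_min(f, candidates):
    """Maximize f over a finite candidate set by minimizing -f instead."""
    return min(candidates, key=lambda v: -f(v))
```

For instance, split_unrestricted(-10) gives (0, 10) and split_unrestricted(10) gives (10, 0), matching the worked values in the text; and minimizing -f over any finite set picks out exactly the maximizer of f.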