Direct Methods for the Solution of Linear Equation Systems. Lizeth Paola Barrero Riaño. Numerical Methods, Industrial University of Santander
BASIC FUNDAMENTALS
Matrix
Symmetric Matrix
Transpose Matrix
Determinant
Upper Triangular Matrix
Lower Triangular Matrix
Banded Matrix
Augmented Matrix
Matrix Multiplication
Matrix A matrix consists of a rectangular array of elements represented by a single symbol. As depicted in the figure, [A] is the shorthand notation for the matrix, and a_ij designates an individual element of the matrix. A horizontal set of elements is called a row (i) and a vertical set is called a column (j). (In the figure, the highlighted element lies in row 2, column 3.)
Symmetric Matrix It is a square matrix in which the elements are symmetric about the main diagonal (a_ij = a_ji). Example: scalar, diagonal, and identity matrices are symmetric matrices. If A is a symmetric matrix, then: the product A·Aᵀ is defined and is a symmetric matrix; the sum of symmetric matrices is a symmetric matrix; the product of two symmetric matrices is a symmetric matrix if the matrices commute.
Transpose Matrix Let A = (a_ij) be any matrix of order m×n; the matrix B = (b_ij) of order n×m is the transpose of A if the rows of A are the columns of B. This operation is usually denoted by Aᵀ = A' = B. Example: Properties:
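The slides contain no code, but the definition above is easy to sketch in plain Python, representing a matrix as a list of rows (the function name `transpose` is ours, not from the slides):

```python
# Transpose of an m x n matrix stored as a list of rows:
# row i of A becomes column i of the result.
def transpose(a):
    return [[a[i][j] for i in range(len(a))] for j in range(len(a[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
At = transpose(A)          # 3 x 2
print(At)                  # [[1, 4], [2, 5], [3, 6]]
print(transpose(At) == A)  # True: (A^T)^T = A
```

The last line checks one of the transpose properties the slide alludes to: transposing twice returns the original matrix.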
Determinant Given a square matrix A of size n, its determinant is defined as the sum of the products of the elements of any chosen line of the matrix (row or column) by their corresponding cofactors. Example:
Determinant Properties If a matrix has a line (row or column) of zeros, the determinant is zero. If a matrix has two equal or proportional lines, the determinant is null. If we permute two parallel lines of a square matrix, its determinant changes sign. If we multiply all elements of one line of a determinant by a number, the determinant is multiplied by that number. If a multiple of one line is added to another line, the determinant does not change. The determinant of a matrix is equal to that of its transpose: det(A) = det(Aᵀ). If A has an inverse matrix A⁻¹, it is verified that det(A⁻¹) = 1/det(A).
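The cofactor-expansion definition above can be sketched directly in Python (a naive recursive version, fine for the small matrices in these slides; `det` is our own illustrative name):

```python
# Determinant by cofactor expansion along the first row.
def det(a):
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign (-1)^j.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

A  = [[2, 1], [4, 3]]
At = [[2, 4], [1, 3]]
print(det(A))    # 2
print(det(At))   # 2, illustrating det(A) = det(A^T)
```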
Upper and Lower Triangular Matrix An upper triangular matrix is a square matrix in which all the entries below the main diagonal are zero. A lower triangular matrix is a square matrix in which all the entries above the main diagonal are zero. Example:
Banded Matrix A band matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side. Example: diagonal, tridiagonal, pentadiagonal.
Augmented Matrix The extended or augmented matrix is the matrix formed by the coefficient matrix together with the vector of independent terms, which are usually separated by a dotted line. Example:
Matrix Multiplication To define A·B it is necessary that the number of columns of the first matrix coincide with the number of rows of the second. The order of the product is given by the number of rows of the first matrix by the number of columns of the second. That is, if A is of order m×n and B is of order n×p, then C = A·B is of order m×p. Each element c_ij of the product is the product of the i-th row of A by the j-th column of B, namely c_ij = a_i1·b_1j + a_i2·b_2j + … + a_in·b_nj.
Matrix Multiplication Graphically: Example:
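The row-by-column rule can be sketched in Python as follows (a didactic sketch; `matmul` is our illustrative name, and the dimension check mirrors the compatibility condition above):

```python
# C = A . B with c_ij = sum_k a_ik * b_kj; A is m x n, B is n x p, C is m x p.
def matmul(a, b):
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "columns of A must equal rows of B"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]            # 2 x 2
B = [[5, 6, 7],
     [8, 9, 10]]        # 2 x 3
print(matmul(A, B))     # [[21, 24, 27], [47, 54, 61]]  (a 2 x 3 result)
```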
Solution of Linear Algebraic Equations Linear algebra is one of the cornerstones of modern computational mathematics. Almost all numerical schemes, such as the finite element method and the finite difference method, are in fact techniques that transform, assemble, reduce, rearrange, and/or approximate the differential, integral, or other types of equations to systems of linear algebraic equations. A system of linear algebraic equations can be expressed as
Solution of Linear Algebraic Equations Or, in matrix form: AX = B
In this part, we deal with the case of determining the values x1, x2, …, xn that simultaneously satisfy a set of equations. Solving a system with coefficient matrix A is equivalent to finding the intersection point(s) of all m surfaces (lines) in an n-dimensional space. If all m surfaces happen to pass through a single point, then the solution is unique.
Small Systems of Linear Equations Graphical Method Cramer's Rule The Elimination of Unknowns
1. Graphical Method When solving a system with two linear equations in two variables, we are looking for the point where the two lines cross.  This can be determined by graphing each line on the same coordinate system and estimating the point of intersection. When two straight lines are graphed, one of three possibilities may result:
Graphical Method When two lines cross in exactly one point, the system is consistent and independent and the solution is the one ordered pair where the two lines cross. The coordinates of this ordered pair can be estimated from the graph of the two lines: Case 1 Independent system: one solution point
Graphical Method This graph shows two distinct lines that are parallel. Since parallel lines never cross, then there can be no intersection; that is, for a system of equations that graphs as parallel lines, there can be no solution. This is called an "inconsistent" system of equations, and it has no solution. Case 2 Inconsistent system: no solution and no intersection point
Graphical Method This graph appears to show only one line. Actually, it is the same line drawn twice. These "two" lines, really being the same line, "intersect" at every point along their length. This is called a "dependent" system, and the "solution" is the whole line. Case 3 Dependent system: the solution is the whole line
Graphical Method ADVANTAGES: The graphical method is good because it clearly illustrates the principle involved. DISADVANTAGES: The results are only approximate, since they are read off a graph. It cannot be used when we have more than two variables in the equations. For instance, if the lines cross at a shallow angle, it can be just about impossible to tell where the lines cross:
Graphical Method Example Solve the following system by graphing: 2x – 3y = –2 and 4x + y = 24. First, we must solve each equation for y, so we can graph easily: 2x – 3y = –2 gives 2x + 2 = 3y, that is, y = (2/3)x + 2/3; and 4x + y = 24 gives y = –4x + 24.
Graphical Method The second line is easy to graph using just the slope and intercept, but a T-chart is needed for the first line. Solution: (x, y) = (5, 4)
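A solution read off a graph should be checked algebraically. As a sketch, substituting the estimated point back into both equations, and computing the exact intersection from the two slope-intercept forms derived above:

```python
# Verify the graphical estimate (5, 4) for 2x - 3y = -2 and 4x + y = 24.
x, y = 5, 4
print(2 * x - 3 * y == -2)   # True
print(4 * x + y == 24)       # True

# Exact intersection: set (2/3)x + 2/3 equal to -4x + 24 and solve for x.
xi = (24 - 2 / 3) / (2 / 3 + 4)
yi = -4 * xi + 24
print(round(xi, 10), round(yi, 10))   # 5.0 4.0
```

This also illustrates the method's weakness: the graph only suggests (5, 4); the substitution confirms it.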
Cramer’s Rule Cramer’s rule is another technique that is best suited to small numbers of equations. This rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants: the denominator is D, and the numerator is obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, …, bn. For example, x1 would be computed as
Example Use Cramer’s Rule to solve the system: 5x – 4y = 2, 6x – 5y = 1. Solution: We begin by setting up and evaluating the three determinants: D = (5)(–5) – (–4)(6) = –1, Dx = (2)(–5) – (–4)(1) = –6, Dy = (5)(1) – (2)(6) = –7.
Example From Cramer’s Rule, we have x = Dx/D = 6 and y = Dy/D = 7. The solution is (6, 7). Cramer’s Rule does not apply if D = 0. When D = 0, the system is either inconsistent or dependent, and another method must be used to solve it.
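For a 2×2 system the rule is short enough to sketch directly (the function `cramer_2x2` is our own illustrative helper, with the D = 0 case handled as the slide warns):

```python
# Cramer's rule for  a1*x + b1*y = c1,  a2*x + b2*y = c2.
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    d = a1 * b2 - b1 * a2            # coefficient determinant D
    if d == 0:
        raise ValueError("D = 0: system is inconsistent or dependent")
    dx = c1 * b2 - b1 * c2           # x-column replaced by the constants
    dy = a1 * c2 - c1 * a2           # y-column replaced by the constants
    return dx / d, dy / d

# The slide's system: 5x - 4y = 2, 6x - 5y = 1.
print(cramer_2x2(5, -4, 2, 6, -5, 1))   # (6.0, 7.0)
```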
The Elimination of Unknowns The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations. The basic strategy is to multiply the equations by constants so that one of the unknowns is eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other variable. For example, these equations might be multiplied by a21 and a11 to give
The Elimination of Unknowns Subtracting Eq. 3 from Eq. 4 will therefore eliminate the x1 term from the equations to yield a single equation, which can be solved for x2 = (a11·b2 – a21·b1)/(a11·a22 – a21·a12). This value can then be substituted into Eq. 1, which can be solved for x1 = (b1 – a12·x2)/a11.
The Elimination of Unknowns Notice that these equations follow directly from Cramer’s rule. EXAMPLE Use the elimination of unknowns to solve,
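The multiply-subtract-substitute procedure above can be sketched for a 2×2 system as follows (the slide's own example system is not reproduced in this transcript, so the coefficients below are illustrative; the sketch assumes a11 ≠ 0 for the back-substitution step):

```python
# Elimination of unknowns for  a11*x1 + a12*x2 = b1,  a21*x1 + a22*x2 = b2.
def eliminate_2x2(a11, a12, b1, a21, a22, b2):
    d = a11 * a22 - a21 * a12        # vanishes when no unique solution exists
    if d == 0:
        raise ValueError("no unique solution")
    x2 = (a11 * b2 - a21 * b1) / d   # x1 eliminated by the multiply-subtract step
    x1 = (b1 - a12 * x2) / a11       # substitute back into Eq. 1 (assumes a11 != 0)
    return x1, x2

# Illustrative system: 3x1 + 2x2 = 18, -x1 + 2x2 = 2.
print(eliminate_2x2(3, 2, 18, -1, 2, 2))   # (4.0, 3.0)
```

Note that the formula for x2 is exactly the Cramer's-rule ratio of determinants, as the slide points out.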
Gaussian Elimination Gaussian Elimination is considered the workhorse of computational science for the solution of a system of linear equations. Karl Friedrich Gauss, a great 19th century mathematician, suggested this elimination method as part of his proof of a particular theorem. Computational scientists use this “proof” as a direct computational method. Gaussian Elimination is a systematic application of elementary row operations to a system of linear equations in order to convert the system to upper triangular form. Once the coefficient matrix is in upper triangular form, we use back substitution to find a solution.
Gaussian Elimination The general procedure for Gaussian Elimination can be summarized in the following steps: Write the augmented matrix for the system of linear equations. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form. If a zero is located on the diagonal, switch rows until a nonzero is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or no solution. Use back substitution to find the solution of the problem.
Gaussian Elimination Example 1. Write the augmented matrix for the system of linear equations. 2. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form. Interchange row 1 and row 2.
Gaussian Elimination Notice that the original coefficient matrix had a “0” on the diagonal in row 1. Since we needed to use multiples of that diagonal element to eliminate the elements below it, we switched two rows in order to move a nonzero element into that position. We can use the same technique when a “0” appears on the diagonal as a result of calculation. If it is not possible to move a nonzero onto the diagonal by interchanging rows, then the system has either infinitely many solutions or no solution, and the coefficient matrix is said to be singular. Since all of the nonzero elements are now located in the “upper triangle” of the matrix, we have completed the first phase of solving a system of linear equations using Gaussian Elimination.
Gaussian Elimination The second and final phase of Gaussian Elimination is back substitution. During this phase, we solve for the values of the unknowns, working our way up from the bottom row. 3. Use back substitution to find the solution of the problem. The last row in the augmented matrix represents the equation:
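The three steps above (augment, row-reduce with row interchange on zero pivots, back-substitute) can be sketched in Python. This is a didactic sketch, not a production solver: it swaps rows only when a pivot is exactly zero, as in the slides, rather than doing full partial pivoting. The test system below, whose first diagonal entry is 0, is our own illustration, since the slide's matrices are not reproduced in this transcript:

```python
# Gaussian elimination on the augmented matrix [A|b], then back substitution.
def gauss_solve(a, b):
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # step 1: augmented matrix
    for k in range(n):
        if m[k][k] == 0:                 # zero on the diagonal: switch rows
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    break
            else:
                raise ValueError("singular: infinitely many or no solutions")
        for i in range(k + 1, n):        # step 2: eliminate below the pivot
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # step 3: back substitution, bottom up
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

A = [[0, 2, 1],      # note the 0 pivot: rows 1 and 2 get interchanged
     [1, 1, 1],
     [2, 1, 3]]
b = [4, 6, 13]
print(gauss_solve(A, b))   # [3.0, 1.0, 2.0]
```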

Más contenido relacionado

La actualidad más candente

Matrix inverse
Matrix inverseMatrix inverse
Matrix inversemstf mstf
 
9.1 Systems of Linear Equations
9.1 Systems of Linear Equations9.1 Systems of Linear Equations
9.1 Systems of Linear Equationssmiller5
 
Abstract algebra & its applications
Abstract algebra & its applicationsAbstract algebra & its applications
Abstract algebra & its applicationsdrselvarani
 
NUMERICAL METHODS -Iterative methods(indirect method)
NUMERICAL METHODS -Iterative methods(indirect method)NUMERICAL METHODS -Iterative methods(indirect method)
NUMERICAL METHODS -Iterative methods(indirect method)krishnapriya R
 
Higher order ODE with applications
Higher order ODE with applicationsHigher order ODE with applications
Higher order ODE with applicationsPratik Gadhiya
 
systems of linear equations & matrices
systems of linear equations & matricessystems of linear equations & matrices
systems of linear equations & matricesStudent
 
Divergence and curl
Divergence and curlDivergence and curl
Divergence and curlAnimesh5599
 
Quadratic equation slideshare
Quadratic equation slideshareQuadratic equation slideshare
Quadratic equation slideshareAnusharani771
 
Differential equations
Differential equationsDifferential equations
Differential equationsSeyid Kadher
 
Multiple Choice Questions - Numerical Methods
Multiple Choice Questions - Numerical MethodsMultiple Choice Questions - Numerical Methods
Multiple Choice Questions - Numerical MethodsMeenakshisundaram N
 
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical MethodsGauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical MethodsJanki Shah
 
Applications Of Laplace Transforms
Applications Of Laplace TransformsApplications Of Laplace Transforms
Applications Of Laplace TransformsKetaki_Pattani
 
Regula falsi method
Regula falsi methodRegula falsi method
Regula falsi methodandrushow
 
Lagrange equation and its application
Lagrange equation and its applicationLagrange equation and its application
Lagrange equation and its applicationMahmudul Alam
 
Eigen values and eigen vectors engineering
Eigen values and eigen vectors engineeringEigen values and eigen vectors engineering
Eigen values and eigen vectors engineeringshubham211
 
Curve fitting - Lecture Notes
Curve fitting - Lecture NotesCurve fitting - Lecture Notes
Curve fitting - Lecture NotesDr. Nirav Vyas
 
Eigen values and eigenvectors
Eigen values and eigenvectorsEigen values and eigenvectors
Eigen values and eigenvectorsAmit Singh
 

La actualidad más candente (20)

Matrix inverse
Matrix inverseMatrix inverse
Matrix inverse
 
9.1 Systems of Linear Equations
9.1 Systems of Linear Equations9.1 Systems of Linear Equations
9.1 Systems of Linear Equations
 
Abstract algebra & its applications
Abstract algebra & its applicationsAbstract algebra & its applications
Abstract algebra & its applications
 
NUMERICAL METHODS -Iterative methods(indirect method)
NUMERICAL METHODS -Iterative methods(indirect method)NUMERICAL METHODS -Iterative methods(indirect method)
NUMERICAL METHODS -Iterative methods(indirect method)
 
Higher order ODE with applications
Higher order ODE with applicationsHigher order ODE with applications
Higher order ODE with applications
 
LinearAlgebra.ppt
LinearAlgebra.pptLinearAlgebra.ppt
LinearAlgebra.ppt
 
systems of linear equations & matrices
systems of linear equations & matricessystems of linear equations & matrices
systems of linear equations & matrices
 
Divergence and curl
Divergence and curlDivergence and curl
Divergence and curl
 
Quadratic equation slideshare
Quadratic equation slideshareQuadratic equation slideshare
Quadratic equation slideshare
 
Differential equations
Differential equationsDifferential equations
Differential equations
 
Multiple Choice Questions - Numerical Methods
Multiple Choice Questions - Numerical MethodsMultiple Choice Questions - Numerical Methods
Multiple Choice Questions - Numerical Methods
 
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical MethodsGauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
 
Vector space
Vector spaceVector space
Vector space
 
Applications Of Laplace Transforms
Applications Of Laplace TransformsApplications Of Laplace Transforms
Applications Of Laplace Transforms
 
Regula falsi method
Regula falsi methodRegula falsi method
Regula falsi method
 
Lecture 1
Lecture 1Lecture 1
Lecture 1
 
Lagrange equation and its application
Lagrange equation and its applicationLagrange equation and its application
Lagrange equation and its application
 
Eigen values and eigen vectors engineering
Eigen values and eigen vectors engineeringEigen values and eigen vectors engineering
Eigen values and eigen vectors engineering
 
Curve fitting - Lecture Notes
Curve fitting - Lecture NotesCurve fitting - Lecture Notes
Curve fitting - Lecture Notes
 
Eigen values and eigenvectors
Eigen values and eigenvectorsEigen values and eigenvectors
Eigen values and eigenvectors
 

Destacado

Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equationsmofassair
 
Crout s method for solving system of linear equations
Crout s method for solving system of linear equationsCrout s method for solving system of linear equations
Crout s method for solving system of linear equationsSugathan Velloth
 
Gaussian Elimination Method
Gaussian Elimination MethodGaussian Elimination Method
Gaussian Elimination MethodAndi Firdaus
 
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONS
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONSNUMERICAL METHODS MULTIPLE CHOICE QUESTIONS
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONSnaveen kumar
 
Gauss jordan and Guass elimination method
Gauss jordan and Guass elimination methodGauss jordan and Guass elimination method
Gauss jordan and Guass elimination methodMeet Nayak
 
Gaussian elimination method & homogeneous linear equation
Gaussian elimination method & homogeneous linear equationGaussian elimination method & homogeneous linear equation
Gaussian elimination method & homogeneous linear equationStudent
 
Jacobi and gauss-seidel
Jacobi and gauss-seidelJacobi and gauss-seidel
Jacobi and gauss-seidelarunsmm
 
Systems of linear equations
Systems of linear equationsSystems of linear equations
Systems of linear equationsgandhinagar
 
Multiple Choice:.doc
Multiple Choice:.docMultiple Choice:.doc
Multiple Choice:.docbutest
 
Cramers rule
Cramers ruleCramers rule
Cramers rulemstf mstf
 
Gauss y gauss jordan
Gauss y gauss jordanGauss y gauss jordan
Gauss y gauss jordanjonathann89
 
Iterative methods for the solution of systems of linear equations
Iterative methods for the solution of systems of linear equationsIterative methods for the solution of systems of linear equations
Iterative methods for the solution of systems of linear equationsNORAIMA
 
Solving system of linear equations
Solving system of linear equationsSolving system of linear equations
Solving system of linear equationsMew Aornwara
 
Linear Equations
Linear EquationsLinear Equations
Linear Equationsrfant
 
Direct solution of sparse network equations by optimally ordered triangular f...
Direct solution of sparse network equations by optimally ordered triangular f...Direct solution of sparse network equations by optimally ordered triangular f...
Direct solution of sparse network equations by optimally ordered triangular f...Dimas Ruliandi
 
Pre calculus warm up.3.11.14
Pre calculus warm up.3.11.14Pre calculus warm up.3.11.14
Pre calculus warm up.3.11.14Ron Eick
 
Direct Methods For The Solution Of Systems Of
Direct Methods For The Solution Of Systems OfDirect Methods For The Solution Of Systems Of
Direct Methods For The Solution Of Systems OfMarcela Carrillo
 
Expository paragraph
Expository paragraphExpository paragraph
Expository paragraphAfraz Khan
 
81 systems of linear equations 1
81 systems of linear equations 181 systems of linear equations 1
81 systems of linear equations 1math126
 

Destacado (20)

Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equations
 
Crout s method for solving system of linear equations
Crout s method for solving system of linear equationsCrout s method for solving system of linear equations
Crout s method for solving system of linear equations
 
Gaussian Elimination Method
Gaussian Elimination MethodGaussian Elimination Method
Gaussian Elimination Method
 
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONS
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONSNUMERICAL METHODS MULTIPLE CHOICE QUESTIONS
NUMERICAL METHODS MULTIPLE CHOICE QUESTIONS
 
Gauss jordan and Guass elimination method
Gauss jordan and Guass elimination methodGauss jordan and Guass elimination method
Gauss jordan and Guass elimination method
 
Gaussian elimination method & homogeneous linear equation
Gaussian elimination method & homogeneous linear equationGaussian elimination method & homogeneous linear equation
Gaussian elimination method & homogeneous linear equation
 
Jacobi and gauss-seidel
Jacobi and gauss-seidelJacobi and gauss-seidel
Jacobi and gauss-seidel
 
Systems of linear equations
Systems of linear equationsSystems of linear equations
Systems of linear equations
 
Multiple Choice:.doc
Multiple Choice:.docMultiple Choice:.doc
Multiple Choice:.doc
 
Cramers rule
Cramers ruleCramers rule
Cramers rule
 
Gauss y gauss jordan
Gauss y gauss jordanGauss y gauss jordan
Gauss y gauss jordan
 
Gauss sediel
Gauss sedielGauss sediel
Gauss sediel
 
Iterative methods for the solution of systems of linear equations
Iterative methods for the solution of systems of linear equationsIterative methods for the solution of systems of linear equations
Iterative methods for the solution of systems of linear equations
 
Solving system of linear equations
Solving system of linear equationsSolving system of linear equations
Solving system of linear equations
 
Linear Equations
Linear EquationsLinear Equations
Linear Equations
 
Direct solution of sparse network equations by optimally ordered triangular f...
Direct solution of sparse network equations by optimally ordered triangular f...Direct solution of sparse network equations by optimally ordered triangular f...
Direct solution of sparse network equations by optimally ordered triangular f...
 
Pre calculus warm up.3.11.14
Pre calculus warm up.3.11.14Pre calculus warm up.3.11.14
Pre calculus warm up.3.11.14
 
Direct Methods For The Solution Of Systems Of
Direct Methods For The Solution Of Systems OfDirect Methods For The Solution Of Systems Of
Direct Methods For The Solution Of Systems Of
 
Expository paragraph
Expository paragraphExpository paragraph
Expository paragraph
 
81 systems of linear equations 1
81 systems of linear equations 181 systems of linear equations 1
81 systems of linear equations 1
 

Similar a Direct Methods to Solve Linear Equations Systems

Chapter 4: Linear Algebraic Equations
Chapter 4: Linear Algebraic EquationsChapter 4: Linear Algebraic Equations
Chapter 4: Linear Algebraic EquationsMaria Fernanda
 
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeks
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeksBeginning direct3d gameprogrammingmath05_matrices_20160515_jintaeks
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeksJinTaek Seo
 
Matrix and its applications by mohammad imran
Matrix and its applications by mohammad imranMatrix and its applications by mohammad imran
Matrix and its applications by mohammad imranMohammad Imran
 
Introduction to Business Mathematics
Introduction to Business MathematicsIntroduction to Business Mathematics
Introduction to Business MathematicsZunair Bhatti
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinantssom allul
 
Determinants
DeterminantsDeterminants
DeterminantsRivan001
 
Module 1 Theory of Matrices.pdf
Module 1 Theory of Matrices.pdfModule 1 Theory of Matrices.pdf
Module 1 Theory of Matrices.pdfPrathamPatel560716
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equationsCesar Mendoza
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equationsCesar Mendoza
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equationpayalpriyadarshinisa1
 
Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraMUHAMMADUSMAN93058
 
Applied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation
Applied Numerical Methods Curve Fitting: Least Squares Regression, InterpolationApplied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation
Applied Numerical Methods Curve Fitting: Least Squares Regression, InterpolationBrian Erandio
 
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdf
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdfMATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdf
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdfElaMaeSilmaro
 
Linear_Algebra_final.pdf
Linear_Algebra_final.pdfLinear_Algebra_final.pdf
Linear_Algebra_final.pdfRohitAnand125
 

Similar a Direct Methods to Solve Linear Equations Systems (20)

Chapter 4: Linear Algebraic Equations
Chapter 4: Linear Algebraic EquationsChapter 4: Linear Algebraic Equations
Chapter 4: Linear Algebraic Equations
 
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeks
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeksBeginning direct3d gameprogrammingmath05_matrices_20160515_jintaeks
Beginning direct3d gameprogrammingmath05_matrices_20160515_jintaeks
 
Section-7.4-PC.ppt
Section-7.4-PC.pptSection-7.4-PC.ppt
Section-7.4-PC.ppt
 
Matrix and its applications by mohammad imran
Matrix and its applications by mohammad imranMatrix and its applications by mohammad imran
Matrix and its applications by mohammad imran
 
Introduction to Business Mathematics
Introduction to Business MathematicsIntroduction to Business Mathematics
Introduction to Business Mathematics
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
 
Linear Algebra and its use in finance:
Linear Algebra and its use in finance:Linear Algebra and its use in finance:
Linear Algebra and its use in finance:
 
system of linear equations
system of linear equationssystem of linear equations
system of linear equations
 
Determinants
DeterminantsDeterminants
Determinants
 
Module 1 Theory of Matrices.pdf
Module 1 Theory of Matrices.pdfModule 1 Theory of Matrices.pdf
Module 1 Theory of Matrices.pdf
 
Matrices
MatricesMatrices
Matrices
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equations
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equations
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equation
 
Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear Algebra
 
Applied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation
Applied Numerical Methods Curve Fitting: Least Squares Regression, InterpolationApplied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation
Applied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation
 
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdf
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdfMATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdf
MATRICES-AND-SYSTEMS-OF-LINEAR-EQUATIONS_Part-1_Feb14.pdf
 
Presentation.pptx
Presentation.pptxPresentation.pptx
Presentation.pptx
 
Linear_Algebra_final.pdf
Linear_Algebra_final.pdfLinear_Algebra_final.pdf
Linear_Algebra_final.pdf
 
Rankmatrix
RankmatrixRankmatrix
Rankmatrix
 

Más de Lizeth Paola Barrero (14)

Direct methods
Direct methodsDirect methods
Direct methods
 
Direct methods
Direct methodsDirect methods
Direct methods
 
Direct Methods to Solve Lineal Equations
Direct Methods to Solve Lineal EquationsDirect Methods to Solve Lineal Equations
Direct Methods to Solve Lineal Equations
 
Chapter 3 roots of equations
Chapter 3 roots of equationsChapter 3 roots of equations
Chapter 3 roots of equations
 
Chapter 3 roots of equations
Chapter 3 roots of equationsChapter 3 roots of equations
Chapter 3 roots of equations
 
Chapter 3 roots of equations
Chapter 3 roots of equationsChapter 3 roots of equations
Chapter 3 roots of equations
 
Chapter 2 roots of equations
Chapter 2 roots of equationsChapter 2 roots of equations
Chapter 2 roots of equations
 
Roots of equations
Roots of equationsRoots of equations
Roots of equations
 
Numerical approximation
Numerical approximationNumerical approximation
Numerical approximation
 
Numerical approximation
Numerical approximationNumerical approximation
Numerical approximation
 
Numerical methods
Numerical methodsNumerical methods
Numerical methods
 
Numerical methods
Numerical methodsNumerical methods
Numerical methods
 
Numerical methods
Numerical methodsNumerical methods
Numerical methods
 
Numerical methods
Numerical methodsNumerical methods
Numerical methods
 

Direct Methods to Solve Linear Equations Systems

  • 1. DirectMethodstothesolution of linear equationssystems Lizeth Paola Barrero Riaño NumericalMethods – Industrial Univesity of Santander
  • 2.
  • 10. Matrix A horizontal set of elements is called a row(i) and a vertical set is called a column (j). A matrix consists of a rectangular array of elements represented by a single symbol. As depicted in Figure [A] is the shorthand notation for the matrix and designates an individual element of the matrix. Column 3 Row 2
  • 11. SymmetricMatrix It is a square matrix in which the elements are symmetric about the main diagonal Example: Scalar, diagonal and identity matrices, are symmetric matrices. If A is a symmetric matrix, then: The product is defined and is a symmetric matrix. The sum of symmetric matrices is a symmetric matrix. The product of two symmetric matrices is a symmetric matrix if the matrices commute
  • 12. TransposesMatrix Letanymatrix A=(aij) of mxnorder, thematrix B=(bij) de ordernxmisthe A transposeifthe A rows are the B columns . This operation is usually denoted by At = A' = B Example: Properties:
  • 13. Determinant Given a square matrix A of n size, its determinant is defined as the sum of the any matrix line elements product (row or column) chosen, by their corresponding attachments. Example:
  • 14. Determinant Properties If a matrix has a line (row or column) of zeros, the determinant is zero. If a matrix has two equal rows or proportional, the determinant is null If we permute two parallels lines of a square matrix, its determinant changes sign. If we multiply all elements of a determinant line by a number, the determinant is multiplied by that number. If a matrix line is added another line multiplied by a number, the determinant does not change. The determinant of a matrix is equal to its transpose, If A has inverse matrix, A-1, it is verified that:
  • 15. Upper and Lower Triangular Matrix Upper Triangular Matrix Lower Triangular Matrix Example: Example: It is a square matrix in which all the items under the main diagonal are zero. It is a square matrix in which all the elements above the main diagonal are zero.
  • 16. BandedMatrix A band matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side. Example: Diagonal Triadiagonal Pentadiagonal
  • 17. AugmentedMatrix It is called extended or augmented matrix that is formed by the coefficient matrix and the vector of independent terms, which are usually separated with a dotted line Example:
  • 18. MatrixMultiplication . To define A ⋅ B is necessary that the number of columns in the first matrix coincides with the number of rows in the second matrix. The product order is given by the number of rows in the first matrix per the number of columns in the second matrix. That is, if A is mxn order and B is nxp order, then C = A ⋅ B is mxp order. theproduct A.Bis Given anothermatrix in whicheachCijisthenth rowproduct of A per thenthcolumn of B, namelythe element
  • 20. Solution of Linear Algebraic Equation Linear algebra is one of the corner stones of modern computational mathematics. Almost all numerical schemes such as the finite element method and finite difference method are in act techniques that transform, assemble, reduce, rearrange, and/or approximate the differential, integral, or other types of equations to systems of linear algebraic equations. A system of linear algebraic equations can be expressed as
  • 22. In this part, we deal with the case of determining the values x1, x2, …, xn that simultaneously satisfy a set of equations. Solving a system with a coefficient matrix is equivalent to finding the intersection point(s) of all m surfaces (lines) in an n-dimensional space. If all m surfaces happen to pass through a single point, then the solution is unique.
  • 23. Small Systems of Linear Equations Graphical Method Cramer’s Rule The Elimination of Unknowns
  • 24. 1. Graphical Method When solving a system with two linear equations in two variables, we are looking for the point where the two lines cross. This can be determined by graphing each line on the same coordinate system and estimating the point of intersection. When two straight lines are graphed, one of three possibilities may result:
  • 25. Graphical Method When two lines cross in exactly one point, the system is consistent and independent and the solution is the one ordered pair where the two lines cross. The coordinates of this ordered pair can be estimated from the graph of the two lines: Case 1 Independent system: one solution point
  • 26. Graphical Method This graph shows two distinct lines that are parallel. Since parallel lines never cross, then there can be no intersection; that is, for a system of equations that graphs as parallel lines, there can be no solution. This is called an "inconsistent" system of equations, and it has no solution. Case 2 Inconsistent system: no solution and no intersection point
  • 27. Graphical Method This graph appears to show only one line. Actually, it is the same line drawn twice. These "two" lines, really being the same line, "intersect" at every point along their length. This is called a "dependent" system, and the "solution" is the whole line. Case 3 Dependent system: the solution is the whole line
  • 29. The graphical method cannot be used when we have more than two variables in the equations, and even with two variables its results are imprecise. For instance, if the lines cross at a shallow angle it can be just about impossible to tell where the lines cross:
  • 30. Graphical Method Example Solve the following system by graphing: 2x – 3y = –2 and 4x + y = 24. First, we must solve each equation for "y =" so we can graph easily. From 2x – 3y = –2: 2x + 2 = 3y, so y = (2/3)x + (2/3). From 4x + y = 24: y = –4x + 24.
  • 31. Graphical Method The second line is easy to graph using just the slope and intercept, but a T-chart is needed for the first line. Solution: (x, y) = (5, 4)
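The intersection read off the graph can be confirmed algebraically; a minimal sketch for the same system:

```python
# Sketch: confirming the graphical solution of the example system
#   2x - 3y = -2
#   4x +  y = 24
# From the second equation, y = 24 - 4x; substituting into the first:
#   2x - 3(24 - 4x) = -2  ->  14x = 70  ->  x = 5, y = 4
x = 70 / 14
y = 24 - 4 * x
print((x, y))  # (5.0, 4.0), matching the point estimated from the graph
```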
  • 32. Cramer’s Rule Cramer’s rule is another technique that is best suited to small numbers of equations. This rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants, with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, …, bn. For example, x1 would be computed as
  • 33. Example Use Cramer’s Rule to solve the system: 5x – 4y = 2 and 6x – 5y = 1. Solution: We begin by setting up and evaluating the three determinants:
  • 34. Example From Cramer’s Rule, we have x = Dx/D = 6 and y = Dy/D = 7, so the solution is (6, 7). Cramer’s Rule does not apply if D = 0. When D = 0, the system is either inconsistent or dependent, and another method must be used to solve it.
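The same example can be worked through in code, including the D = 0 check from the slide:

```python
# Sketch: Cramer's rule for the 2x2 example
#   5x - 4y = 2
#   6x - 5y = 1
def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

D  = det2(5, -4, 6, -5)   # coefficient determinant
Dx = det2(2, -4, 1, -5)   # x column replaced by the constants
Dy = det2(5,  2, 6,  1)   # y column replaced by the constants

if D == 0:
    print("Cramer's rule does not apply: system is inconsistent or dependent")
else:
    print((Dx / D, Dy / D))  # (6.0, 7.0)
```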
  • 35. The Elimination of Unknowns The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations: The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other variable. For example, these equations might be multiplied by a21 and a11 to give
  • 36. The Elimination of Unknowns Subtracting Eq. 3 from Eq. 4 will, therefore, eliminate the x1 term from the equations to yield an equation which can be solved for x2. This result can then be substituted into Eq. 1, which can be solved for x1.
  • 37. The Elimination of Unknowns Notice that these equations follow directly from Cramer’s rule, which states EXAMPLE Use the elimination of unknowns to solve,
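The two-equation strategy above can be sketched generically. The concrete example system is not reproduced in the text, so the system in the call below is a made-up illustration:

```python
# Sketch: elimination of unknowns for a generic 2x2 system
#   a11*x1 + a12*x2 = b1
#   a21*x1 + a22*x2 = b2
# The denominator a11*a22 - a12*a21 matches the Cramer's-rule determinant.
def eliminate_2x2(a11, a12, b1, a21, a22, b2):
    # Multiply the equations by a21 and a11 and subtract to eliminate x1,
    # then back-substitute into the first equation (a11 assumed nonzero).
    d = a11 * a22 - a12 * a21
    if d == 0:
        raise ValueError("system is singular")
    x2 = (a11 * b2 - a21 * b1) / d
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# Hypothetical system: 3x1 + 2x2 = 18, x1 - x2 = 1
print(eliminate_2x2(3, 2, 18, 1, -1, 1))  # (4.0, 3.0)
```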
  • 38. Gaussian Elimination Gaussian Elimination is considered the workhorse of computational science for the solution of a system of linear equations. Carl Friedrich Gauss, a great 19th-century mathematician, suggested this elimination method as part of his proof of a particular theorem. Computational scientists use this “proof” as a direct computational method. Gaussian Elimination is a systematic application of elementary row operations to a system of linear equations in order to convert the system to upper triangular form. Once the coefficient matrix is in upper triangular form, we use back substitution to find a solution.
  • 39. Gaussian Elimination The general procedure for Gaussian Elimination can be summarized in the following steps: Write the augmented matrix for the system of linear equations. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form. If a zero is located on the diagonal, switch the rows until a nonzero is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or no solution. Use back substitution to find the solution of the problem.
  • 40. Gaussian Elimination Example 1. Write the augmented matrix for the system of linear equations. 2. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form. Swap row 1 and row 2.
  • 41. Gaussian Elimination Notice that the original coefficient matrix had a “0” on the diagonal in row 1. Since we needed to use multiples of that diagonal element to eliminate the elements below it, we switched two rows in order to move a nonzero element into that position. We can use the same technique when a “0” appears on the diagonal as a result of calculation. If it is not possible to move a nonzero onto the diagonal by interchanging rows, then the system has either infinitely many solutions or no solution, and the coefficient matrix is said to be singular. Since all of the nonzero elements are now located in the “upper triangle” of the matrix, we have completed the first phase of solving a system of linear equations using Gaussian Elimination.
  • 42. Gaussian Elimination The second and final phase of Gaussian Elimination is back substitution. During this phase, we solve for the values of the unknowns, working our way up from the bottom row. 3. Use back substitution to find the solution of the problem. The last row in the augmented matrix represents the equation:
  • 43. Gaussian Elimination The second row of the augmented matrix represents the equation: Finally, the first row of the augmented matrix represents the equation
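The steps above can be sketched in code. The slide's own augmented matrix is not reproduced in the text, so the 3x3 system below is a made-up illustration; its zero leading entry forces the row switch described on the previous slides:

```python
# Sketch: Gaussian elimination on the augmented matrix [A|b]
# (row switching on zero pivots, then back substitution).
def gaussian_elimination(aug):
    n = len(aug)
    # Phase 1: forward elimination to upper triangular form
    for col in range(n):
        # If the diagonal entry is zero, switch with a lower row
        pivot_row = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot_row is None:
            raise ValueError("singular system: infinitely many or no solutions")
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    # Phase 2: back substitution, from the bottom row up
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        known = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - known) / aug[r][r]
    return x

# Hypothetical system with a zero in the first pivot position:
#   0x + 2y + 1z = 7;  1x + 1y + 1z = 6;  2x + 1y + 3z = 13
aug = [[0.0, 2.0, 1.0, 7.0],
       [1.0, 1.0, 1.0, 6.0],
       [2.0, 1.0, 3.0, 13.0]]
solution = gaussian_elimination(aug)
print(solution)  # [1.0, 2.0, 3.0]
```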
  • 44. Gauss-Jordan Elimination As in Gaussian Elimination, we are again transforming the coefficient matrix into another matrix that is much easier to solve, and the system represented by the new augmented matrix has the same solution set as the original system of linear equations. In Gauss-Jordan Elimination, the goal is to transform the coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix one column at a time. We work to eliminate the elements both above and below the diagonal element of a given column in one pass through the matrix.
  • 45. Gauss-Jordan Elimination The general procedure for Gauss-Jordan Elimination can be summarized in the following steps: Write the augmented matrix for the system of linear equations. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form. If a zero is located on the diagonal, switch the rows until a nonzero is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or no solution. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one.
  • 46. Gauss-Jordan Elimination Example We will apply Gauss-Jordan Elimination to the same example that was used to demonstrate Gaussian Elimination. 1. Write the augmented matrix for the system of linear equations. 2. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form.
  • 47. Gauss-Jordan Elimination 3. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one. Notice that the coefficient matrix is now a diagonal matrix with ones on the diagonal. This is a special matrix called the identity matrix.
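The Gauss-Jordan procedure can be sketched the same way; because elimination runs both above and below each pivot, no back substitution is needed and the solution appears directly in the last column. The 3x3 system below is a made-up illustration, not the slide's own matrix:

```python
# Sketch: Gauss-Jordan elimination reducing [A|b] until A is the identity.
def gauss_jordan(aug):
    n = len(aug)
    for col in range(n):
        # Switch rows if a zero sits on the diagonal
        pivot_row = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot_row is None:
            raise ValueError("singular system")
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        # Normalize the pivot row so the diagonal element is one
        pivot = aug[col][col]
        aug[col] = [v / pivot for v in aug[col]]
        # Eliminate the column both above and below the pivot
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [rv - factor * pv for rv, pv in zip(aug[r], aug[col])]
    return [row[n] for row in aug]  # A is now the identity; b holds the solution

# Hypothetical system:
#   2x +  y -  z =   8;  -3x - y + 2z = -11;  -2x + y + 2z = -3
aug = [[ 2.0,  1.0, -1.0,   8.0],
       [-3.0, -1.0,  2.0, -11.0],
       [-2.0,  1.0,  2.0,  -3.0]]
solution = gauss_jordan(aug)
print(solution)  # [2.0, 3.0, -1.0]
```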
  • 48. LU Decomposition Just as was the case with Gauss elimination, LU decomposition requires pivoting to avoid division by zero. However, to simplify the following description, we will defer the issue of pivoting until after the fundamental approach is elaborated. In addition, the following explanation is limited to a set of three simultaneous equations; the results can be directly extended to n-dimensional systems. Linear algebraic notation can be rearranged to give Eq. 1. Suppose that this equation could be expressed as an upper triangular system, Eq. 3; elimination is used to reduce the system to upper triangular form. The above equation can also be expressed in matrix notation and rearranged to give
  • 49. LU Decomposition Now, assume that there is a lower triangular matrix with 1’s on the diagonal that has the property that when Eq. 3 is premultiplied by it, Eq. 1 is the result. That is, If this equation holds, it follows from the rules for matrix multiplication that
  • 51. LU Decomposition Substitution step: [L] and [U] are used to determine a solution {X} for a right-hand side {B}. This step itself consists of two steps. First, Eq. 7 is used to generate an intermediate vector {D} by forward substitution. Then, the result is substituted into Eq. 3, which can be solved by back substitution for {X}. Indeed, Gauss elimination itself can be implemented in this way.
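A minimal sketch of the whole procedure, using Doolittle-style decomposition (1's on the diagonal of L) without pivoting, so a nonzero pivot is assumed throughout; the 2x2 system is a made-up illustration:

```python
# Sketch: Doolittle LU decomposition (no pivoting) plus the two
# substitution steps: L{D} = {B} forward, then U{X} = {D} backward.
def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = [0.0] * n                   # forward substitution: L d = b
    for i in range(n):
        d[i] = b[i] - sum(L[i][k] * d[k] for k in range(i))
    x = [0.0] * n                   # back substitution: U x = d
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Hypothetical system: 4x + 3y = 10, 6x + 3y = 12
A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
print(L)                             # [[1.0, 0.0], [1.5, 1.0]]
print(U)                             # [[4.0, 3.0], [0.0, -1.5]]
solution = lu_solve(L, U, [10.0, 12.0])
print(solution)                      # [1.0, 2.0]
```

Once [L] and [U] are computed, new right-hand sides {B} can be solved with only the cheap substitution steps, which is the main practical advantage over repeating full elimination.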
  • 52. Bibliography CHAPRA, Steven. Numerical Methods for Engineers. McGraw-Hill, 2000. http://www.efunda.com/math http://www.purplemath.com http://ceee.rice.edu/Books