Lesson 9: Gaussian Elimination
1. Lesson 9
Gaussian Elimination (KH, Section 1.6)
Math 20
October 10, 2007
Announcements
Problem Set 4 will be on the course web site today. Due
10/17
Prob. Sess.: Sundays 6–7 (SC B-10), Tuesdays 1–2 (SC 116)
OH Mon 1–2, Tues 3–4, Weds 1–3 (SC 323)
Midterm I 10/18, Hall A 7–8:30pm
Review Session (ML), 10/16, 7:30–9:30 Hall E
5. Systems of Linear equations
Any set (system) of equations involving one or more variables, in
which each equation involves a linear combination of the variables.
Example
Here is a single linear equation in one variable:
4x + 2 = 6
Solution
Subtract 2 from each side and you get 4x = 4. Divide both sides
by 4 and you get x = 1.
8. Two equations in two variables
Example
Solve 2x + y = 3, x + 2y = 0.
Solution
x = 2, y = −1.
10. Three equations in three variables
Example
Solve the system of linear equations
2x2 − 3x3 = 4
−2x1 + x2 + 2x3 = −6
2x1 + x3 = 0
The more variables you get, the bigger the need for a systematic
way of solving systems of linear equations.
11. The Matrix viewpoint on SLEs
A system of m equations in n variables looks like:
a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
...
am1 x1 + am2 x2 + . . . + amn xn = bm
The operative data are the coefficients and the right-hand sides.
We can summarize it like this:
[ a11 a12 . . . a1n ] [ x1 ]   [ b1 ]
[ a21 a22 . . . a2n ] [ x2 ] = [ b2 ]
[  .    .        .  ] [ .. ]   [ .. ]
[ am1 am2 . . . amn ] [ xn ]   [ bm ]
or Ax = b.
13. The augmented matrix
In fact, we can express the whole system of linear equations in a
single matrix, called the augmented matrix:
[ a11 a12 . . . a1n | b1 ]
[ a21 a22 . . . a2n | b2 ]
[  .    .        .  |  . ]
[ am1 am2 . . . amn | bm ]
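For readers who like to experiment, here is a small Python sketch (not part of the original lecture) that builds the augmented matrix for the three-equation example above, using exact rational arithmetic from the standard-library fractions module:

```python
from fractions import Fraction

# Coefficient matrix A and right-hand side b for the lecture's
# three-equation example:
#   2x2 - 3x3 = 4,  -2x1 + x2 + 2x3 = -6,  2x1 + x3 = 0
A = [[0, 2, -3],
     [-2, 1, 2],
     [2, 0, 1]]
b = [4, -6, 0]

# The augmented matrix [A | b]: each row of A with its b entry appended.
augmented = [[Fraction(v) for v in row] + [Fraction(rhs)]
             for row, rhs in zip(A, b)]

for row in augmented:
    print(row)
```

Using Fraction instead of floats keeps every later row operation exact, which matches the rational answers in the lecture.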
16. Operations on systems of equations
Here are some facts about systems of equations.
1. Transposing equations doesn't change their solution.
2. Scaling an equation doesn't change its solution.
3. If a set of numbers satisfies two equations, then it also satisfies
the equation formed by adding a scalar multiple of one equation to
the other.
A simpler form might be
3'. If a set of numbers satisfies two equations, it satisfies the sum
of the two equations.
18. Row Operations
The operations on systems of linear equations are reflected in the
augmented matrix, too.
1. Transposing (switching) rows in an augmented matrix does
not change the solution.
2. Scaling any row in an augmented matrix does not change the
solution.
3. Adding to any row in an augmented matrix any multiple of
any other row in the matrix does not change the solution.
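These three row operations can be checked mechanically. The Python sketch below (an illustration, not part of the lecture) applies each operation to the augmented matrix of the earlier two-variable example, 2x + y = 3, x + 2y = 0, and confirms that its solution x = 2, y = −1 still satisfies every row afterward:

```python
from fractions import Fraction

# Augmented matrix for 2x + y = 3, x + 2y = 0 (solution: x = 2, y = -1).
M = [[Fraction(2), Fraction(1), Fraction(3)],
     [Fraction(1), Fraction(2), Fraction(0)]]

def satisfies(M, x, y):
    """Check that (x, y) satisfies every equation encoded in M."""
    return all(row[0] * x + row[1] * y == row[2] for row in M)

x, y = Fraction(2), Fraction(-1)
assert satisfies(M, x, y)

# 1. Transposing (swapping) rows:
M[0], M[1] = M[1], M[0]
assert satisfies(M, x, y)

# 2. Scaling a row by a nonzero constant (here: 5):
M[0] = [Fraction(5) * v for v in M[0]]
assert satisfies(M, x, y)

# 3. Adding a multiple of one row to another (here: row2 += -3 * row1):
M[1] = [a + Fraction(-3) * b for a, b in zip(M[1], M[0])]
assert satisfies(M, x, y)
```

The particular constants 5 and −3 are arbitrary choices for the demonstration; any nonzero scale and any multiple would pass the same checks.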
19. The Process of Gaussian Elimination
We'll solve the system of linear equations
2x2 − 3x3 = 4
−2x1 + x2 + 2x3 = −6
2x1 + x3 = 0
The augmented matrix is
[  0  2 −3 |  4 ]
[ −2  1  2 | −6 ]
[  2  0  1 |  0 ]
21. Transpose the first and third equations:
[  0  2 −3 |  4 ]      [  2  0  1 |  0 ]
[ −2  1  2 | −6 ]  →   [ −2  1  2 | −6 ]
[  2  0  1 |  0 ]      [  0  2 −3 |  4 ]
Now we can add the first row to the second and get another zero
in that column.
[  2  0  1 |  0 ]      [  2  0  1 |  0 ]
[ −2  1  2 | −6 ]  →   [  0  1  3 | −6 ]
[  0  2 −3 |  4 ]      [  0  2 −3 |  4 ]
23. We add (−2) times the second row to the third row.
[ 2 0  1 |  0 ]      [ 2 0  1 |  0 ]
[ 0 1  3 | −6 ]  →   [ 0 1  3 | −6 ]
[ 0 2 −3 |  4 ]      [ 0 0 −9 | 16 ]
This matrix is in row echelon form. The corresponding SLE can
be solved by back-substitution.
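The forward pass just performed can be replayed in a few lines of Python (a sketch for checking the arithmetic, not part of the lecture):

```python
from fractions import Fraction

# Augmented matrix for the lecture's system:
#   2x2 - 3x3 = 4,  -2x1 + x2 + 2x3 = -6,  2x1 + x3 = 0
M = [[Fraction(v) for v in row] for row in
     [[0, 2, -3, 4], [-2, 1, 2, -6], [2, 0, 1, 0]]]

# Transpose the first and third rows.
M[0], M[2] = M[2], M[0]

# Add the first row to the second.
M[1] = [a + b for a, b in zip(M[1], M[0])]

# Add (-2) times the second row to the third.
M[2] = [a - 2 * b for a, b in zip(M[2], M[1])]

# M is now the row echelon form from the slide:
# rows [2, 0, 1, 0], [0, 1, 3, -6], [0, 0, -9, 16]
for row in M:
    print(row)
```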
25.
[ 2 0  1 |  0 ]
[ 0 1  3 | −6 ]
[ 0 0 −9 | 16 ]
Since −9x3 = 16, we have x3 = −16/9. Substituting this into the
second equation gives
x2 − 48/9 = −6 =⇒ x2 = −6 + 48/9 = −54/9 + 48/9 = −6/9 = −2/3.
Finally, we have
2x1 − 16/9 = 0 =⇒ x1 = 8/9.
27. More Gaussian Elimination: The “backward pass”
Starting with the last matrix above, we scale the last row by −1/9:
[ 2 0  1 |  0 ]      [ 2 0 1 |     0 ]
[ 0 1  3 | −6 ]  →   [ 0 1 3 |    −6 ]
[ 0 0 −9 | 16 ]      [ 0 0 1 | −16/9 ]
Now we can zero out the third column above that bottom entry,
by adding (−3) times the third row to the second row, then adding
(−1) times the third row to the first row.
[ 2 0 1 |     0 ]      [ 2 0 0 |  16/9 ]
[ 0 1 3 |    −6 ]  →   [ 0 1 0 |  −6/9 ]
[ 0 0 1 | −16/9 ]      [ 0 0 1 | −16/9 ]
29. The top row can be scaled by 1/2, and we finally have
[ 2 0 0 |  16/9 ]      [ 1 0 0 |   8/9 ]
[ 0 1 0 |  −6/9 ]  →   [ 0 1 0 |  −6/9 ]
[ 0 0 1 | −16/9 ]      [ 0 0 1 | −16/9 ]
This matrix is said to be in reduced row echelon form.
And there you go; the solutions are staring you in the face!
30. Gaussian Elimination
1. Locate the first nonzero column. This is the pivot column, and
the top position in this column is called a pivot position.
Transpose rows to make sure this position has a nonzero entry.
If you like, scale the row to make this position equal to one.
2. Use row operations to make all entries below the pivot
position zero.
3. Repeat Steps 1 and 2 on the submatrix below the first row
and to the right of the first column. Eventually you will arrive at
a matrix in row echelon form. (The steps up to here are called the
forward pass.)
4. Scale the bottom row to make the leading entry one.
5. Use row operations to make all entries above this entry zero.
6. Repeat Steps 4 and 5 on the submatrix above and to the left of
this entry. (These steps are called the backward pass.)
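The steps above can be sketched as a single routine. The Python below is an illustrative implementation, not the lecture's: it interleaves the forward and backward passes, clearing each pivot column both below and above as it goes, which produces the same reduced row echelon form.

```python
from fractions import Fraction

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form in place.

    Gauss-Jordan variant of the lecture's procedure: for each pivot
    column we transpose, scale the pivot row to a leading one, and zero
    out every other entry in that column in one sweep.
    """
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry here.
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue                                   # no pivot in this column
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]  # transpose rows
        M[pivot_row] = [v / M[pivot_row][col] for v in M[pivot_row]]  # scale to 1
        for r in range(rows):                          # zero out the column
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

M = [[Fraction(v) for v in row] for row in
     [[0, 2, -3, 4], [-2, 1, 2, -6], [2, 0, 1, 0]]]
result = rref(M)
# result rows: [1, 0, 0, 8/9], [0, 1, 0, -2/3], [0, 0, 1, -16/9]
```

On the lecture's example this reproduces the RREF found by hand, so the two orderings of the passes agree on the final answer.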
32. So to solve an SLE:
Form the augmented matrix.
Reduce this matrix to (R)REF.
Read off the solution.
33. Meet the Mathematician
Carl Friedrich Gauss (1777–1855)
German; “the prince of mathematicians”
Proved the Fundamental Theorem of Algebra four times
Invented the least-squares method
Predicted the motion of planets
37. Questions
Suppose the matrix
[ 0  3  −6  6  4 | −5 ]
[ 3 −7   8 −5  8 |  9 ]
[ 3 −9  12 −9  6 | 15 ]
is given as the augmented matrix to a system of linear equations.
How do we interpret the solution from the RREF?
[ 1 0 −2 3 0 | −24 ]
[ 0 1 −2 2 0 |  −7 ]
[ 0 0  0 0 1 |   4 ]
40. The system of linear equations is
x1 − 2x3 + 3x4 = −24
x2 − 2x3 + 2x4 = −7
x5 = 4
or
x1 = −24 + 2s − 3t
x2 = −7 + 2s − 2t
x3 = s
x4 = t
x5 = 4
Here s and t can be anything we want and we can construct a
solution out of them. x3 and x4 are known as free variables; they
can take any value.
We see free variables in the RREF as the columns with no leading
entry.
42. Question
What if the RREF of the matrix were
[ 1 0 −2 3 0 | −24 ]
[ 0 1 −2 2 0 |  −7 ]
[ 0 0  0 0 0 |   1 ]
What would be the solutions to the associated system now?
Answer.
The bottom row represents the equation 0 = 1, which has no
solution. This system of equations is inconsistent.
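Detecting inconsistency from a matrix in (R)REF amounts to scanning for a row that is all zeros except for a nonzero last entry, i.e. a row encoding 0 = c with c ≠ 0. A Python sketch (illustrative only, using the two RREF matrices above):

```python
def is_inconsistent(rref_matrix):
    """True if some row of the augmented (R)REF encodes 0 = c with c != 0."""
    return any(all(v == 0 for v in row[:-1]) and row[-1] != 0
               for row in rref_matrix)

consistent = [[1, 0, -2, 3, 0, -24],
              [0, 1, -2, 2, 0, -7],
              [0, 0, 0, 0, 1, 4]]
inconsistent = [[1, 0, -2, 3, 0, -24],
                [0, 1, -2, 2, 0, -7],
                [0, 0, 0, 0, 0, 1]]

print(is_inconsistent(consistent))    # False
print(is_inconsistent(inconsistent))  # True
```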