Mathematics for Artificial Intelligence
Square Matrices and Related Matters
Andres Mendez-Vazquez
March 2, 2020
Outline
1 Square Matrices
Introduction
The Inverse
Solution to Ax = y
Algorithm for the Inverse of a Matrix
2 Determinants
Introduction
Complexity Increases
Reducing the Complexity
Some Consequences of the definition
Special Determinants
Square Matrices
Observation
Square matrices are the only matrices that can have inverses.
Further
In a system of linear algebraic equations:
1 If the number of equations equals the number of unknowns
2 Then the associated coefficient matrix A is square.
Now, apply the Gauss-Jordan reduction to A
What can happen?
We have two possibilities
First case
The Gauss-Jordan form for An×n is the n × n identity matrix In
Second case
The Gauss-Jordan form for A has at least one row of zeros
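To see which of the two cases applies for a concrete matrix, one can simply compute the Gauss-Jordan form. A minimal sketch using SymPy's rref; the matrices here are illustrative examples, not taken from the slides:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1], [1, 2]])
R, pivots = A.rref()          # Gauss-Jordan (reduced row-echelon) form
print(R == eye(2))            # True: first case, A reduces to the identity

B = Matrix([[1, 2], [2, 4]])  # second row is a multiple of the first
R, pivots = B.rref()
print(R)                      # Matrix([[1, 2], [0, 0]]): second case, a row of zeros
```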
Explanation
In the first case
We can show that A is invertible.
How? Do you remember?
$E_k E_{k-1} \cdots E_2 E_1 A = I$
Setting $B = E_k E_{k-1} \cdots E_2 E_1$
We have $BA = I$, therefore $B = A^{-1}$
Furthermore
We can build the following matrix
$E_1^{-1} E_2^{-1} \cdots E_k^{-1}$
Then
$E_1^{-1} E_2^{-1} \cdots E_k^{-1} BA = E_1^{-1} E_2^{-1} \cdots E_k^{-1} I$
Thus
$E_1^{-1} E_2^{-1} \cdots E_k^{-1} \left(E_k E_{k-1} \cdots E_2 E_1\right) A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$
Therefore
We have
$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$
Theorem
The following are equivalent:
1 The square matrix A is invertible.
2 The Gauss-Jordan (reduced echelon) form of A is the identity matrix.
3 A can be written as a product of elementary matrices.
The Second Case
The Gauss-Jordan form of An×n
It can have at most n leading entries.
If the Gauss-Jordan form of A is not I
We have something quite different
The Gauss-Jordan form then has n − 1 or fewer leading entries
Therefore, it has at least one row of zeros.
Example
Given
$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$
What do we need to do?
Look at the blackboard...
Therefore
$A^{-1} = E_4 E_3 E_2 E_1$
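Since the actual row operations are left to the blackboard, here is one possible factorization into four elementary matrices for this A, checked with NumPy. The specific choice of E1, ..., E4 is an assumption; any valid reduction sequence gives a product equal to A⁻¹:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

E1 = np.array([[0.5, 0.0], [0.0, 1.0]])    # R1 <- (1/2) R1
E2 = np.array([[1.0, 0.0], [-1.0, 1.0]])   # R2 <- R2 - R1
E3 = np.array([[1.0, 0.0], [0.0, 2/3]])    # R2 <- (2/3) R2
E4 = np.array([[1.0, -0.5], [0.0, 1.0]])   # R1 <- R1 - (1/2) R2

print(np.allclose(E4 @ E3 @ E2 @ E1 @ A, np.eye(2)))     # True: E4 E3 E2 E1 A = I
print(np.allclose(E4 @ E3 @ E2 @ E1, np.linalg.inv(A)))  # True: E4 E3 E2 E1 = A^{-1}
```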
If A is invertible (a.k.a. full rank)
The equation Ax = y has the unique solution
$Ax = y \Leftrightarrow A^{-1}Ax = A^{-1}y \Leftrightarrow x = A^{-1}y$
If A is not invertible
Then there is at least one free variable.
There are non-trivial solutions to Ax = 0.
If y ≠ 0
Either Ax = y is inconsistent, or solutions to the system exist but there are infinitely many.
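A small NumPy illustration of both situations; the matrices and right-hand side are made up for the example:

```python
import numpy as np

# Invertible (full-rank) case: a unique solution x = A^{-1} y.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
y = np.array([3.0, 3.0])
x = np.linalg.solve(A, y)
print(x, np.allclose(A @ x, y))   # [1. 1.] True

# Singular case: rank < n, so Ax = 0 has non-trivial solutions and
# Ax = y is either inconsistent or has infinitely many solutions.
S = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.matrix_rank(S))   # 1
```

In practice np.linalg.solve is preferred over forming A⁻¹ explicitly, although for an invertible A both give the same unique solution.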
Something Quite Important
We have done something important
It leads immediately to an algorithm for constructing the inverse of A.
Observation
Suppose Bn×p is another matrix with the same number of rows as An×n
Then
$C = (A \mid B)_{n \times (n+p)}$
Then
For any n × n matrix E (in particular an elementary matrix), it is clear that
$EC = (EA \mid EB)_{n \times (n+p)}$
Where
EA is an n × n matrix.
EB is an n × p matrix.
Algorithm
Form the partitioned matrix
C = (A|I)
Apply the Gauss-Jordan reduction
$E_k E_{k-1} \cdots E_1 (A \mid I) = (E_k E_{k-1} \cdots E_1 A \mid E_k E_{k-1} \cdots E_1 I)$
Therefore, if A is invertible
$(E_k E_{k-1} \cdots E_1 A \mid E_k E_{k-1} \cdots E_1 I) = (I \mid A^{-1})$
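A minimal sketch of this algorithm in NumPy; the partial-pivoting choice is an addition of mine (the slides do not fix a pivoting strategy):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix (A | I); if A reduces to I,
    the right-hand block is A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.hstack([A, np.eye(n)])                      # C = (A | I)
    for col in range(n):
        pivot = col + np.argmax(np.abs(C[col:, col]))  # largest pivot, for stability
        if np.isclose(C[pivot, col], 0.0):
            raise ValueError("A is singular: its Gauss-Jordan form has a zero row")
        C[[col, pivot]] = C[[pivot, col]]              # row swap
        C[col] /= C[col, col]                          # scale pivot row so the pivot is 1
        for r in range(n):
            if r != col:
                C[r] -= C[r, col] * C[col]             # eliminate the rest of the column
    return C[:, n:]                                    # the right half is now A^{-1}

A = [[2.0, 1.0], [1.0, 2.0]]
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))   # True
```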
Remark
The individual elementary factors in the product for $A^{-1}$ are not unique
They depend on how we do the row reduction.
Before the Matrix we had the Determinant
You have seen determinants in your classes long ago
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \Longrightarrow \det(A) = ad - bc$
What about
$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$
Then
$\det(A) = a_{11}a_{22}a_{33} + a_{13}a_{21}a_{32} + a_{12}a_{23}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32} - a_{13}a_{22}a_{31}$
Recursive Definition
Definition
Let A be an n × n matrix. Then the determinant of A is defined as follows:
$\det(A) = \begin{cases} a_{11} & \text{if } n = 1 \\ \sum_{i=1}^{n} a_{i1} A_{i1} & \text{if } n > 1 \end{cases}$
Where
$A_{ij}$ is the (i, j)-cofactor, with
$A_{ij} = (-1)^{i+j} \det(M_{ij})$
Here
$M_{ij}$ is the (n − 1) × (n − 1) matrix obtained from A by removing its ith row and jth column.
Example
We have
$A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}$
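A direct transcription of the recursive definition in Python, applied to the matrix above (expansion along the first column). This is only a sketch; as the next section explains, determinants should not be computed this way in practice:

```python
import numpy as np

def det_cofactor(A):
    """det(A) = a11 for n = 1, otherwise sum_i a_{i1} * A_{i1},
    expanding along the first column."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):                                          # zero-based row index
        M_i1 = np.delete(np.delete(A, i, axis=0), 0, axis=1)    # minor M_{i1}
        total += (-1) ** i * A[i, 0] * det_cofactor(M_i1)       # (-1)^i = (-1)^{(i+1)+1}
    return total

A = [[1, 1, 1],
     [0, 2, 1],
     [0, 0, 3]]
print(det_cofactor(A))   # 6.0 (= 1 * 2 * 3, as expected for a triangular matrix)
```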
Problems Will Robinson!!!
For a matrix An×n, we have that
det (A) has n factorial (n!) terms
Problems
Even the fastest computer in the world would take forever to finish
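A rough back-of-the-envelope check of that claim, with an assumed (hypothetical) rate of 10⁹ terms processed per second:

```python
from math import factorial

for n in (5, 10, 15, 20, 25):
    terms = factorial(n)
    seconds = terms / 1e9                               # hypothetical 1e9 terms/second
    years = seconds / (3600 * 24 * 365)
    print(f"n={n:2d}  n! = {terms:.3e}  ~{years:.2e} years")
# Already at n = 25 this is on the order of hundreds of millions of years.
```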
Thus, we have some problems with that
Floating point arithmetic
It is not at all the same thing as working with real numbers.
Representation
$x = (d_1 d_2 d_3 \cdots d_n) \times 2^{a_1 a_2 \cdots a_m}$ (1)
The problem is the round-off
When we do a calculation on a computer, we almost never get the exactly right answer.
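The classic round-off illustration; this is standard IEEE-754 floating-point behaviour, not anything specific to determinants:

```python
# 0.1, 0.2 and 0.3 are not exactly representable in binary floating point,
# so even this trivial "calculation" is not exact.
print(0.1 + 0.2)                 # 0.30000000000000004
print(0.1 + 0.2 == 0.3)          # False

# Accumulated round-off: summing 0.1 ten times does not give exactly 1.0.
print(sum([0.1] * 10) == 1.0)    # False
```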
Other Problems
We would love floating-point numbers to be distributed uniformly
But floating-point numbers are distributed logarithmically, which is quite different from the even distribution of the rationals.
Computations do not scale well
2 multiplications and 1 addition to compute the 2 × 2 determinant
12 multiplications and 5 additions to compute the 3 × 3 determinant
72 multiplications and 23 additions to compute the 4 × 4 determinant
Therefore
How do we avoid getting into these problems?
We need to define our determinant through a different structure...
Definition
The determinant of A is a real-valued function of the rows of A, which we write as
det (A) = det (r1, r2, ..., rn)
Properties
Multiplying a row by the constant c multiplies the determinant by c
det (r1, r2, ..., c ri, ..., rn) = c det (r1, r2, ..., ri, ..., rn)
If row i is the sum of the two row vectors x and y
det (r1, r2, ..., x + y, ..., rn) = det (r1, r2, ..., x, ..., rn) + det (r1, r2, ..., y, ..., rn)
Meaning
The determinant is a linear function of each row.
Further
Interchanging any two rows of the matrix changes the sign of the determinant
det (..., ri, ..., rj, ...) = −det (..., rj, ..., ri, ...)
Finally
The determinant of any identity matrix is 1
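These defining properties are easy to check numerically. A small NumPy sketch on an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

B = A.copy(); B[1] *= 5.0                   # scale one row by c = 5
print(np.isclose(np.linalg.det(B), 5.0 * np.linalg.det(A)))   # True

C = A.copy(); C[[0, 2]] = C[[2, 0]]         # interchange two rows
print(np.isclose(np.linalg.det(C), -np.linalg.det(A)))        # True

print(np.linalg.det(np.eye(4)))             # 1.0
```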
Some Properties
Property 1
If A has a row of zeros, then det(A) = 0.
Proof
1 If A = (..., 0, ...), then also A = (..., c · 0, ...) for any c
2 So det (A) = c × det (A) for any c, by the scaling property
3 Thus det (A) = 0
Next
Property 2
If ri = rj with i ≠ j, then det(A) = 0.
Proof
Quite easy (Hint: interchanging the two equal rows reverses the sign but leaves the matrix unchanged, so det(A) = −det(A)).
Finally
Proposition 3
If B is obtained from A by replacing ri with ri + crj , then
det(B) = det(A)
Proof
det (B) = det (..., ri + c rj, ..., rj, ...)
        = det (..., ri, ..., rj, ...) + det (..., c rj, ..., rj, ...)
        = det (A) + c det (..., rj, ..., rj, ...)
        = det (A) + 0, since two equal rows give a zero determinant (Property 2)
Therefore
Theorem
The determinant of an upper or lower triangular matrix is equal to the
product of the entries on the main diagonal.
Proof
Suppose A is upper triangular and that none of the entries on the
main diagonal is 0.
This means all the entries beneath the main diagonal are zero.
Using Proposition 3, we can convert it into a diagonal matrix.
Then, by the row-scaling property
$\det(A_{\mathrm{diag}}) = \left[\prod_{i=1}^{n} a_{ii}\right] \det(I) = \prod_{i=1}^{n} a_{ii}$
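A quick numerical confirmation of the theorem on an arbitrary random triangular matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
U = np.triu(rng.standard_normal((5, 5)))                      # upper triangular
print(np.isclose(np.linalg.det(U), np.prod(np.diag(U))))      # True
print(np.isclose(np.linalg.det(U.T), np.prod(np.diag(U))))    # lower triangular case, also True
```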
Remark
Question
This is the property we use to compute determinants!!! How?
Example
We have
$A = \begin{pmatrix} 2 & 1 \\ 3 & -4 \end{pmatrix}$
First, we have
$r_1 = (2, 1) = 2\left(1, \tfrac{1}{2}\right)$
Then
$\det(A) = 2\,\det\begin{pmatrix} 1 & \tfrac{1}{2} \\ 3 & -4 \end{pmatrix}$
Further
We have, by Proposition 3,
$\det(A) = 2\,\det\begin{pmatrix} 1 & \tfrac{1}{2} \\ 0 & -\tfrac{11}{2} \end{pmatrix}$
Using the row-scaling property again
$\det(A) = 2\left(-\tfrac{11}{2}\right)\det\begin{pmatrix} 1 & \tfrac{1}{2} \\ 0 & 1 \end{pmatrix}$
Therefore
$\det(A) = -11$
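This row-reduction strategy is essentially what practical determinant routines (LU factorization) do. A sketch that follows the same idea: pull each pivot out by the scaling property, use Proposition 3 for the eliminations, and flip the sign on swaps. It chooses pivots by size rather than reproducing the exact steps above:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant by row reduction: pull each pivot out of its row
    (scaling property), track sign changes from swaps, and use
    Proposition 3 for the eliminations, which leave det unchanged."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    factor = 1.0
    for col in range(n):
        pivot = col + np.argmax(np.abs(U[col:, col]))
        if np.isclose(U[pivot, col], 0.0):
            return 0.0                          # zero pivot column => det(A) = 0
        if pivot != col:
            U[[col, pivot]] = U[[pivot, col]]
            factor = -factor                    # a row swap flips the sign
        factor *= U[col, col]
        U[col] /= U[col, col]                   # scaling property: pull the pivot out
        for r in range(col + 1, n):
            U[r] -= U[r, col] * U[col]          # Proposition 3: det unchanged
    return factor                               # what remains reduces to I, and det(I) = 1

print(det_by_elimination([[2, 1], [3, -4]]))    # ≈ -11, matching the slides (up to round-off)
```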
Further Properties
Property 4
The determinant of A is the same as that of its transpose $A^T$.
Proof
Hint: we do an elementary row operation on A. Then,
$(EA)^T = A^T E^T$
Finally
Property 5
If A and B are square matrices of the same size, then
$\det(AB) = \det(A)\det(B)$
Therefore
If A is invertible:
$\det(AA^{-1}) = \det(A)\det(A^{-1}) = \det(I) = 1$
Thus
$\det(A^{-1}) = \frac{1}{\det(A)}$
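Numerical spot checks of Properties 4 and 5 and of the inverse formula, on arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

det = np.linalg.det
print(np.isclose(det(A), det(A.T)))                      # Property 4: det(A) = det(A^T)
print(np.isclose(det(A @ B), det(A) * det(B)))           # Property 5: det(AB) = det(A) det(B)
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))     # det(A^{-1}) = 1 / det(A)
```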
Finally
Definition
If the (square) matrix A is invertible, then A is said to be
non-singular.
Otherwise, A is singular.