3. Introduction
Many applications of matrices in both engineering and science
utilize eigenvalues and, sometimes,
eigenvectors. Control theory, vibration analysis, electric
circuits, advanced dynamics and
quantum mechanics are just a few of the application areas.
Many of the applications involve the use of eigenvalues and
eigenvectors in the process of transforming
a given matrix into a diagonal matrix and we discuss
this process in this Section. We
then go on to show how this process is invaluable in solving
coupled differential equations of
both first order and second order.
4. Introduction: Diagonal Matrices
Before beginning this topic, we must first
clarify the definition of a “Diagonal
Matrix”.
A Diagonal Matrix is an n by n Matrix
whose off-diagonal entries are all
zero.
5. In this presentation, all Diagonal
Matrices will be denoted as:
diag(d11, d22, ..., dnn)
where dnn is the entry in the n-th row and the n-th
column of the Diagonal Matrix.
6. For example, the previously given Matrix
of:
[ 5 0 0 0 ]
[ 0 4 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 9 ]
can be written in the form:
diag(5, 4, 1, 9)
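As an illustrative sketch (not part of the original slides), the diag(...) notation can be turned into a plain-Python helper that builds the full n by n matrix from its diagonal entries:

```python
def diag(*entries):
    """Build an n-by-n diagonal matrix (as a list of rows) from its
    diagonal entries; all off-diagonal entries are zero."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)]
            for i in range(n)]

D = diag(5, 4, 1, 9)
# D[0] is [5, 0, 0, 0] and D[3] is [0, 0, 0, 9]
```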
7. The Effects of a Diagonal Matrix
The Identity Matrix is an example of a
Diagonal Matrix which has the effect of
leaving a Vector unchanged
within a given System.
For example:
I x = x for every Column Vector x.
8. The Effects of a Diagonal Matrix
However, any other Diagonal Matrix will
have the effect of scaling a Vector along
the coordinate axes.
For example, the following Diagonal Matrix:
diag(2, -1, 3)
has the effect of stretching a Vector by a Scale
Factor of 2 in the x-Axis, 3 in the z-Axis and
reflecting the Vector in the y-Axis.
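The stretching and reflecting effect can be checked numerically; the matrix below is the diag(2, -1, 3) example, and the small matvec helper is illustrative rather than from the slides:

```python
def matvec(A, x):
    """Multiply a matrix A (list of rows) by a column vector x (list)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# diag(2, -1, 3): stretch by 2 in x, reflect in y, stretch by 3 in z
D = [[2,  0, 0],
     [0, -1, 0],
     [0,  0, 3]]

v = [1, 1, 1]
Dv = matvec(D, v)  # [2, -1, 3]
```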
9. The Points of View
The Square Matrix, A, may be seen as a
Linear Operator, F, defined by:
F(X) = A X
where X is a Column Vector.
10. Linear Independence
Introduction
This will be a brief section on Linear
Independence to emphasise that the
Eigenvectors of A must be Linearly
Independent for Diagonalisation to be
possible.
11. Linear Independency in n Dimensions
The Vectors v1, v2, ..., vn are classified as
a Linearly Independent set of Vectors if
the following rule applies:
The only values of the Scalars αi which
make the equation:
α1 v1 + α2 v2 + ... + αn vn = 0
true are αi = 0 for all instances of i.
12. Implications of Linear Independence
If the set of Vectors, is Linearly
Independent, then it is not possible to write
any of the Vectors in the set in terms of any
of the other Vectors within the same set.
Conversely, if a set of Vectors is Linearly
Dependent, then it is possible to write at
least one Vector in terms of at least one
other Vector.
13. Implications of Linear Independence
For example, the Vector set shown is
Linearly Dependent, as one of its Vectors can be
written as a linear combination of the others.
14. Implications of Linear Independence
Continuing the example,
we can say that this Vector set
may be considered as Linearly
Independent if the Dependent Vector were
omitted from the set.
15. Finding Linear Independency
The previous equation can be more
usefully written as the Matrix equation:
V α = 0
where the columns of V are the Vectors v1, ..., vn
and α is the Column Vector of the Scalars α1, ..., αn.
More significantly, this can be translated into a
Homogeneous System of n Linear
Equations, where n is the Dimension
of the System.
16. Finding Linear Independency
If the only solution of this Homogeneous System
is the trivial one, αi = 0 for all i, then the set of
Vectors is Linearly Independent.
If not, then the set of Vectors is Linearly
Dependent.
To find the Coefficients, we can put the
Coefficient Matrix of the System
into Reduced Echelon Form to consider
the general solutions.
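The echelon-form test above can be sketched in plain Python. This is an illustrative implementation (not from the slides): it row-reduces with exact rational arithmetic and declares the vectors independent exactly when the rank equals the number of vectors, i.e. when the homogeneous system has only the trivial solution:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via Gaussian elimination to (reduced) echelon
    form, using exact rational arithmetic to avoid rounding error."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0  # next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        A[r] = [x / A[r][c] for x in A[r]]      # normalise pivot row
        for i in range(rows):
            if i != r and A[i][c] != 0:          # clear the column
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def independent(vectors):
    """Vectors (given as rows) are linearly independent iff the rank
    equals the number of vectors."""
    return rank(vectors) == len(vectors)

# dependent: the third vector is the sum of the first two
independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]])  # False
independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True
```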
17. Summary
Thus, to conclude:
det(A - λI) is the Characteristic
Polynomial of A. This is used to find the
general set of Eigenvalues of A and, thus, its
Eigenvectors.
This is done by finding the determinant of
A - λI and solving the resultant
Polynomial equation to isolate the
Eigenvalues.
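For a 2 by 2 matrix the procedure can be carried out by hand, since det(A - λI) = λ² - tr(A)λ + det(A) is a quadratic. The sketch below (an illustration, assuming real eigenvalues) solves that quadratic directly:

```python
import math

def eigenvalues_2x2(A):
    """Solve det(A - λI) = λ² - tr(A)·λ + det(A) = 0 for a 2x2 matrix.
    Assumes the discriminant is non-negative (real eigenvalues)."""
    (a, b), (c, d) = A
    tr = a + d            # trace
    det = a * d - b * c   # determinant
    disc = tr * tr - 4 * det
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

# characteristic polynomial λ² - 4λ + 3 = (λ - 1)(λ - 3)
eigenvalues_2x2([[2, 1], [1, 2]])  # λ = 1 and λ = 3
```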