Lesson 16: The Spectral Theorem and Applications
1. Lesson 16 (S&H, Section 14.6)
The Spectral Theorem and Applications
Math 20
October 26, 2007
Announcements
Welcome parents!
Problem Set 6 on the website. Due October 31.
OH: Mondays 1–2, Tuesdays 3–4, Wednesdays 1–3 (SC 323)
Prob. Sess.: Sundays 6–7 (SC B-10), Tuesdays 1–2 (SC 116)
2. Outline
Hatsumon
Concept Review
Eigenbusiness
Diagonalization
The Spectral Theorem
The split case
The symmetric case
Iterations
Applications
Back to Fibonacci
Markov chains
3. A famous math problem
“A certain man had one pair of rabbits together in a certain enclosed place, and one wishes to know how many are created from the pair in one year when it is the nature of them in a single month to bear another pair, and in the second month those born to bear also. Because the abovewritten pair in the first month bore, you will double it; there will be two pairs in one month.”
[Portrait: Leonardo of Pisa (1170s or 1180s–1250), a/k/a Fibonacci]
9. Diagram of rabbits
[Figure: family tree of rabbit pairs, month by month]
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 5
f(5) = 8
10. An equation for the rabbits
Let f(k) be the number of pairs of rabbits in month k. Each new month we have
The same rabbits as last month
Every pair of rabbits at least one month old producing a new pair of rabbits
So
f(k) = f(k − 1) + f(k − 2)
13. Some Fibonacci numbers
 k   f(k)
 0     1
 1     1
 2     2
 3     3
 4     5
 5     8
 6    13
 7    21
 8    34
 9    55
10    89
11   144
12   233
Question
Can we find an explicit formula for f(k)?
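The table above can be reproduced directly from the recurrence; a minimal Python sketch, using the lesson's convention f(0) = f(1) = 1 (pairs of rabbits):

```python
# Compute Fibonacci numbers by the recurrence f(k) = f(k-1) + f(k-2),
# with the convention f(0) = f(1) = 1 from the lesson.
def fib(k):
    a, b = 1, 1          # f(0), f(1)
    for _ in range(k):
        a, b = b, a + b  # slide one month forward
    return a

print([fib(k) for k in range(13)])
# matches the table: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233
```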
15. Concept Review
Definition
Let A be an n × n matrix. The number λ is called an eigenvalue of A if there exists a nonzero vector x ∈ R^n such that
Ax = λx. (1)
Every nonzero vector satisfying (1) is called an eigenvector of A associated with the eigenvalue λ.
16. Diagonalization Procedure
Find the eigenvalues and eigenvectors.
Arrange the eigenvectors in a matrix P and the corresponding
eigenvalues in a diagonal matrix D.
If you have “enough” eigenvectors (that is, one for each column of A), the original matrix is diagonalizable and equal to PDP⁻¹.
Pitfalls:
Repeated eigenvalues
Nonreal eigenvalues
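The procedure can be sketched in NumPy; the matrix below is an arbitrary example with distinct real eigenvalues, chosen to avoid both pitfalls:

```python
import numpy as np

# Example matrix with distinct real eigenvalues (an assumption for this demo).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Step 1: find the eigenvalues and eigenvectors.
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors

# Step 2: arrange the eigenvalues in a diagonal matrix D.
D = np.diag(eigvals)

# Step 3: with "enough" eigenvectors, A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```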
20. Theorem (Baby Spectral Theorem)
Suppose the n × n matrix A has n distinct real eigenvalues. Then A is diagonalizable.
22. Theorem (Spectral Theorem for Symmetric Matrices)
Suppose the n × n matrix A is symmetric, that is, A = Aᵀ. Then A is diagonalizable. In fact, the eigenvectors can be chosen to be pairwise orthogonal with length one, which means that P⁻¹ = Pᵀ. Thus a symmetric matrix can be diagonalized as
A = PDPᵀ.
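In NumPy this is exactly what `numpy.linalg.eigh` produces for a symmetric matrix; a sketch with an arbitrary symmetric example:

```python
import numpy as np

# An arbitrary symmetric example matrix (A equals its transpose).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's routine for symmetric matrices; it returns
# orthonormal eigenvectors as the columns of P.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(P.T @ P, np.eye(2)))   # True: P^{-1} = P^T
print(np.allclose(A, P @ D @ P.T))       # True: A = P D P^T
```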
24. Powers of diagonalizable matrices
Remember if A is diagonalizable then
A^k = (PDP⁻¹)^k = (PDP⁻¹)(PDP⁻¹) ··· (PDP⁻¹)    (k factors)
    = PD(P⁻¹P)D(P⁻¹P) ··· D(P⁻¹P)DP⁻¹ = PD^k P⁻¹
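A quick numerical check of this identity, again with an arbitrary diagonalizable example matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                        # example diagonalizable matrix
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

k = 5
lhs = np.linalg.matrix_power(A, k)                # A^k computed directly
rhs = P @ np.diag(eigvals**k) @ np.linalg.inv(P)  # P D^k P^{-1}
print(np.allclose(lhs, rhs))                      # True
```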
26. Another way to look at it
If v is an eigenvector corresponding to eigenvalue λ, then
A^k v = λ^k v
If v₁, ..., vₙ are eigenvectors with eigenvalues λ₁, ..., λₙ, then
A^k (c₁v₁ + ··· + cₙvₙ) = c₁λ₁^k v₁ + ··· + cₙλₙ^k vₙ
If A is diagonalizable, there are n linearly independent eigenvectors, so any v can be written as a linear combination of them.
35. Setting up the Fibonacci sequence
Recall the Fibonacci sequence defined by
f(k + 2) = f(k) + f(k + 1),    f(0) = 1, f(1) = 1
Let's let g(k) = f(k + 1). Then
g(k + 1) = f(k + 2) = f(k) + f(k + 1) = f(k) + g(k).
So if y(k) = (f(k), g(k))ᵀ, we have
y(k + 1) = (f(k + 1), g(k + 1))ᵀ = (g(k), f(k) + g(k))ᵀ = [0 1; 1 1] y(k)
So if A is this matrix, then
y(k) = A^k y(0).
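This matrix form can be computed directly; a minimal NumPy sketch, with y(0) = (1, 1)ᵀ taken from the initial conditions:

```python
import numpy as np

A = np.array([[0, 1],
              [1, 1]])
y0 = np.array([1, 1])    # y(0) = (f(0), g(0)) = (1, 1)

def y(k):
    # y(k) = A^k y(0)
    return np.linalg.matrix_power(A, k) @ y0

print([int(y(k)[0]) for k in range(13)])
# first components give f(k): 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233
```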
43. Diagonalize
The eigenvalues of A = [0 1; 1 1] are found by solving
0 = det [−λ 1; 1 1−λ] = (−λ)(1 − λ) − 1 = λ² − λ − 1
The roots are
φ = (1 + √5)/2,    φ̄ = (1 − √5)/2
Notice that
φ + φ̄ = 1,    φφ̄ = −1
(These facts make later calculations simpler.)
46. Eigenvectors
We row reduce to find the eigenvectors:
A − φI = [−φ 1; 1 1−φ] = [−φ 1; 1 φ̄] → [−φ 1; 0 0]
(here 1 − φ = φ̄ and 1/φ = −φ̄, so adding 1/φ times the first row to the second clears it).
So (1, φ)ᵀ is an eigenvector for A corresponding to the eigenvalue φ.
Similarly, (1, φ̄)ᵀ is an eigenvector for A corresponding to the eigenvalue φ̄. So now we know that
y(k) = c₁ φ^k (1, φ)ᵀ + c₂ φ̄^k (1, φ̄)ᵀ
51. Finally
Putting this all together we have
y(k) = (φ/√5) φ^k (1, φ)ᵀ − (φ̄/√5) φ̄^k (1, φ̄)ᵀ
so
(f(k), g(k))ᵀ = (1/√5) (φ^(k+1) − φ̄^(k+1), φ^(k+2) − φ̄^(k+2))ᵀ
So
f(k) = (1/√5) [ ((1 + √5)/2)^(k+1) − ((1 − √5)/2)^(k+1) ]
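The explicit formula can be checked numerically against the earlier table; a small Python sketch (the rounding step absorbs floating-point error and is an implementation detail, not part of the formula):

```python
from math import sqrt

phi    = (1 + sqrt(5)) / 2   # the golden ratio
phibar = (1 - sqrt(5)) / 2

def f(k):
    # the explicit formula f(k) = (phi^(k+1) - phibar^(k+1)) / sqrt(5),
    # rounded to the nearest integer to absorb floating-point error
    return round((phi**(k + 1) - phibar**(k + 1)) / sqrt(5))

print([f(k) for k in range(13)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```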
53. Markov Chains
Recall the setup: T is a transition matrix giving the probabilities of switching from any state to any of the other states.
We seek a steady-state vector, i.e., a probability vector u such that Tu = u.
This is nothing more than an eigenvector of eigenvalue 1!
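As a sketch, NumPy can find such a u for an example two-state chain; the specific matrix and the column-stochastic convention (columns sum to 1, T acts on column probability vectors) are assumptions here:

```python
import numpy as np

# Example transition matrix; columns sum to 1 (column-stochastic convention).
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigvals, V = np.linalg.eig(T)
i = np.argmin(np.abs(eigvals - 1))    # locate the eigenvalue 1
u = V[:, i] / V[:, i].sum()           # rescale to a probability vector

print(np.allclose(T @ u, u))          # True: Tu = u
print(u)                              # the steady state
```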
56. Theorem
If T is a regular doubly-stochastic matrix, then
1 is an eigenvalue of T, and
all other eigenvalues of T have absolute value less than 1.
57. Let u be an eigenvector of eigenvalue 1, scaled so it's a probability vector. Let v₂, ..., vₙ be eigenvectors corresponding to the other eigenvalues λ₂, ..., λₙ. Then for any initial state x(0), we have
x(k) = T^k x(0) = T^k (c₁u + c₂v₂ + ··· + cₙvₙ)
     = c₁u + c₂λ₂^k v₂ + ··· + cₙλₙ^k vₙ
Since the other eigenvalues satisfy |λᵢ| < 1,
x(k) → c₁u
Since each x(k) is a probability vector, c₁ = 1. Hence
x(k) → u
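This convergence is easy to see numerically; a sketch iterating x(k) = T^k x(0) for the same kind of example chain as above (matrix and starting vector are arbitrary assumptions):

```python
import numpy as np

T = np.array([[0.9, 0.2],    # example regular transition matrix
              [0.1, 0.8]])   # (columns sum to 1)
x = np.array([1.0, 0.0])     # some initial probability vector x(0)

for _ in range(100):         # compute x(k) = T^k x(0) step by step
    x = T @ x

print(x)                     # close to the steady-state vector u
```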