JORDAN DECOMPOSITION VIA THE Z-TRANSFORM
ASIA AGE, THOMAS CREDEUR, TRAVIS DIRLE, AND LAFANIQUE REED
Abstract. Since every matrix has a Jordan canonical form, so does every power of a matrix. Using the Z-transform, this article derives the unique form of such a matrix power and the properties that pertain to it.
1. Introduction
In 1986, Schmidt wrote a paper [S86] on canonical forms of complex-valued matrices. In it, he proved the Jordan decomposition of a matrix using the Laplace transform. His method involves only elementary calculus and an in-depth study of the matrix equation $e^{A(s+t)} = e^{As}e^{At}$, in contrast to the more advanced machinery used in other known proofs. In this paper, we prove a similar result, stated below and proven in Theorem 5.3, using only the Z-transform and a careful study of the matrix equation $A^{k+\ell} = A^{\ell}A^{k}$.
Theorem 1.1. Let $A$ be an $n \times n$ matrix with characteristic polynomial $c_A(z) = (z-\lambda_1)^{m_1} \cdots (z-\lambda_r)^{m_r}$. Then, there exists a unique decomposition
\[ A^k = \sum_{i=1}^{r} \Big( \lambda_i^k P_i + \sum_{q=1}^{m_i-1} N_i^q\, \varphi_{q,\lambda_i}(k) \Big) \]
with the following properties
(1) $P_iN_i = N_iP_i = N_i$
(2) $P_iP_j = 0$ if $i \neq j$, and $P_iP_i = P_i$
(3) $N_i^{m_i} = 0$
(4) $P_iN_j = N_jP_i = 0$ if $i \neq j$
(5) $A = \sum_{i=1}^{r} (\lambda_i P_i + N_i)$
(6) $I = \sum_{i=1}^{r} P_i$.
Properties (2), (3), and (5) together state that any matrix $A$ can be written in a simple way as a combination of projections and nilpotents. This leads us to the following definitions and an example illustrating the idea.
Definition 1.2. A projection is any square matrix $P$ such that $P^2 = P$.
Definition 1.3. A nilpotent matrix is a nonzero square matrix $N$ such that $N^k = 0$ for some positive integer $k$.
Example 1.4. Consider the matrix
\[ A = \begin{pmatrix} 1 & 4 \\ -1 & -3 \end{pmatrix}. \]
Notice that we have the following decomposition:
\[ A = (-1)\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix}, \]
a combination of the projection $I$ (scaled by the eigenvalue $-1$) and a nilpotent matrix; this matrix is revisited in detail in Example 4.2.
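The decomposition in Example 1.4 can be checked numerically. The following NumPy sketch verifies that the first summand (divided by its eigenvalue) is a projection, the second is nilpotent, and their combination recovers $A$:

```python
import numpy as np

# The matrix of Example 1.4 and its decomposition A = (-1)*P + N,
# where P = I is a projection and N = A + I turns out to be nilpotent.
A = np.array([[1, 4], [-1, -3]])
P = np.eye(2)
N = A + np.eye(2)

assert np.allclose(P @ P, P)   # P^2 = P: P is a projection
assert np.allclose(N @ N, 0)   # N^2 = 0: N is nilpotent
assert np.allclose(A, -P + N)  # the decomposition itself
print("Example 1.4 decomposition verified")
```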
Let us build some of the needed prerequisites.
2. The Z-Transform
The Z-transform is a useful operator that often arises as a method for solving difference equations, but we will need only a few of its basic properties. Here is its definition.
Definition 2.1. Let $y(k)$ be a sequence of complex numbers. We define the Z-transform of $y$ to be the function $Z\{y\}(z)$, where $z$ is a complex variable, given by the following formula:
\[ Z\{y\}(z) = \sum_{k=0}^{\infty} \frac{y(k)}{z^k}. \]
There are certain functions to which we are most interested in applying the Z-transform. These $\varphi$ functions will come up in all that follows. Before we define them, we need to understand the falling factorial.
Definition 2.2. Let $n$ be a nonnegative integer. The falling factorial is the sequence $k^{\underline{n}}$, with $k = 0, 1, 2, \ldots$, given by the following formula:
\[ k^{\underline{n}} = k(k-1)(k-2)\cdots(k-n+1). \]
If $k$ were allowed to be a real variable, then $k^{\underline{n}}$ could be characterized as the unique monic polynomial of degree $n$ that vanishes at $0, 1, \ldots, n-1$.
Remark 2.3. We can recover the familiar factorial from the falling factorial: notice that $k^{\underline{n}}\big|_{k=n} = n!$.
Let us take a look at an example to illustrate how falling factorials work.
Example 2.4. We will compute $6^{\underline{4}}$. Using the definition above, we see that
\[ 6^{\underline{4}} = 6(6-1)(6-2)(6-3) = 6 \cdot 5 \cdot 4 \cdot 3 = 360. \]
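The falling factorial is a one-line loop in code; this small sketch reproduces Example 2.4 and Remark 2.3:

```python
def falling_factorial(k, n):
    """Compute k(k-1)...(k-n+1); the empty product (n = 0) is 1."""
    result = 1
    for i in range(n):
        result *= k - i
    return result

print(falling_factorial(6, 4))  # 360, as in Example 2.4
print(falling_factorial(4, 4))  # 24 = 4!, illustrating Remark 2.3
```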
Now that we have defined the falling factorial, we can give the definition of the $\varphi$ functions.
Definition 2.5. Let $a$ be a complex number and $n$ be a nonnegative integer. The $\varphi$ functions are the sequences defined by
\[ \varphi_{n,a}(k) = \begin{cases} a^{k-n}\, \dfrac{k^{\underline{n}}}{n!} & a \neq 0 \\[4pt] \delta_n(k) & a = 0, \end{cases} \]
where $\delta_n(k)$ is the sequence which is $0$ for all $k \neq n$ and $\delta_n(n) = 1$.
Since these functions are so important to everything that follows, we compute a few examples of them below.
Example 2.6.
\[ \varphi_{0,2}(k) = 2^k = (1, 2, 4, 8, 16, \ldots) \]
\[ \varphi_{1,2}(k) = 2^{k-1}k = (0, 1, 4, 12, 32, \ldots) \]
\[ \varphi_{2,0}(k) = \delta_2(k) = (0, 0, 1, 0, 0, 0, 0, \ldots) \]
It turns out that the Z-transform of the $\varphi$ functions is particularly useful for our purposes. This result is stated below (see Proposition 5 of [T12] for the proof).
Proposition 2.7. Let $a \in \mathbb{C}$ and $n \in \mathbb{N}$. Then,
\[ Z\{\varphi_{n,a}(k)\}(z) = \frac{z}{(z-a)^{n+1}}. \]
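Proposition 2.7 can be spot-checked numerically for $|z| > |a|$, where the defining series converges; the sample values of $a$, $n$, and $z$ below are arbitrary choices:

```python
import math

def phi(n, a, k):
    """phi_{n,a}(k) = a^(k-n) * k(k-1)...(k-n+1) / n!  (Definition 2.5, a != 0)."""
    ff = 1
    for i in range(n):
        ff *= k - i
    return a ** (k - n) * ff / math.factorial(n)

# Truncated series sum_{k>=0} phi_{n,a}(k)/z^k versus the closed form z/(z-a)^(n+1).
a, n, z = 2.0, 1, 5.0
series = sum(phi(n, a, k) / z ** k for k in range(200))
closed_form = z / (z - a) ** (n + 1)
print(abs(series - closed_form) < 1e-9)  # True
```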
3. Properties of the Z-Transform
Suppose $a$ is a nonzero complex number, $n \in \mathbb{N} = \{0, 1, 2, \ldots\}$, and $y(k)$ is a sequence whose Z-transform $Y(z) = Z\{y\}(z)$ exists. Then
(1) $Z\{a^k\}(z) = \dfrac{z}{z-a}$
(2) $Z\{a^k y(k)\}(z) = Y\!\left(\dfrac{z}{a}\right)$
(3) $Z\{y(k+n)\}(z) = z^n Y(z) - \displaystyle\sum_{m=0}^{n-1} y(m) z^{n-m}$ (translation principle)
(4) $Z\{(k+n-1)^{\underline{n}}\, y(k)\}(z) = (-1)^n z^n D^n Y(z)$, where $D = d/dz$
(5) $Z\{k^{\underline{n}}\}(z) = \dfrac{n!\, z}{(z-1)^{n+1}}$
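Properties (1) and (5) can be spot-checked by truncating the defining series; the sample values of $a$ and $z$ below are arbitrary choices inside the region of convergence:

```python
import math

def z_transform(y, z, terms=300):
    """Truncated Z-transform: sum_{k=0}^{terms-1} y(k)/z^k (Definition 2.1)."""
    return sum(y(k) / z ** k for k in range(terms))

a, z = 3.0, 7.0
lhs1 = z_transform(lambda k: a ** k, z)
print(abs(lhs1 - z / (z - a)) < 1e-9)  # True: property (1)

n = 2
lhs5 = z_transform(lambda k: k * (k - 1), z)   # k^(2-underline) = k(k-1)
print(abs(lhs5 - math.factorial(n) * z / (z - 1) ** (n + 1)) < 1e-9)  # True: property (5)
```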
Proposition 3.1. Let $A$ be an $n \times n$ matrix with complex entries. Let $c_A(z) = \det(zI - A)$ be the characteristic polynomial. Assume $a_1, \ldots, a_R$ are its distinct roots with corresponding multiplicities $M_1, \ldots, M_R$. Then for each $r$, $1 \le r \le R$, and $m$, $0 \le m \le M_r - 1$, there are $n \times n$ matrices $B_{r,m}$ such that
\[ A^k = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} B_{r,m}\, \varphi_{m,a_r}(k). \]
Proof. Our assumptions imply that we can factor $c_A$ in the following way:
\[ c_A(z) = \prod_{r=1}^{R} (z - a_r)^{M_r}. \]
The $(i,j)$ entry of $(zI - A)^{-1}$ is of the form $\dfrac{p_{i,j}(z)}{c_A(z)}$, where $p_{i,j}(z)$ is some polynomial with degree less than $n$. Using partial fractions, we can write
\[ \frac{p_{i,j}(z)}{c_A(z)} = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} \frac{b_{r,m}(i,j)}{(z - a_r)^{m+1}} \]
where $b_{r,m}(i,j) \in \mathbb{C}$. It follows then that
\[ z(zI - A)^{-1} = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} \frac{z B_{r,m}}{(z - a_r)^{m+1}} \]
where $B_{r,m}$ is the $n \times n$ matrix whose $(i,j)$ entry is $b_{r,m}(i,j)$ for each pair $(r,m)$. By Corollary 8 of [T12] and Proposition 2.7, we get
\[ A^k = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} B_{r,m}\, \varphi_{m,a_r}(k). \qquad \square \]
4. Examples
Example 4.1. Let
\[ A = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix}, \]
where, by Proposition 3.1,
\[ A^k = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} B_{r,m}\, \varphi_{m,a_r}(k) \quad \text{and} \quad \varphi_{m,a_r}(k) = a_r^{k-m}\, \frac{k^{\underline{m}}}{m!}. \]
(Recall that $k^{\underline{m}} = k(k-1)(k-2)\cdots(k-m+1)$ is the falling factorial.) We must first find the characteristic polynomial $c_A(z) = \det(zI - A)$:
\[ c_A(z) = \det \begin{pmatrix} z-3 & -2 \\ 1 & z \end{pmatrix} = z^2 - 3z + 2 = (z-2)(z-1). \]
From the characteristic polynomial, we can read off the eigenvalues $a_r$ and the multiplicities $M_r$. In this example there are two eigenvalues, $a_1 = 2$ and $a_2 = 1$, with multiplicities $M_1 = 1$ and $M_2 = 1$, so the $A^k$ equation becomes
\[ A^k = \sum_{r=1}^{2} \sum_{m=0}^{0} B_{r,m}\, \varphi_{m,a_r}(k) = B_{1,0}\, \varphi_{0,2}(k) + B_{2,0}\, \varphi_{0,1}(k). \]
Let us define $M = B_{1,0}$ and $N = B_{2,0}$ for simplicity, so the equation now becomes
\[ A^k = M\varphi_{0,2}(k) + N\varphi_{0,1}(k). \]
Similarly,
\[ A^{\ell} = \sum_{r=1}^{2} \sum_{m=0}^{0} B_{r,m}\, \varphi_{m,a_r}(\ell) = M\varphi_{0,2}(\ell) + N\varphi_{0,1}(\ell) \]
and
\[ A^{k+\ell} = \sum_{r=1}^{2} \sum_{m=0}^{0} B_{r,m}\, \varphi_{m,a_r}(k+\ell) = M\varphi_{0,2}(k+\ell) + N\varphi_{0,1}(k+\ell). \]
Now we can substitute these into the equation $A^k A^{\ell} = A^{k+\ell}$ to get:
\[ M\varphi_{0,2}(k+\ell) + N\varphi_{0,1}(k+\ell) = \big[ M\varphi_{0,2}(k) + N\varphi_{0,1}(k) \big]\big[ M\varphi_{0,2}(\ell) + N\varphi_{0,1}(\ell) \big]. \]
After expanding the right side, the equation becomes:
\[ M\varphi_{0,2}(k+\ell) + N\varphi_{0,1}(k+\ell) = M^2\varphi_{0,2}(k)\varphi_{0,2}(\ell) + MN\varphi_{0,2}(k)\varphi_{0,1}(\ell) + NM\varphi_{0,1}(k)\varphi_{0,2}(\ell) + N^2\varphi_{0,1}(k)\varphi_{0,1}(\ell). \]
Since $\varphi_{0,2}(k+\ell) = \varphi_{0,2}(k)\varphi_{0,2}(\ell)$ and $\varphi_{0,1}(k+\ell) = \varphi_{0,1}(k)\varphi_{0,1}(\ell)$, grouping terms gives
\[ M\varphi_{0,2}(k)\varphi_{0,2}(\ell) + N\varphi_{0,1}(k)\varphi_{0,1}(\ell) = \big( M^2\varphi_{0,2}(k) + NM\varphi_{0,1}(k) \big)\varphi_{0,2}(\ell) + \big( MN\varphi_{0,2}(k) + N^2\varphi_{0,1}(k) \big)\varphi_{0,1}(\ell). \]
Note that the Z-transform is one-to-one and linear, and the $\varphi$ functions are linearly independent (Theorem 5.1), so we may match the coefficients of $\varphi_{0,2}(\ell)$ and $\varphi_{0,1}(\ell)$:
\[ M\varphi_{0,2}(k) = M^2\varphi_{0,2}(k) + NM\varphi_{0,1}(k), \qquad N\varphi_{0,1}(k) = MN\varphi_{0,2}(k) + N^2\varphi_{0,1}(k). \]
Linear independence is applied a second time, in $k$, thus
\[ M^2 = M, \qquad N^2 = N, \qquad NM = 0, \qquad \text{and} \qquad MN = 0: \]
$M$ and $N$ are projections.
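For this example the coefficient matrices can be computed explicitly: the equations $A^0 = M + N$ and $A^1 = 2M + N$ give $M = A - I$ and $N = 2I - A$ (a derivation made here for the check, not stated in the text). A NumPy sketch then confirms all the relations:

```python
import numpy as np

A = np.array([[3, 2], [-1, 0]], dtype=float)
M = A - np.eye(2)        # coefficient of phi_{0,2}(k) = 2^k
N = 2 * np.eye(2) - A    # coefficient of phi_{0,1}(k) = 1^k

assert np.allclose(M @ M, M) and np.allclose(N @ N, N)   # both projections
assert np.allclose(M @ N, 0) and np.allclose(N @ M, 0)   # MN = NM = 0
for k in range(6):       # A^k = 2^k M + 1^k N
    assert np.allclose(np.linalg.matrix_power(A, k), 2 ** k * M + N)
print("Example 4.1 relations verified")
```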
Example 4.2. Let
\[ A = \begin{pmatrix} 1 & 4 \\ -1 & -3 \end{pmatrix}, \]
where
\[ A^k = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} B_{r,m}\, \varphi_{m,a_r}(k) \quad \text{and} \quad \varphi_{m,a_r}(k) = a_r^{k-m}\, \frac{k^{\underline{m}}}{m!}. \]
We must first find the characteristic polynomial $c_A(z) = \det(zI - A)$:
\[ c_A(z) = \det \begin{pmatrix} z-1 & -4 \\ 1 & z+3 \end{pmatrix} = (z-1)(z+3) - (-4) = z^2 + 2z + 1 = (z+1)^2. \]
From the characteristic polynomial, we can read off the eigenvalues $a_r$ and the multiplicities $M_r$. In this example there is only one eigenvalue, $a_1 = -1$, and its multiplicity is $M_1 = 2$, so the $A^k$ equation becomes
\[ A^k = \sum_{r=1}^{1} \sum_{m=0}^{1} B_{r,m}\, \varphi_{m,a_r}(k) = B_{1,0}\, \varphi_{0,-1}(k) + B_{1,1}\, \varphi_{1,-1}(k). \]
Let us define $M = B_{1,0}$ and $N = B_{1,1}$ for simplicity, so the equation then becomes
\[ A^k = M\varphi_{0,-1}(k) + N\varphi_{1,-1}(k). \]
Similarly
\[ A^{\ell} = \sum_{r=1}^{1} \sum_{m=0}^{1} B_{r,m}\, \varphi_{m,a_r}(\ell) = B_{1,0}\, \varphi_{0,-1}(\ell) + B_{1,1}\, \varphi_{1,-1}(\ell) = M\varphi_{0,-1}(\ell) + N\varphi_{1,-1}(\ell) \]
and
\[ A^{k+\ell} = \sum_{r=1}^{1} \sum_{m=0}^{1} B_{r,m}\, \varphi_{m,a_r}(k+\ell) = M\varphi_{0,-1}(k+\ell) + N\varphi_{1,-1}(k+\ell). \]
Now we can substitute these into the equation $A^k A^{\ell} = A^{k+\ell}$ to get:
\[ \big[ M\varphi_{0,-1}(k) + N\varphi_{1,-1}(k) \big]\big[ M\varphi_{0,-1}(\ell) + N\varphi_{1,-1}(\ell) \big] = M\varphi_{0,-1}(k+\ell) + N\varphi_{1,-1}(k+\ell). \]
After expanding the left side of the equation we get:
\[ \big[ M\varphi_{0,-1}(k) + N\varphi_{1,-1}(k) \big]\big[ M\varphi_{0,-1}(\ell) + N\varphi_{1,-1}(\ell) \big] = M^2\varphi_{0,-1}(k)\varphi_{0,-1}(\ell) + MN\varphi_{0,-1}(k)\varphi_{1,-1}(\ell) + NM\varphi_{1,-1}(k)\varphi_{0,-1}(\ell) + N^2\varphi_{1,-1}(k)\varphi_{1,-1}(\ell). \]
Since
\[ \varphi_{m,a_r}(k) = a_r^{k-m}\, \frac{k^{\underline{m}}}{m!}, \]
the left side becomes the following expression:
\[ M^2(-1)^k(-1)^{\ell} + MN\,\ell\,(-1)^k(-1)^{\ell-1} + NM\,k\,(-1)^{k-1}(-1)^{\ell} + N^2\,k\ell\,(-1)^{k-1}(-1)^{\ell-1} \]
\[ = M^2(-1)^{k+\ell} + MN\,\ell\,(-1)^{k+\ell-1} + NM\,k\,(-1)^{k+\ell-1} + N^2\,k\ell\,(-1)^{k+\ell-2}. \]
Therefore $A^k A^{\ell} = A^{k+\ell}$ implies
\[ M\varphi_{0,-1}(k+\ell) + N\varphi_{1,-1}(k+\ell) = M^2(-1)^{k+\ell} + MN\,\ell\,(-1)^{k+\ell-1} + NM\,k\,(-1)^{k+\ell-1} + N^2\,k\ell\,(-1)^{k+\ell-2}. \]
Writing the left side as $M(-1)^{k+\ell} + N(k+\ell)(-1)^{k+\ell-1}$ and recalling that the set $\{\varphi_{m,a_r}(k)\}$ is a linearly independent set of functions, we compare the coefficients of $(-1)^{k+\ell}$, $\ell(-1)^{k+\ell-1}$, $k(-1)^{k+\ell-1}$, and $k\ell(-1)^{k+\ell-2}$ to conclude that
\[ M^2 = M, \qquad MN = NM = N, \qquad N^2 = 0. \]
Hence, $M$ is a projection and $N$ is nilpotent.
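Again the coefficient matrices can be computed explicitly: $A^0 = M$ and $A^1 = -M + N$ give $M = I$ and $N = A + I$ (a derivation made here for the check, not stated in the text). The relations and the formula for $A^k$ can then be confirmed with NumPy:

```python
import numpy as np

A = np.array([[1, 4], [-1, -3]], dtype=float)
M = np.eye(2)       # projection part (the identity, since -1 is the only eigenvalue)
N = A + np.eye(2)   # nilpotent part

assert np.allclose(M @ M, M)                             # M^2 = M
assert np.allclose(N @ N, 0)                             # N^2 = 0
assert np.allclose(M @ N, N) and np.allclose(N @ M, N)   # MN = NM = N
for k in range(1, 7):   # A^k = (-1)^k M + k (-1)^(k-1) N
    expected = (-1) ** k * M + k * (-1) ** (k - 1) * N
    assert np.allclose(np.linalg.matrix_power(A, k), expected)
print("Example 4.2 relations verified")
```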
Example 4.3. Now let $A$ be the $3 \times 3$ matrix
\[ A = \begin{pmatrix} 10 & -2 & 9 \\ 12 & -4 & 9 \\ -8 & 4 & -4 \end{pmatrix}. \]
Like the previous examples, we must find the characteristic polynomial for $A$ first:
\[ c_A(z) = \det \begin{pmatrix} z-10 & 2 & -9 \\ -12 & z+4 & -9 \\ 8 & -4 & z+4 \end{pmatrix} = z^3 - 2z^2 - 4z + 8 = (z-2)^2(z+2). \]
We know
\[ A^k = \sum_{r=1}^{R} \sum_{m=0}^{M_r-1} B_{r,m}\, \varphi_{m,a_r}(k), \]
where the eigenvalues are $a_1 = 2$ and $a_2 = -2$ and their multiplicities $M_r$ are $M_1 = 2$ and $M_2 = 1$. So $A^k$ becomes:
\[ A^k = B_{1,0}\, \varphi_{0,2}(k) + B_{1,1}\, \varphi_{1,2}(k) + B_{2,0}\, \varphi_{0,-2}(k). \]
We will say $M = B_{1,0}$, $N = B_{1,1}$, and $P = B_{2,0}$ for simplicity, so the equation becomes
\[ A^k = M\varphi_{0,2}(k) + N\varphi_{1,2}(k) + P\varphi_{0,-2}(k). \]
Similarly,
\[ A^{\ell} = B_{1,0}\, \varphi_{0,2}(\ell) + B_{1,1}\, \varphi_{1,2}(\ell) + B_{2,0}\, \varphi_{0,-2}(\ell) = M\varphi_{0,2}(\ell) + N\varphi_{1,2}(\ell) + P\varphi_{0,-2}(\ell) \]
and
\[ A^{k+\ell} = B_{1,0}\, \varphi_{0,2}(k+\ell) + B_{1,1}\, \varphi_{1,2}(k+\ell) + B_{2,0}\, \varphi_{0,-2}(k+\ell) = M\varphi_{0,2}(k+\ell) + N\varphi_{1,2}(k+\ell) + P\varphi_{0,-2}(k+\ell). \]
We can now substitute the three equations into the original equation $A^k A^{\ell} = A^{k+\ell}$ to get:
\[ \big[ M\varphi_{0,2}(k) + N\varphi_{1,2}(k) + P\varphi_{0,-2}(k) \big]\big[ M\varphi_{0,2}(\ell) + N\varphi_{1,2}(\ell) + P\varphi_{0,-2}(\ell) \big] = M\varphi_{0,2}(k+\ell) + N\varphi_{1,2}(k+\ell) + P\varphi_{0,-2}(k+\ell). \]
After expanding the left side of the equation we get:
\[ M\varphi_{0,2}(k+\ell) + N\varphi_{1,2}(k+\ell) + P\varphi_{0,-2}(k+\ell) = M^2\varphi_{0,2}(k)\varphi_{0,2}(\ell) + MN\varphi_{0,2}(k)\varphi_{1,2}(\ell) + MP\varphi_{0,2}(k)\varphi_{0,-2}(\ell) + NM\varphi_{1,2}(k)\varphi_{0,2}(\ell) + N^2\varphi_{1,2}(k)\varphi_{1,2}(\ell) + NP\varphi_{1,2}(k)\varphi_{0,-2}(\ell) + PM\varphi_{0,-2}(k)\varphi_{0,2}(\ell) + PN\varphi_{0,-2}(k)\varphi_{1,2}(\ell) + P^2\varphi_{0,-2}(k)\varphi_{0,-2}(\ell). \]
Since we know
\[ \varphi_{m,a_r}(k) = a_r^{k-m}\, \frac{k^{\underline{m}}}{m!}, \]
the equation becomes
\[ M\,2^{k+\ell} + N(k+\ell)2^{k+\ell-1} + P(-2)^{k+\ell} = M^2\,2^{k+\ell} + MN\,\ell\,2^{k+\ell-1} + MP\,2^k(-2)^{\ell} + NM\,k\,2^{k+\ell-1} + N^2\,k\ell\,2^{k+\ell-2} + NP\,k\,2^{k-1}(-2)^{\ell} + PM(-2)^k\,2^{\ell} + PN\,\ell\,(-2)^k\,2^{\ell-1} + P^2(-2)^{k+\ell}. \]
By linear independence of the $\varphi_{m,a_r}$ we can conclude
\[ M^2 = M, \qquad P^2 = P, \qquad N^2 = 0, \]
\[ MP = PM = 0, \qquad MN = NM = N, \qquad PN = NP = 0. \]
Therefore, $M$ and $P$ are both projections and $N$ is nilpotent.
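These relations can be confirmed with NumPy. In the sketch below, $P$ is computed as the spectral projection for the eigenvalue $-2$ via $P = (A-2I)^2/16$, with $M = I - P$ and $N = (A-2I)M$; these formulas, and the sign pattern taken for $A$ (chosen to be consistent with the stated characteristic polynomial), are assumptions made for this check rather than statements from the text:

```python
import numpy as np

A = np.array([[10, -2, 9], [12, -4, 9], [-8, 4, -4]], dtype=float)
P = np.linalg.matrix_power(A - 2 * np.eye(3), 2) / 16  # projection for eigenvalue -2
M = np.eye(3) - P                                      # projection for eigenvalue 2
N = (A - 2 * np.eye(3)) @ M                            # nilpotent part

assert np.allclose(M @ M, M) and np.allclose(P @ P, P)   # projections
assert np.allclose(N @ N, 0)                             # N^2 = 0
assert np.allclose(M @ P, 0) and np.allclose(P @ M, 0)   # MP = PM = 0
assert np.allclose(M @ N, N) and np.allclose(N @ M, N)   # MN = NM = N
assert np.allclose(P @ N, 0) and np.allclose(N @ P, 0)   # PN = NP = 0
for k in range(1, 6):   # A^k = 2^k M + k 2^(k-1) N + (-2)^k P
    expected = 2 ** k * M + k * 2 ** (k - 1) * N + (-2) ** k * P
    assert np.allclose(np.linalg.matrix_power(A, k), expected)
print("Example 4.3 relations verified")
```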
5. Linear Independence
Theorem 5.1. Let $A \in M_n(\mathbb{C})$, and let the characteristic polynomial of $A$ be
\[ c_A(s) = (s-\lambda_1)^{m_1}(s-\lambda_2)^{m_2}\cdots(s-\lambda_n)^{m_n}. \]
Then the set
\[ B = \{ \varphi_{0,\lambda_1}, \varphi_{1,\lambda_1}, \ldots, \varphi_{m_1-1,\lambda_1}, \ldots, \varphi_{m_n-1,\lambda_n} \} \]
is linearly independent.
Proof. Because the Z-transform is an injective linear transformation, it preserves linear independence. Therefore, if $Z(B)$ is linearly independent, so is $B$. By Proposition 2.7,
\[ Z(B) = \left\{ \frac{z}{z-\lambda_1}, \frac{z}{(z-\lambda_1)^2}, \ldots, \frac{z}{(z-\lambda_1)^{m_1}}, \ldots, \frac{z}{(z-\lambda_n)^{m_n}} \right\}. \]
Let $c_1, c_2, \ldots, c_l$ be constants such that
\[ \frac{c_1 z}{z-\lambda_1} + \frac{c_2 z}{(z-\lambda_1)^2} + \cdots + \frac{c_{m_1} z}{(z-\lambda_1)^{m_1}} + \cdots + \frac{c_l z}{(z-\lambda_n)^{m_n}} = 0. \]
By making common denominators over each eigenvalue, rewrite this as
\[ \frac{z \cdot p_1(z)}{(z-\lambda_1)^{m_1}} + \cdots + \frac{z \cdot p_n(z)}{(z-\lambda_n)^{m_n}} = 0. \]
Then take the limit as $z$ approaches $\lambda_1$. Since the sum above is identically zero, the limit of each side must be zero. This means, in particular, that
\[ \lim_{z \to \lambda_1} \frac{z \cdot p_1(z)}{(z-\lambda_1)^{m_1}} \]
is finite, because the limit as $z$ approaches $\lambda_1$ is finite for all other terms in the sum, and the terms must add up to zero. A polynomial of degree less than $m_1$ can make this limit finite only by vanishing identically, so $p_1(z) = 0$. Since
\[ p_1(z) = c_1(z-\lambda_1)^{m_1-1} + c_2(z-\lambda_1)^{m_1-2} + \cdots + c_{m_1}, \]
and $c_1$ is the only coefficient multiplying $z^{m_1-1}$, we get $c_1 = 0$; peeling off leading coefficients in turn gives $c_2 = \cdots = c_{m_1} = 0$. Reapply this logic at each eigenvalue to conclude that $c_i = 0$ for all $1 \le i \le l$. Therefore $Z(B)$ is linearly independent, which implies that $B$ is linearly independent. $\square$
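Theorem 5.1 can be illustrated numerically: sampling each $\varphi$ in a family at finitely many points and checking the rank of the resulting matrix exposes any linear relation. The family and sampling range below (those of Example 4.3) are arbitrary choices for the illustration:

```python
import numpy as np
from math import factorial

def phi(m, a, k):
    """phi_{m,a}(k) from Definition 2.5 (case a != 0)."""
    ff = 1
    for i in range(m):
        ff *= k - i
    return a ** (k - m) * ff / factorial(m)

# The family from Example 4.3: phi_{0,2}, phi_{1,2}, phi_{0,-2},
# each sampled at k = 0..9; full rank means linear independence.
family = [(0, 2), (1, 2), (0, -2)]
samples = np.array([[phi(m, a, k) for k in range(10)] for (m, a) in family])
print(np.linalg.matrix_rank(samples))  # 3
```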
Theorem 5.2. Let $x, y \in \mathbb{Z}$, $n \in \mathbb{N}$. Then
\[ (x+y)^{\underline{n}} = \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k}}\, y^{\underline{n-k}}. \tag{5.1} \]
Proof. This proof relies on the fact that $x^{\underline{k+1}} = x \cdot (x-1)^{\underline{k}}$. We will do an inductive argument on $n$. When $n = 0$, this equation reduces to $1 = 1$, which is true. Therefore, assume there exists an $n \in \mathbb{N}$ such that (5.1) holds, and show that this implies (5.1) holds for $n+1$. Because we can just take out the last term of the sum,
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = \sum_{k=0}^{n} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} + x^{\underline{n+1}}. \tag{5.2} \]
Next, we will use Pascal's rule (which can be found in [KP01] or most any treatise on combinatorics), which states that
\[ \binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}. \]
Apply this to the right-hand side of (5.2) to get
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = \sum_{k=0}^{n} \left( \binom{n}{k-1} + \binom{n}{k} \right) x^{\underline{k}}\, y^{\underline{n+1-k}} + x^{\underline{n+1}}. \tag{5.3} \]
This is the same as writing
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = \sum_{k=0}^{n} \binom{n}{k-1} x^{\underline{k}}\, y^{\underline{n+1-k}} + \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} + x^{\underline{n+1}}. \tag{5.4} \]
But the $k = 0$ term of the first sum on the right-hand side of (5.4) is zero, so we can shift our index and keep the same value. This gives us
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = \sum_{k=0}^{n-1} \binom{n}{k} x^{\underline{k+1}}\, y^{\underline{n-k}} + \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} + x^{\underline{n+1}}. \tag{5.5} \]
Add the term that was split off, $x^{\underline{n+1}} = \binom{n}{n} x^{\underline{n+1}}\, y^{\underline{0}}$, back into the first sum to get
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k+1}}\, y^{\underline{n-k}} + \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k}}\, y^{\underline{n+1-k}}. \tag{5.6} \]
Now we can use the fact that was mentioned at the start of the proof, namely that $x^{\underline{k+1}} = x \cdot (x-1)^{\underline{k}}$ and, likewise, $y^{\underline{n+1-k}} = y \cdot (y-1)^{\underline{n-k}}$. Apply this to the right-hand side of (5.6) to get
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = x \cdot \sum_{k=0}^{n} \binom{n}{k} (x-1)^{\underline{k}}\, y^{\underline{n-k}} + y \cdot \sum_{k=0}^{n} \binom{n}{k} x^{\underline{k}}\, (y-1)^{\underline{n-k}}. \tag{5.7} \]
Now we can invoke our inductive hypothesis on each sum, and get that
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = x \cdot (x+y-1)^{\underline{n}} + y \cdot (x+y-1)^{\underline{n}}. \tag{5.8} \]
The right-hand side of (5.8) is the same as $(x+y)(x+y-1)^{\underline{n}}$, so using the fact mentioned at the start of the proof again, we get that
\[ \sum_{k=0}^{n+1} \binom{n+1}{k} x^{\underline{k}}\, y^{\underline{n+1-k}} = (x+y)^{\underline{n+1}}. \tag{5.9} \]
This is what we were trying to show, so by the principle of mathematical induction, (5.1) is true. $\square$
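Since both sides of (5.1) are polynomials in $x$ and $y$, the identity is also easy to test exhaustively on a grid of small integer arguments:

```python
from math import comb

def falling(x, n):
    """x(x-1)...(x-n+1); the empty product (n = 0) is 1."""
    result = 1
    for i in range(n):
        result *= x - i
    return result

# Check identity (5.1) for a grid of integers, including negatives.
for x in range(-3, 6):
    for y in range(-3, 6):
        for n in range(6):
            lhs = falling(x + y, n)
            rhs = sum(comb(n, k) * falling(x, k) * falling(y, n - k) for k in range(n + 1))
            assert lhs == rhs
print("identity (5.1) verified on all sampled x, y, n")
```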
Theorem 5.3. Let $A$ be an $n \times n$ matrix with characteristic polynomial $c_A(z) = (z-\lambda_1)^{m_1} \cdots (z-\lambda_r)^{m_r}$. Then, there exists a decomposition
\[ A^k = \sum_{i=1}^{r} \Big( \lambda_i^k P_i + \sum_{q=1}^{m_i-1} N_i^q\, \varphi_{q,\lambda_i}(k) \Big) \]
with the following properties
(1) $P_iN_i = N_iP_i = N_i$
(2) $P_iP_j = 0$ if $i \neq j$, and $P_iP_i = P_i$
(3) $N_i^{m_i} = 0$
(4) $P_iN_j = N_jP_i = 0$ if $i \neq j$
(5) $A = \sum_{i=1}^{r} (\lambda_i P_i + N_i)$
(6) $I = \sum_{i=1}^{r} P_i$.
Proof. This proof is essentially a careful study of the equation $A^{k+l} = A^{l}A^{k}$. From Proposition 9 of [T12], we have a canonical decomposition of the matrix power
\[ A^{k+l} = \sum_{i=1}^{r} \sum_{j=0}^{M_i-1} M_{i,j}\, \varphi_{j,\lambda_i}(k+l). \]
Substituting the expansion $\varphi_{j,\lambda_i}(k+l) = \sum_{a=0}^{j} \binom{j}{a} \lambda_i^{k-j+l}\, \frac{k^{\underline{j-a}}\, l^{\underline{a}}}{j!}$ (which follows from Definition 2.5 and Theorem 5.2), and extending the inner sum to infinity by setting $M_{i,j} = 0$ for $j \ge M_i$, the right-hand side becomes
\[ \sum_{i=1}^{r} \sum_{j=0}^{\infty} \sum_{a=0}^{j} M_{i,j} \binom{j}{a} \frac{\lambda_i^{k-j+l}\, k^{\underline{j-a}}\, l^{\underline{a}}}{j!}. \]
Swapping the inner two sums and making the appropriate changes to their limits, we have
\[ \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{j=a}^{\infty} M_{i,j} \binom{j}{a} \frac{\lambda_i^{k-j+l}\, k^{\underline{j-a}}\, l^{\underline{a}}}{j!}. \]
Replacing $j$ by $j + a$, rearranging, and simplifying, we get
\[ \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{j=0}^{\infty} M_{i,j+a} \frac{\lambda_i^{k-j}\, k^{\underline{j}}\, \lambda_i^{l-a}\, l^{\underline{a}}}{j!\, a!}. \]
Now, let $\lambda_i^{k-j} = \sum_{b=1}^{r} \lambda_b^{k-j}\, \delta_i(b)$. Substituting into the above, our expression for $A^{k+l}$ becomes
\[ \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,j+a}\, \delta_i(b)\, \frac{\lambda_b^{k-j}\, k^{\underline{j}}\, \lambda_i^{l-a}\, l^{\underline{a}}}{j!\, a!}. \]
Using the definition of the $\varphi$ functions, we can rewrite this as
\[ \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,j+a}\, \delta_i(b)\, \varphi_{j,\lambda_b}(k)\, \varphi_{a,\lambda_i}(l). \]
Now we turn our attention to $A^{l}A^{k}$. Using Proposition 9 of [T12], we have
\[ A^{l}A^{k} = \left( \sum_{i=1}^{r} \sum_{a=0}^{\infty} M_{i,a}\, \varphi_{a,\lambda_i}(l) \right) \cdot \left( \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{b,j}\, \varphi_{j,\lambda_b}(k) \right) = \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,a} M_{b,j}\, \varphi_{j,\lambda_b}(k)\, \varphi_{a,\lambda_i}(l). \]
Now, we set equal our expressions for $A^{k+l}$ and $A^{l}A^{k}$:
\[ \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,j+a}\, \delta_i(b)\, \varphi_{j,\lambda_b}(k)\, \varphi_{a,\lambda_i}(l) = \sum_{i=1}^{r} \sum_{a=0}^{\infty} \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,a} M_{b,j}\, \varphi_{j,\lambda_b}(k)\, \varphi_{a,\lambda_i}(l). \]
Invoking the linear independence of the $\varphi$ functions given in Theorem 5.1, we have a collection of equations
\[ \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,j+a}\, \delta_i(b)\, \varphi_{j,\lambda_b}(k) = \sum_{b=1}^{r} \sum_{j=0}^{\infty} M_{i,a} M_{b,j}\, \varphi_{j,\lambda_b}(k), \]
one for each $i$ and $a$. Using Theorem 5.1 once again on each of these equations, we conclude that
\[ M_{i,j+a}\, \delta_i(b) = M_{i,a} M_{b,j}. \]
The six properties in the statement of the theorem come from a careful analysis of this equation. Set $P_i = M_{i,0}$ and $N_i = M_{i,1}$. Notice that
\[ P_iN_i = M_{i,0}M_{i,1} = M_{i,1}\, \delta_i(i) = N_i \quad \text{and} \quad N_iP_i = M_{i,1}M_{i,0} = M_{i,1}\, \delta_i(i) = N_i, \]
proving (1). $\square$
References
[KP01] Walter G. Kelley and Allan C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, San Diego, 2001.
[S86] E. J. P. Georg Schmidt, An Alternative Approach to Canonical Forms of Matrices, American Mathematical Monthly 93 (1986), no. 3, 176-184.
[T12] Casey Tsai, The Cayley-Hamilton Theorem via the Z-Transform, Rose-Hulman Undergraduate Mathematics Journal 13 (2012), no. 2, 44-52.
SMILE@LSU, Louisiana State University, Baton Rouge, Louisiana
E-mail address: aage@xula.edu
SMILE@LSU, Louisiana State University, Baton Rouge, Louisiana
E-mail address: thomas.credeur@gmail.com
SMILE@LSU, Louisiana State University, Baton Rouge, Louisiana
E-mail address: tdirle@go.olemiss.edu
SMILE@LSU, Louisiana State University, Baton Rouge, Louisiana
E-mail address: lar257@msstate.edu