Systems of Differential Equations
        Joshua Dagenais
            12-04-09
   Mentor: Dr. Arunas Dagys




Table of Contents


      Introduction


      Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues


      Section 2: Solving Systems of Differential Equations with Complex Eigenvalues


      Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues


      Section 4: Solving Systems of Nonhomogeneous Differential Equations


      Section 5: Application of Systems of Differential Equations – Arms Races


      Section 6: Application of Systems of Differential Equations – Predator-Prey Model


      Conclusion


      References




Introduction


       Many laws and principles that help explain the behavior of the natural world are

statements or relations that involve rates at which things change. When explained in

mathematical terms, the relations become equations and the rates become derivatives.

Equations that contain these rates or derivatives are called differential equations. Therefore,

systems of ordinary differential equations arise naturally in laws and principles explaining

behavior of the natural world involving several dependent variables, each of which is a function

of a single independent variable. This then becomes a mathematical problem that consists of a

system of two or more differential equations. These systems of differential equations that

describe these laws or principles are called mathematical models of the process (Boyce &

DiPrima, 2001).


       A system of first order ordinary differential equations is an interesting mathematical

concept, as it combines two different areas of mathematics. By dissecting the phrase, system of first order differential equations, into two parts, the two areas of mathematics used to solve these equations can be identified. The system part of the phrase involves linear algebra, which is used to solve the system of equations; in this case, the system of equations consists of first order differential equations. The latter part of the phrase, first order differential equations, indicates that solution strategies for solving them will also be involved when solving systems of

differential equations.


       So with linear algebra for systems and differential equations in mind, what other

underlying concepts and skills involved with these mathematical concepts must be learned and

explained to solve systems of differential equations? Well, for the linear algebra aspect of


solving systems of differential equations, topics that are mentioned briefly in this paper

include matrices, characteristic equations, roots of the characteristic equations (eigenvalues),

eigenvectors, and the diagonalization of a matrix. For the differential equation part of solving

systems, topics that are discussed include solving first order differential equations, solving simple diagonal systems ($y' = Dy$), and solutions of the original systems ($x = Cy$, where $x(t) = ke^{at}$). When everything mentioned is put together, solutions of different types are found

for systems of differential equations and with the help of mathematical software such as Maple,

graphs are able to visually represent the answers of these systems (slope fields) to show that there is actually more than one solution, called a family of solutions. Also, depending on the

types of eigenvalues that are found for the system of differential equations, different methods for

solving the systems will be used for eigenvalues that are distinct and real, eigenvalues that are

complex, and eigenvalues that are repeated, all of which are graphically represented in a different

manner.


       Along with different methods for solving systems of differential equations, methods for

solving homogeneous and nonhomogeneous systems will be explained to help further the scope of

this subject. Why do we care about solving systems of differential equations? Well, there are

many physical problems that involve a number of separate elements linked together in some

manner; generic application problems include spring-mass systems, electrical

circuits, and interconnected tanks that need solutions of systems of differential equations to be

understood and solved. Other, more advanced applications of the theory behind systems of

differential equations include the Predator-Prey Model (Lotka-Volterra Model) and the

Richardson's Arms Race Model, which connect mathematics with concepts that could never

have been explained without such elegant mathematical equations. The Predator-Prey

Model is a system of nonlinear differential equations (even though it is considered an almost

linear system), while the Arms Race Model uses systems of differential equations that are

nonhomogeneous. Both models are very interesting applications that will be discussed and

explained later on in this paper. Hopefully, this paper will give the reader insight into what

systems of linear differential equations are, how to solve them, how to apply them, and how to

understand and interpret the answers that are derived from problems.




Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues


        In this section, we will be solving systems of differential equations where the eigenvalues

found from the characteristic equation are all real and all distinct. In order to do this, we will

first take the system


                                      
$$x_1' = p_{11}(t)x_1 + \cdots + p_{1n}(t)x_n + g_1(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + \cdots + p_{nn}(t)x_n + g_n(t)$$


                                                               
and write it in matrix notation. To do this, we write $x_1',\ldots,x_n'$ in vector form:

$$x' = \begin{pmatrix} x_1' \\ \vdots \\ x_n' \end{pmatrix}$$


we put the coefficients $p_{11}(t),\ldots,p_{nn}(t)$ in an n x n matrix:

$$P(t) = \begin{pmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{pmatrix}$$


we again write $x_1,\ldots,x_n$ in vector form:

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$


and write $g_1(t),\ldots,g_n(t)$ in vector form:

$$g(t) = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}$$


Therefore, the resulting equation using the above vector and matrix notation is represented by


$$x' = P(t)x + g(t)$$


       We will first consider homogeneous systems where $g(t) = 0$, thus

$$x' = P(t)x$$


To find the general solution of the above system when $P(t)$ is a 1 x 1 matrix, the system above reduces to a single first order equation

$$\frac{dx}{dt} = px$$


where the solution is $x = ce^{pt}$. Therefore, to solve systems of two or more equations, we will look for solutions of the form

$$x = \xi e^{rt}$$


where $\xi$ is a column vector instead of a constant $c$ (because we are dealing with solutions to more than one differential equation, thus giving us multiple constants, which together form a vector) and $r$ is an exponent to be solved for. Substituting $x = \xi e^{rt}$ into both sides of $x' = P(t)x$ gives

$$r\xi e^{rt} = P(t)\xi e^{rt}$$


Upon canceling $e^{rt}$, we obtain $r\xi = P(t)\xi$ or

$$(P(t) - rI)\xi = 0$$


where $I$ is the n x n identity matrix. In order to solve $(P(t) - rI)\xi = 0$, we will use Theorem 1.


Theorem 1: Let $A$ be an n x n matrix of constant real numbers and let $X$ be an n-dimensional column vector. The system of equations $AX = 0$ has nontrivial solutions, that is, $X \neq 0$, if and only if the determinant of $A$ is zero.


In our case, $(P(t) - rI)$ is the n x n matrix represented by $A$, and $\xi$ is the n-dimensional column vector represented by $X$. Therefore, in order to find the nontrivial solutions of $(P(t) - rI)\xi = 0$, we must set the determinant of $(P(t) - rI)$ equal to zero, which is represented

$$\begin{vmatrix} p_{11}(t) - r & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) - r \end{vmatrix} = 0$$


Computing the determinant will yield a characteristic equation, which has the structure of a polynomial of degree $n$, whose roots, the eigenvalues denoted by $r$, will be computed. After the eigenvalues have been computed, each $r$ will be substituted back into $(P(t) - rI)\xi = 0$ and solved for the nonzero vector $\xi$, which is called the eigenvector of the matrix $P(t)$ corresponding to that eigenvalue. The eigenvector will be an $n \times 1$ column vector

that will have as many values as there are equations to solve for. After finding the eigenvalues

and the eigenvectors for those specific values, they will be substituted back into the equation


$$x = \xi e^{rt}$$


which will be represented as the following specific solutions



$$x^{(1)}(t) = \begin{pmatrix} x_{11}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix}, \;\ldots,\; x^{(k)}(t) = \begin{pmatrix} x_{1k}(t) \\ \vdots \\ x_{nk}(t) \end{pmatrix}, \;\ldots$$


for the initial system. If the Wronskian of $x^{(1)},\ldots,x^{(n)}$ (represented as $W[x^{(1)},\ldots,x^{(n)}]$) does not equal zero, then the general solution can be represented as a linear combination of the specific solutions

$$x = c_1x^{(1)}(t) + \cdots + c_nx^{(n)}(t)$$


The following examples will help illustrate how to solve n x n systems of differential equations

with distinct real eigenvalues. The general solution of the given system of equations will be

solved for along with a graph that shows the direction field of the answer.
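
Throughout this paper, these computations can be reproduced in a computer algebra system. The worksheet below is a minimal sketch of the procedure just described, applied to the coefficient matrix of Example 1 below; it assumes Maple's LinearAlgebra package, and the exact commands are an illustration rather than the original worksheet.

> with(LinearAlgebra):
> P := Matrix([[3, -2], [2, -2]]):            # coefficient matrix of Example 1 below
> CharacteristicPolynomial(P, r);             # r^2 - r - 2
> solve(% = 0, r);                            # eigenvalues: 2, -1
> NullSpace(P - (-1)*IdentityMatrix(2));      # eigenvector for r = -1, a multiple of <1, 2>
> NullSpace(P - 2*IdentityMatrix(2));         # eigenvector for r = 2, a multiple of <2, 1>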


Example 1: Solve the following 2 x 2 system for x


                                                    
$$x_1' = 3x_1 - 2x_2, \qquad x_2' = 2x_1 - 2x_2$$

To solve the problem, we rewrite the equations in matrix form

$$x' = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix} x$$


which is of the form


$$x' = P(t)x$$


where




$$P(t) = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}$$


We then find the eigenvalues of P(t) by finding the characteristic equation and solving for r.

Therefore,


$$\det(P(t) - rI) = \begin{vmatrix} 3 - r & -2 \\ 2 & -2 - r \end{vmatrix} = r^2 - r - 2 = (r + 1)(r - 2) = 0$$


and the eigenvalues of P(t) are $r_1 = -1$ and $r_2 = 2$. Now we compute the eigenvectors for each of their respective eigenvalues. We will compute the nontrivial solutions of

$$\begin{pmatrix} 3 - r & -2 \\ 2 & -2 - r \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0$$


For $r_1 = -1$

$$\begin{pmatrix} 3-(-1) & -2 \\ 2 & -2-(-1) \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 4 & -2 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0 \;\Rightarrow\; \begin{matrix} 4c_1 - 2c_2 = 0 \\ 2c_1 - c_2 = 0 \end{matrix} \;\Rightarrow\; c_2 = 2c_1$$




(Note that both of the resulting equations with $c_1$ and $c_2$ are the same.) One such solution of the equation is found by choosing $c_1 = 1$, thus making $c_2 = 2$, to give the eigenvector $\xi^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$.

Knowing that $x^{(n)}(t) = \xi^{(n)}e^{r_n t}$, it follows that $x^{(1)}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{-t}$ is a solution of the initial system.


For $r_2 = 2$

$$\begin{pmatrix} 3-2 & -2 \\ 2 & -2-2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 2 & -4 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0 \;\Rightarrow\; \begin{matrix} c_1 - 2c_2 = 0 \\ 2c_1 - 4c_2 = 0 \end{matrix} \;\Rightarrow\; c_1 = 2c_2$$



By choosing $c_1 = 2$ to solve the equation, $c_2 = 1$. Proper notation for eigenvectors, where possible, insists that fractions be avoided when representing the numerical values of the eigenvector. Therefore, for $r_2 = 2$, $\xi^{(2)} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$ and a second solution is


                                                                2
                                               x (2) (t )  c2   e 2t
                                                               1


Now, we check to see if we can represent $x^{(1)}$ and $x^{(2)}$ as a general solution by taking the Wronskian of both specific solutions. The Wronskian of $x^{(1)}(t)$ and $x^{(2)}(t)$ is

$$W[x^{(1)}, x^{(2)}] = \begin{vmatrix} e^{-t} & 2e^{2t} \\ 2e^{-t} & e^{2t} \end{vmatrix} = -3e^{t}$$




which is never equal to zero. It follows that the solutions $x^{(1)}(t)$ and $x^{(2)}(t)$ are linearly independent. Therefore, the general solution of the system $x' = P(t)x$ is

$$x(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{-t} + c_2\begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{2t}$$

[Figure: direction field with the family of general solutions in red and specific trajectories in blue]
All the general solutions (represented by the family of red lines), each a combination of $x^{(1)}(t)$ and $x^{(2)}(t)$ for which $c_1 \neq 0$ and $c_2 \neq 0$, are asymptotic to the line $x_2 = \frac{1}{2}x_1$ as $t$ increases, since the $e^{2t}$ term dominates. The blue trajectories represent specific solutions to the system, with each trajectory having a different initial value ($x_1(0) = a$ and $x_2(0) = b$, where $a$ and $b$ are any real numbers).
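
As a check on Example 1, Maple's dsolve can return an equivalent general solution directly from the system, and DEtools can draw the direction field. This is a sketch; the arbitrary constants appear as _C1 and _C2, and the output may be arranged differently from the eigenvector form above.

> sys := diff(x1(t), t) = 3*x1(t) - 2*x2(t),
>        diff(x2(t), t) = 2*x1(t) - 2*x2(t):
> dsolve({sys});
> DEtools[dfieldplot]([sys], [x1(t), x2(t)], t = -2 .. 2, x1 = -4 .. 4, x2 = -4 .. 4);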


       For the remaining examples in this section, the derivation of the final solution will be shown without all intermediate steps. The purpose of these examples is to show the variety of systems of differential equations that have distinct real eigenvalues, such as a 3 x 3 system and a 2 x 2 system with initial conditions given.


Example 2: Solve the following 3 x 3 system for x


                           
$$x_1' = x_1 + x_2 + x_3, \quad x_2' = 2x_1 + x_2 - x_3, \quad x_3' = -8x_1 - 5x_2 - 3x_3$$
$$x' = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} x$$


First, we find the eigenvalues for the coefficient matrix by the following equation


$$\det(P(t) - rI) = \begin{vmatrix} 1-r & 1 & 1 \\ 2 & 1-r & -1 \\ -8 & -5 & -3-r \end{vmatrix} = 0$$


and solving the resulting characteristic equation.
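
The Maple worksheet for this computation is sketched below, assuming the LinearAlgebra package (the exact commands are an illustration):

> with(LinearAlgebra):
> P := Matrix([[1, 1, 1], [2, 1, -1], [-8, -5, -3]]):
> factor(CharacteristicPolynomial(P, r));     # (r - 2)*(r + 1)*(r + 2)
> Eigenvectors(P);                            # eigenvalue Vector and eigenvector columns;
>                                             # Maple's scaling of the columns may differ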



Using Maple yields the eigenvalues

$$r_1 = -2, \quad r_2 = 2, \quad r_3 = -1$$

and eigenvectors

$$\xi^{(1)} = \begin{pmatrix} 4 \\ -5 \\ -7 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} 3 \\ -4 \\ -2 \end{pmatrix}$$


The eigenvectors above are the same as the given Maple output but manipulated into proper format, where all values of the eigenvector are integers. After the eigenvalues and eigenvectors are computed, we find the Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 12e^{-t} \neq 0$, therefore we can substitute all the eigenvalues and eigenvectors found into $x^{(n)} = \xi^{(n)}e^{r_n t}$ and express the solution as a linear combination


                                                       4          0            3
                                                        e 2t  c   e 2t  c   e t
                x(t )  x (t )  x (t )  x (t )  c1  5                     3  4 
                         (1)     (2)           (3)
                                                                   2 1 
                                                       7          1          2 
                                                                               


Example 3: Solve the 2 x 2 system with initial conditions given for x


                
$$x_1' = -5x_1 + x_2, \quad x_2' = -3x_1 - x_2, \qquad x_1(0) = 2,\; x_2(0) = -1$$
$$x' = \begin{pmatrix} -5 & 1 \\ -3 & -1 \end{pmatrix} x \quad\text{where}\quad x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$$
We will start off the example by using Maple to find the eigenvalues and eigenvectors of the

coefficient matrix:
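
A minimal sketch of the commands, assuming the LinearAlgebra package; the initial value problem can also be handed to dsolve directly:

> with(LinearAlgebra):
> P := Matrix([[-5, 1], [-3, -1]]):
> Eigenvectors(P);
> dsolve({diff(x1(t), t) = -5*x1(t) + x2(t),
>         diff(x2(t), t) = -3*x1(t) - x2(t),
>         x1(0) = 2, x2(0) = -1});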


                           1                     1
Therefore, r1  4 ,  1    , r2  2 , and  2    . The Wronskian W [ x(1) , x(2) ]  2e6t  0
                           1                      3

therefore the specific solutions x (1) and x (2) can be expressed as the general solution


                                                    1          1
                                        x(t )  c1   e 4t  c2   e 2t
                                                    1           3


                                                                2
After the general solution has been found, we substitute x(0)    into x(t ) to get
                                                                 1


                                 1           1         2       1     1  2 
                      x(0)  c1   e 4*0  c2   e 2*0     c1    c2     
                                 1            3         1     1      3   1


After the equation has been simplified, we multiply $c_1$ and $c_2$ by their respective vectors to yield the following system of equations

$$c_1 + c_2 = 2 \quad\text{and}\quad c_1 + 3c_2 = -1$$




We then solve the system of equations for $c_1$ and $c_2$ to get $c_1 = \frac{7}{2}$ and $c_2 = -\frac{3}{2}$. Substituting back into the general solution gives the specific solution of the system as

$$x(t) = \frac{7}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-4t} - \frac{3}{2}\begin{pmatrix} 1 \\ 3 \end{pmatrix}e^{-2t}$$


The direction field of the general solution, along with a trajectory of the specific solution, is represented as

[Figure: direction field with the specific solution trajectory in blue]
The above direction field shows the different families of solutions for the general solution, denoted by the red arrows, and the blue trajectory represents the specific solution to the system for the initial value $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$. Now, after establishing the basis for solving systems of differential equations, we will delve into cases where the eigenvalues are not real and/or distinct.




Section 2: Solving Systems of Differential Equations with Complex Eigenvalues


       In this section, we will use what was previously discussed for solving systems with real and distinct eigenvalues, namely how to generate eigenvalues for an n x n system of linear homogeneous equations with constant coefficients, denoted as

$$x' = P(t)x$$


Now if P(t) is real then the coefficients that make up the characteristic equation for r are real and

any complex eigenvalues must occur in conjugate pairs (Boyce & DiPrima, 2001, p. 384).

Therefore, for a 2 x 2 system, $r_1 = a + bi$ and $r_2 = a - bi$ would be eigenvalues, where $a$ and $b$ are real. Also, it follows that the corresponding eigenvectors are complex conjugate pairs of each other. Therefore, $r_2 = \bar{r}_1$ and $\xi^{(2)} = \bar{\xi}^{(1)}$. To help visualize this, take the equation that was formed

in the previous section


$$(P(t) - rI)\xi = 0$$


and substitute $r_1$ and $\xi^{(1)}$ into the equation to get

$$(P(t) - r_1I)\xi^{(1)} = 0$$


which forms a corresponding general solution to the system. Now, by taking the complex

conjugate of the entire equation, the resulting equation becomes


$$(P(t) - \bar{r}_1I)\bar{\xi}^{(1)} = 0$$


where P(t) and I are not affected by the conjugation because they both have all real values. The

equation then forms another corresponding general solution where r2  r1 and  2   1 . Now,


with the eigenvalues and eigenvectors solved for, we can use Euler’s formula to express a

solution with real and imaginary parts just as real solutions to the system. Euler’s formula states


                                                  eit  cos t  i sin t


But, for use with general complex solutions to a system of differential equations, we will use a modified version of the formula

$$e^{(\lambda + i\mu)t} = e^{\lambda t}(\cos\mu t + i\sin\mu t) = e^{\lambda t}\cos\mu t + ie^{\lambda t}\sin\mu t$$


to find the real-valued solutions to the system. We can choose either $x^{(1)}(t)$ or $x^{(2)}(t)$ to find the two real-valued solutions because they are conjugates of each other, and both will yield the same real-valued solutions. Using $x^{(2)}(t)$ and $\xi^{(2)} = a - bi$, where $a$ and $b$ are real vectors, we have

$$x^{(2)}(t) = (a - bi)e^{(\lambda - i\mu)t} = (a - bi)e^{\lambda t}(\cos\mu t - i\sin\mu t)$$


Expanding the above equation results in

$$x^{(2)}(t) = e^{\lambda t}(a\cos\mu t - bi\cos\mu t - ai\sin\mu t - b\sin\mu t)$$

and separating $x^{(2)}(t)$ into its real and imaginary parts yields

$$x^{(2)}(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t) - ie^{\lambda t}(a\sin\mu t + b\cos\mu t)$$

If $x^{(2)}(t)$ is written as a combination of two vectors ($x^{(2)}(t) = u(t) - iv(t)$), then the vectors yielded are

$$u(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t) \quad\text{and}\quad v(t) = e^{\lambda t}(a\sin\mu t + b\cos\mu t)$$




We can disregard the $i$ in front of $v(t)$ because it is considered a multiplier of the vector, and we are only interested in the real-valued vector solutions. If we chose to solve for $x^{(1)}(t)$ instead of $x^{(2)}(t)$, we would have gotten the same solution except $x^{(1)}(t) = u(t) + iv(t)$; $i$ is again considered a multiplier of the $v(t)$ vector, therefore we can disregard it, and the answers for $u(t)$

and v(t ) would be the same as the ones that were solved for above. u (t ) and v(t ) are the

resulting real-valued vector solutions to the system.


         It is worth mentioning that $u(t)$ and $v(t)$ are linearly independent and can be expressed in a single general solution. Suppose, therefore, that $r_1 = \lambda + i\mu$ and $r_2 = \lambda - i\mu$, and that $r_3,\ldots,r_n$ are all real and distinct. Let the corresponding eigenvectors be $\xi^{(1)} = a + bi$, $\xi^{(2)} = a - bi$, $\xi^{(3)},\ldots,\xi^{(n)}$ (Boyce & DiPrima, 2001, p. 385). Then the general solution to systems of differential equations with complex eigenvalues is

$$x(t) = c_1u(t) + c_2v(t) + c_3\xi^{(3)}e^{r_3t} + \cdots + c_n\xi^{(n)}e^{r_nt}$$

where $u(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t)$, $v(t) = e^{\lambda t}(a\sin\mu t + b\cos\mu t)$, and $P(t)$ consists of all

real coefficients. It is only when P(t) consists of all real coefficients that complex eigenvectors

and eigenvalues will occur in conjugate pairs (Boyce & DiPrima, 2001, p. 385). The following

examples will help illustrate how to solve n x n systems of differential equations with complex

eigenvalues. Both the complex and real-valued solutions will be given for each of the examples

and some direction fields will be shown to demonstrate the nature of systems with complex

eigenvalues.
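
In Maple, this separation into real and imaginary parts can be automated with evalc, which treats the unknowns as real. A minimal sketch on a hypothetical scalar component $(1-i)e^{(1+2i)t}$ of a complex solution (an illustration, not taken from the examples):

> sol := (1 - I)*exp((1 + 2*I)*t):
> u := evalc(Re(sol));     # exp(t)*(cos(2*t) + sin(2*t))
> v := evalc(Im(sol));     # exp(t)*(sin(2*t) - cos(2*t))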




Example 1: Solve the following 2 x 2 system for x


                                    
$$x_1' = 3x_1 - 2x_2, \quad x_2' = 4x_1 - x_2 \qquad\Longleftrightarrow\qquad x' = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix} x$$


We will begin the example by using Maple to find the eigenvalues and eigenvectors of the

coefficient matrix:
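
A minimal sketch of the commands, assuming the LinearAlgebra package:

> with(LinearAlgebra):
> P := Matrix([[3, -2], [4, -1]]):
> Eigenvectors(P);    # eigenvalues 1 + 2*I and 1 - 2*I; the eigenvector
>                     # columns are complex conjugates of each other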


                                              1                1 
Therefore, r1  1  2i , r2  1  2i ,  1         , and   
                                                             2
                                                                        . To get the eigenvectors in
                                             1  i            1  i 

proper form from the Maple output, we multiplied both eigenvalues (resulting from the Maple

output) by its conjugate to get a real number for the first value and then multiplied it again by 2

so that all values in the eigenvector were integers. The Wronskian W [ x(1) , x(2) ]  2e2t  i  0

therefore the specific solutions x (1) and x (2) can be expressed as the general solution in complex

form


                                             1  (1 2i )t       1  (12i )t
                                 x(t )  c1       e        c2       e
                                            1  i              1  i 




                                                                                                        19
But we want to find the real-valued solutions of the complex general solution, so we will use $x^{(1)}$ to find the real-valued vectors. Therefore,

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ 1-i \end{pmatrix}e^{(1+2i)t}$$


Using Euler's formula, $x^{(1)}$ becomes

$$\begin{pmatrix} 1 \\ 1-i \end{pmatrix}e^{t}(\cos 2t + i\sin 2t)$$


After Euler's formula has been applied, we expand the above equation

$$e^{t}\begin{pmatrix} \cos 2t + i\sin 2t \\ \cos 2t + \sin 2t + i(\sin 2t - \cos 2t) \end{pmatrix}$$


and separate the real and imaginary elements into

$$e^{t}\begin{pmatrix} \cos 2t \\ \sin 2t + \cos 2t \end{pmatrix} + ie^{t}\begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}$$


The result is the two real-valued solutions of the form $u(t) + iv(t)$, where

$$u(t) = e^{t}\begin{pmatrix} \cos 2t \\ \sin 2t + \cos 2t \end{pmatrix} \quad\text{and}\quad v(t) = e^{t}\begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}$$


Therefore, the general solution to the system with real-valued solutions is


$$x(t) = c_1u(t) + c_2v(t) = c_1e^{t}\begin{pmatrix} \cos 2t \\ \sin 2t + \cos 2t \end{pmatrix} + c_2e^{t}\begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}$$
The resulting direction field, showing families of solutions of the general solution to the system, is

[Figure: direction field with spiral trajectories around the origin]
The blue trajectories show specific solutions when initial conditions are given. Thus, the

direction field creates spiraling solutions, where the origin, the center of the spirals, is called a

spiral point. The direction of the motion is away from the spiral point and the trajectories

become unbounded. Also, the spiral point, for this particular solution, is unstable. There are

also systems with complex eigenvalues where the general solution has a spiral point that is stable

because all trajectories approach it as t increases.


Example 2: Solve the following 3 x 3 system for x

$$x_1' = x_1, \quad x_2' = 2x_1 + x_2 - 2x_3, \quad x_3' = 3x_1 + 2x_2 + x_3 \qquad\Longleftrightarrow\qquad x' = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{pmatrix} x$$
Again, we will begin the example by using Maple to find the eigenvalues and eigenvectors of the

coefficient matrix:
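
A minimal sketch of the commands, assuming the LinearAlgebra package:

> with(LinearAlgebra):
> P := Matrix([[1, 0, 0], [2, 1, -2], [3, 2, 1]]):
> factor(CharacteristicPolynomial(P, r));     # (r - 1)*(r^2 - 2*r + 5)
> Eigenvectors(P);                            # r = 1, 1 + 2*I, 1 - 2*I with eigenvectors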


Thus, the eigenvalues are $r_1 = 1$, $r_2 = 1 + 2i$, and $r_3 = 1 - 2i$. The simplified eigenvectors are

$$\xi^{(1)} = \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -i \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} 0 \\ 1 \\ i \end{pmatrix}$$

Notice that $r_1$ and $\xi^{(1)}$ already contain real values, therefore no computations are needed to turn them into a real-valued solution like the other complex eigenvalues and eigenvectors. The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 4ie^{3t} \neq 0$, therefore the specific solutions $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$ can be expressed as the general solution in complex form

$$x(t) = c_1\begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix}e^{t} + c_2\begin{pmatrix} 0 \\ 1 \\ -i \end{pmatrix}e^{(1+2i)t} + c_3\begin{pmatrix} 0 \\ 1 \\ i \end{pmatrix}e^{(1-2i)t}$$


To find the real-valued solutions of the general solution, we will use $x^{(2)}(t)$ and Euler's formula in the following equations

$$x^{(2)}(t) = \begin{pmatrix} 0 \\ 1 \\ -i \end{pmatrix}e^{(1+2i)t} = \begin{pmatrix} 0 \\ 1 \\ -i \end{pmatrix}e^{t}(\cos 2t + i\sin 2t) = e^{t}\begin{pmatrix} 0 \\ \cos 2t \\ \sin 2t \end{pmatrix} + ie^{t}\begin{pmatrix} 0 \\ \sin 2t \\ -\cos 2t \end{pmatrix}$$


Therefore,


                                            0                             0      
                                                                   t             
                                u (t )  e  cos(2t )  and v(t )  e  sin(2t ) 
                                          t

                                            sin(2t )                   cos(2t ) 
                                                                                 


and the general solution to the system with real-valued solutions is


                                                       2              0                    0      
                                                         t         t                t             
               x(t )  c1r1  c2u (t )  c3v(t )  c1  3  e  c2e  cos(2t )   c3e  sin(2t ) 
                            1

                                                       2              sin(2t )          cos(2t ) 
                                                                                                  


       Now that we know how to solve systems that yield real and/or imaginary eigenvalues and eigenvectors, we will focus our attention on the next case: when an eigenvalue found from the characteristic equation is repeated.




Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues


        In this section, we will be solving systems of differential equations where the eigenvalues

found from the characteristic equation are repeated. We will still be finding solutions of the

following equation


$$x' = P(t)x$$


and will still find at least one of the eigenvalues/eigenvectors in the way we previously solved

systems with distinct eigenvalues. But, when solving for the other repeated eigenvalue, we will

see that the other solution will take the form


$$x = \xi te^{rt} + \eta e^{rt}$$


where $\xi$ and $\eta$ are constant vectors. After finding the first solution of the form $x^{(1)}(t) = \xi^{(1)}e^{rt}$, it may be intuitive to find a second solution to the system of the form

$$x^{(2)}(t) = \xi^{(1)}te^{rt}$$


because of how repeated roots are solved when finding the solution to a second order differential

equation. Substituting that back into $x' = P(t)x$ yields

$$r\xi^{(1)}te^{rt} + \xi^{(1)}e^{rt} = P(t)\xi^{(1)}te^{rt} \;\Rightarrow\; r\xi^{(1)}te^{rt} + \xi^{(1)}e^{rt} - P(t)\xi^{(1)}te^{rt} = 0$$


But, for the equation to be satisfied for all $t$, the coefficients of $te^{rt}$ and $e^{rt}$ must each be zero (Boyce & DiPrima, 2001, p. 403). Therefore, we find that in this case $\xi^{(1)} = 0$, and thus $x^{(2)} = \xi^{(1)}te^{rt}$ is not a solution for the second repeated eigenvalue. But, from

$$r\xi^{(1)}te^{rt} + \xi^{(1)}e^{rt} - P(t)\xi^{(1)}te^{rt} = 0,$$


we see that there is a term of the form $\xi te^{rt}$ in the substituted equation along with another term of the form $\eta e^{rt}$. Therefore, we need to assume that

$$x^{(2)} = \xi^{(1)}te^{rt} + \eta e^{rt}$$


where $\xi$ and $\eta$ are constant vectors. Substituting the above expression into $x' = P(t)x$ gives

$$r\xi^{(1)}te^{rt} + \xi^{(1)}e^{rt} + r\eta e^{rt} = P(t)(\xi^{(1)}te^{rt} + \eta e^{rt}) \;\Rightarrow\; r\xi^{(1)}te^{rt} + (\xi^{(1)} + r\eta)e^{rt} = P(t)(\xi^{(1)}te^{rt} + \eta e^{rt})$$


Equating the coefficients of te rt and ert gives the following conditions


$$P(t)\xi^{(1)}te^{rt} - r\xi^{(1)}te^{rt} = 0 \;\Rightarrow\; (P(t) - rI)\xi^{(1)} = 0$$

$$P(t)\eta e^{rt} - \xi^{(1)}e^{rt} - r\eta e^{rt} = 0 \;\Rightarrow\; (P(t) - rI)\eta = \xi^{(1)}$$


for the determination of $\xi^{(1)}$ and $\eta$. The conditions $(P(t) - rI)\xi^{(1)} = 0$ and $(P(t) - rI)\eta = \xi^{(1)}$ are the important ones derived from the equation. To solve $(P(t) - rI)\xi^{(1)} = 0$, all we do is solve for the repeated eigenvalue and its eigenvector, just as in previous sections. We then solve a matrix equation of the form

$$\begin{pmatrix} p_{11}(t) - r & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) - r \end{pmatrix}\begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix} = \begin{pmatrix} \xi_1 \\ \vdots \\ \xi_n \end{pmatrix}$$


Solving for $\eta_1,\ldots,\eta_n$ in the above equation will result in the solution of the vector $\eta$, denoted

$$\eta = \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix}$$


After computing $\xi^{(1)}$ and $\eta$, we substitute them into $x^{(2)}(t)$ to get the second specific solution

$$x^{(2)}(t) = \xi^{(1)}te^{rt} + \eta e^{rt} + k\xi^{(1)}e^{rt}$$

The last term in the above equation can be disregarded because it is a multiple of the first specific solution $x^{(1)}(t) = \xi^{(1)}e^{rt}$, but the first two terms make a new solution of the form

$$x^{(2)}(t) = \xi^{(1)}te^{rt} + \eta e^{rt}$$


Finding $W[x^{(1)}, x^{(2)}](t) \neq 0$ will prove that $x^{(1)}$ and $x^{(2)}$ are linearly independent, thus allowing us to represent a general solution to the system in the form

$$x = c_1x^{(1)}(t) + c_2x^{(2)}(t) + \cdots + c_nx^{(n)}(t) = c_1\xi^{(1)}e^{r_1t} + c_2[\xi^{(1)}te^{r_1t} + \eta e^{r_1t}] + \cdots + c_n\xi^{(n)}e^{r_nt}$$

where $x^{(1)}$ and $x^{(2)}$ include the repeated eigenvalue of multiplicity 2.


        For the sake of simplicity, we will focus our examples on solving systems that have repeated eigenvalues of only multiplicity 2. Also included in one of the examples is a case where a repeated eigenvalue gives rise to linearly independent eigenvectors of the matrix $P(t)$ (which is easily identifiable using Maple), thus avoiding the complications of solving systems with repeated eigenvalues.




Example 1: Solve the following 2 x 2 system for x


                                       
$$x_1' = -4x_1 + x_2, \quad x_2' = -4x_1 - 8x_2 \qquad\Longleftrightarrow\qquad x' = \begin{pmatrix} -4 & 1 \\ -4 & -8 \end{pmatrix} x$$


We will begin the example by using Maple to find the eigenvalues and eigenvectors of the

coefficient matrix:
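
A minimal sketch of the commands, assuming the LinearAlgebra package. The zero column in the eigenvector matrix is how Maple signals a defective repeated eigenvalue; LinearSolve can also produce the generalized eigenvector $\eta$ that is computed by hand below:

> with(LinearAlgebra):
> P := Matrix([[-4, 1], [-4, -8]]):
> Eigenvectors(P);                                 # r = -6 twice; one eigenvector column is <0, 0>
> LinearSolve(P + 6*IdentityMatrix(2), Vector([1, -2]), free = 's');
> # a parametric family such as <s[1], 1 - 2*s[1]>; the choice 0 gives eta = <0, 1>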


                                                0
Notice in the resulting eigenvectors that  2    which is a zero multiple of  1 and does us no
                                                0

help in finding the second specific solution of the above system. But, the results derived from

                                   1                    1
Maple gives us r1  r2  6 ,  1    , and x (1) (t )    e 6t . We need to use the equation
                                    2                    2


                                           x(2) (t )   1tert ert


to solve for  and thus have a second specific solution to the system. To find out the second

                                              1                         4 1
specific solution, we substitute x (2) (t )    te6t   e6t into x         x to get the following
                                               2                        4 8 

expression


                                                                                                           27
 1  6t     1                      4 1  1  6t     4 1   1  6t
                 e          6te6t     1  6e6t          te           e
                 2          2           2         4 8  2        4 8  2 


                 4 1  1  6t
Multiplying out         te and factoring out a 6 from the result yields
                 4 8  2 


                     1  6t  1  6t  1  6t  1  6t  4 1   1  6t
                      e    6te    6e    6te         e
                     2      2      2       2       4 8  2 


                  1
Canceling out the   6te6t on each side of the equation and rearranging the equation yields
                   2


                                    4 1   1  6t  1  6t  1  6t
                                            e    6e    e
                                    4 8  2      2       2


               1 
Factoring out   e6t on the left side of the equation and simplifying gives us
               2 


                                                   
                                      4 1          1  6t  1  6t
                                       
                                      4 8   6 I    e    e
                                     
                                     
                                                    2 
                                                    
                                                                 2

                                                      
                                   4 1   6 0    1  6t  1  6t
                                                 e    e
                                    4 8   0 6   2     2
                                                  
                                               2 1   1   1 
                                                        
                                               4 2  2   2 


                                          2 1   1   1 
                                                      , is of the form ( P(t )  rI )   .
                                                                                                 1
The end product of the above expression, 
                                          4 2   2   2 

In this case


$$(P(t) - rI)\eta = \left(\begin{pmatrix} -4 & 1 \\ -4 & -8 \end{pmatrix} - (-6)I\right)\eta = \xi^{(1)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$$




Thus, to solve for $\eta$, we solve

$$\begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix} \;\Rightarrow\; \begin{matrix} 2\eta_1 + \eta_2 = 1 \\ -4\eta_1 - 2\eta_2 = -2 \end{matrix} \;\Rightarrow\; \text{with } \eta_1 = 0,\; \eta_2 = 1,\; \text{so } \eta = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$


(Note that both of the resulting equations with $\eta_1$ and $\eta_2$ are the same.) After solving for $\eta$, we substitute it into $x^{(2)}(t) = \xi^{(1)}te^{rt} + \eta e^{rt}$ to find the second solution of the system to be

$$x^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}te^{-6t} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-6t}$$


The Wronskian $W[x^{(1)}, x^{(2)}] = e^{-12t} \neq 0$. Therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be expressed as the general solution

$$x(t) = c_1\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-6t} + c_2\left[\begin{pmatrix} 1 \\ -2 \end{pmatrix}t + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]e^{-6t}$$

The resulting direction field showing families of solutions to the general solution of the system is

[Figure: direction field near an improper node at the origin]
The blue trajectories show specific solutions when initial conditions are given. The origin is

called an improper node. If the eigenvalues are negative, then the trajectories are similar but

traversed in the inward direction. An improper node is asymptotically stable or unstable,

depending on whether the eigenvalues are negative or positive (Boyce & DiPrima, 2001, p. 404).


Example 2: Solve the following 3 x 3 system for x


                              
$$x_1' = x_1 + 2x_2 - x_3, \quad x_2' = x_2 + x_3, \quad x_3' = 2x_3 \qquad\Longleftrightarrow\qquad x' = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} x$$


We will begin the example by using Maple to find the eigenvalues and eigenvectors of the

coefficient matrix:
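
A minimal sketch of the commands, assuming the LinearAlgebra package:

> with(LinearAlgebra):
> P := Matrix([[1, 2, -1], [0, 1, 1], [0, 0, 2]]):
> Eigenvectors(P);                                 # r = 1 twice (one eigenvector) and r = 2
> LinearSolve(P - IdentityMatrix(3), Vector([1, 0, 0]), free = 's');
> # a parametric eta such as <s[1], 1/2, 0>; any value of the free entry works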





                                                         1                      1
                                                           t                    
From the Maple results: r1  r2  1 , r3  2 , x1 (t )   0  e , and x3 (t )  1 e2t . What we need to
                                                         0                      1
                                                                                

find is the specific solution to x(2) (t ) . In this example, we will use the equation ( P(t )  rI )   1

to solve for  , substitute it into x(2) (t )   1tert ert , and use the shortcut to find out the third

specific solution to the system. Therefore


                                               1 2 1        1      1
                                                              e t   0  e t
                        ( P(t )  rI )     0 1 1   1I   2 
                                           1
                                                                            
                                               0 0 2 
                                                           
                                                                 
                                                                3       
                                                                           0
                                                      
                                         0 2 1 1     1
                                                 t   t
                                         0 0 1 2  e   0  e
                                         0 0 1         0
                                                3       


and


                                                                       0
                            22  3  1                                
                                                        1               1
                               3  0       1  0,2  ,3  0     
                                                        2              2
                               3  0                                  0
                                                                        


                                                                                                             31
Substituting what we found for $\eta$ into $x^{(2)}(t) = \xi^{(1)}te^{rt} + \eta e^{rt}$ yields

$$x^{(2)}(t) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}te^{t} + \begin{pmatrix} 0 \\ \tfrac{1}{2} \\ 0 \end{pmatrix}e^{t} \quad\text{or, scaled by 2,}\quad \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}te^{t} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}e^{t}$$


The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = e^{4t} \neq 0$. Therefore, the specific solutions $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$ can be expressed as the general solution

$$x(t) = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}e^{t} + c_3\left[\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}t + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\right]e^{t}$$




Example 3: Solve the following 3 x 3 system for x


                                
$$x_1' = x_2 - 3x_3, \quad x_2' = -2x_1 + 3x_2 - 3x_3, \quad x_3' = -2x_1 + x_2 - x_3 \qquad\Longleftrightarrow\qquad x' = \begin{pmatrix} 0 & 1 & -3 \\ -2 & 3 & -3 \\ -2 & 1 & -1 \end{pmatrix} x$$


For this example, using Maple can unlock a potential shortcut in solving for the general solution

to the above system. Again, we will begin the example by using Maple to find the eigenvalues

and eigenvectors of the coefficient matrix:
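
A minimal sketch of the commands, assuming the LinearAlgebra package:

> with(LinearAlgebra):
> P := Matrix([[0, 1, -3], [-2, 3, -3], [-2, 1, -1]]):
> factor(CharacteristicPolynomial(P, r));     # (r + 2)*(r - 2)^2
> Eigenvectors(P);                            # two independent eigenvector columns for r = 2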


Unlike the other two examples, the Maple output displays two linearly independent eigenvectors for the repeated eigenvalue $r_2 = r_3 = 2$. Another shortcut for finding eigenvectors of repeated eigenvalues is thus available when a math program, such as Maple, is utilized to solve systems of differential equations. Therefore

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{-2t}, \quad x^{(2)}(t) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}e^{2t}, \quad x^{(3)}(t) = \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}e^{2t}$$


and the general solution to the system is


                                       1      1           3 
                                         2t               
                                x  c1 1 e  c2  2   c3  0   e 2t
                                       1      0           2  
                                                          


       A more advanced look at systems with repeated eigenvalues would include repeated eigenvalues with multiplicities higher than 2. The equations for solving higher multiplicities of repeated eigenvalues become more detailed and difficult to solve, but to find the solutions for such cases we would follow the same thought process used for repeated eigenvalues of multiplicity 2. For the next section, we will return to our original form of a system of differential equations, $x' = P(t)x + g(t)$, and solve nonhomogeneous systems where $g(t) \neq 0$.




Section 4: Solving Systems of Nonhomogeneous Differential Equations


         Unlike the previous sections, where we solved different types of systems of homogeneous differential equations with constant coefficients, this section will focus on solving systems of nonhomogeneous differential equations of the form

$$x' = P(t)x + g(t)$$


         The following theorem related to nonhomogeneous systems should help us figure out where to start the solution process:

         Theorem 2: If $x^{(1)}(t),\ldots,x^{(n)}(t)$ are linearly independent solutions of the n-dimensional homogeneous system $x' = P(t)x$ on the interval $a < t < b$, and if $x_p(t)$ is any solution of the nonhomogeneous system $x' = P(t)x + g(t)$ on the interval $a < t < b$, then any solution of the nonhomogeneous system can be written $x = c_1x^{(1)}(t) + \cdots + c_nx^{(n)}(t) + x_p(t)$ for a unique choice of the constants $c_1,\ldots,c_n$ (Rainville, Bedient, & Bedient, 1997, p. 199).


         The theorem states that we will need to find a particular solution $x_p(t)$ and add it to the general solution of the homogeneous system that is part of the nonhomogeneous system. To do that, we will use a variation of parameters technique to find $x_p(t)$ and solve the equation $x' = P(t)x + g(t)$.

         Solutions of the homogeneous part of the nonhomogeneous system will take the form

$$x = c_1\xi^{(1)}e^{r_1t} + \cdots + c_n\xi^{(n)}e^{r_nt}$$

and using the variation of parameters technique suggests we seek a solution to the nonhomogeneous system of the form

$$x_p(t) = c_1(t)\xi^{(1)}e^{r_1t} + \cdots + c_n(t)\xi^{(n)}e^{r_nt}$$


Direct substitution back into $x' = P(t)x + g(t)$ yields

$$(r_1c_1(t)\xi^{(1)}e^{r_1t} + \cdots + r_nc_n(t)\xi^{(n)}e^{r_nt}) + (c_1'(t)\xi^{(1)}e^{r_1t} + \cdots + c_n'(t)\xi^{(n)}e^{r_nt}) = (P(t)c_1(t)\xi^{(1)}e^{r_1t} + \cdots + P(t)c_n(t)\xi^{(n)}e^{r_nt}) + g(t)$$



Because each \xi^{(k)} is an eigenvector of P(t) with eigenvalue r_k, multiplying P(t) by \xi^{(k)} gives
back r_k \xi^{(k)}; each term P(t)\,c_k(t)\,\xi^{(k)} e^{r_k t} on the right-hand side therefore reduces to
r_k c_k(t)\,\xi^{(k)} e^{r_k t}, since \xi^{(k)} e^{r_k t} is already part of the solution to the
homogeneous system. Therefore


    \bigl(r_1 c_1(t)\,\xi^{(1)} e^{r_1 t} + \cdots + r_n c_n(t)\,\xi^{(n)} e^{r_n t}\bigr) + \bigl(c_1'(t)\,\xi^{(1)} e^{r_1 t} + \cdots + c_n'(t)\,\xi^{(n)} e^{r_n t}\bigr) = \bigl(r_1 c_1(t)\,\xi^{(1)} e^{r_1 t} + \cdots + r_n c_n(t)\,\xi^{(n)} e^{r_n t}\bigr) + g(t)

and, after the matching terms cancel,

    c_1'(t)\,\xi^{(1)} e^{r_1 t} + \cdots + c_n'(t)\,\xi^{(n)} e^{r_n t} = g(t)


The resulting equation can be rewritten in matrix form as

    \begin{pmatrix} \xi_{11} & \cdots & \xi_{1n} \\ \vdots & & \vdots \\ \xi_{n1} & \cdots & \xi_{nn} \end{pmatrix} \begin{pmatrix} c_1'(t)\,e^{r_1 t} \\ \vdots \\ c_n'(t)\,e^{r_n t} \end{pmatrix} = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}
To solve for c_1'(t), \ldots, c_n'(t), we must use Cramer's Rule to solve Ax = b for x, where

    A = \begin{pmatrix} \xi_{11} & \cdots & \xi_{1n} \\ \vdots & & \vdots \\ \xi_{n1} & \cdots & \xi_{nn} \end{pmatrix}, \quad x = \begin{pmatrix} c_1'(t)\,e^{r_1 t} \\ \vdots \\ c_n'(t)\,e^{r_n t} \end{pmatrix}, \quad b = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}


Cramer's Rule states that the system has a unique solution given by

    x_k = \frac{\det(B_k)}{\det(A)} \quad \text{for } k = 1, \ldots, n

where B_k denotes the matrix A with its k-th column replaced by the vector b.

Therefore

    c_1'(t)\,e^{r_1 t} = \frac{\begin{vmatrix} g_1(t) & \cdots & \xi_{1n} \\ \vdots & & \vdots \\ g_n(t) & \cdots & \xi_{nn} \end{vmatrix}}{\begin{vmatrix} \xi_{11} & \cdots & \xi_{1n} \\ \vdots & & \vdots \\ \xi_{n1} & \cdots & \xi_{nn} \end{vmatrix}}, \quad \ldots, \quad c_n'(t)\,e^{r_n t} = \frac{\begin{vmatrix} \xi_{11} & \cdots & g_1(t) \\ \vdots & & \vdots \\ \xi_{n1} & \cdots & g_n(t) \end{vmatrix}}{\begin{vmatrix} \xi_{11} & \cdots & \xi_{1n} \\ \vdots & & \vdots \\ \xi_{n1} & \cdots & \xi_{nn} \end{vmatrix}}


Thus,

    c_1'(t)\,e^{r_1 t} = a_1 g_1(t) + \cdots + a_n g_n(t) \;\Longrightarrow\; c_1'(t) = [a_1 g_1(t) + \cdots + a_n g_n(t)]\,e^{-r_1 t}
    \vdots
    c_n'(t)\,e^{r_n t} = b_1 g_1(t) + \cdots + b_n g_n(t) \;\Longrightarrow\; c_n'(t) = [b_1 g_1(t) + \cdots + b_n g_n(t)]\,e^{-r_n t}


for constants a_1, \ldots, a_n and b_1, \ldots, b_n determined by expanding the determinants above. To
obtain the general solution, integrate both sides of the above equations to get c_1(t), \ldots, c_n(t),
substitute them into x_p(t) = c_1(t)\,\xi^{(1)} e^{r_1 t} + \cdots + c_n(t)\,\xi^{(n)} e^{r_n t} to find the particular solution, and substitute x_p(t) into
x = c_1 x^{(1)}(t) + \cdots + c_n x^{(n)}(t) + x_p(t) to find the general solution for the system. The following
examples will help demonstrate how to solve systems of nonhomogenous differential equations
and lead into an application of nonhomogenous systems; a Maple sketch of the procedure follows.
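
Before working the examples by hand, it may help to see how the steps above could be carried
out in Maple. The commands below are a minimal sketch, not the original worksheet; the matrix
and forcing term are borrowed from Example 1 below, and the LinearAlgebra package is assumed:

> with(LinearAlgebra):
> P := Matrix([[0, 1], [-2, 3]]):                        # coefficient matrix of the homogenous part
> g := Vector([0, 3*exp(t)]):                            # nonhomogenous term g(t)
> r, xi := Eigenvectors(P):                              # eigenvalues r_k and eigenvectors xi^(k)
> M := xi . DiagonalMatrix([exp(r[1]*t), exp(r[2]*t)]):  # columns are xi^(k) e^(r_k t)
> cp := LinearSolve(M, g):                               # the Cramer's Rule step: solves M c'(t) = g(t)
> c := map(int, cp, t):                                  # integrate each c_k'(t)
> xp := simplify(M . c);                                 # particular solution x_p(t)

Each line mirrors one step of the derivation: form the fundamental matrix, solve for the c_k'(t),
integrate, and assemble x_p(t).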


Example 1: Solve the following 2 x 2 system for x

    x_1' = x_2
    x_2' = -2x_1 + 3x_2 + 3e^{t}
    \quad\Longleftrightarrow\quad
    x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x + \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the

homogenous part of the system


    x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x


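The Maple input and output appeared as images in the original document and are not reproduced
here; a minimal reconstruction of the computation, assuming the LinearAlgebra package, is:

> with(LinearAlgebra):
> A := Matrix([[0, 1], [-2, 3]]):
> Eigenvectors(A);   # eigenvalues 1 and 2, with eigenvectors (1, 1) and (1, 2)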




                                                                     1             1
The resulting eigenvalues and eigenvectors are r1  1, r2  2,  1    , and  2    . The
                                                                     1              2

Wronskian W [ x(1) , x(2) ]  e12t  0 thus, the general solution to the homogenous part of the system

is


                                                1        1
                                       xh  c1   et  c2   e 2t
                                                1         2


To find the particular solution of the nonhomogenous part of the system, we will use the

variation of parameters technique to find a solution of the above equation of the form


                                                   1             1
                                    x p  c1 (t )   et  c2 (t )   e 2t
                                                   1              2




                                                                                                    38
 0 1        0 
We will first substitute x p for x and x directly into x         x   t  to get the following
                                                              2 3       3e 

expression


         1              1              1                1         0 1  1                0 1  1                      0 
                                                      
c1 (t )   et  c1 (t )   et  2c2 (t )   e 2t  c2 (t )   e 2t          c1 (t )e
                                                                                              t
                                                                                                          c2 (t )e
                                                                                                                        2t
                                                                                                                                   t 
         1              1               2                2        2 3  1               2 3  2                     3e 
                                                                     
                   1              1              1                1                 1              1          0 
                                                                
          c1 (t )   et  c1 (t )   et  2c2 (t )   e 2t  c2 (t )   e 2t  c1 (t )   et  2c2 (t )   e 2t    t 
                   1              1               2                2                1               2         3e 
                                                                    
                                                        1             1         0 
                                                                
                                               c1 (t )   et  c2 (t )   e 2t   t 
                                                        1              2        3e 



The final expression given above can be written in matrix notation as

    \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} c_1'(t)e^{t} \\ c_2'(t)e^{2t} \end{pmatrix} = \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}

To solve for c_1'(t) and c_2'(t), we will apply Cramer's Rule; the denominator is the determinant
of the eigenvector matrix, which equals 1 here, so

    c_1'(t)e^{t} = \frac{\begin{vmatrix} 0 & 1 \\ 3e^{t} & 2 \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix}} = -3e^{t} \quad \text{and} \quad c_2'(t)e^{2t} = \frac{\begin{vmatrix} 1 & 0 \\ 1 & 3e^{t} \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix}} = 3e^{t}


Thus

    c_1'(t) = -3e^{t} \cdot e^{-t} = -3 \quad \text{and} \quad c_2'(t) = 3e^{t} \cdot e^{-2t} = 3e^{-t}


To solve for c_1(t) and c_2(t), integrate both sides of both equations so that

    c_1(t) = -3t \quad \text{and} \quad c_2(t) = -3e^{-t}

and substituting into the particular solution x_p = c_1(t)\begin{pmatrix}1\\1\end{pmatrix}e^{t} + c_2(t)\begin{pmatrix}1\\2\end{pmatrix}e^{2t} yields

    x_p = -3t\begin{pmatrix}1\\1\end{pmatrix}e^{t} - 3e^{-t}\begin{pmatrix}1\\2\end{pmatrix}e^{2t} = -3te^{t}\begin{pmatrix}1\\1\end{pmatrix} - 3e^{t}\begin{pmatrix}1\\2\end{pmatrix}


Therefore, the general solution to the nonhomogenous system is

    x = x_h + x_p = c_1\begin{pmatrix}1\\1\end{pmatrix}e^{t} + c_2\begin{pmatrix}1\\2\end{pmatrix}e^{2t} - 3te^{t}\begin{pmatrix}1\\1\end{pmatrix} - 3e^{t}\begin{pmatrix}1\\2\end{pmatrix}


Example 2: Solve the following 2 x 2 system for x

    x_1' = -2x_1 + x_2 + e^{-t}
    x_2' = x_1 - 2x_2 + 3t
    \quad\Longleftrightarrow\quad
    x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x + \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix}


We will begin the example by using Maple to find the eigenvalues and eigenvectors of the

homogenous part of the system


                                                          2 1 
                                                    x        x
                                                          1 2 


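As before, the original Maple session is not reproduced; a sketch of the equivalent computation,
assuming the LinearAlgebra package, is:

> with(LinearAlgebra):
> A := Matrix([[-2, 1], [1, -2]]):
> Eigenvectors(A);   # eigenvalues -1 and -3, with eigenvectors (1, 1) and (1, -1)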




                                                                        1            1
The resulting eigenvalues and eigenvectors are r1  1, r2  3,  1    , and  2    . The
                                                                        1             1

Wronskian W [ x(1) , x(2) ]  e12t  0 thus, the general solution to the homogenous part of the system

is


                                                1          1
                                       xh  c1   e t  c2   e 3t
                                                1           1


To find the particular solution of the nonhomogenous part of the system, we will use the

variation of parameters technique to find a solution of the above equation of the form


                                                  1               1
                                   x p  c1 (t )   e t  c2 (t )   e 3t
                                                  1                1


                                                  2 1   et 
Substituting x p for x and x directly into x         x    to get the following
                                                  1 2   3t 


                                          1              1         et 
                                   (t )   et  c2 (t )   e3t   
                                  c1                
                                          1               1       3t 


The final expression above can be written in matrix notation as

    \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} c_1'(t)e^{-t} \\ c_2'(t)e^{-3t} \end{pmatrix} = \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix}

To solve for c_1'(t) and c_2'(t), we will apply Cramer's Rule to find

    c_1'(t)e^{-t} = \frac{\begin{vmatrix} e^{-t} & 1 \\ 3t & -1 \end{vmatrix}}{-2} = \frac{e^{-t} + 3t}{2} \quad \text{and} \quad c_2'(t)e^{-3t} = \frac{\begin{vmatrix} 1 & e^{-t} \\ 1 & 3t \end{vmatrix}}{-2} = \frac{e^{-t} - 3t}{2}

where -2 is the determinant of the eigenvector matrix.


Thus

    c_1'(t) = \frac{1}{2} + \frac{3te^{t}}{2} \quad \text{and} \quad c_2'(t) = \frac{e^{2t}}{2} - \frac{3te^{3t}}{2}


To solve for c_1(t) and c_2(t), integrate both sides of both equations so that

    c_1(t) = \frac{t}{2} + \frac{3te^{t}}{2} - \frac{3e^{t}}{2} \quad \text{and} \quad c_2(t) = \frac{e^{2t}}{4} - \frac{te^{3t}}{2} + \frac{e^{3t}}{6}


                                                          1               1
and substituting into the partial solution x p  c1 (t )   e t  c2 (t )   e 3t yields
                                                          1                1


                            3tet 3et t 1 t   e  3tet 3et   1  3t
                                                     t 2

                      xp          +   e                
                                                                    e
                            2     2  2   1    4       2   2   1
                                                                
                                               
                                3t 3 tet  1  et 3te2t 3e2t   1 
                          xp    +                          
                               2 2    2   1  4      2     2   1


Therefore, the general solution to the nonhomogenous system is

    x = x_h + x_p = c_1\begin{pmatrix}1\\1\end{pmatrix}e^{-t} + c_2\begin{pmatrix}1\\-1\end{pmatrix}e^{-3t} + \left(\frac{3t}{2} - \frac{3}{2} + \frac{te^{-t}}{2}\right)\begin{pmatrix}1\\1\end{pmatrix} + \left(\frac{e^{-t}}{4} - \frac{t}{2} + \frac{1}{6}\right)\begin{pmatrix}1\\-1\end{pmatrix}




Section 5: Application of Systems of Differential Equations – Arms Races

(Nonhomogenous Systems of Equations)


         In the previous section, we discussed how to solve systems of differential equations that

were nonhomogenous using a variation of parameters technique. Now, we can apply that

knowledge of solving systems with nonhomogenous equations to solve a model that illustrates an

arms race between two competing nations. L.F. Richardson, an English meteorologist, first

proposed this model (also known as the Richardson Model) that tried to mathematically explain

an arms race between two rival nations. Richardson himself seemed to have believed that his

perceptions relating to the way nations compete militarily might have been useful in preventing

the outbreak of hostilities in World War II (Brown, 2007, p. 60). In the model, both nations are
self-defensive: each maintains an army and stockpiles weapons to protect itself, and each views an
expansion of the other's army as a provocation. Therefore, both nations will spend money (in
billions of dollars) on armaments x and y, which are functions of time t measured in years. x(t)
and y(t) represent the yearly rate of armament expenditures of the two nations in some standard
unit. Richardson then made the following assumptions about his model:


      • The expenditure for armaments of each country will increase at a rate that is proportional
        to the other country's expenditure (each nation's mutual fear rate is directly proportional
        to the expenditure of the other nation) (Rainville, Bedient, & Bedient, 1997, p. 228).

      • The expenditure for armaments of each country will decrease at a rate that is proportional
        to its own expenditure (extensive armament expenditures create a drag on the nation's
        economy) (Rainville, Bedient, & Bedient, 1997, p. 228).

      • The rate of change of arms expenditure for a country has a constant component that
        measures the level of antagonism of that country toward the other (Rainville, Bedient, &
        Bedient, 1997, p. 228).

      • The effects of the three previous assumptions are additive (Rainville, Bedient, & Bedient,
        1997, p. 228).


The previous assumptions make up the differential equations of the arms race system, denoted by

    \frac{dx}{dt} = ay - mx + r \qquad \text{for } x(0) = x_0
    \frac{dy}{dt} = bx - ny + s \qquad \text{for } y(0) = y_0


where a, m, b, and n are all positive constants. The positive terms ay and bx represent the drive to
spend more money on arms due to the level of spending of the other nation, and the negative
terms mx and ny reflect a nation's desire to inhibit future military spending because of the
economic burden of its own spending. The constants r and s, however, can take any value because
they represent the attitudes of each nation toward the other (negative values represent feelings of
good will while positive values represent feelings of distrust). The initial values x(0) and y(0)
represent the initial amount of money (in billions of dollars) each nation will spend on armaments.
The system can be simplified into


                                         x(t )  mx  ay  r
                                          y(t )  bx  ny  s


and is expressed in matrix notation as

    \begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} -m & a \\ b & -n \end{pmatrix}\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} + \begin{pmatrix} r \\ s \end{pmatrix} \quad\Longleftrightarrow\quad X' = P(t)X + B
       To solve the system, we will use the knowledge from the previous section to develop
general solutions to the homogenous system. For the nonhomogenous part of the system, the
particular solution will be a constant solution of the form \begin{pmatrix} f \\ g \end{pmatrix}, because the vector B is made up of
constants; this makes the variation of parameters step much easier. Lastly, the initial values
(trajectories) of the solution represent the starting amount of money each
country will be spending on armaments.
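
Equivalently, the constant solution satisfies 0 = P(t)X + B, so X = -P^{-1}B whenever P is
invertible. A sketch of this computation in Maple (illustrative only, with the entries left
symbolic) is:

> with(LinearAlgebra):
> P := Matrix([[-m, a], [b, -n]]):   # m, a, b, n are unassigned symbols here
> B := Vector([r, s]):
> Xeq := LinearSolve(P, -B);         # the constant solution (f, g)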


       General solutions to the arms race system will represent one of a few types of races: a
stable arms race, a runaway arms race, disarmament, or a race that ends in disarmament, runaway,
or stability depending on the initial values. The following examples will demonstrate each of the
above-mentioned arms races, along with slope fields to graphically represent the races.


Example 1: A Runaway Arms Race


The following system will result in a runaway arms race:


                         x(t )  2 x  4 y  8        2 4     8
                                                 X         X  
                         y(t )  4 x  2 y  2         4 2     2


       To find the solution to this arms race, we will first find the general solution to the

homogenous part of the system using Maple:

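The original Maple input appeared as images; a reconstruction of the computation, assuming the
LinearAlgebra package, is:

> with(LinearAlgebra):
> A := Matrix([[2, 4], [4, 2]]):
> Eigenvectors(A);   # eigenvalues -2 and 6, with eigenvectors (1, -1) and (1, 1)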
 1                      1
Therefore, r1  2 ,  1    , r2  6 , and  2    . The Wronskian W [ x(1) , x(2) ]  2e4t  0
                           1                       1 

thus, the general solution to the homogenous part of the arms race is


                                                1          1
                                       xh  c1   e 2t  c2   e 6t
                                                1           1


        As mentioned in the beginning of this section, the nonhomogenous system
X' = P(t)X + B has a constant solution of the form \begin{pmatrix} f \\ g \end{pmatrix} because B is a vector of constants, so
the solution should also be a vector made up of constants. Therefore, in the equation
X' = P(t)X + B, X(t) = \begin{pmatrix} f \\ g \end{pmatrix} can be substituted for X and X', where

    P(t) = \begin{pmatrix} 2 & 4 \\ 4 & 2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} -8 \\ 2 \end{pmatrix}


to get the following expression

    0 = \begin{pmatrix} 2 & 4 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} -8 \\ 2 \end{pmatrix} \;\Longleftrightarrow\; \begin{matrix} 2f + 4g - 8 = 0 \\ 4f + 2g + 2 = 0 \end{matrix} \;\Longrightarrow\; x_n = \begin{pmatrix} f \\ g \end{pmatrix} = \begin{pmatrix} -2 \\ 3 \end{pmatrix}


Therefore the general solution of the nonhomogenous system is

    x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-2t} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{6t} + \begin{pmatrix} -2 \\ 3 \end{pmatrix}
Note that \lim_{t \to \infty} x(t) = \infty and \lim_{t \to \infty} y(t) = \infty whenever c_2 > 0, since the e^{6t} term dominates.
Thus we would predict that the rate at which each nation spends money on armaments increases
without bound, resulting in an arms race.


       The direction field for the nonhomogenous system was plotted with the initial conditions
x_0 = 5, y_0 = 2 and x_0 = 2, y_0 = 5 marked (the plot itself is not reproduced here). The direction
field of the system shows that for any initial value, the solution goes to \infty as t \to \infty. Thus, we have a
runaway arms race.
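
A direction field like the one described can be produced with Maple's DEtools package; the time
and plotting ranges below are illustrative choices, not values from the original worksheet:

> with(DEtools):
> DEplot([diff(x(t), t) = 2*x(t) + 4*y(t) - 8, diff(y(t), t) = 4*x(t) + 2*y(t) + 2],
>        [x(t), y(t)], t = 0 .. 2, [[x(0) = 5, y(0) = 2], [x(0) = 2, y(0) = 5]],
>        x = -10 .. 10, y = -10 .. 10, arrows = medium);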


       If we wanted to solve the system with a given initial condition such as x_0 = 5, y_0 = 2,
we would set up the general solution as

    \begin{pmatrix} 5 \\ 2 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-2(0)} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{6(0)} + \begin{pmatrix} -2 \\ 3 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + \begin{pmatrix} -2 \\ 3 \end{pmatrix}


                                                                                                    47
and solve for c_1 and c_2. Therefore

    \begin{matrix} c_1 + c_2 - 2 = 5 \\ -c_1 + c_2 + 3 = 2 \end{matrix} \;\Longrightarrow\; \begin{matrix} c_1 + c_2 = 7 \\ -c_1 + c_2 = -1 \end{matrix} \;\Longrightarrow\; c_1 = 4 \text{ and } c_2 = 3




Thus, the final solution with the initial conditions given is

    x = 4 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-2t} + 3 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{6t} + \begin{pmatrix} -2 \\ 3 \end{pmatrix}

or

    x(t) = 4e^{-2t} + 3e^{6t} - 2
    y(t) = -4e^{-2t} + 3e^{6t} + 3


The initial values specify how much each nation will initially spend on armaments in billions of
dollars. Using initial values when solving an arms race system leads to a specific solution
describing the race instead of a family of general solutions describing all cases of the system.


Example 2: A Stable Arms Race


The following system will result in a stable arms race:


                          x(t )  5 x  2 y  1        5 2     1
                                                  X         X  
                          y(t )  4 x  3 y  2         4 3      2


       To find the solution to this arms race, we will first find the general solution to the

homogenous part of the system using Maple:




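A reconstruction of the eigenvalue computation (the original Maple input was an image):

> with(LinearAlgebra):
> A := Matrix([[-5, 2], [4, -3]]):
> Eigenvectors(A);   # eigenvalues -7 and -1, with eigenvectors (1, -1) and (1, 2)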




                           1                       1
Therefore, r1  7 ,  1    , r2  1 , and  2    . The Wronskian W [ x(1) , x(2) ]  3e8t  0
                            1                      2

thus, the general solution to the homogenous part of the arms race is


                                              1             1
                                       x  c1   e 7 t  c2   e  t
                                               1            2


        The general solution to the nonhomogenous part of the arms race will be found by
substituting X(t) = \begin{pmatrix} f \\ g \end{pmatrix} into X' = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}. Therefore

    0 = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} 1 \\ 2 \end{pmatrix} \;\Longleftrightarrow\; \begin{matrix} -5f + 2g + 1 = 0 \\ 4f - 3g + 2 = 0 \end{matrix} \;\Longrightarrow\; f = 1 \text{ and } g = 2


                                        1
and the solution of that system is xn    . Thus the general solution of the nonhomogenous
                                         2

system is


                                               1             1        1
                              x  xh  xn  c1   e 7 t  c2   e t   
                                                1            2        2




Note that \lim_{t \to \infty} x(t) = 1 and \lim_{t \to \infty} y(t) = 2 because, in both equations, the terms containing
e^{-7t} and e^{-t} go to 0 as t \to \infty. All that is left from the differential equations are the constant
terms x(t) = 1 and y(t) = 2, which is the point that solutions from any initial values converge to.


       The direction field with a few trajectories denoting the initial values for the
nonhomogenous system was plotted (the plot itself is not reproduced here).
The direction field of the system shows that for any initial value, the solution approaches the
point (1, 2) as t \to \infty. Thus, we have a stable arms race.
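
A sketch of the corresponding DEplot call (the initial values shown are illustrative, since the
original plot's trajectories are not recoverable):

> with(DEtools):
> DEplot([diff(x(t), t) = -5*x(t) + 2*y(t) + 1, diff(y(t), t) = 4*x(t) - 3*y(t) + 2],
>        [x(t), y(t)], t = 0 .. 5, [[x(0) = 4, y(0) = 0], [x(0) = 0, y(0) = 4]],
>        x = 0 .. 5, y = 0 .. 5, arrows = medium);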




Example 3: Disarmament


The following system


                         x(t )  4 x  y  1        4 1      1 
                                               X         X  
                         y(t )  x  y  2           1 1      2 


will result in disarmament between the competing nations for all initial values.


       For the sake of simplicity, only the graph of this system and the solution produced by
Maple will be shown, because the eigenvalues and eigenvectors generated from P(t) involve
complicated radicals that would be difficult to manipulate by hand. The general solution to the
system given by the Maple output follows.

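The original Maple input and output were embedded as images; the computation can be
reconstructed with dsolve, whose closed-form answer involves the radicals mentioned above:

> sys := diff(x(t), t) = -4*x(t) + y(t) + 1, diff(y(t), t) = x(t) - y(t) + 2:
> dsolve({sys});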




The closed-form general solution returned by Maple is very complicated, but \lim_{t \to \infty} x(t) = 1 and
\lim_{t \to \infty} y(t) = 3, showing that the nations will eventually reach a point in time where they
decrease the rate at which they spend money on armaments until they are spending no money on
the arms race. The graph of the system is much more effective at demonstrating an arms race
that ends in disarmament.


       The direction field with a few trajectories denoting the initial values for the
nonhomogenous system was plotted (the plot itself is not reproduced here).
                                                                                                  52
For the initial values represented by the blue trajectories in the direction field, the
trajectories approach the point (1, 3) as t \to \infty, resulting in disarmament for any
initial value chosen for the system.
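
A sketch of the corresponding direction-field command (ranges and initial values are illustrative):

> with(DEtools):
> DEplot([diff(x(t), t) = -4*x(t) + y(t) + 1, diff(y(t), t) = x(t) - y(t) + 2],
>        [x(t), y(t)], t = 0 .. 10, [[x(0) = 0, y(0) = 5], [x(0) = 5, y(0) = 0]],
>        x = 0 .. 6, y = 0 .. 6, arrows = medium);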



Example 4: Disarmament/Runaway Arms Race/Stable Arms Race


The following system


                          x(t )  2 x  4 y  2         2 4      2 
                                                   X         X  
                          y(t )  4 x  2 y  2          4 2      2 


will result in disarmament if x_0 + y_0 < 2, a runaway arms race if x_0 + y_0 > 2, or a stable arms
race if x_0 + y_0 = 2.


        To find the solution to this arms race, we will first find the general solution to the

homogenous part of the system using Maple:

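A reconstruction of the eigenvalue computation (the original Maple input was an image):

> with(LinearAlgebra):
> A := Matrix([[-2, 4], [4, -2]]):
> Eigenvectors(A);   # eigenvalues 2 and -6, with eigenvectors (1, 1) and (1, -1)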




                           1                      1
Therefore, r1  2 ,  1    , r2  6 , and  2    . The Wronskian W [ x(1) , x(2) ]  2e4t  0
                           1                       1 

thus, the general solution to the homogenous part of the arms race is


                                                                                                         53
 1          1
                                     xh  c1   e 2t  c2   e 6t
                                              1           1


       The general solution to the nonhomogenous part of the arms race will be found by
substituting X(t) = \begin{pmatrix} f \\ g \end{pmatrix} into X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} -2 \\ -2 \end{pmatrix}. Therefore

    0 = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} -2 \\ -2 \end{pmatrix} \;\Longrightarrow\; f = 1 \text{ and } g = 1

and the solution of the system is x_n = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Thus the general solution of the nonhomogenous
system is

    x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}


The direction field with a few trajectories denoting the initial values for the nonhomogenous
system was plotted (the plot itself is not reproduced here); trajectories starting with
x_0 + y_0 > 2 run away, those with x_0 + y_0 < 2 head toward disarmament, and those with
x_0 + y_0 = 2 approach the stable point (1, 1).
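
A sketch of a DEplot call for this system, with illustrative initial values chosen on the line
x + y = 2 and on both sides of it:

> with(DEtools):
> DEplot([diff(x(t), t) = -2*x(t) + 4*y(t) - 2, diff(y(t), t) = 4*x(t) - 2*y(t) - 2],
>        [x(t), y(t)], t = 0 .. 2, [[x(0) = 0, y(0) = 1], [x(0) = 3, y(0) = 1], [x(0) = 0.5, y(0) = 1.5]],
>        x = -2 .. 4, y = -2 .. 4, arrows = medium);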
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations
Senior Seminar:  Systems of Differential Equations

More Related Content

What's hot

Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...mathsjournal
 
Quasi Lie systems and applications
Quasi Lie systems and applicationsQuasi Lie systems and applications
Quasi Lie systems and applicationsdelucasaraujo
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Alexander Decker
 
11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...Alexander Decker
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Lesson 15: Exponential Growth and Decay (slides)
Lesson 15: Exponential Growth and Decay (slides)Lesson 15: Exponential Growth and Decay (slides)
Lesson 15: Exponential Growth and Decay (slides)Matthew Leingang
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESTahia ZERIZER
 
Time Series Analysis
Time Series AnalysisTime Series Analysis
Time Series AnalysisAmit Ghosh
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Logics of the laplace transform
Logics of the laplace transformLogics of the laplace transform
Logics of the laplace transformTarun Gehlot
 
tensor-decomposition
tensor-decompositiontensor-decomposition
tensor-decompositionKenta Oono
 
A current perspectives of corrected operator splitting (os) for systems
A current perspectives of corrected operator splitting (os) for systemsA current perspectives of corrected operator splitting (os) for systems
A current perspectives of corrected operator splitting (os) for systemsAlexander Decker
 
Lecture on solving1
Lecture on solving1Lecture on solving1
Lecture on solving1NBER
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Fabian Pedregosa
 
Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2Fabian Pedregosa
 
Intro probability 4
Intro probability 4Intro probability 4
Intro probability 4Phong Vo
 

What's hot (20)

Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
 
Quasi Lie systems and applications
Quasi Lie systems and applicationsQuasi Lie systems and applications
Quasi Lie systems and applications
 
03 lect5randomproc
03 lect5randomproc03 lect5randomproc
03 lect5randomproc
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...
 
11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...
 
Hw4sol
Hw4solHw4sol
Hw4sol
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Lesson 15: Exponential Growth and Decay (slides)
Lesson 15: Exponential Growth and Decay (slides)Lesson 15: Exponential Growth and Decay (slides)
Lesson 15: Exponential Growth and Decay (slides)
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
 
Time Series Analysis
Time Series AnalysisTime Series Analysis
Time Series Analysis
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Logics of the laplace transform
Logics of the laplace transformLogics of the laplace transform
Logics of the laplace transform
 
tensor-decomposition
tensor-decompositiontensor-decomposition
tensor-decomposition
 
A current perspectives of corrected operator splitting (os) for systems
A current perspectives of corrected operator splitting (os) for systemsA current perspectives of corrected operator splitting (os) for systems
A current perspectives of corrected operator splitting (os) for systems
 
Lecture on solving1
Lecture on solving1Lecture on solving1
Lecture on solving1
 
ma112011id535
ma112011id535ma112011id535
ma112011id535
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2
 
Intro probability 4
Intro probability 4Intro probability 4
Intro probability 4
 

Viewers also liked

Differential equations of first order
Differential equations of first orderDifferential equations of first order
Differential equations of first ordervishalgohel12195
 
Simple harmonic motion
Simple harmonic motionSimple harmonic motion
Simple harmonic motionAdarsh Ang
 
The simple pendulum (using O.D.E)
The simple pendulum (using O.D.E)The simple pendulum (using O.D.E)
The simple pendulum (using O.D.E)darshan231995
 
simple harmonic motion
simple harmonic motionsimple harmonic motion
simple harmonic motionsaba majeed
 
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJ
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJAPPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJ
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJZuhair Bin Jawaid
 
First order linear differential equation
First order linear differential equationFirst order linear differential equation
First order linear differential equationNofal Umair
 
Ode powerpoint presentation1
Ode powerpoint presentation1Ode powerpoint presentation1
Ode powerpoint presentation1Pokkarn Narkhede
 

Viewers also liked (10)

Differential equations of first order
Differential equations of first orderDifferential equations of first order
Differential equations of first order
 
Shm 1
Shm 1Shm 1
Shm 1
 
Maths 3 ppt
Maths 3 pptMaths 3 ppt
Maths 3 ppt
 
Simple harmonic motion
Simple harmonic motionSimple harmonic motion
Simple harmonic motion
 
The simple pendulum (using O.D.E)
The simple pendulum (using O.D.E)The simple pendulum (using O.D.E)
The simple pendulum (using O.D.E)
 
simple harmonic motion
simple harmonic motionsimple harmonic motion
simple harmonic motion
 
Differential equations
Differential equationsDifferential equations
Differential equations
 
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJ
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJAPPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJ
APPLICATIONS OF DIFFERENTIAL EQUATIONS-ZBJ
 
First order linear differential equation
First order linear differential equationFirst order linear differential equation
First order linear differential equation
 
Ode powerpoint presentation1
Ode powerpoint presentation1Ode powerpoint presentation1
Ode powerpoint presentation1
 

Similar to Senior Seminar: Systems of Differential Equations

a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdf
a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdfa) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdf
a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdfpetercoiffeur18
 
Module 2 Lesson 2 Notes
Module 2 Lesson 2 NotesModule 2 Lesson 2 Notes
Module 2 Lesson 2 Notestoni dimella
 
Common Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative SchemesCommon Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative Schemesinventy
 
Free Ebooks Download ! Edhole
Free Ebooks Download ! EdholeFree Ebooks Download ! Edhole
Free Ebooks Download ! EdholeEdhole.com
 
Tensor 1
Tensor  1Tensor  1
Tensor 1BAIJU V
 
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...QUESTJOURNAL
 
Interpolation techniques - Background and implementation
Interpolation techniques - Background and implementationInterpolation techniques - Background and implementation
Interpolation techniques - Background and implementationQuasar Chunawala
 
Synchronizing Chaotic Systems - Karl Dutson
Synchronizing Chaotic Systems - Karl DutsonSynchronizing Chaotic Systems - Karl Dutson
Synchronizing Chaotic Systems - Karl DutsonKarl Dutson
 

Similar to Senior Seminar: Systems of Differential Equations (20)

Linear regression
Linear regressionLinear regression
Linear regression
 
03_AJMS_170_19_RA.pdf
03_AJMS_170_19_RA.pdf03_AJMS_170_19_RA.pdf
03_AJMS_170_19_RA.pdf
 
03_AJMS_170_19_RA.pdf
03_AJMS_170_19_RA.pdf03_AJMS_170_19_RA.pdf
03_AJMS_170_19_RA.pdf
 
Es272 ch5b
Es272 ch5bEs272 ch5b
Es272 ch5b
 
Ch07 7
Ch07 7Ch07 7
Ch07 7
 
Paper06
Paper06Paper06
Paper06
 
a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdf
a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdfa) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdf
a) Use Newton’s Polynomials for Evenly Spaced data to derive the O(h.pdf
 
Linear Algebra and its use in finance:
Linear Algebra and its use in finance:Linear Algebra and its use in finance:
Linear Algebra and its use in finance:
 
Module 2 Lesson 2 Notes
Module 2 Lesson 2 NotesModule 2 Lesson 2 Notes
Module 2 Lesson 2 Notes
 
Common Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative SchemesCommon Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative Schemes
 
Free Ebooks Download ! Edhole
Free Ebooks Download ! EdholeFree Ebooks Download ! Edhole
Free Ebooks Download ! Edhole
 
AJMS_402_22_Reprocess_new.pdf
AJMS_402_22_Reprocess_new.pdfAJMS_402_22_Reprocess_new.pdf
AJMS_402_22_Reprocess_new.pdf
 
lec24.ppt
lec24.pptlec24.ppt
lec24.ppt
 
Introduction to R
Introduction to RIntroduction to R
Introduction to R
 
Ch07 8
Ch07 8Ch07 8
Ch07 8
 
Tensor 1
Tensor  1Tensor  1
Tensor 1
 
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
 
Statistics lab 1
Statistics lab 1Statistics lab 1
Statistics lab 1
 
Interpolation techniques - Background and implementation
Interpolation techniques - Background and implementationInterpolation techniques - Background and implementation
Interpolation techniques - Background and implementation
 
Synchronizing Chaotic Systems - Karl Dutson
Synchronizing Chaotic Systems - Karl DutsonSynchronizing Chaotic Systems - Karl Dutson
Synchronizing Chaotic Systems - Karl Dutson
 

Senior Seminar: Systems of Differential Equations

  • 1. Systems of Differential Equations Joshua Dagenais 12-04-09 Mentor: Dr. Arunas Dagys 1
  • 2. Table of Contents Introduction Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues Section 2: Solving Systems of Differential Equations with Complex Eigenvalues Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues Section 4: Solving Systems of Nonhomogenous Differential Equations Section 5: Application of Systems of Differential Equations – Arms Races Section 6: Application of Systems of Differential Equations – Predator-Prey Model Conclusion References 2
  • 3. Introduction Many laws and principles that help explain the behavior of the natural world are statements or relations that involve rates at which things change. When explained in mathematical terms, the relations become equations and that rates become derivatives. Equations that contain these rates or derivatives are called differential equations. Therefore, systems of ordinary differential equations arise naturally in laws and principles explaining behavior of the natural world involving several dependent variables, each of which is a function of single independent variable. This then becomes a mathematical problem that consists of a system of two or more differential equations. These systems of differential equations that describe these laws or principles are called mathematical models of the process (Boyce & DiPrima, 2001). A system of first order ordinary differential equations is an interesting mathematical concept as it combines 2 different studies of mathematics for its use. By dissecting the phrase, system of first order differential equations, into 2 parts, the 2 different areas of mathematics used to solve these equations can be found. In the system part of the phrase, it involves linear algebra to solve the system of equations and in this case the system of equations consists of first order differential equations. The ladder part of the phrase, first order of differential equations, indicates that solution strategies for solving them will also be involved when solving systems of differential equations. So with linear algebra for systems and differential equations in mind, what other underlying concepts and skills involved with these mathematical concepts must be learned and explained to solve systems of differential equations? Well, for the linear algebra aspect of 3
  • 4. solving systems of differential equations, topics that are to mentioned briefly in this paper include matrices, characteristic equations, roots of the characteristic equations (eigenvalues), eigenvectors, and the diagonalization of a matrix. For the differential equation part of solving systems, topics that are discussed include solving first order differential equations, solving simple diagonal systems ( y '  Dy ), and solutions of the original systems ( x  Cy where x(t )  keat ). When everything mentioned is put together, solutions of different types are found for systems of differential equations and with the help of mathematical software such as Maple, graphs are able to visually represent the answers of these systems (slope fields) to show that there are actually more than one solution called a family of solutions. Also, depending on the types of eigenvalues that are found for the system of differential equations, different methods for solving the systems will be used for eigenvalues that are distinct and real, eigenvalues that are complex, and eigenvalues that are repeated, all of which graphically represented in a different manner. Along with different methods for solving systems of differential equations, methods for solving homogenous and nonhomogenous systems will be explained to help further the scope of this subject. Why do we care about solving systems of differential equations? Well, there are many physical problems that involve a number of separate elements linked together in some manner such as generic application problems which include spring-mass systems, electrical circuits, and interconnected tanks that need solutions of systems of differential equations to be understood and solved. Other, more advanced applications of the theory behind systems of differential equations include the Predator-Prey Model (Lotka-Volterra Model) and the Richardson’s Arms Race Model which connects mathematics with concepts that would have never been able to be explained without such elegant mathematical equations. The Predator-Prey 4
  • 5. Model is a system of nonlinear differential equations (even though it is considered an almost linear system) and the Arms Race Model that uses systems of differential equations that are nonhomogenous. Both models are very interesting applications that will be discussed and explained later on in this paper. Hopefully, this paper will give the reader insight on what systems of linear differential equations are, how to solve them, how to apply them, and how to understand and interpret the answers that are derived from problems. 5
  • 6. Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues In this section, we will be solving systems of differential equations where the eigenvalues found from the characteristic equation are all real and all distinct. In order to do this, we will first take the system  x1  p11 (t ) x1  ...  p1n (t ) xn  g1 (t )  xn  pn1 (t ) x1  ...  pnn (t ) xn  g n (t )   and write it in matrix notation. To do this, we write x1 ,..., xn in vector form:   x1    x     x   n we put the coefficients p11 (t ),..., pnn (t ) in an n x n matrix:  p11 (t ) p1n (t )    P(t )     p (t ) pnn (t )   n1  we again write x1 ,..., xn in vector form:  x1    x   x   n and write g1 (t ),..., gn (t ) in vector form: 6
  • 7.  g1 (t )    g (t )     g (t )   n  Therefore, the resulting equation using the above vector and matrix notation is represented by x  P(t ) x  g (t ) We will first consider homogenous systems where g (t )  0 , thus x  P(t ) x To find the general solution of the above system when P(t ) is a 1 x 1 matrix, the system above reduces to a single first order equation dx  px dt where the solution is x  ce pt . Therefore, to solve any other systems with second order or higher, we will look for solutions of the form x   ert where  is a column vector instead of a constant c (because we are dealing with solutions to more than one differential equation thus giving us multiple constants equating to a vector) and r is an exponent to be solved. Substituting x   ert into both sides of x  P(t ) x gives r ert  P(t ) ert Upon canceling ert , we obtain r  P(t ) or 7
  • 8. ( P(t )  rI )  0 where I is the n x n identity matrix. In order to solve ( P(t )  rI )  0 , we will use theorem 1. Theorem 1: Let A be an n x n matrix of constant real numbers and let X be an n-dimensional column vector. The system of equations AX  0 has nontrivial solutions, that is, X  0 , if and only if the determinant of A is zero. In our case, ( P(t )  rI ) is the n x n matrix represented by A and  is the n-dimensional column vector represented by X. Therefore, in order to find the nontrivial solutions of ( P(t )  rI )  0 , we must take the determinant of ( P(t )  rI )  0 which is represented p11 (t )  r p1n (t ) 0 pn1 (t ) pnn (t )  r Computing the determinant will yield a characteristic equation, which resembles the structure of a polynomial of degree n, where the roots of the characteristic equation, eigenvalues denoted by r, will be computed. After the eigenvalues have been computed, r will be substituted back into ( P(t )  rI )  0 and solved for the nonzero vector,  , which is called the eigenvector of the matrix P(t ) corresponding to the eigenvalue r1 . The eigenvector will be an n x 1 column vector that will have as many values as there are equations to solve for. After finding the eigenvalues and the eigenvectors for those specific values, they will be substituted back into the equation x   ert which will be represented as the following specific solutions 8
  • 9.  x11 (t )   x1k (t )      x (t )   (1)  ,..., x (t )   (k )  ,...  x (t )   x (t )   n1   nk  for the initial system. If the Wronskian of x(1) ,..., x( n) (represented as W [ x(1) ,..., x( n) ] ) does not equal zero, then the general solutions can be represented as a linear combination of the specific solutions x  c1 x(1) (t )   ck x( k ) (t ) The following examples will help illustrate how to solve n x n systems of differential equations with distinct real eigenvalues. The general solution of the given system of equations will be solved for along with a graph that shows the direction field of the answer. Example 1: Solve the following 2 x 2 system for x  x1  3x1  2 x2  x2  2 x1  2 x2 To solve the problem, we rewrite the equations into its matrix form  3 2  x   x  2 2  which is of the form x  P(t ) x where 9
  • 10.  3 2  P (t )     2 2  We then find the eigenvalues of P(t) by finding the characteristic equation and solving for r. Therefore, 3 r 2 det( P(t )  rI )   r 2  r  2  (r  1)(r  2)  0 2 2  r and the eigenvalues of P(t) are r1  1 and r2  2 . Now we compute the eigenvectors for each of their respective eigenvalues. We will compute the nontrivial solutions of 3 r 2   c1      0  2 2  r  c2  For r1  1  3  (1) 2   c1   4 2   c1  4c1  2c2  0  c   0     c   0  2c  c  0  c2  2c1  2 2  (1)   2   2 1   2  1 2 (Note that both of the resulting equations with c1 and c2 are the same). One such solution of the 1 equation is found by choosing c1  1 thus making c2  2 to give the eigenvector  1    .  2 1 Knowing that x( n) (t )   ( n)ernt , it follows that x (1) (t )  c1   e  t is a solution of the initial system.  2 For r2  2 3 2 2   c1   1 2   c1  c1  2c2  0  c   0     c   0  2c  4c  0  c1  2c2  2 2  2   n   2 4  2  1 2 10
By choosing $c_1 = 2$ to solve the equation, $c_2 = 1$. Proper notation for eigenvectors, where possible, avoids fractions when representing the entries of the eigenvector. Therefore, for $r_2 = 2$,

$$\xi^{(2)} = \begin{pmatrix} 2 \\ 1 \end{pmatrix},$$

and a second solution is

$$x^{(2)}(t) = \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t}.$$

Now we check to see if we can represent $x^{(1)}$ and $x^{(2)}$ as a general solution by taking the Wronskian of both specific solutions. The Wronskian of $x^{(1)}(t)$ and $x^{(2)}(t)$ is

$$W[x^{(1)}, x^{(2)}] = \begin{vmatrix} e^{-t} & 2e^{2t} \\ 2e^{-t} & e^{2t} \end{vmatrix} = -3e^{t},$$

which is never equal to zero. It follows that the solutions $x^{(1)}(t)$ and $x^{(2)}(t)$ are linearly independent. Therefore, the general solution of the system $x' = P(t)x$ is

$$x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t}.$$
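As a quick sanity check (not part of the original hand computation), the eigenvalues and eigenvectors above can be reproduced numerically. The following Python sketch assumes only that NumPy is available:

    import numpy as np

    # Coefficient matrix of Example 1: x1' = 3x1 - 2x2, x2' = 2x1 - 2x2.
    P = np.array([[3.0, -2.0],
                  [2.0, -2.0]])

    r, V = np.linalg.eig(P)   # eigenvalues and eigenvectors (as columns)
    print(r)                  # approximately [2. -1.] (order may vary)

    # Rescale each eigenvector so its first entry is 1 for easy comparison.
    print(V / V[0, :])        # columns proportional to (2, 1) and (1, 2)

NumPy returns unit-length eigenvectors in an arbitrary order, so each column is rescaled here to match the integer eigenvectors used in the hand computation.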
All the general solutions (represented by the family of red lines in the direction field), each a combination of $x^{(1)}(t)$ and $x^{(2)}(t)$ with $c_1 \neq 0$ and $c_2 \neq 0$, are asymptotic to the line $x_2 = \frac{1}{2}x_1$ as $t \to \infty$ (the direction of the dominant $e^{2t}$ term) and to the line $x_2 = 2x_1$ as $t \to -\infty$. The blue trajectories represent specific solutions to the system, each trajectory having a different initial value ($x_1(0) = a$ and $x_2(0) = b$, where $a$ and $b$ are any real numbers).

For the remaining examples in this section, the derivation of the final solution will be shown without every intermediate step. The purpose of these examples is to show the variety of systems of differential equations that have distinct real eigenvalues, such as a $3 \times 3$ system and a $2 \times 2$ system with initial conditions given.

Example 2: Solve the following $3 \times 3$ system for $x$:

$$\begin{aligned} x_1' &= x_1 + x_2 + x_3 \\ x_2' &= 2x_1 + x_2 - x_3 \\ x_3' &= -8x_1 - 5x_2 - 3x_3 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} x$$

First, we find the eigenvalues of the coefficient matrix from

$$\det(P(t) - rI) = \begin{vmatrix} 1-r & 1 & 1 \\ 2 & 1-r & -1 \\ -8 & -5 & -3-r \end{vmatrix} = 0$$

and solve the resulting characteristic equation.
Using Maple yields the eigenvalues $r_1 = -2$, $r_2 = 2$, and $r_3 = -1$, with eigenvectors

$$\xi^{(1)} = \begin{pmatrix} 4 \\ -5 \\ -7 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} 3 \\ -4 \\ -2 \end{pmatrix}.$$

The eigenvectors above are the same as the given Maple output but manipulated into proper format, where all entries of each eigenvector are integers and the first nonzero entry is positive. After the eigenvalues and eigenvectors are computed, we find the Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 12e^{-t} \neq 0$; therefore we can substitute all the eigenvalues and eigenvectors found into $x^{(n)} = \xi^{(n)} e^{r_n t}$ and express the solution as a linear combination:

$$x(t) = x^{(1)}(t) + x^{(2)}(t) + x^{(3)}(t) = c_1 \begin{pmatrix} 4 \\ -5 \\ -7 \end{pmatrix} e^{-2t} + c_2 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} e^{2t} + c_3 \begin{pmatrix} 3 \\ -4 \\ -2 \end{pmatrix} e^{-t}$$

Example 3: Solve the following $2 \times 2$ system with initial conditions for $x$:

$$\begin{aligned} x_1' &= 5x_1 - x_2 \\ x_2' &= 3x_1 + x_2 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 5 & -1 \\ 3 & 1 \end{pmatrix} x, \qquad x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$$
We will start off the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, $r_1 = 4$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = 2$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = 2e^{6t} \neq 0$; therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be combined into the general solution

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4t} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2t}.$$

After the general solution has been found, we substitute $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$ into $x(t)$ to get

$$x(0) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4 \cdot 0} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2 \cdot 0} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}.$$

After the equation has been simplified, we multiply $c_1$ and $c_2$ by their respective vectors to yield the following system of equations:

$$c_1 + c_2 = 2 \qquad\text{and}\qquad c_1 + 3c_2 = -1$$
We then solve the system of equations for $c_1$ and $c_2$ to get $c_1 = \frac{7}{2}$ and $c_2 = -\frac{3}{2}$. Substituting back into the general solution gives the specific solution of the system,

$$x(t) = \frac{7}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4t} - \frac{3}{2} \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2t}.$$

The direction field of the general solution, together with a trajectory of the specific solution, shows the different families of solutions of the general solution (denoted by the red arrows); the blue trajectory represents the specific solution to the system for the initial value $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$. Now, having established the basis for solving systems of differential equations, we will delve into the cases where the eigenvalues are not real and/or not distinct.
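Before moving on, the specific solution of Example 3 can be confirmed numerically. The sketch below (an illustrative check, assuming NumPy and SciPy are available) integrates the initial value problem and compares the result with the closed-form answer:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Example 3: x' = P x with x(0) = (2, -1).
    P = np.array([[5.0, -1.0],
                  [3.0,  1.0]])
    x0 = [2.0, -1.0]

    sol = solve_ivp(lambda t, x: P @ x, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)

    def exact(t):
        # x(t) = (7/2)(1,1)e^{4t} - (3/2)(1,3)e^{2t}
        return 3.5*np.array([1.0, 1.0])*np.exp(4*t) - 1.5*np.array([1.0, 3.0])*np.exp(2*t)

    print(sol.y[:, -1])   # numerical solution at t = 1
    print(exact(1.0))     # closed-form solution; the two agree to high accuracy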
Section 2: Solving Systems of Differential Equations with Complex Eigenvalues

In this section, we will use what was previously discussed for solving systems with real and distinct eigenvalues to generate the eigenvalues of an $n \times n$ system of linear homogenous equations with constant coefficients, denoted

$$x' = P(t)x.$$

Now, if $P(t)$ is real, then the coefficients of the characteristic equation for $r$ are real, and any complex eigenvalues must occur in conjugate pairs (Boyce & DiPrima, 2001, p. 384). Therefore, for a $2 \times 2$ system, $r_1 = a + bi$ and $r_2 = a - bi$ would be eigenvalues, where $a$ and $b$ are real. It also follows that the corresponding eigenvectors are complex conjugates of each other, so $r_2 = \bar{r}_1$ and $\xi^{(2)} = \bar{\xi}^{(1)}$. To help visualize this, take the equation that was formed in the previous section,

$$(P(t) - rI)\xi = 0,$$

and substitute $r_1$ and $\xi^{(1)}$ into it to get

$$(P(t) - r_1 I)\xi^{(1)} = 0,$$

which yields one solution of the system. Now, by taking the complex conjugate of the entire equation, the resulting equation becomes

$$(P(t) - \bar{r}_1 I)\bar{\xi}^{(1)} = 0,$$

where $P(t)$ and $I$ are unaffected by the conjugation because all their entries are real. This equation yields another solution, with $r_2 = \bar{r}_1$ and $\xi^{(2)} = \bar{\xi}^{(1)}$.
Now, with the eigenvalues and eigenvectors solved for, we can use Euler's formula to express a solution with real and imaginary parts, just as we did with real solutions. Euler's formula states

$$e^{it} = \cos t + i\sin t.$$

For general complex solutions to a system of differential equations, we will use a modified version of the formula,

$$e^{(\alpha + i\beta)t} = e^{\alpha t}\left(\cos(\beta t) + i\sin(\beta t)\right) = e^{\alpha t}\cos(\beta t) + i\,e^{\alpha t}\sin(\beta t),$$

to find the real-valued solutions to the system. We can choose either $x^{(1)}(t)$ or $x^{(2)}(t)$ to find the two real-valued solutions, because they are conjugates of each other and both yield the same real-valued solutions. Using $x^{(2)}(t)$ with $r_2 = \alpha - i\beta$ and $\xi^{(2)} = a - bi$, where $a$ and $b$ are real vectors, we have

$$x^{(2)}(t) = (a - bi)e^{(\alpha - i\beta)t} = (a - bi)e^{\alpha t}\left(\cos(\beta t) - i\sin(\beta t)\right).$$

Expanding the product results in

$$x^{(2)}(t) = e^{\alpha t}\left(a\cos(\beta t) - ib\cos(\beta t) - ia\sin(\beta t) - b\sin(\beta t)\right),$$

and separating $x^{(2)}(t)$ into its real and imaginary parts yields

$$x^{(2)}(t) = e^{\alpha t}\left(a\cos(\beta t) - b\sin(\beta t)\right) - ie^{\alpha t}\left(a\sin(\beta t) + b\cos(\beta t)\right).$$

If $x^{(2)}(t)$ is written as the sum of two vectors, $x^{(2)}(t) = u(t) - iv(t)$, then the vectors obtained are

$$u(t) = e^{\alpha t}\left(a\cos(\beta t) - b\sin(\beta t)\right) \qquad\text{and}\qquad v(t) = e^{\alpha t}\left(a\sin(\beta t) + b\cos(\beta t)\right).$$
We can disregard the $-i$ in front of $v(t)$ because it is simply a constant multiplier of the vector, and we are only interested in real-valued vector solutions. If we had chosen to work with $x^{(1)}(t)$ instead of $x^{(2)}(t)$, we would have gotten the same result except that $x^{(1)}(t) = u(t) + iv(t)$; again, $i$ is a multiplier of the vector $v(t)$, so the answers for $u(t)$ and $v(t)$ are the same as the ones found above. $u(t)$ and $v(t)$ are the resulting real-valued vector solutions to the system. It is worth mentioning that $u(t)$ and $v(t)$ are linearly independent and can be combined into a single general solution. Therefore, suppose $r_1 = \alpha + i\beta$ and $r_2 = \alpha - i\beta$, while $r_3, \dots, r_n$ are all real and distinct, and let the corresponding eigenvectors be $\xi^{(1)} = a + bi$, $\xi^{(2)} = a - bi$, $\xi^{(3)}, \dots, \xi^{(n)}$ (Boyce & DiPrima, 2001, p. 385). Then the general solution to a system of differential equations with complex eigenvalues is

$$x(t) = c_1 u(t) + c_2 v(t) + c_3 \xi^{(3)} e^{r_3 t} + \dots + c_n \xi^{(n)} e^{r_n t},$$

where $u(t) = e^{\alpha t}(a\cos(\beta t) - b\sin(\beta t))$, $v(t) = e^{\alpha t}(a\sin(\beta t) + b\cos(\beta t))$, and $P(t)$ consists of all real coefficients. It is only when $P(t)$ consists of all real coefficients that complex eigenvalues and eigenvectors occur in conjugate pairs (Boyce & DiPrima, 2001, p. 385). The following examples will help illustrate how to solve $n \times n$ systems of differential equations with complex eigenvalues. Both the complex and real-valued solutions will be given for each example, and some direction fields will be shown to demonstrate the nature of systems with complex eigenvalues.
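The passage from a complex eigenpair to the real-valued solutions $u(t)$ and $v(t)$ is mechanical, so it is easy to automate. The following Python sketch (an illustration, assuming NumPy is available) extracts $\alpha$, $\beta$, $a$, and $b$ from a real matrix with a complex conjugate pair of eigenvalues:

    import numpy as np

    def real_valued_solutions(P):
        """Given a real matrix P with a complex conjugate pair of eigenvalues
        alpha +/- beta*i, return alpha, beta and the real vectors a, b with
        xi = a + b*i the eigenvector for alpha + beta*i.  Then
        u(t) = e^(alpha t)(a cos(beta t) - b sin(beta t)) and
        v(t) = e^(alpha t)(a sin(beta t) + b cos(beta t)) solve x' = P x."""
        r, V = np.linalg.eig(P)
        k = int(np.argmax(r.imag))    # eigenvalue with positive imaginary part
        alpha, beta = r[k].real, r[k].imag
        xi = V[:, k]
        return alpha, beta, xi.real, xi.imag

Note that any nonzero complex multiple of $\xi$ is also an eigenvector, so the vectors $a$ and $b$ returned here may differ from a hand-normalized computation; the resulting $u(t)$ and $v(t)$ still span the same set of real solutions.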
Example 1: Solve the following $2 \times 2$ system for $x$:

$$\begin{aligned} x_1' &= 3x_1 - 2x_2 \\ x_2' &= 4x_1 - x_2 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix} x$$

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, $r_1 = 1 + 2i$ and $r_2 = 1 - 2i$, with

$$\xi^{(1)} = \begin{pmatrix} 1 \\ 1-i \end{pmatrix} \qquad\text{and}\qquad \xi^{(2)} = \begin{pmatrix} 1 \\ 1+i \end{pmatrix}.$$

To get the eigenvectors into proper form from the Maple output, we scaled each eigenvector so that its first entry is a real integer and every entry has integer real and imaginary parts. The Wronskian $W[x^{(1)}, x^{(2)}] = 2e^{2t}\,i \neq 0$; therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be combined into the general solution in complex form:

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1-i \end{pmatrix} e^{(1+2i)t} + c_2 \begin{pmatrix} 1 \\ 1+i \end{pmatrix} e^{(1-2i)t}$$
But we want the real-valued solutions corresponding to this complex general solution, so we will use $x^{(1)}$ to find the real-valued vectors. Therefore,

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ 1-i \end{pmatrix} e^{(1+2i)t}.$$

Using Euler's formula, $x^{(1)}$ becomes

$$\begin{pmatrix} 1 \\ 1-i \end{pmatrix} e^{t}\left(\cos(2t) + i\sin(2t)\right).$$

After Euler's formula has been applied, we expand the product,

$$e^{t}\begin{pmatrix} \cos(2t) + i\sin(2t) \\ \cos(2t) + i\sin(2t) - i\cos(2t) + \sin(2t) \end{pmatrix},$$

and separate the real and imaginary elements into

$$e^{t}\begin{pmatrix} \cos(2t) \\ \cos(2t) + \sin(2t) \end{pmatrix} + ie^{t}\begin{pmatrix} \sin(2t) \\ \sin(2t) - \cos(2t) \end{pmatrix}.$$

The result is the two real-valued solutions of the form $u(t) + iv(t)$, where

$$u(t) = e^{t}\begin{pmatrix} \cos(2t) \\ \cos(2t) + \sin(2t) \end{pmatrix} \qquad\text{and}\qquad v(t) = e^{t}\begin{pmatrix} \sin(2t) \\ \sin(2t) - \cos(2t) \end{pmatrix}.$$

Therefore, the general solution to the system with real-valued solutions is

$$x(t) = c_1 u(t) + c_2 v(t) = c_1 e^{t}\begin{pmatrix} \cos(2t) \\ \cos(2t) + \sin(2t) \end{pmatrix} + c_2 e^{t}\begin{pmatrix} \sin(2t) \\ \sin(2t) - \cos(2t) \end{pmatrix}.$$
The resulting direction field shows families of solutions to the general solution of the system, and the blue trajectories show specific solutions when initial conditions are given. The direction field thus consists of spiraled solutions, with the origin at the center of the spirals; such a point is called a spiral point. The direction of motion is away from the spiral point and the trajectories become unbounded, so the spiral point for this particular system is unstable. There are also systems with complex eigenvalues whose spiral point is stable, because all trajectories approach it as $t$ increases.

Example 2: Solve the following $3 \times 3$ system for $x$:

$$\begin{aligned} x_1' &= x_1 \\ x_2' &= 2x_1 + x_2 - 2x_3 \\ x_3' &= 3x_1 + 2x_2 + x_3 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{pmatrix} x$$
Again, we will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Thus, the eigenvalues are $r_1 = 1$, $r_2 = 1 + 2i$, and $r_3 = 1 - 2i$, and the simplified eigenvectors are

$$\xi^{(1)} = \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix}.$$

Notice that $r_1$ and $\xi^{(1)}$ are already real-valued, so no computations are needed to turn them into a real-valued solution, unlike the complex eigenvalues and eigenvectors. The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 4e^{3t}\,i \neq 0$; therefore the specific solutions $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$ can be combined into the general solution in complex form:

$$x(t) = c_1 \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix} e^{(1+2i)t} + c_3 \begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix} e^{(1-2i)t}$$

To find the real-valued solutions of the general solution, we will use $x^{(2)}(t)$ and Euler's formula in the following equations:
  • 23. 0  0  0   0    (1 2i )t   t t   t   x (t )   i  e (2)   i  e (cos(2t )  i sin(2t ))  e  cos(2t )   ie  sin(2t )  1 1  sin(2t )    cos(2t )          Therefore,  0   0    t   u (t )  e  cos(2t )  and v(t )  e  sin(2t )  t  sin(2t )    cos(2t )      and the general solution to the system with real-valued solutions is 2  0   0    t t   t   x(t )  c1r1  c2u (t )  c3v(t )  c1  3  e  c2e  cos(2t )   c3e  sin(2t )  1 2  sin(2t )    cos(2t )        Now that we know how to solve systems that yield real and/or imaginary eigenvalues and eigenvectors, we will now focus our attention on the next case if a eigenvalue is repeated when found from the characteristic equation. 23
Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues

In this section, we will be solving systems of differential equations where an eigenvalue found from the characteristic equation is repeated. We will still be finding solutions of the equation

$$x' = P(t)x,$$

and will still find at least one eigenvalue/eigenvector pair the way we previously solved systems with distinct eigenvalues. But when solving for the other, repeated eigenvalue, we will see that the second solution takes the form $x = \xi te^{rt} + \eta e^{rt}$, where $\xi$ and $\eta$ are constant vectors.

After finding the first solution of the form $x^{(1)}(t) = \xi^{(1)} e^{rt}$, it may seem intuitive to look for a second solution of the form $x^{(2)}(t) = \xi^{(1)} te^{rt}$, by analogy with how repeated roots are handled when solving a second order differential equation. Substituting this back into $x' = P(t)x$ yields

$$r\xi^{(1)} te^{rt} + \xi^{(1)} e^{rt} = P(t)\xi^{(1)} te^{rt}$$
$$r\xi^{(1)} te^{rt} + \xi^{(1)} e^{rt} - P(t)\xi^{(1)} te^{rt} = 0$$
$$\xi^{(1)} e^{rt} + \left(r\xi^{(1)} - P(t)\xi^{(1)}\right)te^{rt} = 0$$

But for the equation to be satisfied for all $t$, the coefficients of $te^{rt}$ and $e^{rt}$ must each be zero (Boyce & DiPrima, 2001, p. 403). In this case, that would force $\xi^{(1)} = 0$, and thus $x^{(2)} = \xi^{(1)} te^{rt}$ alone is not a solution for the second, repeated eigenvalue. However, the substituted equation does contain a term of the form $\xi te^{rt}$ along with another term of the form $\eta e^{rt}$.
  • 25. r 1tert   1ert  P(t ) 1tert  0 , we see that there is a form of x   tert in the substituted equation along with another term of the form  ert . Therefore, we need to assume that x2   1tert ert Where  and  are constant vectors. Substituting the above expression into x  P(t ) x gives r 1tert   1ert  rert  P(t )( 1tert ert )  r 1tert  ( 1  r )ert  P(t )( 1tert ert ) Equating the coefficients of te rt and ert gives the following conditions P(t ) 1te rt  r 1te rt  0  P(t ) 1  r 1  0  ( P(t )  rI ) 1  0 P(t ) e rt   1e rt  r e rt  0  P(t )  r   1  ( P(t )  rI )   1 for the determination of  1 and . The underlined portions are the important conditions derived from the equation. To solve ( P(t )  rI ) 1  0 , all we do is solve for one of the repeated eigenvalue and eigenvector just like in previous sections. We will solve a matrix equation of the form  p11 (t )  r p1n (t )  1   11   p11 (t )  r 1    p1n (t )  n  11              p (t ) pnn (t )  r n    n   pn1 (t ) 1    pnn (t )  r n   n 1 1  n1     Solving for 1 ,...,n in the above equation will result in the solution of the vector  denoted 25
$$\eta = \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix}.$$

After determining $\xi^{(1)}$ and $\eta$, we substitute them into $x^{(2)}(t)$ to get the second specific solution

$$x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt} + k\xi^{(1)} e^{rt},$$

where the term $k\xi^{(1)} e^{rt}$ arises because $\eta$ is determined only up to an added multiple of $\xi^{(1)}$. The last term can be disregarded because it is a multiple of the first specific solution $x^{(1)}(t) = \xi^{(1)} e^{rt}$, but the first two terms make a new solution of the form

$$x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}.$$

Showing that $W[x^{(1)}, x^{(2)}](t) \neq 0$ proves that $x^{(1)}$ and $x^{(2)}$ are linearly independent, thus allowing us to represent the general solution to the system in the form

$$x = c_1 x^{(1)}(t) + c_2 x^{(2)}(t) + \dots + c_k x^{(k)}(t) = c_1 \xi^{(1)} e^{r_1 t} + c_2\left[\xi^{(1)} te^{r_1 t} + \eta e^{r_1 t}\right] + \dots + c_k \xi^{(k-1)} e^{r_{k-1} t},$$

where $x^{(1)}$ and $x^{(2)}$ correspond to the repeated eigenvalue of multiplicity 2. For the sake of simplicity, we will focus our examples on systems that have repeated eigenvalues of multiplicity 2 only. Also included is an example in which a repeated eigenvalue gives rise to linearly independent eigenvectors of the matrix $P(t)$ (which is easily identifiable using Maple), thus avoiding the complications of solving systems with repeated eigenvalues.
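Solving $(P(t) - rI)\eta = \xi^{(1)}$ takes a little care because the matrix $P(t) - rI$ is singular: the system is consistent but has a free parameter. A symbolic solver handles this directly. The sketch below (illustrative, assuming SymPy is available) anticipates the data of Example 1 below:

    import sympy as sp

    # Example 1 below: double eigenvalue r = 6 with eigenvector xi = (1, 2).
    P = sp.Matrix([[4, 1], [-4, 8]])
    r = 6
    xi = sp.Matrix([1, 2])

    eta1, eta2 = sp.symbols('eta1 eta2')
    eta = sp.Matrix([eta1, eta2])

    # (P - r*I) eta = xi is singular but consistent: one equation, one free unknown.
    solution = sp.solve(list((P - r*sp.eye(2))*eta - xi), [eta1, eta2], dict=True)
    print(solution)   # e.g. [{eta2: 2*eta1 + 1}]; choosing eta1 = 0 gives eta = (0, 1)

The free parameter corresponds exactly to the disposable multiple of $\xi^{(1)}$ discussed above.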
Example 1: Solve the following $2 \times 2$ system for $x$:

$$\begin{aligned} x_1' &= 4x_1 + x_2 \\ x_2' &= -4x_1 + 8x_2 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix} x$$

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Notice that the second eigenvector in the resulting output is $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$, which is of no help in finding the second specific solution of the system. But the results derived from Maple do give us $r_1 = r_2 = 6$, $\xi^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and $x^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{6t}$. We need to use the equation

$$x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$$

to solve for $\eta$ and thus obtain a second specific solution to the system. To find the second specific solution, we substitute $x^{(2)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} + \eta e^{6t}$ into $x' = \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix} x$ to get the following expression:
  • 28.  1  6t 1    4 1  1  6t  4 1   1  6t  e    6te6t   1  6e6t     te   e  2  2  2   4 8  2   4 8  2   4 1  1  6t Multiplying out    te and factoring out a 6 from the result yields  4 8  2   1  6t  1  6t  1  6t  1  6t  4 1   1  6t   e    6te    6e    6te    e  2  2  2  2  4 8  2  1 Canceling out the   6te6t on each side of the equation and rearranging the equation yields  2  4 1   1  6t  1  6t  1  6t     e    6e    e  4 8  2   2  2  1  Factoring out   e6t on the left side of the equation and simplifying gives us  2     4 1    1  6t  1  6t   4 8   6 I    e    e     2    2   4 1   6 0    1  6t  1  6t      e    e   4 8   0 6   2   2   2 1   1   1         4 2  2   2   2 1   1   1        , is of the form ( P(t )  rI )   . 1 The end product of the above expression,   4 2   2   2  In this case 28
  • 29.  4 1   6 0   1 ( P(t )  rI )   1         4 8   0 6    2 Thus, to solve for  , we solve  2 1   1   1  21  2  1 0         4  2  2  1  0 and 2  1       4 2   2   2  1 2 1 (Note that both of the resulting equations with 1 and 2 are the same). After solving for  , we substitute it into x(2) (t )   1tert ert to find the second solution of the system to be 1 0 x (2) (t )    te rt    e rt  2 1 The Wronskian W [ x(1) , x(2) ]  e12t  0 . Therefore the specific solutions x (1) and x (2) can be expressed as the general solution 1  1   0   x(t )  c1   e6t  c2   t     e6t  2  2   1   The resulting direction field showing families of solutions to the general solution to the system is 29
The blue trajectories show specific solutions when initial conditions are given. The origin here is called an improper node. If the eigenvalues were negative, the trajectories would be similar but traversed in the inward direction. An improper node is asymptotically stable or unstable, depending on whether the eigenvalues are negative or positive (Boyce & DiPrima, 2001, p. 404).

Example 2: Solve the following $3 \times 3$ system for $x$:

$$\begin{aligned} x_1' &= x_1 + 2x_2 - x_3 \\ x_2' &= x_2 + x_3 \\ x_3' &= 2x_3 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} x$$

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
  • 31. > > 1  1   t   From the Maple results: r1  r2  1 , r3  2 , x1 (t )   0  e , and x3 (t )  1 e2t . What we need to 0  1     find is the specific solution to x(2) (t ) . In this example, we will use the equation ( P(t )  rI )   1 to solve for  , substitute it into x(2) (t )   1tert ert , and use the shortcut to find out the third specific solution to the system. Therefore  1 2 1   1  1      e t   0  e t ( P(t )  rI )     0 1 1   1I   2  1    0 0 2         3    0   0 2 1 1  1    t   t  0 0 1 2  e   0  e  0 0 1     0   3    and 0 22  3  1   1 1 3  0  1  0,2  ,3  0      2 2 3  0 0   31
Substituting what we found for $\eta$ into $x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$ yields

$$x^{(2)}(t) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} te^{t} + \begin{pmatrix} 0 \\ \tfrac{1}{2} \\ 0 \end{pmatrix} e^{t}.$$

The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = \tfrac{1}{2}e^{4t} \neq 0$. Therefore, the specific solutions $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$ can be combined into the general solution

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} e^{t} + c_3 \left[\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} t + \begin{pmatrix} 0 \\ \tfrac{1}{2} \\ 0 \end{pmatrix}\right] e^{t}.$$

Example 3: Solve the following $3 \times 3$ system for $x$:

$$\begin{aligned} x_1' &= -x_2 + 3x_3 \\ x_2' &= 2x_1 + 3x_2 - 3x_3 \\ x_3' &= 2x_1 + x_2 - x_3 \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 0 & -1 & 3 \\ 2 & 3 & -3 \\ 2 & 1 & -1 \end{pmatrix} x$$

For this example, using Maple can unlock a potential shortcut in solving for the general solution of the above system. Again, we will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
Unlike the other two examples, the Maple output displays two linearly independent eigenvectors for the repeated eigenvalue $r_2 = r_3 = 2$. This is another shortcut for finding eigenvectors of repeated eigenvalues when a math program such as Maple is used to solve systems of differential equations. Therefore,

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix} e^{-2t}, \qquad x^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} e^{2t}, \qquad x^{(3)}(t) = \begin{pmatrix} 3 \\ 0 \\ 2 \end{pmatrix} e^{2t},$$

and the general solution to the system is

$$x = c_1 \begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix} e^{-2t} + \left[c_2 \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} 3 \\ 0 \\ 2 \end{pmatrix}\right] e^{2t}.$$

A more advanced look at systems with repeated eigenvalues would include repeated eigenvalues with multiplicities higher than 2. The equations for such eigenvalues become more detailed and difficult to solve, but we would follow the same thought process used here for repeated eigenvalues of multiplicity 2. For the next section, we will return to our original form
of a differential equation,

$$x_i' = p_{i1}(t)x_1 + \dots + p_{in}(t)x_n + g_i(t),$$

and solve nonhomogenous systems, where $g(t) \neq 0$.
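To close the section, the repeated-eigenvalue solution of Example 1 can be verified symbolically. The sketch below (illustrative, assuming SymPy is available) checks that $x^{(2)}(t) = \xi^{(1)} te^{6t} + \eta e^{6t}$ satisfies $x' = P(t)x$ exactly:

    import sympy as sp

    t = sp.symbols('t')
    P = sp.Matrix([[4, 1], [-4, 8]])
    xi = sp.Matrix([1, 2])    # eigenvector for the double eigenvalue r = 6
    eta = sp.Matrix([0, 1])   # generalized eigenvector found in Example 1

    x2 = (xi*t + eta) * sp.exp(6*t)           # second specific solution
    residual = sp.simplify(x2.diff(t) - P*x2)
    print(residual)                           # Matrix([[0], [0]])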
Section 4: Solving Systems of Nonhomogenous Differential Equations

Unlike the previous sections, where we solved different types of systems of homogeneous differential equations with constant coefficients, this section will focus on solving systems of nonhomogenous differential equations of the form

$$x' = P(t)x + g(t).$$

The following theorem related to nonhomogenous systems tells us where to start the solution process.

Theorem 2: If $x^{(1)}(t), \dots, x^{(n)}(t)$ are linearly independent solutions of the n-dimensional homogenous system $x' = P(t)x$ on the interval $a < t < b$, and if $x_p(t)$ is any solution of the nonhomogenous system $x' = P(t)x + g(t)$ on the interval $a < t < b$, then any solution of the nonhomogenous system can be written

$$x = c_1 x^{(1)}(t) + \dots + c_n x^{(n)}(t) + x_p(t)$$

for a unique choice of the constants $c_1, \dots, c_n$ (Rainville, Bedient, & Bedient, 1997, p. 199).

The theorem states that we need to find a particular solution $x_p(t)$ and add it to the general solution of the homogenous part of the nonhomogenous system. To do that, we will use the variation of parameters technique to find $x_p(t)$ and solve the equation $x' = P(t)x + g(t)$. Solutions of the homogenous part of the nonhomogenous system take the form

$$x = c_1 \xi^{(1)} e^{r_1 t} + \dots + c_n \xi^{(n)} e^{r_n t},$$

and the variation of parameters technique suggests we seek a solution of the nonhomogenous system of the form
$$x_p(t) = c_1(t)\xi^{(1)} e^{r_1 t} + \dots + c_n(t)\xi^{(n)} e^{r_n t}.$$

Direct substitution back into $x' = P(t)x + g(t)$ yields

$$\left(r_1 c_1(t)\xi^{(1)} e^{r_1 t} + \dots + r_n c_n(t)\xi^{(n)} e^{r_n t}\right) + \left(c_1'(t)\xi^{(1)} e^{r_1 t} + \dots + c_n'(t)\xi^{(n)} e^{r_n t}\right) = \left(P(t)c_1(t)\xi^{(1)} e^{r_1 t} + \dots + P(t)c_n(t)\xi^{(n)} e^{r_n t}\right) + g(t).$$

Because each $\xi^{(k)}$ is an eigenvector of $P(t)$, we have $P(t)\xi^{(k)} = r_k \xi^{(k)}$, so each term on the right reproduces the corresponding $r_k c_k(t)\xi^{(k)} e^{r_k t}$ term on the left. Therefore,

$$c_1'(t)\xi^{(1)} e^{r_1 t} + \dots + c_n'(t)\xi^{(n)} e^{r_n t} = g(t).$$

The resulting equation can be rewritten in matrix form as

$$\begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & \ddots & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix} \begin{pmatrix} c_1'(t)e^{r_1 t} \\ \vdots \\ c_n'(t)e^{r_n t} \end{pmatrix} = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}.$$

To solve for $c_1'(t), \dots, c_n'(t)$, we use Cramer's Rule to solve $Ax = b$ for $x$, where

$$A = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & \ddots & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}, \quad x = \begin{pmatrix} c_1'(t)e^{r_1 t} \\ \vdots \\ c_n'(t)e^{r_n t} \end{pmatrix}, \quad b = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}.$$

Cramer's Rule states that the system has a unique solution given by

$$x_k = \frac{\det(B_k)}{\det(A)} \quad\text{for } k = 1, \dots, n,$$

where $B_k$ is the matrix $A$ with its $k$-th column replaced by $b$.
Therefore,

$$c_1'(t)e^{r_1 t} = \frac{\begin{vmatrix} g_1(t) & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ g_n(t) & \cdots & \xi_n^{(n)} \end{vmatrix}}{\begin{vmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{vmatrix}}, \quad\dots,\quad c_n'(t)e^{r_n t} = \frac{\begin{vmatrix} \xi_1^{(1)} & \cdots & g_1(t) \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & g_n(t) \end{vmatrix}}{\begin{vmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{vmatrix}}.$$

Thus,

$$c_1'(t)e^{r_1 t} = a_1 g_1(t) + \dots + a_n g_n(t) \;\Rightarrow\; c_1'(t) = \left[a_1 g_1(t) + \dots + a_n g_n(t)\right]e^{-r_1 t}$$
$$\vdots$$
$$c_n'(t)e^{r_n t} = b_1 g_1(t) + \dots + b_n g_n(t) \;\Rightarrow\; c_n'(t) = \left[b_1 g_1(t) + \dots + b_n g_n(t)\right]e^{-r_n t}$$

for some constants $a_1, \dots, a_n$ and $b_1, \dots, b_n$. To find the general solution, integrate both sides of the above equations to get $c_1(t), \dots, c_n(t)$, substitute them into $x_p(t) = c_1(t)\xi^{(1)} e^{r_1 t} + \dots + c_n(t)\xi^{(n)} e^{r_n t}$ to find the particular solution, and substitute $x_p(t)$ into $x = c_1 x^{(1)}(t) + \dots + c_n x^{(n)}(t) + x_p(t)$ to find the general solution for the system. The following examples will demonstrate how to solve systems of nonhomogenous differential equations and will lead into an application of nonhomogenous systems.

Example 1: Solve the following $2 \times 2$ system for $x$:

$$\begin{aligned} x_1' &= x_2 \\ x_2' &= -2x_1 + 3x_2 + 3e^{t} \end{aligned} \qquad\Longrightarrow\qquad x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x + \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}$$

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the homogenous part of the system,
$$x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x.$$

The resulting eigenvalues and eigenvectors are $r_1 = 1$, $r_2 = 2$, $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $\xi^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = e^{3t} \neq 0$; thus, the general solution to the homogenous part of the system is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t}.$$

To find the particular solution of the nonhomogenous part of the system, we will use the variation of parameters technique to find a solution of the form

$$x_p = c_1(t) \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2(t) \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t}.$$
  • 39.  0 1  0  We will first substitute x p for x and x directly into x    x   t  to get the following  2 3   3e  expression  1  1 1 1  0 1  1  0 1  1   0    c1 (t )   et  c1 (t )   et  2c2 (t )   e 2t  c2 (t )   e 2t     c1 (t )e t    c2 (t )e 2t  t   1  1  2  2  2 3  1  2 3  2   3e    1  1 1 1  1 1  0    c1 (t )   et  c1 (t )   et  2c2 (t )   e 2t  c2 (t )   e 2t  c1 (t )   et  2c2 (t )   e 2t  t   1  1  2  2  1  2  3e    1 1  0    c1 (t )   et  c2 (t )   e 2t   t   1  2  3e  The final expression given above can be written in matrix notation as  1 1   c1 (t )et   0     2t   t 1 2   c2 (t )e   3e    To solve for c1 (t ) and c2 (t ) , we will apply Cramer’s Rule to find 0 1 1 0 3et 2 3et 1 3et 3et  c1 (t )et    and c2 (t )e2t   0 1 2 0 1 2 2 3 2 3 Thus  3et  t 3  3et  2t 3et  c1 (t )    e =  and c2 (t )   e   2  2  2  2 To solve for c1 (t ) and c2 (t ) , integrate both sides of both equations so that 39
  • 40. 3t 3et c1 (t )  and c2 (t )  2 2  1 1 and substituting into the partial solution x p  c1 (t )   et  c2 (t )   e 2t yields  1  2 3t 1 t 3et  1  2t t  1 t 1 xp   e    e  3te    3e   2  1 2  2  1  2 Therefore, the general solution to the nonhomogenous system is  1 1  1 1 x  xh  x p  c1   et  c2   e 2t  3tet    3et    1  2  1  2 Example 2: Solve the following 2 x 2 system for x  x1  2 x1  x2  et   2 1   et    x   x   x2  x1  2 x2  3t   1 2   3t  We will begin the example by using Maple to find the eigenvalues and eigenvectors of the homogenous part of the system  2 1  x   x  1 2  Therefore > > 40
  • 41. >  1 1 The resulting eigenvalues and eigenvectors are r1  1, r2  3,  1    , and  2    . The  1  1 Wronskian W [ x(1) , x(2) ]  e12t  0 thus, the general solution to the homogenous part of the system is  1 1 xh  c1   e t  c2   e 3t  1  1 To find the particular solution of the nonhomogenous part of the system, we will use the variation of parameters technique to find a solution of the above equation of the form  1 1 x p  c1 (t )   e t  c2 (t )   e 3t  1  1  2 1   et  Substituting x p for x and x directly into x    x    to get the following  1 2   3t   1 1  et  (t )   et  c2 (t )   e3t    c1   1  1  3t  The final expression above can be written in matrix notation as  1 1   c1 (t )et   et     3t    1 1  c2 (t )e   3t    To solve for c1 (t ) and c2 (t ) , we will apply Cramer’s Rule to find 41
  • 42.  et   3t   et   3t   c1 (t )et        and c2 (t )e 3t        2  2  2  2 Thus  1   3te   e2t   3te3t  t  c1 (t )        and c2 (t )     2  2   2   2  To solve for c1 (t ) and c2 (t ) , integrate both sides of both equations so that  et   3tet  3et 2 3tet 3et t c1 (t )   + and c2 (t )  2 2 2 4 2 2  1  t  1  3t  3t 3 te t  1  e t 3te 2t 3e 2t   1  x  xh  x p  c1 (t )   e  c2 (t )   e    +          1  1 2 2 2   1  4 2 2   1  1 1 and substituting into the partial solution x p  c1 (t )   e t  c2 (t )   e 3t yields  1  1  3tet 3et t 1 t   e  3tet 3et   1  3t t 2 xp    +   e       e  2 2 2   1  4 2 2   1     3t 3 tet  1  et 3te2t 3e2t   1  xp    +         2 2 2   1  4 2 2   1 Therefore, the general solution to the nonhomogenous system is  1 1  3t 3 tet  1  et 3te2t 3e2t   1  x  xh  x p  c1 (t )   et  c2 (t )   e 3t    +          1  1 2 2 2   1  4 2 2   1 42
Section 5: Application of Systems of Differential Equations – Arms Races (Nonhomogenous Systems of Equations)

In the previous section, we discussed how to solve systems of differential equations that are nonhomogenous using the variation of parameters technique. Now we can apply that knowledge to a model that illustrates an arms race between two competing nations. L. F. Richardson, an English meteorologist, first proposed this model (also known as the Richardson Model), which tried to explain an arms race between two rival nations mathematically. Richardson himself seems to have believed that his insights into the way nations compete militarily might have been useful in preventing the outbreak of hostilities in World War II (Brown, 2007, p. 60). Both nations are self-defensive, both fight back to protect themselves, both maintain armies and stockpile weapons, and when one nation expands its army the other nation finds it threatening. Therefore, both nations will spend money (in billions of dollars) on armaments x and y, which are functions of time t measured in years: x(t) and y(t) represent the yearly rate of armament expenditures of the two nations in some standard unit. Richardson then made the following assumptions about his model:

• The expenditure for armaments of each country will increase at a rate proportional to the other country's expenditure (each nation's mutual fear is directly proportional to the expenditure of the other nation) (Rainville, Bedient, & Bedient, 1997, p. 228).

• The expenditure for armaments of each country will decrease at a rate proportional to its own expenditure (extensive armament expenditures create a drag on the nation's economy) (Rainville, Bedient, & Bedient, 1997, p. 228).
• The rate of change of arms expenditure for a country has a constant component that measures the level of antagonism of that country toward the other (Rainville, Bedient, & Bedient, 1997, p. 228).

• The effects of the three previous assumptions are additive (Rainville, Bedient, & Bedient, 1997, p. 228).

These assumptions make up the differential equations of the arms race system, denoted by

$$\frac{dx}{dt} = ay - mx + r, \qquad x(0) = x_0,$$
$$\frac{dy}{dt} = bx - ny + s, \qquad y(0) = y_0,$$

where a, m, b, and n are all positive constants. The positive terms ay and bx represent the drive to spend more money on arms due to the level of spending of the other nation, and the negative terms -mx and -ny reflect a nation's desire to inhibit future military spending because of the economic burden of its own spending. But r and s can take any value, because they represent the attitudes of each nation toward the other (negative values represent feelings of good will, while positive values represent feelings of distrust). The initial values x(0) and y(0) represent the initial amount of money (in billions of dollars) each nation spends on armaments. The system can be simplified to

$$x'(t) = -mx + ay + r$$
$$y'(t) = bx - ny + s$$

and expressed in matrix notation as

$$\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} -m & a \\ b & -n \end{pmatrix}\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} + \begin{pmatrix} r \\ s \end{pmatrix} \qquad\Longrightarrow\qquad X' = P(t)X + B.$$
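The model is also easy to explore numerically. The sketch below (an illustration, assuming NumPy and SciPy are available) integrates the Richardson equations using, for concreteness, the coefficients of the runaway race in Example 1 below:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Richardson model x' = -m x + a y + r, y' = b x - n y + s,
    # with the coefficients of Example 1 below (a runaway race).
    m, a, r = 2.0, 4.0, 8.0
    b, n, s = 4.0, 2.0, 2.0

    def richardson(t, z):
        x, y = z
        return [-m*x + a*y + r, b*x - n*y + s]

    sol = solve_ivp(richardson, (0.0, 2.0), [5.0, 2.0])   # x0 = 5, y0 = 2
    print(sol.y[:, -1])   # both expenditures grow without bound: a runaway race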
To solve the system, we will use the knowledge from the previous section to develop the general solution to the homogenous part. For the nonhomogenous part of the system, the particular solution will be a constant solution of the form $\begin{pmatrix} f \\ g \end{pmatrix}$, because the vector B is made up of constants; this makes the process of solving by variation of parameters much easier. Lastly, the initial values (trajectories) of the solution represent the starting amount of money each country spends on armaments. General solutions of the arms race system will represent one of a few types of races: a stable arms race, a runaway arms race, a disarmament, or a disarmament/runaway/stable arms race depending on the initial values. The following examples will demonstrate each of these outcomes, along with direction fields that represent the races graphically.

Example 1: A Runaway Arms Race. The following system will result in a runaway arms race:

$$\begin{aligned} x'(t) &= -2x + 4y + 8 \\ y'(t) &= 4x - 2y + 2 \end{aligned} \qquad\Longrightarrow\qquad X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} 8 \\ 2 \end{pmatrix}$$

To find the solution to this arms race, we will first find the general solution to the homogenous part of the system using Maple.
Therefore, $r_1 = 2$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = -6$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = -2e^{-4t} \neq 0$; thus, the general solution to the homogenous part of the arms race is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t}.$$

As mentioned at the beginning of this section, the nonhomogenous system $X' = P(t)X + B$ has a constant solution of the form $\begin{pmatrix} f \\ g \end{pmatrix}$, because B is a vector of constants and thus the particular solution should also be a vector made up of constants. Therefore, in the equation $X' = P(t)X + B$, we can substitute $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ for X (so that $X' = 0$), where

$$P(t) = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} 8 \\ 2 \end{pmatrix},$$

to get the following expression:

$$\begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} 8 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{aligned} -2f + 4g + 8 &= 0 \\ 4f - 2g + 2 &= 0 \end{aligned} \;\Rightarrow\; x_n = \begin{pmatrix} f \\ g \end{pmatrix} = \begin{pmatrix} -2 \\ -3 \end{pmatrix}$$

Therefore the general solution of the nonhomogenous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} -2 \\ -3 \end{pmatrix}.$$
Note that $\lim_{t\to\infty} x(t)$ and $\lim_{t\to\infty} y(t)$ are both $+\infty$. Thus we would predict that the rate at which each nation spends money on armaments increases without bound, resulting in a runaway arms race. The direction field for the nonhomogenous system, drawn with the initial conditions $x_0 = 5, y_0 = 2$ and $x_0 = 2, y_0 = 5$ marked, shows that for any initial value the solution goes to $\infty$ as $t \to \infty$. Thus, we have a runaway arms race.

If we wanted to solve the system with an initial condition given, such as $x_0 = 5, y_0 = 2$, we would set up the general solution as

$$x(0) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{0} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{0} + \begin{pmatrix} -2 \\ -3 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} -2 \\ -3 \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \end{pmatrix}$$
and solve for $c_1$ and $c_2$. Therefore,

$$\begin{aligned} c_1 + c_2 - 2 &= 5 \\ c_1 - c_2 - 3 &= 2 \end{aligned} \;\Rightarrow\; \begin{aligned} c_1 + c_2 &= 7 \\ c_1 - c_2 &= 5 \end{aligned} \;\Rightarrow\; c_1 = 6 \text{ and } c_2 = 1.$$

Thus, the final solution with the given initial conditions is

$$x = 6\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} -2 \\ -3 \end{pmatrix},$$

or

$$x(t) = 6e^{2t} + e^{-6t} - 2$$
$$y(t) = 6e^{2t} - e^{-6t} - 3.$$

The role of the initial value is to fix how much each nation initially spends on armaments, in billions of dollars. Using initial values when solving an arms race system leads to a specific solution describing the race, instead of families of general solutions describing all cases of the system.

Example 2: A Stable Arms Race. The following system will result in a stable arms race:

$$\begin{aligned} x'(t) &= -5x + 2y + 1 \\ y'(t) &= 4x - 3y + 2 \end{aligned} \qquad\Longrightarrow\qquad X' = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

To find the solution to this arms race, we will first find the general solution to the homogenous part of the system using Maple.
Therefore, $r_1 = -7$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, and $r_2 = -1$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = 3e^{-8t} \neq 0$; thus, the general solution to the homogenous part of the arms race is

$$x_h = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-7t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t}.$$

The particular solution of the nonhomogenous part of the arms race is found by substituting $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ into $X' = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Therefore,

$$\begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{aligned} -5f + 2g + 1 &= 0 \\ 4f - 3g + 2 &= 0 \end{aligned} \;\Rightarrow\; f = 1 \text{ and } g = 2,$$

and the constant solution of the system is $x_n = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Thus the general solution of the nonhomogenous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-7t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
Note that $\lim_{t\to\infty} x(t) = 1$ and $\lim_{t\to\infty} y(t) = 2$, because in both equations the terms containing $e^{-7t}$ and $e^{-t}$ go to 0 as $t \to \infty$; all that is left of the solution are the constant terms, which is what every initial value of the system converges to. The direction field, with a few trajectories marking different initial values for the nonhomogenous system, shows that for any initial value the solution approaches the point (1, 2) as $t \to \infty$. Thus, we have a stable arms race.
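The limiting point of a stable race is just the constant solution, so it can also be computed directly by solving $P(t)X = -B$; a minimal numerical sketch (assuming NumPy is available):

    import numpy as np

    # Stable arms race of Example 2: X' = P X + B.
    P = np.array([[-5.0,  2.0],
                  [ 4.0, -3.0]])
    B = np.array([1.0, 2.0])

    # A constant solution X(t) = (f, g) must satisfy P X + B = 0.
    equilibrium = np.linalg.solve(P, -B)
    print(equilibrium)   # [1. 2.], the point every trajectory approaches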
Example 3: Disarmament. The following system

$$\begin{aligned} x'(t) &= -4x + y + 1 \\ y'(t) &= x - y + 2 \end{aligned} \qquad\Longrightarrow\qquad X' = \begin{pmatrix} -4 & 1 \\ 1 & -1 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

will result in disarmament between the competing nations for all initial values. For the sake of simplicity, only the graph of this system and the solution produced by Maple will be shown, because the eigenvalues and eigenvectors generated from P(t) involve complicated radicals that would be difficult to manipulate by hand. (The Maple output for the general solution is not reproduced here.)
As you can see from the Maple solution, the general solution to the system becomes very complicated, but $\lim_{t\to\infty} x(t) = 1$ and $\lim_{t\to\infty} y(t) = 3$, showing that the nations eventually reach a point in time where they decrease the rate at which they spend money on armaments until they are spending no more on the arms race. The graph of the system is much more useful for demonstrating an arms race that ends in disarmament. The direction field, with a few trajectories marking the initial values for the nonhomogenous system, is shown below.
For the initial values represented by the blue trajectories in the direction field, the trajectories approach the point (1, 3) as $t \to \infty$, resulting in disarmament for any initial value chosen for the system.

Example 4: Disarmament/Runaway Arms Race/Stable Arms Race. The following system

$$\begin{aligned} x'(t) &= -2x + 4y - 2 \\ y'(t) &= 4x - 2y - 2 \end{aligned} \qquad\Longrightarrow\qquad X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} -2 \\ -2 \end{pmatrix}$$

will result in disarmament if $x_0 + y_0 < 2$, a runaway arms race if $x_0 + y_0 > 2$, or a stable arms race if $x_0 + y_0 = 2$. To find the solution to this arms race, we will first find the general solution to the homogenous part of the system using Maple. Therefore, $r_1 = 2$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = -6$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = -2e^{-4t} \neq 0$; thus, the general solution to the homogenous part of the arms race is
$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t}.$$

The particular solution of the nonhomogenous part of the arms race is found by substituting $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ into $X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} -2 \\ -2 \end{pmatrix}$. Therefore,

$$\begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} -2 \\ -2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; f = 1 \text{ and } g = 1,$$

and the constant solution of the system is $x_n = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Thus the general solution of the nonhomogenous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

The direction field, with a few trajectories marking the initial values for the nonhomogenous system, is shown below.