The world of Eigenvalues-eigenfunctions

An operator A operates on a function and produces a
function.

For every operator, there is a set of functions which,
when operated on by the operator, are reproduced
unchanged except for multiplication by a constant
factor.

Such a function is called an eigenfunction of the
operator, and the constant multiplier is called its
corresponding eigenvalue. An eigenvalue is just a
number: real or complex.

A typical eigenvalue equation looks like

                Ax = λx

Here, the matrix or operator A operates on a
vector (or a function) x, producing an amplified or
reduced vector λx. Here the eigenvalue λ belongs to
the eigenfunction x.


Suppose the operator is A = x (d/dx). A operating on
x^n produces

     A x^n = x (d/dx) x^n = n x^n

Therefore, the operator A has the eigenvalue n
corresponding to the eigenfunction x^n.
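This claim can be checked symbolically; a minimal sketch with sympy, where the helper `A` is my own name for the operator x d/dx:

```python
import sympy as sp

x = sp.symbols('x')

def A(f):
    """The operator A = x d/dx applied to a function f(x)."""
    return x * sp.diff(f, x)

# x^n is an eigenfunction of A with eigenvalue n; check n = 3:
f = x**3
assert sp.simplify(A(f) - 3*f) == 0   # A x^3 = 3 x^3
```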

  1. Eigenfunctions are not unique.

  Suppose Ax = λx. Define another vector z = cx, where
  c is a constant.

  Now, Az = A(cx) = cAx = cλx = λ(cx) = λz.
  Therefore, z is also an e-function (eigenfunction)
  of A.

  2.   If Ax = λx is an eigenvalue equation (and we
       assume that x is not a zero vector), then

            Ax = λx ⇔ (A − λI)x = 0 ⇔ det(A − λI) = 0

       This leads to a characteristic polynomial in λ:

            p_A(λ) = det(A − λI)

       λ is an e-value of A if and only if p_A(λ) = 0.
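Numerically, the characteristic polynomial and its roots can be obtained with numpy; a sketch using the 2×2 matrix of item 5 below (note that `np.poly` returns the monic coefficients of det(λI − A), which differs from p_A only by the factor (−1)^n):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, 5.0]])

# Coefficients of det(lambda*I - A) = lambda^2 - 7*lambda + 12
coeffs = np.poly(A)
eigenvalues = np.roots(coeffs)   # roots of the characteristic polynomial

assert np.allclose(coeffs, [1.0, -7.0, 12.0])
assert np.allclose(np.sort(eigenvalues), [3.0, 4.0])
```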


  3.   The spectrum of an operator A is σ(A): the set of
       all its e-values.

  4.   The spectral radius of an operator A is

            ρ(A) = max_{λ∈σ(A)} |λ| = max_{1≤i≤n} |λ_i|



  5. Computation of spectrum and spectral radius:

  Let A = [ 2  −1 ]
          [ 2   5 ]
  be the matrix whose eigenvalues and eigenfunctions
  we want to compute. Its characteristic equation (CE) is:

       det [ 2−λ   −1  ] = 0  ⇔  (2 − λ)(5 − λ) + 2 = 0
           [  2   5−λ ]

This gives λ^2 − 7λ + 12 = 0 ⇔ (λ − 3)(λ − 4) = 0



Therefore, A has two eigenvalues: 3 and 4.

Let the eigenfunction corresponding to the e-value 3 be
the vector x = (x_1, x_2)^t. Then

     [ 2  −1 ] [ x_1 ]  =  3 [ x_1 ]  =  [ 3 x_1 ]
     [ 2   5 ] [ x_2 ]      [ x_2 ]      [ 3 x_2 ]

Therefore, we have 2 x_1 − x_2 = 3 x_1, yielding
x_1 = −x_2. We also get 2 x_1 + 5 x_2 = 3 x_2, which gives us no new
result. Therefore, we can arbitrarily take the
following solution: e_1 = (1, −1)^t corresponding to e-value 3
for the matrix A.
Similarly, for the e-value 4, the eigenfunction turns out
to be e_2 = (1, −2)^t.
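Both hand-computed eigenpairs can be verified with numpy; note that `np.linalg.eig` normalizes its eigenvectors, so they agree with e_1 and e_2 only up to a constant factor (eigenfunctions are not unique):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, 5.0]])

lam, V = np.linalg.eig(A)   # eigenvalues, eigenvectors as columns of V
assert np.allclose(np.sort(lam), [3.0, 4.0])

# Check A e = lambda e for the hand-computed eigenvectors
e1 = np.array([1.0, -1.0])
e2 = np.array([1.0, -2.0])
assert np.allclose(A @ e1, 3.0 * e1)
assert np.allclose(A @ e2, 4.0 * e2)
```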
            


  6. Faddeev-LeVerrier method to get the characteristic
     polynomial.

  Define a sequence of matrices:

     P_1 = A,                        p_1 = trace(P_1)
     P_2 = A[P_1 − p_1 I],           p_2 = (1/2) trace(P_2)
     P_3 = A[P_2 − p_2 I],           p_3 = (1/3) trace(P_3)
     …
     P_n = A[P_(n−1) − p_(n−1) I],   p_n = (1/n) trace(P_n)

  Then the characteristic polynomial P(λ) is

     P(λ) = (−1)^n [λ^n − p_1 λ^(n−1) − p_2 λ^(n−2) − … − p_n]
                12  6        − 6
                 6 16         2 
  e.g.       A=
                                
                − 6 2
                             16 
                                 



  Define        P = A, p1 = trace( A ) = 12 + 16 + 16 = 44
                 1
  P2 = A( P − p1I ) =
           1


  12  6       − 6− 32      6      −6 
   6 16        2  6       − 28     2 
                                      
  − 6 2
              16  − 6
                             2     − 28
                                         


    − 312     −108     108 
  = −108     − 408     − 60 , p 2 = −564
                            
     108
              − 60     − 408
                             
And one proceeds this way to get p_3 = 1728.

  The characteristic polynomial = (−1)^3 [λ^3 − 44λ^2 + 564λ − 1728]

  The eigenvalues are then found by solving
     λ^3 − 44λ^2 + 564λ − 1728 = 0
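The recursion is easy to code; a sketch in numpy (the function name is mine), checked against the worked example above:

```python
import numpy as np

def faddeev_leverrier(A):
    """Return p_1..p_n of the Faddeev-LeVerrier recursion, so that the
    characteristic polynomial is (-1)^n [l^n - p_1 l^(n-1) - ... - p_n]."""
    n = A.shape[0]
    P = A.astype(float)          # P_1 = A
    ps = []
    for k in range(1, n + 1):
        pk = np.trace(P) / k     # p_k = (1/k) trace(P_k)
        ps.append(pk)
        P = A @ (P - pk * np.eye(n))   # P_(k+1) = A [P_k - p_k I]
    return ps

A = np.array([[12.0, 6.0, -6.0],
              [6.0, 16.0, 2.0],
              [-6.0, 2.0, 16.0]])
assert np.allclose(faddeev_leverrier(A), [44.0, -564.0, 1728.0])
```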

  7. More facts about eigenvalues.

  Assume Ax = λx. Therefore, λ is an eigenvalue of
  A with eigenvector x.

  a. A^(−1) (for non-singular A) has the same eigenvector
  as A, and the corresponding eigenvalue is λ^(−1).

  b. A^n has the same eigenvector as A, with
  eigenvalue λ^n.

  c. (A + µI) has the same eigenvector as A, with
  eigenvalue (λ + µ).

  d. If A is symmetric, all its eigenvalues are real.

  e. If P is an invertible matrix, then P^(−1)AP has the
  same eigenvalues as A.
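Facts a, b, c, and e can be spot-checked numerically; a sketch using the matrix and eigenpair from item 5 (the particular choice of P is an arbitrary invertible example of mine):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, 5.0]])
x = np.array([1.0, -1.0])   # eigenvector of A for lambda = 3
lam, mu = 3.0, 10.0

assert np.allclose(np.linalg.inv(A) @ x, x / lam)                  # fact a
assert np.allclose(np.linalg.matrix_power(A, 4) @ x, lam**4 * x)   # fact b
assert np.allclose((A + mu * np.eye(2)) @ x, (lam + mu) * x)       # fact c

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])                                         # any invertible P
sim = np.linalg.inv(P) @ A @ P
assert np.allclose(np.sort(np.linalg.eigvals(sim)), [3.0, 4.0])    # fact e
```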

Proof of e.
Suppose the eigenfunction of P^(−1)AP is y with
eigenvalue k.
Then,
     P^(−1)APy = ky  ⇔  APy = P(ky) = k(Py)

Therefore, Py is an eigenvector of A: Py = x, and k must
equal λ. Therefore the eigenvalues of A and P^(−1)AP are
identical, and the eigenvector of one is a linear mapping
of the other one.

If the eigenvalues λ_1, λ_2, …, λ_n of A are all distinct,
then there exists a similarity transformation such that

     P^(−1)AP = D = [ λ_1   0    0   ..   0  ]
                    [  0   λ_2   0   ..   0  ]
                    [  0    0   λ_3  ..   0  ]
                    [  ..   ..   ..  ..   0  ]
                    [  0    0    0   ..  λ_n ]

Let the eigenvectors of A be x^(1), x^(2), …, x^(i), …, x^(n),
such that we have A x^(i) = λ_i x^(i).

Then take the matrix P = [x^(1), x^(2), …, x^(n)].
Then AP = [A x^(1), A x^(2), …, A x^(n)]
        = [λ_1 x^(1), λ_2 x^(2), …, λ_n x^(n)]
        = [x^(1), x^(2), …, x^(n)] [λ_1 e^(1), λ_2 e^(2), …, λ_n e^(n)]
        = PD

(Here e^(i) is the i-th standard basis vector, so the
second factor is exactly D.)

Therefore, P^(−1)AP = D
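This construction can be carried out concretely for the 2×2 matrix of item 5, whose eigenvalues 3 and 4 are distinct; a numpy sketch:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, 5.0]])

# Columns of P are the eigenvectors e_1 = (1, -1)^t and e_2 = (1, -2)^t
P = np.array([[1.0, 1.0],
              [-1.0, -2.0]])
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([3.0, 4.0]))   # P^(-1) A P = D
```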



Also, note the following. If A is symmetric, then
(x^(i))^t x^(j) = 0, ∀ i ≠ j. So, we can normalize each
eigenvector and obtain u^(i) = x^(i) / ||x^(i)||, so that the
matrix Q = [u^(1), u^(2), …, u^(n)] would be an orthogonal
matrix, i.e. Q^t A Q = D.
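For a symmetric matrix, `np.linalg.eigh` already returns orthonormal eigenvectors, so Q can be read off directly; a sketch with the symmetric matrix from item 6:

```python
import numpy as np

A = np.array([[12.0, 6.0, -6.0],
              [6.0, 16.0, 2.0],
              [-6.0, 2.0, 16.0]])

lam, Q = np.linalg.eigh(A)   # orthonormal eigenvectors in the columns of Q
assert np.allclose(Q.T @ Q, np.eye(3))          # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, np.diag(lam))   # Q^t A Q = D
```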




Matrix norm.

Computationally, the l_2-norm of a matrix A is
determined as

     ||A||_2 = [ρ(A^t A)]^(1/2)

e.g.  A = [  1  1  0 ]
          [  1  2  1 ]
          [ −1  1  2 ]


                         1         1    −1 1     1       0  3           2   −1
Then               A A = 1
                    t
                                    2    1  1     2       1 =  2         6   4
                                                                              
                         0
                                   1    2 −1
                                                  1       2 −1
                                                                           4   5



The eigenvalues are:
            λ1 = 0, λ2 = 7 + 7 , λ3 = 7 − 7


Therefore,                      A2 =       ρ( At A ) = 7 + 7 ≈ 3.106
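This agrees with numpy, whose `np.linalg.norm(A, 2)` computes exactly [ρ(A^t A)]^(1/2) (the largest singular value); a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [-1.0, 1.0, 2.0]])

rho = max(abs(np.linalg.eigvals(A.T @ A)))   # spectral radius of A^t A
assert np.allclose(rho, 7 + np.sqrt(7))
assert np.allclose(np.sqrt(rho), np.linalg.norm(A, 2))   # about 3.106
```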


The l_∞ norm is defined as

     ||A||_∞ = max_{1≤i≤n} Σ_j |a_ij|

e.g.  A = [  1  1   0 ]
          [  1  2   1 ]
          [ −1  1  −4 ]

Σ_j |a_1j| = 1 + 1 + 0 = 2,   Σ_j |a_2j| = 1 + 2 + 1 = 4,
Σ_j |a_3j| = 1 + 1 + 4 = 6

     Therefore,  ||A||_∞ = max(2, 4, 6) = 6
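The row-sum computation maps directly onto numpy; a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [-1.0, 1.0, -4.0]])

row_sums = np.sum(np.abs(A), axis=1)   # the three sums 2, 4, 6
assert np.allclose(row_sums, [2.0, 4.0, 6.0])
assert max(row_sums) == np.linalg.norm(A, np.inf) == 6.0
```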




In computational matrix algebra, we are often
interested in situations where A^k becomes small
(all the entries become almost zero). In that case, A is
considered convergent,

i.e. A is convergent if lim_{k→∞} (A^k)_ij = 0.



                               1      
                                     0
Example.              Is   A = 2          convergent?
                                 1    1
                                      
                               4     2

     1                  1           1     
      4 0               8  0        16 0 
A2 =                A3 =         A4 = 
       1 1 ,               3 1 ,        1 1 ,
                                          
      4 4               16 8         8 16 


It appears that

      1         
      2k      0
Ak = 
         k     1
                
      2k + 1 2k 
                
1
In the limit   k → ∞,
                        2k
                             →0   . Therefore,   A   is a convergent
matrix.
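The claimed closed form for A^k and the limit can be checked numerically; a sketch:

```python
import numpy as np

A = np.array([[0.5, 0.0],
              [0.25, 0.5]])

# Closed form A^k = [[1/2^k, 0], [k/2^(k+1), 1/2^k]]
for k in (2, 3, 4, 10):
    expected = np.array([[2.0**-k, 0.0],
                         [k * 2.0**-(k + 1), 2.0**-k]])
    assert np.allclose(np.linalg.matrix_power(A, k), expected)

# The entries go to zero, so A is convergent; equivalently, rho(A) < 1
assert max(abs(np.linalg.eigvals(A))) < 1
assert np.allclose(np.linalg.matrix_power(A, 200), 0)
```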

Note the following equivalent results:

    a. A is a convergent matrix
    b1. lim_{k→∞} ||A^k||_2 = 0
    b2. lim_{k→∞} ||A^k||_∞ = 0
    c. ρ(A) < 1
    d. lim_{k→∞} A^k x = 0 for every x



The condition number K(A) of a non-singular matrix A
is computed as

     K(A) = ||A|| · ||A^(−1)||



A matrix is well-behaved if its condition number is
close to 1. When K(A) of a matrix A is significantly
larger than 1, we call it an ill-behaved matrix.
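In practice one computes K(A) with `np.linalg.cond`; a sketch in the l_2-norm (the nearly singular matrix B is a hypothetical example of an ill-behaved matrix):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, 5.0]])
K = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
assert np.allclose(K, np.linalg.cond(A, 2))   # K(A) = ||A|| * ||A^-1||

# A nearly singular matrix has a huge condition number: ill-behaved
B = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
assert np.linalg.cond(B, 2) > 1e4
```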
