Jordan normal form
From Wikipedia, the free encyclopedia
In linear algebra, the Jordan normal form (often called Jordan canonical form)[1] of a square matrix M over a field K containing the eigenvalues of M is a normal form to which M can be brought by a change of basis. This normal form is almost diagonal in the sense that its only non-zero entries lie on the diagonal and the super-diagonal. One can compare this result with the spectral theorem for normal matrices, which is a special case of the Jordan normal form.
It is named in honour of Camille Jordan.
Motivation
An n × n matrix A is diagonalizable if and only if the sum of the dimensions of its eigenspaces is n, or, equivalently, if and only if A has n linearly independent eigenvectors. Not all matrices are diagonalizable. Consider the following matrix:

A =
[  5  4  2  1 ]
[  0  1 -1 -1 ]
[ -1 -1  3  0 ]
[  1  1 -1  2 ]

Including multiplicity, the eigenvalues of A are λ = 1, 2, 4, 4. The dimension of the kernel of A − 4I is 1, so A is not diagonalizable. However, there is an invertible matrix P such that A = PJP−1, where

J =
[ 1  0  0  0 ]
[ 0  2  0  0 ]
[ 0  0  4  1 ]
[ 0  0  0  4 ]

is almost diagonal. This is the Jordan normal form of A. The section Example below fills in the details of the computation.
Complex matrices
In general, a square complex matrix A is similar to a block diagonal matrix

J = diag(J1, J2, ..., Jp),

where each block Ji is a square matrix of the form

Ji =
[ λi  1           ]
[     λi  1       ]
[         ⋱   1  ]
[             λi  ]

So there exists an invertible matrix P such that P−1AP = J is such that the only non-zero entries of J are on the diagonal and the super-diagonal. J is called the Jordan normal form of A. Each Ji is called a Jordan block of A. In a given Jordan block, every entry on the super-diagonal is 1.
Assuming this result, we can deduce the following properties:
- Counting multiplicity, the eigenvalues of J, and therefore of A, are the diagonal entries.
- Given an eigenvalue λi, its geometric multiplicity is the dimension of Ker(A - λi I), and it is the number of Jordan blocks corresponding to λi.
- The sum of the sizes of all Jordan blocks corresponding to an eigenvalue λi is its algebraic multiplicity.
- A is diagonalizable if and only if, for any eigenvalue λ of A, its geometric and algebraic multiplicities coincide.
- Notice that the Jordan block corresponding to λ is of the form λI + N, where N is the nilpotent matrix with entries Ni,j = δi,j−1 (δ being the Kronecker delta), i.e. ones on the super-diagonal and zeros elsewhere. The nilpotency of N can be exploited when calculating f(A) where f is a complex analytic function. For example, in principle the Jordan form could give a closed-form expression for the exponential exp(A).
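Because N is nilpotent, the power series for f(λI + N) terminates after finitely many terms. As an illustrative sketch (pure Python; the function names are mine, not from any library), here is the exponential of a single m × m Jordan block computed from the finite sum exp(λI + N) = e^λ (I + N + N²/2! + ... + N^(m−1)/(m−1)!):

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def exp_jordan_block(lam, m):
    """exp of the m x m Jordan block lam*I + N, where N has ones on the
    super-diagonal.  Since N^m = 0, the exponential series reduces to the
    finite sum e^lam * (I + N + N^2/2! + ... + N^(m-1)/(m-1)!)."""
    N = [[1.0 if j == i + 1 else 0.0 for j in range(m)] for i in range(m)]
    term = [[float(i == j) for j in range(m)] for i in range(m)]  # N^0 = I
    total = [row[:] for row in term]
    for k in range(1, m):
        term = matmul(term, N)                                    # N^k
        for i in range(m):
            for j in range(m):
                total[i][j] += term[i][j] / math.factorial(k)
    return [[math.exp(lam) * x for x in row] for row in total]

# exp of the 3 x 3 block with eigenvalue 0 is I + N + N^2/2.
E = exp_jordan_block(0.0, 3)
```

The entries above the diagonal are the Taylor coefficients 1/(j − i)!, which is exactly the closed form the nilpotency argument predicts.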
Generalized eigenvectors
Consider a matrix A whose Jordan normal form J is a single 4 × 4 Jordan block with eigenvalue 5, obtained by some similarity transformation P−1AP = J, i.e.
AP = PJ.
Let P have column vectors pi, i = 1, ..., 4. Reading AP = PJ column by column, we see that

A p1 = 5 p1

and

(A − 5I) pi = pi−1, that is, A pi = 5 pi + pi−1, for i = 2, 3, 4.
While p1 is an eigenvector of A corresponding to the eigenvalue 5, i.e. p1 ∈ Ker(A − 5I), we have pi ∈ Ker(A − 5I)^i for i = 2, 3, and 4. Such vectors are called the generalized eigenvectors of A.
Thus, given an eigenvalue λ, its corresponding Jordan block gives rise to a Jordan chain. The generator, or lead vector, say pr, of the chain is a generalized eigenvector such that (A − λI)^r pr = 0, where r is the size of the Jordan block. The vector p1 = (A − λI)^(r−1) pr is an eigenvector corresponding to λ. In general, pi is a preimage of pi−1 under A − λI. So the lead vector generates the chain via multiplication by A − λI.
Therefore, the statement that every square matrix A can be put in Jordan normal form is equivalent to the claim that there exists a basis consisting only of eigenvectors and generalized eigenvectors of A.
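For a single Jordan block the chain structure is transparent: relative to the chain basis, A − λI acts as the shift N with ones on the super-diagonal, and multiplying by it walks down the chain from the lead vector to an ordinary eigenvector. A small pure-Python sketch (names and sizes illustrative):

```python
def apply_shift(v):
    """Apply N = J - lam*I, which has ones on the super-diagonal:
    N e_i = e_{i-1}, so the entries of v shift up by one position."""
    return v[1:] + [0]

# Lead vector p4 = e4 of a chain of length 4.
chain = [[0, 0, 0, 1]]
while any(apply_shift(chain[-1])):
    chain.append(apply_shift(chain[-1]))
# chain is now [e4, e3, e2, e1]: the lead vector generates the whole chain,
# and the final vector e1 is an ordinary eigenvector (N sends it to zero).
```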
A proof
We give a proof by induction. The 1 × 1 case is trivial. Let A be an n × n matrix. Take any eigenvalue λ of A. The range of A − λ, denoted by Ran(A − λ), is an invariant subspace of A. Also, since λ is an eigenvalue of A, the dimension r of Ran(A − λ) is strictly less than n. Let A' denote the restriction of A to Ran(A − λ). By the inductive hypothesis, there exists a basis {p1, ..., pr} such that A', expressed in terms of this basis, is in Jordan normal form.
Next consider the subspace Ker(A − λ). If

Ran(A − λ) ∩ Ker(A − λ) = {0},

the desired result follows immediately from the rank-nullity theorem. This would be the case, for example, if A were Hermitian.
Otherwise, if

Q = Ran(A − λ) ∩ Ker(A − λ) ≠ {0},

let the dimension of Q be s ≤ r. Each vector in Q is an eigenvector of A' corresponding to the eigenvalue λ, so the Jordan form of A' must contain s Jordan chains corresponding to s linearly independent eigenvectors. Hence the basis {p1, ..., pr} must contain s vectors, say {pr−s+1, ..., pr}, that are lead vectors in these Jordan chains from the Jordan normal form of A'. We can "extend the chains" by taking the preimages of these lead vectors. (This is the key step of the argument; in general, generalized eigenvectors need not lie in Ran(A − λ).) Let qi be such that

(A − λ) qi = pi, for i = r − s + 1, ..., r.
Clearly, no qi lies in Ker(A − λ). Furthermore, no qi can lie in Ran(A − λ), for that would contradict the assumption that each pi is a lead vector in a Jordan chain. The set {qi}, being the set of preimages of the linearly independent set {pi} under A − λ, is also linearly independent.
Finally, we can pick any linearly independent set {z1, ..., zt} that spans a complement of Q in Ker(A − λ).
By construction, the union of the three sets {p1, ..., pr}, {qr−s+1, ..., qr}, and {z1, ..., zt} is linearly independent. Each vector in the union is either an eigenvector or a generalized eigenvector of A. Finally, by the rank-nullity theorem, the cardinality of the union is n. In other words, we have found a basis consisting of eigenvectors and generalized eigenvectors of A, and this shows that A can be put in Jordan normal form.
Uniqueness
It can be shown that the Jordan normal form of a given matrix A is unique up to the order of the Jordan blocks.
Knowing the algebraic and geometric multiplicities of the eigenvalues is not sufficient to determine the Jordan normal form of A. Assuming the algebraic multiplicity m(λ) of an eigenvalue λ is known, the structure of the Jordan form can be ascertained by analysing the ranks of the powers (A − λ)^k for 1 ≤ k ≤ m(λ). To see this, suppose an n × n matrix A has only one eigenvalue λ, so that m(λ) = n. The smallest integer k1 such that

(A − λ)^k1 = 0

is the size of the largest Jordan block in the Jordan form of A. (This number k1 is also called the index of λ. See the discussion in a following section.) The rank of

(A − λ)^(k1 − 1)

is the number of Jordan blocks of size k1. Similarly, the rank of

(A − λ)^(k1 − 2)

is twice the number of Jordan blocks of size k1 plus the number of Jordan blocks of size k1 − 1. Iterating in this way gives the precise Jordan structure of A. The general case is similar.
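This rank bookkeeping can be carried out exactly in a few lines. A sketch in pure Python (exact rational arithmetic via Fraction; the helper names are mine): for a nilpotent matrix N = A − λ with a single eigenvalue λ, the iteration above amounts to the identity that the number of Jordan blocks of size exactly k is rank(N^(k−1)) − 2·rank(N^k) + rank(N^(k+1)).

```python
from fractions import Fraction

def rank(M):
    """Rank by Gaussian elimination over exact rationals (no rounding)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def block_counts(N):
    """Jordan block sizes of a nilpotent N: the number of blocks of size
    exactly k is rank(N^(k-1)) - 2*rank(N^k) + rank(N^(k+1))."""
    n = len(N)
    ranks = [n]                                   # rank(N^0) = n
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for _ in range(n + 1):
        P = matmul(P, N)
        ranks.append(rank(P))
    counts = {}
    for k in range(1, n + 1):
        c = ranks[k - 1] - 2 * ranks[k] + ranks[k + 1]
        if c > 0:
            counts[k] = c
    return counts
```

For example, the nilpotent matrix with a single 1 in position (1, 2) consists of one block of size 2 and one of size 1, and block_counts recovers exactly that.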
This can be used to show the uniqueness of the Jordan form. Let J1 and J2 be two Jordan normal forms of A. Then J1 and J2 are similar and have the same spectrum, including algebraic multiplicities of the eigenvalues. The procedure outlined in the previous paragraph can be used to determine the structure of these matrices. Since the rank of a matrix is preserved by similarity transformation, there is a bijection between the Jordan blocks of J1 and J2. This proves the uniqueness statement.
Consequences
One can see that the Jordan normal form is essentially a classification result for square matrices, and as such several important results from linear algebra can be viewed as its consequences.
Spectral mapping theorem
Using the Jordan normal form, direct calculation gives a spectral mapping theorem for the polynomial functional calculus: if A is an n × n matrix with eigenvalues λ1, ..., λn, then for any polynomial p, p(A) has eigenvalues p(λ1), ..., p(λn).
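For a triangular matrix the theorem is visible directly: the eigenvalues of A are its diagonal entries, and p(A) is again triangular with p applied to each diagonal entry. A quick pure-Python check (the matrix and polynomial are chosen for illustration):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def poly_of_matrix(coeffs, A):
    """Evaluate p(A) by Horner's rule; coeffs are highest degree first."""
    n = len(A)
    P = [[coeffs[0] * (i == j) for j in range(n)] for i in range(n)]
    for c in coeffs[1:]:
        P = matmul(P, A)
        for i in range(n):
            P[i][i] += c
    return P

# Upper-triangular A: its eigenvalues are the diagonal entries 1, 2, 4.
A = [[1, 3, 5],
     [0, 2, 7],
     [0, 0, 4]]
p = [1, -1, 2]                     # p(x) = x^2 - x + 2
PA = poly_of_matrix(p, A)
diag = [PA[i][i] for i in range(3)]
# Spectral mapping: the eigenvalues of p(A) are p(1), p(2), p(4) = 2, 4, 14.
```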
Cayley-Hamilton theorem
The Cayley-Hamilton theorem asserts that every matrix A satisfies its characteristic equation: if p is the characteristic polynomial of A, then p(A) = 0. This again can be shown via direct calculation in the Jordan form.
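For a 2 × 2 matrix the characteristic polynomial is p(x) = x² − tr(A)x + det(A), so the theorem reduces to an identity that can be checked mechanically. A minimal sketch (pure Python; the function name and example values are mine):

```python
def cayley_hamilton_2x2(A):
    """Verify A^2 - tr(A)*A + det(A)*I = 0 for a 2x2 matrix A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    A2 = [[a * a + b * c, a * b + b * d],     # A squared, entry by entry
          [c * a + d * c, c * b + d * d]]
    R = [[A2[i][j] - tr * A[i][j] + det * (i == j) for j in range(2)]
         for i in range(2)]
    return R == [[0, 0], [0, 0]]
```

Every 2 × 2 integer matrix passes this check, in line with the theorem.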
Minimal polynomial
The minimal polynomial of a square matrix A is the unique monic polynomial m of least degree such that m(A) = 0. Alternatively, the set of polynomials that annihilate a given A forms an ideal I in C[x], the ring of polynomials with complex coefficients, which is a principal ideal domain. The monic element that generates I is precisely m.
Let λ1, ..., λq be the distinct eigenvalues of A, and let si be the size of the largest Jordan block corresponding to λi. It is clear from the Jordan normal form that the minimal polynomial of A is (x − λ1)^s1 ⋯ (x − λq)^sq, which has degree ∑si.
While the Jordan normal form determines the minimal polynomial, the converse is not true. This leads to the notion of elementary divisors. The elementary divisors of a square matrix A are the characteristic polynomials of its Jordan blocks. The factors of the minimal polynomial m are the elementary divisors of the largest degree corresponding to distinct eigenvalues.
The degree of an elementary divisor is the size of the corresponding Jordan block, therefore the dimension of the corresponding invariant subspace. If all elementary divisors are linear, A is diagonalizable.
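A sketch of the distinction (pure Python; the matrix is chosen for illustration): the block-diagonal matrix built from the blocks J2(4), J1(4), J1(2) has elementary divisors (x − 4)², (x − 4), (x − 2), and its minimal polynomial (x − 4)²(x − 2) is the product of the largest elementary divisors for the two distinct eigenvalues, as annihilation confirms:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def shifted(A, lam):
    """Return A - lam*I."""
    n = len(A)
    return [[A[i][j] - lam * (i == j) for j in range(n)] for i in range(n)]

# Block diagonal: J2(4), then J1(4), then J1(2).
A = [[4, 1, 0, 0],
     [0, 4, 0, 0],
     [0, 0, 4, 0],
     [0, 0, 0, 2]]

zero = [[0] * 4 for _ in range(4)]
candidate = matmul(shifted(A, 4), shifted(A, 2))   # (A-4I)(A-2I): not zero
minimal = matmul(shifted(A, 4), candidate)         # (A-4I)^2 (A-2I): zero
# candidate != 0 but minimal == 0, so the minimal polynomial is
# (x-4)^2 (x-2), with degree 3 < 4 = degree of the characteristic polynomial.
```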
Invariant subspace decompositions
The Jordan form of an n × n matrix A is block diagonal, and therefore gives a decomposition of the n-dimensional Euclidean space into invariant subspaces of A. Every Jordan block Ji corresponds to an invariant subspace Xi. Symbolically, we put

C^n = X1 ⊕ ⋯ ⊕ Xk,

where each Xi is the span of the corresponding Jordan chain, and k is the number of Jordan chains.
One can also obtain a slightly different decomposition via the Jordan form. Given an eigenvalue λi, the size si of its largest corresponding Jordan block is called the index of λi and denoted by ν(λi). (Therefore the degree of the minimal polynomial is the sum of all indices.) Define a subspace Yi by

Yi = Ker(λi − A)^ν(λi).

This gives the decomposition

C^n = Y1 ⊕ ⋯ ⊕ Yl,

where l is the number of distinct eigenvalues of A.
Comparing the two decompositions, notice that, in general, l ≤ k. When A is normal, the subspaces Xi in the first decomposition are one-dimensional and mutually orthogonal. This is the spectral theorem for normal operators. The second decomposition generalizes more easily to general compact operators on Banach spaces.
It might be of interest here to note some properties of the index ν(λ). More generally, for a complex number λ, its index can be defined as the least non-negative integer ν(λ) such that

Ker(λ − A)^ν(λ) = Ker(λ − A)^(ν(λ) + 1).

So ν(λ) > 0 if and only if λ is an eigenvalue of A. In the finite-dimensional case, ν(λ) ≤ the algebraic multiplicity of λ.
Generalizations
Matrices with entries in a field
Jordan reduction can be extended to any square matrix M whose entries lie in a field K. The result states that any M can be written as a sum D + N where D is diagonalizable, N is nilpotent, and DN = ND. Whenever K contains the eigenvalues of M, in particular, when K is algebraically closed, the normal form can be expressed explicitly as the direct sum of Jordan blocks.
As in the case where K is the field of complex numbers, knowing the dimensions of the kernels of (M − λI)^k for 1 ≤ k ≤ m, where m is the algebraic multiplicity of the eigenvalue λ, allows one to determine the Jordan form of M. We may view the underlying vector space V as a K[x]-module by regarding the action of x on V as application of M and extending by K-linearity. Then the polynomials (x − λ)^k are the elementary divisors of M, and the Jordan normal form is concerned with representing M in terms of blocks associated to the elementary divisors.
The proof of the Jordan normal form is usually carried out as an application of the structure theorem for finitely generated modules over a principal ideal domain to the ring K[x], of which it is a corollary.
Compact operators
In a different direction, a result analogous to the Jordan normal form holds for compact operators on a Banach space. One restricts to compact operators because every point x in the spectrum of a compact operator T is an eigenvalue, the only possible exception being x = 0 when it is a limit point of the spectrum. This is not true for bounded operators in general. To give some idea of this generalization, we first reformulate the Jordan decomposition in the language of functional analysis.
Holomorphic functional calculus
- For more details on this topic, see holomorphic functional calculus.
Let X be a Banach space, L(X) be the bounded operators on X, and σ(T) denote the spectrum of T ∈ L(X). The holomorphic functional calculus is defined as follows:
Fix a bounded operator T. Consider the family Hol(T) of complex functions that are holomorphic on some open set G containing σ(T). Let Γ = {γi} be a finite collection of Jordan curves such that σ(T) lies in the inside of Γ. We define f(T) by

f(T) = (1 / 2πi) ∫Γ f(z) (z − T)^(−1) dz.

The open set G could vary with f and need not be connected. The integral is defined as a limit of Riemann sums, as in the scalar case. Although the integral makes sense for continuous f, we restrict to holomorphic functions in order to apply the machinery from classical function theory (e.g. the Cauchy integral formula). The assumption that σ(T) lies in the inside of Γ ensures that f(T) is well defined: it does not depend on the choice of Γ. The functional calculus is the mapping Φ from Hol(T) to L(X) given by

Φ(f) = f(T).
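The contour integral can even be approximated numerically for a small matrix, which makes the definition concrete. A sketch (pure Python; the 2 × 2 triangular T and the choice f(z) = z² are illustrative, with the integral discretized as a Riemann sum over a circle enclosing σ(T)):

```python
import cmath

def inv2(M):
    """Inverse of a 2 x 2 complex matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def holomorphic_calculus_2x2(f, T, center, radius, n=2000):
    """Approximate f(T) = (1/(2*pi*i)) * integral of f(z) (z - T)^{-1} dz
    over a circle enclosing sigma(T), by an n-point Riemann sum."""
    F = [[0j, 0j], [0j, 0j]]
    for k in range(n):
        w = cmath.exp(2j * cmath.pi * k / n)
        z = center + radius * w
        dz = radius * w * 2j * cmath.pi / n        # z'(t) dt for the circle
        R = inv2([[z - T[0][0], -T[0][1]], [-T[1][0], z - T[1][1]]])
        for i in range(2):
            for j in range(2):
                F[i][j] += f(z) * R[i][j] * dz
    return [[F[i][j] / (2j * cmath.pi) for j in range(2)] for i in range(2)]

# T has spectrum {2, 3}; a circle of radius 5 about 0 encloses it.
T = [[2.0, 1.0], [0.0, 3.0]]
F = holomorphic_calculus_2x2(lambda z: z * z, T, 0.0, 5.0)
# F approximates T^2 = [[4, 5], [0, 9]] up to quadrature error.
```

Because the integrand is analytic on the contour, the periodic Riemann sum converges very quickly, so a modest n already reproduces T² to high accuracy.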
We will require the following properties of this functional calculus:
1. Φ extends the polynomial functional calculus.
2. The spectral mapping theorem holds: σ(f(T)) = f(σ(T)).
3. Φ is an algebra homomorphism.
The finite dimensional case
In the finite-dimensional case, σ(T) = {λi} is a finite discrete set in the complex plane. Let ei be the function that is 1 in some open neighborhood of λi and 0 elsewhere. By property 3 of the functional calculus, the operator

ei(T)

is a projection, since ei² = ei implies ei(T)² = ei(T). Moreover, let νi be the index of λi and let

f(z) = (z − λi)^νi.

The spectral mapping theorem tells us that

f(T) ei(T)

has spectrum {0}. By property 1, f(T) can be directly computed in the Jordan form, and by inspection we see that the operator f(T) ei(T) is the zero matrix.
By property 3, f(T) ei(T) = ei(T) f(T). So ei(T) is precisely the projection onto the subspace

Ker(λi − T)^νi.
The relation

∑i ei(T) = I

implies

C^n = ⊕i Ran(ei(T)),

where the index i runs through the distinct eigenvalues of T. This is exactly the invariant subspace decomposition

C^n = Y1 ⊕ ⋯ ⊕ Yl

given in a previous section. Each ei(T) is the projection onto the subspace spanned by the Jordan chains corresponding to λi.
This explicit identification of the operators ei(T) in turn gives an explicit form of holomorphic functional calculus for matrices:
- For all f ∈ Hol(T),

f(T) = ∑i ∑(k = 0 to νi − 1) [f^(k)(λi) / k!] (T − λi)^k ei(T).

Notice that the expression for f(T) is a finite sum because, on each neighborhood of λi, we have chosen the Taylor series expansion of f centered at λi, and only the first νi terms survive.
Poles of an operator
Let T be a bounded operator and let λ be an isolated point of σ(T). (As stated above, when T is compact, every point in its spectrum is an isolated point, except possibly the limit point 0.)

The point λ is called a pole of the operator T with order ν if the resolvent function

RT(z) = (z − T)^(−1)

has a pole of order ν at λ.
We will show that, in the finite dimensional case, the order of an eigenvalue coincides with its index. The result also holds for compact operators.
Consider the annular region A centered at the eigenvalue λ and with sufficiently small radius ε such that the intersection of the open disc Bε(λ) with σ(T) is {λ}. The resolvent function RT is holomorphic on A. Extending a result from classical function theory, RT has a Laurent series representation on A:

RT(z) = ∑(m = −∞ to ∞) am (z − λ)^m,

where

a−m = (1 / 2πi) ∫C (z − λ)^(m−1) RT(z) dz

and C is a small circle centered at λ.
By the previous discussion on the functional calculus,

a−m = (T − λ)^(m−1) eλ(T),

where eλ is 1 on Bε(λ) and 0 elsewhere.

But we have shown that the smallest non-negative integer k such that

(T − λ)^k eλ(T) = 0

is precisely the index of λ, ν(λ). Hence a−m ≠ 0 for m ≤ ν(λ) and a−m = 0 for m > ν(λ); in other words, the function RT has a pole of order ν(λ) at λ.
Example
This example shows how to calculate the Jordan normal form of a given matrix. As the next section explains, it is important to do the computation exactly instead of rounding the results.
Consider the matrix

A =
[  5  4  2  1 ]
[  0  1 -1 -1 ]
[ -1 -1  3  0 ]
[  1  1 -1  2 ]

which is mentioned in the beginning of the article.

The characteristic polynomial of A is

det(A − λI) = (λ − 1)(λ − 2)(λ − 4)².

This shows that the eigenvalues are 1, 2, 4 and 4, according to algebraic multiplicity. The eigenspace corresponding to the eigenvalue 1 can be found by solving the equation Av = v. It is spanned by the column vector v = (−1, 1, 0, 0)T. Similarly, the eigenspace corresponding to the eigenvalue 2 is spanned by w = (1, −1, 0, 1)T. Finally, the eigenspace corresponding to the eigenvalue 4 is also one-dimensional (even though 4 is a double eigenvalue) and is spanned by x = (1, 0, −1, 1)T. So the geometric multiplicity of each of the three eigenvalues is one. Therefore, the two eigenvalues equal to 4 correspond to a single Jordan block, and the Jordan normal form of the matrix A is the direct sum

J = J1(1) ⊕ J1(2) ⊕ J2(4) =
[ 1  0  0  0 ]
[ 0  2  0  0 ]
[ 0  0  4  1 ]
[ 0  0  0  4 ]
There are three chains. Two have length one: {v} and {w}, corresponding to the eigenvalues 1 and 2, respectively. There is one chain of length two corresponding to the eigenvalue 4. To find this chain, calculate the kernel of (A − 4I)². Pick a vector in Ker((A − 4I)²) that is not in the kernel of A − 4I, e.g., y = (1, 0, 0, 0)T. Now, (A − 4I)y = x and (A − 4I)x = 0, so {y, x} is a chain of length two corresponding to the eigenvalue 4.

The transition matrix P such that P−1AP = J is formed by putting these vectors next to each other as follows:

P = ( v | w | x | y ) =
[ -1  1  1  1 ]
[  1 -1  0  0 ]
[  0  0 -1  0 ]
[  0  1  1  0 ]
A computation shows that the equation P−1AP = J indeed holds.
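This verification can be done in exact integer arithmetic by checking the equivalent identity AP = PJ, which avoids computing P−1 altogether. A sketch in pure Python (the concrete entries are the ones this example is consistent with, columns v, w, x, y as found above; treat them as assumed):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A matrix consistent with this example (assumed): eigenvalues 1, 2, 4, 4.
A = [[5, 4, 2, 1],
     [0, 1, -1, -1],
     [-1, -1, 3, 0],
     [1, 1, -1, 2]]

# Columns of P are the chain vectors v, w, x, y.
P = [[-1, 1, 1, 1],
     [1, -1, 0, 0],
     [0, 0, -1, 0],
     [0, 1, 1, 0]]

J = [[1, 0, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 4, 1],
     [0, 0, 0, 4]]

# Since P is invertible, P^{-1} A P = J is equivalent to A P = P J,
# and integer arithmetic makes the check exact.
assert matmul(A, P) == matmul(P, J)
```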
If we had interchanged the order in which the chain vectors appeared, that is, changing the order of v, w and {x, y} together, the Jordan blocks would be interchanged accordingly. However, the two results are equivalent Jordan forms.
Numerical analysis
If the matrix A has multiple eigenvalues, or is close to a matrix with multiple eigenvalues, then its Jordan normal form is very sensitive to perturbations. Consider for instance the matrix

A =
[ 1  1 ]
[ ε  1 ]

If ε = 0, then the Jordan normal form is simply

[ 1  1 ]
[ 0  1 ]

However, for ε ≠ 0, the eigenvalues 1 + √ε and 1 − √ε are distinct, so the Jordan normal form is

[ 1 + √ε     0     ]
[    0     1 − √ε  ]
This ill conditioning makes it very hard to develop a robust numerical algorithm for the Jordan normal form, as the result depends critically on whether two eigenvalues are deemed to be equal. For this reason, the Jordan normal form is usually avoided in numerical analysis; the stable Schur decomposition is often a better alternative.[2]
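The √ε sensitivity can be seen with a two-line experiment: perturbing one entry of a matrix with a double eigenvalue by ε moves the eigenvalues by about √ε, far more than ε itself. A sketch (pure Python; the 2 × 2 matrix [[1, 1], [ε, 1]] with eigenvalues 1 ± √ε is illustrative):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

eps = 1e-10
lo, hi = eig2(1.0, 1.0, eps, 1.0)
# hi - 1 is about 1e-5: five orders of magnitude larger than eps.
```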
Notes
- ^ Shilov defines the term Jordan canonical form and in a footnote says that Jordan normal form is synonymous. These terms are sometimes shortened to Jordan form. (Shilov) The term Classical canonical form is also sometimes used in the sense of this article. (James & James, 1976)
- ^ See Golub & Van Loan (1996), §7.6.5; or Golub & Wilkinson (1976) for details.
References
- N. Dunford and J.T. Schwartz, Linear Operators, Part I: General Theory, Interscience, 1958.
- Daniel T. Finkbeiner II, Introduction to Matrices and Linear Transformations, Third Edition, Freeman, 1978.
- Gene H. Golub and Charles F. van Loan, Matrix Computations (3rd ed.), Johns Hopkins University Press, Baltimore, 1996.
- Gene H. Golub and J. H. Wilkinson, Ill-conditioned eigensystems and the computation of the Jordan normal form, SIAM Review, vol. 18, nr. 4, pp. 578–619, 1976.
- Glenn James and Robert C. James, Mathematics Dictionary, Fourth Edition, Van Nostrand Reinhold, 1976.
- Saunders MacLane and Garrett Birkhoff, Algebra, MacMillan, 1967.
- Anthony N. Michel and Charles J. Herget, Applied Algebra and Functional Analysis, Dover, 1993.
- Georgi E. Shilov, Linear Algebra, Dover, 1977.
- Jordan Canonical Form article at mathworld.wolfram.com