In algebra, when studying a specific matrix, one is often interested in polynomial relations satisfied by that matrix. For example, the matrix
\[A = \begin{pmatrix} 1 & 2 \\ 3 & 1 \end{pmatrix}\]
satisfies \(A^2 - 2A = 5I\), where \(I\) denotes the 2-by-2 identity matrix:
\[A^2 - 2A = \begin{pmatrix} 7 & 4 \\ 6 & 7 \end{pmatrix} - \begin{pmatrix} 2 & 4 \\ 6 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix} = 5I.\]
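This relation is easy to confirm numerically. Here is a minimal sketch (assuming NumPy is available; the article itself uses no code):

```python
import numpy as np

# The matrix from the introduction and the 2-by-2 identity.
A = np.array([[1, 2],
              [3, 1]])
I = np.eye(2, dtype=int)

# Verify A^2 - 2A = 5I entrywise.
lhs = A @ A - 2 * A
print(lhs)  # [[5 0]
            #  [0 5]]
assert np.array_equal(lhs, 5 * I)
```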
Knowing such relations is often helpful in matrix computations (e.g. computing powers of matrices), as well as in investigating the eigenvalues and eigenvectors of a matrix.
The Cayley-Hamilton theorem produces a particular polynomial relation satisfied by any given matrix. Specifically, if \(M\) is a matrix and \(p_M(x) = \det(M - xI)\) is its characteristic polynomial, the Cayley-Hamilton theorem states that \(p_M(M) = 0\).
Motivation
Consider the matrix \(A\) given in the introduction above. Suppose one wants to compute its fourth power \(A^4\). This can be done by using the polynomial relation \(A^2 - 2A = 5I\) as a way to reduce exponents in the expression:
\[A^4 = (2A + 5I)^2 = 4A^2 + 20A + 25I = 4(2A+5I) + 20A + 25I = 28A + 45I.\]
More generally, one can use the recurrence \(A^n = 2A^{n-1} + 5A^{n-2}\) to write any power of \(A\) as an integer linear combination of \(A\) and \(I\).
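The recurrence can be carried out on the two coefficients alone, with no matrix multiplication at all. A sketch (NumPy is assumed only for the final check; `power_coeffs` is a name introduced here for illustration):

```python
import numpy as np

def power_coeffs(n):
    """Return (c1, c0) with A^n = c1*A + c0*I, using A^2 = 2A + 5I."""
    c1, c0 = 1, 0  # A^1 = 1*A + 0*I
    for _ in range(n - 1):
        # A^(k+1) = A(c1*A + c0*I) = c1*A^2 + c0*A = (2*c1 + c0)*A + 5*c1*I
        c1, c0 = 2 * c1 + c0, 5 * c1
    return c1, c0

A = np.array([[1, 2], [3, 1]])
c1, c0 = power_coeffs(4)
print(c1, c0)  # 28 45
assert np.array_equal(np.linalg.matrix_power(A, 4),
                      c1 * A + c0 * np.eye(2, dtype=int))
```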
At least for the sake of doing computations like this, one is interested in finding polynomials satisfied by a given matrix. In symbols, for a matrix \(M\), one seeks polynomials \(p(x)\) such that \(p(M) = 0\). Note that here, \(p(M)\) is itself a matrix, not a number; the \(0\) in this expression denotes the matrix whose entries are all zero.
One way to find a polynomial satisfied by an \(n\)-by-\(n\) matrix \(M\) is to use the vector space structure on the set of all \(n\)-by-\(n\) matrices. Let \(M_n(F)\) denote the vector space of \(n\)-by-\(n\) matrices with entries in a field \(F\) (for instance, \(F\) may be the real numbers \(\mathbb{R}\) or the complex numbers \(\mathbb{C}\)). Since an \(n\)-by-\(n\) matrix has \(n^2\) entries, the space \(M_n(F)\) has dimension \(n^2\). This implies that the matrices \(I, M, M^2, \ldots, M^{n^2}\) are linearly dependent over \(F\) (since there are \(n^2 + 1\) matrices in this collection). Thus, there are \(a_i \in F\), \(0 \le i \le n^2\), such that
\[a_0 I + a_1 M + a_2 M^2 + \cdots + a_{n^2} M^{n^2} = 0.\]
The polynomial \(q(x) = a_0 + a_1 x + \cdots + a_{n^2} x^{n^2}\) is thus satisfied by \(M\).
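This dimension-counting argument can be made concrete by flattening each power of \(M\) into a vector and computing an element of the nullspace. A sketch under stated assumptions (the helper `dependence` is hypothetical, and SVD is just one of several ways to extract a nullspace vector):

```python
import numpy as np

def dependence(M):
    """Coefficients a_0, ..., a_{n^2} with sum_k a_k * M^k = 0."""
    n = M.shape[0]
    # Flatten I, M, M^2, ..., M^(n^2) into the columns of an n^2 x (n^2+1) matrix.
    V = np.column_stack([np.linalg.matrix_power(M, k).flatten()
                         for k in range(n * n + 1)])
    # With more columns than rows, V has a nontrivial nullspace;
    # the last right singular vector spans it.
    return np.linalg.svd(V)[2][-1]

M = np.array([[1.0, 2.0], [3.0, 1.0]])
a = dependence(M)
q_M = sum(c * np.linalg.matrix_power(M, k) for k, c in enumerate(a))
assert np.allclose(q_M, 0)  # q(M) = 0, as the argument promises
```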
However, this method is lacking in that it is nonconstructive: the coefficients \(a_i\) are shown to exist but are not produced or computed. On the other hand, the Cayley-Hamilton theorem is constructive: \(M\) is shown to satisfy a specific and easily computed polynomial, namely the characteristic polynomial of \(M\). This polynomial is
\[p_M(x) = \det(M - x \cdot I),\]
and the Cayley-Hamilton theorem states that \(p_M(M) = 0\). Proving this is not as simple as making the substitution \(p_M(M) = \det(M - M \cdot I) = \det(0) = 0\), since the variable \(x\) in the definition of the characteristic polynomial represents a number, not a matrix.
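Before turning to the proof, the theorem can at least be spot-checked numerically. The sketch below assumes NumPy; note that `np.poly` returns the coefficients of \(\det(xI - M)\), which differs from \(\det(M - xI)\) only by a sign \((-1)^n\), so vanishing at \(M\) is unaffected:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

coeffs = np.poly(M)  # characteristic polynomial, leading coefficient first

# Evaluate the polynomial at the matrix M via Horner's rule.
P = np.zeros_like(M)
for c in coeffs:
    P = P @ M + c * np.eye(4)

assert np.allclose(P, 0)  # Cayley-Hamilton: p_M(M) = 0
```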
Proof assuming \(M\) has entries in \(\mathbb{C}\)
Suppose \(M\) is an \(n\)-by-\(n\) matrix. When \(M\) has entries in \(\mathbb{C}\), one can prove the Cayley-Hamilton theorem as follows:
A matrix \(M \in M_n(\mathbb{C})\) is called diagonalizable if there exists an invertible \(B \in M_n(\mathbb{C})\) such that \(BMB^{-1}\) is diagonal. Recall that a diagonal matrix is a matrix in which all entries off the main diagonal (the diagonal from top left to bottom right) are zero. Diagonal matrices have a very simple multiplicative structure: when one multiplies two diagonal matrices, the entries on the main diagonals multiply termwise. In particular, one can see why a diagonal matrix should satisfy its own characteristic polynomial, since every entry on the main diagonal is an eigenvalue of the matrix. Consequently, any diagonalizable matrix also satisfies its own characteristic polynomial, since
\[0 = p(BMB^{-1}) = B\,p(M)\,B^{-1} \implies p(M) = 0.\]
The proof of Cayley-Hamilton thus proceeds by approximating arbitrary matrices with diagonalizable matrices (this is possible when the entries of the matrix are complex, exploiting the fundamental theorem of algebra). To do this, one first needs a criterion for diagonalizability of a matrix:
Lemma: If \(M \in M_n(\mathbb{C})\) has \(n\) distinct eigenvalues, then \(M\) is diagonalizable.
Suppose the roots of the characteristic polynomial \(p_M(x)\) are all distinct, i.e. the eigenvalues \(\lambda_i\) with \(1 \le i \le n\) satisfy \(\lambda_i \neq \lambda_j\) for \(i \neq j\). Let \(v_i\) denote an eigenvector associated to \(\lambda_i\).
Assume that there are coefficients \(a_i \in \mathbb{C}\) with
\[a_1 v_1 + \cdots + a_n v_n = 0.\]
Applying \(M^k\) to this equation gives the relation
\[a_1 \lambda_1^k v_1 + \cdots + a_n \lambda_n^k v_n = 0.\]
Choose a polynomial \(q_i\) such that \(q_i(\lambda_i) = 1\) and \(q_i(\lambda_j) = 0\) for \(j \neq i\) (using, for example, Lagrange interpolation). Taking the linear combination of the above equations with the coefficients of \(q_i\) gives \(a_i q_i(\lambda_i) v_i = 0 \implies a_i = 0\). Thus, the eigenvectors \(\{v_i\}\) are linearly independent.
Since there are \(n\) eigenvectors, the collection \(\{v_i\}\) must form a basis for the vector space \(\mathbb{C}^n\). In this basis, the matrix for the linear transformation corresponding to \(M\) is diagonal. Let \(B\) denote the change of basis matrix sending \(v_i\) to the \(i^\text{th}\) standard basis vector \(\big(\)i.e. \((0,\ldots,0,1,0,\ldots,0)\), where the \(1\) is in the \(i^\text{th}\) slot\(\big)\). Then \(BMB^{-1}\) is diagonal, as desired.
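Numerically, the change-of-basis matrix \(B\) in the lemma is the inverse of the matrix whose columns are the eigenvectors. A sketch using the introduction's matrix, whose eigenvalues \(1 \pm \sqrt{6}\) are distinct (NumPy assumed):

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 1.0]])
eigvals, V = np.linalg.eig(M)  # columns of V are the eigenvectors v_i

B = np.linalg.inv(V)           # B sends v_i to the i-th standard basis vector
D = B @ M @ np.linalg.inv(B)   # = B M B^(-1)

assert np.allclose(D, np.diag(eigvals))  # diagonal, eigenvalues on the diagonal
```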
Corollary: If \(M \in M_n(\mathbb{C})\), then for any \(\epsilon > 0\) there is a diagonalizable matrix \(N \in M_n(\mathbb{C})\) such that the entries of \(M\) are within \(\epsilon\) of the corresponding entries of \(N\).
The roots of \(p_M(x)\) are continuous functions of the entries of \(M\). Thus, arbitrarily small perturbations of those entries suffice to make all of these roots distinct. This reasoning is only valid because of the fundamental theorem of algebra, which ensures that all roots of \(p_M(x)\) lie in \(\mathbb{C}\).
To complete the proof, let \(\{N_k\}_{k \in \mathbb{N}}\) be a sequence of diagonalizable matrices converging to \(M\) (where convergence is matrix-entry-wise). Since \(p_{N_k}(N_k) = 0\) for all \(k \in \mathbb{N}\) and \(N \mapsto p_N(N)\) is continuous as a function of the matrix entries, taking \(k \to \infty\) implies \(p_M(M) = 0\), which is the statement of the Cayley-Hamilton theorem.
Examples and Problems
Prove that the inverse of a 2-by-2 matrix is given by the formula below:
\[\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.\]
Let \(A\) denote the given matrix; its characteristic polynomial is
\[p_A(x) = (a-x)(d-x)-bc = x^2 - (a+d)x + (ad-bc).\]
By the Cayley-Hamilton theorem, it follows that
\[A^2 - (a+d)A + (ad-bc)I = 0,\]
where \(I\) is the 2-by-2 identity matrix. Thus,
\[A\big((a+d)I - A\big) = (ad-bc)I \implies A^{-1} = \frac{1}{ad-bc}\big((a+d)I - A\big) = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.\ _\square\]
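As a sanity check of the formula just derived, one can evaluate \(\frac{1}{ad-bc}\big((a+d)I - A\big)\) on a sample invertible matrix (the values below are chosen arbitrarily for illustration; NumPy assumed):

```python
import numpy as np

a, b, c, d = 2.0, 7.0, 1.0, 5.0  # ad - bc = 3, so A is invertible
A = np.array([[a, b], [c, d]])

# The Cayley-Hamilton form of the inverse: ((a+d)I - A) / (ad - bc).
A_inv = ((a + d) * np.eye(2) - A) / (a * d - b * c)

assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv, np.array([[d, -b], [-c, a]]) / (a * d - b * c))
```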
Let \(M\) be an \(n\)-by-\(n\) matrix. \(M\) is called nilpotent if there exists some integer \(k\) such that \(M^k = 0\). Prove that if \(M\) is nilpotent, then \(M^n = 0\).
Suppose \(v\) is a nonzero eigenvector of \(M\), with eigenvalue \(\lambda\). Then
\[0 = M^k v = \lambda^k v \implies \lambda = 0.\]
Since all roots of the (degree \(n\)) characteristic polynomial \(p_M(x)\) are eigenvalues, it follows that \(p_M(x) = x^n\). By the Cayley-Hamilton theorem, we may now conclude \(M^n = 0\). \(_\square\)
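A strictly upper triangular matrix is a standard example of a nilpotent matrix; the sketch below (assuming NumPy) confirms \(M^n = 0\) while \(M^{n-1} \neq 0\), so the exponent \(n\) cannot be lowered in general:

```python
import numpy as np

n = 4
M = np.triu(np.ones((n, n)), k=1)  # ones strictly above the diagonal

# M^(n-1) is still nonzero, but M^n vanishes, as the result above predicts.
assert not np.array_equal(np.linalg.matrix_power(M, n - 1), np.zeros((n, n)))
assert np.array_equal(np.linalg.matrix_power(M, n), np.zeros((n, n)))
```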