How To Find Number Of Linearly Independent Eigenvectors
Eigenvalues and eigenvectors
Richard Bronson , Gabriel B. Costa , in Matrix Methods (Fourth Edition), 2021
Problems 6.5
In Problems 1–16, find a set of linearly independent eigenvectors for the given matrices.
- 1.
-
- 2.
-
- 3.
-
- 4.
-
- 5.
-
- 6.
-
- 7.
-
- 8.
-
- 9.
-
- 10.
-
- 11.
-
- 12.
-
- 13.
-
- 14.
-
- 15.
-
- 16.
-
- 17.
-
The Vandermonde determinant
is known to equal the product
Using this result, prove Theorem 3 for n distinct eigenvalues.
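The product formula for the Vandermonde determinant can be checked numerically. A minimal sketch (the sample points below are arbitrary illustrative choices, not taken from the text):

```python
import numpy as np
from itertools import combinations

# Hypothetical distinct sample points; any distinct values work.
x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)

# Vandermonde matrix with rows (1, x_i, x_i**2, x_i**3).
V = np.vander(x, increasing=True)

# det V should equal the product of (x_j - x_i) over all i < j,
# which is nonzero exactly when the x_i are distinct.
det_direct = np.linalg.det(V)
det_product = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])

print(det_direct, det_product)  # the two values agree (here, 540)
```

Since the product is nonzero for distinct values, the Vandermonde matrix is invertible, which is the key step in the independence argument the exercise asks for.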
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B978012818419600006X
The Algebraic Eigenvalue Problem
William Ford , in Numerical Linear Algebra with Applications, 2015
18.2.1 Additional Property of a Diagonalizable Matrix
Theorem 5.3 states that if the n×n matrix A has n linearly independent eigenvectors v 1, v 2, …, v n , then A can be diagonalized by the eigenvector matrix X = (v 1 v 2 … v n ). The converse of Theorem 5.3 is also true; that is, if a matrix can be diagonalized, it must have n linearly independent eigenvectors. We need this result for the purposes of developing the power method in Section 18.2.2.
Theorem 18.1
If A is a real n × n matrix that is diagonalizable, it must have n linearly independent eigenvectors.
Proof. We know there is an invertible matrix V such that V −1 AV = D, where D is a diagonal matrix, and let v 1, v 2, …, v n be the columns of V. Since V is invertible, the v i are linearly independent. The relationship V −1 AV = D gives AV = VD, and using matrix column notation we have
Column i of AV is Av i , and column i of VD is λ i v i , so Av i = λ i v i .
Thus, the linearly independent set v 1, v 2, …, v n are eigenvectors of A corresponding to eigenvalues λ1, λ2, …, λ n .
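The column-by-column identity AV = VD in the proof is easy to verify numerically. A minimal sketch (the 2 × 2 matrix is an invented example with distinct eigenvalues, which guarantees diagonalizability):

```python
import numpy as np

# Invented diagonalizable matrix (eigenvalues 5 and 2 are distinct).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# numpy returns the eigenvalues and an eigenvector matrix V whose
# columns v_i satisfy A v_i = lambda_i v_i, i.e., AV = VD.
eigvals, V = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(A @ V, V @ D)

# V invertible <=> its columns (the eigenvectors) are linearly independent.
assert np.linalg.matrix_rank(V) == A.shape[0]

# Diagonalization: V^{-1} A V = D.
assert np.allclose(np.linalg.inv(V) @ A @ V, D)
```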
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123944351000181
Systems of Ordinary Differential Equations
Martha L. Abell , James P. Braselton , in Differential Equations with Mathematica (Fourth Edition), 2016
6.6.2 Repeated Eigenvalues
We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one eigenvector associated with it. Hence, we investigate the behavior of solutions in this case by considering both of these possibilities.
- 1.
-
Suppose that the eigenvalue λ = λ 1 = λ 2 has two corresponding linearly independent eigenvectors v 1 and v 2. Then, a general solution is
- 2.
-
Suppose that λ = λ 1 = λ 2 has only one corresponding eigenvector v 1. Hence, a general solution is
Example 6.6.2
Classify the equilibrium point (0, 0) in the systems: (a) ; (b) .
Solution
(a) Using Eigensystem,
{{−2, −2}, {{−3, 1}, {0, 0}}}
we see that λ 1 = λ 2 = −2 and that there is just one corresponding eigenvector. Therefore, because λ = −2 < 0, (0, 0) is a degenerate stable node. Notice that in the graph of several members of the family of solutions of this system along with the direction field shown in Figure 6-40, which we generate using the same technique as in part (b) of the previous example, the solutions approach (0, 0) along the line in the direction of the eigenvector (−3, 1).
Clear[x, y]
pvf1=StreamPlot[{x+9y, −x−5y}, {x, −1, 1}, {y, −1, 1}, StreamStyle→Fine];
Simplify[DSolve[{x′[t]==x[t]+9y[t], y′[t]==−x[t]−5y[t], x[0]==x0, y[0]==y0}, {x[t], y[t]}, t]]
{{x[t] → e −2t (x0 + 3tx0 + 9ty0), y[t] → e −2t (y0 − t(x0 + 3y0))}}
initconds1=Table[{−1, i}, {i, −1, 1, 2/9}]; initconds2=Table[{1, i}, {i, −1, 1, 2/9}]; initconds3=Table[{i, 1}, {i, −1, 1, 2/9}]; initconds4=Table[{i, −1}, {i, −1, 1, 2/9}];
toplot=Map[sol, initconds];
somegraphs=ParametricPlot[Evaluate[toplot], {t, −3, 3}, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1];
Show[pvf1, somegraphs, p4, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1, Axes→Automatic, Frame→False, AxesLabel→{x, y}, AxesOrigin→{0, 0}]
(b) We have λ 1 = λ 2 = 2 and two linearly independent vectors, (0, 1) and (1, 0). (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.)
{{2, 2}, {{0, 1}, {1, 0}}}
Because λ = 2 > 0, we classify (0, 0) as a degenerate unstable node (or star). Some of these solutions along with the direction field are graphed in Figure 6-41 in the same way as in part (c) of the previous example. Notice that they become unbounded in the direction of any vector in the xy-plane because and .
Clear[x, y]
pvf1=StreamPlot[{2x, 2y}, {x, −1, 1}, {y, −1, 1}, StreamStyle→Fine];
Simplify[DSolve[{x′[t]==2x[t], y′[t]==2y[t], x[0]==x0, y[0]==y0}, {x[t], y[t]}, t]]
{{x[t] → e 2t x0, y[t] → e 2t y0}}
sol[{x0_, y0_}]={E 2t x0, E 2t y0};
toplot=Map[sol, initconds];
somegraphs=ParametricPlot[Evaluate[toplot], {t, −3, 3}, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1];
Show[pvf1, somegraphs, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1, Axes→Automatic, Frame→False, AxesLabel→{x, y}, AxesOrigin→{0, 0}]
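The Eigensystem computation above can be reproduced outside Mathematica. A quick NumPy sketch for the coefficient matrix of part (a)'s system x′ = x + 9y, y′ = −x − 5y, showing the repeated eigenvalue and the single independent eigenvector that make (0, 0) a degenerate node:

```python
import numpy as np

# Coefficient matrix of x' = x + 9y, y' = -x - 5y (part (a)).
A = np.array([[1.0, 9.0],
              [-1.0, -5.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # both eigenvalues are (approximately) -2

# Geometric multiplicity: dimension of the null space of A - lambda*I.
geo_mult = A.shape[0] - np.linalg.matrix_rank(A - (-2.0) * np.eye(2))
print(geo_mult)  # 1 -> only one independent eigenvector: degenerate node
```

Since the repeated eigenvalue −2 admits only one independent eigenvector, the numerical result matches the classification in the text.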
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128047767000061
Systems of Differential Equations
Martha L. Abell , James P. Braselton , in Introductory Differential Equations (5th Edition), 2018
Repeated Eigenvalues
We recall from our previous experience with repeated eigenvalues of a system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.
- 1.
-
If the eigenvalue has two corresponding linearly independent eigenvectors and , a general solution is
- 2.
-
If the eigenvalue has only one corresponding (linearly independent) eigenvector , a general solution is
Note: The name "star" was selected due to the shape of the solutions.
Example 6.37
Classify the equilibrium point in the systems: (a) ; and (b) . Solution: (a) The eigenvalues are found by solving
Hence, . In this case, an eigenvector satisfies , which is equivalent to , so there is only one corresponding (linearly independent) eigenvector . Because , is a degenerate stable node. In this case, the eigenline is . We graph this line in Fig. 6.15A and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as and associate with each arrows directed toward the origin.
(b) Solving the characteristic equation
we have . However, because an eigenvector satisfies the system , any nonzero choice of is an eigenvector. If we select two linearly independent vectors such as and , we obtain two linearly independent eigenvectors corresponding to . (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.) Because , we classify as a degenerate unstable star node. A general solution of the system is , so when we eliminate the parameter, we obtain . Therefore, the trajectories of this system are lines passing through the origin. In Fig. 6.15B, we graph several trajectories. Because of the positive eigenvalue, we associate with each an arrow directed away from the origin. □
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128149485000069
Systems of Differential Equations
Martha L. Abell , James P. Braselton , in Introductory Differential Equations (Fourth Edition), 2014
Repeated Eigenvalues
We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.
- 1.
-
If the eigenvalue λ = λ1,2 has two corresponding linearly independent eigenvectors v 1 and v 2, a general solution is
Note: The name "star" was selected due to the shape of the solutions.
- 2.
-
If the eigenvalue λ = λ1,2 has only one corresponding (linearly independent) eigenvector v = v 1, a general solution is
Example 6.6.3
Classify the equilibrium point (0, 0) in the systems: (a) and (b) .
Solution
- (a)
-
The eigenvalues are found by solving . Hence, λ1,2 = − 2. In this case, an eigenvector satisfies , which is equivalent to , so there is only one corresponding (linearly independent) eigenvector . Because λ = − 2 < 0, (0, 0) is a degenerate stable node. In this case, the eigenline is y = − x/3. We graph this line in Figure 6.15(a) and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as t → ∞ and associate with each arrows directed toward the origin.
- (b)
-
Solving the characteristic equation
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124172197000065
Systems of linear differential equations
Henry J. Ricardo , in A Modern Introduction to Differential Equations (Third Edition), 2021
6.8.3 Both eigenvalues zero
Finally, let's assume that . If there are two linearly independent eigenvectors and , then the general solution is , a single vector of constants. If there is only one linearly independent eigenvector V corresponding to the eigenvalue 0, then we can find a generalized eigenvector and use formula (6.8.3):
For , we get . In Exercise 15 you will investigate a system that has both eigenvalues zero.
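The generalized-eigenvector construction for a double eigenvalue 0 can be sketched numerically. The matrix below is an invented example (not the one from Exercise 15): it is nilpotent, so both eigenvalues are 0 and there is only one independent eigenvector.

```python
import numpy as np

# Invented 2x2 matrix with both eigenvalues 0 but only one independent
# eigenvector (A is nilpotent: A @ A = 0).
A = np.array([[2.0, -4.0],
              [1.0, -2.0]])
assert np.allclose(A @ A, np.zeros((2, 2)))

V = np.array([2.0, 1.0])            # eigenvector: A V = 0
assert np.allclose(A @ V, 0)

# Generalized eigenvector W solves (A - 0*I) W = V; use least squares
# since A is singular (the system is consistent because V is in range(A)).
W, *_ = np.linalg.lstsq(A, V, rcond=None)
assert np.allclose(A @ W, V)

# Formula (6.8.3) with lambda = 0: X(t) = c1 V + c2 (t V + W).
c1, c2, t = 1.0, 2.0, 3.5
X = c1 * V + c2 * (t * V + W)
dX = c2 * V                          # d/dt X(t)
assert np.allclose(A @ X, dX)        # X(t) solves X' = A X
```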
Exercises 6.8
A
For each of the Systems 1–8, (a) find the eigenvalues and their corresponding linearly independent eigenvectors and (b) sketch/plot a few trajectories and show the position(s) of the eigenvector(s) if they do not have complex entries. Do part (a) manually, but if the eigenvalues are irrational numbers, you may use technology to find the corresponding eigenvectors.
- 1.
-
,
- 2.
-
,
- 3.
-
,
- 4.
-
,
- 5.
-
,
- 6.
-
,
- 7.
-
,
- 8.
-
,
B
- 9.
-
Given a characteristic polynomial , what condition on α and β guarantees that there is a repeated eigenvalue?
- 10.
-
Let . Show that A has only one eigenvalue if and only if .
- 11.
-
Write a system of first-order linear equations for which is a sink with eigenvalues and .
- 12.
-
Write a system of first-order linear equations for which is a source with eigenvalues and .
- 13.
-
Show that if V is an eigenvector of a matrix A corresponding to eigenvalue λ and vector W is a solution of , then V and W are linearly independent. [See Eqs. (6.8.2)–(6.8.3).] [Hint: Suppose that for some scalar c. Then show that V must be the zero vector.]
- 14.
-
Suppose that a system has only one eigenvalue λ, and that every eigenvector is a scalar multiple of one fixed eigenvector, V. Then Eq. (6.8.3) tells us that any trajectory has the form .
- a.
-
If , show that the slope of approaches the slope of the line determined by V as . [Hint: , as a scalar multiple of , is parallel to .]
- b.
-
If , show that the slope of approaches the slope of the line determined by V as .
- 15.
-
Consider the system , .
- a.
-
Show that the only eigenvalue of the system is 0.
- b.
-
Find the single independent eigenvector V corresponding to .
- c.
-
Show that every trajectory of this system is a straight line parallel to V, with trajectories on opposite sides of V moving in opposite directions. [Hint: First, for any trajectory not on the line determined by V, look at its slope, .]
- 16.
-
If is a system with a double eigenvalue and , show that the general solution of the system is
C
- 17.
-
Prove that is the general solution of , where .
- 18.
-
Suppose the matrix A has repeated real eigenvalues λ and there is a pair of linearly independent eigenvectors associated with A. Show that .
- 19.
-
A special case of the Cayley–Hamilton Theorem states that if is the characteristic equation of a matrix A, then is the zero matrix. (We say that a matrix always satisfies its own characteristic equation.) Using this result, show that if a matrix A has a repeated eigenvalue λ and (the zero vector), then either V is an eigenvector of A or else is an eigenvector of A. [See Appendix B.3 if you are not familiar with matrix-matrix multiplication.]
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128182178000130
Basic Iterative Methods
William Ford , in Numerical Linear Algebra with Applications, 2015
20.4.5 The Spectral Radius and Rate of Convergence
Intuitively, there should be a link between the spectral radius of the iteration matrix B and the rate of convergence. Suppose that B has n linearly independent eigenvectors, v 1, v 2,…, v n and associated eigenvalues λ 1, λ 2,…, λ n . Use the notation of Theorems 20.1 and 20.2 for the error e (k). Since the eigenvectors are a basis,
It follows that:
By continuing in this manner, there results
Let ρ (B) = λ 1 and suppose that |λ 1| > |λ 2| ≥ |λ 3| ≥ … ≥ |λ n | so that
As k becomes large, , 2 ≤ i ≤ n, becomes small and we have
This says that the error varies with the kth power of the spectral radius and that the spectral radius is a good indicator for the rate of convergence.
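This behavior is easy to observe numerically. A minimal sketch (the iteration matrix B is an invented example with spectral radius 0.5): iterating e^(k+1) = B e^(k), the ratio of successive error norms settles at ρ(B).

```python
import numpy as np

# Invented iteration matrix; triangular, so its eigenvalues are 0.5 and 0.1
# and the spectral radius is 0.5.
B = np.array([[0.5, 0.3],
              [0.0, 0.1]])
rho = max(abs(np.linalg.eigvals(B)))

rng = np.random.default_rng(0)
e = rng.standard_normal(2)           # initial error e^(0)

# e^(k) = B^k e^(0); the ratio ||e^(k+1)|| / ||e^(k)|| approaches rho(B)
# as the components along the subdominant eigenvector die out.
for _ in range(60):
    e_next = B @ e
    ratio = np.linalg.norm(e_next) / np.linalg.norm(e)
    e = e_next

print(ratio)   # approximately 0.5 = rho(B)
```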
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B978012394435100020X
Linear Transformations
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (5th Edition), 2016
Criterion for Diagonalization
Given a linear operator L on a finite dimensional vector space , our goal is to find a basis B for such that the matrix for L with respect to B is diagonal, as in Example 3. But, just as not every square matrix can be diagonalized, neither can every linear operator.
Definition
A linear operator L on a finite dimensional vector space is diagonalizable if and only if the matrix representation of L with respect to some ordered basis for is a diagonal matrix.
The next result indicates precisely which linear operators are diagonalizable.
Theorem 5.22
Let L be a linear operator on an n-dimensional vector space . Then L is diagonalizable if and only if there is a set of n linearly independent eigenvectors for L.
Proof
Suppose that L is diagonalizable. Then there is an ordered basis B = (v 1,…,v n ) for such that the matrix representation for L with respect to B is a diagonal matrix D. Now, B is a linearly independent set. If we can show that each vector v i in B, for 1 ≤ i ≤ n, is an eigenvector corresponding to some eigenvalue for L, then B will be a set of n linearly independent eigenvectors for L. Now, for each v i , we have where d ii is the (i, i) entry of D. Since coordinatization of vectors with respect to B is an isomorphism, we have L(v i ) = d ii v i , and so each v i is an eigenvector for L corresponding to the eigenvalue d ii .
Conversely, suppose that B = {w 1,…,w n } is a set of n linearly independent eigenvectors for L, corresponding to the (not necessarily distinct) eigenvalues λ 1,…,λ n , respectively. Since B contains n linearly independent vectors, B is a basis for , by part (2) of Theorem 4.12. We show that the matrix A for L with respect to B is, in fact, diagonal. Now, for 1 ≤ i ≤ n,
Thus, A is a diagonal matrix, and so L is diagonalizable.
Example 5
In Example 3, L: was defined by L([a, b]) = [b, a]. In that example, we found a set of two linearly independent eigenvectors for L, namely v 1 = [1,1] and v 2 = [1,−1]. Since , Theorem 5.22 indicates that L is diagonalizable. In fact, in Example 3, we computed the matrix for L with respect to the ordered basis (v 1,v 2) for to be the diagonal matrix .
Example 6
Consider the linear operator L: that rotates the plane counterclockwise through an angle of . Now, every nonzero vector v is moved to L(v), which is not parallel to v, since L(v) forms a 45° angle with v. Hence, L has no eigenvectors, and so a set of two linearly independent eigenvectors cannot be found for L. Therefore, by Theorem 5.22, L is not diagonalizable.
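The claim that a 45° rotation has no real eigenvectors shows up numerically as a pair of complex eigenvalues. A quick check with the standard rotation matrix:

```python
import numpy as np

# Counterclockwise rotation of the plane through 45 degrees.
theta = np.pi / 4
L = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals, _ = np.linalg.eig(L)
print(eigvals)   # cos(theta) +/- i sin(theta): no real eigenvalues

# No real eigenvalues -> no real eigenvectors, so a set of two linearly
# independent (real) eigenvectors cannot exist: L is not diagonalizable
# over the reals, as Theorem 5.22 predicts.
assert np.all(np.abs(eigvals.imag) > 0.5)
```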
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128008539000050
Eigenvalues, Eigenvectors, and Differential Equations
Richard Bronson , ... John T. Saccoman , in Linear Algebra (Third Edition), 2014
4.3 Diagonalization of Matrices
We are ready to answer the question that motivated this chapter: Which linear transformations can be represented by diagonal matrices and what bases generate such representations? Recall that different matrices represent the same linear transformation if and only if those matrices are similar (Theorem 3 of Section 3.4). Therefore, a linear transformation has a diagonal matrix representation if and only if any matrix representation of the transformation is similar to a diagonal matrix.
To establish whether a linear transformation T has a diagonal matrix representation, we first create one matrix representation for the transformation and then determine whether that matrix is similar to a diagonal matrix. If it is, we say the matrix is diagonalizable, in which case T has a diagonal matrix representation.
If a matrix A is similar to a diagonal matrix D, then the form of D is determined. Both A and D have identical eigenvalues, and the eigenvalues of a diagonal matrix (which is both upper and lower triangular) are the elements on its main diagonal. Consequently, the main diagonal of D must be the eigenvalues of A. If, for example,
A matrix is diagonalizable if it is similar to a diagonal matrix.
with eigenvalues − 1 and 5, is diagonalizable, then A must be similar to either
Now let A be an n × n matrix with n linearly independent eigenvectors x 1, x 2, … , x n corresponding to the eigenvalues λ 1, λ 2, … , λn , respectively. Therefore,
(4.7)
for j = 1,2, … , n. There are no restrictions on the multiplicity of the eigenvalues, so some or all of them may be equal. Set
Here M is called a modal matrix for A and D a spectral matrix for A. Now
(4.8)
Because the columns of M are linearly independent, the column rank of M is n, the rank of M is n, and M − 1 exists. Premultiplying Equation (4.8) by M − 1, we obtain
(4.9)
Postmultiplying Equation (4.8) by M − 1, we have
(4.10)
Thus, A is similar to D. We can retrace our steps and show that if Equation (4.10) is satisfied, then M must be an invertible matrix having as its columns a set of eigenvectors of A. We have proven the following result.
▸Theorem 1
An n × n matrix is diagonalizable if and only if the matrix possesses n linearly independent eigenvectors.◂
Example 1 Determine whether is diagonalizable.
Solution: Using the results of Example 3 of Section 4.1, we have λ 1 = − 1 and λ 2 = 5 as the eigenvalues of A with corresponding eigenspaces spanned by the vectors
respectively. These two vectors are linearly independent, so A is diagonalizable. We can choose either
Making the first choice, we find
Making the second choice, we find
In general, neither the modal matrix M nor the spectral matrix D is unique. However, once M is selected, then D is fully determined. The element of D located in the jth row and jth column must be the eigenvalue corresponding to the eigenvector in the jth column of M. In particular,
is matched with
while
is matched with
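The matching between the columns of M and the diagonal entries of D can be illustrated numerically. A hedged sketch: the matrix A below is an invented stand-in with the same eigenvalues, −1 and 5, since the displayed matrix from Example 1 is not reproduced in this excerpt.

```python
import numpy as np

# Invented 2x2 matrix with eigenvalues -1 and 5 (stand-in for the
# displayed matrix of Example 1, which is not reproduced here).
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# Eigenvectors: x1 for lambda = -1, x2 for lambda = 5.
x1 = np.array([[1.0], [-1.0]])
x2 = np.array([[1.0], [2.0]])
assert np.allclose(A @ x1, -1 * x1)
assert np.allclose(A @ x2, 5 * x2)

# Modal matrix M: eigenvectors as columns. Once M is chosen, D is fixed:
# the jth diagonal entry of D is the eigenvalue for the jth column of M.
M = np.hstack([x1, x2])
D = np.linalg.inv(M) @ A @ M
assert np.allclose(D, np.diag([-1.0, 5.0]))

# Swapping the columns of M swaps the diagonal entries of D.
M2 = np.hstack([x2, x1])
assert np.allclose(np.linalg.inv(M2) @ A @ M2, np.diag([5.0, -1.0]))
```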
Example 2 Determine whether is diagonalizable.
Solution: Using the results of Example 6 of Section 4.1, we have
as a basis for the eigenspace corresponding to eigenvalue λ = 1 of multiplicity 2 and
as a basis corresponding to eigenvalue λ = − 1 of multiplicity 1. These three vectors are linearly independent, so A is diagonalizable. If we choose
The procedure of determining whether a given set of eigenvectors is linearly independent is simplified by the following two results.
▸Theorem 2
Eigenvectors of a matrix corresponding to distinct eigenvalues are linearly independent.◂
Proof
Let λ 1, λ 2, … , λk denote the distinct eigenvalues of an n × n matrix A with corresponding eigenvectors x 1, x 2, … , x k . If all the eigenvalues have multiplicity 1, then k = n; otherwise k < n. We use mathematical induction to prove that {x 1, x 2, … , x k } is a linearly independent set.
For k = 1, the set {x 1} is linearly independent because the eigenvector x 1 cannot be 0. We now assume that the set {x 1, x 2, … , x k− 1} is linearly independent and use this to show that the set {x 1, x 2, … , x k− 1, x k } is linearly independent. This is equivalent to showing that the only solution to the vector equation
(4.11)
is c 1 = c 2 = ⋯ = c k− 1 = ck = 0. Multiplying Equation (4.11) on the left by A and using the fact that Ax j = λj x j for j = 1,2, … , k, we obtain
(4.12)
Multiplying Equation (4.11) by λk , we obtain
(4.13)
Subtracting Equation (4.13) from (4.12), we have
But the vectors {x 1, x 2, … , x k− 1} are linearly independent by the induction hypothesis, hence the coefficients in the last equation must all be 0; that is,
from which we infer that c 1 = c 2 = ⋯ = c k− 1 = 0, because the eigenvalues are distinct. Equation (4.11) reduces to ck x k = 0 and because x k is an eigenvector, and therefore nonzero, we also conclude that ck = 0, and the proof is complete.
It follows from Theorems 1 and 2 that any n × n real matrix having n distinct real roots of its characteristic equation, that is, a matrix having n eigenvalues all of multiplicity one, must be diagonalizable (see, in particular, Example 1).
Example 3 Determine whether is diagonalizable.
Solution: The matrix is lower triangular so its eigenvalues are the elements on the main diagonal, namely 2, 3, and 4. Every eigenvalue has multiplicity 1, hence A is diagonalizable.
▸Theorem 3
If λ is an eigenvalue of multiplicity k of an n × n matrix A, then the number of linearly independent eigenvectors of A associated with λ is n − r(A − λ I), where r denotes rank.◂
Proof
The eigenvectors of A corresponding to the eigenvalue λ are all nonzero solutions of the vector Equation (A − λ I)x = 0. This homogeneous system is consistent, so by Theorem 3 of Section 2.6 the solutions will be in terms of n − r(A − λ I) arbitrary unknowns. Since these unknowns can be picked independently of each other, they generate n − r(A − λ I) linearly independent eigenvectors.
In Example 2, A is a 3 × 3 matrix (n = 3) and λ = 1 is an eigenvalue of multiplicity 2. In this case,
can be transformed into row-reduced form (by adding to the second row − 3 times the first row)
having rank 1. Thus, n − r(A − I) = 3 − 1 = 2 and A has 2 linearly independent eigenvectors associated with λ = 1. Two such vectors are exhibited in Example 2.
Example 4 Determine whether is diagonalizable.
Solution: The matrix is upper triangular so its eigenvalues are the elements on the principal diagonal, namely, 2 and 2. Thus, A is a 2 × 2 matrix with one eigenvalue of multiplicity 2. Here
has a rank of 1. Thus, n − r(A − 2I) = 2 − 1 = 1 and A has just one linearly independent eigenvector associated with its eigenvalue, not two as needed. Matrix A is not diagonalizable.
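Theorem 3's counting rule n − r(A − λI) is straightforward to compute. A sketch (the upper-triangular matrix below is an invented stand-in for Example 4's, which is not reproduced in this excerpt):

```python
import numpy as np

def num_independent_eigenvectors(A, lam):
    """Theorem 3: the count is n - r(A - lam*I), where r denotes rank."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Invented stand-in for Example 4: eigenvalue 2 of multiplicity 2
# but only one independent eigenvector, so not diagonalizable.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
print(num_independent_eigenvectors(A, 2.0))   # 1 -> not diagonalizable

# A diagonal matrix with the same repeated eigenvalue has the full count.
B = 2.0 * np.eye(2)
print(num_independent_eigenvectors(B, 2.0))   # 2 -> diagonalizable
```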
We saw in the beginning of Section 4.1 that if a linear transformation is represented by a diagonal matrix, then the basis that generates such a representation is a basis of eigenvectors. To this we now add that a linear transformation , where is n-dimensional, can be represented by a diagonal matrix if and only if T possesses n linearly independent eigenvectors. When such a set exists, it is a basis for .
If is an n-dimensional vector space, then a linear transformation may be represented by a diagonal matrix if and only if T possesses a basis of eigenvectors.
Example 5 Determine whether the linear transformation T : defined by
can be represented by a diagonal matrix.
Solution: A standard basis for is , and we showed in Example 7 of Section 4.1 that a matrix representation for T with respect to this basis is
It now follows from Example 1 that this matrix is diagonalizable; hence T can be represented by a diagonal matrix D, in fact, either of the two diagonal matrices produced in Example 1.
Furthermore, we have from Example 7 of Section 4.1 that − t + 1 is an eigenvector of T corresponding to λ 1 = − 1 while 5t + 10 is an eigenvector corresponding to λ 2 = 5. Since both polynomials correspond to distinct eigenvalues, the vectors are linearly independent and, therefore, constitute a basis. Setting , we have the matrix representation of T with respect to as
Example 6 Let be the set of all 2 × 2 real upper triangular matrices. Determine whether the linear transformation defined by
can be represented by a diagonal matrix and, if so, produce a basis that generates such a representation.
Solution: is closed under addition and scalar multiplication, so it is a subspace of . A simple basis for is given by
With respect to these basis vectors,
and a matrix representation for T is
The eigenvalues of this matrix are 2, 2, and 4. Even though the eigenvalues are not all distinct, the matrix still has 3 linearly independent eigenvectors, namely,
Thus, A is diagonalizable and, therefore, T has a diagonal matrix representation. Setting
which is one diagonal representation for T .
The vectors x 1, x 2, and x 3 are coordinate representations with respect to the basis for
The set
is a basis of eigenvectors of T for the vector space . A matrix representation of T with respect to the basis is the diagonal matrix D.
Problems 4.3
In Problems 1 through 11, determine whether the matrices are diagonalizable. If they are, identify a modal matrix M and calculate M − 1 AM.
- (1)
-
.
- (2)
-
.
- (3)
-
.
- (4)
-
.
- (5)
-
.
- (6)
-
.
- (7)
-
.
- (8)
-
.
- (9)
-
.
- (10)
-
.
- (11)
-
.
In Problems 12 through 21, determine whether the linear transformations can be represented by diagonal matrices and, if so, produce bases that will generate such representations.
- (12)
-
defined by T (at + b) = (2a − 3b)t + (a − 2b).
- (13)
-
defined by T (at + b) = (4a + 3b)t + (3a − 4b).
- (14)
-
defined by T (at 2 + bt + c) = at 2 + (2a − 3b + 3c)t + (a + 2b + 2c).
- (15)
-
defined by T (at 2 + bt + c) = (5a + b + 2c)t 2 + 3bt + (2a + b + 5c).
- (16)
-
defined by T (at 2 + bt + c) = (3a + b)t 2 + (3b + c)t + 3c.
- (17)
-
where is the set of all 2 × 2 real upper triangular matrices and
- (18)
-
where is the set of all 2 × 2 real upper triangular matrices and
- (19)
-
where is the set of all 2 × 2 real lower triangular matrices and
- (20)
-
defined by
- (21)
-
defined by .
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123914200000044
Discrete Dynamical Systems, Bifurcations and Chaos in Economic science
Wei-Bin Zhang , in Mathematics in Science and Engineering, 2006
6.2 Autonomous linear difference equations
We now study the following linear homogeneous difference equations
(6.2.1)
where
and A is an n×n real nonsingular matrix. A solution of system (6.2.1) is an expression that satisfies this system for all t ≥ 0. A general solution is a solution that contains all solutions of the system. A particular solution is one that satisfies an initial condition x 0 = x(t 0). The problem of finding a particular solution with specified initial conditions is called an initial value problem. It can be seen that the solution of system (6.2.1) has the form
Theorem 6.2.1. There exists a fundamental set, denoted by
of solutions for system (6.2.1). A general solution is given by
Along with the homogeneous system (6.2.1), we consider the nonhomogeneous system
(6.2.2)
The initial value problem (6.2.2) has a unique solution given by
We see that the main problem is to calculate At . There are some algorithms for computing At . Here, we introduce the Putzer algorithm. 1 Let the characteristic equation of A be
where a0 = 1. Let
be the eigenvalues of A (some of them may be repeated). The following formula determines At
where
Example Solve
The three eigenvalues of A are
Hence, we have
Applying the above calculation results to
we get
is the solution.
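The Putzer recurrences are easy to implement. The sketch below is a reconstruction following one standard statement of the discrete Putzer algorithm (as in Elaydi's treatment of difference equations), since the displayed formulas are not reproduced in this excerpt: A^t = Σ_j u_j(t) M_{j−1} with M_0 = I, M_j = (A − λ_j I) M_{j−1}, u_1(t) = λ_1^t, and u_j(t+1) = λ_j u_j(t) + u_{j−1}(t) with u_j(0) = 0 for j > 1.

```python
import numpy as np

def putzer_power(A, t):
    """Compute A**t via the discrete Putzer algorithm (reconstruction).

    A^t = sum_j u_j(t) M_{j-1}, where M_0 = I, M_j = (A - lam_j I) M_{j-1},
    u_1(t) = lam_1**t, and u_j(t+1) = lam_j u_j(t) + u_{j-1}(t), u_j(0) = 0.
    """
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    # Matrix products M_0, ..., M_{n-1}.
    M = [np.eye(n, dtype=complex)]
    for j in range(n - 1):
        M.append((A - lam[j] * np.eye(n)) @ M[-1])
    # Advance the scalar recurrences from time 0 to time t.
    u = np.zeros(n, dtype=complex)
    u[0] = 1.0                          # u_1(0) = lam_1**0
    for _ in range(t):
        u_new = np.empty_like(u)
        u_new[0] = lam[0] * u[0]
        for j in range(1, n):
            u_new[j] = lam[j] * u[j] + u[j - 1]
        u = u_new
    return sum(u[j] * M[j] for j in range(n)).real

# Invented test matrix: result agrees with direct matrix powering.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert np.allclose(putzer_power(A, 5), np.linalg.matrix_power(A, 5))
```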
We now use the Jordan form to solve system (6.2.1). First, we consider the case that A is similar to the diagonal matrix
where ρi are the eigenvalues of A. 2 That is, there exists a non-singular matrix P such that
From AP = PD, we have
where ξi is the ith column of P. We see that ξi is the eigenvector of A corresponding to the eigenvalue ρ i. From
and P being non-singular, we have
Consequently, we have
(6.2.3)
Substituting (equation 6.2.3) into
with t 0 = 0 yields the general solution
(6.2.4)
As
the general solution (6.2.4) can also be expressed by
(6.2.5)
After having calculated the eigenvalues and eigenvectors, we may directly determine ai by (equation 6.2.5) through the initial conditions without computing P −1
Example Find the general solution and the initial value problem of x(t + 1) = Ax(t)
The three eigenvalues of matrix A are
Correspondingly, we can find three linearly independent vectors 3
It should be noted that there are infinite choices for ξ 2 and ξ 3 because of the multiplicity of the corresponding eigenvalues. The general solution is
The solution of the initial value problem is found by substituting the initial condition x 0 into the above equation and then solving for ai . We calculate
Hence
We may also use x(t) = Atx 0 and (equation 6.2.3) to solve the initial value problem. We get the same solution by computing
The reader is asked to check the result.
The matrix A may not be diagonalizable when A has repeated eigenvalues. There is something close to diagonal form called the Jordan canonical form of a square matrix. A basic Jordan block associated with a value ρ is expressed
The Jordan canonical form of a square matrix is composed of such Jordan blocks.
Theorem 6.2.2. (the Jordan canonical form) Any n×n matrix A is similar to a Jordan form given by
where each Ji is an si × si basic Jordan block and
Assume that A is similar to J under P, i.e., P −1 AP = J. We have
It can be seen that
We can write Ji as
where Ni is an si × si nilpotent matrix. Using for all k ≥ si , we have
The general solution of (equation 6.2.1) (for t 0 = 0 ) is now given by
Corollary 6.2.1. Assume that A is any n×n matrix. Then
if and only if |ρ| < 1 for all eigenvalues ρ of A.
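Corollary 6.2.1 can be observed directly: when every eigenvalue lies strictly inside the unit circle, the powers of A decay to the zero matrix. A minimal sketch with an invented matrix whose eigenvalues are 0.7 and 0.3:

```python
import numpy as np

# Invented matrix; its eigenvalues are 0.7 and 0.3, both of modulus < 1.
A = np.array([[0.6, 0.3],
              [0.1, 0.4]])
assert max(abs(np.linalg.eigvals(A))) < 1

# Corollary 6.2.1: |rho| < 1 for all eigenvalues rho  =>  A**t -> 0.
print(np.linalg.matrix_power(A, 200))   # effectively the zero matrix
```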
Exercise 6.2
- 1
-
Use the Putzer algorithm to evaluate At
- (1)
-
- (2)
-
- 2
-
Solve the following systems with the Putzer algorithm
- (1)
-
- (2)
-
- 3
-
Use formula (6.1.5) to find the solution of x(t + 1) = Ax(t)
- (1)
-
- (2)
-
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/S0076539206800251
Source: https://www.sciencedirect.com/topics/mathematics/linearly-independent-eigenvectors