Eigenvalues and eigenvectors

Richard Bronson, Gabriel B. Costa, in Matrix Methods (Fourth Edition), 2021

Problems 6.5

In Problems 1–16, find a set of linearly independent eigenvectors for the given matrices.

1.

[2 1; 1 4],

2.

[3 1; 0 3],

3.

[3 0; 0 3],

4.

[2 1 1; 0 1 0; 1 1 2],

5.

[2 1 1; 0 1 0; 1 2 2],

6.

[2 0 1; 2 1 2; 1 0 2],

7.

[1 1 1; 0 0 0; 1 2 3],

8.

[1 2 3; 2 4 6; 3 6 9],

9.

[3 1 1; 1 3 1; 1 1 3],

10.

[0 1 0; 0 0 1; 27 −27 9],

11.

[0 1 0; 0 0 1; 1 −3 3],

12.

[4 2 1; 2 7 2; 1 2 4],

13.

[0 1 0 0; 0 0 1 0; 0 0 0 1; −1 4 −6 4],

14.

[1 0 0 0; 0 0 1 0; 0 0 0 1; 0 1 −3 3],

15.

[1 0 0 0; 1 1 1 1; 1 1 2 1; 1 1 1 2],

16.

[3 1 1 2; 0 3 1 1; 0 0 2 0; 0 0 0 2].

17.

The Vandermonde determinant

| 1        1        ⋯  1        |
| x1       x2       ⋯  xn       |
| x1²      x2²      ⋯  xn²      |
| ⋮        ⋮           ⋮        |
| x1^(n−1) x2^(n−1) ⋯  xn^(n−1) |

is known to equal the product

(x2 − x1)(x3 − x2)(x3 − x1)(x4 − x3)(x4 − x2) ⋯ (xn − x1),

Using this result, prove Theorem 3 for n distinct eigenvalues.
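This identity is easy to sanity-check numerically. The following Python sketch (not part of the original problem set; the sample points are arbitrary choices) computes both sides exactly with rational arithmetic:

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def vandermonde_det(xs):
    """Determinant of the matrix whose (i, j) entry is xs[j]**i."""
    n = len(xs)
    return det([[Fraction(x) ** i for x in xs] for i in range(n)])

def difference_product(xs):
    """Product of (xs[j] - xs[i]) over all pairs i < j."""
    prod = Fraction(1)
    for j in range(len(xs)):
        for i in range(j):
            prod *= xs[j] - xs[i]
    return prod

xs = [2, 3, 5, 7]
print(vandermonde_det(xs) == difference_product(xs) != 0)  # True
```

Since the product is nonzero exactly when the points are distinct, this nonvanishing is the fact the proof of Theorem 3 turns on.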

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012818419600006X

The Algebraic Eigenvalue Problem

William Ford, in Numerical Linear Algebra with Applications, 2015

18.2.1 Additional Property of a Diagonalizable Matrix

Theorem 5.3 states that if the n×n matrix A has n linearly independent eigenvectors v1, v2, …, vn, then A can be diagonalized by the eigenvector matrix X = (v1 v2 … vn). The converse of Theorem 5.3 is also true; that is, if a matrix can be diagonalized, it must have n linearly independent eigenvectors. We need this result for the purposes of developing the power method in Section 18.2.2.

Theorem 18.1

If A is a real n × n matrix that is diagonalizable, it must have n linearly independent eigenvectors.

Proof. We know there is an invertible matrix V such that V⁻¹AV = D, where D = diag(λ1, λ2, …, λn) is a diagonal matrix, and let v1, v2, …, vn be the columns of V. Since V is invertible, the vi are linearly independent. The relationship V⁻¹AV = D gives AV = VD, and using matrix column notation we have

AV = A[v1 v2 ⋯ vn] = [v1 v2 ⋯ vn] diag(λ1, λ2, …, λn).

Column i of A[v1 v2 ⋯ vn] is Avi, and column i of [v1 v2 ⋯ vn] diag(λ1, λ2, …, λn) is λi vi, so Avi = λi vi.

Thus, the linearly independent set v1, v2, …, vn consists of eigenvectors of A corresponding to the eigenvalues λ1, λ2, …, λn.
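The column-by-column argument can be spot-checked numerically. In the Python sketch below, the matrix A, its eigenvalues, and the eigenvector matrix V are illustrative choices (not from the text); the check confirms that AV = VD forces A vi = λi vi for each column:

```python
def matmul(a, b):
    """Plain triple-loop matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[2.0, 1.0],
     [1.0, 2.0]]      # illustrative matrix with eigenvalues 3 and 1
V = [[1.0, 1.0],
     [1.0, -1.0]]     # columns: eigenvectors (1, 1) and (1, -1)
lams = [3.0, 1.0]     # matching diagonal entries of D

AV = matmul(A, V)
for i, lam in enumerate(lams):
    # column i of AV must equal lambda_i times column i of V
    assert [AV[r][i] for r in range(2)] == [lam * V[r][i] for r in range(2)]
print("each column of V is an eigenvector: A v_i = lambda_i v_i")
```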

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123944351000181

Systems of Ordinary Differential Equations

Martha L. Abell, James P. Braselton, in Differential Equations with Mathematica (Fourth Edition), 2016

6.6.2 Repeated Eigenvalues

We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one eigenvector associated with it. Hence, we investigate the behavior of solutions in this case by considering both of these possibilities.

1.

Suppose that the eigenvalue λ = λ1 = λ2 has two corresponding linearly independent eigenvectors v1 and v2. Then, a general solution is

X = c1 v1 e^(λt) + c2 v2 e^(λt).

Hence, if λ > 0, then X becomes unbounded along the line through the origin determined by the vector c1 v1 + c2 v2, where c1 and c2 are arbitrary constants. In this case, we call the equilibrium point a degenerate unstable node (or an unstable star). On the other hand, if λ < 0, then X approaches (0, 0) along these lines, and we call (0, 0) a degenerate stable node (or stable star). Note that the name "star" was selected due to the shape of the solutions.
2.

Suppose that λ = λ1 = λ2 has only one corresponding eigenvector v1. Hence, a general solution is

X = c1 v1 e^(λt) + c2 (v1 t + w2) e^(λt) = (c1 v1 + c2 w2) e^(λt) + c2 v1 t e^(λt),

where (A − λI) w2 = v1. We can more easily investigate the behavior of this solution if we write it as

X = t e^(λt) [ (1/t)(c1 v1 + c2 w2) + c2 v1 ].

If λ < 0, lim_{t→∞} t e^(λt) = 0 and lim_{t→∞} [ (1/t)(c1 v1 + c2 w2) + c2 v1 ] = c2 v1. Hence, the solutions approach (0, 0) along the line determined by v1, and we call (0, 0) a degenerate stable node. If λ > 0, the solutions become unbounded along this line, and we say that (0, 0) is a degenerate unstable node.
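That this formula really solves X′ = AX can be spot-checked numerically. The Python sketch below uses the matrix of Example 6.6.2 with λ = −2 and v1 = (−3, 1); the generalized vector w2 = (−1, 0) is one particular solution of (A − λI)w2 = v1, and the constants are arbitrary illustrative choices:

```python
import math

A = [[1.0, 9.0], [-1.0, -5.0]]
lam = -2.0
v = (-3.0, 1.0)
w = (-1.0, 0.0)      # check: (A + 2I) w = (3*(-1) + 9*0, -(-1) - 3*0) = (-3, 1) = v

c1, c2 = 0.7, -1.3   # arbitrary constants

def X(t):
    """X(t) = c1 v e^(lam t) + c2 (v t + w) e^(lam t)."""
    e = math.exp(lam * t)
    return (c1 * v[0] * e + c2 * (v[0] * t + w[0]) * e,
            c1 * v[1] * e + c2 * (v[1] * t + w[1]) * e)

# Compare dX/dt (central difference) with A X at a sample time.
t, h = 0.4, 1e-6
xp, xm = X(t + h), X(t - h)
dX = ((xp[0] - xm[0]) / (2 * h), (xp[1] - xm[1]) / (2 * h))
x = X(t)
AX = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
print(max(abs(dX[0] - AX[0]), abs(dX[1] - AX[1])) < 1e-4)  # True
```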

Example 6.6.2

Classify the equilibrium point (0, 0) in the systems: (a) x′ = x + 9y, y′ = −x − 5y; (b) x′ = 2x, y′ = 2y.

Solution

(a) Using Eigensystem,

a = {{1, 9}, {−1, −5}}; Eigensystem[a]

{{−2, −2}, {{−3, 1}, {0, 0}}}

we see that λ1 = λ2 = −2 and that there is only one corresponding eigenvector. Therefore, because λ = −2 < 0, (0, 0) is a degenerate stable node. Notice that in the graph of several members of the family of solutions of this system along with the direction field shown in Figure 6-40, which we generate using the same technique as in part (b) of the previous example, the solutions approach (0, 0) along the line in the direction of v1 = (−3, 1), y = −x/3.

Figure 6-40. The origin is a degenerate stable node

Clear[x, y]
pvf1=StreamPlot[{x+9y, −x−5y}, {x, −1, 1}, {y, −1, 1}, StreamStyle→Fine];

Simplify[DSolve[{x′[t]==x[t]+9y[t], y′[t]==−x[t]−5y[t], x[0]==x0, y[0]==y0}, {x[t], y[t]}, t]]

{{x[t] → e^(−2t) (x0 + 3t x0 + 9t y0), y[t] → e^(−2t) (y0 − t(x0 + 3y0))}}

sol[{x0_, y0_}] = {(x0 + 3t x0 + 9t y0) E^(−2t), (−(t x0) + y0 − 3t y0) E^(−2t)};

initconds1=Table[{−1, i}, {i, −1, 1, 2/9}]; initconds2=Table[{1, i}, {i, −1, 1, 2/9}]; initconds3=Table[{i, 1}, {i, −1, 1, 2/9}]; initconds4=Table[{i, −1}, {i, −1, 1, 2/9}];

initconds=Union[initconds1, initconds2, initconds3, initconds4];

toplot=Map[sol, initconds];

somegraphs=ParametricPlot[Evaluate[toplot], {t, −3, 3}, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1];

p4=Plot[−x/3, {x, −1, 1}, PlotStyle→{{CMYKColor[0, 0.89, 0.94, 0.28], Thickness[.01]}}];

Show[pvf1, somegraphs, p4, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1, Axes→Automatic, Frame→False, AxesLabel→{x, y}, AxesOrigin→{0, 0}]

(b) We take λ one = λ 2 = 2 and ii linearly independent vectors, v 1 = 1 0 and five ii = 0 1 . (Note: The choice of these two vectors does non change the value of the solution, because of the form of the general solution in this case.)

a = {{2, 0}, {0, 2}}; Eigensystem[a]

{{2, 2}, {{0, 1}, {1, 0}}}

Because λ = 2 > 0, we classify (0, 0) as a degenerate unstable node (or star). Some of these solutions along with the direction field are graphed in Figure 6-41 in the same way as in part (c) of the previous example. Notice that they become unbounded in the direction of any vector in the xy-plane because v1 = (1, 0) and v2 = (0, 1).

Figure 6-41. The origin is a degenerate unstable node

Clear[x, y]
pvf1=StreamPlot[{2x, 2y}, {x, −1, 1}, {y, −1, 1}, StreamStyle→Fine];

Simplify[DSolve[{x′[t]==2x[t], y′[t]==2y[t], x[0]==x0, y[0]==y0}, {x[t], y[t]}, t]]

{{x[t] → e^(2t) x0, y[t] → e^(2t) y0}}

sol[{x0_, y0_}]={E^(2t) x0, E^(2t) y0};

initconds=Table[{0.05 Cos[2π t], 0.05 Sin[2π t]}, {t, 0, 1, 1/24}];

toplot=Map[sol, initconds];

somegraphs=ParametricPlot[Evaluate[toplot], {t, −3, 3}, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1];

Show[pvf1, somegraphs, PlotRange→{{−1, 1}, {−1, 1}}, AspectRatio→1, Axes→Automatic, Frame→False, AxesLabel→{x, y}, AxesOrigin→{0, 0}]

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128047767000061

Systems of Differential Equations

Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fifth Edition), 2018

Repeated Eigenvalues

We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.

1.

If the eigenvalue λ = λ1,2 has two corresponding linearly independent eigenvectors v1 and v2, a general solution is

X(t) = c1 v1 e^(λt) + c2 v2 e^(λt) = (c1 v1 + c2 v2) e^(λt).

If λ > 0, then X(t) becomes unbounded along the lines through (0, 0) determined by the vectors c1 v1 + c2 v2, where c1 and c2 are arbitrary constants. In this case, we call the equilibrium point an unstable star node. However, if λ < 0, then X(t) approaches (0, 0) along these lines, and we call (0, 0) a stable star node.
2.

If the eigenvalue λ = λ1,2 has only one corresponding (linearly independent) eigenvector v = v1, a general solution is

X(t) = c1 v e^(λt) + c2 (v t + w) e^(λt) = (c1 v + c2 w) e^(λt) + c2 v t e^(λt)

where w satisfies (A − λI)w = v. If we write this solution as

X(t) = t e^(λt) [ (c1 v + c2 w)(1/t) + c2 v ],

we can more easily investigate the behavior of this solution. If λ < 0, then lim_{t→∞} t e^(λt) = 0 and lim_{t→∞} [(c1 v + c2 w)(1/t) + c2 v] = c2 v. The solutions approach (0, 0) along the line through (0, 0) determined by v, and we call (0, 0) a stable deficient node. If λ > 0, the solutions become unbounded along this line, and we say that (0, 0) is an unstable deficient node.

Note: The name "star" was selected due to the shape of the solutions.

Example 6.37

Classify the equilibrium point (0, 0) in the systems: (a) x′ = x + 9y, y′ = −x − 5y; and (b) x′ = 2x, y′ = 2y.

Solution: (a) The eigenvalues are found by solving

| 1 − λ  9; −1  −5 − λ | = λ² + 4λ + 4 = (λ + 2)² = 0.

Hence, λ1,2 = −2. In this case, an eigenvector v1 = (x1; y1) satisfies (3 9; −1 −3)(x1; y1) = (0; 0), which is equivalent to (1 3; 0 0)(x1; y1) = (0; 0), so there is only one corresponding (linearly independent) eigenvector v1 = (−3y1; y1) = (−3; 1)y1. Because λ = −2 < 0, (0, 0) is a degenerate stable node. In this case, the eigenline is y = −x/3. We graph this line in Fig. 6.15A and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as t → ∞ and associate with each arrows directed toward the origin.

Figure 6.15

Figure 6.15. (A) Phase portrait for Example 6.37, solution (a). (B) Phase portrait for Example 6.37, solution (b).

(b) Solving the characteristic equation

| 2 − λ  0; 0  2 − λ | = (2 − λ)² = 0,

we have λ = λ1,2 = 2. However, because an eigenvector v1 = (x1; y1) satisfies the system (0 0; 0 0)(x1; y1) = (0; 0), any nonzero choice of v1 is an eigenvector. If we select two linearly independent vectors such as v1 = (1; 0) and v2 = (0; 1), we obtain two linearly independent eigenvectors corresponding to λ1,2 = 2. (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.) Because λ = 2 > 0, we classify (0, 0) as a degenerate unstable star node. A general solution of the system is X(t) = c1 (1; 0) e^(2t) + c2 (0; 1) e^(2t), so when we eliminate the parameter, we obtain y = c2 x/c1. Therefore, the trajectories of this system are lines passing through the origin. In Fig. 6.15B, we graph several trajectories. Because of the positive eigenvalue, we associate with each an arrow directed away from the origin.  □

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128149485000069

Systems of Differential Equations

Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fourth Edition), 2014

Repeated Eigenvalues

We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.

1.

If the eigenvalue λ = λ1,2 has two corresponding linearly independent eigenvectors v1 and v2, a general solution is

X(t) = c1 v1 e^(λt) + c2 v2 e^(λt) = (c1 v1 + c2 v2) e^(λt).

If λ > 0, then X(t) becomes unbounded along the lines through (0, 0) determined by the vectors c1 v1 + c2 v2, where c1 and c2 are arbitrary constants. In this case, we call the equilibrium point an unstable star node. However, if λ < 0, then X(t) approaches (0, 0) along these lines, and we call (0, 0) a stable star node.

Note: The name "star" was selected due to the shape of the solutions.

2.

If the eigenvalue λ = λ1,2 has only one corresponding (linearly independent) eigenvector v = v1, a general solution is

X(t) = c1 v e^(λt) + c2 (v t + w) e^(λt) = (c1 v + c2 w) e^(λt) + c2 v t e^(λt)

where w satisfies (A − λI)w = v. If we write this solution as

X(t) = t e^(λt) [ (c1 v + c2 w)(1/t) + c2 v ],

we can more easily investigate the behavior of this solution. If λ < 0, then lim_{t→∞} t e^(λt) = 0 and lim_{t→∞} [(c1 v + c2 w)(1/t) + c2 v] = c2 v. The solutions approach (0, 0) along the line through (0, 0) determined by v, and we call (0, 0) a stable deficient node. If λ > 0, the solutions become unbounded along this line, and we say that (0, 0) is an unstable deficient node.

Example 6.6.3

Classify the equilibrium point (0, 0) in the systems: (a) x′ = x + 9y, y′ = −x − 5y and (b) x′ = 2x, y′ = 2y.

Solution

(a)

The eigenvalues are found by solving | 1 − λ  9; −1  −5 − λ | = λ² + 4λ + 4 = (λ + 2)² = 0. Hence, λ1,2 = −2. In this case, an eigenvector v1 = (x1; y1) satisfies (3 9; −1 −3)(x1; y1) = (0; 0), which is equivalent to (1 3; 0 0)(x1; y1) = (0; 0), so there is only one corresponding (linearly independent) eigenvector v1 = (−3y1; y1) = (−3; 1)y1. Because λ = −2 < 0, (0, 0) is a degenerate stable node. In this case, the eigenline is y = −x/3. We graph this line in Figure 6.15(a) and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as t → ∞ and associate with each arrows directed toward the origin.

Figure 6.15. (a) Phase portrait for Example 6.6.3, solution (a). (b) Phase portrait for Example 6.6.3, solution (b).

(b)

Solving the characteristic equation

| 2 − λ  0; 0  2 − λ | = (2 − λ)² = 0

we have λ = λ1,2 = 2. However, because an eigenvector v1 = (x1; y1) satisfies the system (0 0; 0 0)(x1; y1) = (0; 0), any nonzero choice of v1 is an eigenvector. If we select two linearly independent vectors such as v1 = (1; 0) and v2 = (0; 1), we obtain two linearly independent eigenvectors corresponding to λ1,2 = 2. (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.) Because λ = 2 > 0, we classify (0, 0) as a degenerate unstable star node. A general solution of the system is X(t) = c1 (1; 0) e^(2t) + c2 (0; 1) e^(2t), so when we eliminate the parameter, we obtain y = c2 x/c1. Therefore, the trajectories of this system are lines passing through the origin. In Figure 6.15(b), we graph several trajectories. Because of the positive eigenvalue, we associate with each an arrow directed away from the origin.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124172197000065

Systems of linear differential equations

Henry J. Ricardo, in A Modern Introduction to Differential Equations (Third Edition), 2021

6.8.3 Both eigenvalues zero

Finally, let's assume that λ1 = λ2 = 0. If there are two linearly independent eigenvectors V1 and V2, then the general solution is X(t) = c1 e^(0t) V1 + c2 e^(0t) V2 = c1 V1 + c2 V2, a single vector of constants. If there is only one linearly independent eigenvector V corresponding to the eigenvalue 0, then we can find a generalized eigenvector and use formula (6.8.3):

X(t) = c1 e^(λt) V + c2 [ t e^(λt) V + e^(λt) W ].

For λ = 0, we get X(t) = c1 V + c2 [t V + W] = (c1 + c2 t)V + c2 W. In Exercise 15 you will investigate a system that has both eigenvalues zero.
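A quick Python check of the λ = 0 formula, using the system of Exercise 15 (ẋ = 6x + 4y, ẏ = −9x − 6y); the vectors V and W and the constants below are illustrative choices satisfying AV = 0 and AW = V:

```python
A = [[6.0, 4.0], [-9.0, -6.0]]
V = (2.0, -3.0)    # A V = 0, so 0 is an eigenvalue
W = (1.0, -1.0)    # generalized eigenvector: A W = V

c1, c2 = 2.0, 5.0  # arbitrary constants

def X(t):
    """X(t) = (c1 + c2 t) V + c2 W."""
    return ((c1 + c2 * t) * V[0] + c2 * W[0],
            (c1 + c2 * t) * V[1] + c2 * W[1])

# X'(t) = c2 V, and A X(t) collapses to c2 (A W) = c2 V for every t.
for t in (0.0, 1.0, -2.5):
    x = X(t)
    AX = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
    assert AX == (c2 * V[0], c2 * V[1])
print("X(t) = (c1 + c2 t)V + c2 W solves X' = AX")
```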

Exercises 6.8

A

For each of the Systems 1–8, (a) find the eigenvalues and their corresponding linearly independent eigenvectors and (b) sketch/plot a few trajectories and show the position(s) of the eigenvector(s) if they do not have complex entries. Do part (a) manually, but if the eigenvalues are irrational numbers, you may use technology to find the corresponding eigenvectors.

1.

ẋ = 3x, ẏ = 3y

2.

ẋ = −4x, ẏ = x − 4y

3.

ẋ = 2x + y, ẏ = 4y − x

4.

ẋ = 3x − y, ẏ = 4x − y

5.

ẋ = 2y − 3x, ẏ = y − 2x

6.

ẋ = 5x + 3y, ẏ = −3x − y

7.

ẋ = −3x − y, ẏ = x − y

8.

ẋ = 2x + 5y, ẏ = 2y

B

9.

Given a characteristic polynomial λ² + αλ + β, what condition on α and β guarantees that there is a repeated eigenvalue?

10.

Let A = [a b; c d]. Show that A has only one eigenvalue if and only if [trace(A)]² − 4 det(A) = 0.

11.

Write a system of first-order linear equations for which (0, 0) is a sink with eigenvalues λ1 = −2 and λ2 = −2.

12.

Write a system of first-order linear equations for which (0, 0) is a source with eigenvalues λ1 = 3 and λ2 = 3.

13.

Show that if V is an eigenvector of a 2 × 2 matrix A corresponding to eigenvalue λ and vector W is a solution of (A − λI)W = V, then V and W are linearly independent. [See Eqs. (6.8.2)–(6.8.3).] [Hint: Suppose that W = cV for some scalar c. Then show that V must be the zero vector.]

14.

Suppose that a system Ẋ = AX has only one eigenvalue λ, and that every eigenvector is a scalar multiple of one fixed eigenvector, V. Then Eq. (6.8.3) tells us that any trajectory has the form X(t) = c1 e^(λt) V + c2 [ t e^(λt) V + e^(λt) W ] = t e^(λt) [ (1/t)(c1 V + c2 W) + c2 V ].

a.

If λ < 0, show that the slope of X(t) approaches the slope of the line determined by V as t → ∞. [Hint: (e^(−λt)/t) X(t), as a scalar multiple of X(t), is parallel to X(t).]

b.

If λ > 0, show that the slope of X(t) approaches the slope of the line determined by V as t → ∞.

15.

Consider the system ẋ = 6x + 4y, ẏ = −9x − 6y.

a.

Show that the only eigenvalue of the system is 0.

b.

Find the single independent eigenvector V corresponding to λ = 0.

c.

Show that every trajectory of this system is a straight line parallel to V, with trajectories on opposite sides of V moving in opposite directions. [Hint: First, for any trajectory not on the line determined by V, look at its slope, dy/dx.]

16.

If {ẋ = ax + by, ẏ = cx + dy} is a system with a double eigenvalue and a ≠ d, show that the general solution of the system is

c1 e^(λt) [2b; d − a] + c2 e^(λt) ( t [2b; d − a] + [0; 2] ),

where λ = (a + d)/2.

C

17.

Show that c1 e^(λt) [1; 0] + c2 e^(λt) [t; 1] is the general solution of Ẋ = AX, where A = [λ 1; 0 λ].

18.

Suppose the matrix A has repeated real eigenvalue λ and there is a pair of linearly independent eigenvectors associated with A. Show that A = [λ 0; 0 λ].

19.

A special case of the Cayley–Hamilton Theorem states that if λ² + αλ + β = 0 is the characteristic equation of a matrix A, then A² + αA + βI is the zero matrix. (We say that a 2 × 2 matrix always satisfies its own characteristic equation.) Using this result, show that if a 2 × 2 matrix A has a repeated eigenvalue λ and V = [x; y] ≠ 0 (the zero vector), then either V is an eigenvector of A or else (A − λI)V is an eigenvector of A. [See Appendix B.3 if you are not familiar with matrix-matrix multiplication.]
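The Cayley–Hamilton identity quoted in Problem 19 is easy to verify for any particular matrix; the following Python sketch uses an arbitrary (illustrative) 2 × 2 matrix:

```python
# A^2 + alpha*A + beta*I should be the zero matrix when
# lam^2 + alpha*lam + beta is the characteristic polynomial of A.
A = [[3, 1], [2, 5]]                            # illustrative matrix
alpha = -(A[0][0] + A[1][1])                    # -trace(A)
beta = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # det(A)

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
result = [[A2[i][j] + alpha * A[i][j] + beta * (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
print(result)  # [[0, 0], [0, 0]]
```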

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128182178000130

Basic Iterative Methods

William Ford, in Numerical Linear Algebra with Applications, 2015

20.4.5 The Spectral Radius and Rate of Convergence

Intuitively, there should be a link between the spectral radius of the iteration matrix B and the rate of convergence. Suppose that B has n linearly independent eigenvectors, v1, v2,…, vn and associated eigenvalues λ1, λ2,…, λn. Use the notation of Theorems 20.1 and 20.2 for the error e^(k). Since the eigenvectors are a basis,

e^(0) = Σ_{i=1}^{n} ci vi.

It follows that:

e^(1) = B e^(0) = Σ_{i=1}^{n} ci B vi = Σ_{i=1}^{n} ci λi vi,  e^(2) = B e^(1) = Σ_{i=1}^{n} ci λi B vi = Σ_{i=1}^{n} ci λi² vi.

By continuing in this manner, there results

e^(k) = Σ_{i=1}^{n} ci λi^k vi.

Let ρ(B) = |λ1| and suppose that |λ1| > |λ2| ≥ |λ3| ≥ ⋯ ≥ |λn|, so that

e^(k) = c1 λ1^k v1 + Σ_{i=2}^{n} ci λi^k vi = λ1^k ( c1 v1 + Σ_{i=2}^{n} ci (λi/λ1)^k vi ).

As k becomes large, (λi/λ1)^k, 2 ≤ i ≤ n, becomes small and we have

e^(k) ≈ λ1^k c1 v1.

This says that the error varies with the kth power of the spectral radius and that the spectral radius is a good indicator of the rate of convergence.
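This behavior is easy to observe in code. In the Python sketch below, B is an illustrative upper-triangular iteration matrix with spectral radius ρ(B) = 0.5; after 30 iterations the error is essentially c1 ρ(B)^k times the dominant eigenvector:

```python
# Illustrative 2x2 iteration matrix; eigenvalues 0.5 and 0.2 sit on the diagonal.
B = [[0.5, 0.1],
     [0.0, 0.2]]
rho = 0.5

e = [1.0, 1.0]            # initial error e^(0)
for k in range(30):       # e^(k) = B e^(k-1)
    e = [B[0][0] * e[0] + B[0][1] * e[1],
         B[1][0] * e[0] + B[1][1] * e[1]]

# Expanding e^(0) in the eigenvectors (1, 0) and (1, -3) gives c1 = 4/3,
# so e^(30) / rho^30 should be close to c1 in the dominant component.
ratio = max(abs(c) for c in e) / rho ** 30
print(round(ratio, 6))  # 1.333333
```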

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012394435100020X

Linear Transformations

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fifth Edition), 2016

Criterion for Diagonalization

Given a linear operator L on a finite dimensional vector space V, our goal is to find a basis B for V such that the matrix for L with respect to B is diagonal, as in Example 3. But, just as not every square matrix can be diagonalized, neither can every linear operator.

Definition

A linear operator L on a finite dimensional vector space V is diagonalizable if and only if the matrix representation of L with respect to some ordered basis for V is a diagonal matrix.

The next result indicates precisely which linear operators are diagonalizable.

Theorem 5.22

Let L be a linear operator on an n-dimensional vector space V. Then L is diagonalizable if and only if there is a set of n linearly independent eigenvectors for L.

Proof

Suppose that L is diagonalizable. Then there is an ordered basis B = (v1,…,vn) for V such that the matrix representation for L with respect to B is a diagonal matrix D. Now, B is a linearly independent set. If we can show that each vector vi in B, for 1 ≤ i ≤ n, is an eigenvector corresponding to some eigenvalue for L, then B will be a set of n linearly independent eigenvectors for L. Now, for each vi, we have [L(vi)]B = D[vi]B = D ei = dii ei = dii [vi]B = [dii vi]B, where dii is the (i, i) entry of D. Since coordinatization of vectors with respect to B is an isomorphism, we have L(vi) = dii vi, and so each vi is an eigenvector for L corresponding to the eigenvalue dii.

Conversely, suppose that B = {w1,…,wn} is a set of n linearly independent eigenvectors for L, corresponding to the (not necessarily distinct) eigenvalues λ1,…,λn, respectively. Since B contains n = dim(V) linearly independent vectors, B is a basis for V, by part (2) of Theorem 4.12. We show that the matrix A for L with respect to B is, in fact, diagonal. Now, for 1 ≤ i ≤ n,

ith column of A = [L(wi)]B = [λi wi]B = λi [wi]B = λi ei.

Thus, A is a diagonal matrix, and so L is diagonalizable.

Example 5

In Example 3, L: R² → R² was defined by L([a, b]) = [b, a]. In that example, we found a set of two linearly independent eigenvectors for L, namely v1 = [1,1] and v2 = [1,−1]. Since dim(R²) = 2, Theorem 5.22 indicates that L is diagonalizable. In fact, in Example 3, we computed the matrix for L with respect to the ordered basis (v1, v2) for R² to be the diagonal matrix [1 0; 0 −1].

Example 6

Consider the linear operator L: R² → R² that rotates the plane counterclockwise through an angle of π/4. Now, every nonzero vector v is moved to L(v), which is not parallel to v, since L(v) forms a 45° angle with v. Hence, L has no eigenvectors, so a set of two linearly independent eigenvectors cannot be found for L. Therefore, by Theorem 5.22, L is not diagonalizable.
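The same conclusion follows from the characteristic polynomial of the rotation matrix: its discriminant, trace² − 4·det = −4 sin²(π/4), is negative, so there are no real eigenvalues. A short Python check (the matrix entries below are the standard rotation-matrix formulas):

```python
import math

theta = math.pi / 4
# Counterclockwise rotation matrix [[cos, -sin], [sin, cos]].
a, b = math.cos(theta), -math.sin(theta)
c, d = math.sin(theta), math.cos(theta)

trace = a + d
det = a * d - b * c
disc = trace ** 2 - 4 * det   # discriminant of lam^2 - trace*lam + det
print(disc)                   # -2.0 up to rounding: no real roots
```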

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008539000050

Eigenvalues, Eigenvectors, and Differential Equations

Richard Bronson, ... John T. Saccoman, in Linear Algebra (Third Edition), 2014

4.3 Diagonalization of Matrices

We are ready to answer the question that motivated this chapter: Which linear transformations can be represented by diagonal matrices and what bases generate such representations? Recall that different matrices represent the same linear transformation if and only if those matrices are similar (Theorem 3 of Section 3.4). Therefore, a linear transformation has a diagonal matrix representation if and only if any matrix representation of the transformation is similar to a diagonal matrix.

To establish whether a linear transformation T has a diagonal matrix representation, we first create one matrix representation for the transformation and then determine whether that matrix is similar to a diagonal matrix. If it is, we say the matrix is diagonalizable, in which case T has a diagonal matrix representation.

If a matrix A is similar to a diagonal matrix D, then the form of D is determined. Both A and D have identical eigenvalues, and the eigenvalues of a diagonal matrix (which is both upper and lower triangular) are the elements on its main diagonal. Consequently, the main diagonal of D must consist of the eigenvalues of A. If, for instance,

A matrix is diagonalizable if it is similar to a diagonal matrix.

A = [1 2; 4 3]

with eigenvalues −1 and 5, is diagonalizable, then A must be similar to either

[−1 0; 0 5] or [5 0; 0 −1]

Now let A be an n × n matrix with n linearly independent eigenvectors x1, x2, … , xn corresponding to the eigenvalues λ1, λ2, … , λn, respectively. Therefore,

(4.7) A xj = λj xj

for j = 1, 2, … , n. There are no restrictions on the multiplicity of the eigenvalues, so some or all of them may be equal. Set

M = [x1 x2 ⋯ xn] and

D = [λ1 0 ⋯ 0; 0 λ2 ⋯ 0; ⋮ ; 0 0 ⋯ λn]

Here M is called a modal matrix for A and D a spectral matrix for A. Now

(4.8) AM = A[x1 x2 ⋯ xn] = [Ax1 Ax2 ⋯ Axn] = [λ1 x1 λ2 x2 ⋯ λn xn] = [x1 x2 ⋯ xn]D = MD

Because the columns of M are linearly independent, the column rank of M is n, the rank of M is n, and M⁻¹ exists. Premultiplying Equation (4.8) by M⁻¹, we obtain

(4.9) D = M⁻¹AM

Postmultiplying Equation (4.8) by M⁻¹, we have

(4.10) A = MDM⁻¹

Thus, A is similar to D. We can retrace our steps and show that if Equation (4.10) is satisfied, then M must be an invertible matrix having as its columns a set of eigenvectors of A. We have proven the following result.

▸Theorem 1

An n  × n matrix is diagonalizable if and only if the matrix possesses n linearly independent eigenvectors.◂

Example 1 Determine whether A = [1 2; 4 3] is diagonalizable.

Solution: Using the results of Example 3 of Section 4.1, we have λ1 = −1 and λ2 = 5 as the eigenvalues of A with corresponding eigenspaces spanned by the vectors

x1 = [1; −1] and x2 = [1; 2]

respectively. These two vectors are linearly independent, so A is diagonalizable. We can choose either

M = [1 1; −1 2] or M = [1 1; 2 −1]

Making the first choice, we find

D = M⁻¹AM = (1/3) [2 −1; 1 1] [1 2; 4 3] [1 1; −1 2] = [−1 0; 0 5]

Making the second choice, we find

D = M⁻¹AM = (1/3) [1 1; 2 −1] [1 2; 4 3] [1 1; 2 −1] = [5 0; 0 −1]
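Both computations can be reproduced exactly with rational arithmetic; the following Python sketch verifies the first choice of M:

```python
from fractions import Fraction as F

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

A = [[F(1), F(2)], [F(4), F(3)]]
M = [[F(1), F(1)], [F(-1), F(2)]]   # columns are the eigenvectors x1 and x2
D = matmul(inv2(M), matmul(A, M))
print(D == [[F(-1), F(0)], [F(0), F(5)]])  # True
```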

In general, neither the modal matrix M nor the spectral matrix D is unique. However, once M is selected, then D is fully determined. The element of D located in the jth row and jth column must be the eigenvalue corresponding to the eigenvector in the jth column of M. In particular,

M = [x2 x1 x3 ⋯ xn]

is matched with

D = [λ2 0 0 ⋯ 0; 0 λ1 0 ⋯ 0; 0 0 λ3 ⋯ 0; ⋮ ; 0 0 0 ⋯ λn]

while

M = [xn xn−1 ⋯ x1]

is matched with

D = [λn 0 ⋯ 0; 0 λn−1 ⋯ 0; ⋮ ; 0 0 ⋯ λ1]

Example 2 Determine whether A = [2 1 0; −3 −2 0; 0 0 1] is diagonalizable.

Solution: Using the results of Example 6 of Section 4.1, we have

x1 = [1; −1; 0] and x2 = [0; 0; 1]

as a basis for the eigenspace corresponding to eigenvalue λ = 1 of multiplicity 2 and

x3 = [1; −3; 0]

as a basis corresponding to eigenvalue λ = −1 of multiplicity 1. These three vectors are linearly independent, so A is diagonalizable. If we choose

M = [1 0 1; −1 0 −3; 0 1 0], then M⁻¹AM = [1 0 0; 0 1 0; 0 0 −1]

The procedure of determining whether a given set of eigenvectors is linearly independent is simplified by the following two results.

▸Theorem 2

Eigenvectors of a matrix corresponding to distinct eigenvalues are linearly independent.◂

Proof

Let λ1, λ2, … , λk denote the distinct eigenvalues of an n × n matrix A with corresponding eigenvectors x1, x2, … , xk. If all the eigenvalues have multiplicity 1, then k = n; otherwise k < n. We use mathematical induction to prove that {x1, x2, … , xk} is a linearly independent set.

For k = 1, the set {x1} is linearly independent because the eigenvector x1 cannot be 0. We now assume that the set {x1, x2, … , xk−1} is linearly independent and use this to show that the set {x1, x2, … , xk−1, xk} is linearly independent. This is equivalent to showing that the only solution to the vector equation

(4.11) c1 x1 + c2 x2 + ⋯ + ck−1 xk−1 + ck xk = 0

is c1 = c2 = ⋯ = ck−1 = ck = 0.

Multiplying Equation (4.11) on the left by A and using the fact that A xj = λj xj for j = 1, 2, … , k, we obtain

(4.12) c1 λ1 x1 + c2 λ2 x2 + ⋯ + ck−1 λk−1 xk−1 + ck λk xk = 0

Multiplying Equation (4.11) by λk, we obtain

(4.13) c1 λk x1 + c2 λk x2 + ⋯ + ck−1 λk xk−1 + ck λk xk = 0

Subtracting Equation (4.13) from (4.12), we have

c1 (λ1 − λk) x1 + c2 (λ2 − λk) x2 + ⋯ + ck−1 (λk−1 − λk) xk−1 = 0

But the vectors {x1, x2, … , xk−1} are linearly independent by the induction hypothesis, hence the coefficients in the last equation must all be 0; that is,

c1 (λ1 − λk) = c2 (λ2 − λk) = ⋯ = ck−1 (λk−1 − λk) = 0

from which we infer that c1 = c2 = ⋯ = ck−1 = 0, because the eigenvalues are distinct. Equation (4.11) then reduces to ck xk = 0, and because xk is an eigenvector, and therefore nonzero, we also conclude that ck = 0, and the proof is complete.

It follows from Theorems 1 and 2 that any n × n real matrix having n distinct real roots of its characteristic equation, that is, a matrix having n eigenvalues all of multiplicity one, must be diagonalizable (see, in particular, Example 1).

Example 3 Determine whether A = [2 0 0; 3 3 0; 2 1 4] is diagonalizable.

Solution: The matrix is lower triangular and then its eigenvalues are the elements on the main diagonal, namely ii, 3, and 4. Every eigenvalue has multiplicity 1, hence A is diagonalizable.
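As a quick numerical cross-check of Example 3 (a NumPy sketch, not part of the original text), the eigenvector matrix of A is invertible and diagonalizes A:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [3.0, 3.0, 0.0],
              [2.0, 1.0, 4.0]])

eigvals, M = np.linalg.eig(A)            # M is a modal matrix of eigenvectors
D = np.linalg.inv(M) @ A @ M             # diagonal, since the eigenvalues are distinct
print(np.round(np.real(eigvals), 6))     # 2, 3, 4 in some order
print(np.allclose(D, np.diag(eigvals)))  # True: A is diagonalizable
```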

▸Theorem 3

If λ is an eigenvalue of multiplicity k of an n × n matrix A, then the number of linearly independent eigenvectors of A associated with λ is n − r(A − λI), where r denotes rank.◂

Proof

The eigenvectors of A corresponding to the eigenvalue λ are all the nonzero solutions of the vector equation (A − λI)x = 0. This homogeneous system is consistent, so by Theorem 3 of Section 2.6 the solutions will be in terms of n − r(A − λI) arbitrary unknowns. Since these unknowns can be picked independently of each other, they generate n − r(A − λI) linearly independent eigenvectors.

In Example 2, A is a 3 × 3 matrix (n = 3) and λ = 1 is an eigenvalue of multiplicity 2. In this case,

A − 1I = A − I = [1 1 0; 3 3 0; 0 0 0]

can be transformed into row-reduced form (by adding to the second row −3 times the first row)

[1 1 0; 0 0 0; 0 0 0]

having rank 1. Thus, n − r(A − I) = 3 − 1 = 2 and A has two linearly independent eigenvectors associated with λ = 1. Two such vectors are exhibited in Example 2.
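Since Example 2 itself is not reproduced in this excerpt, the sketch below uses a hypothetical matrix consistent with the A − I displayed above to illustrate the count n − r(A − λI):

```python
import numpy as np

# Hypothetical matrix consistent with A - I = [1 1 0; 3 3 0; 0 0 0] above.
A = np.array([[2.0, 1.0, 0.0],
              [3.0, 4.0, 0.0],
              [0.0, 0.0, 1.0]])

lam = 1.0
# Geometric multiplicity of lambda: n - r(A - lambda*I).
geometric_mult = 3 - np.linalg.matrix_rank(A - lam * np.eye(3))
print(geometric_mult)   # 2: two independent eigenvectors for lambda = 1
```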

Example 4 Determine whether A = [2 1; 0 2] is diagonalizable.

Solution: The matrix is upper triangular, so its eigenvalues are the elements on the main diagonal, namely 2 and 2. Thus, A is a 2 × 2 matrix with one eigenvalue of multiplicity 2. Here

A − 2I = [0 1; 0 0]

has rank 1. Thus, n − r(A − 2I) = 2 − 1 = 1 and A has only one linearly independent eigenvector associated with its eigenvalue, not two as needed. Matrix A is not diagonalizable.
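The same rank computation confirms the deficiency in Example 4 (a minimal NumPy sketch, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# lambda = 2 has algebraic multiplicity 2, but the eigenspace has dimension
# n - r(A - 2I) = 2 - 1 = 1, so A is defective (not diagonalizable).
num_independent = 2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(num_independent)   # 1
```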

We saw at the beginning of Section 4.1 that if a linear transformation T : V → V is represented by a diagonal matrix, then the basis that generates such a representation is a basis of eigenvectors. To this we now add that a linear transformation T : V → V, where V is n-dimensional, can be represented by a diagonal matrix if and only if T possesses n linearly independent eigenvectors. When such a set exists, it is a basis for V.

If V is an n-dimensional vector space, then a linear transformation T : V → V may be represented by a diagonal matrix if and only if T possesses a basis of eigenvectors.

Case 5 Determine whether the linear transformation T : P 1 P 1 defined by

T at + b = a + 2 b t + 4 a + 3 b

can be represented by a diagonal matrix.

Solution: A standard basis for P i is B = t 1 , and we showed in Case vii of Section iv.1 that a matrix representation for T with respect to this basis is

A = 1 2 4 3

It at present follows from Example i that this matrix is diagonalizable; hence T can be represented by a diagonal matrix D, in fact, either of the two diagonal matrices produced in Example one.

Furthermore, we accept from Example vii of Department 4.one that − t  +   one is an eigenvector of T corresponding to λ 1  =     ane while 5t  +   x is an eigenvector respective λ two  =   5. Since both polynomials stand for to distinct eigenvalues, the vectors are linearly independent and, therefore, constitute a footing. Setting C = t + 1 , 5 t + x , we have the matrix representation of T with respect to C every bit

A C C = D = ane 0 0 five
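A short check (not in the original) that the two stated eigenpairs of A = [1 2; 4 3] behave as claimed, working with coordinate vectors relative to the basis B = {t, 1}:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# Coordinate vectors of -t + 1 and 5t + 10 relative to B = {t, 1}.
v1 = np.array([-1.0, 1.0])
v2 = np.array([5.0, 10.0])

print(np.allclose(A @ v1, -1 * v1))   # True: eigenvalue -1
print(np.allclose(A @ v2, 5 * v2))    # True: eigenvalue  5
```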

Example 6 Let U be the set of all 2 × 2 real upper triangular matrices. Determine whether the linear transformation T : U → U defined by

T([a b; 0 c]) = [3a + 2b + c, 2b; 0, a + 2b + 3c]

can be represented by a diagonal matrix and, if so, produce a basis that generates such a representation.

Solution: U is closed under addition and scalar multiplication, so it is a subspace of M_{2×2}. A simple basis for U is given by

B = {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}

With respect to these basis vectors,

T([1 0; 0 0]) = [3 0; 0 1] = 3[1 0; 0 0] + 0[0 1; 0 0] + 1[0 0; 0 1] ↔ [3; 0; 1]

T([0 1; 0 0]) = [2 2; 0 2] = 2[1 0; 0 0] + 2[0 1; 0 0] + 2[0 0; 0 1] ↔ [2; 2; 2]

T([0 0; 0 1]) = [1 0; 0 3] = 1[1 0; 0 0] + 0[0 1; 0 0] + 3[0 0; 0 1] ↔ [1; 0; 3]

and a matrix representation for T is

A = [3 2 1; 0 2 0; 1 2 3]

The eigenvalues of this matrix are 2, 2, and 4. Even though the eigenvalues are not all distinct, the matrix still has three linearly independent eigenvectors, namely,

x_1 = [2; −1; 0], x_2 = [1; 0; −1], and x_3 = [1; 0; 1]

Thus, A is diagonalizable and, therefore, T has a diagonal matrix representation. Setting

M = [2 1 1; −1 0 0; 0 −1 1], we have D = M^(−1)AM = [2 0 0; 0 2 0; 0 0 4]

which is one diagonal representation for T.

The vectors x_1, x_2, and x_3 are coordinate representations with respect to the B basis for

2[1 0; 0 0] − 1[0 1; 0 0] + 0[0 0; 0 1] = [2 −1; 0 0]

1[1 0; 0 0] + 0[0 1; 0 0] − 1[0 0; 0 1] = [1 0; 0 −1]

1[1 0; 0 0] + 0[0 1; 0 0] + 1[0 0; 0 1] = [1 0; 0 1]

The set

C = {[2 −1; 0 0], [1 0; 0 −1], [1 0; 0 1]}

is a basis of eigenvectors of T for the vector space U. A matrix representation of T with respect to the C basis is the diagonal matrix D.
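The modal matrix of Example 6 can be checked numerically. This sketch assumes the sign conventions reconstructed above for x_1, x_2, x_3:

```python
import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 3.0]])

# Modal matrix whose columns are x_1, x_2, x_3 (signs as reconstructed above).
M = np.array([[ 2.0,  1.0, 1.0],
              [-1.0,  0.0, 0.0],
              [ 0.0, -1.0, 1.0]])

D = np.linalg.inv(M) @ A @ M
print(np.round(D, 10))   # diag(2, 2, 4)
```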

Problems iv.3

In Problems 1 through 11, determine whether the matrices are diagonalizable. If they are, identify a modal matrix M and calculate M^(−1)AM.

(1)

A = [2 3; 1 2].

(2)

A = [4 3; 3 4].

(3)

A = [3 1; 1 5].

(4)

A = [1 1 1; 0 1 0; 0 0 1].

(5)

A = [1 0 0; 2 3 3; 1 2 2].

(6)

A = [5 1 2; 0 3 0; 2 1 5].

(7)

A = [1 2 3; 2 4 6; 3 6 9].

(8)

A = [3 1 1; 1 3 1; 1 1 3].

(9)

A = [7 3 3; 0 1 0; 3 3 1].

(10)

A = [3 1 0; 0 3 1; 0 0 3].

(11)

A = [3 0 0; 0 3 1; 0 0 3].

In Problems 12 through 21, determine whether the linear transformations can be represented by diagonal matrices and, if so, produce bases that will generate such representations.

(12)

T : P_1 → P_1 defined by T(at + b) = (2a − 3b)t + (a − 2b).

(13)

T : P_1 → P_1 defined by T(at + b) = (4a + 3b)t + (3a − 4b).

(14)

T : P_2 → P_2 defined by T(at^2 + bt + c) = at^2 + (2a − 3b + 3c)t + (a + 2b + 2c).

(15)

T : P_2 → P_2 defined by T(at^2 + bt + c) = (5a + b + 2c)t^2 + 3bt + (2a + b + 5c).

(16)

T : P_2 → P_2 defined by T(at^2 + bt + c) = (3a + b)t^2 + (3b + c)t + 3c.

(17)

T : U → U where U is the set of all 2 × 2 real upper triangular matrices and

T([a b; 0 c]) = [a + 2b + 3c, 2a + 4b + 6c; 0, 3a + 6b + 9c].

(18)

T : U → U where U is the set of all 2 × 2 real upper triangular matrices and

T([a b; 0 c]) = [7a + 3b + 3c, b; 0, 3a − 3b + c].

(19)

T : W → W where W is the set of all 2 × 2 real lower triangular matrices and

T([a 0; b c]) = [3a − b + c, 0; a + 3b − c, a − b + 3c].

(20)

T : R^3 → R^3 defined by T([a; b; c]) = [c; a; b].

(21)

T : R^3 → R^3 defined by T([a; b; c]) = [3a + b; 3b + c; c].

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123914200000044

Discrete Dynamical Systems, Bifurcations and Chaos in Economics

Wei-Bin Zhang , in Mathematics in Science and Engineering, 2006

6.2 Autonomous linear difference equations

We now study the following linear homogeneous difference equations

(6.2.1) x(t + 1) = A x(t),

where

x(t) = (x_1(t), x_2(t), …, x_n(t))^T,

and A is an n×n real nonsingular matrix. A solution of system (6.2.1) is an expression that satisfies this system for all t ≥ 0. A general solution is a solution that contains all solutions of the system. A particular solution is one that satisfies an initial condition x_0 = x(t_0). The problem of finding a particular solution with specified initial conditions is called an initial value problem. It can be seen that the solution of system (6.2.1) has the form

x(t) = A^(t−t_0) x_0.

Theorem 6.2.1. There exists a fundamental set, denoted by

{X_1(t), X_2(t), …, X_n(t)},

of solutions for system (6.2.1). A general solution is given by

x(t) = Σ_{i=1}^{n} c_i X_i(t), c_i ∈ R, i = 1, …, n.

Along with the homogeneous system (6.2.1), we consider the nonhomogeneous system

(6.2.2) x(t + 1) = A x(t) + B(t), x(0) = x_0, t = 0, 1, ….

The initial value problem (6.2.2) has a unique solution given by

x(t) = A^t x_0 + Σ_{i=0}^{t−1} A^(t−1−i) B(i), t = 0, 1, ….

We see that the main problem is to calculate A^t. There are some algorithms for computing A^t. Here, we introduce the Putzer algorithm.1 Let the characteristic equation of A be

Σ_{i=0}^{n} a_i ρ^(n−i) = 0,

where a_0 = 1. Let

ρ_1, ρ_2, …, ρ_n,

be the eigenvalues of A (some of them may be repeated). The following formula determines A^t:

A^t = Σ_{j=1}^{n} u_j(t) M(j−1),

where

M(0) = I, M(k) = Π_{j=1}^{k} (A − ρ_j I), k = 1, …, n − 1, u_1(t) = ρ_1^t, u_j(t) = Σ_{i=0}^{t−1} ρ_j^(t−1−i) u_{j−1}(i).

Example Solve

x(t + 1) = A x(t), A = [4 1 2; 0 2 −4; 0 1 6].

The three eigenvalues of A are

ρ_1 = ρ_2 = ρ_3 = 4.

Hence, we have

M(0) = I, M(1) = A − 4I = [0 1 2; 0 −2 −4; 0 1 2], M(2) = (A − 4I)M(1) = [0 0 0; 0 0 0; 0 0 0], u_1(t) = 4^t, u_2(t) = Σ_{i=0}^{t−1} 4^(t−1−i) (4^i) = t·4^(t−1), u_3(t) = Σ_{i=0}^{t−1} 4^(t−1−i) (i·4^(i−1)) = (t(t−1)/2)·4^(t−2).

Applying the above calculation results to

A^t = Σ_{j=1}^{n} u_j(t) M(j−1),

we get

A^t = [4^t, t·4^(t−1), 2t·4^(t−1); 0, 4^t − 2t·4^(t−1), −t·4^t; 0, t·4^(t−1), 4^t + 2t·4^(t−1)],

and x(t) = A^t x(0) is the solution.

We now use the Jordan form to solve system (6.2.1). First, we consider the case that A is similar to the diagonal matrix

D = diag[ρ_i],

where ρ_i are the eigenvalues of A.2 That is, there exists a non-singular matrix P such that

P^(−1) A P = D.

From AP = PD, we have

A ξ_i = ρ_i ξ_i,

where ξ_i is the ith column of P. We see that ξ_i is the eigenvector of A corresponding to the eigenvalue ρ_i. From

P^(−1) A P = D

and P being non-singular, we have

A = P D P^(−1).

Consequently, we have

(6.2.3) A^t = P D^t P^(−1) = P diag[ρ_i^t] P^(−1).

Substituting (6.2.3) into

x(t) = A^(t−t_0) x_0,

with t_0 = 0 yields the general solution

(6.2.4) x(t) = P diag[ρ_i^t] P^(−1) x_0.

As

P diag[ρ_i^t] = [ρ_1^t ξ_1, ρ_2^t ξ_2, …, ρ_n^t ξ_n],

the general solution (6.2.4) can also be expressed as

(6.2.5) x(t) = a_1 ρ_1^t ξ_1 + a_2 ρ_2^t ξ_2 + ⋯ + a_n ρ_n^t ξ_n.

After having calculated the eigenvalues and eigenvectors, we may directly determine the a_i in (6.2.5) from the initial conditions, without computing P^(−1).

Example Find the general solution and solve the initial value problem of x(t + 1) = A x(t), where

A = [2 2 1; 1 3 1; 1 2 2], x(0) = [0; 1; 0].

The three eigenvalues of matrix A are

ρ_1 = 5, ρ_2 = ρ_3 = 1.

Correspondingly, we can find three linearly independent eigenvectors3

ξ_1 = [1; 1; 1], ξ_2 = [1; 0; −1], ξ_3 = [0; 1; −2].

It should be noted that there are infinitely many choices for ξ_2 and ξ_3 because of the multiplicity of the corresponding eigenvalue. The general solution is

x(t) = a_1 ρ_1^t ξ_1 + a_2 ρ_2^t ξ_2 + a_3 ρ_3^t ξ_3 = [a_1 5^t + a_2; a_1 5^t + a_3; a_1 5^t − a_2 − 2a_3].

The initial value problem is solved by substituting the initial condition x_0 into the above equation and then solving for the a_i. We calculate

a_1 = 1/2, a_2 = −1/2, a_3 = 1/2.

Hence

x(t) = (1/2) ρ_1^t ξ_1 − (1/2) ρ_2^t ξ_2 + (1/2) ρ_3^t ξ_3 = (1/2)[5^t − 1; 5^t + 1; 5^t − 1].

We may also use x(t) = A^t x_0 and (6.2.3) to solve the initial value problem. We get the same solution by computing

P = [ξ_1 ξ_2 ξ_3] = [1 1 0; 1 0 1; 1 −1 −2], P^(−1) = (1/4)[1 2 1; 3 −2 −1; −1 2 −1].

The reader is asked to check the result.
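One way to check the result, as the text suggests (a NumPy sketch, not part of the original): iterate the system and compare with the closed-form solution.

```python
import numpy as np

A = np.array([[2.0, 2.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 2.0, 2.0]])
x0 = np.array([0.0, 1.0, 0.0])

# Closed form x(t) = (1/2)[5^t - 1, 5^t + 1, 5^t - 1] from the example.
def x(t):
    return 0.5 * np.array([5.0**t - 1, 5.0**t + 1, 5.0**t - 1])

# Iterate x(t+1) = A x(t) and compare with the closed form at each step.
xt = x0.copy()
for t in range(1, 6):
    xt = A @ xt
    assert np.allclose(xt, x(t))
print(x(3))   # [62. 63. 62.]
```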

The matrix A may not be diagonalizable when A has repeated eigenvalues. There is something close to diagonal form called the Jordan canonical form of a square matrix. A basic Jordan block associated with a value ρ is expressed as

J = [ρ 1 0 … 0; 0 ρ 1 … 0; ⋮; 0 0 … ρ 1; 0 0 … 0 ρ].

The Jordan canonical form of a square matrix is composed of such Jordan blocks.

Theorem 6.2.2. (the Jordan canonical form) Any n×n matrix A is similar to a Jordan form given by

J = diag[J_1, J_2, …, J_k], 1 ≤ k ≤ n,

where each J_i is an s_i × s_i basic Jordan block and

Σ_{i=1}^{k} s_i = n.

Assume that A is similar to J under P, i.e., P^(−1)AP = J. We have

A = P J P^(−1).

It can be seen that

J^t = diag[J_1^t, J_2^t, …, J_k^t], 1 ≤ k ≤ n.

We can write J_i as

(J_i)_{s_i × s_i} = ρ_i I + N_i,

where N_i is an s_i × s_i nilpotent matrix. Using N_i^k = 0 for all k ≥ s_i, we have

(J_i^t)_{s_i × s_i} = (ρ_i I + N_i)^t = [ρ_i^t, C(t,1) ρ_i^(t−1), C(t,2) ρ_i^(t−2), …, C(t, s_i−1) ρ_i^(t−s_i+1); 0, ρ_i^t, C(t,1) ρ_i^(t−1), …, C(t, s_i−2) ρ_i^(t−s_i+2); ⋮; 0, 0, …, ρ_i^t],

where C(t, j) denotes the binomial coefficient. The general solution of (6.2.1) (for t_0 = 0) is now given by

x(t) = A^t x_0 = P J^t P^(−1) x_0 = P J^t a, a ≡ P^(−1) x_0.
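The formula for J_i^t can be checked against direct matrix powering. The sketch below (illustrative, not from the text) builds (ρI + N)^t from the binomial expansion:

```python
import numpy as np
from math import comb

def jordan_block_power(rho, s, t):
    """t-th power of an s x s Jordan block J = rho*I + N, using the binomial
    expansion (rho*I + N)^t = sum_j C(t, j) rho^(t-j) N^j (N nilpotent)."""
    J_t = np.zeros((s, s))
    for j in range(min(s, t + 1)):        # N^j = 0 for j >= s; C(t, j) = 0 for j > t
        # N^j has ones on the j-th superdiagonal.
        J_t += comb(t, j) * rho**(t - j) * np.eye(s, k=j)
    return J_t

# 3x3 basic Jordan block with rho = 2, compared with direct powering.
J = np.diag([2.0, 2.0, 2.0]) + np.diag([1.0, 1.0], k=1)
print(jordan_block_power(2.0, 3, 4))
print(np.allclose(jordan_block_power(2.0, 3, 4), np.linalg.matrix_power(J, 4)))  # True
```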

Corollary 6.2.1. Assume that A is any n×n matrix. Then

lim_{t→∞} A^t = 0,

if and only if |ρ| < 1 for all eigenvalues ρ of A.
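A small numerical illustration of the corollary, using a hypothetical matrix with spectral radius below 1:

```python
import numpy as np

A = np.array([[0.5, 0.3],
              [0.1, 0.4]])

# Largest eigenvalue modulus (spectral radius); here it is below 1.
spectral_radius = max(abs(np.linalg.eigvals(A)))
print(spectral_radius < 1)                               # True
print(np.allclose(np.linalg.matrix_power(A, 200), 0))    # True: A^t -> 0
```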

Exercise 6.2

1

Use the Putzer algorithm to evaluate A^t for

(i)

A = [1 1; 2 4];

(ii)

A = [1 2 1; 0 1 0; 4 4 5].

2

Solve the following systems with the Putzer algorithm

(i)

x_1(t + 1) = x_1(t) + x_2(t), x_2(t + 1) = 2 x_2(t), x_1(0) = 1, x_2(0) = 2;

(ii)

x(t + 1) = A x(t), where

A = [1 2 2; 0 0 1; 0 2 3], x_0 = [1; 1; 0].

3

Use formula (6.2.5) to find the solution of x(t + 1) = A x(t) for

(i)

A = [2 3 0; 4 3 0; 0 0 3], x(0) = [0; 1; 0];

(ii)

A = [1 0 1; 1 2 3; 0 0 3], x(0) = [0; 0; 1].

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0076539206800251