Eigenvectors and eigenvalues of a linear operator

Diagonal matrices have the simplest structure, so the question arises whether one can find a basis in which the matrix of a linear operator is diagonal. Such a basis exists.
Let a linear space R^n be given, together with a linear operator A acting in it; in this case A takes R^n into itself, that is, A: R^n → R^n.

Definition. A non-zero vector x is called an eigenvector of the operator A if the operator A transforms x into a collinear vector, that is, Ax = λx. The number λ is called the eigenvalue (characteristic number) of the operator A corresponding to the eigenvector x.
Let us note some properties of eigenvalues and eigenvectors.
1. Any non-zero linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is an eigenvector with the same eigenvalue.
2. Eigenvectors of the operator A corresponding to pairwise distinct eigenvalues λ_1, λ_2, …, λ_m are linearly independent.
3. If the eigenvalues coincide, λ_1 = λ_2 = … = λ_m = λ, then λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n eigenvectors e_1, e_2, …, e_n corresponding to pairwise distinct eigenvalues λ_1, λ_2, …, λ_n, then they are linearly independent and can therefore be taken as a basis of the space R^n. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors by applying the operator A to the basis vectors: Ae_i = λ_i e_i (i = 1, …, n).
Thus, the matrix of the linear operator A in the basis of its eigenvectors is diagonal, A = diag(λ_1, λ_2, …, λ_n), with the eigenvalues of the operator A along the diagonal.
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of a linear operator A in the basis e_i (i = 1, …, n) has diagonal form if and only if all the vectors of the basis are eigenvectors of the operator A.

Rule for finding eigenvalues and eigenvectors

Let a vector x = x_1 e_1 + x_2 e_2 + … + x_n e_n be given, where x_1, x_2, …, x_n are the coordinates of the vector x relative to the basis e_1, e_2, …, e_n, and let x be an eigenvector of the linear operator A corresponding to the eigenvalue λ, that is, Ax = λx. This relationship can be written in matrix form

AX = λX. (*)


Equation (*) can be considered as an equation for finding x ≠ 0; that is, we are interested in non-trivial solutions, since an eigenvector cannot be the zero vector. It is known that non-trivial solutions of a homogeneous system of linear equations exist if and only if det(A - λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A - λE) = 0.
If equation (*) is written in detail in coordinate form, we obtain a system of linear homogeneous equations:

(a_11 - λ)x_1 + a_12 x_2 + … + a_1n x_n = 0,
a_21 x_1 + (a_22 - λ)x_2 + … + a_2n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + … + (a_nn - λ)x_n = 0,   (1)

where A = (a_ij) is the matrix of the linear operator.

System (1) has a non-zero solution if and only if its determinant D is equal to zero:

D = det(A - λE) = 0.

We have obtained an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left-hand side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no eigenvectors and cannot be reduced to diagonal form.
Let λ_1, λ_2, …, λ_n be the real roots of the characteristic equation, among which there may be multiple ones. Substituting these values in turn into system (1), we find the eigenvectors.
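This rule is exactly what numerical libraries implement: find the roots of det(A - λE) = 0, then solve system (1) for each root. A minimal sketch, assuming NumPy; the matrix is an arbitrary illustration, not one from the text:

import numpy as np

# Arbitrary illustrative matrix (not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the roots of the characteristic equation det(A - lambda*E) = 0
# and, column by column, one eigenvector for each root.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # e.g. [3. 1.]

# Each pair satisfies system (1), i.e. (A - lambda*E)x = 0, or A x = lambda x.
for k, lam in enumerate(eigenvalues):
    x = eigenvectors[:, k]
    assert np.allclose(A @ x, lam * x)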

Example 12. The linear operator A acts in R^3 according to a given law, where x_1, x_2, x_3 are the coordinates of the vector in the basis e_1, e_2, e_3. Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator.
We create a system for determining the coordinates of eigenvectors:

We compose the characteristic equation det(A - λE) = 0 and solve it:

λ_1 = λ_2 = -1, λ_3 = 3.
Substituting λ = -1 into the system, we obtain a homogeneous system whose matrix has rank 2, so there are two dependent variables and one free variable.
Let x_1 be the free unknown; solving the system in any way, we find its general solution. The fundamental system of solutions consists of one solution, since n - r = 3 - 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = -1 is then a one-parameter family in which the parameter x_1 is any number other than zero. Let's choose one vector from this set, for example by putting x_1 = 1.
Reasoning similarly, we find the eigenvector corresponding to the eigenvalue λ = 3.
A basis of the space R^3 consists of three linearly independent vectors, but we have obtained only two linearly independent eigenvectors, so a basis of R^3 cannot be composed of them. Consequently, the matrix A of this linear operator cannot be reduced to diagonal form.
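The effect in Example 12, a double eigenvalue whose eigenspace is only one-dimensional, is easy to reproduce numerically. Since the operator's matrix itself is not preserved above, the sketch uses a hypothetical matrix with the same spectrum (-1, -1, 3), assuming NumPy:

import numpy as np

# Hypothetical matrix (not the one from Example 12): eigenvalues -1, -1, 3,
# but the double eigenvalue -1 has only one independent eigenvector.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 0.0],
              [ 0.0,  0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.round(eigenvalues, 6))  # [-1. -1.  3.]

# The number of linearly independent eigenvectors is the rank of the matrix
# whose columns are the computed eigenvectors.
print(np.linalg.matrix_rank(eigenvectors))  # 2 < n = 3: not diagonalizable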

Example 13. Given the matrix

A = |  2   0   3 |
    | 10  -3  -6 |
    | -1   0  -2 |.

1. Prove that the vector x = (1, 8, -1) is an eigenvector of the matrix A. Find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which the matrix A has diagonal form.
Solution.
1. If Ax = λx for some number λ, then x is an eigenvector:

Ax = (2·1 + 0·8 + 3·(-1); 10·1 - 3·8 - 6·(-1); -1·1 + 0·8 - 2·(-1)) = (-1; -8; 1) = -1·(1; 8; -1).

The vector (1, 8, -1) is an eigenvector, with eigenvalue λ = -1.
The matrix has diagonal form in a basis consisting of eigenvectors. One of them is already known. Let's find the rest.
We look for the eigenvectors from the system (A - λE)x = 0:

(2 - λ)x_1 + 3x_3 = 0,
10x_1 + (-3 - λ)x_2 - 6x_3 = 0,
-x_1 + (-2 - λ)x_3 = 0.
Characteristic equation: det(A - λE) = 0;
(3 + λ)[(2 - λ)(-2 - λ) + 3] = 0; (3 + λ)(λ^2 - 1) = 0;
λ_1 = -3, λ_2 = 1, λ_3 = -1.
Let's find the eigenvector corresponding to the eigenvalue λ = -3:

5x_1 + 3x_3 = 0,
10x_1 - 6x_3 = 0,
-x_1 + x_3 = 0.

The rank of the matrix of this system is two and equals the number of unknowns appearing in it, so the system has only the zero solution x_1 = x_3 = 0. Here x_2 can be anything other than zero, for example x_2 = 1. Thus, the vector (0; 1; 0) is an eigenvector corresponding to λ = -3. Check:

A·(0; 1; 0) = (0; -3; 0) = -3·(0; 1; 0).
If λ = 1, then we obtain the system

x_1 + 3x_3 = 0,
10x_1 - 4x_2 - 6x_3 = 0,
-x_1 - 3x_3 = 0.

The rank of the matrix is two, so we cross out the last equation (it is proportional to the first).
Let x_3 be the free unknown. Then x_1 = -3x_3, 4x_2 = 10x_1 - 6x_3 = -30x_3 - 6x_3, x_2 = -9x_3.
Putting x_3 = 1, we have (-3; -9; 1), an eigenvector corresponding to the eigenvalue λ = 1. Check:

A·(-3; -9; 1) = (-3; -9; 1) = 1·(-3; -9; 1).
Since the eigenvalues are real and distinct, the corresponding eigenvectors are linearly independent, so they can be taken as a basis in R^3. Thus, in the basis e_1 = (0; 1; 0), e_2 = (-3; -9; 1), e_3 = (1; 8; -1), the matrix A has the form

A* = diag(-3, 1, -1).
Not every matrix of a linear operator A: R^n → R^n can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then a root of the characteristic equation of multiplicity m corresponds to exactly m linearly independent eigenvectors.

Definition. A symmetric matrix is a square matrix in which the elements symmetric about the main diagonal are equal, that is, in which a_ij = a_ji.
Notes. 1. All eigenvalues of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise different eigenvalues are orthogonal.
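Both notes are easy to observe numerically; a small sketch, assuming NumPy, with an arbitrary symmetric matrix of my choosing:

import numpy as np

# Arbitrary symmetric matrix: a_ij = a_ji.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A, A.T)

# eigh is the routine specialized for symmetric matrices;
# the eigenvalues it returns are real (note 1).
eigenvalues, Q = np.linalg.eigh(A)
print(eigenvalues)

# Eigenvectors of distinct eigenvalues are orthogonal (note 2):
# the matrix Q of eigenvector columns satisfies Q^T Q = E.
assert np.allclose(Q.T @ Q, np.eye(3))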
As one of the many applications of the studied apparatus, we consider the problem of determining the type of a second-order curve.


Definition: Let L be a given n-dimensional linear space. A non-zero vector x ∈ L is called an eigenvector of a linear transformation A if there is a number λ such that the equality holds:

Ax = λx.    (7.1)

In this case, the number λ is called an eigenvalue (characteristic number) of the linear transformation A corresponding to the vector x.

Moving the right-hand side of (7.1) to the left and taking into account the relation x = Ex, we rewrite (7.1) in the form

(A - λE)x = 0.    (7.2)

Equation (7.2) is equivalent to a system of linear homogeneous equations:

(a_11 - λ)x_1 + a_12 x_2 + … + a_1n x_n = 0,
a_21 x_1 + (a_22 - λ)x_2 + … + a_2n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + … + (a_nn - λ)x_n = 0.    (7.3)

For the existence of a non-zero solution to the system of linear homogeneous equations (7.3), it is necessary and sufficient that the determinant of the coefficients of this system be equal to zero, i.e.

|A - λE| = | a_11 - λ   a_12    …   a_1n    |
           | a_21    a_22 - λ   …   a_2n    |  = 0.    (7.4)
           | …          …       …   …       |
           | a_n1       a_n2    …   a_nn - λ |

This determinant is a polynomial of the nth degree in λ, called the characteristic polynomial of the linear transformation A, and equation (7.4) is called the characteristic equation of the matrix A.

Definition: If a linear transformation A in some basis e_1, e_2, …, e_n has matrix A = (a_ij), then the eigenvalues of the linear transformation A can be found as the roots λ_1, λ_2, …, λ_n of the characteristic equation.

Let's consider a special case. Let A be a linear transformation of the plane with matrix

| a_11  a_12 |
| a_21  a_22 |.

Then, in some basis e_1, e_2, the transformation A can be given by the formulas

x_1' = a_11 x_1 + a_12 x_2,
x_2' = a_21 x_1 + a_22 x_2.

If the transformation A has an eigenvector x = (x_1; x_2) with eigenvalue λ, then Ax = λx, that is

a_11 x_1 + a_12 x_2 = λx_1,
a_21 x_1 + a_22 x_2 = λx_2,

or

(a_11 - λ)x_1 + a_12 x_2 = 0,
a_21 x_1 + (a_22 - λ)x_2 = 0.

Since the eigenvector is non-zero, x_1 and x_2 are not both equal to zero. Since this system is homogeneous, for it to have a non-trivial solution the determinant of the system must equal zero; otherwise, by Cramer's rule, the system has a unique solution, the zero one, which is impossible.

The resulting equation,

(a_11 - λ)(a_22 - λ) - a_12 a_21 = 0,

is the characteristic equation of the linear transformation A.

Thus, one can find an eigenvector (x_1; x_2) of the linear transformation A with eigenvalue λ, where λ is a root of the characteristic equation, and x_1 and x_2 are a solution of the system of equations when this value of λ is substituted into it.

It is clear that if the characteristic equation does not have real roots, then the linear transformation A does not have eigenvectors.

It should be noted that if x is an eigenvector of the transformation A, then any vector collinear to it is also an eigenvector with the same eigenvalue. Indeed, A(kx) = kAx = kλx = λ(kx). If we take into account that all such vectors have a common origin, then they form a so-called eigendirection, or eigenline.

If the characteristic equation has two distinct real roots λ_1 and λ_2, then, substituting each of them into the system of equations, we obtain an infinite number of solutions (because the equations are linearly dependent). This set of solutions determines two eigenlines.

If the characteristic equation has two equal roots λ_1 = λ_2 = λ, then either there is only one eigenline, or, if on substitution the system turns into one of the form 0·x_1 + 0·x_2 = 0, it is satisfied by any values of x_1 and x_2. In the latter case all non-zero vectors are eigenvectors, and such a transformation is called a similarity transformation.


Example. Find the characteristic numbers and eigenvectors of the linear transformation with matrix A.

Let's write the linear transformation in coordinate form and create the characteristic equation:

λ^2 - 4λ + 4 = 0.

Roots of the characteristic equation: λ_1 = λ_2 = 2.

Substituting λ = 2, we get a system that produces the dependency x_1 - x_2 = 0. The eigenvectors for this root have coordinates (t; t), where t is a parameter; the eigenvector can therefore be written as (1; 1)·t.

Let's consider another special case. If x is an eigenvector of a linear transformation A given in a three-dimensional linear space, and x_1, x_2, x_3 are the components of this vector in some basis e_1, e_2, e_3, then

a_11 x_1 + a_12 x_2 + a_13 x_3 = λx_1,
a_21 x_1 + a_22 x_2 + a_23 x_3 = λx_2,
a_31 x_1 + a_32 x_2 + a_33 x_3 = λx_3,

where λ is the eigenvalue (characteristic number) of the transformation A and A = (a_ij) is the matrix of the linear transformation.

Characteristic equation:

| a_11 - λ   a_12      a_13     |
| a_21       a_22 - λ  a_23     |  = 0.
| a_31       a_32      a_33 - λ |

Expanding the determinant, we obtain a cubic equation for λ. Any cubic equation with real coefficients has either one or three real roots.

Hence any linear transformation in three-dimensional space has eigenvectors.

Example. Find the characteristic numbers and eigenvectors of the linear transformation A with matrix

A = | -3  -2  -4 |
    |  2   1   2 |
    |  1   1   2 |.

Let's create the characteristic equation:

-(3 + λ)((1 - λ)(2 - λ) - 2) + 2(4 - 2λ - 2) - 4(2 - 1 + λ) = 0

-(3 + λ)(2 - λ - 2λ + λ^2 - 2) + 2(2 - 2λ) - 4(1 + λ) = 0

-(3 + λ)(λ^2 - 3λ) + 4 - 4λ - 4 - 4λ = 0

-3λ^2 + 9λ - λ^3 + 3λ^2 - 8λ = 0

-λ^3 + λ = 0, i.e. λ(1 - λ^2) = 0;

λ_1 = 0; λ_2 = 1; λ_3 = -1.

For λ_1 = 0:

-3x_1 - 2x_2 - 4x_3 = 0,
2x_1 + x_2 + 2x_3 = 0,
x_1 + x_2 + 2x_3 = 0.

If we take x_3 = 1, we get x_1 = 0, x_2 = -2.

Eigenvectors: (0; -2; 1)·t, where t is a parameter.

Similarly, one can find the eigenvectors for λ_2 and λ_3.

A vector X ≠ 0 is called an eigenvector of a linear operator with matrix A if there is a number λ such that AX = λX.

In this case, the number λ is called the eigenvalue of the operator (matrix A) corresponding to the vector X.

In other words, an eigenvector is a vector that, under the action of a linear operator, transforms into a collinear vector, i.e. is simply multiplied by some number. Non-eigenvectors transform in a more complicated way.

Let's write down the definition of an eigenvector in the form of a system of equations:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = λx_1,
a_21 x_1 + a_22 x_2 + … + a_2n x_n = λx_2,
. . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + … + a_nn x_n = λx_n.

Let's move all the terms to the left side:

(a_11 - λ)x_1 + a_12 x_2 + … + a_1n x_n = 0,
a_21 x_1 + (a_22 - λ)x_2 + … + a_2n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + … + (a_nn - λ)x_n = 0.

The latter system can be written in matrix form as follows:

(A - λE)X = O

The resulting system always has the zero solution X = O. Systems in which all free terms are equal to zero are called homogeneous. If the matrix of such a system is square and its determinant is non-zero, then by Cramer's formulas we always get a unique solution, the zero one. It can be proven that the system has non-zero solutions if and only if the determinant of this matrix is equal to zero, i.e.

|A - λE| = 0

This equation with unknown λ is called the characteristic equation of the matrix A (linear operator); its left-hand side is called the characteristic polynomial.

It can be proven that the characteristic polynomial of a linear operator does not depend on the choice of basis.

For example, let's find the eigenvalues and eigenvectors of the linear operator defined by the matrix

A = | 1  4 |
    | 9  1 |.

To do this, let's create the characteristic equation: |A - λE| = (1 - λ)^2 - 36 = 1 - 2λ + λ^2 - 36 = λ^2 - 2λ - 35 = 0; D = 4 + 140 = 144; eigenvalues λ_1 = (2 - 12)/2 = -5; λ_2 = (2 + 12)/2 = 7.

To find eigenvectors, we solve two systems of equations

(A + 5E)X = O

(A - 7E)X = O

For the first of them, the expanded matrix takes the form

| 6  4 | 0 |
| 9  6 | 0 |,

whence x_2 = c, x_1 + (2/3)c = 0; x_1 = -(2/3)c, i.e. X^(1) = (-(2/3)c; c).

For the second of them, the expanded matrix takes the form

| -6   4 | 0 |
|  9  -6 | 0 |,

whence x_2 = c_1, x_1 - (2/3)c_1 = 0; x_1 = (2/3)c_1, i.e. X^(2) = ((2/3)c_1; c_1).

Thus, the eigenvectors of this linear operator are all vectors of the form (-(2/3)c; c) with eigenvalue -5 and all vectors of the form ((2/3)c_1; c_1) with eigenvalue 7.

It can be proven that the matrix of the operator A in the basis consisting of its eigenvectors is diagonal and has the form

A* = diag(λ_1, λ_2, …, λ_n),

where λ_i are the eigenvalues of this matrix.

The converse is also true: if matrix A in some basis is diagonal, then all vectors of this basis will be eigenvectors of this matrix.

It can also be proven that if a linear operator has n pairwise distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and the matrix of this operator in the corresponding basis has a diagonal form.


Let's illustrate this with the previous example. Let's take arbitrary non-zero values c and c_1, such that the vectors X^(1) and X^(2) are linearly independent, i.e. form a basis. For example, let c = c_1 = 3; then X^(1) = (-2; 3), X^(2) = (2; 3).

Let us verify the linear independence of these vectors: the determinant composed of their coordinates,

| -2  2 |
|  3  3 | = -6 - 6 = -12 ≠ 0,

so they do form a basis. In this new basis, matrix A will take the form A* = diag(-5, 7).

To verify this, let's use the formula A* = C^(-1)AC, where C is the matrix whose columns are X^(1) and X^(2). First, let's find C^(-1):

C = | -2  2 |        C^(-1) = | -1/4  1/6 |
    |  3  3 |,                |  1/4  1/6 |.

Then

C^(-1)AC = | -1/4  1/6 | · | 1  4 | · | -2  2 | = | -5  0 | = A*.
           |  1/4  1/6 |   | 9  1 |   |  3  3 |   |  0  7 |
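The same verification takes a few lines in NumPy, using the matrix A and the eigenvectors of this example:

import numpy as np

A = np.array([[1.0, 4.0],
              [9.0, 1.0]])
# Columns: X(1) = (-2; 3) and X(2) = (2; 3).
C = np.array([[-2.0, 2.0],
              [ 3.0, 3.0]])

# In the eigenbasis the operator must become diag(-5, 7).
A_star = np.linalg.inv(C) @ A @ C
print(np.round(A_star, 10))  # [[-5.  0.]
                             #  [ 0.  7.]]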

Quadratic forms

A quadratic form f(x_1, x_2, …, x_n) of n variables is a sum each term of which is either the square of one of the variables or the product of two different variables, taken with a certain coefficient:

f(x_1, x_2, …, x_n) = Σ_i Σ_j a_ij x_i x_j   (a_ij = a_ji).

The matrix A composed of these coefficients is called the matrix of the quadratic form. It is always a symmetric matrix (i.e. a matrix symmetric about the main diagonal, a_ij = a_ji).

In matrix notation, the quadratic form is f(X) = X^T AX, where X = (x_1, x_2, …, x_n)^T is the column of variables. Indeed, multiplying out X^T AX reproduces the double sum above.

For example, let's write the quadratic form f(x_1, x_2) = 2x_1^2 + 4x_1x_2 - 3x_2^2 in matrix form.

To do this, we find the matrix of the quadratic form: its diagonal elements are equal to the coefficients of the squared variables, and the remaining elements are equal to halves of the corresponding coefficients of the quadratic form. That is why

A = | 2   2 |
    | 2  -3 |.

Let the matrix-column of variables X be obtained by a non-degenerate linear transformation of the matrix-column Y, i.e. X = CY, where C is a non-singular matrix of order n. Then the quadratic form is f(X) = X^T AX = (CY)^T A(CY) = (Y^T C^T)A(CY) = Y^T (C^T AC)Y.

Thus, under a non-degenerate linear transformation C, the matrix of the quadratic form takes the form A* = C^T AC.
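The rule A* = C^T·A·C is easy to check numerically on any non-singular C; a sketch assuming NumPy, where both the form matrix and the transformation are arbitrary illustrations:

import numpy as np

# Matrix of the form f(x_1, x_2) = 2x_1^2 + 4x_1x_2 - 3x_2^2.
A = np.array([[2.0,  2.0],
              [2.0, -3.0]])
# An arbitrary non-degenerate transformation X = CY.
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert abs(np.linalg.det(C)) > 1e-12

A_star = C.T @ A @ C  # matrix of the same form in the variables y

# On a random point the two expressions of the form must agree.
rng = np.random.default_rng(0)
Y = rng.standard_normal(2)
X = C @ Y
assert np.isclose(X @ A @ X, Y @ A_star @ Y)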

For example, one can find the quadratic form f(y_1, y_2) obtained from the quadratic form f(x_1, x_2) = 2x_1^2 + 4x_1x_2 - 3x_2^2 by a linear transformation X = CY.

A quadratic form is called canonical (has canonical form) if all its coefficients a_ij = 0 for i ≠ j, i.e.

f(x_1, x_2, …, x_n) = a_11 x_1^2 + a_22 x_2^2 + … + a_nn x_n^2.

Its matrix is diagonal.

Theorem (given here without proof). Any quadratic form can be reduced to canonical form by a non-degenerate linear transformation.

For example, let us reduce to canonical form the quadratic form

f(x_1, x_2, x_3) = 2x_1^2 + 4x_1x_2 - 3x_2^2 - x_2x_3.

To do this, we first select a perfect square with the variable x_1:

f(x_1, x_2, x_3) = 2(x_1^2 + 2x_1x_2 + x_2^2) - 2x_2^2 - 3x_2^2 - x_2x_3 = 2(x_1 + x_2)^2 - 5x_2^2 - x_2x_3.

Now we select a perfect square with the variable x_2:

f(x_1, x_2, x_3) = 2(x_1 + x_2)^2 - 5(x_2^2 + 2·x_2·(1/10)x_3 + (1/100)x_3^2) + (5/100)x_3^2 =
= 2(x_1 + x_2)^2 - 5(x_2 + (1/10)x_3)^2 + (1/20)x_3^2.

Then the non-degenerate linear transformation y_1 = x_1 + x_2, y_2 = x_2 + (1/10)x_3 and y_3 = x_3 brings this quadratic form to the canonical form f(y_1, y_2, y_3) = 2y_1^2 - 5y_2^2 + (1/20)y_3^2.
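The reduction is easy to verify numerically: at any point, the original form must agree with the canonical form evaluated at the transformed variables. A sketch assuming NumPy:

import numpy as np

def f(x1, x2, x3):
    # the original quadratic form
    return 2*x1**2 + 4*x1*x2 - 3*x2**2 - x2*x3

def f_canonical(y1, y2, y3):
    # the canonical form obtained by completing the squares
    return 2*y1**2 - 5*y2**2 + (1/20)*y3**2

rng = np.random.default_rng(1)
for _ in range(5):
    x1, x2, x3 = rng.standard_normal(3)
    # the non-degenerate transformation found above
    y1, y2, y3 = x1 + x2, x2 + (1/10)*x3, x3
    assert np.isclose(f(x1, x2, x3), f_canonical(y1, y2, y3))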

Note that the canonical form of a quadratic form is not determined uniquely (the same quadratic form can be reduced to canonical form in different ways). However, canonical forms obtained by different methods have a number of common properties. In particular, the number of terms with positive (negative) coefficients does not depend on the method of reducing the form to canonical form (for example, in the example considered there will always be one negative and two positive coefficients). This property is called the law of inertia of quadratic forms.

Let us verify this by bringing the same quadratic form to canonical form in a different way, starting the transformation with the variable x_2:

f(x_1, x_2, x_3) = 2x_1^2 + 4x_1x_2 - 3x_2^2 - x_2x_3 = -3x_2^2 - x_2x_3 + 4x_1x_2 + 2x_1^2 =
= -3(x_2^2 + 2·x_2·((1/6)x_3 - (2/3)x_1) + ((1/6)x_3 - (2/3)x_1)^2) + 3((1/6)x_3 - (2/3)x_1)^2 + 2x_1^2 =
= -3(x_2 + (1/6)x_3 - (2/3)x_1)^2 + 3((1/6)x_3 - (2/3)x_1)^2 + 2x_1^2 = f(y_1, y_2, y_3) = -3y_1^2 + 3y_2^2 + 2y_3^2,

where y_1 = -(2/3)x_1 + x_2 + (1/6)x_3, y_2 = (1/6)x_3 - (2/3)x_1 and y_3 = x_1. Here there is a negative coefficient -3 at y_1 and two positive coefficients 3 and 2 at y_2 and y_3 (while by the other method we got a negative coefficient (-5) at y_2 and two positive ones: 2 at y_1 and 1/20 at y_3).

It should also be noted that the rank of the matrix of a quadratic form, called the rank of the quadratic form, is equal to the number of non-zero coefficients of the canonical form and does not change under non-degenerate linear transformations.

A quadratic form f(X) is called positive (negative) definite if, for all values of the variables that are not simultaneously equal to zero, it is positive, i.e. f(X) > 0 (negative, i.e. f(X) < 0).

For example, the quadratic form f_1(X) = x_1^2 + x_2^2 is positive definite, since it is a sum of squares, and the quadratic form f_2(X) = -x_1^2 + 2x_1x_2 - x_2^2 is negative definite, since it can be represented as f_2(X) = -(x_1 - x_2)^2.

In most practical situations it is somewhat harder to establish the sign-definiteness of a quadratic form, so one uses one of the following theorems (we formulate them without proof).

Theorem. A quadratic form is positive (negative) definite if and only if all eigenvalues of its matrix are positive (negative).

Theorem (Sylvester's criterion). A quadratic form is positive definite if and only if all the leading minors of the matrix of this form are positive.

The leading (corner) minor of order k of an n×n matrix A is the determinant composed of the first k rows and columns of the matrix A (1 ≤ k ≤ n).

Note that for negative definite quadratic forms the signs of the leading minors alternate, and the first-order minor must be negative.
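The criterion is easy to mechanize; a minimal sketch assuming NumPy (the helper names here are mine, not a library API):

import numpy as np

def leading_minors(A):
    # Determinants of the upper-left k-by-k submatrices, k = 1..n.
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def classify(A):
    # Sign-definiteness of the form with symmetric matrix A (strict cases).
    d = leading_minors(A)
    if all(m > 0 for m in d):
        return "positive definite"
    # alternating signs starting with a minus: D_1 < 0, D_2 > 0, D_3 < 0, ...
    if all((m < 0) if k % 2 == 0 else (m > 0) for k, m in enumerate(d)):
        return "negative definite"
    return "not sign-definite (or degenerate)"

print(classify(np.array([[2.0, 2.0], [2.0, 3.0]])))  # positive definite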

For example, let us examine the quadratic form f(x_1, x_2) = 2x_1^2 + 4x_1x_2 + 3x_2^2 for sign-definiteness.

Method 1. The matrix of the quadratic form is A = | 2 2; 2 3 |. The characteristic equation:

(2 - λ)(3 - λ) - 4 = 6 - 2λ - 3λ + λ^2 - 4 = λ^2 - 5λ + 2 = 0; D = 25 - 8 = 17;
λ_{1,2} = (5 ± √17)/2 > 0.

Both eigenvalues are positive, therefore the quadratic form is positive definite.

Method 2. The first-order leading minor of the matrix A is D_1 = a_11 = 2 > 0. The second-order leading minor is D_2 = 2·3 - 2·2 = 6 - 4 = 2 > 0. Therefore, by Sylvester's criterion, the quadratic form is positive definite.

We examine another quadratic form for sign-definiteness, f(x_1, x_2) = -2x_1^2 + 4x_1x_2 - 3x_2^2.

Method 1. The matrix of the quadratic form is A = | -2 2; 2 -3 |. The characteristic equation:

(-2 - λ)(-3 - λ) - 4 = 6 + 2λ + 3λ + λ^2 - 4 = λ^2 + 5λ + 2 = 0; D = 25 - 8 = 17;
λ_{1,2} = (-5 ± √17)/2 < 0.

Both eigenvalues are negative, therefore the quadratic form is negative definite.

Method 2. The first-order leading minor of the matrix A is D_1 = a_11 = -2 < 0. The second-order leading minor is D_2 = (-2)·(-3) - 2·2 = 6 - 4 = 2 > 0. Consequently, by Sylvester's criterion, the quadratic form is negative definite (the signs of the leading minors alternate, starting with a minus).

And as another example, we examine for sign-definiteness the quadratic form f(x_1, x_2) = 2x_1^2 + 4x_1x_2 - 3x_2^2.

Method 1. The matrix of the quadratic form is A = | 2 2; 2 -3 |. The characteristic equation:

(2 - λ)(-3 - λ) - 4 = -6 - 2λ + 3λ + λ^2 - 4 = λ^2 + λ - 10 = 0; D = 1 + 40 = 41;
λ_{1,2} = (-1 ± √41)/2.

One of these numbers is negative and the other is positive: the signs of the eigenvalues differ. Consequently, the quadratic form can be neither negative nor positive definite, i.e. it is not sign-definite (it can take values of either sign).

Method 2. The first-order leading minor of the matrix A is D_1 = a_11 = 2 > 0. The second-order leading minor is D_2 = 2·(-3) - 2·2 = -6 - 4 = -10 < 0. Consequently, by Sylvester's criterion, the quadratic form is not sign-definite (the signs of the leading minors fit neither pattern: the first is positive while the second is negative).
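Method 1 for all three examples can be reproduced at once; a sketch assuming NumPy, where eigvalsh returns the (real) eigenvalues of a symmetric matrix:

import numpy as np

forms = {
    "2x1^2 + 4x1x2 + 3x2^2":  np.array([[ 2.0, 2.0], [2.0,  3.0]]),
    "-2x1^2 + 4x1x2 - 3x2^2": np.array([[-2.0, 2.0], [2.0, -3.0]]),
    "2x1^2 + 4x1x2 - 3x2^2":  np.array([[ 2.0, 2.0], [2.0, -3.0]]),
}
for name, A in forms.items():
    ev = np.linalg.eigvalsh(A)
    print(name, "->", np.round(ev, 3))
# all eigenvalues positive -> positive definite;
# all negative -> negative definite; mixed -> not sign-definite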

Eigenvalues (characteristic numbers) and eigenvectors.
Examples of solutions

From both equations it follows that the coordinates are related by a single linear dependence.

Setting the free variable to a convenient value, we obtain as a result the second eigenvector.

Let us repeat the important points of the solution:

– the resulting system certainly has a general solution (the equations are linearly dependent);

– we select the 'y' coordinate in such a way that it is integer and the first, 'x', coordinate is integer, positive and as small as possible;

– we check that the particular solution satisfies each equation of the system.

Answer.

The intermediate 'control points' were quite sufficient, so checking the equalities is, in principle, unnecessary.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows (and, to be honest, I myself am used to writing them in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but only because I commented on the first example in great detail.

Example 2

Find the eigenvalues and eigenvectors of the given matrix.

Practise on your own! An approximate sample of the final task is at the end of the lesson.

Sometimes you need to complete an additional task, namely:

write the canonical decomposition of the matrix.

What is it?

If the eigenvectors of the matrix form a basis, then it can be represented as

A = S·Λ·S^(-1),

where S is a matrix composed of the coordinates of the eigenvectors (as columns) and Λ is a diagonal matrix with the corresponding eigenvalues.

This matrix decomposition is called canonical or diagonal.

Let's look at the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let's compose the matrix S from their coordinates.

On the main diagonal of the matrix Λ, the eigenvalues are located in the corresponding order, and the remaining elements are equal to zero.
I once again emphasize the importance of order: 'two' corresponds to the 1st vector and is therefore located in the 1st column, 'three' to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss-Jordan method, we find S^(-1). No, that's not a typo! Before you is an event as rare as a solar eclipse: the inverse coincides with the original matrix.

It remains to write down the canonical decomposition of the matrix: A = S·Λ·S^(-1).
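Since the matrix of the author's first example is not reproduced above, the sketch below shows the decomposition itself on a hypothetical diagonalizable matrix with the same eigenvalues 2 and 3, assuming NumPy:

import numpy as np

# Hypothetical matrix with eigenvalues 2 and 3 (not the lesson's example).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues, S = np.linalg.eig(A)  # S: eigenvector coordinates as columns
Lam = np.diag(eigenvalues)         # diagonal matrix of eigenvalues

# Canonical (diagonal) decomposition: A = S * Lam * S^-1.
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))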

The system can be solved using elementary transformations, and in the following examples we will resort to this method. But here the 'school' method works much faster: from the 3rd equation we express one variable and substitute it into the second equation.

Since the first coordinate is zero, we obtain a system from each equation of which the same relation follows.

And again, pay attention to the mandatory presence of a linear dependence. If only the trivial solution is obtained, then either the eigenvalue was found incorrectly, or the system was composed/solved with an error.

A convenient value of the free variable gives compact coordinates.

Eigenvector:

And once again, we check that the found solution satisfies every equation of the system. In subsequent paragraphs and in subsequent tasks, I recommend taking this wish as a mandatory rule.

2) For the next eigenvalue, using the same principle, we obtain the following system:

From the 2nd equation of the system we express one variable and substitute it into the third equation:

Since the 'z' coordinate is equal to zero, we obtain a system from each equation of which a linear dependence follows.

Let the free variable take a convenient value.

Checking that the solution satisfies every equation of the system, we conclude:

Thus, the eigenvector is found.

3) And finally, to the third eigenvalue there corresponds the system:

The second equation looks the simplest, so let's express a variable from it and substitute it into the 1st and 3rd equations:

Everything is fine: a linear dependence has emerged, which we substitute into the expression obtained.

As a result, 'x' and 'y' have been expressed through 'z'. In practice it is not necessary to achieve precisely such relations; in some cases it is more convenient to express both through 'x' or both through 'y', or even in a 'chain': for example, 'x' through 'y' and 'y' through 'z'.

Setting a convenient value of 'z', we check that the found solution satisfies each equation of the system and write down the third eigenvector.

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions ("there and back again"), along which the linear transformation takes non-zero vectors (eigenvectors) into collinear vectors.

If the condition required finding the canonical decomposition, then it is possible here, because different eigenvalues correspond to different linearly independent eigenvectors. We compose the matrix S from their coordinates and the diagonal matrix Λ from the corresponding eigenvalues, and find the inverse matrix S^(-1).

If, by the condition, you need to write the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form Λ = S^(-1)·A·S. There is a difference, and the difference is significant! This matrix is precisely the diagonal matrix Λ.

A problem with simpler calculations for you to solve on your own:

Example 5

Find eigenvectors of a linear transformation given by a matrix

When finding the characteristic numbers, try not to end up with a full third-degree polynomial. Also, your solutions of the systems may differ from mine (there is no uniqueness here), and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. It is more aesthetically pleasing to present the answer with the smallest integer coordinates, but it's okay if you stop at another option. However, there are reasonable limits to everything, and an unnecessarily unwieldy version no longer looks good.

An approximate final sample of the assignment at the end of the lesson.

How do we solve the problem in the case of multiple eigenvalues?

The general algorithm remains the same, but it has its own peculiarities, and it is advisable to keep some parts of the solution in a more strict academic style:

Example 6

Find the eigenvalues and eigenvectors

Solution

Naturally, we expand the determinant along the convenient first column:

And, after factoring the quadratic trinomial:

As a result, the eigenvalues are obtained, two of which coincide (a root of multiplicity two).

Let's find the eigenvectors:

1) Let's deal with the lone root according to a 'simplified' scheme:

From the last two equations, the equality is clearly visible, which, obviously, should be substituted into the 1st equation of the system:

You won't find a better combination:
Eigenvector:

2-3) Now let's deal with the pair of multiple roots. In this case there may turn out to be either two eigenvectors or one. Regardless of the multiplicity of the root, we substitute the value into the determinant, which brings us to the following homogeneous system of linear equations:

The eigenvectors are exactly the vectors of the
fundamental system of solutions.

Actually, throughout the entire lesson we have done nothing but find the vectors of the fundamental system; it's just that, for the time being, this term was not particularly needed. By the way, those clever students who slipped past the topic of homogeneous equations in camouflage suits will be forced to study it now.


The only action was to remove the extra lines. The result is a one-by-three matrix with a formal “step” in the middle.
– basic variable, – free variables. There are two free variables, therefore there are also two vectors of the fundamental system.

Let's express the basic variable in terms of free variables: . The zero multiplier in front of the “X” allows it to take on absolutely any values ​​(which is clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row, but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can select these vectors mentally, simply by analyzing the system, but some knowledge is needed here: there are three variables and the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 - 1 = 2 vectors. However, the found vectors are clearly visible even without this knowledge, purely on an intuitive level (and the third vector can be written even more 'beautifully'). But I warn you that in another example a simple selection may not be possible, which is why the remark is intended for experienced readers. Besides, why not take some other suitable vector as the third one? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is, in principle, acceptable, but 'crooked', since the 'other' vector would be a linear combination of the vectors of the fundamental system.

Answer: eigenvalues: , eigenvectors:
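A fundamental system of solutions for a multiple eigenvalue is just a basis of the null space of A - λE, and it can be computed through the SVD. A sketch assuming NumPy; the matrix is a hypothetical one with a double eigenvalue, not Example 6 itself:

import numpy as np

def null_space_basis(M, tol=1e-10):
    # Basis of solutions of M x = 0 (the fundamental system), via the SVD:
    # rows of vh with (numerically) zero singular values span the kernel.
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].T  # columns form the basis

# Hypothetical matrix with eigenvalues 2, 2, 1.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

basis = null_space_basis(A - 2.0 * np.eye(3))
print(basis.shape[1])  # 2: two independent eigenvectors for the double root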

A similar example for an independent solution:

Example 7

Find the eigenvalues and eigenvectors

An approximate sample of the final design at the end of the lesson.

It should be noted that in both the 6th and 7th examples a triple of linearly independent eigenvectors is obtained, and therefore the original matrix can be represented in the canonical decomposition. But such luck does not happen in every case:

Example 8


Solution: Let’s create and solve the characteristic equation:

Let's expand the determinant along the first column:

We carry out further simplifications by the method already considered, avoiding a third-degree polynomial, and obtain the eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the root:

Don't be surprised that other variable names are also in use here alongside the usual set; there is no difference.

From the 3rd equation we express a variable and substitute it into the 1st and 2nd equations:

From both equations it follows:

Let the free variable take a convenient value; then:

2-3) For the multiple eigenvalue we get the system:

Let's write down the matrix of the system and, using elementary transformations, bring it to a stepwise form:
