
Matrix algebra - Inverse matrix

Inverse matrix

The inverse matrix is a matrix that, when multiplied by a given matrix both on the right and on the left, gives the identity matrix.
Let us denote the inverse of the matrix A by A⁻¹; then, according to the definition, we get:

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix.
A square matrix is called non-singular (non-degenerate) if its determinant is not zero. Otherwise it is called singular (degenerate).

The theorem holds: Every non-singular matrix has an inverse matrix.

The operation of finding the inverse matrix is called matrix inversion. Let us consider the matrix inversion algorithm. Suppose a non-singular matrix A of order n is given:

where Δ = det A ≠ 0.

The algebraic complement of an element of the n-th order matrix A is the determinant of the matrix of order (n-1) obtained by deleting the i-th row and the j-th column of A, taken with the sign (-1)^(i+j):

Let us form the so-called adjugate (adjoint) matrix:

where A_ij are the algebraic complements of the corresponding elements of the matrix A.
Note that the algebraic complements of the elements of the rows of A are placed in the corresponding columns of the matrix Ã, that is, the matrix is transposed at the same time.
Dividing all the elements of the matrix Ã by Δ, the value of the determinant of A, we obtain the inverse matrix:

Let us note a number of special properties of the inverse matrix:
1) for a given matrix A, its inverse matrix is unique;
2) if an inverse matrix exists, then the right inverse and the left inverse matrices coincide with it;
3) a singular (degenerate) square matrix does not have an inverse matrix.

Basic properties of an inverse matrix:
1) the determinant of the inverse matrix and the determinant of the original matrix are reciprocals: det A⁻¹ = 1 / det A;
2) the inverse matrix of a product of square matrices is equal to the product of the inverses of the factors, taken in reverse order: (A·B)⁻¹ = B⁻¹·A⁻¹;
3) the transposed inverse matrix is equal to the inverse of the given transposed matrix: (A⁻¹)ᵀ = (Aᵀ)⁻¹.

EXAMPLE Calculate the inverse of the given matrix.

Definition 1: a matrix is ​​called singular if its determinant is zero.

Definition 2: a matrix is ​​called non-singular if its determinant is not equal to zero.

Matrix "A" is called inverse matrix, if the condition A*A-1 = A-1 *A = E (unit matrix) is satisfied.

A square matrix is ​​invertible only if it is non-singular.

Scheme for calculating the inverse matrix:

1) Calculate the determinant of matrix "A"; if det A = 0, then the inverse matrix does not exist.

2) Find all algebraic complements of matrix "A".

3) Create the matrix of algebraic complements (Aij).

4) Transpose the matrix of algebraic complements: (Aij)ᵀ.

5) Multiply the transposed matrix by the reciprocal of the determinant of the original matrix, 1/det A.

6) Perform a check: multiply the original matrix by the result; the product must be the identity matrix E.

At first glance it may seem complicated, but in fact everything is very simple. All solutions are based on simple arithmetic operations, the main thing when solving is not to get confused with the “-” and “+” signs and not to lose them.
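To make this scheme concrete, here is a minimal Python sketch of the same cofactor (adjugate) algorithm. It assumes NumPy is available, and the sample matrix in it is purely illustrative, not the matrix "A" from the task below.

    import numpy as np

    def inverse_via_cofactors(A):
        """Invert a square matrix by the scheme above: complements -> transpose -> divide by det."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        det = np.linalg.det(A)
        if np.isclose(det, 0.0):
            raise ValueError("det A = 0: the inverse matrix does not exist")
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # algebraic complement A_ij = (-1)^(i+j) * minor_ij
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T / det          # steps 4 and 5: transpose, then divide by the determinant

    A = [[2, 5, 7], [6, 3, 4], [5, -2, -3]]      # illustrative matrix
    A_inv = inverse_via_cofactors(A)
    print(np.round(A_inv @ np.asarray(A), 6))    # step 6, the check: must print the identity matrix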

Now let’s solve a practical task together by calculating the inverse matrix.

Task: find the inverse matrix "A" shown in the picture below:

We solve everything exactly as indicated in the plan for calculating the inverse matrix.

1. The first thing to do is to find the determinant of matrix "A":

Explanation:

We simplified our determinant using its basic properties. First, we added to the 2nd and 3rd rows the elements of the first row, multiplied by suitable numbers.

Second, we swapped the 2nd and 3rd columns of the determinant and, according to its properties, changed its sign.

Third, we took the common factor (-1) out of the second row, thereby changing the sign again, so it became positive. We also simplified row 3 in the same way as at the very beginning of the example.

We obtained a triangular determinant whose elements below the diagonal are equal to zero; by property 7 it is equal to the product of the diagonal elements. In the end we got det A = 26, therefore the inverse matrix exists.

A11 = 1*(3+1) = 4

A12 = -1*(9+2) = -11

A13 = 1*1 = 1

A21 = -1*(-6) = 6

A22 = 1*(3-0) = 3

A23 = -1*(1+4) = -5

A31 = 1*2 = 2

A32 = -1*(-1) = -1

A33 = 1*(1+6) = 7

3. The next step is to compose a matrix from the resulting complements:

5. Multiply this matrix by the reciprocal of the determinant, that is, by 1/26:

6. Now we just need to check:

The check produced the identity matrix; therefore, the solution was carried out correctly.

The second way to calculate the inverse matrix.

1. Elementary matrix transformation

2. Finding the inverse matrix via elementary transformations.

Elementary matrix transformations include:

1. Multiplying a row by a number that is not equal to zero.

2. Adding to any row another row multiplied by a number.

3. Swap the rows of the matrix.

By applying a chain of elementary transformations, we obtain another matrix.

A⁻¹ = ?

1. (A | E) ~ (E | A⁻¹)

2. A⁻¹ · A = E

Let's look at this practical example with real numbers.

Exercise: Find the inverse matrix.

Solution:

Let's check:

A little clarification on the solution:

First, we swapped rows 1 and 2 of the matrix, then multiplied the first row by (-1).

After that, we multiplied the first row by (-2) and added it to the second row of the matrix. Then we multiplied row 2 by 1/4.

The final stage of the transformation was multiplying the second row by 2 and adding it to the first. As a result, we have the identity matrix on the left; therefore, the matrix on the right is the inverse matrix.

After checking, we were convinced that the solution was correct.

As you can see, calculating the inverse matrix is ​​very simple.
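As a complement to the worked example, here is a hedged Python sketch of the same (A | E) ~ (E | A⁻¹) procedure. NumPy is assumed to be available, and the pivot-selection details are my own assumption rather than the exact sequence of steps used above.

    import numpy as np

    def inverse_via_row_ops(A):
        """Reduce the block matrix (A | E) to (E | A^-1) with elementary row transformations."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        aug = np.hstack([A, np.eye(n)])                  # build (A | E)
        for col in range(n):
            pivot = col + np.argmax(np.abs(aug[col:, col]))
            if np.isclose(aug[pivot, col], 0.0):
                raise ValueError("the matrix is singular, no inverse exists")
            aug[[col, pivot]] = aug[[pivot, col]]        # swap rows
            aug[col] /= aug[col, col]                    # scale the pivot row
            for row in range(n):
                if row != col:
                    aug[row] -= aug[row, col] * aug[col] # add a multiple of the pivot row
        return aug[:, n:]                                # the right block is A^-1

    A = [[2, 1], [7, 4]]                                 # illustrative 2x2 matrix
    print(inverse_via_row_ops(A))                        # expected: [[ 4. -1.] [-7.  2.]]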

At the end of this lecture, I would also like to spend a little time on the properties of such a matrix.

Finding the inverse matrix.

In this article we will understand the concept of an inverse matrix, its properties and methods of finding. Let us dwell in detail on solving examples in which it is necessary to construct an inverse matrix for a given one.

Page navigation.

    Inverse matrix - definition.

    Finding the inverse matrix using a matrix from algebraic complements.

    Properties of an inverse matrix.

    Finding the inverse matrix using the Gauss-Jordan method.

    Finding the elements of the inverse matrix by solving the corresponding systems of linear algebraic equations.

Inverse matrix - definition.

The concept of an inverse matrix is ​​introduced only for square matrices whose determinant is nonzero, that is, for non-singular square matrices.

Definition.

A matrix A⁻¹ is called the inverse of a matrix A, whose determinant is different from zero, if the equalities A·A⁻¹ = A⁻¹·A = E are true, where E is the identity matrix of order n by n.

Finding the inverse matrix using a matrix from algebraic complements.

How to find the inverse matrix for a given one?

First, we need the concepts of a transposed matrix, a matrix minor, and the algebraic complement of a matrix element.

Definition.

A minor of the k-th order of a matrix A of size m by n is the determinant of a matrix of order k by k formed from the elements of A located in k selected rows and k selected columns (k does not exceed the smaller of the numbers m and n).

The minor of order (n-1) that is composed of the elements of all rows except the i-th and all columns except the j-th of a square matrix A of order n by n will be denoted M_ij.

In other words, the minor M_ij is obtained from a square matrix A of order n by n by crossing out the elements of the i-th row and the j-th column.

For example, let us write the minor of the 2nd order that is obtained from the matrix by selecting the elements of its second and third rows and its first and third columns. We will also show the minor obtained from the matrix by crossing out the second row and the third column. Let us illustrate the construction of these minors:

Definition.

The algebraic complement of an element of a square matrix A is the minor of order (n-1) obtained from A by crossing out the elements of its i-th row and j-th column, multiplied by (-1)^(i+j).

The algebraic complement of the element a_ij is denoted A_ij. Thus, A_ij = (-1)^(i+j)·M_ij.

For example, for the matrix the algebraic complement of an element is .

Secondly, we will need two properties of the determinant, which we discussed in the section calculating the determinant of a matrix:

Based on these properties of the determinant, the definition of multiplication of a matrix by a number, and the concept of an inverse matrix, the following equality is true: A⁻¹ = (1/|A|)·(A*)ᵀ, where A* is the matrix whose elements are the algebraic complements of the corresponding elements of A.

This matrix is indeed the inverse of the matrix A, since the equalities A·A⁻¹ = A⁻¹·A = E are satisfied. Let's show it.

Let us compose an algorithm for finding the inverse matrix using the equality A⁻¹ = (1/|A|)·(A*)ᵀ.

Let's look at the algorithm for finding the inverse matrix using an example.

Example.

Given a matrix . Find the inverse matrix.

Solution.

Let us calculate the determinant of the matrix A, expanding it along the elements of the third column:

The determinant is nonzero, so the matrix A is invertible.

Let us find the matrix of algebraic complements:

That's why

Let us transpose the matrix of algebraic complements:

Now we find the inverse matrix as A⁻¹ = (1/|A|)·(A*)ᵀ:

Let's check the result:

The equalities are satisfied; therefore, the inverse matrix has been found correctly.

Properties of an inverse matrix.

The concept of an inverse matrix, the equality A⁻¹ = (1/|A|)·(A*)ᵀ, the definitions of operations on matrices, and the properties of the determinant of a matrix make it possible to justify the following properties of the inverse matrix:

Finding the elements of the inverse matrix by solving the corresponding systems of linear algebraic equations.

Let us consider another way to find the inverse matrix for a square matrix A of order n by n.

This method is based on solving n systems of linear inhomogeneous algebraic equations with n unknowns. The unknown variables in these systems of equations are the elements of the inverse matrix.

The idea is very simple. Let us denote the inverse matrix by X, that is, A⁻¹ = X. By the definition of the inverse matrix, A·X = E.

Equating the corresponding elements column by column, we obtain n systems of linear equations.

We solve them in any way and form an inverse matrix from the found values.
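A minimal Python sketch of this idea (NumPy assumed): each column of the inverse matrix is found by solving the system A·x = e_i, where e_i is the corresponding column of the identity matrix. The matrix in it is illustrative, not the one from the example below.

    import numpy as np

    def inverse_by_columns(A):
        """Assemble A^-1 column by column from the solutions of A @ x_i = e_i."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        X = np.empty((n, n))
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0                       # i-th column of the identity matrix E
            X[:, i] = np.linalg.solve(A, e_i)  # i-th column of the inverse matrix
        return X

    A = [[1, 2], [3, 5]]                       # illustrative matrix
    print(inverse_by_columns(A))               # expected: [[-5.  2.] [ 3. -1.]]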

Let's look at this method with an example.

Example.

Given a matrix . Find the inverse matrix.

Solution.

Let us take A⁻¹ = X. The equality A·X = E gives us three systems of linear inhomogeneous algebraic equations:

We will not describe the solution to these systems; if necessary, refer to the section solving systems of linear algebraic equations.

From the first system of equations we obtain the first column of the inverse matrix, from the second the second column, and from the third the third. Therefore, the required inverse matrix is assembled from these columns. We recommend checking it to make sure the result is correct.

Let us summarize.

We looked at the concept of an inverse matrix, its properties, and three methods for finding it.

Example of solutions using the inverse matrix method

Exercise 1. Solve the SLAE using the inverse matrix method:
2x1 + 3x2 + 3x3 + x4 = 1
3x1 + 5x2 + 3x3 + 2x4 = 2
5x1 + 7x2 + 6x3 + 2x4 = 3
4x1 + 4x2 + 3x3 + x4 = 4


Solution. Let us write the system in matrix form A·X = B, where the matrix of the system is

A =
2 3 3 1
3 5 3 2
5 7 6 2
4 4 3 1

Vector B: Bᵀ = (1, 2, 3, 4).
Main determinant (expansion along the first column):
Minor for (1,1): 5(6·1 - 3·2) - 7(3·1 - 3·2) + 4(3·2 - 6·2) = -3
Minor for (2,1): 3(6·1 - 3·2) - 7(3·1 - 3·1) + 4(3·2 - 6·1) = 0
Minor for (3,1): 3(3·1 - 3·2) - 5(3·1 - 3·1) + 4(3·2 - 3·1) = 3
Minor for (4,1): 3(3·2 - 6·2) - 5(3·2 - 6·1) + 7(3·2 - 3·1) = 3
Determinant: ∆ = 2·(-3) - 3·0 + 5·3 - 4·3 = -3

Transposed matrix Aᵀ. Algebraic complements:
∆1,1 = 5(6·1 - 2·3) - 3(7·1 - 2·4) + 2(7·3 - 6·4) = -3
∆1,2 = -(3(6·1 - 2·3) - 3(7·1 - 2·4) + 1(7·3 - 6·4)) = 0
∆1,3 = 3(3·1 - 2·3) - 3(5·1 - 2·4) + 1(5·3 - 3·4) = 3
∆1,4 = -(3(3·2 - 2·6) - 3(5·2 - 2·7) + 1(5·6 - 3·7)) = -3
∆2,1 = -(3(6·1 - 2·3) - 3(5·1 - 2·4) + 2(5·3 - 6·4)) = 9
∆2,2 = 2(6·1 - 2·3) - 3(5·1 - 2·4) + 1(5·3 - 6·4) = 0
∆2,3 = -(2(3·1 - 2·3) - 3(3·1 - 2·4) + 1(3·3 - 3·4)) = -6
∆2,4 = 2(3·2 - 2·6) - 3(3·2 - 2·5) + 1(3·6 - 3·5) = 3
∆3,1 = 3(7·1 - 2·4) - 5(5·1 - 2·4) + 2(5·4 - 7·4) = -4
∆3,2 = -(2(7·1 - 2·4) - 3(5·1 - 2·4) + 1(5·4 - 7·4)) = 1
∆3,3 = 2(5·1 - 2·4) - 3(3·1 - 2·4) + 1(3·4 - 5·4) = 1
∆3,4 = -(2(5·2 - 2·7) - 3(3·2 - 2·5) + 1(3·7 - 5·5)) = 0
∆4,1 = -(3(7·3 - 6·4) - 5(5·3 - 6·4) + 3(5·4 - 7·4)) = -12
∆4,2 = 2(7·3 - 6·4) - 3(5·3 - 6·4) + 3(5·4 - 7·4) = -3
∆4,3 = -(2(5·3 - 3·4) - 3(3·3 - 3·4) + 3(3·4 - 5·4)) = 9
∆4,4 = 2(5·6 - 3·7) - 3(3·6 - 3·5) + 3(3·7 - 5·5) = -3
Inverse matrix: A⁻¹ = (1/∆)·(∆i,j). Results vector X = A⁻¹·B: Xᵀ = (2, -1, -0.33, 1), i.e. x1 = 2, x2 = -1, x3 = -0.33, x4 = 1.
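For comparison, a short NumPy sketch of the same computation X = A⁻¹·B for this system:

    import numpy as np

    # Coefficient matrix and right-hand side of Exercise 1
    A = np.array([[2, 3, 3, 1],
                  [3, 5, 3, 2],
                  [5, 7, 6, 2],
                  [4, 4, 3, 1]], dtype=float)
    B = np.array([1, 2, 3, 4], dtype=float)

    A_inv = np.linalg.inv(A)       # exists because det A = -3 is nonzero
    X = A_inv @ B                  # X = A^-1 * B
    print(X)                       # approximately [ 2. -1. -0.33  1.]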


Task 2. Write the system of equations in matrix form and solve it using the inverse matrix. Check the resulting solution.

Example 2. Write the system of equations in matrix form and solve it using the inverse matrix.

Example. A system of three linear equations with three unknowns is given. Required: 1) find its solution using Cramer's formulas; 2) write the system in matrix form and solve it using matrix calculus. Solution. Let us denote by A the matrix of coefficients of the unknowns, by X the column matrix of unknowns, and by B the column matrix of free terms:

Vector B: Bᵀ = (4, -3, -3). Taking these notations into account, the system of equations takes the following matrix form: A·X = B. If the matrix A is non-singular (its determinant is non-zero), then it has an inverse matrix A⁻¹. Multiplying both sides of the equation on the left by A⁻¹, we get A⁻¹·A·X = A⁻¹·B; since A⁻¹·A = E, this gives X = A⁻¹·B. This equality is called the matrix notation of the solution of a system of linear equations. To find the solution of the system of equations, it is necessary to calculate the inverse matrix A⁻¹. The system has a solution if the determinant of the matrix A is nonzero. Let us find the main determinant: ∆ = -1·((-2)·(-1) - 1·1) - 3·(3·(-1) - 1·0) + 2·(3·1 - (-2)·0) = 14. Since the determinant 14 ≠ 0, we continue the solution. To do this, we find the inverse matrix through algebraic complements. Let us have the non-singular matrix A:

We calculate algebraic complements.

∆1,1 = ((-2)·(-1) - 1·1) = 1

∆1,2 = -(3·(-1) - 0·1) = 3

∆1,3 = (3·1 - 0·(-2)) = 3

∆2,1 = -(3·(-1) - 1·2) = 5

∆2,2 = ((-1)·(-1) - 0·2) = 1

∆2,3 = -((-1)·1 - 0·3) = 1

∆3,1 = (3·1 - (-2)·2) = 7

∆3,2 = -((-1)·1 - 3·2) = 7

∆3,3 = ((-1)·(-2) - 3·3) = -7

Xᵀ = (-1, 1, 2): x1 = -14/14 = -1, x2 = 14/14 = 1, x3 = 28/14 = 2.
Check: (-1)·(-1) + 3·1 + 0·2 = 4; 3·(-1) + (-2)·1 + 1·2 = -3; 2·(-1) + 1·1 + (-1)·2 = -3.
Answer: -1, 1, 2.

This topic is one of the most hated among students. Worse, probably, are the qualifiers.

The trick is that the very concept of an inverse element (and I’m not just talking about matrices) refers us to the operation of multiplication. Even in the school curriculum, multiplication is considered a complex operation, and multiplication of matrices is generally a separate topic, to which I have an entire paragraph and video lesson dedicated.

Today we will not go into the details of matrix calculations. Let’s just remember: how matrices are designated, how they are multiplied, and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace(\left[ \begin(matrix) ((a)_(11)) & ((a)_(12)) & ... & ((a)_(1n)) \\ ((a)_(21)) & ((a)_(22)) & ... & ((a)_(2n)) \\ ... & ... & ... & ... \\ ((a)_(m1)) & ((a)_(m2)) & ... & ((a)_(mn)) \\\end(matrix) \right])_(m\times n)\]

To avoid accidentally mixing up rows and columns (believe me, in an exam you can confuse a one with a two, let alone some rows), just look at the picture:

Determining indices for matrix cells

What's happening? If you place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with coordinates $\left(x;y \right)$ - this will be the row number and column number.

Why is the coordinate system placed in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis directed downwards and not to the right? Again, it's simple: take a standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it covers the matrix. This is a 90 degree clockwise rotation - we see the result in the picture.

In general, we have figured out how to determine the indices of matrix elements. Now let's look at multiplication.

Definition. Matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, when the number of columns in the first coincides with the number of rows in the second, are called consistent.

Exactly in that order. To avoid confusion, one can say that the matrices $A$ and $B$ form an ordered pair $\left(A;B \right)$: if they are consistent in this order, it does not at all follow that $B$ and $A$, i.e. the pair $\left(B;A \right)$, are also consistent.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$, the elements of which $((c)_(ij))$ are calculated according to the formula:

\[((c)_(ij))=\sum\limits_(k=1)^(n)(((a)_(ik)))\cdot ((b)_(kj))\]

In other words: to get the element $((c)_(ij))$ of the matrix $C=A\cdot B$, you need to take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply the elements of this row and column in pairs, and add up the results.
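A small Python sketch of this row-by-column rule (a naive triple loop, written only to make the formula concrete; library routines would of course be used in practice):

    def matmul(A, B):
        """Product of consistent matrices A (m x n) and B (n x k) by the row-by-column rule."""
        m, n, k = len(A), len(B), len(B[0])
        assert all(len(row) == n for row in A), "the matrices are not consistent"
        C = [[0] * k for _ in range(m)]
        for i in range(m):
            for j in range(k):
                # c_ij = sum over s of a_is * b_sj
                C[i][j] = sum(A[i][s] * B[s][j] for s in range(n))
        return C

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(matmul(A, B))   # [[19, 22], [43, 50]]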

Yes, that’s such a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication, generally speaking, is non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)$;
  3. And even distributively: $\left(A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And once again distributively: $A\cdot \left(B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be described separately for the left and right sum factor precisely because of the non-commutativity of the multiplication operation.

If it turns out that $A\cdot B=B\cdot A$, such matrices are called commutative.

Among all matrices there are special ones - those that, when multiplied by any matrix $A$, again give $A$:

Definition. A matrix $E$ is called identity if $A\cdot E=A$ or $E\cdot A=A$. In the case of a square matrix $A$ we can write:

The identity matrix is ​​a frequent guest when solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And because of this $E$, someone came up with all the nonsense that will be written next.

What is an inverse matrix

Since matrix multiplication is a very labor-intensive operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix also turns out to be not the most trivial. And requiring some explanation.

Key Definition

Well, it's time to know the truth.

Definition. A matrix $B$ is called the inverse of a matrix $A$ if $A\cdot B=B\cdot A=E$.

The inverse matrix is denoted by $((A)^(-1))$ (not to be confused with a power!), so the definition can be rewritten as follows:

It would seem that everything is extremely simple and clear. But when analyzing this definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how to determine: when it exists and when it does not?
  2. And who said that there is exactly one such matrix? What if for some initial matrix $A$ there is a whole crowd of inverses?
  3. What do all these “reverses” look like? And how, exactly, should we count them?

As for calculation algorithms, we will talk about this a little later. But we will answer the remaining questions right now. Let us formulate them in the form of separate statements-lemmas.

Basic properties

Let's start with how the matrix $A$ should, in principle, look in order for $((A)^(-1))$ to exist for it. Now we will make sure that both of these matrices must be square, and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $((A)^(-1))$. Then both of these matrices are square, and of the same order $n$.

Proof. It's simple. Let the matrix $A=\left[ m\times n \right]$, $((A)^(-1))=\left[ a\times b \right]$. Since the product $A\cdot ((A)^(-1))=E$ exists by definition, the matrices $A$ and $((A)^(-1))$ are consistent in the order shown:

\[\begin(align) & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end( align)\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are “transit” and must be equal.

At the same time, the inverse multiplication is also defined: $((A)^(-1))\cdot A=E$, therefore the matrices $((A)^(-1))$ and $A$ are also consistent in the specified order:

\[\begin(align) & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end( align)\]

Thus, without loss of generality, we can assume that $A=\left[ m\times n \right]$, $((A)^(-1))=\left[ n\times m \right]$. However, by definition $A\cdot ((A)^(-1))=((A)^(-1))\cdot A$, therefore the sizes of the matrices strictly coincide:

\[\begin(align) & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end(align)\]

So it turns out that all three matrices - $A$, $((A)^(-1))$ and $E$ - are square matrices of size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices are invertible. Now let's make sure that the inverse matrix is ​​always the same.

Lemma 2. Given a matrix $A$ and its inverse $((A)^(-1))$. Then this inverse matrix is ​​the only one.

Proof. Let's go by contradiction: let the matrix $A$ have at least two inverses - $B$ and $C$. Then, according to definition, the following equalities are true:

\[\begin(align) & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end(align)\]

From Lemma 1 we conclude that all four matrices - $A$, $B$, $C$ and $E$ - are square matrices of the same order: $\left[ n\times n \right]$. Therefore, the product $B\cdot A\cdot C$ is defined:

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin(align) & B\cdot A\cdot C=\left(B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left(A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end(align)\]

We obtained the only possible outcome: the two instances of the inverse matrix are equal. The lemma is proven.

The above arguments repeat almost verbatim the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only significant addition is taking into account the dimension of matrices.

However, we still do not know anything about whether every square matrix is ​​invertible. Here the determinant comes to our aid - this is a key characteristic for all square matrices.

Lemma 3. Given a matrix $A$. If its inverse matrix $((A)^(-1))$ exists, then the determinant of the original matrix is ​​nonzero:

\[\left| A\right|\ne 0\]

Proof. We already know that $A$ and $((A)^(-1))$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A\right|$ and $\left| ((A)^(-1)) \right|$. However, the determinant of a product is equal to the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot ((A)^(-1)) \right|=\left| A \right|\cdot \left| ((A)^(-1)) \right|\]

But according to the definition, $A\cdot ((A)^(-1))=E$, and the determinant of $E$ is always equal to 1, so

\[\begin(align) & A\cdot ((A)^(-1))=E; \\ & \left| A\cdot ((A)^(-1)) \right|=\left| E\right|; \\ & \left| A \right|\cdot \left| ((A)^(-1)) \right|=1. \\ \end(align)\]

The product of two numbers is equal to one only if each of these numbers is non-zero:

\[\left| A \right|\ne 0;\quad \left| ((A)^(-1)) \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix - and it will become completely clear why, with a zero determinant, no inverse matrix in principle can exist.

But first, let’s formulate an “auxiliary” definition:

Definition. A singular matrix is ​​a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can claim that every invertible matrix is ​​non-singular.

How to find the inverse of a matrix

Now we will consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms, and we will also consider the second one today.

The one that will be discussed now is very effective for matrices of size $\left[ 2\times 2 \right]$ and - partially - size $\left[ 3\times 3 \right]$. But starting from the size $\left[ 4\times 4 \right]$ it is better not to use it. Why - now you will understand everything yourself.

Algebraic additions

Get ready. Now there will be pain. No, don’t worry: a beautiful nurse in a skirt, stockings with lace will not come to you and give you an injection in the buttock. Everything is much more prosaic: algebraic additions and Her Majesty the “Union Matrix” come to you.

Let's start with the main thing. Let there be a square matrix of size $A=\left[ n\times n \right]$, whose elements are called $((a)_(ij))$. Then for each such element we can define an algebraic complement:

Definition. Algebraic complement $((A)_(ij))$ to the element $((a)_(ij))$ located in the $i$th row and $j$th column of the matrix $A=\left[ n \times n \right]$ is a construction of the form

\[((A)_(ij))=((\left(-1 \right))^(i+j))\cdot M_(ij)^(*)\]

Where $M_(ij)^(*)$ is the determinant of the matrix obtained from the original $A$ by deleting the same $i$th row and $j$th column.

Again. The algebraic complement to a matrix element with coordinates $\left(i;j \right)$ is denoted as $((A)_(ij))$ and is calculated according to the scheme:

  1. First, we delete the $i$-row and $j$-th column from the original matrix. We obtain a new square matrix, and we denote its determinant as $M_(ij)^(*)$.
  2. Then we multiply this determinant by $((\left(-1 \right))^(i+j))$ - at first this expression may seem mind-blowing, but in essence we are simply figuring out the sign in front of $M_(ij)^(*) $.
  3. We count and get a specific number. Those. the algebraic addition is precisely a number, and not some new matrix, etc.

The matrix $M_(ij)^(*)$ itself is called an additional minor to the element $((a)_(ij))$. And in this sense, the above definition of an algebraic complement is a special case of a more complex definition - what we looked at in the lesson about the determinant.

Important note. Actually, in “adult” mathematics, algebraic additions are defined as follows:

  1. We take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$ - its determinant is called a minor of order $k$ and is denoted $((M)_(k))$.
  2. Then we cross out these “selected” $k$ rows and $k$ columns. Once again you get a square matrix - its determinant is called an additional minor and is denoted $M_(k)^(*)$.
  3. Multiply $M_(k)^(*)$ by $((\left(-1 \right))^(t))$, where $t$ is (attention now!) the sum of the numbers of all selected rows and columns . This will be the algebraic addition.

Look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we will get only 2 terms - these will be the same $i+j$ - the “coordinates” of the element $((a)_(ij))$ for which we are looking for an algebraic complement.

So today we're using a slightly simplified definition. But as we will see later, it will be more than enough. The following thing is much more important:

Definition. The union matrix $S$ of the square matrix $A=\left[ n\times n \right]$ is a new matrix of size $\left[ n\times n \right]$, which is obtained from $A$ by replacing the elements $((a)_(ij))$ by the algebraic complements $((A)_(ij))$:

\[S=\left[ \begin(matrix) ((A)_(11)) & ((A)_(12)) & ... & ((A)_(1n)) \\ ((A)_(21)) & ((A)_(22)) & ... & ((A)_(2n)) \\ ... & ... & ... & ... \\ ((A)_(n1)) & ((A)_(n2)) & ... & ((A)_(nn)) \\\end(matrix) \right]\]

The first thought that arises at the moment of realizing this definition is “how much will have to be counted!” Relax: you will have to count, but not that much. :)
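A hedged Python sketch of exactly this construction (NumPy assumed; the helper name union_matrix is mine), applied to the $\left[ 2\times 2 \right]$ matrix from the first example below:

    import numpy as np

    def union_matrix(A):
        """Union matrix S: every element a_ij is replaced by its algebraic complement A_ij."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        S = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # additional minor M*_ij: delete the i-th row and the j-th column
                M_star = np.delete(np.delete(A, i, axis=0), j, axis=1)
                S[i, j] = (-1) ** (i + j) * np.linalg.det(M_star)
        return S

    A = [[3, 1], [5, 2]]
    print(union_matrix(A))   # expected: [[ 2. -5.] [-1.  3.]]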

Well, all this is very nice, but why is it necessary? But why.

Main theorem

Let's go back a little. Remember, in Lemma 3 it was stated that the invertible matrix $A$ is always non-singular (that is, its determinant is non-zero: $\left| A \right|\ne 0$).

So, the opposite is also true: if the matrix $A$ is not singular, then it is always invertible. And there is even a search scheme for $((A)^(-1))$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and its determinant is nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $((A)^(-1))$ exists and is calculated by the formula:

\[((A)^(-1))=\frac(1)(\left| A \right|)\cdot ((S)^(T))\]

And now - everything is the same, but in legible handwriting. To find the inverse matrix, you need:

  1. Calculate the determinant $\left| A \right|$ and make sure it is non-zero.
  2. Construct the union matrix $S$, i.e. count 100500 algebraic complements $((A)_(ij))$ and place them in place of $((a)_(ij))$.
  3. Transpose this matrix $S$, and then multiply it by the number $q=(1)/(\left| A \right|)\;$.

That's all! The inverse matrix $((A)^(-1))$ has been found. Let's look at examples:

\[\left[ \begin(matrix) 3 & 1 \\ 5 & 2 \\\end(matrix) \right]\]

Solution. Let's check the reversibility. Let's calculate the determinant:

\[\left| A\right|=\left| \begin(matrix) 3 & 1 \\ 5 & 2 \\\end(matrix) \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is different from zero. This means the matrix is ​​invertible. Let's create a union matrix:

Let's calculate the algebraic additions:

\[\begin(align) & ((A)_(11))=((\left(-1 \right))^(1+1))\cdot \left| 2 \right|=2; \\ & ((A)_(12))=((\left(-1 \right))^(1+2))\cdot \left| 5 \right|=-5; \\ & ((A)_(21))=((\left(-1 \right))^(2+1))\cdot \left| 1 \right|=-1; \\ & ((A)_(22))=((\left(-1 \right))^(2+2))\cdot \left| 3\right|=3. \\ \end(align)\]

Please note: the determinants |2|, |5|, |1| and |3| are determinants of matrices of size $\left[ 1\times 1 \right]$, not absolute values. I.e., if the determinants included negative numbers, there would be no need to remove the "minus".

In total, our union matrix looks like this:

\[((A)^(-1))=\frac(1)(\left| A \right|)\cdot ((S)^(T))=\frac(1)(1)\cdot ( (\left[ \begin(array)(*(35)(r)) 2 & -5 \\ -1 & 3 \\\end(array) \right])^(T))=\left[ \begin (array)(*(35)(r)) 2 & -1 \\ -5 & 3 \\\end(array) \right]\]

That's all. The problem is solved.

Answer. $\left[ \begin(array)(*(35)(r)) 2 & -1 \\ -5 & 3 \\\end(array) \right]$

Task. Find the inverse matrix:

\[\left[ \begin(array)(*(35)(r)) 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end(array) \right] \]

Solution. We calculate the determinant again:

\[\begin(align) & \left| \begin(array)(*(35)(r)) 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end(array) \right|=\begin(matrix ) \left(1\cdot 2\cdot 1+\left(-1 \right)\cdot \left(-1 \right)\cdot 1+2\cdot 0\cdot 0 \right)- \\ -\left (2\cdot 2\cdot 1+\left(-1 \right)\cdot 0\cdot 1+1\cdot \left(-1 \right)\cdot 0 \right) \\\end(matrix)= \ \ & =\left(2+1+0 \right)-\left(4+0+0 \right)=-1\ne 0. \\ \end(align)\]

The determinant is nonzero, so the matrix is invertible. But now it's going to be really tough: we need to count as many as 9 (nine!) algebraic complements. And each of them will contain a determinant of size $\left[ 2\times 2 \right]$. Let's go:

\[\begin(matrix) ((A)_(11))=((\left(-1 \right))^(1+1))\cdot \left| \begin(matrix) 2 & -1 \\ 0 & 1 \\\end(matrix) \right|=2; \\ ((A)_(12))=((\left(-1 \right))^(1+2))\cdot \left| \begin(matrix) 0 & -1 \\ 1 & 1 \\\end(matrix) \right|=-1; \\ ((A)_(13))=((\left(-1 \right))^(1+3))\cdot \left| \begin(matrix) 0 & 2 \\ 1 & 0 \\\end(matrix) \right|=-2; \\ ... \\ ((A)_(33))=((\left(-1 \right))^(3+3))\cdot \left| \begin(matrix) 1 & -1 \\ 0 & 2 \\\end(matrix) \right|=2; \\ \end(matrix)\]

In short, the union matrix will look like this:

Therefore, the inverse matrix will be:

\[((A)^(-1))=\frac(1)(-1)\cdot ((S)^(T))=\frac(1)(-1)\cdot \left[ \begin(matrix) 2 & 1 & -3 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \\\end(matrix) \right]=\left[ \begin(array)(*(35)(r))-2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end(array) \right]\]

That's it. Here is the answer.

Answer. $\left[ \begin(array)(*(35)(r)) -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end(array) \right ]$

As you can see, at the end of each example we performed a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse matrix - you should get $E$.

Performing this check is much easier and faster than looking for an error in further calculations when, for example, you are solving a matrix equation.
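A tiny NumPy sketch of that check, using the $\left[ 3\times 3 \right]$ matrix and the inverse found above:

    import numpy as np

    A = np.array([[1, -1,  2],
                  [0,  2, -1],
                  [1,  0,  1]], dtype=float)
    A_inv = np.array([[-2, -1,  3],
                      [ 1,  1, -1],
                      [ 2,  1, -2]], dtype=float)

    # The product must be the identity matrix E (up to rounding errors)
    print(np.allclose(A @ A_inv, np.eye(3)))   # True
    print(np.allclose(A_inv @ A, np.eye(3)))   # True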

Alternative way

As I said, the inverse matrix theorem works great for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case, it’s not so “great” "), but for larger matrices the sadness begins.

But don’t worry: there is an alternative algorithm with which you can calmly find the inverse even for the matrix $\left[ 10\times 10 \right]$. But, as often happens, to consider this algorithm we need a little theoretical background.

Elementary transformations

Among all possible matrix transformations, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column), multiplied by any number $k\ne 0$ (you can, of course, take $k=0$, but what's the point? Nothing will change).
  3. Rearrangement. Take the $i$th and $j$th rows (columns) and swap places.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we have to perform all these perversions on the adjoint matrix. Yes, yes: you heard right. Now there will be one more definition - the last one in today's lesson.

Adjoint matrix

Surely at school you solved systems of equations using the addition method. Well, there, subtract another from one line, multiply some line by a number - that’s all.

So: now everything will be the same, but in an “adult” way. Ready?

Definition. Let a matrix $A=\left[ n\times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the adjoint matrix $\left[ A\left| E\right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E\right. \right]=\left[ \begin(array)(rrrr|rrrr)((a)_(11)) & ((a)_(12)) & ... & ((a)_(1n)) & 1 & 0 & ... & 0 \\((a)_(21)) & ((a)_(22)) & ... & ((a)_(2n)) & 0 & 1 & ... & 0 \\... & ... & ... & ... & ... & ... & ... & ... \\((a)_(n1)) & ((a)_(n2)) & ... & ((a)_(nn)) & 0 & 0 & ... & 1 \\\end(array) \right]\]

In short, we take the matrix $A$, on the right we assign to it the identity matrix $E$ of the required size, we separate them with a vertical bar for beauty - here you have the adjoint. :)

What's the catch? Here's what:

Theorem. Let the matrix $A$ be invertible. Consider the adjoint matrix $\left[ A\left| E\right. \right]$. If, using elementary row transformations, we bring it to the form $\left[ E\left| B\right. \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain the matrix $E$ from $A$, then the matrix $B$ obtained on the right is the inverse of $A$:

\[\left[ A\left| E\right. \right]\to \left[ E\left| B\right. \right]\Rightarrow B=((A)^(-1))\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the adjoint matrix $\left[ A\left| E\right. \right]$;
  2. Perform elementary row transformations until $E$ appears instead of $A$;
  3. Of course, something will also appear on the right - a certain matrix $B$. This will be the inverse;
  4. PROFIT!:)

Of course, this is much easier said than done. So let's look at a couple of examples: for sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

Task. Find the inverse matrix:

\[\left[ \begin(array)(*(35)(r)) 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\\end(array) \right]\]

Solution. We create the adjoint matrix:

\[\left[ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end(array) \right]\]

Since the last column of the original matrix is ​​filled with ones, subtract the first row from the rest:

\[\begin(align) & \left[ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & - 2 & 1 & 0 & 0 & 1 \\\end(array) \right]\begin(matrix) \downarrow \\ -1 \\ -1 \\\end(matrix)\to \\ & \to \left [ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end(array) \right] \\ \end(align)\]

There are no more ones in the third column, except in the first row. But we do not touch that row, otherwise the ones we have just eliminated will begin to "multiply" in the third column.

But we can subtract the second row twice from the last one - then we get a one in the lower left corner:

\[\begin(align) & \left[ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end(array) \right]\begin(matrix) \ \\ \downarrow \\ -2 \\\end(matrix)\to \\ & \left [ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end(array) \right] \\ \end(align)\]

Now we can subtract the last row from the first and twice from the second - this way we “zero” the first column:

\[\begin(align) & \left[ \begin(array)(rrr|rrr) 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end(array) \right]\begin(matrix) -1 \\ -2 \\ \uparrow \\\end(matrix)\to \\ & \ to \left[ \begin(array)(rrr|rrr) 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end(array) \right] \\ \end(align)\]

Multiply the second row by −1, and then subtract it 6 times from the first and add it 1 time to the last:

\[\begin(align) & \left[ \begin(array)(rrr|rrr) 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \ \ 1 & -1 & 0 & 1 & -2 & 1 \\\end(array) \right]\begin(matrix) \ \\ \left| \cdot \left(-1 \right) \right. \\ \ \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrr|rrr) 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end(array) \right]\begin(matrix) -6 \\ \updownarrow \\ +1 \\\end (matrix)\to \\ & \to \left[ \begin(array)(rrr|rrr) 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\\end(array) \right] \\ \end(align)\]

All that remains is to swap rows 1 and 3:

\[\left[ \begin(array)(rrr|rrr) 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & - 18 & 32 & -13 \\\end(array) \right]\]

Ready! On the right is the required inverse matrix.

Answer. $\left[ \begin(array)(*(35)(r))4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\\end(array) \right ]$

Task. Find the inverse matrix:

\[\left[ \begin(matrix) 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\\end(matrix) \right]\]

Solution. We compose the adjoint again:

\[\left[ \begin(array)(rrrr|rrrr) 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \ \ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end(array) \right]\]

Let's cry a little, be sad about how much we have to count now... and start counting. First, let’s “zero out” the first column by subtracting row 1 from rows 2 and 3:

\[\begin(align) & \left[ \begin(array)(rrrr|rrrr) 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end(array) \right]\begin(matrix) \downarrow \\ -1 \\ -1 \\ \ \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrrr|rrrr) 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end(array) \right] \\ \end(align)\]

We see too many “cons” in lines 2-4. Multiply all three rows by −1, and then burn out the third column by subtracting row 3 from the rest:

\[\begin(align) & \left[ \begin(array)(rrrr|rrrr) 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & - 1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end(array) \right]\begin(matrix) \ \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrrr|rrrr) 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & ​​1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\\end (array) \right]\begin(matrix) -2 \\ -1 \\ \updownarrow \\ -2 \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrrr| rrrr) 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end(array) \right] \\ \end(align)\]

Now is the time to “fry” the last column of the original matrix: subtract row 4 from the rest:

\[\begin(align) & \left[ \begin(array)(rrrr|rrrr) 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end(array ) \right]\begin(matrix) +1 \\ -3 \\ -2 \\ \uparrow \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrrr|rrrr) 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end(array) \right] \\ \end(align)\]

Final throw: "burn out" the second column by adding 6 times row 2 to row 1 and subtracting 5 times row 2 from row 3:

\[\begin(align) & \left[ \begin(array)(rrrr|rrrr) 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end(array) \right]\begin(matrix) 6 \\ \updownarrow \\ -5 \\ \ \\\end(matrix)\to \\ & \to \left[ \begin(array)(rrrr|rrrr) 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end(array) \right] \\ \end(align)\]

And again the identity matrix is ​​on the left, which means the inverse is on the right. :)

Answer. $\left[ \begin(matrix) 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & - 1 \\\end(matrix) \right]$

Let us be given a square matrix. You need to find the inverse matrix.

First way. Theorem 4.1 on the existence and uniqueness of an inverse matrix indicates one of the ways to find it.

1. Calculate the determinant of this matrix. If the determinant is zero, then the inverse matrix does not exist (the matrix is singular).

2. Construct a matrix from algebraic complements of matrix elements.

3. Transpose the matrix to obtain the adjoint matrix .

4. Find the inverse matrix (4.1) by dividing all elements of the adjoint matrix by the determinant

Second way. To find the inverse matrix, you can use elementary transformations.

1. Construct a block matrix by assigning to a given matrix an identity matrix of the same order.

2. Using elementary transformations performed on the rows of the matrix, bring its left block to its simplest form. In this case, the block matrix is ​​reduced to the form where is a square matrix obtained as a result of transformations from the identity matrix.

3. If the left block has been reduced to the identity matrix, then the right block is equal to the inverse matrix; if the left block cannot be reduced to the identity matrix, then the matrix does not have an inverse.

In fact, with the help of elementary transformations of the rows of the matrix, it is possible to reduce its left block to a simplified form (see Fig. 1.5). In this case, the block matrix is ​​transformed to the form where is an elementary matrix satisfying the equality. If the matrix is ​​non-degenerate, then according to paragraph 2 of Remarks 3.3 its simplified form coincides with the identity matrix. Then from the equality it follows that. If the matrix is ​​singular, then its simplified form differs from the identity matrix, and the matrix does not have an inverse.

11. Matrix equations and their solution. Matrix form of recording SLAE. Matrix method (inverse matrix method) for solving SLAEs and conditions for its applicability.

Matrix equations are equations of the form A·X = C, X·A = C, or A·X·B = C, where the matrices A, B, C are known and the matrix X is unknown. If the matrices A and B are non-singular, then the solutions of these equations are written, respectively, as X = A⁻¹·C; X = C·A⁻¹; X = A⁻¹·C·B⁻¹.

Matrix form of writing systems of linear algebraic equations. Several matrices can be associated with each SLAE; moreover, the SLAE itself can be written in the form of a matrix equation. For SLAE (1), consider the following matrices:

The matrix A is called the matrix of the system. The elements of this matrix are the coefficients of the given SLAE.

The matrix A˜ is called the extended (augmented) matrix of the system. It is obtained by adding to the system matrix a column containing the free terms b1, b2, ..., bm. Usually this column is separated by a vertical line for clarity.

The column matrix B is called the matrix of free terms, and the column matrix X is the matrix of unknowns.

Using the notation introduced above, SLAE (1) can be written in the form of a matrix equation: A⋅X=B.

Note

The matrices associated with the system can be written in various ways: everything depends on the order of the variables and equations of the SLAE under consideration. But in any case, the order of the unknowns in each equation of a given SLAE must be the same.

The matrix method is suitable for solving SLAEs in which the number of equations coincides with the number of unknown variables and the determinant of the main matrix of the system is different from zero. If the system contains more than three equations, then finding the inverse matrix requires significant computational effort; therefore, in this case it is advisable to use the Gaussian method.
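A hedged NumPy sketch of the three forms of matrix equations listed in this section; the matrices are illustrative and chosen only so that A and B are non-singular:

    import numpy as np

    A = np.array([[2., 1.], [1., 1.]])   # non-singular: det A = 1
    B = np.array([[1., 2.], [0., 1.]])   # non-singular: det B = 1
    C = np.array([[3., 5.], [2., 4.]])

    A_inv = np.linalg.inv(A)
    B_inv = np.linalg.inv(B)

    X1 = A_inv @ C            # solves A * X = C
    X2 = C @ A_inv            # solves X * A = C
    X3 = A_inv @ C @ B_inv    # solves A * X * B = C

    print(np.allclose(A @ X1, C),
          np.allclose(X2 @ A, C),
          np.allclose(A @ X3 @ B, C))   # True True True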

12. Homogeneous SLAEs, conditions for the existence of their non-zero solutions. Properties of partial solutions of homogeneous SLAEs.

A linear equation is called homogeneous if its free term is equal to zero, and inhomogeneous otherwise. A system consisting of homogeneous equations is called homogeneous and has the general form:

13. The concept of linear independence and dependence of partial solutions of a homogeneous SLAE. Fundamental system of solutions (FSR) and its determination. Representation of the general solution of a homogeneous SLAE through the FSR.

A system of functions y1(x), y2(x), …, yn(x) is called linearly dependent on the interval (a, b) if there exists a set of constant coefficients, not all equal to zero at the same time, such that the linear combination of these functions is identically equal to zero on (a, b). If this equality is possible only when all the coefficients are zero, the system of functions y1(x), y2(x), …, yn(x) is called linearly independent on the interval (a, b). In other words, the functions y1(x), y2(x), …, yn(x) are linearly dependent on the interval (a, b) if there exists a non-trivial linear combination of them that is identically equal to zero on (a, b); they are linearly independent on (a, b) if only their trivial linear combination is identically equal to zero on (a, b).

A fundamental system of solutions (FSR) of a homogeneous SLAE is a basis of the system of its solution columns.

The number of elements in the FSR is equal to the number of unknowns of the system minus the rank of the system matrix. Any solution of the original system is a linear combination of the FSR solutions.

Theorem

The general solution of a non-homogeneous SLAE is equal to the sum of a particular solution of a non-homogeneous SLAE and the general solution of the corresponding homogeneous SLAE.

1. If the columns are solutions of a homogeneous system of equations, then any linear combination of them is also a solution of the homogeneous system.

Indeed, from the equalities it follows that

i.e. a linear combination of solutions is a solution of the homogeneous system.

2. If the rank of the matrix of a homogeneous system with n unknowns is equal to r, then the system has n − r linearly independent solutions.

Indeed, using formulas (5.13) for the general solution of a homogeneous system, we find particular solutions, giving the free variables the following standard value sets (each time assuming that one of the free variables is equal to one and the rest are equal to zero):

which are linearly independent. In fact, if you form a matrix from these columns, then its last n − r rows form the identity matrix. Consequently, the minor located in the last rows is not equal to zero (it is equal to one), i.e. it is a basis minor. Therefore, the rank of this matrix is equal to n − r. This means that all columns of this matrix are linearly independent (see Theorem 3.4).

Any collection of n − r linearly independent solutions of a homogeneous system is called a fundamental system (set) of solutions.
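A brief Python sketch (assuming SymPy is available) that produces a fundamental system of solutions of a homogeneous SLAE as a basis of the null space; the coefficient matrix is an arbitrary illustrative example:

    from sympy import Matrix

    # Homogeneous system A * x = 0 with an illustrative coefficient matrix
    A = Matrix([[1, 2, 3],
                [2, 4, 6]])       # rank 1, so the FSR contains 3 - 1 = 2 solutions

    fsr = A.nullspace()           # basis columns of the solution space
    for v in fsr:
        print(v.T)                # each printed row is one FSR solution (transposed)

    # The general solution is any linear combination c1*fsr[0] + c2*fsr[1]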

14. Minor of the k-th order, basis minor, rank of a matrix. Calculating the rank of a matrix.

A minor of order k of a matrix A is the determinant of some square submatrix of A of order k.

In a matrix A of dimensions m x n, a minor of order r is called a basis minor if it is nonzero and all minors of higher order, if they exist, are equal to zero.

The columns and rows of the matrix A, at the intersection of which there is a basis minor, are called the basis columns and rows of A.

Theorem 1. (On the rank of the matrix). For any matrix, the minor rank is equal to the row rank and equal to the column rank.

Theorem 2. (On the basis minor). Each matrix column is decomposed into a linear combination of its basis columns.

The rank of a matrix (or minor rank) is the order of its basis minor or, in other words, the largest order for which nonzero minors exist. The rank of the zero matrix is considered to be 0 by definition.

Let us note two obvious properties of minor rank.

1) The rank of a matrix does not change during transposition, since when a matrix is ​​transposed, all its submatrices are transposed and the minors do not change.

2) If A’ is a submatrix of matrix A, then the rank of A’ does not exceed the rank of A, since a non-zero minor included in A’ is also included in A.
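For practical calculations, a minimal NumPy sketch of finding the rank (the library estimates it numerically from the singular values of the matrix):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [2, 4, 6],
                  [1, 0, 1]])

    print(np.linalg.matrix_rank(A))    # 2: the largest order of a nonzero minor
    print(np.linalg.matrix_rank(A.T))  # 2: the rank does not change under transposition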

15. The concept of an n-dimensional arithmetic vector. Equality of vectors. Operations on vectors (addition, subtraction, multiplication by a number, multiplication by a matrix). Linear combination of vectors.

An ordered collection of n real or complex numbers is called an n-dimensional vector. These numbers are called the coordinates of the vector.

Two (non-zero) vectors a and b are equal if they have the same direction and the same length (modulus). All zero vectors are considered equal. In all other cases, the vectors are not equal.

Vector addition. There are two ways to add vectors: 1. The parallelogram rule. To add two vectors, we place the origins of both at the same point, complete the figure to a parallelogram, and from that same point draw the diagonal of the parallelogram. This diagonal is the sum of the vectors.

2. The second method of adding vectors is the triangle rule. Take the same two vectors and attach the beginning of the second to the end of the first. Now connect the beginning of the first with the end of the second - this is the sum of the vectors. Using the same rule, you can add several vectors: arrange them one after another, and then connect the beginning of the first to the end of the last.

Subtraction of vectors. The opposite vector is directed opposite to the given vector and has the same length. Now it is clear what vector subtraction is: the difference of two vectors is the sum of the first vector and the vector opposite to the second.

Multiplying a vector by a number

Multiplying a vector by a number k produces a vector whose length is |k| times the length of the original vector. It is codirectional with the original vector if k is greater than zero, and oppositely directed if k is less than zero.

The scalar product of two vectors is the product of the lengths of the vectors and the cosine of the angle between them. If the vectors are perpendicular, their scalar product is zero. In coordinates, the scalar product of vectors a and b is expressed as a·b = a1·b1 + a2·b2 + … + an·bn.

Linear combination of vectors

A linear combination of vectors is a vector of the form

λ1·a1 + λ2·a2 + … + λn·an,

where λ1, λ2, …, λn are the coefficients of the linear combination. A combination is called trivial if all its coefficients are zero, and non-trivial otherwise.

16. Scalar product of arithmetic vectors. Vector length and angle between vectors. The concept of vector orthogonality.

The scalar product of vectors a and b is the number

The scalar product is used for: 1) finding the angle between vectors; 2) finding the projection of a vector; 3) calculating the length of a vector; 4) checking the condition of perpendicularity of vectors.

The length of the segment AB is the distance between the points A and B. The angle between the vectors a and b is the angle α = (a, b), 0 ≤ α ≤ π, through which one vector must be rotated so that its direction coincides with that of the other vector, provided that their origins coincide.

A unit vector (ort) of a vector a is a vector of unit length having the same direction as a.
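A small NumPy sketch of these formulas - the scalar product, the length of a vector, and the angle found from cos α = (a·b)/(|a|·|b|); the vectors are illustrative:

    import numpy as np

    a = np.array([1.0, 2.0, 2.0])
    b = np.array([2.0, 0.0, 1.0])

    dot = np.dot(a, b)                         # scalar product a*b
    len_a = np.linalg.norm(a)                  # |a|
    len_b = np.linalg.norm(b)                  # |b|
    angle = np.arccos(dot / (len_a * len_b))   # angle between a and b, in radians

    print(dot, len_a, len_b, np.degrees(angle))
    # a scalar product of zero would mean the vectors are orthogonal (perpendicular)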

17. System of vectors and its linear combination. The concept of linear dependence and independence of a system of vectors. Theorem on necessary and sufficient conditions for the linear dependence of a system of vectors.

A system of vectors a1,a2,...,an is called linearly dependent if there are numbers λ1,λ2,...,λn such that at least one of them is nonzero and λ1a1+λ2a2+...+λnan=0. Otherwise, the system is called linearly independent.

Two vectors a1 and a2 are called collinear if their directions are the same or opposite.

Three vectors a1, a2 and a3 are called coplanar if they are parallel to some plane.

Geometric criteria for linear dependence:

a) system (a1,a2) is linearly dependent if and only if the vectors a1 and a2 are collinear.

b) system (a1,a2,a3) is linearly dependent if and only if the vectors a1,a2 and a3 are coplanar.

Theorem (a necessary and sufficient condition for the linear dependence of a system of vectors).

A system of vectors of a vector space is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary 1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary 2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.
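A short NumPy sketch of a practical dependence test that follows from these statements: a system of vectors is linearly dependent exactly when the rank of the matrix composed of them is less than the number of vectors (the vectors below are illustrative):

    import numpy as np

    def is_linearly_dependent(vectors):
        """Vectors are dependent iff the rank of the stacked matrix < the number of vectors."""
        M = np.array(vectors, dtype=float)
        return np.linalg.matrix_rank(M) < len(vectors)

    a1, a2, a3 = [1, 0, 2], [2, 1, 0], [4, 2, 0]
    print(is_linearly_dependent([a1, a2]))       # False: the vectors are not collinear
    print(is_linearly_dependent([a2, a3]))       # True: a3 = 2*a2, collinear vectors
    print(is_linearly_dependent([a1, a2, a3]))   # True: the system contains a dependent pair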
