§3 Systems of Linear Equations
1. The solution of a system of linear equations with n unknowns and n equations
[ Homogeneous and Inhomogeneous Linear Equations ] A linear system of equations with n unknowns and n equations takes the following form:
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\qquad\cdots\cdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n
\end{cases}
\qquad (1)
$$
When the constant terms b_1, b_2, ..., b_n are not all zero, (1) is called an inhomogeneous system of linear equations; when b_1, b_2, ..., b_n are all zero, (1) is called a homogeneous system of linear equations.
If we write
$$
A = (a_{ij}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \quad \text{(coefficient matrix)}
$$
$$
x = (x_1, x_2, \ldots, x_n)^t
$$
$$
b = (b_1, b_2, \ldots, b_n)^t \quad \text{(vector of constant terms)}
$$
where t denotes transposition, then the linear system (1) can be written in the matrix form
A x = b ( 2 )
[ Inverse matrix method ] When |A| ≠ 0, the solution of the linear system (2) is
$$
x = A^{-1} b
$$
where A^{-1} is the inverse of the coefficient matrix A, and x is called the solution vector of (2).
[ Cramer's Rule ] If |A| ≠ 0, the solution of the system of equations (1) is
$$
x_j = \frac{D_j}{D} \qquad (j = 1, 2, \ldots, n)
$$
where
$$
D = |A| = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}, \qquad
D_j = \begin{vmatrix} a_{11} & \cdots & a_{1,j-1} & b_1 & a_{1,j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,j-1} & b_n & a_{n,j+1} & \cdots & a_{nn} \end{vmatrix}
$$
Here D_j (j = 1, 2, ..., n) is the nth-order determinant obtained by replacing the jth column of A with the constant-term vector b. In particular:
1° Second-order linear system
$$
\begin{cases} a_{11}x_1 + a_{12}x_2 = b_1 \\ a_{21}x_1 + a_{22}x_2 = b_2 \end{cases}
$$
The solution is
$$
x_1 = \frac{D_1}{D}, \qquad x_2 = \frac{D_2}{D}
$$
where
$$
D = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \qquad
D_1 = \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}, \qquad
D_2 = \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}
$$
2° Third-order linear system
$$
\begin{cases} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3 \end{cases}
$$
The solution is
$$
x_1 = \frac{D_1}{D}, \qquad x_2 = \frac{D_2}{D}, \qquad x_3 = \frac{D_3}{D}
$$
where
$$
D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}, \qquad
D_1 = \begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}
$$
$$
D_2 = \begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}, \qquad
D_3 = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}
$$
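Cramer's rule can be sketched directly from the formulas above. A minimal pure-Python version for small systems; the helper names `det` and `cramer_solve` are illustrative:

```python
# Cramer's rule x_j = D_j / D for a small n-by-n system A x = b.
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer_solve(A, b):
    D = det(A)
    if D == 0:
        raise ValueError("Cramer's rule requires |A| != 0")
    x = []
    for j in range(len(b)):
        Aj = [row[:] for row in A]
        for i in range(len(b)):
            Aj[i][j] = b[i]      # replace column j by the constant vector b: D_j
        x.append(det(Aj) / D)
    return x
```

Because the cost of computing n + 1 determinants grows rapidly, Cramer's rule is practical only for very small n; elimination methods (below) are preferred otherwise.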
[ Pivot elimination with back-substitution (Gaussian elimination) ] The nth-order linear system
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
\qquad\cdots\cdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n
\end{cases}
$$
can be represented by the augmented matrix
$$
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{pmatrix}
$$
Elimination steps:
(1) Find the element of largest absolute value in the coefficient matrix (this element is called the pivot). Assume it is a_{11}, the element in the first row and first column (otherwise, if a_{ij} is the pivot, first exchange the ith equation with the first equation and exchange the order of the unknowns x_1 and x_j; the new coefficient matrix then has its pivot in the first row and first column). Multiply the first equation by −a_{i1}/a_{11} and add it to the ith equation (i = 2, 3, ..., n) to obtain a new nth-order linear system, represented by the matrix
$$
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ 0 & b_{22} & \cdots & b_{2n} & b_2' \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & b_{n2} & \cdots & b_{nn} & b_n' \end{pmatrix}
$$
where b_{ij} = a_{ij} − (a_{i1}/a_{11}) a_{1j} and b_i' = b_i − (a_{i1}/a_{11}) b_1.
(2) Find the pivot in the coefficient matrix excluding the first row; assume it is b_{22}. Multiply the second equation by −b_{i2}/b_{22} and add it to the ith equation (i = 3, 4, ..., n) to obtain a new nth-order linear system, represented by the matrix
$$
\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & b_1 \\ 0 & b_{22} & b_{23} & \cdots & b_{2n} & b_2' \\ 0 & 0 & c_{33} & \cdots & c_{3n} & c_3' \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & c_{n3} & \cdots & c_{nn} & c_n' \end{pmatrix}
$$
(3) Continuing by the method of (1) and (2), at the kth step the pivot is chosen from the coefficient matrix excluding the first k − 1 rows, and the corresponding multiples of the kth equation are added to equations k + 1, ..., n. After n − 1 such steps the elimination process ends: the original coefficient matrix has been transformed into an upper triangular matrix (the order of the unknowns may have been exchanged several times along the way), represented schematically by
$$
\begin{pmatrix} a_{11} & * & \cdots & * & * \\ 0 & b_{22} & \cdots & * & * \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & * & * \end{pmatrix}
$$
Back-substitution steps:
From the nth equation, solve for x_n. Substitute x_n into the (n − 1)th equation and solve for x_{n−1}; then substitute x_{n−1}, x_n into the (n − 2)th equation and solve for x_{n−2}; ...; finally, substitute the solved x_2, x_3, ..., x_n into the first equation and solve for x_1.
Note that each time a pivot is chosen here, procedures such as row exchanges and reordering of the unknowns are carried out. The reordering of the unknowns can also be postponed and done all at once after the (n − 1)th step; the triangular coefficient matrix is obtained just the same.
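The elimination and back-substitution steps above can be sketched compactly. A pure-Python version that pivots within each column (the column-pivot variant discussed below) rather than over the whole matrix, since that is what is usually programmed; the function name and test matrix are illustrative:

```python
# Gaussian elimination with column (partial) pivoting and back-substitution,
# operating on the augmented matrix [A | b].
def gauss_solve(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        # pivot: entry of largest absolute value in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            raise ValueError("singular matrix")
        M[k], M[p] = M[p], M[k]                    # row exchange
        for i in range(k + 1, n):                  # eliminate below the pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

Pivoting on the largest available entry keeps the multipliers f bounded by 1, which is what gives the method its good numerical accuracy.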
Example 1 Solving a system of equations by pivot elimination
_{}
Solution. The system of equations is represented by the matrix:
_{}
The steps to solve are as follows:
(1)
The element 23 in the second row, first column is the pivot; it is framed by □ and the system is represented by the matrix
_{}
Multiply the 2nd row of the matrix by the appropriate multiplier and add it to the 1st row; likewise multiply the 2nd row and add it to the 3rd row, obtaining the matrix
_{}
The second pivot, found in the coefficient matrix excluding row 2, is the element in row 1, column 2.
(2) Multiply the 1st row by the appropriate multiplier and add it to the 3rd row to obtain the matrix
_{}_{}
(3) Solve for x_3 from the third equation; substitute it into the second equation and solve for x_2; then substitute x_2, x_3 into the first equation and solve for x_1.
So the solution of the system of equations is (1,2,1).
[ Pivot elimination without back-substitution ] This method is basically the same as the one above; the difference is that at each elimination step, one equation is used to eliminate the corresponding unknown from all the remaining n − 1 equations. For example, in elimination step (2) above, the second equation, multiplied by suitable factors, is added to the ith equation for i = 1, 3, 4, ..., n (n − 1 equations in all; the difference from the previous method is that i = 1 is included here, with b_{12} = a_{12}). The new coefficient matrix is
$$
\begin{pmatrix} a_{11} & 0 & c_{13} & \cdots & c_{1n} \\ 0 & b_{22} & b_{23} & \cdots & b_{2n} \\ 0 & 0 & c_{33} & \cdots & c_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & c_{n3} & \cdots & c_{nn} \end{pmatrix}
$$
and the final coefficient matrix is diagonal:
$$
\begin{pmatrix} d_{11} & & 0 \\ & \ddots & \\ 0 & & d_{nn} \end{pmatrix}
$$
Therefore each unknown can be found directly, without a back-substitution process.
The pivot elimination method without back-substitution has a larger computational load than the version with back-substitution, but it is simpler to program on a computer.
To reduce the amount of computation and to simplify programming, the first step may instead choose as pivot the element of largest absolute value in the first column only of the coefficient matrix (the column pivot); after elimination, the second step chooses the column pivot from the second column of the reduced coefficient matrix, and so on. This elimination method is called column-pivot elimination (partial pivoting), and it too achieves good accuracy.
[ Simple iterative method ] General steps:
(1) Rewrite the linear system
$$
\sum_{j=1}^{n} a_{ij} x_j = b_i \qquad (i = 1, 2, \ldots, n)
$$
in the form
$$
x_i = \sum_{j=1}^{n} b_{ij} x_j + g_i \qquad (i = 1, 2, \ldots, n)
$$
(2) Arbitrarily select a set of initial approximate values x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)} as the 0th approximate solution of the system.
(3) For k = 1, 2, 3, ... in turn, use the formula
$$
x_i^{(k)} = \sum_{j=1}^{n} b_{ij} x_j^{(k-1)} + g_i \qquad (i = 1, 2, \ldots, n)
$$
to find the kth approximate solution, until it satisfies
$$
\max_{1 \le i \le n} \left| x_i^{(k)} - x_i^{(k-1)} \right| < \varepsilon
$$
where ε > 0 is the pre-specified allowable error. The kth approximate solution then satisfies the system within the allowable error ε. Note that the allowable error here does not refer to the maximum absolute error between the approximate solution and the exact solution.
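The three steps above can be sketched for the common rewriting x_i = (b_i − Σ_{j≠i} a_{ij} x_j)/a_{ii} (one choice of the form x_i = Σ b_{ij} x_j + g_i); the function name and stopping constants are illustrative:

```python
# Simple (Jacobi) iteration: all components of the new approximation are
# computed from the previous approximation, then the sweep is repeated
# until successive approximations differ by less than eps.
def jacobi(A, b, eps=1e-6, max_iter=1000):
    n = len(b)
    x = [0.0] * n                          # 0th approximation (arbitrary)
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new
        x = x_new
    return x
```

The test below uses a strictly diagonally dominant matrix, for which this iteration is guaranteed to converge.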
Example 2 Use the simple iteration method to solve the system of equations of Example 1 within a prescribed allowable error.
Solution. As in Example 1, the system can be rewritten in iterative form (satisfying the convergence condition). Choosing the initial values x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 1 and iterating successively, we obtain the series of approximate solutions:
 k   x_1^{(k)}  x_2^{(k)}  x_3^{(k)}      k   x_1^{(k)}  x_2^{(k)}  x_3^{(k)}
 0   1          1          1             10   0.9581     1.9684     1.0000
 1   0.8900     2.0000     0.2553        11   0.9643     2.0000     0.9764
 2   0.9079     1.4987     1.0000        12   0.9723     1.9841     1.0000
 3   0.8739     2.0000     0.6267        13   0.9769     2.0000     0.9882
 4   0.8992     1.7487     1.0000        14   0.9821     1.9920     1.0000
 5   0.8947     2.0000     0.8129        15   0.9853     2.0000     0.9941
 6   0.9171     1.8741     1.0000        16   0.9887     1.9960     1.0000
 7   0.9223     2.0000     0.9062        17   0.9908     2.0000     0.9970
 8   0.9392     1.9369     1.0000        18   0.9929     1.9980     1.0000
 9   0.9463     2.0000     0.9530        19   0.9943     2.0000     0.9985
After 19 iterations we obtain x_1^{(19)} = 0.9943, x_2^{(19)} = 2.0000, x_3^{(19)} = 0.9985, which satisfies the system of equations within the allowable error.
[ Seidel iteration method ] Change the iteration formula in step (3) of the simple iteration method to
$$
x_i^{(k)} = \sum_{j=1}^{i-1} b_{ij} x_j^{(k)} + \sum_{j=i}^{n} b_{ij} x_j^{(k-1)} + g_i \qquad (i = 1, 2, \ldots, n)
$$
The other steps are the same as the simple iterative method .
In general, the Seidel iteration method converges faster than the simple iteration method .
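The Seidel modification can be sketched by changing one line of the simple iteration: each new component is used immediately within the same sweep. The function name and stopping constants are illustrative:

```python
# Seidel (Gauss-Seidel) iteration: identical to the simple iteration except
# that x_i^{(k)} replaces x_i^{(k-1)} as soon as it is computed.
def gauss_seidel(A, b, eps=1e-6, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new                      # used immediately (Seidel update)
        if diff < eps:
            break
    return x
```

On the same diagonally dominant test system as before, this typically reaches the tolerance in noticeably fewer sweeps than the simple iteration.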
Example 3 Use the Seidel iteration method to find the solution to the system of equations in Example 2 .
Solution. Select the initial values x_1^{(0)}, x_2^{(0)}, x_3^{(0)} and substitute them into the first equation to compute x_1^{(1)}; substitute x_1^{(1)}, x_2^{(0)}, x_3^{(0)} into the second equation to compute x_2^{(1)}; then substitute x_1^{(1)}, x_2^{(1)}, x_3^{(0)} into the third equation to compute x_3^{(1)}.
Continuing to iterate by the Seidel method, the iterates are found to settle down, so it suffices to consider the corresponding fixed-point equations; solving these equations gives the solution of the system:
x_1 = 1, x_2 = 2, x_3 = 1
[ Convergence Conditions and Error Estimation of Iterative Methods ] For the iterative form x_i = Σ_j b_{ij} x_j + g_i (exact solution x_i*), the standard conditions and error bounds are:

Simple iteration method — any one of the following conditions guarantees convergence, with the corresponding bound on the error of the kth approximate solution:
$$
\mu = \max_i \sum_j |b_{ij}| < 1: \qquad \max_i |x_i^{(k)} - x_i^*| \le \frac{\mu^k}{1-\mu}\, \max_i |x_i^{(1)} - x_i^{(0)}|
$$
$$
\mu = \max_j \sum_i |b_{ij}| < 1: \qquad \sum_i |x_i^{(k)} - x_i^*| \le \frac{\mu^k}{1-\mu}\, \sum_i |x_i^{(1)} - x_i^{(0)}|
$$
$$
\mu = \Big(\sum_{i,j} b_{ij}^2\Big)^{1/2} < 1: \qquad \Big(\sum_i (x_i^{(k)} - x_i^*)^2\Big)^{1/2} \le \frac{\mu^k}{1-\mu}\, \Big(\sum_i (x_i^{(1)} - x_i^{(0)})^2\Big)^{1/2}
$$

Seidel iteration method — converges if max_i Σ_j |b_{ij}| < 1, or if the coefficient matrix A is symmetric positive definite; in the first case
$$
\max_i |x_i^{(k)} - x_i^*| \le \frac{\mu^k}{1-\mu}\, \max_i |x_i^{(1)} - x_i^{(0)}|, \qquad \text{where } \mu = \max_i \frac{\sum_{j \ge i} |b_{ij}|}{1 - \sum_{j < i} |b_{ij}|}
$$
[ Relaxation iteration method ] Change the iteration formula of the simple iteration method to
$$
x_i^{(k)} = x_i^{(k-1)} + \omega \left( \sum_{j=1}^{n} b_{ij} x_j^{(k-1)} + g_i - x_i^{(k-1)} \right) \qquad (i = 1, 2, \ldots, n)
$$
The other steps are the same as in the simple iteration method. In the formula above, ω is a constant called the relaxation factor. A proper choice of ω can improve the convergence speed; usually ω is taken between 1.5 and 2 (for ω ∈ (1, 2) the method is called over-relaxation iteration; for ω ∈ (0, 1), under-relaxation iteration).
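The text above relaxes the simple iteration; in practice the relaxation factor is most often combined with the Seidel sweep, giving the successive over-relaxation (SOR) method. A sketch of that common variant (the function name, ω value, and stopping constants are illustrative):

```python
# SOR: a Seidel sweep in which each update is relaxed by the factor w,
# new = old + w * (seidel_value - old).  w = 1 recovers the Seidel method.
def sor(A, b, w=1.1, eps=1e-10, max_iter=2000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]      # plain Seidel value
            new = x[i] + w * (gs - x[i])   # relaxed update
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < eps:
            break
    return x
```

For a symmetric positive definite matrix, SOR converges for any ω ∈ (0, 2); the test below uses such a matrix.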
[ Conjugate Gradient Method ] The linear system
A x = b
It can be solved by the following steps:
(1) First select an appropriate approximate solution as the initial value
$$
x^{(0)} = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})^t
$$
(2) Calculate the initial residual vector
$$
r^{(0)} = b - A x^{(0)}
$$
and the vector
$$
p^{(0)} = A^t r^{(0)}
$$
where A^t is the transpose matrix of A.
(3) For i = 0, 1, 2, ..., N − 1, iterate in turn according to the formulas
$$
\alpha_i = \frac{(A^t r^{(i)},\, A^t r^{(i)})}{(A p^{(i)},\, A p^{(i)})}
$$
$$
x^{(i+1)} = x^{(i)} + \alpha_i p^{(i)}
$$
$$
r^{(i+1)} = r^{(i)} - \alpha_i A p^{(i)}
$$
$$
\beta_{i+1} = \frac{(A^t r^{(i+1)},\, A^t r^{(i+1)})}{(A^t r^{(i)},\, A^t r^{(i)})}
$$
$$
p^{(i+1)} = A^t r^{(i+1)} + \beta_{i+1} p^{(i)}
$$
where (a, b) denotes the inner product of the vectors a and b (see Chapter 8).
This process can be stopped as soon as r^{(N)} is small enough.
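The scheme above applies conjugate gradients through A^t, i.e. to the normal equations A^t A x = A^t b (this variant is often called CGNR). A pure-Python sketch; the function names and test values are illustrative:

```python
# CGNR: conjugate gradients on the normal equations, following the
# formulas for alpha_i, beta_{i+1}, x, r and p given above.
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def cgnr(A, b, n_iter, tol=1e-12):
    n = len(A[0])
    At = [list(col) for col in zip(*A)]
    x = [0.0] * n                                      # x(0): initial guess
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # r(0) = b - A x(0)
    z = matvec(At, r)                                  # z(i) = A^t r(i)
    p = z[:]                                           # p(0) = A^t r(0)
    zz = sum(v * v for v in z)
    for _ in range(n_iter):
        Ap = matvec(A, p)
        alpha = zz / sum(v * v for v in Ap)            # alpha_i
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * v for ri, v in zip(r, Ap)]
        if sum(v * v for v in r) < tol:                # stop when r is small
            break
        z = matvec(At, r)
        zz_new = sum(v * v for v in z)
        beta = zz_new / zz                             # beta_{i+1}
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        zz = zz_new
    return x
```

In exact arithmetic the iteration terminates in at most n steps; in floating point a few extra iterations may be allowed.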
[ Chasing method for real tridiagonal linear systems ] A real tridiagonal linear system
$$
\begin{pmatrix} b_1 & c_1 & & & \\ a_2 & b_2 & c_2 & & \\ & \ddots & \ddots & \ddots & \\ & & a_{n-1} & b_{n-1} & c_{n-1} \\ & & & a_n & b_n \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
= \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_{n-1} \\ d_n \end{pmatrix}
$$
It can be solved by the following steps:
Calculate first
$$
u_1 = \frac{c_1}{b_1}, \qquad y_1 = \frac{d_1}{b_1}
$$
Then for k = 2, 3, ..., n, iterate in turn according to the formulas
$$
u_k = \frac{c_k}{b_k - a_k u_{k-1}} \quad (k \le n-1), \qquad y_k = \frac{d_k - a_k y_{k-1}}{b_k - a_k u_{k-1}}
$$
Finally, the solution of the linear system is obtained from
$$
x_n = y_n, \qquad x_k = y_k - u_k x_{k+1} \qquad (k = n-1, n-2, \ldots, 1)
$$
Example 4 Solving a system of equations by the chasing method
_{}
Solution. Calculating in turn according to the formulas above:
_{}, _{}
_{},
_{}
_{}
[ Square-root method (Cholesky method) for linear systems with a positive definite matrix ] Let A be a positive definite matrix; then the linear system
A x = b
can be solved by the following steps:
(1) Calculate l_{ij} (the decomposition A = L L^t, where L = (l_{ij}) is a real nonsingular lower triangular matrix):
$$
l_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} l_{ik}^2 \right)^{1/2} \qquad (i = 1, 2, \ldots, n)
$$
$$
l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk}}{l_{jj}} \qquad (j < i)
$$
where n is the order of the matrix A.
(2) Calculate y_i (solve the system L y = b):
$$
y_i = \frac{b_i - \sum_{k=1}^{i-1} l_{ik} y_k}{l_{ii}} \qquad (i = 1, 2, \ldots, n)
$$
(3) Calculate x_i (solve the system L^t x = y):
$$
x_i = \frac{y_i - \sum_{k=i+1}^{n} l_{ki} x_k}{l_{ii}} \qquad (i = n, n-1, \ldots, 1)
$$
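The three steps can be sketched as follows; the function name and the positive definite test matrix are illustrative:

```python
# Square-root (Cholesky) method: factor A = L L^t, then forward-solve
# L y = b and back-solve L^t x = y.
import math

def cholesky_solve(A, b):
    n = len(b)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                      # step (1): compute l_ij
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    y = [0.0] * n
    for i in range(n):                      # step (2): solve L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # step (3): solve L^t x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

Positive definiteness guarantees that every quantity under the square root is positive, so the factorization never needs pivoting.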
[ Method for linear systems with a positive definite band matrix ] Let A = (a_{ij}) be a positive definite band matrix satisfying
a_{ij} = 0 for |i − j| > m (m is a positive integer)
system of linear equations
A x=b
It can be solved by the following steps:
(1) Calculate l_{ij}. To save storage and make full use of the symmetry and band structure of the matrix, only the diagonal and the in-band elements below the diagonal need be stored. To this end the subscripts of a_{ij} are changed, so that a_{ij} is stored as the element in row i, column m − i + j of a compact array.
For example, when n = 4, m = 2 , the storage format of the symmetric band matrix is:
_{}
Then calculate l _{ij} according to the following formula :
When i ≤ m ,
_{}
_{}
When i > m ,
_{}
_{}
(2) Calculate y_i. Store l_{ij} likewise as the element in row i, column m − i + j of a compact array, and calculate y_i according to the following formulas:
_{}
_{}
( 3 ) Calculate x _{i} .
_{}
_{}
This method shows its advantage only when m is much smaller than n; otherwise other methods should be used. The formulas take advantage of the symmetry and band structure of the matrix, which makes storage and computation on a computer convenient.
2. The general case of systems of linear equations
A system of linear equations with n unknowns and m equations takes the form
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\qquad\cdots\cdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m
\end{cases}
\qquad (1)
$$
Write
$$
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
$$
$$
x = (x_1, x_2, \ldots, x_n)^t
$$
$$
b = (b_1, b_2, \ldots, b_m)^t
$$
Then the matrix form of the given system of linear equations is
A x=b ( 1 )
The corresponding homogeneous equations are
A x=0 ( 2 )
A is called the coefficient matrix of the system (1), and
$$
C = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}
$$
is called the augmented matrix of the system (1).
[ Criteria for the Existence of Solutions of Linear Systems ] Let r(A) and r(C) denote the ranks of the coefficient matrix A and the augmented matrix C respectively. Then:
1° When m = n and r(A) = r(C) = n (equivalently, |A| ≠ 0), the system (1) has a unique solution;
2° When r(A) < r(C), the system (1) has no solution; in this case (1) is called a contradictory (inconsistent) system;
3° When r(A) = r(C) = r < n (for m = n, equivalently |A| = 0), the system (1) has infinitely many solutions;
4° The necessary and sufficient condition for the homogeneous system (2) to have nonzero solutions is r(A) < n (for m = n, equivalently |A| = 0).
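The rank criteria above can be checked mechanically by row-reducing A and C = (A | b); a pure-Python sketch with illustrative function names:

```python
# Classify a linear system A x = b by comparing r(A) with r(C), C = (A | b).
def rank(M, eps=1e-10):
    """Rank via Gaussian elimination (rows are copied, M is not modified)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot for column c among the remaining rows
        p = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [M[i][j] - f * M[r][j] for j in range(cols)]
        r += 1
    return r

def classify(A, b):
    C = [row + [bi] for row, bi in zip(A, b)]
    rA, rC = rank(A), rank(C)
    n = len(A[0])
    if rA < rC:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"
```

The three return values correspond to cases 2°, 1° and 3° of the theorem.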
[ Structure of Solutions of Linear Equations ]
1° When r(A) = r < n, any solution x = (x_1, x_2, ..., x_n)^t of the homogeneous system A x = 0 can be represented as a linear combination of n − r linearly independent solutions x^{(1)}, x^{(2)}, ..., x^{(n−r)}.
These n − r linearly independent solutions are called a fundamental system of solutions of the equations; it is not unique.
2° Let x^{(0)} be a particular solution of the linear system A x = b; then any solution x = (x_1, x_2, ..., x_n)^t can be expressed as
x = x^{(0)} + h
where h is a solution of the corresponding homogeneous system A x = 0.
3. Integer solutions of linear homogeneous equations with integer coefficients
Suppose (a_{ij})_{m×n} is an m × n integer matrix with m < n. Let
A _{i} =_{}
Then the system of linear homogeneous equations with integer coefficients
$$
\sum_{j=1}^{n} a_{ij} x_j = 0 \qquad (i = 1, 2, \ldots, m)
$$
has nonzero integer solutions x_1, x_2, ..., x_n satisfying
_{}
where [ ] represents the integer part .
4. Solutions of a class of linear inequalities (Cramer's rule)
Suppose
A = (a_{ij})
is an n × n nonsingular matrix; then the system of linear inequalities
_{}
b _{i} ≥ 0 ( i =1,2,..., n )
The solution is
_{}
where A_{ij} is the algebraic cofactor of the element in the ith row and jth column of the matrix A.