Matrices and properties
Importance of linear equations
Mathematical models of many real-world problems are either linear or can be approximated reasonably well using linear relationships. Analysis of linear relationships between variables is generally easier than that of non-linear relationships.
A linear equation involving two variables x and y has the standard form $ax + by = c$, where a, b and c are real numbers and a and b are not both zero.
The equation becomes non-linear if any variable has an exponent other than one or the variables appear as a product, for example:
$4x + 5y = 15$ (linear)
$4x - xy + 5y = 15$ (non-linear)
$x^2 + 5y^2 = 15$ (non-linear)
A linear equation in more than two variables takes the form $a_1x_1 + a_2x_2 + a_3x_3 + \dots + a_nx_n = b$. A set of such equations is known as a system of simultaneous equations; in matrix form it can be represented as $AX = B$. For example:
$3x_1 + 2x_2 + 4x_3 = 14$
$x_1 - 2x_2 = -7$
$-x_1 + 3x_2 + 2x_3 = 2$
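As a quick illustration of the matrix form (a minimal sketch, assuming NumPy; the solver call is standard library usage, not part of the original text), the example system can be solved directly:

```python
import numpy as np

# Coefficient matrix A and right-hand side B of the example system
A = np.array([[ 3.0,  2.0, 4.0],
              [ 1.0, -2.0, 0.0],
              [-1.0,  3.0, 2.0]])
B = np.array([14.0, -7.0, 2.0])

X = np.linalg.solve(A, B)   # solves A X = B
print(X)                    # [ 8.    7.5  -6.25]
```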
Existence of solution
In solving a system of equations we find values of the variables that satisfy all equations in the system simultaneously. There are four possibilities:
1. System with a unique solution
Here the lines (equations) intersect in one and only one point.
2. System with no solution
Here the lines (equations) never intersect; they are parallel.
3. System with infinitely many solutions
Here the two equations (lines) overlap, so that there are infinitely many solutions.
4. Ill-conditioned system
There may be a situation where the system has a solution but the coefficient matrix is very close to being singular, i.e. the equations have a solution but it is very difficult to identify the exact point at which the lines intersect. If there is any slight change in a value in the equations, we will see a huge change in the solution. This type of system is called ill-conditioned, and we should be careful in solving such systems. Example:
$1.01x + 0.99y = 2$
$0.99x + 1.01y = 2$
On solving these equations we get the solution x = 1 and y = 1; however, if we make a small change in B, i.e.
$1.01x + 0.99y = 2.02$
$0.99x + 1.01y = 1.98$
then on solving these equations we get x = 2 and y = 0. A slight change in the data results in a huge change in the solution.
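A sketch of how this sensitivity can be measured (assuming NumPy): the condition number of the coefficient matrix quantifies how strongly relative changes in B can be amplified in the solution.

```python
import numpy as np

A = np.array([[1.01, 0.99],
              [0.99, 1.01]])

print(np.linalg.cond(A))                           # ~100: errors in B may be amplified ~100x
print(np.linalg.solve(A, np.array([2.0, 2.0])))    # [1. 1.]
print(np.linalg.solve(A, np.array([2.02, 1.98])))  # [2. 0.]
```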
Methods of solution
Elimination method
The elimination method is a method of solving simultaneous linear equations. It eliminates a term containing one of the unknowns from all but one equation; each such step reduces the order of the system by one, and repeated elimination finally leads to one equation in one unknown.
Example: solve the following equations using the elimination method.
$4x_1 - 2x_2 + x_3 = 15$ ……(1)
$-3x_1 - x_2 + 4x_3 = 8$ ……(2)
$x_1 - x_2 + 3x_3 = 13$ ……(3)
Multiply (1) by 3 and (2) by 4 and add, to eliminate $x_1$ from (2); multiply (1) by $-1$ and (3) by 4 and add, to eliminate $x_1$ from (3):
$4x_1 - 2x_2 + x_3 = 15$
$-10x_2 + 19x_3 = 77$
$-2x_2 + 11x_3 = 37$
Now, to eliminate $x_2$ from the third equation, multiply the second row by 2 and the third row by $-10$ and add:
$4x_1 - 2x_2 + x_3 = 15$
$-10x_2 + 19x_3 = 77$
$-72x_3 = -216$
Now we have a triangular system and the solution is readily obtained by back-substitution: $x_3 = 3$, then $x_2 = (77 - 19 \cdot 3)/(-10) = -2$, and finally $x_1 = (15 + 2(-2) - 3)/4 = 2$.
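Back-substitution on a triangular system is mechanical enough to automate; here is a minimal sketch (the function name is my own), applied to the triangular system above:

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for an upper-triangular U."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # last equation first
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[4.0,  -2.0,   1.0],
              [0.0, -10.0,  19.0],
              [0.0,   0.0, -72.0]])
b = np.array([15.0, 77.0, -216.0])
print(back_substitute(U, b))                 # [ 2. -2.  3.]
```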
Gauss Elimination Method
The procedure in the above example may not be satisfactory for large systems because the transformed coefficients can become very large as we convert to a triangular system. So we use another method, the Gaussian elimination method, which avoids this by subtracting $a_{i1}/a_{11}$ times the first equation from the $i$-th equation to make the transformed entries in the first column equal to zero, and proceeding similarly for later columns. However, we must always guard against division by zero. A useful strategy to avoid it is to rearrange the equations so as to put a coefficient of large magnitude on the diagonal at each step; this is called pivoting. Complete pivoting requires both row and column interchanges, but this is much more difficult and not frequently done. Changing only rows is called partial pivoting, which places a coefficient of larger magnitude on the diagonal by row interchanges only. This guarantees non-zero divisors if there is a solution to the set of equations, and has the added advantage of giving improved arithmetic precision. The diagonal elements that result are called pivot elements.
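A minimal sketch of the method just described (naming mine, not from the text): at each step the largest-magnitude coefficient in the current column is swapped onto the diagonal before elimination.

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest-magnitude entry of column k to the diagonal
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]                # row interchange
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):            # subtract (a_ik / a_kk) times row k
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back-substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```

On the example that follows, the pivoting step would select the third equation (coefficient 11.2) for the first pivot.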
Example (without pivoting):
$0.143x_1 + 0.357x_2 + 2.01x_3 = -5.173$
$-1.31x_1 + 0.911x_2 + 1.99x_3 = -5.458$
$11.2x_1 - 4.30x_2 - 0.605x_3 = 4.415$
The augmented matrix is
$\begin{bmatrix} 0.143 & 0.357 & 2.01 & -5.173 \\ -1.31 & 0.911 & 1.99 & -5.458 \\ 11.2 & -4.30 & -0.605 & 4.415 \end{bmatrix}$
Gauss Jordan Method
The Gauss-Jordan method is another popular method for solving a system of linear equations. In this method the elements above the diagonal are made zero at the same time that zeros are created below the diagonal; usually the diagonal elements are made ones as the reduction is performed. This transforms the coefficient matrix into the identity matrix. When this has been accomplished, the right-hand-side column has been transformed into the solution vector. Pivoting is normally employed to preserve arithmetic accuracy.
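A minimal Gauss-Jordan sketch (naming mine), using the pivoting just described and the system from the examples below as data: the augmented matrix [A | b] is reduced until the coefficient part becomes the identity, leaving the solution in the last column.

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A | b] to [I | x] with partial pivoting."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] = M[k] / M[k, k]                # make the diagonal entry 1
        for i in range(n):                   # zero out column k above and below
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[2.0,  4.0, -6.0],
              [1.0,  3.0,  1.0],
              [2.0, -4.0, -2.0]])
b = np.array([-8.0, 10.0, -12.0])
print(gauss_jordan(A, b))                    # [1. 2. 3.]
```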
Example: solution using the Gauss-Jordan method
$2x_1 + 4x_2 - 6x_3 = -8$
$x_1 + 3x_2 + x_3 = 10$
$2x_1 - 4x_2 - 2x_3 = -12$
Example: solution using the Gauss-Jordan method (with pivoting)
$2x_1 + 4x_2 - 6x_3 = -8$
$x_1 + 3x_2 + x_3 = 10$
$2x_1 - 4x_2 - 2x_3 = -12$
The augmented matrix is
$\begin{bmatrix} 2 & 4 & -6 & -8 \\ 1 & 3 & 1 & 10 \\ 2 & -4 & -2 & -12 \end{bmatrix}$
The inverse of a matrix
Division of matrices is not defined, but the equivalent is obtained from the inverse of a matrix. If the product of two square matrices $AB$ equals the identity matrix $I$, then B is said to be the inverse of A (and A the inverse of B). The usual notation for the inverse of A is $A^{-1}$; that is, if $AB = I$, then $A = B^{-1}$ and $B = A^{-1}$.
Example: given a matrix A, find the inverse of A using the Gauss-Jordan method.
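The example's matrix is not reproduced above, so the sketch below uses a hypothetical 2x2 matrix; the method is the same Gauss-Jordan reduction, applied to [A | I] so that it ends as [I | A^{-1}].

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Reduce [A | I] to [I | A^-1] with partial pivoting."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] = M[k] / M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                   # hypothetical example matrix
print(gauss_jordan_inverse(A))               # [[ 3. -1.]  [-5.  2.]]
```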
Method of factorization
Consider the following system of equations
$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$
$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$
$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$
These equations can be written in matrix form as
$AX = B$
In this method we use the fact that a square matrix A can be factorized into the form LU, where L is a lower triangular matrix and U is an upper triangular matrix, such that $A = LU$. The system becomes
$LUX = B$
Let us put $UX = Z$; then $LZ = B$.
Now we can solve the system $AX = B$ in two stages (a code sketch follows the list):
- Solve the equation $LZ = B$ for Z by forward substitution.
- Solve the equation $UX = Z$ for X using Z by backward substitution.
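A minimal sketch of the two-stage solve, assuming the factors L and U are already known (function name mine):

```python
import numpy as np

def solve_lu(L, U, B):
    """Solve A X = B given A = L U."""
    n = len(B)
    Z = np.zeros(n)
    for i in range(n):                       # stage 1: L Z = B, forward substitution
        Z[i] = (B[i] - L[i, :i] @ Z[:i]) / L[i, i]
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):           # stage 2: U X = Z, backward substitution
        X[i] = (Z[i] - U[i, i+1:] @ X[i+1:]) / U[i, i]
    return X
```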
The elements of L and U can be determined by comparing the elements of the product LU with those of A. The decomposition with L having unit diagonal values is called the Doolittle LU decomposition, while the one with U having unit diagonal elements is called the Crout LU decomposition.
Doolittle LU decomposition
Writing out the product LU with $l_{ii} = 1$ and equating the corresponding coefficients, we get the values of l and u:
$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}$ for $j \ge i$, and $l_{ij} = \left(a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj}\right)/u_{jj}$ for $i > j$
Example: find L and U using the Doolittle algorithm and hence solve the system
$2x - 3y + 10z = 3$
$-x + 4y + 2z = 20$
$5x + 2y + z = -12$
The given system is $AX = B$, where
$A = \begin{bmatrix} 2 & -3 & 10 \\ -1 & 4 & 2 \\ 5 & 2 & 1 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, $B = \begin{bmatrix} 3 \\ 20 \\ -12 \end{bmatrix}$
Now, comparing both sides we get
$u_{11} = 2$, $u_{12} = -3$, $u_{13} = 10$
$l_{21}u_{11} = -1 \Rightarrow l_{21} = -1/2$
$l_{21}u_{12} + u_{22} = 4 \Rightarrow u_{22} = 5/2$
$l_{21}u_{13} + u_{23} = 2 \Rightarrow u_{23} = 7$
$l_{31}u_{11} = 5 \Rightarrow l_{31} = 5/2$
$l_{31}u_{12} + l_{32}u_{22} = 2 \Rightarrow l_{32} = 19/5$
$l_{31}u_{13} + l_{32}u_{23} + u_{33} = 1 \Rightarrow u_{33} = -253/5$
Forward substitution on $LZ = B$ gives $Z = (3,\ 43/2,\ -506/5)^T$, and back substitution on $UX = Z$ gives $z = 2$ and $y = 3$; finally the first equation $2x - 3y + 10z = 3$ yields
$x = -4$
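A minimal Doolittle sketch (naming mine; no pivoting, so non-zero pivots are assumed) that reproduces the factors found above:

```python
import numpy as np

def doolittle(A):
    """Factor A = L U with unit diagonal on L (no pivoting)."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):            # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[ 2.0, -3.0, 10.0],
              [-1.0,  4.0,  2.0],
              [ 5.0,  2.0,  1.0]])
L, U = doolittle(A)
print(L)   # l21 = -0.5, l31 = 2.5, l32 = 3.8    (= 19/5)
print(U)   # u22 = 2.5, u23 = 7.0, u33 = -50.6   (= -253/5)
```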
Crout's method
Writing out the product LU with $u_{ii} = 1$ and equating the corresponding coefficients, we get the values of l and u:
$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj}$ for $i \ge j$, and $u_{ij} = \left(a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}\right)/l_{ii}$ for $j > i$
Example: solve the following system by factorization with Crout's algorithm.
$2x - 3y + 10z = 3$
$-x + 4y + 2z = 20$
$5x + 2y + z = -12$
The given system is $AX = B$, with the same A, X and B as in the previous example. Comparing coefficients as before gives $l_{11} = 2$, $l_{21} = -1$, $l_{31} = 5$, $u_{12} = -3/2$, $u_{13} = 5$, $l_{22} = 5/2$, $u_{23} = 14/5$, $l_{32} = 19/2$, and finally
$l_{31}u_{13} + l_{32}u_{23} + l_{33} = 1 \Rightarrow l_{33} = -253/5$
So we have
$L = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 5/2 & 0 \\ 5 & 19/2 & -253/5 \end{bmatrix}$, $U = \begin{bmatrix} 1 & -3/2 & 5 \\ 0 & 1 & 14/5 \\ 0 & 0 & 1 \end{bmatrix}$
Cholesky's method
In case A is symmetric, the LU decomposition can be modified so that the upper factor is the transpose of the lower one (and vice versa), i.e.
$A = LL^T = U^TU$
Just as with the other methods, we then solve the two triangular systems as before.
Symmetric matrix
A square matrix $A = [a_{ij}]$ is called symmetric if $a_{ij} = a_{ji}$ for all i and j.
Example: factorize the matrix using the Cholesky algorithm.
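A minimal Cholesky sketch, assuming A is symmetric positive definite; since the example's matrix is not reproduced above, a hypothetical one is used here:

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive definite A as L L^T."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[4.0, 2.0, 2.0],               # hypothetical symmetric positive definite matrix
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky(A)
print(L)                                     # rows: [2 0 0], [1 2 0], [1 1 2]
print(np.allclose(L @ L.T, A))               # True
```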
Iterative methods
Gauss elimination and its derivatives are called direct methods. An entirely different way to solve many systems is through iteration: we start with an initial estimate of the solution vector and proceed to refine this estimate.
When the system of equations can be ordered so that each diagonal entry of the coefficient matrix is larger in magnitude than the sum of the magnitudes of the other coefficients in that row, the system is called diagonally dominant, and the iteration will converge for any starting values. Formally, an $n \times n$ matrix A is diagonally dominant if and only if, for each $i = 1, 2, \dots, n$,
$|a_{ii}| > \sum_{j \ne i} |a_{ij}|$
The iterative methods below depend on the equations being arranged in this manner.
Let us consider a system of n equations in n unknowns
$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$
$\vdots$
$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$
We rewrite the original system, solving the i-th equation for $x_i$:
$x_i = \frac{1}{a_{ii}}\left(b_i - \sum_{j \ne i} a_{ij}x_j\right), \quad i = 1, 2, \dots, n$
Now we can compute $x_1, x_2, \dots, x_n$ by using initial guesses for these values. The new values are again used to compute the next set of x values, and the process continues until we obtain the desired level of accuracy in the x values.
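A minimal Gauss Jacobi sketch (naming mine): every new value in an iteration is computed from the previous iteration's values only.

```python
import numpy as np

def jacobi(A, b, iterations=12):
    """Gauss Jacobi iteration; assumes A is diagonally dominant."""
    n = len(b)
    x = np.zeros(n)                          # initial guess: all zeros
    for _ in range(iterations):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]    # off-diagonal terms with old values
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new                            # replace all components at once
    return x
```

Running this on the reordered system of the example below reproduces the iteration table shown there.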
Example
Solve the following system using the Gauss Jacobi iteration method.
$6x_1 - 2x_2 + x_3 = 11$
$x_1 + 2x_2 - 5x_3 = -1$
$-2x_1 + 7x_2 + 2x_3 = 5$
First we reorder the equations so that the coefficient matrix is diagonally dominant:
$6x_1 - 2x_2 + x_3 = 11$
$-2x_1 + 7x_2 + 2x_3 = 5$
$x_1 + 2x_2 - 5x_3 = -1$
Now, solving each equation for its diagonal unknown, we can write
$x_1 = (11 + 2x_2 - x_3)/6$
$x_2 = (5 + 2x_1 - 2x_3)/7$
$x_3 = (1 + x_1 + 2x_2)/5$
We begin with some initial approximation to the values of the variables; let us take $x_1 = 0$, $x_2 = 0$, $x_3 = 0$. The new approximations using the above formulas are then as follows:
Iteration | x1       | x2       | x3
1         | 1.833333 | 0.714286 | 0.200000
2         | 2.038095 | 1.180952 | 0.852381
3         | 2.084921 | 1.053061 | 1.080000
4         | 2.004354 | 1.001406 | 1.038209
5         | 1.994100 | 0.990327 | 1.001433
6         | 1.996537 | 0.997905 | 0.994951
7         | 2.000143 | 1.000453 | 0.998469
8         | 2.000406 | 1.000478 | 1.000210
9         | 2.000124 | 1.000056 | 1.000273
10        | 1.999973 | 0.999958 | 1.000047
11        | 1.999978 | 0.999979 | 0.999978
12        | 1.999997 | 1.000000 | 0.999987
After 12 iterations the result is $x_1 = 1.999997$, $x_2 = 1.000000$, $x_3 = 0.999987$; the iterates converge to the exact solution $x_1 = 2$, $x_2 = 1$, $x_3 = 1$.
Gauss Seidel Iteration method
This is a modification of the Gauss Jacobi method. As before,
Let us consider a system of n equations in n unknowns
$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$
$\vdots$
$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$
We write the original system as before, solving the i-th equation for $x_i$, but the updated values are used as soon as they are available:
$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}x_j^{(k+1)} - \sum_{j>i} a_{ij}x_j^{(k)}\right)$
Now we can compute $x_1, x_2, \dots, x_n$ by using initial guesses for these values. Here we use the updated values of $x_1, x_2, \dots, x_n$ in calculating the new x values in each iteration, until we obtain the desired level of accuracy. This method converges more rapidly than the Gauss Jacobi method; the rate of convergence of the Gauss-Seidel method is roughly twice that of Gauss Jacobi.
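A minimal Gauss-Seidel sketch (naming mine): the only change from the Jacobi sketch is that each component is overwritten in place, so later components immediately see the updated values.

```python
import numpy as np

def gauss_seidel(A, b, iterations=7):
    """Gauss-Seidel iteration; assumes A is diagonally dominant."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]    # uses updated x[j] for j < i
            x[i] = (b[i] - s) / A[i, i]      # overwrite in place
    return x
```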
Example
Solve the following system using the Gauss-Seidel iteration method.
$8x_1 - 3x_2 + 2x_3 = 20$
$6x_1 + 3x_2 + 12x_3 = 35$
$4x_1 + 11x_2 - x_3 = 33$
First we reorder the equations so that the coefficient matrix is diagonally dominant:
$8x_1 - 3x_2 + 2x_3 = 20$
$4x_1 + 11x_2 - x_3 = 33$
$6x_1 + 3x_2 + 12x_3 = 35$
We begin with some initial approximation to the values of the variables; let us take $x_2 = 0$, $x_3 = 0$ (in Gauss-Seidel, $x_1$ is computed first from these). Computing successive approximations with the above formulas, the 6th and 7th approximations agree to 4 decimal places, so we can take the result as
$x_1 = 3.0168$, $x_2 = 1.9859$, $x_3 = 0.9118$
Power method
The power method is an iterative method used for determining the dominant eigenvalue of a matrix. It is implemented using an initial starting vector X; the starting vector can be arbitrary if no suitable approximation is available. The power method is implemented as follows:
$Y = AX$ ……(a)
$X = Y/k$ ……(b)
The new value of X obtained in (b) is then used in equation (a) to compute a new value of Y, and the process is repeated until the desired level of accuracy is obtained. The scaling factor k is the element of Y with the largest magnitude; as the iteration converges, k approaches the dominant eigenvalue and X the corresponding eigenvector.
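A minimal power-method sketch (naming mine), iterating (a) and (b) exactly as above; the matrix here is a hypothetical stand-in, since the matrix of the worked example is not reproduced in the text.

```python
import numpy as np

def power_method(A, iterations=50):
    """Return the dominant eigenvalue k and a scaled eigenvector X."""
    X = np.ones(A.shape[0])                  # arbitrary starting vector
    k = 0.0
    for _ in range(iterations):
        Y = A @ X                            # (a)  Y = A X
        k = Y[np.argmax(np.abs(Y))]          # scaling factor: element of largest magnitude
        X = Y / k                            # (b)  X = Y / k
    return k, X

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # hypothetical matrix; eigenvalues 3 and 1
print(power_method(A))                       # (3.0, array([1., 1.]))
```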
Example: find the largest eigenvalue $\lambda$ and the corresponding eigenvector v of the matrix, using the power method.