Solution of Linear Algebraic Equations: Matrices and their properties, Elimination and Gauss Jordan methods, Method of factorization, power, iterative
OnlineEngineeringNotes, Fri, 04 Jun 2021

Matrices and properties

Importance of linear equations

Mathematical models of many real-world problems are either linear or can be
approximated reasonably well using linear relationships. Analysis of linear
relationships of variables is generally easier than that of non-linear relationships.
A linear equation involving two variables x and y has the standard form ax + by = c,
where a, b and c are real numbers and a and b cannot both be zero.
The equation becomes non-linear if any of the variables has an exponent other than one, for example:

4x + 5y = 15        (linear)

4x - xy + 5y = 15   (non-linear)

x^2 + 5y^2 = 15     (non-linear)

A linear equation can also occur in more than two variables, as a1x1 + a2x2 + a3x3 + ... + anxn = b. A set of such equations is known as a system of simultaneous equations; in matrix form it can be represented as

AX = B

For example:

3x1 + 2x2 + 4x3 = 14

x1 - 2x2 = -7

-x1 + 3x2 + 2x3 = 2

Existence of solution

In solving a system of equations we find values of the variables that satisfy all equations in the system simultaneously. There are four possibilities:

  1. System with a unique solution
     The lines (equations) intersect in one and only one point.

  2. System with no solution
     The lines never intersect; they are parallel.

  3. System with infinitely many solutions
     Two equations (lines) overlap, so there are infinitely many solutions.

  4. Ill-conditioned system
     There may be a situation where the system has a solution but is very close to
     being singular, i.e. the equations have a solution but it is very difficult to
     identify the exact point at which the lines intersect. If there is any slight
     change in the values in the equations, we see a huge change in the solution.
     Such a system is called ill-conditioned, and we should be careful when solving
     these kinds of systems. Example:

1.01x + 0.99y = 2
0.99x + 1.01y = 2

Solving these equations we get x = 1 and y = 1. However, if we make a small change in b, i.e.

1.01x + 0.99y = 2.02
0.99x + 1.01y = 1.98

solving now gives x = 2 and y = 0.
So a slight change in the data results in a huge change in the solution.
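This sensitivity can be checked numerically. A minimal sketch (assuming NumPy is available; the code is an illustration, not part of the original notes):

```python
# Demonstrate ill-conditioning: a ~1% change in b moves the solution
# from (1, 1) to (2, 0).
import numpy as np

A = np.array([[1.01, 0.99],
              [0.99, 1.01]])

x1 = np.linalg.solve(A, np.array([2.0, 2.0]))    # b = [2, 2]
x2 = np.linalg.solve(A, np.array([2.02, 1.98]))  # b perturbed slightly

print(x1)                 # close to [1, 1]
print(x2)                 # close to [2, 0]
print(np.linalg.cond(A))  # condition number ~100 signals the trouble
```

The large condition number of A is exactly the warning sign described above: small relative changes in the data can be amplified by roughly that factor in the solution.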

Methods of solutions

Elimination method

The elimination method is a method for solving simultaneous linear equations. It involves
eliminating a term containing one of the unknowns in all but one equation; one such step
reduces the order of the system by one, and repeated elimination finally leads to one
equation in one unknown.
Example: solve the following equations using the elimination method.

4π‘₯1 βˆ’ 2π‘₯2 + π‘₯3 = 15 … … (1)

βˆ’3π‘₯1 βˆ’ π‘₯2+4π‘₯3 = 8 … … (2)

Β π‘₯1 βˆ’ π‘₯2 + 3π‘₯3 = 13 … … (3)

Here multiply 𝑅1 𝑏𝑦 3 &𝑅2 𝑏𝑦 4 and add to eliminate π‘₯1from 2. Multiply 𝑅1 𝑏𝑦 βˆ’ 1&𝑅3 𝑏𝑦 4 and add to eliminate π‘₯1from 3

4π‘₯1 βˆ’ 2π‘₯2 + π‘₯3 = 15

βˆ’10π‘₯2+19π‘₯3 = 77

βˆ’2π‘₯2 + 11π‘₯3 = 37

Now to eliminate π‘₯2 from third equation multiply second row by 2 and third row by -10 and adding

4π‘₯1 βˆ’ 2π‘₯2 + π‘₯3 = 15

 βˆ’10π‘₯2+19π‘₯3 = 77

βˆ’72π‘₯3 = βˆ’216

We now have a triangular system, and the solution x3 = 3, x2 = -2, x1 = 2 is readily obtained by back-substitution.
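The whole procedure can be sketched in code. A minimal Python illustration (assuming no zero pivots are encountered; not part of the original notes):

```python
# Naive forward elimination followed by back-substitution (no pivoting),
# applied to the worked example above.
def solve_by_elimination(A, b):
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # Forward elimination: zero out entries below each diagonal element.
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the resulting triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = solve_by_elimination([[4, -2, 1], [-3, -1, 4], [1, -1, 3]],
                         [15, 8, 13])
print(x)  # approximately [2.0, -2.0, 3.0]
```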

Gauss Elimination Method

The procedure in the example above may not be satisfactory for large systems, because the transformed coefficients can become very large as we convert to a triangular system. The Gaussian elimination method avoids this by subtracting ai1/a11 times the first equation from the i-th equation, making the transformed entries in the first column equal to zero, and proceeding similarly for the later columns.

However, we must always guard against division by zero. A useful strategy is to rearrange the equations so as to put a coefficient of large magnitude on the diagonal at each step; this is called pivoting. Complete pivoting requires both row and column interchanges, but it is much more difficult and not frequently done. Interchanging only rows is called partial pivoting; it places a coefficient of larger magnitude on the diagonal by row interchange only. This guarantees non-zero divisors if the system has a solution, and has the added advantage of improved arithmetic precision. The diagonal elements that result are called pivot elements.

Example (without a pivoting element):

0.143x1 + 0.357x2 + 2.01x3 = -5.173

-1.31x1 + 0.911x2 + 1.99x3 = -5.458

11.2x1 - 4.30x2 - 0.605x3 = 4.415

The augmented matrix is

[  0.143   0.357   2.01  | -5.173 ]
[ -1.31    0.911   1.99  | -5.458 ]
[ 11.2    -4.30   -0.605 |  4.415 ]
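A minimal Python sketch of Gaussian elimination with partial pivoting, applied to this system (illustrative code, not from the original notes):

```python
# Gaussian elimination with partial pivoting: before each elimination
# step, the row with the largest-magnitude coefficient in the current
# column is swapped onto the diagonal.
def gauss_pivot(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # Partial pivoting: pick the largest |A[i][k]| for i >= k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_pivot([[0.143, 0.357, 2.01],
                 [-1.31, 0.911, 1.99],
                 [11.2, -4.30, -0.605]],
                [-5.173, -5.458, 4.415])
print(x)  # approximately [1.0, 2.0, -3.0]
```

The code recovers x1 = 1, x2 = 2, x3 = -3, which satisfies all three equations of the example.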

Gauss Jordan Method

The Gauss-Jordan method is another popular method for solving a system of linear equations. In
this method the elements above the diagonal are made zero at the same time that zeros are
created below the diagonal; usually the diagonal elements are made ones as the
reduction is performed, which transforms the coefficient matrix into the identity matrix. When this has
been accomplished, the right-hand-side column has been transformed into the solution vector.
Pivoting is normally employed to preserve arithmetic accuracy.

Example: solution using the Gauss-Jordan method.

2x1 + 4x2 - 6x3 = -8

x1 + 3x2 + x3 = 10

2x1 - 4x2 - 2x3 = -12

Example: solution using the Gauss-Jordan method (with pivoting).

2x1 + 4x2 - 6x3 = -8

x1 + 3x2 + x3 = 10

2x1 - 4x2 - 2x3 = -12

The augmented matrix is

[ 2   4  -6 |  -8 ]
[ 1   3   1 |  10 ]
[ 2  -4  -2 | -12 ]
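A minimal Python sketch of the Gauss-Jordan reduction with partial pivoting, applied to this system (illustrative code, not from the original notes):

```python
# Gauss-Jordan method with partial pivoting: each pivot row is scaled to
# make the diagonal entry 1, and the pivot column is cleared both above
# and below the diagonal, leaving the identity on the left.
def gauss_jordan(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]            # make the pivot 1
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]                  # last column = solution

x = gauss_jordan([[2, 4, -6], [1, 3, 1], [2, -4, -2]],
                 [-8, 10, -12])
print(x)  # approximately [1.0, 2.0, 3.0]
```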

The inverse of a matrix

Division of matrices is not defined, but the equivalent operation is obtained from the inverse of a matrix. If the product of two square matrices A*B equals the identity matrix I, then B is said to be the inverse of A (and A the inverse of B). The usual notation for the inverse is A^-1: if AB = I, then A = B^-1 and B = A^-1.

Example: given a matrix A, find the inverse of A using the Gauss-Jordan method.
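The matrix of this example was not preserved in the source, so the sketch below inverts an arbitrary illustrative matrix by the same procedure: augment A with the identity, reduce A to the identity, and read A^-1 from the right half.

```python
# Matrix inverse by Gauss-Jordan: reduce [A | I] to [I | A^-1].
def gj_inverse(A):
    n = len(A)
    # Augment A with the identity matrix.
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]  # right half holds the inverse

Ainv = gj_inverse([[2.0, 1.0], [1.0, 1.0]])
print(Ainv)  # [[1.0, -1.0], [-1.0, 2.0]]
```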

Method of factorization

Consider the following system of equations

π‘Ž11π‘₯1 + π‘Ž12π‘₯2 + π‘Ž13π‘₯3 = 𝑏1

 π‘Ž21π‘₯1 + π‘Ž22π‘₯2 + π‘Ž23π‘₯3 = 𝑏2

π‘Ž31π‘₯1 + π‘Ž32π‘₯2 + π‘Ž33π‘₯3 = 𝑏3

These equations can be written in matrix form as

AX = B

In this method we use the fact that the square matrix A can be factorized into the form LU, where
L is a lower triangular matrix and U is an upper triangular matrix, such that A = LU. Then

LUX = B

Let us assume UX = Z; then LZ = B.
Now we can solve the system AX = B in two stages:

  1. Solve the equation LZ = B for Z by forward substitution.
  2. Solve the equation UX = Z for X using Z by backward substitution.

The elements of L and U can be determined by comparing the elements of the product LU
with those of A. The decomposition with L having unit diagonal values is called the Doolittle LU
decomposition, while the one with U having unit diagonal elements is called the Crout LU
decomposition.

Doolittle LU decomposition

Equating the corresponding coefficients of A = LU, we get the values of l and u.

Example: find L and U using the Doolittle algorithm for the system

2π‘₯ βˆ’ 3𝑦 + 10𝑧 = 3
βˆ’π‘₯ + 4𝑦 + 2𝑧 = 20
5π‘₯ + 2𝑦 + 𝑧 = βˆ’12

The given system is AX = B, where A is the coefficient matrix of the system above.

Comparing both sides of A = LU we get:

u11 = 2

u12 = -3

u13 = 10

l21 u11 = -1, so l21 = -1/2

l21 u12 + u22 = 4, so u22 = 5/2

l21 u13 + u23 = 2, so u23 = 7

l31 u11 = 5, so l31 = 5/2

l31 u12 + l32 u22 = 2, so l32 = 19/5

l31 u13 + l32 u23 + u33 = 1, so u33 = -253/5

Forward substitution (LZ = B) gives z1 = 3, z2 = 43/2, z3 = -506/5. Back substitution (UX = Z) then gives z = 2, y = 3, and finally from 2x - 3y + 10z = 3, x = -4.

Crout's method

Equating the corresponding coefficients, we get the values of l and u.

Example: solve the following system by Crout's factorization algorithm.

2x - 3y + 10z = 3
-x + 4y + 2z = 20
5x + 2y + z = -12

The given system is AX = B, with the same coefficient matrix A as above.

l31 u13 + l32 u23 + l33 = 1, so l33 = -253/5

So we have the complete factors: L with l11 = 2, l21 = -1, l31 = 5, l22 = 5/2, l32 = 19/2, l33 = -253/5, and U with unit diagonal, u12 = -3/2, u13 = 5, u23 = 14/5.
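A corresponding sketch of Crout's algorithm (unit diagonal on U this time); the computation mirrors Doolittle with the roles of L and U exchanged, and l33 = -253/5 matches the derivation above:

```python
# Crout LU decomposition: L general lower triangular, U unit upper triangular.
def crout(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

L, U = crout([[2, -3, 10], [-1, 4, 2], [5, 2, 1]])
print(L[2][2])  # approximately -50.6, i.e. l33 = -253/5 as above
```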

Cholesky's method

When A is symmetric, the LU decomposition can be modified so that the upper factor
is the transpose of the lower one (and vice versa),
i.e. A = L L^T = U^T U.

Just as with the other methods, we then solve the two triangular systems as before.

Symmetric matrix

A square matrix A = [aij] is called symmetric if aij = aji for all i and j.

Example: factorize the matrix using the Cholesky algorithm.
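The example matrix did not survive in the source, so the sketch below runs the Cholesky algorithm on an illustrative symmetric positive-definite matrix chosen here:

```python
# Cholesky factorization A = L L^T for a symmetric positive-definite A.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below the diagonal
    return L

L = cholesky([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
print(L)  # [[2, 0, 0], [1, 2, 0], [1, 1, 2]]
```

Multiplying the printed L by its transpose reproduces the matrix above, confirming A = L L^T.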

Iterative methods

Gauss elimination and its derivatives are called direct methods; an entirely different way to solve
many systems is through iteration. Here we start with an initial estimate of the solution vector
and proceed to refine this estimate.
When the system of equations can be ordered so that each diagonal entry of the coefficient matrix
is larger in magnitude than the sum of the magnitudes of the other coefficients in that row,
the system is called diagonally dominant, and the iteration will converge for any starting values.
Formally, an n x n matrix A is diagonally dominant if and only if for each i = 1, 2, ..., n

|aii| > sum over j != i of |aij|
The iterative method depends on arranging the equations in this manner.
Let us consider a system of n equations in n unknowns:

a11x1 + a12x2 + ... + a1nxn = b1

a21x1 + a22x2 + ... + a2nxn = b2

...

an1x1 + an2x2 + ... + annxn = bn

We rewrite the original system by solving the i-th equation for xi:

xi = (bi - sum over j != i of aij xj) / aii

Now we can compute x1, x2, ..., xn by using an initial guess for these values. The new values are again used to compute the next set of x values, and the process continues until we obtain the desired level of accuracy in the x values.

Example
Solve the equations using the Gauss-Jacobi iteration method.

6x1 - 2x2 + x3 = 11

x1 + 2x2 - 5x3 = -1

-2x1 + 7x2 + 2x3 = 5

First we reorder the equations so that the coefficient matrix is diagonally dominant:

6x1 - 2x2 + x3 = 11

-2x1 + 7x2 + 2x3 = 5

x1 + 2x2 - 5x3 = -1

Solving each equation for its diagonal unknown, we can write

x1 = (11 + 2x2 - x3) / 6

x2 = (5 + 2x1 - 2x3) / 7

x3 = (1 + x1 + 2x2) / 5

We begin with some initial approximation to the values of the variables; let us take
x1 = 0, x2 = 0, x3 = 0. The new approximations using the above formulas are then as follows.

Iteration      x1          x2          x3
    1       1.833333    0.714286    0.200000
    2       2.038095    1.180952    0.852381
    3       2.084921    1.053061    1.080000
    4       2.004354    1.001406    1.038209
    5       1.994100    0.990327    1.001433
    6       1.996537    0.997905    0.994951
    7       2.000143    1.000453    0.998469
    8       2.000406    1.000478    1.000210
    9       2.000124    1.000056    1.000273
   10       1.999973    0.999958    1.000047
   11       1.999978    0.999979    0.999978
   12       1.999997    1.000000    0.999987

After 12 iterations the final result is:
x1 = 1.999997, x2 = 1.000000, x3 = 0.999987
Gauss Seidel Iteration method

This is a modification of the Gauss-Jacobi method. As before, let us consider a system of n equations in n unknowns:

a11x1 + a12x2 + ... + a1nxn = b1

a21x1 + a22x2 + ... + a2nxn = b2

...

an1x1 + an2x2 + ... + annxn = bn

We rewrite the original system as before, solving each equation for its diagonal unknown.

Now we can compute x1, x2, ..., xn using an initial guess, but here the updated values of x1, x2, ..., xn are used immediately when calculating the remaining new values of x within the same iteration, continuing until we obtain the desired level of accuracy. This method converges more rapidly than the Gauss-Jacobi method; the rate of convergence of the Gauss-Seidel method is roughly twice that of Gauss-Jacobi.

Example
Solve the equations using the Gauss-Seidel iteration method.

8x1 - 3x2 + 2x3 = 20

6x1 + 3x2 + 12x3 = 35

4x1 + 11x2 - x3 = 33

First we reorder the equations so that the coefficient matrix is diagonally dominant:

8x1 - 3x2 + 2x3 = 20

4x1 + 11x2 - x3 = 33

6x1 + 3x2 + 12x3 = 35

We begin with some initial approximation to the values of the variables; let us take
x2 = 0, x3 = 0 (x1 is computed first, so it needs no starting value). The new
approximations are then computed iteration by iteration using the reordered equations.

Since the 6th and 7th approximations agree to 4 decimal places, we can take the result as

x1 = 3.0168, x2 = 1.9859, x3 = 0.9118

Power method

The power method is an iterative method for determining the dominant (largest-magnitude) eigenvalue of a matrix and its corresponding eigenvector. It is implemented using an initial starting vector X; the starting vector can be arbitrary if no suitable approximation is available. The power method iterates as follows:

Y = AX ----- (a)

X = Y/k ----- (b)

The new value of X obtained in (b) is then used in equation (a) to compute a new value of Y, and the process is repeated until the desired level of accuracy is obtained. The parameter k, called the scaling factor, is the element of Y with the largest magnitude.

Example: find the largest eigenvalue lambda and the corresponding vector v of the matrix, using the power
method.
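The matrix of this example was not preserved in the source, so the sketch below applies the power method to an illustrative 2x2 matrix whose dominant eigenvalue is 5, with eigenvector proportional to [1, 1]:

```python
# Power method: repeat Y = A X, then X = Y / k, where k is the
# largest-magnitude element of Y; k converges to the dominant eigenvalue.
def power_method(A, x, iterations):
    k = 1.0
    for _ in range(iterations):
        y = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]  # Y = A X
        k = max(y, key=abs)       # scaling factor: largest-magnitude entry
        x = [yi / k for yi in y]  # X = Y / k
    return k, x

lam, v = power_method([[4.0, 1.0], [2.0, 3.0]], [1.0, 0.0], 50)
print(lam, v)  # approaches 5.0 and [1.0, 1.0]
```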

Reference 1: Numerical Methods, Dr. V.N. Vedamurthy & Dr. N. Ch. S. N. Iyengar, Vikas Publishing House.
