The Power Method for Finding the Dominant Eigenvalue

Scientific Computing: The Power Method for Finding the Dominant Eigenvalue

Eigenvalues-Eigenvectors
The eigenvectors of a matrix A are the nonzero vectors x that satisfy Ax = λx, or equivalently (A − λI)x = 0. So λ is an eigenvalue iff det(A − λI) = 0.
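As a quick numerical check of this definition, the following sketch multiplies a small matrix (my own example, not one from the slides) by a known eigenvector and confirms Ax = λx:

```python
# A small hand-picked matrix: trace 7 and determinant 10 give eigenvalues 5 and 2.
A = [[4, 1],
     [2, 3]]
lam = 5
x = [1, 1]  # eigenvector for lam = 5, since (A - 5I)x = 0

# Multiply A by x and compare with lam * x.
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(Ax)                      # [5, 5]
print([lam * xi for xi in x])  # [5, 5], so Ax = lam * x
```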
Eigenvalues are used in the solution of engineering problems involving vibrations, elasticity, oscillating systems, etc. Eigenvalues are also important for the analysis of Markov chains in statistics. The next set of slides is from the course Computer Applications in Engineering and Construction at Texas A&M (Fall 2008).
Mass-Spring System
Two masses at their equilibrium positions, connected by springs, lead to a homogeneous system of equations.

Polynomial Method
Find the eigenvalues from the characteristic equation det[A − λI] = 0. With m1 = m2 = 40 kg and k = 200 N/m, this gives two natural frequencies, ω = 3.873 s^-1 or ω = 2.236 s^-1, with period Tp = 2π/ω = 1.62 s or 2.81 s.

Principal Modes of Vibration
Tp = 1.62 s: X1 = −X2 (masses move out of phase). Tp = 2.81 s: X1 = X2 (masses move in phase).
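The frequencies and periods quoted above can be checked in a few lines, assuming the standard two-mass, three-spring system whose natural frequencies satisfy ω^2 = k/m and ω^2 = 3k/m (a sketch, not the course's code):

```python
import math

m, k = 40.0, 200.0  # kg and N/m, the values from the slide

# Natural frequencies of the symmetric two-mass, three-spring system (assumed model).
omegas = [math.sqrt(k / m), math.sqrt(3 * k / m)]  # in s^-1
periods = [2 * math.pi / w for w in omegas]        # Tp = 2*pi/omega

print([round(w, 3) for w in omegas])    # [2.236, 3.873]
print([round(T, 2) for T in periods])   # [2.81, 1.62], matching the slide
```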
Power Method
Power method for finding the dominant eigenvalue:
1. Start with an initial guess for x.
2. Calculate w = Ax.
3. The largest value (in magnitude) in w is the estimate of the eigenvalue.
4. Get the next x by rescaling w (to avoid the computation of very large powers A^n).
5. Continue until converged; λk is then the dominant eigenvalue.
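The steps above can be sketched in plain Python (the function names here are my own; the slides' actual code is the MATLAB script Power_eig.m shown later):

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_method(A, x0, tol=1e-6, max_iter=100):
    """Estimate the dominant eigenvalue and eigenvector of A."""
    x = list(x0)
    lam = 0.0
    for _ in range(max_iter):
        w = matvec(A, x)                # w = A x
        lam_new = max(w, key=abs)       # largest-magnitude entry estimates the eigenvalue
        x = [wi / lam_new for wi in w]  # rescale w to get the next x
        if abs(lam_new - lam) < tol:    # converged?
            break
        lam = lam_new
    return lam_new, x

# Example matrix with eigenvalues 5 and 2: the iteration approaches lam = 5, x = [1, 1].
lam, x = power_method([[4, 1], [2, 3]], [1.0, 0.0])
print(round(lam, 4))   # 5.0
```

Rescaling at every step keeps the entries of x of order 1, which is exactly why A^n never has to be formed explicitly.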
Power Method: start with an initial guess z = x0 and rescale at each step; λk is the dominant eigenvalue estimate. For a large number of iterations, λk should converge to the largest eigenvalue λ1; the normalization makes the right-hand side converge to λ1 rather than λ1^n.

Example: Power Method. Consider the matrix A and start with an initial guess x0.
Assume all eigenvalues are equally important, since we don't know which one is dominant, so start with an initial vector whose entries are all equal. Compute the eigenvalue estimate and eigenvector: the current estimate for the largest eigenvalue is 21. Rescale w by this eigenvalue to get the new x, check convergence (is the norm < tol?), then update the estimated eigenvector and repeat.
A new estimate for the largest eigenvalue is obtained in the same way: rescale w by the eigenvalue to get the new x and compute the norm. Convergence criterion: norm (or relative error) < tol.
Example: one more iteration further reduces the norm. Convergence criterion: norm (or relative error) < tol.

Example: Power Method in MATLAB. Script file: Power_eig.m.
[z,m] = Power_eig(A,100,0.001);
The script prints a table of the iteration number, the eigenvalue estimate m, and the entries of z at each step, ending with:
error = 8.3175e-004
z = 0.9199 0.7085 1.0000 (estimated eigenvector)
m (estimated dominant eigenvalue)
For comparison, x = eig(A) returns the eigenvalues of A, among them 0.6686.

MATLAB's Methods
e = eig(A) gives the eigenvalues of A. [V, D] = eig(A)
gives the eigenvectors as the columns V(:,k) and the eigenvalues as the diagonal entries Dii of the diagonal matrix D. [V, D] = eig(A, B) handles more general eigenvalue problems (Ax = λBx), so that AV = BVD.

Theorem: If A has a complete set of eigenvectors, then the power method converges to the dominant eigenvalue of the matrix A.

Proof: A has n eigenvalues λ1, λ2, λ3, …, λn with |λ1| > |λ2| > |λ3| > … > |λn|, with a corresponding basis of eigenvectors w1, w2, w3, …, wn. Let the initial vector w0 be a linear combination of the vectors w1, w2, w3, …, wn:
w0 = a1 w1 + a2 w2 + a3 w3 + … + an wn
A w0 = A(a1 w1 + a2 w2 + a3 w3 + … + an wn)
     = a1 A w1 + a2 A w2 + a3 A w3 + … + an A wn
     = a1 λ1 w1 + a2 λ2 w2 + a3 λ3 w3 + … + an λn wn
A^k w0 = a1 λ1^k w1 + a2 λ2^k w2 + … + an λn^k wn
A^k w0 / λ1^(k-1) = a1 (λ1^k / λ1^(k-1)) w1 + … + an (λn^k / λ1^(k-1)) wn
For large values of k (as k goes to infinity), each ratio (λi/λ1)^k with i ≥ 2 goes to 0, and the right-hand side is approximately a1 λ1 w1.
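The proof's key limit, A^k w0 / λ1^k → a1 w1, can be watched numerically; the 2×2 matrix below is my own illustration with λ1 = 5, λ2 = 2, and w1 = [1, 1]:

```python
A = [[4, 1],
     [2, 3]]      # eigenvalues 5 and 2; eigenvector [1, 1] for lam1 = 5
w = [1.0, 0.0]    # w0 = a1*w1 + a2*w2 with a1 = 2/3, a2 = 1/3 (w2 = [1, -2])

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

k = 40
for _ in range(k):
    w = matvec(A, w)   # after the loop, w = A^k w0

# Dividing by lam1^k kills the (lam2/lam1)^k term, leaving a1 * w1.
scaled = [wi / 5.0 ** k for wi in w]
print([round(s, 6) for s in scaled])   # both entries ~ 2/3, i.e. a multiple of w1
```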
At each stage of the process we divide by the dominant term of the vector, so the stored vector stays normalized. If we write w1 in terms of its dominant component and compare two consecutive estimates, dividing by the dominant term gives a ratio that is approximately 1, which is why the scaling factor itself converges to the dominant eigenvalue λ1.