Abstract
This thesis presents an overview of direct and iterative methods for solving large sparse linear systems Ax = b, where A ∈ C^{n×n} and b ∈ C^{n}.
In numerical analysis and scientific computing, an important requirement is low memory consumption. A sparse linear system has the advantage that the amount of storage required is greatly reduced, and several storage schemes have been devised for this special category of matrices. The relevant theory is developed in Chapter 2. In addition, the computational cost is reduced, since the result of arithmetic operations involving zeros is known in advance. The main challenge in sparse linear algebra is to balance storage, computational cost, and stability in order to obtain an effective solution method. Chapter 2 also describes finite difference methods, a class of numerical techniques for solving differential equations by approximating derivatives with finite differences.
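As a minimal illustration (not taken from the thesis itself), the following Python sketch combines both ideas: the standard second-order finite difference discretization of −u''(x) on a uniform grid, stored in the compressed sparse row (CSR) format via scipy.sparse; the grid size n and interval are assumed for the example.

    import numpy as np
    import scipy.sparse as sp

    n = 5                      # number of interior grid points (illustrative)
    h = 1.0 / (n + 1)          # uniform mesh width on (0, 1)

    # Tridiagonal matrix of the 3-point stencil for -u''(x):
    # (-u[i-1] + 2*u[i] - u[i+1]) / h**2
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                 shape=(n, n), format="csr") / h**2

    # CSR keeps only the ~3n nonzeros instead of n**2 entries.
    print(A.data)      # nonzero values
    print(A.indices)   # their column indices
    print(A.indptr)    # row pointer array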
Chapter 3 develops stationary iterative methods, including the well-known Jacobi, Gauss–Seidel, and SOR methods. In particular, we emphasize linear iterative schemes, which constitute an important part of iterative methods.
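As a concrete instance of a stationary scheme, here is a bare-bones Jacobi iteration in Python (a sketch for illustration only; the function name and tolerances are assumptions, not the thesis code). With the splitting A = D + R, where D is the diagonal of A, it repeats x ← D⁻¹(b − Rx):

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-8, maxiter=500):
        """Solve Ax = b with the Jacobi iteration (A needs a nonzero diagonal)."""
        x = np.zeros_like(b) if x0 is None else x0.astype(float)
        D = np.diag(A)                 # diagonal part of A
        R = A - np.diagflat(D)         # off-diagonal remainder
        for _ in range(maxiter):
            x_new = (b - R @ x) / D    # x_{k+1} = D^{-1} (b - R x_k)
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new
            x = x_new
        return x

    # Example on a strictly diagonally dominant system, where Jacobi converges:
    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    b = np.array([1.0, 2.0])
    print(jacobi(A, b))   # close to np.linalg.solve(A, b)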
Continuing into Chapter 4, we define the Krylov subspace. The j-th Krylov subspace is spanned by the vectors b, Ab, …, A^{j−1}b and forms the basis of gradient methods. We discuss the steepest descent method and the conjugate gradient method, which differ in their choice of search direction.
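A minimal conjugate gradient sketch in Python, assuming A is symmetric positive definite, may clarify the idea: at step j the iterate lies in the Krylov subspace spanned by b, Ab, …, A^{j−1}b, and each new search direction is made A-conjugate to the previous one. This is an illustrative implementation, not the one used in the thesis.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
        """Minimal CG for symmetric positive definite A (illustration only)."""
        n = len(b)
        maxiter = n if maxiter is None else maxiter
        x = np.zeros(n)
        r = b - A @ x          # initial residual; equals b for x = 0
        p = r.copy()           # first search direction
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)       # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p   # new A-conjugate search direction
            rs = rs_new
        return x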
Finally, in Chapter 5 we present the results of numerical experiments on the solution of linear systems arising from the discretization of the Helmholtz equation, using finite difference and finite element methods. The Python code used in the numerical experiments performed in this thesis is listed in Appendix A.
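To indicate the kind of system involved, a hypothetical sketch (not the Appendix A code; the grid size n, wavenumber k, and right-hand side are assumed values) of the 1D Helmholtz equation −u'' − k²u = f with homogeneous Dirichlet boundary conditions, discretized by finite differences and solved with a sparse direct method:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, k = 100, 10.0                 # grid size and wavenumber (illustrative)
    h = 1.0 / (n + 1)
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                 shape=(n, n), format="csc") / h**2
    A = L - (k**2) * sp.identity(n, format="csc")   # operator -u'' - k^2 u
    f = np.ones(n)                                  # some right-hand side
    u = spla.spsolve(A, f)                          # sparse direct solve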