Solving the Poisson Equation Using the Conjugate Gradient Method and Its Implementation
The document presents a thorough exploration of solving the Poisson equation using the conjugate gradient method, detailing various iterative techniques including Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR). It discusses the implementation of these methods for large sparse matrices, the use of preconditioners, and the utility of the Intel Math Kernel Library (MKL) for efficient computation. Additionally, the document covers optimization strategies and references key literature on the subject.
From the Basics: $Ax = b$
Linear Systems
$Ax = b$
Goal of this presentation
What have you learned?
• Direct Methods
  • Gauss elimination
  • Thomas algorithm (TDMA) (for tridiagonal matrices only)
• Iterative Methods
  • Jacobi method
  • SOR method
  • Conjugate gradient method
  • Red-black Jacobi method
Preconditioned System
$M^{-1}Ax = M^{-1}b$, with preconditioner $M$
Writing $A = D - E - F$ (where $D$ is the diagonal, $-E$ the strictly lower part, and $-F$ the strictly upper part, as in Saad), the classical choices are:
• Jacobi: $M_{JA} = D$
• Gauss-Seidel: $M_{GS} = D - E$
• SOR: $M_{SOR} = \frac{1}{\omega}(D - \omega E)$
• SSOR: $M_{SSOR} = \frac{1}{\omega(2-\omega)}(D - \omega E)\,D^{-1}(D - \omega F)$
The preconditioned matrix may not be "SPARSE" because of the inverse ($M^{-1}$).
How to compute this?
We need products of the form $w = M^{-1}Av$: compute $r = Av$, then solve $Mw = r$. But forming $Av$ might be expensive. Can we do better?
Using the splitting $A = M - N$:
$w = M^{-1}Av = M^{-1}(M - N)v = (I - M^{-1}N)v$
So instead compute $r = Nv$, solve $Mw = r$, and set $w := v - w$. Since $N$ may be sparser than $A$, the product $Nv$ is less expensive than $Av$.
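Below is a minimal C sketch of this trick for the Jacobi splitting $M = D$, $N = D - A$; the names `precond_matvec`, `apply_N`, and `diag` are placeholders for this example, not part of the slides.

```c
#include <stddef.h>

/* Sketch: apply w = M^{-1} A v via the splitting A = M - N with the
 * Jacobi choice M = D (diagonal of A), so w = (I - D^{-1} N) v.
 * apply_N (computing r = N v) and diag (holding D) are assumed to be
 * supplied by the caller. */
void precond_matvec(size_t n, const double *diag,
                    void (*apply_N)(size_t, const double *, double *),
                    const double *v, double *w, double *r)
{
    apply_N(n, v, r);                  /* r = N v, cheaper than A v */
    for (size_t i = 0; i < n; ++i)
        w[i] = v[i] - r[i] / diag[i];  /* w = v - D^{-1} r          */
}
```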
Minimization Problem
Forget about $Ax = b$ temporarily, and think about some quadratic function $f$.
Scalar function: $f(x) = \frac{1}{2}Ax^2 - bx + c$, with derivative $f'(x) = Ax - b$
Matrix form: $f(x) = \frac{1}{2}x^T A x - b^T x + c$, with gradient $f'(x) = \frac{1}{2}A^T x + \frac{1}{2}Ax - b$
If the matrix $A$ is symmetric, $A^T = A$, then
$f'(x) = Ax - b$
Setting the gradient to zero, we get the linear system we wish to solve.
Our original GOAL!!
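As a quick check, take the small 2-by-2 sample system used by Shewchuk (1994): with $A = \begin{pmatrix} 3 & 2 \\ 2 & 6 \end{pmatrix}$, $b = \begin{pmatrix} 2 \\ -8 \end{pmatrix}$, $c = 0$, setting $f'(x) = Ax - b = 0$ gives $x = A^{-1}b = (2, -2)^T$, which is exactly the solution of $Ax = b$.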
Minimization Problem (cont.)
[Figure: quadratic forms $f(x)$ for (a) a positive-definite matrix; (b) a negative-definite matrix; (c) a singular, positive-semidefinite matrix, where the line running through the bottom of the valley is the set of solutions; (d) an indefinite matrix, which has a saddle point.]
For a symmetric positive-definite matrix, minimizing
$f(x) = \frac{1}{2}x^T A x - b^T x + c$
reduces to finding our solution.
Steepest Descent Method
Choose the direction in which $f$ decreases most quickly, which is the direction opposite $f'(x_{(i)})$:
$-f'(x_{(i)}) = r_{(i)} = b - Ax_{(i)}$
$x_{(1)} = x_{(0)} + \alpha r_{(0)}$
To find $\alpha$, set $\frac{d}{d\alpha} f(x_{(1)}) = 0$:
$\frac{d}{d\alpha} f(x_{(1)}) = f'(x_{(1)})^T \frac{d}{d\alpha} x_{(1)} = f'(x_{(1)})^T r_{(0)}$
So $f'(x_{(i+1)})$ and $r_{(i)}$ must be orthogonal. Since $-f'(x_{(i+1)}) = r_{(i+1)}$:
$f'(x_{(i+1)})^T r_{(i)} = 0 \;\Rightarrow\; r_{(i+1)}^T r_{(i)} = 0$
Expanding $r_{(i+1)} = r_{(i)} - \alpha A r_{(i)}$ and solving for $\alpha$:
$\alpha = \dfrac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}}$
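A minimal matrix-free C sketch of this iteration; `matvec` (computing $y = Av$) and the caller-provided workspaces are assumptions of this example.

```c
#include <math.h>
#include <stddef.h>

/* Steepest descent for SPD A: x <- x + alpha*r with
 * alpha = (r^T r)/(r^T A r); matvec computes y = A x. */
void steepest_descent(size_t n,
                      void (*matvec)(size_t, const double *, double *),
                      const double *b, double *x, double *r, double *Ar,
                      double tol, int max_iter)
{
    matvec(n, x, r);                                     /* r = A x     */
    for (size_t i = 0; i < n; ++i) r[i] = b[i] - r[i];   /* r = b - A x */

    for (int it = 0; it < max_iter; ++it) {
        double rr = 0.0;
        for (size_t i = 0; i < n; ++i) rr += r[i] * r[i];
        if (sqrt(rr) < tol) break;                 /* converged         */

        matvec(n, r, Ar);                          /* Ar = A r          */
        double rAr = 0.0;
        for (size_t i = 0; i < n; ++i) rAr += r[i] * Ar[i];
        double alpha = rr / rAr;                   /* r^T r / r^T A r   */

        for (size_t i = 0; i < n; ++i) {
            x[i] += alpha * r[i];                  /* step along r      */
            r[i] -= alpha * Ar[i];                 /* r = r - alpha A r */
        }
    }
}
```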
Conjugate Gradient Method
The steepest descent method does not always converge well.
[Figure: worst case of steepest descent. Solid lines: the worst-case convergence contours; dashed lines: the steps taken toward convergence.]
Why doesn't it head straight toward the solution for fast convergence? This is related to the eigenvalues of the matrix.
Introducing the conjugate gradient method.
Conjugate Gradient Method
What is the meaning of "conjugate"?
• Definition: a binomial formed by negating the second term of a binomial
• $x + y$ ← conjugate → $x - y$
Then, what is the meaning of "conjugate gradient"?
• The steepest descent method often finds itself taking steps in the same direction as earlier steps.
• Wouldn't it be better if we got each step right the first time?
• Here is a step:
• Error $e_{(i)} = x_{(i)} - x$, residual $r_{(i)} = b - Ax_{(i)}$, and $d_{(i)}$ a set of orthogonal search directions.
• At each step, we choose a point $x_{(i+1)} = x_{(i)} + \alpha_{(i)} d_{(i)}$.
• To find $\alpha$, $e_{(i+1)}$ should be orthogonal to $d_{(i)}$ (note that $e_{(i+1)} = e_{(i)} + \alpha_{(i)} d_{(i)}$):
$d_{(i)}^T e_{(i+1)} = 0$
$d_{(i)}^T (e_{(i)} + \alpha_{(i)} d_{(i)}) = 0$
$\alpha_{(i)} = -\dfrac{d_{(i)}^T e_{(i)}}{d_{(i)}^T d_{(i)}}$
But we don't know anything about $e_{(i)}$: if we knew $e_{(i)}$, we would already know the answer.
Conjugate Gradient Method
Instead of orthogonality, introduce $A$-orthogonality:
$d_{(i)}^T A d_{(j)} = 0$ if $d_{(i)}$ and $d_{(j)}$ are $A$-orthogonal, or conjugate.
Requiring $e_{(i+1)}$ to be $A$-orthogonal to $d_{(i)}$ is equivalent to finding the minimum point along the search direction $d_{(i)}$, as in the steepest descent method:
$\frac{d}{d\alpha} f(x_{(i+1)}) = 0$  ($\alpha$ minimizes $f$ when the directional derivative is zero)
$f'(x_{(i+1)})^T \frac{d}{d\alpha} x_{(i+1)} = 0$  (chain rule, with $x_{(i+1)} = x_{(i)} + \alpha_{(i)} d_{(i)}$)
$-r_{(i+1)}^T d_{(i)} = 0$  (using $f'(x_{(i+1)}) = Ax_{(i+1)} - b$ and $r_{(i)} = b - Ax_{(i)}$)
How can this be the same as the orthogonality condition $d_{(i)}^T A e_{(i+1)} = 0$ used above? Expanding $r_{(i+1)}^T d_{(i)} = 0$:
$x_{(i+1)}^T A^T d_{(i)} - b^T d_{(i)} = 0$
$x_{(i+1)}^T A^T d_{(i)} - x^T A^T d_{(i)} = 0$  (since $b = Ax$)
$e_{(i+1)}^T A^T d_{(i)} = 0$, and transposing again (with $A^T = A$) gives $d_{(i)}^T A e_{(i+1)} = 0$.
With $A$-orthogonality the step size becomes computable: since $Ae_{(i)} = Ax_{(i)} - b = -r_{(i)}$,
$\alpha_{(i)} = -\dfrac{d_{(i)}^T A e_{(i)}}{d_{(i)}^T A d_{(i)}} = \dfrac{d_{(i)}^T r_{(i)}}{d_{(i)}^T A d_{(i)}}$
which, unlike the earlier formula, involves only known quantities.
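Putting the pieces together, here is a sketch of the standard unpreconditioned CG loop, following Shewchuk (1994); as above, `matvec` and the workspace arrays are placeholder assumptions.

```c
#include <math.h>
#include <stddef.h>

/* Standard conjugate gradient for SPD A (cf. Shewchuk, 1994).
 * matvec computes y = A x; r, d, q are caller-provided workspaces. */
void conjugate_gradient(size_t n,
                        void (*matvec)(size_t, const double *, double *),
                        const double *b, double *x,
                        double *r, double *d, double *q,
                        double tol, int max_iter)
{
    matvec(n, x, r);
    for (size_t i = 0; i < n; ++i) {
        r[i] = b[i] - r[i];            /* r = b - A x            */
        d[i] = r[i];                   /* first direction: d = r */
    }
    double rr_old = 0.0;
    for (size_t i = 0; i < n; ++i) rr_old += r[i] * r[i];

    for (int it = 0; it < max_iter && sqrt(rr_old) > tol; ++it) {
        matvec(n, d, q);               /* q = A d                    */
        double dq = 0.0;
        for (size_t i = 0; i < n; ++i) dq += d[i] * q[i];
        double alpha = rr_old / dq;    /* alpha = r^T r / d^T A d    */

        double rr_new = 0.0;
        for (size_t i = 0; i < n; ++i) {
            x[i] += alpha * d[i];
            r[i] -= alpha * q[i];
            rr_new += r[i] * r[i];
        }
        double beta = rr_new / rr_old; /* keeps the d's A-orthogonal */
        for (size_t i = 0; i < n; ++i)
            d[i] = r[i] + beta * d[i];
        rr_old = rr_new;
    }
}
```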
Implementation Issue
• For the 3D case, the matrix $A$ would be huge: on a $128 \times 128 \times 128$ grid, $A$ has $(128^3) \times (128^3)$ entries, about 32 TB in double precision (for 2D it takes only 2 GB).
• However, for the Poisson equation almost all entries of $A$ are zero ⇒ sparse matrix!
How to represent a sparse matrix?
• Simplest approach: store each nonzero value together with its row and column index (Coordinate Format, COO). Drawback: too much duplication, since the row index is repeated for every nonzero in a row (see the sketch below).
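For instance, a sketch of the $4 \times 4$ 1D Poisson matrix (stencil $[-1\;\;2\;\;-1]$) in COO format; notice how the row index is stored once per nonzero:

```c
/* 4x4 tridiagonal 1D Poisson matrix in COO format: one
 * (value, row, col) triplet per nonzero; row indices repeat. */
double val[] = { 2, -1,   -1, 2, -1,   -1, 2, -1,   -1, 2 };
int    row[] = { 0,  0,    1, 1,  1,    2, 2,  2,    3, 3 };
int    col[] = { 0,  1,    0, 1,  2,    1, 2,  3,    2, 3 };
int    nnz   = 10;
```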
Sparse Matrix Format
Compressed Sparse Row (CSR)
• Stores only the non-zero values
• Uses three or four arrays (values, column indices, and one or two row-pointer arrays; see the sketch below)
• Algorithms such as the ILU or IC preconditioners are not easy to construct on top of this format
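The same illustrative matrix in three-array CSR, together with a sparse matrix-vector product sketch (in the four-array variant the row pointer is split into separate row-begin and row-end arrays):

```c
/* Same 1D Poisson matrix in 3-array CSR: the repeated COO row indices
 * are compressed into row_ptr; row i occupies val[row_ptr[i]] ..
 * val[row_ptr[i+1]-1]. */
double val[]     = { 2, -1,   -1, 2, -1,   -1, 2, -1,   -1, 2 };
int    col_idx[] = { 0,  1,    0, 1,  2,    1, 2,  3,    2, 3 };
int    row_ptr[] = { 0, 2, 5, 8, 10 };

/* y = A x for an n x n CSR matrix */
void csr_matvec(int n, const double *val, const int *col_idx,
                const int *row_ptr, const double *x, double *y)
{
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```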
Use MKL (Intel Math Kernel Library)
MKL?
• A library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math. The routines in MKL are hand-optimized specifically for Intel processors.
• For my problem, I usually use BLAS and the fast Fourier transforms (for the Poisson equation solver with Neumann, periodic, and Dirichlet BCs).
BLAS?
• A specified set of low-level subroutines that perform common linear algebra operations; widely used, even in MATLAB!
• Typically used for vector and matrix multiplications, dot products, and similar operations (a usage sketch follows this list).
• Level 1: vector-vector operations
• Level 2: matrix-vector operations
• Level 3: matrix-matrix operations
• Parallelized internally by Intel; just turn on the option.
• Reference manual: https://software.intel.com/en-us/mkl_11.1_ref
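As an illustration, the vector updates inside one CG step can be delegated to MKL's CBLAS level-1 routines; this is only a sketch, and the vector names are placeholders.

```c
#include <mkl.h>  /* provides cblas_ddot and cblas_daxpy */

/* One CG-style update written with BLAS level-1 calls:
 * alpha = (r^T r)/(d^T q), x += alpha d, r -= alpha q,
 * where q = A d has already been computed. */
void cg_update(int n, double *x, double *r,
               const double *d, const double *q)
{
    double rr    = cblas_ddot(n, r, 1, r, 1);  /* r^T r     */
    double dq    = cblas_ddot(n, d, 1, q, 1);  /* d^T (A d) */
    double alpha = rr / dq;

    cblas_daxpy(n,  alpha, d, 1, x, 1);        /* x = x + alpha d */
    cblas_daxpy(n, -alpha, q, 1, r, 1);        /* r = r - alpha q */
}
```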
How to Use the Library
For MKL:
• To compile (when turning .c files into object files in your makefile):
• -i8 -openmp -I$(MKLROOT)/include
• To link (when creating the executable with the -o option):
• -L$(MKLROOT)/lib/intel64 -lmkl_core -lmkl_intel_thread -lpthread -lm
• https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
Library Linking Process
• Compile
• The -I option specifies the include path, i.e., where the header files (.h) are located.
• Linking
• The -L option specifies the library search path, i.e., where the library files (.lib, .dll, .a, .so) are located.
• The -l option gives the name of the library to link (a minimal makefile sketch follows below).
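A minimal makefile sketch combining the two steps; the file name `solver.c` is a placeholder, and the interface-layer library `-lmkl_intel_lp64` is an assumption on my part, so confirm the exact line with the link line advisor above.

```make
# Sketch only: confirm flags with the Intel MKL link line advisor.
CC      = icc
CFLAGS  = -openmp -I$(MKLROOT)/include
LDLIBS  = -L$(MKLROOT)/lib/intel64 -lmkl_intel_lp64 -lmkl_core \
          -lmkl_intel_thread -lpthread -lm

solver: solver.c
	$(CC) $(CFLAGS) solver.c -o solver $(LDLIBS)
```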
Reference
• Shewchuk, Jonathan Richard. "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain." (1994).
• Chandan, Deepak. "Using Sparse Matrix and Solver Routines from Intel MKL." SciNet User Group Meeting (2013).
• Saad, Yousef. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
• Akhunov, R. R., et al. "Optimization of the ILU(0) Factorization Algorithm with the Use of Compressed Sparse Row Format." Zapiski Nauchnykh Seminarov POMI 405 (2012): 40-53.