# How to choose an appropriate solver?
FEniCS solver defaults are conservative: they favour simplicity and robustness at the expense of performance. A careful choice of solver can therefore have a huge impact on code performance. This tutorial provides rules of thumb that should give a first boost to your solver performance (you may have to dig deeper to find the optimal choice for your specific case).
Parameters are passed to the solver via a Python `dict`, as:

- a function argument if the `solve` function is used (the `solver_parameters` argument);
- a class attribute if a solver object (e.g. `dolfin.cpp.fem.LinearVariationalSolver`) is used (the `parameters` attribute).
A simple example for the first case is:

```python
solver_parameters = {
    'linear_solver': 'cg',
    'preconditioner': 'sor',
}
solve(a == L, u, bcs, solver_parameters=solver_parameters)
```
For the second case:

```python
from dolfin.fem.problem import LinearVariationalProblem
from dolfin.cpp.fem import LinearVariationalSolver

solver_parameters = {
    'linear_solver': 'cg',
    'preconditioner': 'sor',
}

problem = LinearVariationalProblem(a, L, u, bcs)
solver = LinearVariationalSolver(problem)
solver.parameters.update(solver_parameters)
solver.solve()
```
## Linear problems
For linear problems (\(A x = b\)), we lean towards PETSc. When using this linear algebra backend, the following linear solvers and preconditioners are available, respectively (check out Introduction for the methods that provide this information):
| Name | Method |
| --- | --- |
| `'bicgstab'` | Biconjugate gradient stabilized method |
| `'cg'` | Conjugate gradient method |
| `'gmres'` | Generalized minimal residual method |
| `'minres'` | Minimal residual method |
| `'petsc'` | PETSc built-in LU solver |
| `'richardson'` | Richardson method |
| `'superlu_dist'` | Parallel SuperLU |
| `'tfqmr'` | Transpose-free quasi-minimal residual method |
| `'umfpack'` | UMFPACK |
| Name | Method |
| --- | --- |
| `'icc'` | Incomplete Cholesky factorization |
| `'ilu'` | Incomplete LU factorization |
| `'petsc_amg'` | PETSc algebraic multigrid |
| `'sor'` | Successive over-relaxation |
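As a rough illustration of how these tables are typically combined, the sketch below is a hypothetical helper (not part of FEniCS) that maps basic matrix properties to a commonly recommended solver/preconditioner pair:

```python
# Hypothetical helper (not part of FEniCS): maps basic matrix properties
# to a commonly recommended solver/preconditioner pair from the tables above.
def suggest_solver_parameters(symmetric, positive_definite):
    if symmetric and positive_definite:
        # SPD systems: conjugate gradient with incomplete Cholesky
        return {'linear_solver': 'cg', 'preconditioner': 'icc'}
    if symmetric:
        # Symmetric indefinite systems: minimal residual method
        # (the preconditioner, if any, must preserve symmetry)
        return {'linear_solver': 'minres'}
    # General non-symmetric systems: GMRES with incomplete LU
    return {'linear_solver': 'gmres', 'preconditioner': 'ilu'}

print(suggest_solver_parameters(symmetric=True, positive_definite=True))
# → {'linear_solver': 'cg', 'preconditioner': 'icc'}
```

The returned `dict` can be passed directly as the `solver_parameters` argument shown earlier.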
By default, sparse LU decomposition is used, which is only recommended for systems up to a few thousand unknowns.
To use the default iterative method, `linear_solver` must be set to `'iterative'` or `'krylov'`. FEniCS then uses the conjugate gradient method for symmetric linear problems and the generalized minimal residual method for non-symmetric ones (conjugate gradient is much faster for large problems, which is why we insist so much on keeping symmetry). Iterative methods may be sensitive to some parameters (e.g. tolerances).
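Those tolerances are set through a nested `dict`. As a sketch, assuming the legacy DOLFIN parameter names (`'krylov_solver'`, `'relative_tolerance'`, etc. — check them against your FEniCS version):

```python
# Sketch of tuning Krylov solver tolerances. The nested parameter names
# ('krylov_solver', 'relative_tolerance', ...) are the legacy DOLFIN ones
# and are an assumption here -- verify them for your FEniCS version.
solver_parameters = {
    'linear_solver': 'krylov',          # let FEniCS pick CG or GMRES
    'preconditioner': 'ilu',
    'krylov_solver': {
        'absolute_tolerance': 1e-10,
        'relative_tolerance': 1e-8,     # loosening this can speed things up
        'maximum_iterations': 1000,
    },
}
print(solver_parameters['krylov_solver']['relative_tolerance'])
```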
Note
Check out this tutorial for additional information.
## Eigenvalue problems
For eigenvalue problems, we rely on SLEPc. The possible arguments for `solver` are: `'power'`, `'subspace'`, `'arnoldi'`, `'lanczos'`, `'krylov-schur'`, `'lapack'`, `'arpack'`, `'jacobi-davidson'`, and `'generalized-davidson'`. Further information about each option can be found in the SLEPc documentation.
A performance comparison between the different solvers is carried out in this tutorial; it may help your decision process (spoiler: `generalized-davidson` is very fast!).
Other parameters also play a crucial role in the performance and convergence of the solver. In particular:

- `problem_type` should usually be set to `gen_hermitian` (the set of available solvers for non-symmetric problems is much smaller; see e.g. Tzounas, 2020);
- `tolerance` hugely impacts performance.

Additionally, `spectrum` allows choosing the region of the spectrum in which to look for eigenvalues (e.g. `'smallest magnitude'`).
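Putting these options together, the sketch below collects the settings discussed above into a plain `dict`; applying them to a real DOLFIN `SLEPcEigenSolver` (shown commented out, as an assumption about the API pattern) mirrors the parameter-update pattern of the linear case:

```python
# Sketch of SLEPc eigensolver settings using the options discussed above.
eigensolver_parameters = {
    'solver': 'generalized-davidson',   # fast in the comparison above
    'problem_type': 'gen_hermitian',    # symmetric generalized problem
    'spectrum': 'smallest magnitude',   # region of the spectrum to search
    'tolerance': 1e-6,                  # looser tolerance -> faster solves
}

# With DOLFIN available (A, B being assembled PETScMatrix operators),
# the usual pattern would look like (assumption, check your version):
# from dolfin import SLEPcEigenSolver
# eigensolver = SLEPcEigenSolver(A, B)
# for key, value in eigensolver_parameters.items():
#     eigensolver.parameters[key] = value
# eigensolver.solve(5)  # compute five eigenpairs

print(eigensolver_parameters['solver'])
```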