How to choose an appropriate solver?#

FEniCS solver defaults are conservative: they favor simplicity and robustness at the expense of performance. A careful choice of solver can therefore have a huge impact on code performance. This tutorial provides rules of thumb that should give a first boost to your solver performance (you may have to dig deeper to find the optimal choice for your specific case).

Parameters are passed to the solver via a Python dict as:

  • a function argument, if the solve function is used (the solver_parameters argument)

  • a class attribute, if a solver object (e.g. dolfin.cpp.fem.LinearVariationalSolver) is used (the parameters attribute)

A simple example for the first case is:

solver_parameters = {
  'linear_solver': 'cg',
  'preconditioner': 'sor',
}

solve(a == L, bcs, solver_parameters=solver_parameters)

For the second case:

from dolfin.fem.problem import LinearVariationalProblem
from dolfin.cpp.fem import LinearVariationalSolver

solver_parameters = {
  'linear_solver': 'cg',
  'preconditioner': 'sor',
}

problem = LinearVariationalProblem(a, L, u, bcs)
solver = LinearVariationalSolver(problem)
solver.parameters.update(solver_parameters)
solver.solve()


Linear problems#

For linear problems (\(A x = b\)), we lean towards PETSc. When using this linear algebra backend, the following linear solvers and preconditioners are available (check out Introduction for the methods that list this information):

Linear solvers:

  • bicgstab: Biconjugate gradient stabilized method

  • cg: Conjugate gradient method

  • gmres: Generalized minimal residual method

  • minres: Minimal residual method

  • petsc: PETSc built-in LU solver

  • richardson: Richardson method

  • superlu_dist: Parallel SuperLU

  • tfqmr: Transpose-free quasi-minimal residual method

Preconditioners:

  • icc: Incomplete Cholesky factorization

  • ilu: Incomplete LU factorization

  • petsc_amg: PETSc algebraic multigrid

  • sor: Successive over-relaxation
By default, a sparse LU decomposition is used, which is recommended only for systems up to a few thousand unknowns.

To use the default iterative method, set linear_solver to iterative or krylov. FEniCS then uses the conjugate gradient method for symmetric linear problems and the generalized minimal residual method for non-symmetric ones (conjugate gradient is much faster for large problems… that's why we insist so much on keeping symmetry). Iterative methods may be sensitive to some parameters (e.g. tolerances).
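As a sketch, the tolerances and iteration limits of the underlying Krylov solver can be tuned through the nested krylov_solver entry of the parameters dict (the forms a, L and bcs are assumed to be defined by your own problem):

```python
# Sketch of iterative solver settings; a, L and bcs are assumed
# to be defined elsewhere in your script.
solver_parameters = {
    'linear_solver': 'gmres',         # a non-symmetric problem is assumed here
    'preconditioner': 'ilu',
    'krylov_solver': {                # settings of the underlying Krylov solver
        'relative_tolerance': 1e-7,
        'absolute_tolerance': 1e-10,
        'maximum_iterations': 1000,
        'monitor_convergence': True,  # print the residual at each iteration
    },
}

# solve(a == L, bcs, solver_parameters=solver_parameters)
```

Loosening the tolerances speeds up the solve at the cost of accuracy; monitor_convergence is handy to verify the preconditioner is actually helping.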


Check out this tutorial for additional information.

Eigenvalue problems#

For eigenvalue problems, we rely on SLEPc. The possible arguments for solver are: power, subspace, arnoldi, lanczos, krylov-schur, lapack, arpack, jacobi-davidson, generalized-davidson. Further information about each option can be found here.

This tutorial compares the performance of different solvers, which may help your decision process (spoiler: generalized-davidson is very fast!).

Other parameters also play a crucial role in the performance and convergence of the solver. In particular:

  • problem_type should usually be set to gen_hermitian (the set of solvers available for non-symmetric problems is much smaller; see e.g. Tzounas, 2020)

  • tolerance hugely impacts performance

Additionally, spectrum selects the region of the spectrum where eigenvalues are sought (e.g. smallest magnitude).
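Putting these pieces together, a minimal sketch of an eigensolver setup could look like the following (the assembled matrices A and M are assumed to come from your own problem, e.g. stiffness and mass matrices):

```python
# Sketch of SLEPc eigensolver settings; A and M are assumed to be
# assembled PETSc matrices from your own variational problem.
slepc_parameters = {
    'solver': 'krylov-schur',
    'problem_type': 'gen_hermitian',   # generalized Hermitian problem A x = lambda M x
    'spectrum': 'smallest magnitude',  # region of the spectrum to search
    'tolerance': 1e-6,                 # looser tolerance -> faster, less accurate
}

# eigensolver = SLEPcEigenSolver(A, M)
# eigensolver.parameters.update(slepc_parameters)
# eigensolver.solve(5)  # compute the first 5 eigenpairs
```

As noted above, tolerance is often the single biggest lever: tightening it can slow convergence dramatically, so start loose and tighten only as needed.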