# Solving a linear system

The `.cfg` configuration files expose a wide range of options to solve linear and non-linear systems.

We now consider the following example:

``````
#include <feel/feel.hpp>

int main( int argc, char** argv )
{
    // initialize Feel++
    using namespace Feel;
    Environment env( _argc=argc, _argv=argv,
                     _author="Feel++ Consortium",
                     _email="feelpp-devel@feelpp.org" );

    // create mesh
    auto mesh = unitSquare();

    // function space
    auto Vh = Pch<1>( mesh );
    auto u = Vh->element();
    auto v = Vh->element();

    // left hand side
    auto a = form2( _trial=Vh, _test=Vh );
    a = integrate( _range=elements(mesh),
                   _expr=gradt(u)*trans(grad(v)) );

    // right hand side
    auto l = form1( _test=Vh );
    l = integrate( _range=elements(mesh),
                   _expr=id(v) );

    // apply the boundary condition
    a += on( _range=boundaryfaces(mesh), _rhs=l, _element=u,
             _expr=expr(soption("functions.alpha")) );

    // solve the equation a(u,v) = l(v)
    a.solve( _rhs=l, _solution=u );

    // export results
    auto e = exporter( _mesh=mesh );
    e->add( "u", u );
    e->save();
}``````

To execute this example:

``````# sequential
./feelpp_tut_laplacian
# parallel on 4 cores
mpirun -np 4 ./feelpp_tut_laplacian``````
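The solver options discussed in the sections below can be collected in a `.cfg` file and passed to the application at startup. As a sketch (the file name `laplacian.cfg` is only an illustration, not part of the original example):

``````# run with an explicit configuration file
./feelpp_tut_laplacian --config-file=laplacian.cfg``````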

The solver is selected and configured through the `.cfg` file, as the following sections describe.

## 1. Direct solver

Both `cholesky` and `lu` factorizations are available; however, the parallel implementation depends on the availability of MUMPS. The configuration is straightforward:

``````# no iterative solver
ksp-type=preonly
# direct factorization
pc-type=cholesky``````

The PETSc backend lets you choose among several packages to compute the factorization:

| Package | Description | Parallel |
| --- | --- | --- |
| `petsc` | PETSc's own implementation | yes |
| `mumps` | MUltifrontal Massively Parallel sparse direct Solver | yes |
| `umfpack` | Unsymmetric MultiFrontal package | no |
| `pastix` | Parallel Sparse matriX package | yes |

To choose among these factorization packages:

``````# choose mumps
pc-factor-mat-solver-package=mumps
# choose umfpack (sequential)
pc-factor-mat-solver-package=umfpack``````
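Putting the pieces together, a minimal configuration for a parallel direct solve might look like the following sketch (assuming MUMPS is available in your PETSc installation):

``````# direct solve: no Krylov iterations
ksp-type=preonly
# LU factorization computed by MUMPS
pc-type=lu
pc-factor-mat-solver-package=mumps``````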

To perform a Cholesky-type factorization, the underlying matrix must be declared symmetric positive definite (SPD).

``````// matrix
auto A = backend->newMatrix( _test=..., _trial=..., _properties=SPD );
// bilinear form
auto a = form2( _test=..., _trial=..., _properties=SPD );``````

## 2. Using iterative solvers

### 2.1. Using CG

#### 2.1.1. ICC(3)

with a relative tolerance of 1e-12:

``````ksp-rtol=1.e-12
ksp-type=cg
pc-type=icc
pc-factor-levels=3``````
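The same options can typically be passed on the command line instead of through the `.cfg` file; assuming the executable from the example above, this would read:

``````./feelpp_tut_laplacian --ksp-rtol=1.e-12 --ksp-type=cg --pc-type=icc --pc-factor-levels=3``````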

### 2.2. Using GMRES

#### 2.2.1. ILU(3)

with a relative tolerance of 1e-12 and a restart of 300:

``````ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=300
pc-type=ilu
pc-factor-levels=3``````

#### 2.2.2. Jacobi

with a relative tolerance of 1e-12 and a restart of 100:

``````ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=100
pc-type=jacobi``````

## 3. Monitoring linear, non-linear, and eigenvalue problem solver residuals

``````# linear
ksp-monitor=1
# non-linear
snes-monitor=1
# eigenvalue problem
eps-monitor=1``````
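As a closing sketch, monitoring can be combined with any of the solver configurations above; for instance, a monitored GMRES/ILU solve could be configured as follows:

``````ksp-type=gmres
pc-type=ilu
# print the residual norm at each iteration
ksp-monitor=1``````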