Nonlinear problems
1. A model problem
We consider the following nonlinear Poisson equation:
\[-\nabla \cdot\left(k(u)\, \nabla u\right)=f \quad \text{in } \Omega .\]
\(k(u)\) makes the equation nonlinear if it is not constant in \(u\). In order to verify the solution procedure, we choose the domain, \(k(u), f,\) and the boundary conditions such that we have a simple, exact solution \(u\). Let \(\Omega\) be the unit hypercube \([0,1]^{d}\) in \(d\) dimensions, and take \(k(u)=(1+u)^{m}\), \(f=0\), \(u=0\) for \(x_{0}=0\), \(u=1\) for \(x_{0}=1\), and \(\partial u / \partial n=0\) at all other boundaries \(x_{i}=0\) and \(x_{i}=1\), \(i=1, \ldots, d-1\). The coordinates are represented by the symbols \(x_{0}, \ldots, x_{d-1}\). The exact solution is then
\[u\left(x_{0}, \ldots, x_{d-1}\right)=\left(\left(2^{m+1}-1\right) x_{0}+1\right)^{1 /(m+1)}-1 .\]
The variational formulation of our model problem reads: find \(u \in V\) such that
\[F(u ; v)=0 \quad \forall v \in V_{0},\]
where
\[F(u ; v)=\int_{\Omega} k(u)\, \nabla u \cdot \nabla v \,dx-\int_{\Omega} f\, v \,dx,\]
and
\[V=\left\{v \in H^{1}(\Omega): v=0 \text{ on } x_{0}=0,\ v=1 \text{ on } x_{0}=1\right\}, \qquad V_{0}=\left\{v \in H^{1}(\Omega): v=0 \text{ on } x_{0}=0 \text{ and } x_{0}=1\right\} .\]
The discrete problem arises as usual by restricting \(V\) and \(V_0\) to a pair of discrete spaces. The discrete problem reads: find \(u \in V\) such that
\[F(u ; v)=0 \quad \forall v \in V_{0},\]
with \(u=\sum_{j=1}^{N} U_{j} \phi_{j}\), where \(U=(U_j)_{j=1,\ldots,N}\) collects the components of \(u\) in the basis \((\phi_j)_{j=1,\ldots,N}\) of \(V\). Since \(k\) depends on \(u\), this is a system of nonlinear algebraic equations in the unknowns \(U_1,\ldots,U_N\).
2. Picard strategy
The Picard strategy is also known as the method of successive substitutions.
Picard iteration is an easy way of handling nonlinear PDEs: it uses a known, previous solution in the nonlinear terms so that these terms become linear in the unknown \(u\).
For our particular problem, we use a known, previous solution in the coefficient \(k(u)\). Given a solution \(u^{n}\) from iteration \(n\), we seek a new (hopefully improved) solution \(u^{n+1}\) in iteration \(n+1\) such that \(u^{n+1}\) solves the linear problem
\[-\nabla \cdot\left(k\left(u^{n}\right)\, \nabla u^{n+1}\right)=f \quad \text{in } \Omega,\]
with the same boundary conditions as before.
The iterations require an initial guess \(u^{0}\). The hope is that \(u^{n} \rightarrow u\) as \(n \rightarrow \infty,\) and that \(u^{n+1}\) is sufficiently close to the exact solution \(u\) of the discrete problem after just a few iterations.
We can formulate a variational problem for \(u^{n+1}\). Equivalently, we can approximate \(k(u)\) by \(k\left(u^{n}\right)\) to obtain the same linear variational problem. In both cases, the problem consists of seeking \(u^{n+1} \in V\) such that
\[\tilde{F}\left(u^{n+1} ; v\right)=0 \quad \forall v \in V_{0},\]
with
\[\tilde{F}\left(u^{n+1} ; v\right)=\int_{\Omega} k\left(u^{n}\right)\, \nabla u^{n+1} \cdot \nabla v \,dx-\int_{\Omega} f\, v \,dx .\]
Since this is a linear problem in the unknown \(u^{n+1}\), we can equivalently use the formulation
\[a\left(u^{n+1}, v\right)=\ell(v) \quad \forall v \in V_{0},\]
with
\[a(u, v)=\int_{\Omega} k\left(u^{n}\right)\, \nabla u \cdot \nabla v \,dx, \qquad \ell(v)=\int_{\Omega} f\, v \,dx .\]
The iterations can be stopped when \(\epsilon \equiv\left\|u^{n+1}-u^{n}\right\|<\varepsilon\), where \(\varepsilon\) is small, say \(10^{-5}\), or when the number of iterations exceeds some critical limit. The latter case detects divergence of the method or unacceptably slow convergence.
In the solution algorithm we only need to store \(u^{n}\) and \(u^{n+1}\).
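To illustrate the algorithm, a minimal Picard loop could be written in Feel++ along the following lines. This is only a sketch under assumptions: the space Vh, the mesh, the boundary markers "left" and "right", the hardcoded exponent m=2, and the constants tol and maxit are not part of the original listing, and the Dirichlet conditions are imposed strongly with on() to keep the sketch short.
// Picard (successive substitution) loop: k(u^n) is frozen in each linear solve
auto u    = Vh->element();   // new iterate u^{n+1}
auto uOld = Vh->element();   // previous iterate u^n (initial guess u^0 = 0)
auto v    = Vh->element();
double err = 1;
for ( int n = 0; n < maxit && err > tol; ++n )
{
    auto k = expr( "(1+u)^2:u", "u", idv( uOld ), "k" );  // k(u^n), here with m = 2
    auto a = form2( _test=Vh, _trial=Vh );
    a = integrate( _range=elements( mesh ), _expr=k*gradt( u )*trans( grad( v ) ) );
    auto l = form1( _test=Vh );  // f = 0, so the right-hand side vanishes
    a += on( _range=markedfaces( mesh, "left"  ), _rhs=l, _element=u, _expr=cst( 0. ) );
    a += on( _range=markedfaces( mesh, "right" ), _rhs=l, _element=u, _expr=cst( 1. ) );
    a.solve( _rhs=l, _solution=u );
    err = normL2( _range=elements( mesh ), _expr=idv( u ) - idv( uOld ) );  // ||u^{n+1}-u^n||
    uOld = u;
}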
3. Newton strategy at the algebraic level
After having discretized our nonlinear PDE problem, we may use Newton’s method to solve the system of nonlinear algebraic equations. Inserting \(u=\sum_{j=1}^{N} U_{j} \phi_{j}\) and \(v=\hat{\phi}_{i}\) into the continuous variational problem, the discrete version results in a system of equations for the unknown parameters \(U_{1}, \ldots, U_{N}\):
\[F_{i}\left(U_{1}, \ldots, U_{N}\right) \equiv \int_{\Omega} k\Big(\sum_{j=1}^{N} U_{j} \phi_{j}\Big)\Big(\sum_{j=1}^{N} U_{j} \nabla \phi_{j}\Big) \cdot \nabla \hat{\phi}_{i} \,dx-\int_{\Omega} f\, \hat{\phi}_{i} \,dx=0, \quad i=1, \ldots, N .\]
Newton’s method for the system \(F_{i}\left(U_{1}, \ldots, U_{N}\right)=0\), \(i=1, \ldots, N\), can be formulated as
\[\sum_{j=1}^{N} \frac{\partial F_{i}}{\partial U_{j}}\left(U_{1}^{n}, \ldots, U_{N}^{n}\right) \delta U_{j}=-F_{i}\left(U_{1}^{n}, \ldots, U_{N}^{n}\right), \quad i=1, \ldots, N,\]
\[U_{j}^{n+1}=U_{j}^{n}+\omega\, \delta U_{j}, \quad j=1, \ldots, N,\]
where \(\omega \in[0,1]\) is a relaxation parameter and \(n\) is an iteration index. An initial guess \(u^{0}\) must be provided to start the algorithm. The original Newton method has \(\omega=1\), but in problems where it is difficult to obtain convergence, so-called under-relaxation with \(\omega<1\) may help.
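To make the update rule concrete, here is a minimal, self-contained scalar illustration of the relaxed Newton update; it is only a toy sketch (the residual \(F\) and its derivative are chosen arbitrarily for the example), not the Feel++ implementation discussed below.
#include <cmath>
#include <cstdio>

int main()
{
    // toy residual F(U) = (1+U)^3 - 8 and its derivative; the root is U = 1
    auto F  = []( double U ) { return std::pow( 1+U, 3 ) - 8; };
    auto dF = []( double U ) { return 3*std::pow( 1+U, 2 ); };

    double U = 0;        // initial guess U^0
    double omega = 1.0;  // relaxation parameter (omega < 1 gives under-relaxation)

    for ( int n = 0; n < 50 && std::fabs( F( U ) ) > 1e-12; ++n )
    {
        double deltaU = -F( U ) / dF( U );  // solve the (here scalar) linear system
        U += omega*deltaU;                  // relaxed update U^{n+1} = U^n + omega*deltaU
        std::printf( "n=%d  U=%.12f\n", n, U );
    }
}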
We need to compute the Jacobian matrix \(\partial F_{i} / \partial U_{j}\) and the right-hand side vector \(-F_{i}\). Our present problem has \(F_{i}\) given above. The derivative \(\partial F_{i} / \partial U_{j}\) becomes
\[\frac{\partial F_{i}}{\partial U_{j}}=\int_{\Omega}\left[k^{\prime}\Big(\sum_{l} U_{l} \phi_{l}\Big)\, \phi_{j}\Big(\sum_{l} U_{l} \nabla \phi_{l}\Big) \cdot \nabla \hat{\phi}_{i}+k\Big(\sum_{l} U_{l} \phi_{l}\Big)\, \nabla \phi_{j} \cdot \nabla \hat{\phi}_{i}\right] dx .\]
The following results were used to obtain this expression:
\[\frac{\partial u}{\partial U_{j}}=\frac{\partial}{\partial U_{j}} \sum_{l} U_{l} \phi_{l}=\phi_{j}, \qquad \frac{\partial}{\partial U_{j}} \nabla u=\nabla \phi_{j}, \qquad \frac{\partial}{\partial U_{j}} k(u)=k^{\prime}(u)\, \phi_{j} .\]
We can reformulate the Jacobian matrix above by introducing the short notation \(u^{n}=\sum_{j=1}^{N} U_{j}^{n} \phi_{j}\):
\[\frac{\partial F_{i}}{\partial U_{j}}=\int_{\Omega}\left[k^{\prime}\left(u^{n}\right)\, \phi_{j}\, \nabla u^{n} \cdot \nabla \hat{\phi}_{i}+k\left(u^{n}\right)\, \nabla \phi_{j} \cdot \nabla \hat{\phi}_{i}\right] dx .\]
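For our particular choice \(k(u)=(1+u)^{m}\), for instance, \(k^{\prime}(u)=m(1+u)^{m-1}\), so the Jacobian entries read
\[\frac{\partial F_{i}}{\partial U_{j}}=\int_{\Omega}\left[m\left(1+u^{n}\right)^{m-1} \phi_{j}\, \nabla u^{n} \cdot \nabla \hat{\phi}_{i}+\left(1+u^{n}\right)^{m}\, \nabla \phi_{j} \cdot \nabla \hat{\phi}_{i}\right] dx ;\]
these are exactly the two volume contributions assembled with diffusion_diff and diffusion in the implementation below.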
We need to formulate a corresponding variational problem. Looking at the linear system of equations in Newton’s method,
\[\sum_{j=1}^{N}\left(\int_{\Omega}\left[k^{\prime}\left(u^{n}\right)\, \phi_{j}\, \nabla u^{n} \cdot \nabla \hat{\phi}_{i}+k\left(u^{n}\right)\, \nabla \phi_{j} \cdot \nabla \hat{\phi}_{i}\right] dx\right) \delta U_{j}=-\int_{\Omega} k\left(u^{n}\right)\, \nabla u^{n} \cdot \nabla \hat{\phi}_{i} \,dx+\int_{\Omega} f\, \hat{\phi}_{i} \,dx, \quad i=1, \ldots, N,\]
we can introduce \(v\) as a general test function replacing \(\hat{\phi}_{i}\), and we can identify the unknown \(\delta u=\sum_{j=1}^{N} \delta U_{j} \phi_{j}\). From the linear system we can now go "backwards" to construct the corresponding discrete weak form
\[\int_{\Omega}\left[k^{\prime}\left(u^{n}\right)\, \delta u\, \nabla u^{n} \cdot \nabla v+k\left(u^{n}\right)\, \nabla \delta u \cdot \nabla v\right] dx=-\int_{\Omega} k\left(u^{n}\right)\, \nabla u^{n} \cdot \nabla v \,dx+\int_{\Omega} f\, v \,dx \quad \forall v \in V_{0} .\]
This equation fits the standard form \(a(\delta u, v)=\ell(v)\) with
\[a(\delta u, v)=\int_{\Omega}\left[k^{\prime}\left(u^{n}\right)\, \delta u\, \nabla u^{n} \cdot \nabla v+k\left(u^{n}\right)\, \nabla \delta u \cdot \nabla v\right] dx,\]
\[\ell(v)=-\int_{\Omega} k\left(u^{n}\right)\, \nabla u^{n} \cdot \nabla v \,dx+\int_{\Omega} f\, v \,dx .\]
An important feature of Newton’s method is that the previous solution \(u^{n}\) replaces \(u\) in the formulas when computing the matrix \(\partial F_{i} / \partial U_{j}\) and the vector \(F_{i}\) for the linear system in each Newton iteration.
3.1. Implementation of Newton’s method
We have implemented the model problem in Feel++.
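The snippets below use a function space Vh, a mesh mesh, elements u and v, and a penalty parameter penalbc, none of which are shown in the excerpts; a minimal setup could look as follows (a sketch under assumptions: 2D simplex mesh, P1 Lagrange space, arbitrary penalty value).
// setup assumed by the snippets below (names match the excerpts, values are illustrative)
auto mesh = loadMesh( _mesh=new Mesh<Simplex<2>> );  // 2D mesh, e.g. the unit square
auto Vh = Pch<1>( mesh );                            // P1 Lagrange space
auto u = Vh->element();                              // unknown (and Newton iterate)
auto v = Vh->element();                              // test function
double penalbc = 50;                                 // Nitsche penalty parameter (illustrative value)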
First we need to define the coefficients. We set the reaction term to 0 in the configuration file
reaction=0:x:y
The configuration is read in the code like this
// reaction coefficient as a symbolic expression in which the symbol "u" is substituted by the field u
auto reaction = expr( option(_name="reaction").as<std::string>(), "u", idv(u), "reaction" );
// derivative of the reaction expression with respect to u, needed for the Jacobian
auto reaction_diff = diff( option(_name="reaction").as<std::string>(), "u", "u", idv(u), "reaction_diff" );
Then we set the diffusion term
diffusion=(1+u)^m:x:y:m:u
The configuration is read in the code like this
// diffusion coefficient k(u) as a symbolic expression in which the symbol "u" is substituted by the field u
auto diffusion = expr( option(_name="diffusion").as<std::string>(), "u", idv(u), "diffusion" );
// derivative k'(u) of the diffusion expression with respect to u, needed for the Jacobian
auto diffusion_diff = diff( option(_name="diffusion").as<std::string>(), "u", "u", idv(u), "diffusion_diff" );
We then define the Jacobian
auto Jacobian = [=](const vector_ptrtype& X, sparse_matrix_ptrtype& J)
{
    if (!J) J = backend()->newMatrix( _test=Vh, _trial=Vh );
    auto a = form2( _test=Vh, _trial=Vh, _matrix=J );
    // volume terms: k(u) grad(du).grad(v) + k'(u) du grad(u).grad(v)
    a = integrate( _range=elements( mesh ), _expr=(diffusion*gradt( u )+diffusion_diff*idt(u)*gradv(u))*trans( grad( v ) ) );
    // derivative of the reaction term
    a += integrate( _range=elements( mesh ), _expr=reaction_diff*idt( u )*id( v ) );
    // Nitsche terms for the weak Dirichlet conditions: consistency, symmetry and penalty contributions
    a += integrate( _range=boundaryfaces( mesh ),
                    _expr=
                    ( - trans( id( v ) )*( (diffusion*gradt( u )+diffusion_diff*idt(u)*gradv(u))*N() )
                      + trans( idt( u ) )*( (diffusion+diffusion_diff*idv(u))*grad( v )*N() )
                      + penalbc*trans( idt( u ) )*id( v )/hFace() ) );
};
and the associated residual
auto Residual = [=](const vector_ptrtype& X, vector_ptrtype& R)
{
    // current iterate, extracted from X
    auto u = Vh->element();
    u = *X;
    auto r = form1( _test=Vh, _vector=R );
    // volume terms: k(u) grad(u).grad(v) and the reaction term
    r = integrate( _range=elements( mesh ), _expr=diffusion*gradv( u )*trans( grad( v ) ) );
    r += integrate( _range=elements( mesh ), _expr=reaction*id( v ) );
    // Nitsche terms for the weak Dirichlet conditions: consistency, symmetry and penalty contributions
    r += integrate( _range=boundaryfaces( mesh ),
                    _expr=
                    ( - trans( id( v ) )*( diffusion*gradv( u )*N() )
                      + trans( idv( u ) )*( diffusion*grad( v )*N() )
                      + penalbc*trans( idv( u ) )*id( v )/hFace() ) );
};
We use Nitsche’s method to impose the Dirichlet boundary conditions weakly; this is why the boundary integrals appear in both the Jacobian and the residual.
Finally we solve the nonlinear problem and export the solution:
u.zero();
backend()->nlSolver()->residual = Residual;
backend()->nlSolver()->jacobian = Jacobian;
backend()->nlSolve( _solution=u );
auto e = exporter( _mesh=mesh );
e->add( "u", u );
e->save();
The full code is available here.
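Assuming the program is built as an executable named feelpp_nonlinearpoisson (an illustrative name) and that the reaction and diffusion options are declared in the application's option description, the coefficients could also be supplied on the command line instead of the configuration file, for instance
./feelpp_nonlinearpoisson --reaction="0:x:y" --diffusion="(1+u)^m:x:y:m:u"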