Resolution strategy

Resolution strategy for Maxwell equations

1. Polarization method

The design and simulation of electrical engineering applications containing ferromagnetic parts require the accurate modeling of hysteresis characteristics, the implementation of the corresponding models, and the solution of the nonlinear partial differential equations derived from Maxwell’s equations.

The partial differential equations are generally nonlinear because of the nonlinear characteristics of ferromagnetic materials.

In the numerical analysis of electromagnetic fields, the fields are characterized by the electric and magnetic field intensities and flux densities governed by Maxwell’s equations, a collection of partial differential equations in the electric field intensity \(\mathbf{E}\), the magnetic field intensity \(\mathbf{H}\), the electric flux density \(\mathbf{D}\) and the magnetic flux density \(\mathbf{B}\).

Constitutive relations between the above quantities are defined to take into account the macroscopic properties of the medium.

We consider a nonlinear constitutive relation between the magnetic field intensity and the magnetic flux density, given by the operator \(\mathbf{B} = \mathcal{B}(\mathbf{H})\) or, equivalently, \(\mathbf{H} = \mathcal{B}^{-1}(\mathbf{B})\). This nonlinear characteristic can be reformulated by introducing the polarization method, and the resulting system of nonlinear equations can be solved using fixed point iterations.

According to the polarization method, the magnetic flux density can be split into two parts as

\[\mathbf{B} = \mu \mathbf{H} + \mathbf{R}\]

where \(\mu \mathbf{H}\) is a linear term with constant \(\mu\), and the nonlinearity is contained in the second term \(\mathbf{R}\), a magnetic flux density-like quantity.

The question is then the appropriate choice of the parameter \(\mu\). This representation can be rewritten as

\[\mathbf{R} = \mathbf{B}-\mu \mathbf{H} = \mathbf{B}-\mu \mathcal{B}^{-1}(\mathbf{B})\]

Here, the inverse-type hysteresis characteristics are used.
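
As an illustration of this splitting, the following minimal Python sketch evaluates the residual \(\mathbf{R} = \mathbf{B}-\mu \mathcal{B}^{-1}(\mathbf{B})\) pointwise for a scalar case; the saturating inverse characteristic and the value of \(\mu\) are made up for the example and are not part of the method itself.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def Binv(B, mu_r=1500.0, B_sat=1.8):
    """Assumed scalar inverse characteristic H = Binv(B): a simple
    saturating curve used only to illustrate the splitting."""
    return B / (mu_r * MU0) * (1.0 + (np.abs(B) / B_sat) ** 6)

def residual_R(B, mu):
    """Polarization residual R = B - mu * Binv(B)."""
    return B - mu * Binv(B)

B = np.linspace(0.0, 2.0, 5)
print(residual_R(B, mu=1000.0 * MU0))
```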

Note that the nonlinear equations are solved by iterative methods; the \(n\)-th iteration is denoted by the superscript \((n)\) in what follows. Using the constitutive relation together with Maxwell’s equations, the solution of the nonlinear partial differential equations with appropriate boundary conditions can be formulated as

\[\mathbf{B}^{(n)} = \mathcal{M}(\mathbf{R}^{(n-1)})\]

where the operator \(\mathcal{M}\) represents the set of Maxwell’s equations and the boundary conditions. The starting value \(\mathbf{R}^{(0)}\) is arbitrary. The electromagnetic field quantities at the \(n\)-th step depend on the value of \(\mathbf{R}\) at the \((n-1)\)-th step, which plays the role of a source-like quantity.

The source of the electromagnetic field (e.g. the electric current density \(\mathbf{J}\)) does not change during the fixed point iteration, i.e. the fixed point iteration itself has no particular physical meaning.

1.1. Abstract fixed point iteration

It may be better to use the reluctivity \(\nu\) when applying the inverse hysteresis model, since \(\nu(B) = \frac{\partial H}{\partial B}\). Multiplying the relation above by \(\nu_{\mathrm{o}} = 1/\mu_{\mathrm{o}}\), we obtain

\[\nu_{\mathrm{o}} \mathbf{B} = \mathbf{H} + \nu_{\mathrm{o}}\mathbf{R},\]

hence

\[\mathbf{H} = \nu_{\mathrm{o}} \mathbf{B} - \nu_{\mathrm{o}}\mathbf{R} = \nu_{\mathrm{o}} \mathbf{B} + \mathbf{I}\]

where \(\mathbf{I}= -\nu_{\mathrm{o}}\mathbf{R}\), \(\nu_{\mathrm{o}}=\frac{\nu_{\mathrm{min}}+\nu_{\mathrm{max}}}{2}\), and \(\nu_{\mathrm{min}}\) and \(\nu_{\mathrm{max}}\) are the minimum and maximum slopes of the inverse hysteresis characteristic, i.e. \(\nu_{\mathrm{min}} = 1/\mu_{\mathrm{max}}\) and \(\nu_{\mathrm{max}}=1/\mu_{\mathrm{min}}\).
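
As a small numerical illustration, the bounds \(\nu_{\mathrm{min}}\), \(\nu_{\mathrm{max}}\) and hence \(\nu_{\mathrm{o}}\) can be estimated from a tabulated single-valued \(B\)–\(H\) characteristic; the table values below are assumptions chosen only for the example.

```python
import numpy as np

# Assumed tabulated single-valued (B, H) characteristic (illustrative values only).
B_tab = np.array([0.0, 0.5, 1.0, 1.4, 1.7, 1.9])        # T
H_tab = np.array([0.0, 200., 500., 1500., 8000., 4e4])  # A/m

# Slopes of the inverse characteristic H(B) on each segment: nu = dH/dB.
slopes = np.diff(H_tab) / np.diff(B_tab)
nu_min, nu_max = slopes.min(), slopes.max()  # nu_min = 1/mu_max, nu_max = 1/mu_min

# Fixed point parameter nu_o = (nu_min + nu_max) / 2.
nu_o = 0.5 * (nu_min + nu_max)
print(nu_min, nu_max, nu_o)
```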

The nonlinear fixed point iteration can be summarized as follows. The iteration is started with an arbitrary value \(\mathbf{I}^{(0)}\); then, for \(n > 0\):

  • the magnetic flux density \(\mathbf{B}^{(n)}\) is computed by solving the partial differential equations obtained from Maxwell’s equations, using \(\mathbf{I}^{(n-1)}\), in other words,

\[\mathbf{B}^{(n)} =\mathcal{M}(\mathbf{I}^{(n-1)}),\]
  • the magnetic field intensity \(\mathbf{H}^{(n)}\) is computed by applying the inverse-type hysteresis model, \(\mathbf{H}^{(n)} = \mathcal{B}^{-1}(\mathbf{B}^{(n)}),\)

  • the nonlinear residual term is updated using the magnetic flux density and the magnetic field intensity,

\[\mathbf{I}^{(n)} = \mathbf{H}^{(n)} - \nu_{\mathrm{o}} \mathbf{B}^{(n)} = \mathcal{B}^{-1}(\mathbf{B}^{(n)}) - \nu_{\mathrm{o}} \mathbf{B}^{(n)}\]
  • and the sequence defined by steps (i)–(iii) is repeated until convergence.

A convergence criterion can be \(||\mathbf{I}^{(n)} - \mathbf{I}^{(n-1)}|| < \varepsilon\), where \(\varepsilon\) is a prescribed tolerance. The criterion can also be defined using the magnetic field intensity or the magnetic flux density.
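
The loop over steps (i)–(iii) can be sketched as follows. This is only a sketch of the iteration structure: the solver \(\mathcal{M}\) (maxwell_solve) and the inverse model \(\mathcal{B}^{-1}\) (Binv) are assumed to be supplied by the caller, e.g. a finite element solve and a hysteresis model.

```python
import numpy as np

def polarization_fixed_point(maxwell_solve, Binv, nu_o, I0, tol=1e-8, max_iter=200):
    """Abstract polarization fixed point iteration.

    maxwell_solve(I) -> B : operator M, i.e. the solution of Maxwell's equations
                            for a given source-like term I (assumed given).
    Binv(B) -> H          : inverse-type material / hysteresis model (assumed given).
    nu_o                  : fixed point parameter (nu_min + nu_max) / 2.
    """
    I = np.asarray(I0, dtype=float)
    for n in range(1, max_iter + 1):
        B = maxwell_solve(I)        # (i)   B^(n) = M(I^(n-1))
        H = Binv(B)                 # (ii)  H^(n) = Binv(B^(n))
        I_new = H - nu_o * B        # (iii) I^(n) = H^(n) - nu_o B^(n)
        if np.linalg.norm(I_new - I) < tol:
            return B, H, I_new, n
        I = I_new
    raise RuntimeError("polarization fixed point iteration did not converge")
```

The same loop applies verbatim to the \(\mathbf{A}\) formulation of the next subsection, with maxwell_solve standing for the curl-curl solve.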

1.2. Fixed point iteration for the \(A\) formulation

Recall that we have

\[\nabla \times (\nu_{\mathrm{o}} \nabla \times \mathbf{A} ) = \mathbf{J} - \nabla \times \mathbf{I}\quad \text{ in } \Omega\]

associated to proper boundary conditions for \(\mathbf{A}\). The fixed point iteration algorithm reads

We start with arbitrary \(\mathbf{I}^{(0)}\) and then for \(n > 0\)

  • solve

\[\nabla \times (\nu_{\mathrm{o}} \nabla \times \mathbf{A}^{(n)} ) = \mathbf{J} - \nabla \times \mathbf{I}^{(n-1)}\quad \text{ in } \Omega\]
  • compute

\[\mathbf{B}^{(n)}=\nabla \times \mathbf{A}^{(n)}\]

  • compute the residual

\[\mathbf{I}^{(n)} = \mathcal{B}^{-1}(\mathbf{B}^{(n)}) - \nu_{\mathrm{o}} \mathbf{B}^{(n)}\]
  • repeat steps i-iii until convergence, e.g.

\[||\mathbf{I}^{(n)}-\mathbf{I}^{(n-1)}|| < \varepsilon\]

2. Picard method

The main benefit of the polarization method is that the matrix stays constant: there is no need to rebuild the preconditioner or to reassemble the matrix at each iteration. However, as long as the volume of the nonlinear material is small compared to the total volume, the gain is proportionally small.

Generally, the nonlinear law links the magnitude of the magnetic flux density \(\|\mathbf{B}\|\) to the magnitude of the magnetic field intensity \(\|\mathbf{H}\|\). In other words, the material is assumed isotropic, so that we can work with the scalar relation \(B = \mu(H)\, H\) (with \(B = \|\mathbf{B}\|\) and \(H = \|\mathbf{H}\|\)) instead of the full vector relation \(\mathbf{B} = \mu(\|\mathbf{H}\|)\, \mathbf{H}\).

One can write:

\[\begin{aligned} B &= \mu(H) H, \\ &= \mu_r \mu_0 H, \end{aligned}\]

where \(\mu_r\) is the relative magnetic permeability.

We denote by \(\mu(H)\) the interpolation of the material data. It is defined using a \(P_1\) (piecewise linear) or Akima interpolation of the \(B\)–\(\frac{H}{B}\) dataset.
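
A minimal sketch of such an interpolation with SciPy, assuming the material data is available as tabulated \((H, B)\) pairs from which the pointwise permeability \(\mu = B/H\) is deduced (the exact layout of the dataset files used in practice may differ); both the \(P_1\) (piecewise linear) and the Akima variants are shown.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator, interp1d

# Assumed measured (H, B) dataset (illustrative values only).
H_tab = np.array([10., 100., 500., 1000., 5000., 20000.])  # A/m
B_tab = np.array([0.2, 0.9, 1.3, 1.45, 1.65, 1.8])          # T

mu_tab = B_tab / H_tab  # pointwise permeability mu = B / H

# P1 (piecewise linear) interpolation of mu as a function of H ...
mu_p1 = interp1d(H_tab, mu_tab, bounds_error=False,
                 fill_value=(mu_tab[0], mu_tab[-1]))
# ... or Akima interpolation of the same data.
mu_akima = Akima1DInterpolator(H_tab, mu_tab)

print(mu_p1(750.0), mu_akima(750.0))
```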

The Picard algorithm reads as follows (a code sketch of the loop is given after the list):

Given an initial \(\mu_r^{(0)}\) chosen arbitrarily:

  • solve

\[\nabla \times \left(\frac{1}{\mu_r^{(n)}} \nabla \times \mathbf{A}^{(n)} \right) = \mu_0 \mathbf{J}\quad \text{ in } \Omega\]
  • compute

\[B^{(n)}=\|\nabla \times \mathbf{A}^{(n)}\|\]
  • compute

\[H^{(n)} = \frac{B^{(n)}}{\mu_0 \mu_r^{(n)}}\]
  • compute

\[\mu_r^{(n+1)} = \frac{\mu(H^{(n)})}{\mu_0}\]
  • repeat steps i-iv until convergence, e.g.

\[||\mathbf{B}^{(n)}-\mathbf{B}^{(n-1)}|| < \varepsilon\]
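
The loop can be sketched as follows; the callbacks solve_A and curl_norm stand for the finite element machinery (not shown here) and mu_of_H is an interpolated law such as the mu_akima interpolant sketched above, so this is a structural sketch rather than a complete implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def picard(solve_A, curl_norm, mu_of_H, mu_r0=1000.0, tol=1e-6, max_iter=100):
    """Picard iteration on the relative permeability mu_r.

    solve_A(mu_r) -> A : solves curl(1/mu_r curl A) = mu0 * J (assumed given).
    curl_norm(A)  -> B : returns |curl A| at the evaluation points (assumed given).
    mu_of_H(H)   -> mu : interpolated material law, e.g. mu_akima above.
    """
    mu_r = np.atleast_1d(np.asarray(mu_r0, dtype=float))  # one value per point
    B_old = None
    for n in range(max_iter):
        A = solve_A(mu_r)
        B = curl_norm(A)                      # B^(n) = |curl A^(n)|
        H = B / (MU0 * mu_r)                  # H^(n) = B^(n) / (mu0 mu_r^(n))
        mu_r = np.asarray(mu_of_H(H)) / MU0   # mu_r^(n+1) = mu(H^(n)) / mu0
        if B_old is not None and np.linalg.norm(B - B_old) < tol:
            return A, mu_r, n
        B_old = B
    raise RuntimeError("Picard iteration did not converge")
```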

3. Linking Picard to the Polarization

The polarization method has some clear advantages. However, our benchmarks (to come) have shown that this algorithm is less robust than the Picard one.

We note here an idea, not yet implemented, that should circumvent this robustness problem of the polarization method.

We want to transfer the nonlinearity from the right-hand side to the matrix. When a criterion (still to be defined) is reached, we construct \(\mu_{N}\) as

\[\begin{aligned} B &= \mu_0 \mu_{opt} H + I \\ &= \mu_0 \mu_{N} H \\ \mu_{N} &= \mu_{opt} + \frac{1}{\mu_0} \frac{I}{H} \end{aligned}\]

We then restart the polarization algorithm with the initial \(\mathbf{I}\) set to zero.
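
A sketch of how this (not yet implemented) transfer could be computed pointwise is given below; the field arrays and the small threshold guarding against \(H \approx 0\) are assumptions of the sketch, not part of the method as stated above.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def equivalent_permeability(H, I, mu_opt, H_min=1e-9):
    """Compute mu_N = mu_opt + I / (mu0 * H), so that
    B = mu0 * mu_opt * H + I = mu0 * mu_N * H,
    falling back to mu_opt where |H| is numerically zero.
    The polarization iteration can then be restarted with I = 0."""
    H = np.asarray(H, dtype=float)
    I = np.asarray(I, dtype=float)
    mu_N = np.full_like(H, float(mu_opt))
    mask = np.abs(H) > H_min
    mu_N[mask] += I[mask] / (MU0 * H[mask])
    return mu_N
```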

4. Formulations

4.1. Saddle-point formulation

The first possibility is to add a constraint on the divergence using the Coulomb gauge, \( \nabla \cdot \mathbf{A} = 0 \). It is enforced by a scalar Lagrange multiplier \(p\), giving the saddle-point problem:

\[\begin{aligned} \nabla\times\nu_0\nabla\times\mathbf{A} + \nabla p &= \mathbf{J} - \nabla\times\mathbf{I} &\text{ in } \Omega\\ \nabla\cdot\mathbf{A} &= 0 &\text{ in } \Omega\\ \mathbf{A}\times\mathbf{n} &= \mathbf{A}_D &\text{ on } \partial\Omega\\ p &= 0 &\text{ on } \partial\Omega \end{aligned}\]

4.1.1. Variational formulation

The variational formulation then consists in finding \((\mathbf{A},p) \in ( X \subset H(\mathrm{curl},\Omega) \times H^1_0(\Omega))\) (see Notations) such that

\[\begin{aligned} &\int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}) \cdot (\mathbf{v} \times \mathbf{n}) + \int_{\Omega} \mathbf{v} \cdot \nabla p = \int_{\Omega} \mathbf{J} \cdot \mathbf{v}- \int_{\Omega} (\nabla \times \mathbf{I})\cdot \mathbf{v} ~~\forall \mathbf{v} \in Y \\ &\int_{\Omega} \mathbf{A} \cdot \nabla q = 0 ~~\forall q \in H^1_0(\Omega) \end{aligned}\]

The Dirichlet boundary condition on \(\mathbf{A}\), imposed in strong form, makes the boundary term vanish, and the condition is directly taken into account in the definition of the function space \(X = H_{\mathbf{A}_D}(\mathrm{curl},\Omega) = \{ \mathbf{v} \in H(\mathrm{curl},\Omega) \mid \mathbf{v} \times \mathbf{n} = \mathbf{A}_D ~\text{on} ~\partial \Omega\}\). The variational formulation then consists in finding \((\mathbf{A},p) \in ( H_{\mathbf{A}_D}(\mathrm{curl},\Omega) \times H^1_0(\Omega))\) such that

\[\begin{aligned} & \int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\Omega} \mathbf{v} \cdot \nabla p = \int_{\Omega} \mathbf{J} \cdot \mathbf{v} - \int_{\Omega} (\nabla \times \mathbf{I})\cdot \mathbf{v} ~~\forall \mathbf{v} \in H_{0}(\mathrm{curl},\Omega) \\ & \int_{\Omega} \mathbf{A} \cdot \nabla q = 0 ~~\forall q \in H^1_0(\Omega) \end{aligned}\]

We can also impose the Dirichlet boundary condition in weak form, adding symmetrization and penalization terms, which avoids adding the condition to the function space \(X\), i.e. \(X = H(\mathrm{curl},\Omega)\). As previously, \(\gamma\) is the penalization coefficient and \(h_s\) the mesh size. The variational formulation then consists in finding \((\mathbf{A},p) \in H(\mathrm{curl},\Omega)\times H^1_0(\Omega)\) such that \(\forall (\mathbf{v},q) \in H(\mathrm{curl},\Omega)\times H^1_0(\Omega)\)

\[\begin{align} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}) \cdot (\mathbf{v} \times \mathbf{n})&\\ + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}) \cdot (\mathbf{A} \times \mathbf{n}) + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v} \times \mathbf{n}) \cdot (\mathbf{A} \times \mathbf{n})& \\ + \int_{\Omega} \mathbf{v} \cdot \nabla p &= \int_{\Omega} \mathbf{J} \cdot \mathbf{v} - \int_{\Omega} (\nabla \times \mathbf{I})\cdot \mathbf{v}\\ &+ \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}) \cdot \mathbf{A}_D + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v} \times \mathbf{n}) \cdot \mathbf{A}_D\\ \int_{\Omega} \mathbf{A} \cdot \nabla q &= 0 \end{align}\]

4.1.2. Discretization

Since \(\mathbf{A}\) must be in \(H(\mathrm{curl},\Omega)\), we need to use Nédélec elements (see Notations). With the boundary condition imposed in strong form, the discrete problem becomes:
Find \((\mathbf{A}_h,p_h)\in (H_{\mathbf{A}_D,h}(\mathrm{curl},\Omega)\times P_{c,h}^1(\Omega))\) such that

\[\begin{aligned} & \int_{\Omega}\nu_0(\nabla \times \mathbf{A}_h) \cdot (\nabla \times \mathbf{v}_h) + \int_{\Omega} \mathbf{v}_h \cdot \nabla p_h = \int_{\Omega} \mathbf{J} \cdot \mathbf{v}_h - \int_{\Omega} (\nabla \times \mathbf{I})\cdot \mathbf{v}_h ~~\forall \mathbf{v}_h \in H_{0,h}(\mathrm{curl},\Omega) \\ & \int_{\Omega} \mathbf{A}_h \cdot \nabla q_h = 0 ~~\forall q_h \in P_{0,c,h}^1(\Omega) \end{aligned}\]

With the boundary condition imposed in weak form, the discrete problem becomes:
Find \((\mathbf{A}_h,p_h)\in (H_{h}(\mathrm{curl},\Omega)\times P_{c,h}^1(\Omega))\) such that \(\forall (\mathbf{v}_h,q_h) \in H_h(\mathrm{curl},\Omega)\times P^1_{0,c,h}(\Omega)\)

\[\begin{align} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}_h) \cdot (\nabla \times \mathbf{v}_h) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}_h) \cdot (\mathbf{v}_h \times \mathbf{n})&\\ + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}_h) \cdot (\mathbf{A}_h \times \mathbf{n}) + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v}_h \times \mathbf{n}) \cdot (\mathbf{A}_h \times \mathbf{n})& \\ + \int_{\Omega} \mathbf{v}_h \cdot \nabla p_h &= \int_{\Omega} \mathbf{J} \cdot \mathbf{v}_h - \int_{\Omega} (\nabla \times \mathbf{I})\cdot \mathbf{v}_h\\ &+ \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}_h) \cdot \mathbf{A}_D + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v}_h \times \mathbf{n}) \cdot \mathbf{A}_D\\ \int_{\Omega} \mathbf{A}_h \cdot \nabla q_h &= 0 \end{align}\]

4.2. Regularized formulation

The second way consists in considering the ungauged problem as a special case of the time-harmonic Maxwell equations. Using a Fourier transform in time, the problem can then be written as:

\[\begin{aligned} \nabla\times\nu_0\nabla\times\mathbf{A} + \varepsilon\mathbf{A} &= \mathbf{J} - \nabla\times\mathbf{I} &\text{ in } \Omega\\ \mathbf{A}\times\mathbf{n} &= \mathbf{A}_D &\text{ on } \partial\Omega \end{aligned}\]

4.2.1. Variational formulation

The variational formulation obtained consists in finding \(\mathbf{A} \in X \subset H(\mathrm{curl},\Omega)\) such that

\[\begin{aligned} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}) \cdot (\mathbf{v} \times \mathbf{n} ) + \int_{\Omega}\varepsilon \mathbf{A} \cdot \mathbf{v} = \int_{\Omega} \mathbf{J} \cdot \mathbf{v} - \int_{\Omega} (\nabla \times \mathbf{I}) \cdot \mathbf{v} ~\forall \mathbf{v} \in Y \end{aligned}\]

Imposing the Dirichlet boundary condition in strong form removes the boundary term, and the condition is embedded in the function space \(X = H_{\mathbf{A}_D}(\mathrm{curl},\Omega)\). The variational formulation becomes: find \(\mathbf{A} \in H_{\mathbf{A}_D}(\mathrm{curl},\Omega)\) such that

\[\begin{aligned} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\Omega}\varepsilon \mathbf{A} \cdot \mathbf{v} = \int_{\Omega} \mathbf{J} \cdot \mathbf{v} - \int_{\Omega} (\nabla \times \mathbf{I}) \cdot \mathbf{v} \quad \forall \mathbf{v} \in H_{0}(\mathrm{curl},\Omega) \end{aligned}\]

We can also impose the Dirichlet boundary condition in weak form, adding symmetrization and penalization terms, which avoids adding the condition to the function space \(X\), i.e. \(X = H(\mathrm{curl},\Omega)\). As previously, \(\gamma\) is the penalization coefficient and \(h_s\) the mesh size. The variational formulation then consists in finding \(\mathbf{A} \in H(\mathrm{curl},\Omega)\) such that \(\forall \mathbf{v} \in H(\mathrm{curl},\Omega)\)

\[\begin{align*} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}) \cdot (\nabla \times \mathbf{v}) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}) \cdot (\mathbf{v} \times \mathbf{n} )&\\ + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}) \cdot (\mathbf{A} \times \mathbf{n} ) + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v} \times \mathbf{n} ) \cdot (\mathbf{A} \times \mathbf{n} )&\\ + \int_{\Omega}\varepsilon \mathbf{A} \cdot \mathbf{v} &= \int_{\Omega} \mathbf{J} \cdot \mathbf{v} - \int_{\Omega} (\nabla \times \mathbf{I}) \cdot \mathbf{v} \\ &+ \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}) \cdot \mathbf{A}_D + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v} \times \mathbf{n} ) \cdot \mathbf{A}_D \end{align*}\]

4.2.2. Discretization

With the boundary condition imposed in strong form, the discrete problem is: find \(\mathbf{A}_h\in H_{\mathbf{A}_D,h}(\mathrm{curl},\Omega)\) such that

\[\begin{aligned} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}_h) \cdot (\nabla \times \mathbf{v}_h) + \int_{\Omega}\varepsilon \mathbf{A}_h \cdot \mathbf{v}_h = \int_{\Omega} \mathbf{J} \cdot \mathbf{v}_h - \int_{\Omega} (\nabla \times \mathbf{I}) \cdot \mathbf{v}_h \quad \forall \mathbf{v}_h \in H_{0,h}(\mathrm{curl},\Omega) \end{aligned}\]

With the boundary condition imposed in weak form, the discrete problem is: find \(\mathbf{A}_h\in H_h(\mathrm{curl},\Omega)\) such that \(\forall \mathbf{v}_h \in H_h(\mathrm{curl},\Omega)\)

\[\begin{align} \int_{\Omega}\nu_0(\nabla \times \mathbf{A}_h) \cdot (\nabla \times \mathbf{v}_h) + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{A}_h) \cdot (\mathbf{v}_h \times \mathbf{n} )&\\ + \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}_h) \cdot (\mathbf{A}_h \times \mathbf{n} ) + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v}_h \times \mathbf{n} ) \cdot (\mathbf{A}_h \times \mathbf{n} )& \\ + \int_{\Omega}\varepsilon \mathbf{A}_h \cdot \mathbf{v}_h &= \int_{\Omega} \mathbf{J} \cdot \mathbf{v}_h - \int_{\Omega} (\nabla \times \mathbf{I}) \cdot \mathbf{v}_h \\ &+ \int_{\partial \Omega}\nu_0 (\nabla \times \mathbf{v}_h) \cdot \mathbf{A}_D + \int_{\partial \Omega} \frac{\gamma}{h_s} \nu_0 (\mathbf{v}_h \times \mathbf{n} ) \cdot \mathbf{A}_D \end{align}\]

5. Solver and preconditioner

Many methods are available to solve the saddle-point formulation using iterative solvers. However, considering the number of non-zero entries in the matrix to invert, due to the use of edge-based finite elements, such a straightforward approach proves to be inefficient for this problem.

A block diagonal preconditioning approach for the time-harmonic Maxwell equations is introduced in [Greif07]. Combined with the nodal auxiliary space preconditioning technique proposed in [Xu07], specifically designed for \(H(\mathrm{curl})\)-conforming finite elements, this approach gives attractive performance results, detailed in [Li2010].

This section describes the main ingredients of the preconditioning method introduced in [Li2010]. Based on the saddle-point formulation, the magnetostatic problem takes the form \(\mathcal{K} \mathbf{x} = \mathbf{b}\)

\[\underbrace{ \begin{pmatrix} \mathcal{A} & \mathcal{B}^{T} \\ \mathcal{B} & 0 \end{pmatrix} }_{\mathcal{K}} \underbrace{ \begin{pmatrix} \mathbf{A} \\ p \end{pmatrix} }_{x} = \underbrace{ \begin{pmatrix} \mathbf{f} \\ 0 \end{pmatrix} }_{b}\]

This system is solved at two solver levels: an outer solver handling the resolution of the above system with the block diagonal preconditioner of [Greif07], and an inner solver handling the application of the auxiliary space preconditioning technique of [Xu07] to the first block of the latter.

5.1. Outer solver

The block diagonal preconditioner proposed in [Greif07] reads

\[ \mathcal{P}_{\mathcal{M},\mathcal{L}} = \begin{pmatrix} \mathcal{P}_{\mathcal{M}} & 0 \\ 0 & \mathcal{L} \end{pmatrix}\]

where

\[ \mathcal{P}_{\mathcal{M}} = \mathcal{A} + \mathcal{M} ~\text{with} ~\mathcal{M}_{i,j} = \displaystyle{ \int_{\Omega} \psi_j \cdot \psi_i }, ~~1 \leqslant i,j \leqslant n\]

and \(\mathcal{L}\) the scalar Laplacian on \(Q_h\) defined as

\[ \mathcal{L}_{i,j} = \displaystyle{ \int_{\Omega} \nabla \phi_j \cdot \nabla \phi_i }, ~~1 \leqslant i,j \leqslant m\]

The outer solver for \(\mathcal{K}\mathbf{x}=\mathbf{b}\) is a preconditioned MINRES, using the previously defined \(\mathcal{P}_{\mathcal{M},\mathcal{L}}\) as preconditioner, i.e. solving \(\mathcal{P}_{\mathcal{M},\mathcal{L}}^{-1} ~\mathcal{K} \mathbf{x} = \mathcal{P}_{\mathcal{M},\mathcal{L}}^{-1} ~\mathbf{b}\). At each iteration, applying the preconditioner amounts to solving the linear system

\[ \underbrace{ \begin{pmatrix} \mathcal{P}_{\mathcal{M}} & 0 \\ 0 & \mathcal{L} \end{pmatrix} }_{\mathcal{P}_{\mathcal{M},\mathcal{L}}} \underbrace{ \begin{pmatrix} \mathbf{v} \\ q \end{pmatrix} }_{x_{out}} = \underbrace{ \begin{pmatrix} \mathbf{c} \\ d \end{pmatrix} }_{b_{out}}\]

This linear system is solved block by block, using subspace solvers. The block \((1,1)\) consists in the linear system

\[ \mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathbf{c}\]

and the block \((2,2)\) is the linear system

\[ \mathcal{L} q = d\]

Solving both systems gives the solution \((\mathbf{v},q)\), which concludes the current MINRES iteration by updating \(\mathbf{x}\) in \(\mathcal{K}\mathbf{x}=\mathbf{b}\). The same process is then applied at each MINRES iteration, until convergence.

While the scalar elliptic problem \(\mathcal{L} q = d\) can be solved efficiently with standard methods, the \(H(\mathrm{curl})\)-conforming linear system \(\mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathbf{c}\) of the outer problem is trickier. This bottleneck can be overcome using a second level of preconditioning, applying the auxiliary space preconditioning method of [Xu07] to the inner problem \(\mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathbf{c}\).
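
The outer loop can be illustrated with SciPy on small random placeholder blocks standing in for the assembled finite element matrices. This is a structural sketch only: the two subspace solves are plain sparse factorizations here, whereas in practice the \((1,1)\) block is handled by the inner preconditioner described in the next subsection.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder blocks standing in for the assembled FEM matrices (illustration only).
n, m = 8, 4
rng = np.random.default_rng(0)
A = sp.csr_matrix(rng.random((n, n)))
A = A + A.T + n * sp.eye(n)              # symmetric "curl-curl" block
B = sp.csr_matrix(rng.random((m, n)))    # divergence-constraint block
M_mass = sp.eye(n)                       # edge mass matrix M
L_lap = sp.diags(np.arange(1.0, m + 1))  # scalar Laplacian L

# Saddle point system K x = b with x = (A_h, p_h) and b = (f, 0).
K = sp.bmat([[A, B.T], [B, None]], format="csr")
b = np.concatenate([np.ones(n), np.zeros(m)])

# Block diagonal preconditioner P_{M,L} = diag(P_M, L) with P_M = A + M.
PM_solve = spla.factorized((A + M_mass).tocsc())
L_solve = spla.factorized(sp.csc_matrix(L_lap))

def apply_precond(r):
    # Apply P_{M,L}^{-1} block by block (the two "subspace" solves).
    return np.concatenate([PM_solve(r[:n]), L_solve(r[n:])])

P_inv = spla.LinearOperator(K.shape, matvec=apply_precond)
x, info = spla.minres(K, b, M=P_inv)
print(info, np.linalg.norm(K @ x - b))
```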

5.2. Inner solver

The preconditioning method proposed in [Xu07] is used to solve the linear system \(\mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathbf{c}\).

This system is solved using a preconditioned conjugate gradient (CG) method, i.e. \(\mathcal{P}_{V}^{-1} \mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathcal{P}_{V}^{-1} \mathbf{c}\), where \(\mathcal{P}_{V}\) is the preconditioner described below. At each CG iteration, applying the preconditioner amounts to solving the linear system

\[ \mathcal{P}_{V} \mathbf{w} = \mathbf{r}\]

The preconditioner \(\mathcal{P}_{V}\) is defined such that

\[ \mathcal{P}_{V}^{-1} = \mathrm{diag}(\mathcal{P}_{\mathcal{M}})^{-1} + P(\bar{\mathcal{L}} + \bar{\mathcal{Q}})^{-1}P^{T} + C\mathcal{L}^{-1}C^{T}\]

where

\[\bar{\mathcal{L}} = \frac{1}{\mu} \mathrm{diag}(\mathcal{L},\mathcal{L},\mathcal{L})\]

and

\[ \bar{\mathcal{Q}} = \mathrm{diag}(\mathcal{M},\mathcal{M},\mathcal{M})\]

The matrix \(C\) contains the coefficients of the gradients of the \(Q_h\) basis functions \(\phi_j\) expressed in the \(H(\mathrm{curl})\)-conforming space \(V_h\):

\[ C = \{ C_{i,j} \}, ~1 \leqslant i \leqslant n, ~1 \leqslant j \leqslant m, ~\text{such that} ~ \nabla \phi_j = \sum \limits_{i=1}^{n} C_{i,j} \psi_i\]

and the matrix \(P\) is the nodal interpolation operator \(\Pi_{h}^{\mathrm{curl}}\) from \(Q_h^3\) to \(V_h\).

From the definition of \(\mathcal{P}_V\), the linear system \(\mathcal{P}_{V} \mathbf{w} = \mathbf{r}\) gives

\[ \mathbf{w} = \mathrm{diag}(\mathcal{P}_{\mathcal{M}})^{-1} \mathbf{r} + P\mathbf{y} + C \mathbf{z} ~~\text{with} ~\mathbf{y} = (\bar{\mathcal{L}} + \bar{\mathcal{Q}})^{-1}P^{T}\mathbf{r} ~~\text{and} ~\mathbf{z} = \mathcal{L}^{-1} C^{T} \mathbf{r}\]

The terms \(\mathbf{y}\) and \(\mathbf{z}\) are computed by solving two linear systems

\[\begin{align} & (\bar{\mathcal{L}} + \bar{\mathcal{Q}}) \mathbf{y} = P^{T}\mathbf{r} \\ & \mathcal{L} \mathbf{z} = C^{T} \mathbf{r} \end{align}\]

Once both systems are solved, \(\mathbf{w}\) can be assembled, which completes one application of the preconditioner used in solving the linear system \(\mathcal{P}_{\mathcal{M}} \mathbf{v} = \mathbf{c}\).
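
The action of \(\mathcal{P}_{V}^{-1}\) can be sketched as a SciPy LinearOperator, assuming the matrices \(\mathcal{P}_{\mathcal{M}}\), \(P\), \(C\), \(\bar{\mathcal{L}}+\bar{\mathcal{Q}}\) and \(\mathcal{L}\) have already been assembled (not shown here). The two auxiliary solves are plain sparse factorizations in this sketch, whereas in practice they would typically be replaced by (algebraic) multigrid cycles.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def auxiliary_space_preconditioner(PM, P, C, Lbar_plus_Qbar, L):
    """Return P_V^{-1} = diag(P_M)^{-1} + P (Lbar + Qbar)^{-1} P^T + C L^{-1} C^T
    as a LinearOperator.

    PM             : n x n H(curl) block P_M (only its diagonal is used here),
    P              : n x 3m interpolation matrix Pi_h^curl from Q_h^3 to V_h,
    C              : n x m discrete gradient matrix,
    Lbar_plus_Qbar : 3m x 3m vector Laplacian plus mass block,
    L              : m x m scalar Laplacian.
    All matrices are assumed already assembled.
    """
    inv_diag = 1.0 / PM.diagonal()
    solve_vec = spla.factorized(sp.csc_matrix(Lbar_plus_Qbar))
    solve_scal = spla.factorized(sp.csc_matrix(L))

    def apply(r):
        w = inv_diag * r                 # diagonal (smoothing) part: diag(P_M)^{-1} r
        w = w + P @ solve_vec(P.T @ r)   # nodal (vector) auxiliary correction y
        w = w + C @ solve_scal(C.T @ r)  # gradient (scalar) correction z
        return w

    return spla.LinearOperator(PM.shape, matvec=apply)
```

This operator can then be passed as the preconditioner of the conjugate gradient solve of the inner problem, e.g. spla.cg(PM, c, M=auxiliary_space_preconditioner(PM, P, C, LbarQbar, L)).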

References

  • [Greif07] C. Greif, D. Schötzau, Preconditioners for the discretized time-harmonic Maxwell equations in mixed form, Numerical Linear Algebra with Applications, 2007.

  • [Xu07] R. Hiptmair, J. Xu, Nodal auxiliary space preconditioning in H(curl) and H(div) spaces, SIAM Journal on Numerical Analysis, 45(6), 2483-2509, 2007.

  • [Li2010] …​