Inf-sup lower bound

Lower bound for coercivity constant

We require a lower bound \(\alpha_{\mathrm{LB}}(\mu)\) for the coercivity constant \(\alpha(\mu) = \alpha_c(\mu)\), for all \(\mu \in \mathcal{D}^\mu\).

Two strategies are available:

  • the « Min \(\Theta\) » approach, if \(a\) is parametrically coercive (i.e. all coefficients \(\Theta^q(\mu)\) are positive and all \(a^q\) are positive semi-definite; see below),

  • and, more generally, the Successive Constraint Method (SCM), which also applies to « Inf-Sup » stable problems (Stokes, Helmholtz, …​).

1. « Min \(\Theta\) » Approach

1.1. Lower bound for \(\alpha(\mu)\)


For a parametrically coercive bilinear form

  • \(\Theta^q(\mu) > 0,\ \forall \mu \in \mathcal{D}^\mu\) and

  • \(a^q(v,v) \geq 0,\ \forall v \in X,\ 1 \leq q \leq Q\)

We have

\[\Theta^{\mathrm{min},\bar{\mu}}_a(\mu) = \alpha(\bar{\mu})\min_{q=1...Q}\frac{\Theta_a^q(\mu)}{\Theta_a^q(\bar{\mu})} \leq \alpha(\mu)\]

where \(\bar{\mu} \in \mathcal{D}^\mu\) is the reference parameter used to define the \(X\)-inner product and its induced norm.

Recall that

\(\begin{align} (u,v)_X &= a(u,v;\bar{\mu}), \quad \forall u,v \in X\\ \|v\|_X &= \sqrt{(v,v)_X}, \quad \forall v \in X \end{align}\)
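Evaluating this bound is inexpensive once the affine coefficients are available. The sketch below assumes the \(\Theta_a^q\) are given as Python callables (`theta_a`, a hypothetical input); note that with the inner product \((u,v)_X = a(u,v;\bar{\mu})\), one has \(\alpha(\bar{\mu}) = 1\).

```python
def alpha_lb_min_theta(theta_a, mu, mu_bar, alpha_bar=1.0):
    """'Min Theta' coercivity lower bound.

    theta_a   -- list of callables: theta_a[q](mu) = Theta_a^q(mu) > 0
    alpha_bar -- alpha(mu_bar); equals 1 when the X-inner product is
                 chosen as a(., .; mu_bar)
    """
    return alpha_bar * min(th(mu) / th(mu_bar) for th in theta_a)

# hypothetical affine decomposition with Q = 2 terms
theta_a = [lambda mu: 1.0, lambda mu: mu[0]]
print(alpha_lb_min_theta(theta_a, mu=(0.5,), mu_bar=(1.0,)))  # 0.5
```

The online cost is indeed \(O(Q_a)\): one coefficient ratio per affine term.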

1.2. Upper bound for \(\gamma(\mu)\)

Similarly we develop an upper bound \(\gamma_{\mathrm{UB}}(\mu)\) for \(\gamma(\mu)\). We define

\[ \infty > \Theta^{\mathrm{max},\bar{\mu}}_a(\mu) = \gamma(\bar{\mu}) \max_{q=1...Q}\frac{\Theta_a^q(\mu)}{\Theta_a^q(\bar{\mu})}\geq \gamma(\mu)\]

Remark : \(\gamma_{\mathrm{UB}}(\mu)\) is not actually required in practice, but it is relevant in the theory.

1.3. Summary

If \(a\) is parametrically coercive we then choose

  • the coercivity constant lower bound to be \(\alpha_{\mathrm{LB}}(\mu)\equiv \Theta^{\mathrm{min},\bar{\mu}}_a(\mu)\)

  • and the continuity constant upper bound to be (\(a\) symmetric) \(\gamma_{\mathrm{UB}}(\mu)\equiv \Theta^{\mathrm{max},\bar{\mu}}_a(\mu)\)

Remark on cost :

  • Online cost to evaluate \(\alpha_{\mathrm{LB}}(\mu)\) : \(O(Q_a)\)

  • Choice of inner product important \((u,v)_X = a(u,v;\bar{\mu})\) (see multiple inner products approach)

  • Extends to non-symmetric problems by considering the symmetric part \(a_s(u,v;\mu) = \frac{1}{2}\left( a(u,v;\mu)+a(v,u;\mu) \right)\)

2. Successive Constraint Method (SCM)

2.1. Stability estimates

We wish to compute \(\alpha_{\mathrm{LB}}: \mathcal{D} \rightarrow \mathbb{R}\) such that

\[0 < \alpha_{\mathrm{LB}}(\mu) \leq \alpha^{\mathcal{N}}(\mu), \quad \mu \in \mathcal{D}\]

and its computation is rapid, \(O(1)\) with respect to \(\mathcal{N}\), where \(\alpha^{\mathcal{N}}(\mu)= \displaystyle\inf_{w \in X^{\mathcal{N}}} \frac{a(w,w;\mu)}{\|w\|_X^2}\)

Computation of \(\alpha^{\mathcal{N}}(\mu)\) : \(\alpha^{\mathcal{N}}(\mu)\) is the minimum eigenvalue of the following generalized eigenvalue problem

\[a(w,v;\mu) = \lambda(\mu)\ m(w,v), \quad (A w = \lambda B w)\]

where \(m(\cdot,\cdot)\) is the bilinear form associated with \(\|\cdot\|_X\), i.e. \(m(w,v) = (w,v)_X\), and \(B\) is the associated matrix.
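Assuming the assembled matrices \(A\) (for \(a(\cdot,\cdot;\mu)\)) and \(B\) (for the \(X\)-inner product) are at hand, the smallest generalized eigenvalue can be computed with SciPy. This is a sketch on toy dense matrices, not the actual solver used in practice:

```python
import numpy as np
from scipy.linalg import eigh

def alpha_N(A, B):
    """Discrete coercivity constant: smallest eigenvalue of A w = lambda B w.

    A -- symmetric matrix of a(., .; mu) (use its symmetric part if needed)
    B -- symmetric positive-definite matrix of the X-inner product
    """
    # eigh handles the generalized symmetric eigenproblem; we request
    # only the smallest eigenvalue (index 0)
    return eigh(A, B, eigvals_only=True, subset_by_index=[0, 0])[0]

# toy 2x2 illustration: with B = I the eigenvalues of A are 2 and 3
A = np.diag([2.0, 3.0])
B = np.eye(2)
print(alpha_N(A, B))  # 2.0
```

For large sparse finite element matrices one would instead use a shifted iterative eigensolver (e.g. `scipy.sparse.linalg.eigsh` in shift-invert mode).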

2.2. Successive Constraint method : Reformulation

Reformulation as a minimization problem. First recall that \( a(w,v;\mu) = \displaystyle\sum_{q=1}^{Q_{a}} \theta_q(\mu)\ a_q(w,v)\). Hence we have \( \alpha^{\mathcal{N}}(\mu)= \displaystyle\inf_{w \in X^{\mathcal{N}}} \displaystyle\sum_{q=1}^{Q_a} \theta_q(\mu) \frac{a_q(w,w)}{\|w\|_X^2}\), and we set \( \mathcal{J}^{\mathrm{obj}}(w;\mu) = \displaystyle\sum_{q=1}^{Q_a} \theta_q(\mu) \frac{a_q(w,w)}{\|w\|_X^2}\).

Reformulation : We have the following optimisation problem \( \alpha^{\mathcal{N}}(\mu)= \displaystyle\inf_{y \in \mathcal{Y}} \mathcal{J}^{\mathrm{obj}}(\mu; y)\), where \( \mathcal{J}^{\mathrm{obj}}(\mu; y) \equiv \displaystyle\sum_{q=1}^{Q_a} \theta_q(\mu) y_q\) and \( \mathcal{Y} = \left\{ y \in \mathbb{R}^{Q_a} |\ \exists w \in X^{\mathcal{N}}\ \mathrm{s.t.}\ y_q = \frac{a_q(w,w)}{\|w\|_{X^{\mathcal{N}}}^2}, 1 \leq q \leq Q_a \right\}\)

We now need to characterize \(\mathcal{Y}\). To do this, we construct two sets \(\mathcal{Y}_{\mathrm{LB}}\) and \(\mathcal{Y}_{\mathrm{UB}}\) such that \(\mathcal{Y}_{\mathrm{UB}} \subset \mathcal{Y} \subset \mathcal{Y}_{\mathrm{LB}}\), over which minimizing \(\mathcal{J}^{\mathrm{obj}}\) is feasible.

Successive Constraint method: Ingredients

First we set the design space for the minimisation problem. We introduce

\[ \mathcal{B} = \prod_{q=1}^{Q_a} \left[ \inf_{w\in X^{\mathcal{N}}} \frac{a_q(w,w)}{\|w\|_X^2}; \sup_{w\in X^{\mathcal{N}}} \frac{a_q(w,w)}{\|w\|_X^2} \right]\]
\[ \Xi = \left\{ \mu_i \in \mathcal{D}; i=1,...,J \right\}\]


\[C_K = \left\{ \mu_i \in \Xi; i=1,...,K \right\} \subset \Xi\]

The set \(\Xi\) is constructed by successive \(\frac{1}{2^p}\) subdivision of \(\mathcal{D}\): in 1D this yields \(0,\ 1;\ \frac{1}{2};\ \frac{1}{4},\ \frac{3}{4};\ \ldots\) The set \(C_K\) is constructed by a greedy algorithm.

Finally, we denote by \(P_M(\mu;E)\) the set of the \(M\) points of \(E\) closest to \(\mu\). We shall need such sets to construct the lower bounds.
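For illustration, the closest-point selection \(P_M(\mu;E)\) can be sketched as follows (the Euclidean distance on \(\mathcal{D}\) is an assumption; any metric works):

```python
import numpy as np

def P_M(mu, E, M):
    """Return the M points of the sample E closest to mu."""
    E = np.atleast_2d(np.asarray(E, dtype=float))
    d = np.linalg.norm(E - np.asarray(mu, dtype=float), axis=1)
    # indices of the M smallest distances (capped by the size of E)
    return E[np.argsort(d)[:min(M, len(E))]]

pts = [[0.0], [0.4], [1.0]]
print(P_M([0.5], pts, 2))  # the two closest points: 0.4 and 0.0
```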

2.3. Successive Constraint method: Lower bounds \(\mathcal{Y}_{\mathrm{LB}}\)

Given \(M_\alpha, M_+ \in \mathbb{N}\) we are now ready to define \(\mathcal{Y}_{\mathrm{LB}}\)

\[\begin{align} \mathcal{Y}_{\mathrm{LB}}(\mu; C_K) = \Big\{ y \in \mathbb{R}^{Q_a}|& y \in \mathcal{B}, \\ &\sum_{q=1}^{Q_a} \theta_q(\mu') y_q \geq \alpha^{\mathcal{N}}(\mu'),\ \forall \mu' \in P_{M_\alpha}(\mu;C_K) \\ &\sum_{q=1}^{Q_a} \theta_q(\mu') y_q \geq \alpha_{\mathrm{LB}}(\mu';C_{K-1}),\ \forall \mu' \in P_{M_+}(\mu;\Xi\backslash C_K) \Big\} \end{align}\]

Computing \(\alpha_{\mathrm{LB}}(\mu;C_K)\) is in fact a linear program with \(Q_a\) design variables, \(y_q\), and \(2 Q_a+M_\alpha+M_+\) constraints online. It requires the construction of \(C_K\) offline.
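This online LP maps directly onto a generic solver. Below is a sketch using `scipy.optimize.linprog`; all input data (`box`, `theta_rows`, `rhs`) are hypothetical stand-ins for \(\mathcal{B}\), the coefficient rows \(\theta_q(\mu')\), and the stability right-hand sides:

```python
import numpy as np
from scipy.optimize import linprog

def scm_online_lp(theta_mu, box, theta_rows, rhs):
    """Online SCM lower bound alpha_LB(mu; C_K) as a linear program.

    theta_mu   -- (Q_a,) objective coefficients theta_q(mu)
    box        -- list of (lo, hi) pairs defining the bounding box B
    theta_rows -- (m, Q_a) rows of coefficients theta_q(mu')
    rhs        -- (m,) right-hand sides alpha^N(mu') or alpha_LB(mu')
    """
    # linprog minimizes c @ y subject to A_ub @ y <= b_ub, so the
    # ">= rhs" stability constraints are sign-flipped
    res = linprog(c=theta_mu, A_ub=-np.asarray(theta_rows),
                  b_ub=-np.asarray(rhs), bounds=box, method="highs")
    return res.fun

# hypothetical data: Q_a = 2 design variables, one stability constraint
box = [(0.0, 1.0), (0.0, 1.0)]
alpha = scm_online_lp(np.array([1.0, 0.5]), box,
                      np.array([[1.0, 1.0]]), np.array([0.5]))
print(alpha)  # 0.25: the minimizer puts y = (0, 0.5)
```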

2.4. Successive Constraint method: Upper bounds \(\mathcal{Y}_{\mathrm{UB}}\)

Let \( \mathcal{Y}_{\mathrm{UB}}( C_K ) = \left\{ y^*(\mu_k),\ 1 \leq k \leq K \right\}\) with \( y^*(\mu) = \mathrm{arg}\mathrm{min}_{y \in \mathcal{Y}}\ \mathcal{J}^{\mathrm{obj}}( \mu; y )\). We then set \( \alpha_{\mathrm{UB}}( \mu; C_K) = \inf_{y \in \mathcal{Y}_{\mathrm{UB}}(C_K)}\ \mathcal{J}^{\mathrm{obj}}(\mu;y)\).

Offline, \(\mathcal{Y}_{\mathrm{UB}}\) requires \(K\) eigensolves to compute the eigenmodes \(\eta_k\) associated with \(\mu_k,\ k=1,\ldots,K\), and \(K Q_a \mathcal{N}\) inner products to compute the \(y^*_q(\mu_k)=\frac{a_q(\eta_k,\eta_k)}{\|\eta_k\|_{X^{\mathcal{N}}}^2},\ k=1,\ldots,K\). Computing \(\alpha_{\mathrm{UB}}( \mu; C_K)\) online is then a simple enumeration.

2.5. Successive Constraint method: \(C_K\)

Given \(\Xi\) and \(\epsilon \in [0,1]\), we set \(C_{K_\mathrm{max}} = \mathrm{Greedy}(\Xi, \epsilon)\).

  • While \(\displaystyle\max_{\mu \in \Xi}\ \frac{\alpha_{\mathrm{UB}}( \mu; C_K) - \alpha_{\mathrm{LB}}( \mu; C_K)}{\alpha_{\mathrm{UB}}( \mu; C_K)} > \epsilon\)

    • \(\mu_{K+1} = \mathrm{arg} \mathrm{max}_{\mu \in \Xi}\ \frac{\alpha_{\mathrm{UB}}( \mu; C_K) - \alpha_{\mathrm{LB}}( \mu; C_K)}{\alpha_{\mathrm{UB}}( \mu; C_K)}\)

    • \(C_{K+1} = C_K \cup \{ \mu_{K+1} \}\)

    • \(K \leftarrow K+1\)

  • set \(K_{\mathrm{max}} = K\)
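The loop above can be sketched generically. Here `alpha_ub` and `alpha_lb` stand for the upper- and lower-bound routines described earlier and are assumed to be provided; the bound callables in the example are dummies whose gap halves with each added point:

```python
def scm_greedy(Xi, eps, alpha_ub, alpha_lb):
    """Greedy selection of C_K from the train sample Xi (sketch).

    alpha_ub(mu, C) and alpha_lb(mu, C) are callables returning the SCM
    upper and lower bounds for the current sample C.
    """
    C = [Xi[0]]  # initialize with an arbitrary first point
    while True:
        # relative bound gap at every training point
        gaps = [(alpha_ub(mu, C) - alpha_lb(mu, C)) / alpha_ub(mu, C)
                for mu in Xi]
        k = max(range(len(Xi)), key=gaps.__getitem__)
        if gaps[k] <= eps:
            return C  # K_max = len(C)
        C.append(Xi[k])

# dummy bounds for illustration only
C = scm_greedy([0.0, 0.25, 0.5, 1.0], 0.2,
               alpha_ub=lambda mu, C: 1.0,
               alpha_lb=lambda mu, C: 1.0 - 0.5 ** len(C))
print(len(C))  # 3
```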

2.6. Successive Constraint method: Offline-Online


  • \(2Q_a+M_\alpha+M_+\) eigensolves for \(\alpha^{\mathcal{N}}(\mu)\) and \(y^*(\mu_k)\)

  • \(n_\Xi K_{\mathrm{max}}\) linear programs \(LP(Q_a,M_\alpha,M_+)\) to build \(C_{K_{\mathrm{max}}}\)

  • \(K_{\mathrm{max}} Q_a\) inner products over \(X^{\mathcal{N}} \Rightarrow \mathcal{Y}_{\mathrm{UB}}\)

Given \(\mu \in \mathcal{D}\), \(\alpha_{\mathrm{LB}}(\mu) = \mathrm{Online}( \mu, C_{K_{\mathrm{max}}}, M_\alpha, M_+ )\).

  • sort over \(C_{K_{\mathrm{max}}}\) and \(\Xi\backslash C_{K_{\mathrm{max}}} \Rightarrow P_{M_\alpha}(\mu;C_{K_{\mathrm{max}}})\) and \(P_{M_+}(\mu;\Xi\backslash C_{K_{\mathrm{max}}})\)

  • \((M_\alpha+M_++2) Q_a\) evaluations of \(\theta_q(\mu')\)

  • \(M_\alpha\) lookups to get \(\mu' \rightarrow \alpha^{\mathcal{N}}(\mu')\)

  • one linear program \(LP(Q_a,M_\alpha,M_+)\) to get \(\alpha_{\mathrm{LB}}(\mu)\)