Modeling and Analysis using the JSON files

The Model JSON (.json) files allow you to configure a set of partial differential equations; more precisely, they define:

1. Names

A model JSON file starts by giving names (long and short).

"Name": "Turek-Hron cfd2", (1)
"ShortName":"cfd2", (2)
1 Name of the example, usually printed on-screen and in log files during simulations
2 Short name of the example; it is used to create the directories that store the results of the simulation of the model
These names are not currently used, but they will be in the future.

2. Models

A section Models is present if the toolbox supports multiple physics, i.e. it allows selecting the type of PDE/model. In most toolboxes, this section is optional and a default model is enabled.

Here is an example of this section for the fluid toolbox.

"Models": (1)
{
   "equations":"Navier-Stokes" (2)
}
1 Section Models, used by the toolbox to define the main configuration and in particular the set of equations to be solved
2 toolbox-specific option defining the set of equations to be solved; read the toolbox manual to learn about the possible options.

3. Parameters

This section of the Model JSON file defines the parameters that may enter inside symbolic expressions (as symbols) used in the subsequent sections.

Example of a Parameters section
"Parameters": (1)
    {
        "ubar":"1.0", (2)
        "alpha":"2*ubar:ubar", (3)
        "beta":"{3*alpha,ubar}:alpha:ubar", (4)
        "chi":"t<2:t", (5)
        "pIn": (6)
        {
            "type":"fit", (7)
            "filename":"$cfgdir/pin.csv", (8)
            "abscissa":"time", (9)
            "ordinate":"pressure", (10)
            "interpolation":"P1", (11)
            "expr":"10*t+3:t" (12)
        }
    }
1 name of the section
2 defines a new parameter ubar and its associated value
3 defines a new parameter alpha and its associated expression. This expression depends on another symbol, here the parameter ubar. The symbol defined by this new parameter is also called alpha.
4 defines a new parameter beta and its associated expression. Here the expression is a vector of dimension 2. Consequently, the symbols generated by this new parameter are beta_0 and beta_1 (currently a vector cannot be used as a symbol).
5 defines a new parameter chi and its associated expression
6 defines a new parameter pIn and its definition is given in the subsection below
7 the type of parameter is fit
8 the filename of a csv file used for the fitting
9 column name of csv file used in abscissa
10 column name of csv file used in ordinate
11 interpolation type of the fit. Possible values are : P0, P1, Spline, Akima
12 expression used in order to read the fitted value
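
For reference, the CSV file used by a fit parameter is expected to provide the abscissa and ordinate columns named above; a hypothetical pin.csv could thus look like:

time,pressure
0.0,0.0
0.1,1.5
0.2,2.8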

Some parameter names are reserved and consequently cannot be used inside the Parameters section (although they can still be used as symbols inside expressions, see the sketch after this list). This is the case with:

  • t : represents the current time

  • x, y, z : the Cartesian coordinates

  • nx, ny, nz : the components of the normal vector
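
For instance, these reserved symbols can appear in expressions elsewhere in the JSON; a minimal sketch with a time-dependent parameter and a space-dependent Dirichlet value (the names ramp, temperature and inlet are only illustrative):

"Parameters":
{
    "ramp":"1-exp(-t):t"
},
"BoundaryConditions":
{
    "temperature":
    {
        "Dirichlet":
        {
            "inlet":
            {
                "expr":"293+ramp*x*y:ramp:x:y"
            }
        }
    }
}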

4. Materials

This section of the Model JSON file defines the material properties and links them to the physical entities of the mesh data structure.

Example of Materials section
"Materials":
    {
        "Water": (1)
        {
            "physics":"heat-fluid", (2)
            "markers":"[marker1,marker2]", (3)
            "rho":"1.0e3", (4)
            "mu":"1.0" (5)
            "k":"5.0" (6)
        },
        "Beam": (7)
        {
            "physics":"heat",
            "markers":"marker3",
            "rho":"3.3e7",
            "k":"1.0e2"
        }
    }
1 gives the name of a material.
2 defines which kind of physics is applied in this material. This entry is optional; by default all physics are applied. The value can also be a list of physics.
3 defines the mesh marker(s) where the material properties are applied. This entry is optional; by default the marker is taken to be the material name <1>.
4 density \(\rho\) is called rho and is given in SI units.
5 viscosity \(\mu\) is called mu and is given in SI units.
6 thermal conductivity is called k and is given in SI units.
7 starts the definition of another material named Beam.

We can define an arbitrary number of material properties but some names are reserved. The names reserved are :

  • for all materials : name, physics, markers, filename

  • properties defined by the physic used. For example, with the heat physic: rho, k, Cp, beta, … See the specific toolbox documentation.

A material property can be defined by a scalar, a vector (dim 2 or 3) or a square matrix (dim 2 or 3). For the material properties defined by the physic, the shape of the expression is imposed. For example, the density must be a scalar and the thermal conductivity must be a scalar or a matrix (not a vector). See also the specific toolbox documentation.
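
For instance, assuming the heat physic where k may be a scalar or a matrix, a 2D anisotropic conductivity could be written as a flattened matrix expression (the material and marker names below are hypothetical):

"Materials":
{
    "AnisotropicSolid":
    {
        "markers":"solid",
        "k":"{1.0e2,0,0,2.0e2}"
    }
}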

Moreover, each material property can be used inside symbolic expressions (as symbols). Depending on the shape of the expression, the symbols are defined as follows:

  • scalar expression : materials_<matName>_<propName>

  • vectorial expression : materials_<matName>_<propName>_0, materials_<matName>_<propName>_1, materials_<matName>_<propName>_2

  • matrix expression : materials_<matName>_<propName>_00, materials_<matName>_<propName>_01, materials_<matName>_<propName>_10, materials_<matName>_<propName>_11 (plus the entries involving the third index when the matrix dim is 3)

with <matName> the name given to the material and <propName> the name of the material property. In addition, symbols of material properties are also generated without the material name, i.e. of the form materials_<propName> (and potentially the component suffix 0,1,01,…). In the context of a single material, such a symbol represents exactly the same quantity as the symbol with the material name. But in a multi-material context, a property that appears in several materials can be expressed by this unique symbol, and the expression it represents is determined by its context of use. For example, if we integrate over the mesh, this symbol will be the property of Water on the marked elements related to Water and the property of Beam on the marked elements related to Beam.

If we take the previous example, the symbols available will be :

  • by material : materials_Water_rho, materials_Water_mu, materials_Water_k, materials_Beam_rho, materials_Beam_k

  • globally : materials_rho, materials_mu, materials_k

Using the global symbols can incur a small extra cost compared to the symbols containing the material name.
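
As an illustration, both forms can be used in an expression, for example in a PostProcess export (a sketch reusing the Water and Beam materials above; the export names are hypothetical):

"Exports":
{
    "expr":
    {
        "density_per_material":"materials_rho:materials_rho",
        "water_density":"materials_Water_rho:materials_Water_rho"
    }
}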

In a material subsection, we can directly use a symbol name belonging to this subsection without adding the prefix materials_<matName>. For example, we can define these materials:

"Materials":
{
    "Cu":
    {
        "alpha":326, (1)
        "sigma":12, (2)
        "k":"3*sigma+alpha:sigma:alpha" (3)
    },
    "Fe":
    {
        "alpha":26,
        "sigma":87,
        "k":"sigma-alpha:sigma:alpha"
    }
}
1 defines the symbol materials_Cu_alpha
2 defines the symbol materials_Cu_sigma
3 defines the symbol materials_Cu_k, depending on alpha (alias of materials_Cu_alpha) and sigma (alias of materials_Cu_sigma)
If the symbol is already defined inside the Parameters section, the alias symbol overrides the latter.
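
To illustrate this precedence rule, here is a hypothetical snippet where alpha exists both as a global parameter and as a Cu material property; inside the Cu subsection, alpha refers to materials_Cu_alpha, so k is evaluated with 326 rather than 10:

"Parameters":
{
    "alpha":"10"
},
"Materials":
{
    "Cu":
    {
        "alpha":326,
        "k":"3*alpha:alpha"
    }
}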

5. InitialConditions

This section of the Model JSON file defines the initial conditions. Depending on the type of model:

  • if we use a transient model, it corresponds to the initial conditions of the time scheme applied

  • if we use a steady model, it corresponds to the initial guess given to the solver

As presented below, there are two ways to define initial conditions: either using a mathematical expression or using a file.

Example of an InitialConditions section defined from mathematical expressions
"InitialConditions":
{
    "temperature": (1)
    {
        "Expression": (2)
        {
            "myic1": (3)
            {
                "markers":"Omega1", (4)
                "expr":"293" (5)
            },
            "myic2": (6)
            {
                "markers":["Omega2,Omega3]", (7)
                "expr":"305*x*y:x:y"  (8)
            }
        }
    }
}
1 the field name of the toolbox to which the initial condition is associated
2 the type of initial condition to apply, here Expression
3 a name that identifies an initial condition imposed on a field
4 the name of marker (or a list of markers) where an expression is imposed as initial condition. The markers can represent any kind of entity (Elements/Faces/Edges/Points). If this entry is not given, the expression is applied on the mesh support of the field.
5 an expression which is applied to the field
6 another name that identifies an initial condition
7 idem as <4>
8 idem as <5>
Example of an InitialConditions section defined from a file
"InitialConditions":
{
    "temperature": (1)
    {
        "File": (2)
        {
            "myic": (3)
            {
                "filename":"$home/feel/toolboxes/heat/temperature.h5", (4)
                "format":"hdf5" (5)
            }
        }
    }
}
1 the field name of the toolbox to which the initial condition is associated
2 the type of initial condition to apply, here File
3 a name that identifies an initial condition imposed on a field
4 a file containing a saved field (WARNING: it must be compatible with the current mesh and partitioning)
5 the format of the file to read (possible values are "default", "hdf5", "binary", "text"). This entry is optional; the default value is chosen by Feel++ (it is "hdf5" if Feel++ was compiled with HDF5 support).

6. BoundaryConditions

This section of the Model JSON file defines the boundary conditions.

Example of a BoundaryConditions section
"BoundaryConditions":
    {
        "velocity":  (1)
        {
            "Dirichlet": (2)
            {
                "inlet": (3)
                {
                    "expr":"{ 1.5*ubar*(4./0.1681)*y*(0.41-y),0}:ubar:y" (4)
                },
                "wall1": (5)
                {
                    "expr":"{0,0}" (6)
                },
                "wall2": (7)
                {
                    "expr":"{0,0}" (8)
                }
            }
        },
        "fluid": (9)
        {
            "outlet": (10)
            {
                "outlet": (11)
                {
                    "expr":"0" (12)
                }
            }
        }
    }
1 the field name of the toolbox to which the boundary condition is associated
2 the type of boundary condition to apply, here Dirichlet
3 the physical entity (associated to the mesh) to which the condition is applied
4 the mathematical expression associated to the condition, note that the parameter ubar is used
5 another physical entity to which Dirichlet conditions are applied
6 the associated expression to the entity
7 another physical entity to which Dirichlet conditions are applied
8 the associated expression to the entity
9 the toolbox variable to which the condition is applied, here fluid which corresponds to velocity and pressure \((\mathbf{u},p)\)
10 the type of boundary condition applied, here outlet or outflow boundary condition
11 the physical entity to which outflow condition is applied
12 the expression associated to the outflow condition, note that it is scalar and corresponds in this case to the condition \(\sigma(\mathbf{u},p)\,\mathbf{n} = 0\,\mathbf{n}\)

7. PostProcessing

This section defines the output fields and quantities to be computed and saved, e.g. for visualization.

Template of a PostProcess section
"PostProcess":
{
    "Exports":
    {
        "fields":["field1","field2",...]
    },
    "Save":
    {
        "Fields":
        {
             "names":["field1","field2",...]
             "format":"hdf5"                                                                                                                                                                                                                   }
    },
    "Measures":
    {
        "<measure type>":
        {
            ....
        }
    }
}

7.1. Exports

The Exports section is used when you want to visualize fields or mathematical expressions, for example with the ParaView software. There are two entries:

  • the entry fields should be filled with field names that are available in the toolbox used.

  • the entry expr should contain mathematical expressions (scalar, vectorial, tensorial)

Example of an Exports section
"Exports":
{
   "fields":["temperature","all"],  (1)
   "expr": (2)
   {
      "toto":"2*x*y:x:y", (3)
      "titi":  (4)
      {
         "parts": [ (5)
            {
               "expr":"3*x*y:x:y", (6)
               "markers":"Omega1" (7)
            },
            {
               "expr":"4*x*y:x:y", (8)
               "markers":"Omega2" (9)
            }
         ],
         "representation":["nodal","element"] (10)
      },
      "tutu": (11)
      {
         "expr":"{materials_k_00,materials_k_01,materials_k_10,materials_k_11}:materials_k_00:materials_k_01:materials_k_10:materials_k_11", (12)
         "representation":["nodal","element"] (13)
      }
   }
}
1 exports fields that are available in the toolbox used (see the toolbox documentation).
2 start the expression subsection
3 export a field named toto from a mathematical expression defined on the whole mesh
4 export a field named titi from mathematical expressions
5 starts a section named parts to indicate that the exported field is defined from several expressions, each related to a part of the mesh
6 an expression
7 markers where the expression is applied
8 another expression
9 markers where the previous expression is applied
10 representation of the exported field titi. Possible values are nodal, element, or both. This is an optional entry, the default value is nodal.
11 export a field named tutu
12 an expression
13 representation of the exported field tutu

7.2. Save

The Save section is used when you want to store data in the Feel++ format. For example, it can be useful to access these data and use them in another application. Currently, only fields (finite element approximations) can be saved.

Example of a Save section
"Save":
{
    "Fields":
    {
         "names": (1)
         "format": (2)
    }
}
1 the names of the fields that we want to save (can be a single name or a list of names)
2 the format used (possible values are "default", "hdf5", "binary", "text"). This entry is optional; the default value is chosen by Feel++ (it is "hdf5" if Feel++ was compiled with HDF5 support).

7.3. Measures

Several quantities can be computed after each time step of a transient simulation, or after the solve of a stationary simulation. The computed values are stored in a CSV file named <toolbox>.measures.csv. In the template of the PostProcess section, <measure type> is the name given to a measure. In the next subsections, we present some types of measure that are common to all toolboxes. Other types of measure are available but depend on the toolbox used; their description is given in the specific toolbox documentation.

The common measures are :

7.3.1. Points

TODO
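
Pending that description, the entries follow the Points block shown in the example of section 8; a minimal sketch (pointA and the field name are taken from that example):

"Points":
{
    "pointA":
    {
        "coord":"{0.6,0.2,0}",
        "fields":"pressure"
    }
}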

7.3.2. Statistics

The following statistics can be evaluated:

  • min : \( \underset{x\in\Omega}{\min}\, u(x) \)

  • max : \( \underset{x\in\Omega}{\max}\, u(x) \)

  • mean : \( \frac{1}{ | \Omega |} \int_{\Omega} u \)

  • integrate : \( \int_{\Omega} u \)

with u a function and \( \Omega\) the definition domain where the statistic is applied.

The next snippet shows an example of a Statistics section with several kinds of computation. The results are stored in the CSV file in columns named Statistics_mystatA_mean, Statistics_mystatB_min, Statistics_mystatB_max, Statistics_mystatB_mean, Statistics_mystatB_integrate.

Example of a Statistics section
"Statistics":
{
    "mystatA": (1)
    {
        "type":"mean", (2)
        "field":"temperature" (3)
    },
    "mystatB": (4)
    {
        "type":["min","max","mean","integrate"], (5)
        "expr":"2*x+y:x:y", (6)
        "markers":"omega" (7)
    }
}
1 the name associated with the first Statistics computation
2 the Statistics type
3 the field u evaluated in the Statistics (here the temperature field in the heat toolbox)
4 the name associated with the second Statistics computation
5 the Statistics type
6 the expression u evaluated in the Statistics
7 the mesh marker where the Statistics is computed (\(\Omega\) in the definitions above). This entry can be a list of markers

The function u can be a finite element field or a symbolic expression. We use the field entry for a finite element field and expr for a symbolic expression. field and expr cannot be used simultaneously.

All expressions can depend on specific symbols related to the toolbox used. For example, in the heat toolbox:

"expr":"2*heat_T+3*x:heat_T:x"

where heat_T is the temperature solution computed at the last solve. Expressions can also depend on a parameter defined in the Parameters section of the JSON.

The quadrature order used in the statistical evaluation can be specified. By default, the quadrature order is 5. For example, using a quadrature order of 10 is done by adding:

"quad":10
Quadrature order is also used with the min and max statistics. We get the min/max values by evaluating the expression at each quadrature point.
In the mean and integrate Statistics, the quadrature order is automatically chosen when field is used. In this case, the quad entry has no effect.
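
For instance, reusing the syntax of the Statistics example above, the quad entry is simply added alongside the other entries (mystatC is a hypothetical name):

"mystatC":
{
    "type":"integrate",
    "expr":"2*x+y:x:y",
    "markers":"omega",
    "quad":10
}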

The expression can be a scalar, a vector or a matrix. However, there is a particularity in the case of the mean or integrate statistics with a non-scalar expression: the result is not a scalar value but a vector or matrix, and each entry of this vector/matrix is stored in the CSV file.

7.3.3. Norm

The following norms can be evaluated:

  • L2 : \( \| u \|_{L^2} = \left ( \int_{\Omega} \| u \|^2 \right)^{\frac{1}{2}} \)

  • SemiH1 : \( | u |_{H^1} = \left ( \int_{\Omega} \| \nabla u \|^2 \right)^{\frac{1}{2}} \)

  • H1 : \( \| u \|_{H^1} = \left ( \int_{\Omega} \| u \|^2 + \int_{\Omega} \| \nabla u \|^2 \right)^{\frac{1}{2}} \)

  • L2-error : \( \| u-v \|_{L^2} = \left ( \int_{\Omega} \| u-v \|^2 \right)^{\frac{1}{2}} \)

  • SemiH1-error : \( | u-v |_{H^1} = \left ( \int_{\Omega} \| \nabla u-\nabla v \|^2 \right)^{\frac{1}{2}} \)

  • H1-error : \( \| u-v \|_{H^1} = \left ( \int_{\Omega} \| u-v \|^2 + \int_{\Omega} \| \nabla u-\nabla v \|^2 \right)^{\frac{1}{2}} \)

where \( \| \cdot \| \) represents the norm induced by the generalized inner product. The symbol u represents a field or an expression, and v an expression.

The next snippet shows an example of a Norm section with two norm computations. The results are stored in the CSV file in columns named Norm_mynorm_L2 and Norm_myerror_L2-error.

Example of a Norm section
"Norm":
{
    "mynorm": (1)
    {
        "type":"L2", (2)
        "field":"velocity" (3)
     },
     "myerror": (4)
     {
         "type":"L2-error", (5)
         "field":"velocity", (6)
         "solution":"{2*x,cos(y)}:x:y", (7)
         "markers":"omega" (8)
     }
}
1 the name associated with the first norm computation
2 the norm type
3 the field u evaluated in the norm (here the velocity field in the fluid toolbox)
4 the name associated with the second norm computation
5 the norm type
6 the field u evaluated in the norm
7 the expression v with the error norm type
8 the mesh marker where the norm is computed (\(\Omega\) in the definitions above). This entry can be a list of markers
With the H1-error or SemiH1-error norm types, the gradient of the solution must be given with the grad_solution entry. This input will probably be deduced automatically in the near future.

Several norms can be computed by listing them in the type entry:

"type":["L2-error","H1-error","SemiH1-error"],
"solution":"{2*x,cos(y)}:x:y",
"grad_solution":"{2,0,0,-sin(y)}:x:y",

The gradient of a vector field is a matrix field whose rows are the gradients of the components. This means that if the solution function is written f={f1,f2}, the grad_solution field has to be written as {dxf1,dyf1,dxf2,dyf2}:x:y (dxf1 standing for \(\partial_x f_1\)).

An expression (scalar/vector/matrix) can also be passed to evaluate the norm. In this case, the field entry must be removed and this expression replaces the symbol u.

"expr":"2*x*y:x:y"
As before, in the case of H1 or SemiH1 norm type, the grad_expr entry must be given.
"grad_expr":"{2*y,2*x}:x:y"

All expressions can depend on specific symbols related to the toolbox used. For example, in the heat toolbox:

"expr":"2*heat_T+3*x:heat_T:x"

where heat_T is the temperature solution computed at the last solve. Expressions can also depend on a parameter defined in the Parameters section of the JSON.

The quadrature order used in the norm computation can also be specified when an analytical expression is used. By default, the quadrature order is 5. For example, using a quadrature order of 10 is done by adding:

"quad":10

8. An example

"PostProcess": (1)
    {
        "Exports": (2)
        {
            "fields":["velocity","pressure","pid"] (3)
        },
        "Measures": (4)
        {
            "Forces":"wall2", (5)
            "Points": (6)
            {
                "pointA": (7)
                {
                    "coord":"{0.6,0.2,0}", (8)
                    "fields":"pressure" (9)
                },
                "pointB": (10)
                {
                    "coord":"{0.15,0.2,0}", (11)
                    "fields":"pressure" (12)
                }
            }
        }
    }
1 the name of the section
2 the Exports section identifies the toolbox fields that have to be exported for visualisation
3 the list of fields to be exported
4 the Measures section identifies outputs of interest such as:
5 Forces applied to a surface given by the physical entity wall2
6 Points values of fields
7 name of the point
8 coordinates of the point
9 fields to be computed at the point coordinate
10 name of the point
11 coordinates of the point
12 fields to be computed at the point coordinate

Here is a full example from the Toolbox examples.

9. The generator of cases by using the index definitions

Sometimes a large part of a JSON section is duplicated many times with only a few words/letters of the syntax changing. To avoid this repetition, a generic block can be created whose expansion is controlled by entries called index(i) (where (i) is an integer > 0).

This mechanism is currently available in the PostProcess section and in markers subtrees.

9.1. A first example

We want to apply several post-processing Statistics Measures of the same expression on several mesh markers called top, left, bottom and right. The classic way is to write these measures for each marker, which implies a lot of duplication, as illustrated in the next JSON snippet:

"Statistics":
{
    "my_top_eval":
    {
        "type":"integrate",
        "expr":"3.12*heat_dnT:heat_dnT",
        "markers":"top"
    },
    "my_left_eval":
    {
         "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"left"
    },
    "my_bottom_eval":
    {
         "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"bottom"
    },
    "my_right_eval":
    {
         "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"right"
    }
 }

The generic section that will generate exactly the same measures is :

"Statistics":
{
    "my_%1%_eval":
    {
        "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"%1%",
         "index1":["top","left","bottom","right"]
    }
}

The keyword %1% can be placed anywhere in the properties of the Statistics Measures and it will be replaced by the values given by index1.

For this example of measures, it is important to make sure that each measure name is unique, otherwise it will be overridden.

9.2. A second example

The previous case is a bit restrictive because only one value can be associated with each generated case. However, we can provide several values per case by using an array of arrays.

As an illustration, we have this JSON snippet that we want to factorize :

"Statistics":
{
    "Check_Heat-Flux_top":
    {
         "type":"integrate",
          "expr":"-heat_Concrete_k*heat_dnT - h_top*(heat_T-T0_top):heat_Concrete_k:heat_dnT:heat_T:h_top:T0_top",
          "markers":"top"
    },
    "Check_Heat-Flux_bottom":
    {
          "type":"integrate",
          "expr":"-heat_Aluminium_k*heat_dnT - h_bottom*(heat_T-T0_bottom):heat_Aluminium_k:heat_dnT:heat_T:h_bottom:T0_bottom",
          "markers":"bottom"
    },
    "Check_Heat-Flux_left":
    {
          "type":"integrate",
          "expr":"-heat_Wood_k*heat_dnT - h_left*(heat_T-T0_left):heat_Wood_k:heat_dnT:heat_T:h_left:T0_left",
          "markers":"left"
    },
    "Check_Heat-Flux_right":
    {
          "type":"integrate",
          "expr":"-heat_Insulation_k*heat_dnT - h_right*(heat_T-T0_right):heat_Insulation_k:heat_dnT:heat_T:h_right:T0_right",
          "markers":"right"
    }
}

The generic JSON section will be the following :

"Statistics":
{
    "Check_Heat-Flux_%1_1%":
     {
          "type":"integrate",
          "expr":"-heat_%1_2%_k*heat_dnT - h_%1_1%*(heat_T-T0_%1_1%):heat_%1_2%_k:heat_dnT:heat_T:h_%1_1%:T0_%1_1%",
          "markers":"%1_1%",
          "index1":[ ["top", "Concrete"],["bottom", "Aluminium"], ["left","Wood"], ["right","Insulation"] ]
     }
}

Compared to the previous case, the keywords used here are %1_1% and %1_2%. The number 1 placed in front indicates that index1 is used. The second number (after the underscore) corresponds to the id in the sub-array. Each sub-array in the index1 array must have the same size. In this example, the size of a sub-array is 2; consequently, the id in the sub-array can only take the value 1 or 2. In summary, this example generates 4 cases:

  • case 1 : %1_1% = top, %1_2% = Concrete

  • case 2 : %1_1% = bottom, %1_2% = Aluminium

  • case 3 : %1_1% = left, %1_2% = Wood

  • case 4 : %1_1% = right, %1_2% = Insulation

9.3. Cases generated by cartesian product

We can also generate a set of cases by a cartesian product of an arbitrary number of indexes, for example to generate several measures associated one-by-one with the following markers: matA3, matA5, matA7, matB3, matB5, matB7. As shown in the JSON snippet below, the cartesian product is automatically applied when more than one index is given:

"Statistics":
{
    "my_%1%_%2%_eval":
    {
        "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"mat%1%%2%",
         "index1":["A","B"],
         "index2":["3","5","7"]
    }
}

The keyword %1% (resp. %2%) is replaced by the values given by index1 (resp. index2). An arbitrary number of indexes can be used, but the ids must be contiguous and always start at 1 (index1, index2, index3, …).

We can also use the array-of-arrays format to give several values in an index:

"Statistics":
{
    "my_%1%_%2_2%_eval":
    {
        "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"mat%1%%2_1%",
         "index1":["A","B"],
         "index2":[ ["3","trois"],["5","cinq"],["7","sept"] ]
    }
}

Here we have the symbols %2_1% and %2_2% because index2 is built as an array of arrays.

  • case 1 : %1% = A, %2_1% = 3, %2_2% = trois

  • case 2 : %1% = A, %2_1% = 5, %2_2% = cinq

  • case 3 : %1% = A, %2_1% = 7, %2_2% = sept

  • case 4 : %1% = B, %2_1% = 3, %2_2% = trois

  • case 5 : %1% = B, %2_1% = 5, %2_2% = cinq

  • case 6 : %1% = B, %2_1% = 7, %2_2% = sept

Therefore, this example generates the following 6 measures :

  • my_A_trois_eval with markers assigned to matA3

  • my_A_cinq_eval with markers assigned to matA5

  • my_A_sept_eval with markers assigned to matA7

  • my_B_trois_eval with markers assigned to matB3

  • my_B_cinq_eval with markers assigned to matB5

  • my_B_sept_eval with markers assigned to matB7

9.4. Range of integers

A special syntax is designed to generate an index representing a range of integers. The sequence is defined by a start number, a stop number (not included) and a progression step. These parameters are separated by the symbol :, as we can see here:

  • 1:10 → 1,2,3,4,5,6,7,8,9

  • 1:10:2 → 1,3,5,7,9

This notation can be used in all index(i) entries (and also in an array of arrays). Therefore, we can rewrite the previous example with this syntax:

"Statistics":
{
    "my_%1%_%2%_eval":
    {
        "type":"integrate",
        "expr":"3.12*heat_dnT:heat_dnT",
        "markers":"mat%1%%2%",
        "index1":["A","B"],
        "index2":["3:9:2"]
    }
}

9.5. The markers entry

In many contexts (Materials, BoundaryConditions, PostProcess, …), it is necessary to give the names of mesh markers. Generally, an entry called markers should be filled. There are 3 ways to use it:

  1. Only one string

    "markers":"matA3"
  2. An array of strings

    "markers":["matA3","matA5","matA7","matB3","matB5","matB7"]
  3. A subtree with an entry called name that can be filled by one string or an array of strings

    "markers":
    {
       "name":["matA3","matA5","matA7","matB3","matB5","matB7"]
    }

    The subtree form was in fact introduced in order to generate mesh marker names using the index methodology explained previously. To generate the previous example, we can also write this JSON snippet:

"markers":
{
   "name":"mat%1%%2%",
   "index1":["A","B"],
   "index2":["3","5","7"]
}

9.6. Several levels of indexes

It is also possible to combine indexes at several levels of properties. The important thing is to keep a contiguous progression of the index ids. The following JSON snippet generates some Statistics Measures by using several indexes, and for each measure it also uses the markers generator with another index.

"Statistics":
{
    "my_%1%_%2%_eval":
    {
        "type":"integrate",
        "expr":"3.12*heat_dnT:heat_dnT",
        "markers":
        {
            "name":"mat%1%%2%_%3%",
            "index3":["x","y","z"]
        },
        "index1":["A","B"],
        "index2":["3:9:2"]
    }
}

This example generates the following 6 measures :

  • my_A_3_eval with markers assigned to matA3_x,matA3_y,matA3_z

  • my_A_5_eval with markers assigned to matA5_x,matA5_y,matA5_z

  • my_A_7_eval with markers assigned to matA7_x,matA7_y,matA7_z

  • my_B_3_eval with markers assigned to matB3_x,matB3_y,matB3_z

  • my_B_5_eval with markers assigned to matB5_x,matB5_y,matB5_z

  • my_B_7_eval with markers assigned to matB7_x,matB7_y,matB7_z

We need to use index3 in the markers subtree because index1 and index2 are already used in a parent property. If several generators are completely independent, each section should start again from index1. This is the case in the following example:

"Statistics":
{
    "my_%1%_eval1":
    {
        "type":"integrate",
         "expr":"3.12*heat_dnT:heat_dnT",
         "markers":"%1%",
         "index1":["top","left","bottom","right"]
    },
    "my_%1%_eval2":
    {
        "type":"integrate",
         "expr":"x*y:x:y",
         "markers":"%1%",
         "index1":["top","left","bottom","right"]
    }
}