5. Pre/Post Processing

5.1. Sequential Grid Processing

Prior to running the FUN3D solver, you will need to process your grid to get it into the FUN3D v3 format. This also includes domain decomposition for parallel processing if you have a multiprocessor system available. If you will not be running on more than one processor, you must still partition your grid, into a single partition. A wide range of input grid formats is currently supported; see the Party main menu for guidance. Contact FUN3D Support for assistance in converting other formats.

The tool that handles this grid conversion process is called Party. This utility also allows the user to reconstruct global solution files from a partitioned domain. A rule of thumb for Party is that 1 million nodes require about 1.5 GB of memory. If you have that much memory on your machine and still encounter a problem, check the "limit" command for csh (or ulimit for the Bourne shell) and ensure that datasize, stacksize, and memoryuse are unlimited (or the largest your system will permit).

csh: limit stacksize unlimited
bsh: ulimit -s unlimited   
     ulimit -d unlimited
     ulimit -m unlimited

Processing a Grid

To process your grid, simply run party and follow the menu prompts. You will need to have your grid in one of the supported grid formats indicated in the first block of the party main menu. (The grid formats are described in the Input Files section below.) The first set of options on the main menu is geared towards preprocessing a grid. The second set is for repartitioning an existing solution/grid. The final set is for postprocessing a solution generated by FUN3D.

FUN3D now supports 2D computations. If the grid is a FUN2D grid (see Input Files below), 2D mode is enabled automatically, with no additional user input. Other grid formats must be forced into 2D mode by running Party with the --twod command line option, and not all cell types can be run in 2D (e.g., tetrahedra cannot). Note that except for FUN2D grids, the input grid must be a 3D grid, one cell wide; FUN2D grids are true 2D grids. When Party processes a grid as 2D (either automatically for FUN2D meshes or via the --twod option for other grid types), it creates additional data in the part files that causes the flow solver, when run, to automatically go into "2D mode", resulting in considerably faster execution than the standard 3D mode. FUN3D operating in 2D mode is roughly 1.5 to 2 times slower than FUN2D (which is no longer supported). Flow solver output for FUN2D meshes, when post-processed with the Party utility, will reside on the extruded prismatic grid rather than on the 2D triangular grid in the [project].faces file.
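
For example (a minimal sketch; the executable name and path may differ on your system), forcing 2D mode for a one-cell-wide grid in a non-FUN2D format:

   party --twod

The usual menu prompts then follow, as in a standard Party run.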

You will probably want the table of boundary conditions (see the previous section) available. When processing your grid, use either the indices in the first column or those in the second column. When you get your results back, the forces summarized in the [project].forces file will be labeled using the notation in the second column. This is done so that existing boundary condition flags may be used, while still allowing for 4-digit boundary conditions in the solver, which leave room for future growth. _Note that if you want to change a boundary condition, you must pre-process the grid over again; the BC indices are hardwired into the partition files._ This is on our to-do list to change.

Boundary Conditions

Boundary condition listing and input formatting is described in the previous section.

Input Files

FAST Formatted Grids

[project].fgrid

This file contains the complete grid stored in ASCII FAST format.

[project].fastbc

The first line in this file is the number of boundary groups contained in your [project].fgrid file. Each subsequent line in this file contains the boundary number and the type for each.
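
As an illustration only (the boundary numbers and BC indices below are hypothetical; use the indices from the boundary condition table described in the previous section), a [project].fastbc file with three boundary groups might look like:

   3
   1  4000
   2  5000
   3  6662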

VGRID Formatted Grids

[project].cogsg

This file contains the complete grid stored in unformatted VGRID format. Please note that VGRID cogsg files are ALWAYS big endian, no matter what type of machine is used in grid generation. If you are running Party or pParty on a native little endian machine, you will have to use a compiler option or a runtime environment flag to set the default file type to big endian.
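
The exact mechanism is compiler dependent. As hedged examples (assuming a bash-like shell, and that Party was built with one of these compilers), two common runtime settings are:

   # Intel Fortran: treat unformatted I/O as big endian at run time
   export F_UFMTENDIAN=big

   # gfortran: treat unformatted I/O as big endian at run time
   export GFORTRAN_CONVERT_UNIT=big_endian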

[project].bc

This file contains the boundary information for the grid, as well as a flag for each boundary face.

[project].mapbc

For each boundary flag used in [project].bc, this file contains the boundary type information. The boundary types are as shown in the preceding table.

FELISA Formatted Grids

[project].gri

This file contains the complete grid stored in formatted FELISA format.

[project].bco

This file contains a flag for each boundary face. If original FELISA bc flags (1, 2, or 3) are used, they are translated to the corresponding FUN3D 4-digit bc flags. Alternatively, FUN3D 4-digit bcs can be assigned directly in this file.

[project].fro

This file contains the surface mesh nodes and connectivities and associated boundary face tags for each surface triangle. This file can contain additional surface normal or tangent information (as output from GridEx or SURFACE mesh generation tools), but the additional data is not read or utilized by FUN3D.

FUN2D Formatted Grids

[project].faces

This file contains the complete grid stored in formatted FUN2D format (triangles). These meshes automatically direct FUN3D to operate in 2D mode, with no other user flags. Party will extrude the triangles into prisms in the y-direction. Output from the flow solver will be on a one-cell wide prismatic mesh. Boundary conditions are contained in the FUN2D grid file. The first time party is run, it will output a [project].mapbc file that contains these original boundary conditions. If you wish to change the boundary conditions from those in the [project].faces file, simply change them in [project].mapbc and rerun Party. Party will use the boundary conditions in the [project].mapbc file rather than the [project].faces file. If you wish to revert to the boundary conditions in the [project].faces file, you can remove the [project].mapbc and rerun Party.
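
For example, a minimal sketch of reverting to the boundary conditions stored in the FUN2D grid file (the project name om6viscous is only illustrative):

   rm om6viscous.mapbc
   party

and answer the menu prompts as in the original preprocessing run.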

FieldView Formatted (ASCII) Grids

[project].fvgrid_fmt

This file contains the complete grid stored in ASCII FieldView FV-UNS format.

Supported FV-UNS file versions: 2.4, 2.5, and 3.0 (3.0 only in FUN3D v10.7 and higher). With FV-UNS 3.0, the support is for the grid file in split format; the combined grid/results format is not supported. FUN3D does not support the Arbitrary Polyhedron elements of the FV-UNS 3.0 standard. For ASCII FV-UNS 3.0, the standard allows comment lines (lines starting with !) anywhere in the file; FUN3D only allows comments immediately after line 1. Only one Grids section is allowed.

[project].fvmapbc

This file contains the boundary information for the grid. The first line is an integer corresponding to the number of boundaries. Subsequent lines, one for each boundary, contain two integers: the first is the boundary number and the second is the boundary type. The boundary types are as shown in the preceding table. If the [project].fvmapbc file does not exist the first time Party is run, Party will create one with some dummy data based on the boundary data in the grid file. This template file can be edited to set the desired boundary conditions, and Party must be rerun to set these boundary conditions in the part files.

FieldView Unformatted Grids

[project].fvgrid_unf

This file contains the complete grid stored in unformatted FieldView FV-UNS format.

Supported FV-UNS file versions: 2.4, 2.5 and 3.0 (3.0 only in FUN3D v10.7 and higher). With FV-UNS 3.0, the support is for the grid file in split format; the combined grid/results format is not supported. FUN3D does not support the Arbitrary Polyhedron elements of the FV-UNS 3.0 standard. Only one Grids section is allowed.

Versions of FUN3D prior to v10.7 allowed only double-precision unformatted files; v10.7 and higher queries the user as to whether the file is single or double precision.

[project].fvmapbc

Same as above for FieldView formatted grids.

AFLR3 Formatted Grids

[project].ugrid

This file contains the complete grid stored in formatted AFLR3 ugrid format.

[project].mapbc

Same as above for FieldView formatted grids.

NSU3D Unformatted Grids

[project].mcell.unf

This file contains the complete grid stored in unformatted NSU3D mcell format.

[project].mapbc

Same as above for FieldView formatted grids.

Pre-existing Grids/Solutions

[project].msh

This file contains the entire grid.

[project]_part.n (Optional)

These files (n=1, # of partitions) contain the grid information for each of the partitions in the domain. Sequential flow solves require one part file.

[project]_flow.n (Optional)

These files (n=1, # of partitions) contain the binary restart information for each grid partition. They are read by the solver for restart computations, as well as by party for solution reconstruction and plotting purposes. If you are going to read these in, the [project]_part.n files are required.

PLOT3D Grids

Party can read PLOT3D structured grids and use them as hexahedral unstructured grids. The grid must be multiblock and 3-D (no iblanking), in the form:

[project].p3d

and can be either formatted or unformatted. Only one-to-one connectivity is allowed with this option (no patching or overset). The grid should contain no singular (degenerate) lines or points. An additional “neutral map file” is also required:

[project].nmf

This file gives boundary conditions and connectivity information. The make-up of the .nmf file is described on the website:

http://geolab.larc.nasa.gov/Volume/Doc/nmf.htm

Note that for use with FUN3D’s Party software, the “Type” name in the .nmf file must correspond to one of FUN3D’s BC types; the Type one-to-one is also allowed. If Party does not recognize the Type, you will get errors like:

  This may be an invalid BC index. At a minimum, it is unknown within the module
  bcs_deprc. Your data has a BC index unknown to function bc_name.

An example .nmf file is shown here for a simple 1-zone airfoil C-grid (5 x 257 x 129), with 6 exterior boundary conditions and 1 one-to-one patch (in the wake where the C-grid attaches to itself):

# ===== Neutral Map File generated by the V2k software of NASA Langley's GEOLAB =====
# ===================================================================================
# Block#   IDIM   JDIM   KDIM
# -----------------------------------------------------------------------------------
      1

      1      5    257    129

# ===================================================================================
# Type            B1  F1  S1   E1  S2   E2   B2  F2   S1   E1   S2  E2  Swap
# -----------------------------------------------------------------------------------
'tangency'         1   3   1  257   1  129
'tangency'         1   4   1  257   1  129
'farfield_extr'    1   5   1  129   1    5
'farfield_extr'    1   6   1  129   1    5
'one-to-one'       1   1   1    5   1   41    1   1    1    5  257 217 false
'viscous_solid'    1   1   1    5  41  217
'farfield_riem'    1   2   1    5   1  257

CGNS Grids

Party has the capability to read CGNS grid files as well as write CGNS grid/solution files. The user must download the CGNS library from www.cgns.org, install it, and compile the code with the appropriate link to it (e.g., --with-CGNS=location-of-cgnslib; a build sketch is given at the end of this CGNS section). You must link to CGNS Version 2.5.3 or later. If the code is not linked to the CGNS library, then Party will not allow the option of reading/writing a CGNS file. For grid input, any CGNS file to be read must be of the Unstructured type. The CGNS file should be named:

[project].cgns

The following CGNS mixed element types are supported: PENTA_6 (prisms), HEX_8 (hexes), TETRA_4 (tets), and PYRA_5 (pyramids). The CGNS BCs currently allowable are:

  • ‘BCWall’, ‘BCWallViscousHeatFlux’, ‘BCWallViscous’, ‘BCWallViscousIsothermal’ – currently mapped to viscous_solid
  • ‘BCOutflow’, ‘BCOutflowSupersonic’, ‘BCExtrapolate’ – currently mapped to farfield_extr
  • ‘BCOutflowSubsonic’, ‘BCTunnelOutflow’ – currently mapped to farfield_pbck
  • ‘BCInflow’, ‘BCInflowSubsonic’, ‘BCInflowSupersonic’, ‘BCFarfield’ - currently mapped to farfield_riem
  • ‘BCTunnelInflow’ – currently mapped to subsonic_inflow_pt
  • ‘BCSymmetryPlane’ – currently mapped to symmetry_x, symmetry_y, or symmetry_z
  • ‘BCWallInviscid’ – currently mapped to tangency

There are some additional limitations currently:

Under BC_t, Party reads the BC names, but nothing else that might be stored in the CGNS file like BCDataSet or BCData. I.e., we are only using the top-level CGNS naming convention as a “guide” to establishing the appropriate corresponding BCs in FUN3D. For the most part, this should be OK.

Currently it is required that the CGNS file include Elements_t nodes for all boundary faces (using type QUAD_4 or TRI_3), otherwise the code cannot recognize where boundaries are present (because it currently identifies boundaries via 2-D element types).

The best approach for the CGNS file (which is also common practice among most unstructured-type CGNS users) is not only to include all volume elements, but also to include all surface elements under separate nodes (of type TRI_3 or QUAD_4). It is also helpful to have SEPARATE ELEMENT NODES for each boundary element of a given BC type. This way, it is very easy to read and use the file. For example, using Tecplot, which can read CGNS files, one can easily distinguish the various parts (body vs. symmetry vs. farfield, if they all happen to be TRI_3’s for example, as long as each part has its own node).

Under BC_t, Party requires that either ElementList / ElementRange be used, or else PointList / PointRange with GridLocation=CellCenter, to refer to the appropriate corresponding boundary elements.

Note that if the CGNS file is missing BCs (no BC_t node), Party still tries to construct the BCs based on the boundary face Elements_t information. If these boundary element nodes have been named using a recognizable BC name, then Party will try to set each BC accordingly. If the name is not recognized, you will see messages like “This may be an invalid BC index”. Always check the .mapbc file after Party has run, to make sure that the BCs have all been interpreted and set correctly. (If not, you should edit the .mapbc file then re-run Party, and it will use the new corrected values in the .mapbc file. Party always overrides the BC info with whatever is in the .mapbc file when it is present!)
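
Returning to the build requirement noted at the top of this section, a minimal sketch of linking the code against the CGNS library, assuming an autoconf-style configure (the installation path is hypothetical; add your usual configure options for your platform and compiler):

   ./configure --with-CGNS=/usr/local/cgns
   make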

Output Files

Preprocessing Mode

[project]_part.n

These files (n=1, # of partitions) contain the partitioned grid in FUN3D v3 format.

[project].msh

This file contains the entire grid.

[project].pr

This file is generated when processing a FAST- or VGRID-formatted grid file. It contains some basic geometric statistics on the mesh.

[project]_geom.tec

This file contains the surface grid information as processed from a FAST- or VGRID-formatted input grid. This file is in ASCII Tecplot format.

[project]_angles.dat

This file contains some checks for large angles in the grid.

[project]_mesh_info

This file contains most of the grid-related screen output generated during the party run, e.g., number of cells, nodes, element types, cell volumes, and so forth.

[project]_part_info

This file contains detailed information about the mesh partitions, e.g., number of cells, nodes, boundary faces, boundary conditions, and so on.

Postprocessing Mode

For FUN3D Version 10.7 and higher, see Flow Visualization Output Directly From Flow Solver for an alternative to the postprocessing mode of Party for generating solution data for visualization.

When postprocessing using Party, the user has many choices for output, some of which are listed here.

[project]_soln.tec

This file contains the reconstructed surface mesh, along with flow variable information for plotting. This file is in ASCII Tecplot format.

[project].fvuns

This file contains the entire reconstructed mesh, along with flow variable information for plotting. This file is in ASCII FieldView format.

[project]_grid.fgrid

This file contains the entire reconstructed mesh for plotting. This file is in ASCII FAST format. (Note that the node ordering in this file will be different from the original FAST file you may have provided, since the grid is renumbered for bandwidth minimization.)

[project]_soln.fgrid

This file contains flow variable information for the entire reconstructed mesh for plotting. This file is in ASCII FAST format.

[project]_vol.cgns

This file contains the entire reconstructed mesh, including boundary elements and most typical boundary conditions (it uses UserDefined if the BC type cannot be determined), along with flow variable information for plotting. This file is in CGNS format and must be read with CGNS-compatible software. Note that if the code is not linked to the CGNS library during compilation, Party will not offer the option to output a CGNS file.

Post-Processing/Repartitioning Moving Grid Cases

To post-process (or repartition) moving grid cases using party, you must use the command line option --moving_grid.
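
For example (a minimal sketch; the usual postprocessing menu prompts follow):

   party --moving_grid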

5.2. Parallel Grid Processing

Note: The standard sequential implementation of party is generally sufficient for most users. However, for extremely large grids and/or hardware with very limited memory, a distributed version of party is now available. Not all of the features of party are available yet, but the basic capabilities are all there.

From a user’s perspective, parallel Party (pParty) works essentially the same as Party. The following describes the high-level differences between using Party and pParty. Please read the section on Party before reading this section. Note that the use of pParty assumes the user knows how to execute an MPI application in their environment.

The parallel version of Party (pParty) initially reads and distributes the grid in parallel, thereby reducing the initial and overall memory and computational requirements. Then pParty internally calls a parallel partitioning tool (ParMETIS) to compute an efficient partition vector used to distribute the memory and computational requirements over a set of processors for the remaining steps (and bulk) of the preprocessing.

pParty can be executed using interactive prompts, with input supplied either interactively or redirected from an input file. Additionally, a series of command line options is available; type pparty --help for a list.

For pParty, all of the inputs can be provided by command line options; this is often the recommended approach, as various MPI environments handle standard input differently. For example, to pre-process a FAST mesh with a project name of om6viscous and not group common boundary condition types:

 mpirun -np 4 -machinefile machines pparty \
              --pparty_iwhere 1 \
              --pparty_project om6viscous \
              --pparty_ilump 0 \
              --partition_lines \
              --no_renum

where -np 4 informs MPI of the number of processors, while -machinefile specifies that the file machines should be used to determine on which machines the job is run. Information required by pParty consists of --pparty_iwhere, which is the grid partitioning option, --pparty_project, which is the project name, and --pparty_ilump, which is the boundary grouping option. Optional information passed to pParty in this case consists of --partition_lines, which keeps implicit lines intact during partitioning, and --no_renum, which turns off Cuthill-McKee renumbering.

Preprocessing

For parallel Party, the number of partitions created is the same as the number of processors the user specified for the parallel job. Thus, pParty neither prompts the user for the number of partitions nor accepts it through standard input. See the Concluding Remarks section below for how to create more partitions than actual (underlying) processors.

Currently, the following are the major preprocessing constructs not supported by pParty:

  • 2D computations
  • mixed elements
  • multigrid
  • Cuthill-McKee renumbering (currently, an identity vector is used)
  • FIELDVIEW, AFLR3, NSU3D meshes
  • Boundary grouping is not supported with partition_lines
  • does not create (nor subsequently need) a mesh file

Repartitioning

Preprocessing in Party typically creates a mesh file, which can be used in repartitioning. Because the mesh file is typically large, it has been eliminated in pParty: the pParty preprocessor does not write mesh files.

To repartition in pParty, the user starts with an existing set of partition files (created by either Party or pParty) and a set of solution files ([project]_flow.n). The user renames the original partition files to [project]_orig_part.n, then reruns the preprocessor using the desired number of partitions onto which the solution will be mapped. The new files will be [project]_part.n.

Finally, pParty is called for repartitioning. pParty reads the original partition files and solution files, extracts the grid information, reads the new partition files, and creates the repartitioned solution files. As a result, the new solution files ([project]_flow.n) and new partition files ([project]_part.n) will be available.

Example: Repartitioning from 2 to 4 processors

Input:

  • [project]_orig_part.(1,2) (renamed by user with old partition files)
  • [project]_flow.(1,2)
  • [project]_part.(1-4) (created by second call to preprocessor)
  • ../Adjoint/[project]_adj.(1,2) (if adjoint is to be repartitioned)

Output:

  • [project]_flow.(1-4) (overwrites old solution files)
  • ../Adjoint/[project]_adj.(1-4) (overwrites old adjoint files in ../Adjoint if adjoint is requested to be partitioned)

  mpirun -np 4 -machinefile machines pparty < inp_init_24 

where inp_init_24 is

   11 
   om6viscous 
   1 
   2
   4

Note: a user can repartition the solution from a larger number of partitions to a smaller number, or from a smaller number to a larger number. However, the number of processors specified to MPI for repartitioning must be the larger of the two numbers (i.e., the larger of the number of original partitions and the number of new partitions). Both restart and adjoint files can be repartitioned. The command line option --pparty_repart_adjoint turns on repartitioning of adjoint files, and --pparty_repart_restart turns on repartitioning of restart files.

An example session of repartitioning follows.

Repartitioning assumes that the user has existing partition and solution files; additionally, restart and/or adjoint files may also be available.

Create the original (2) partition files

 mpirun -np 2 ./pparty
   1
   om6viscous
   0

   ==> creates om6viscous_part.1 and om6viscous_part.2

Create the flow files

    mpirun -np 2 ./nodet_mpi

       <== uses fun3d.nml (or ginput.faces prior to release 10.5.0), om6viscous_part.(1,2)
       ==> creates om6viscous_flow.1 and om6viscous_flow.2

Step 1. Rename the original partition files

    mv om6viscous_part.1 om6viscous_orig_part.1
    mv om6viscous_part.2 om6viscous_orig_part.2

Step 2. Create the new partition files

    mpirun -np 4 ./pparty
       1
       om6viscous
       0

       ==> creates om6viscous_part.(1-4)

Step 3. Create the new flow files from the previous flow files.

Based on the previous steps, the following files should be available in the current directory. (If the adjoint is to be repartitioned, then om6viscous_adj.(1-2) should already exist in the ../Adjoint directory.)

    om6viscous_part.(1-4), om6viscous_orig_part.(1-2), and
    om6viscous_flow.(1-2)

  mpirun -np 4 ./pparty

  Enter your selection:
    11

  Enter project name for input file
     om6viscous

  Do you have restart files that you would like to repartition? (yes=1 no=0)
     1

  Do you also have some adjoint files that you want converted? (yes=1 no=0)
     0

  How many partitions does your mesh have?
      (For mesh movement cases only: use a negative number
      to read new x, y, z coordinates from partition files.)
     2

    --- user input requested ---
   How many partitions do you want?
     4

  Reading om6viscous_flow.* (version           3 )
  Writing om6viscous_flow.* (version           3 )

    ==> om6viscous_flow.(1-4) 

Repartitioning Adjoint files

The partitioning of adjoint files works in the same manner. Starting from step 3 in the previous example, enter a “1” (yes) when asked whether you have adjoint files to be converted. The adjoint input files will be read from ../Adjoint and overwritten in the same directory (../Adjoint).

Note, if the user enters a “0” (no) so that the restart files are not repartitioned, the question for the adjoint files will not be presented. If one wishes to repartition adjoint files only, use the --pparty_repart_adjoint 1 command line option, which will force the adjoint files to be repartitioned even if the restart files are not.
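
As a hedged sketch (assuming the repartitioning inputs can also be supplied entirely on the command line, as in the preprocessing example; the exact set of required options may differ), an adjoint-only repartitioning from 2 to 4 partitions might look like:

 mpirun -np 4 -machinefile machines pparty \
              --pparty_project om6viscous \
              --pparty_repart_adjoint 1 \
              --pparty_repart_resize_from 2 \
              --pparty_repart_resize_to 4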

Postprocessing

All of the non-experimental options (except global residual files) are available and work identically to Party.

For FUN3D Version 10.7 and higher, see Flow Visualization Output Directly From Flow Solver for an alternative to the postprocessing mode of pParty for generating solution data for visualization.

Summary of pParty Command Line Options

Option                        Description                                          Argument  Party  pParty
--party_nomsh                 Turn off writing .msh; use partition files instead             Yes    Yes
--party_partonly              Stop after writing the partition file                          Yes    Yes
--pparty_iwhere               Grid Partitioning                                    integer   No     Yes
--pparty_project              Project Name                                         string    No     Yes
--pparty_ilump                Boundary Grouping                                    integer   No     Yes
--pparty_outformat            Turn on ASCII output of partition files                        No     Yes
--pparty_metisin              Turn on the reading of metis partition file                    No     Yes
--pparty_metisout             Turn on the writing of metis partition file                    No     Yes
--pparty_repart_resize_from   Number of partitions to repartition from             integer   No     Yes
--pparty_repart_resize_to     Number of partitions to repartition to               integer   No     Yes
--interleaf                   Write parallel partitioned files in groups of        integer   No     Yes
                              the interleaf factor

Concluding Remarks

Currently, pParty preprocessing creates the same number of partitions as the number of processors the user specifies. If a user wants to create more partitions than processors, the user should run multiple instances of pParty on the same processor. Thus, the number passed to -np will be the same as the number of partitions, but the actual number of underlying processors will be smaller. For more details, please refer to the MPI documentation for the target machine.
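
As a hedged illustration (machinefile syntax varies between MPI implementations, and the hostnames are hypothetical), four partitions could be created on two physical machines by listing each host twice in the machinefile, here named machines:

    node01
    node01
    node02
    node02

Then run pParty with one MPI rank per desired partition:

 mpirun -np 4 -machinefile machines pparty \
              --pparty_iwhere 1 \
              --pparty_project om6viscous \
              --pparty_ilump 0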

The --interleaf command line option controls the number of files written out at once. The default is to write all files at once, which may swamp a file system, so the user can limit the number of files written concurrently. For instance, --interleaf 2 will write only 2 files at the same time.
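
For example, a sketch of adding the option to the earlier preprocessing command (the processor count is illustrative; all other options are as before):

 mpirun -np 64 -machinefile machines pparty \
              --pparty_iwhere 1 \
              --pparty_project om6viscous \
              --pparty_ilump 0 \
              --interleaf 2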