Running CFL3Dv6 And Related Codes
NOTES:
- All the run commands shown below assume the executable resides
in the working directory; if environment variables are set up as suggested
under Building Executables,
then any of the executable names shown below must be prefixed
with a $.
- Only the parallel versions of cfl3d (cfl3d_mpi and cfl3dcmplx_mpi)
can be run on multiple processors; all other codes in the Version 6 package
run on single processors.
- Codes that are not interactive are shown as being executed in the background,
with the command line ending in an &. Interactive codes that require user input are shown without the &.
PRECFL3D
Because version 6 has dynamic memory allocation, there is no
requirement to run precfl3d before you can run cfl3d.
However, you may still find it useful to do so
in order to assess how much memory will be required to run the case
at hand, allowing you to determine
whether a particular problem can fit within the memory of the machine,
or to determine the appropriate queue in which to submit the job.
The usage of precfl3d has changed slightly from previous versions: you
must now specify the number of processors in addition to the input file,
for example:
precfl3d -np num_procs < cfl3d.inp &
where num_procs is the total number of processors,
including the host.
When running on a single processor, that processor is the host, so
num_procs=1 will suffice to assess the memory requirements for the
sequential version of the code.
An important reason why you may want to run precfl3d before
running the parallel
version of the code is that for num_procs > 1, precfl3d
will output an auxiliary file called ideal_speedup.dat.
This file will list the best possible speedup you could hope to achieve
for the current case, using various numbers of compute processors, ranging
from 1 to the number of zones in your grid.
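If you want to pull the best achievable speedup out of ideal_speedup.dat programmatically, a minimal sketch follows. The two-column "num_procs speedup" layout (and the sample values) used here are assumptions about the file format, so adjust the field numbers to match what precfl3d actually writes:

```shell
# Sketch only: a stand-in ideal_speedup.dat, with an ASSUMED
# "num_procs  speedup" column layout and illustrative values.
printf '1 1.00\n2 1.85\n4 3.20\n8 5.10\n' > ideal_speedup.dat

# Sort numerically on the speedup column; the last line has the best speedup.
sort -k2 -n ideal_speedup.dat | tail -n 1
```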
CFL3D
Standard (real) code, single processor:
cfl3d_seq < cfl3d.inp &
Derivative (complex) code, single processor:
cfl3dcmplx_seq < cfl3d.inp &
Standard (real) code, multiple processors:
mpirun -np num_procs cfl3d_mpi < cfl3d.inp &
Derivative (complex) code, multiple processors:
mpirun -np num_procs cfl3dcmplx_mpi < cfl3d.inp &
where num_procs is the number of processors you will use.
Note: the current parallel code uses one of these processors as a host
that performs I/O and coordination tasks, but does not perform any
significant computational work.
Thus, the code effectively uses only num_procs - 1
processors.
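Because one rank is reserved for the host, the -np value must be one more than the number of compute processors you actually want. A small sketch of the arithmetic:

```shell
# One MPI rank acts as the host, so to get N compute processors
# you must launch N + 1 ranks.
compute=8                   # compute processors actually desired
np=$((compute + 1))         # total ranks passed to mpirun
echo "mpirun -np $np cfl3d_mpi < cfl3d.inp"
```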
You may want to verify the correct procedure for running MPI codes on
your platform (e.g., some MPPs use -n instead of -np).
As a general rule, when running CFL3D one should always search through
the standard output file (usually called cfl3d.out) for "WARNING"
messages and other information, including possible grid problems such as
a "geometric mismatch" between zones.
PRERONNIE
Like cfl3d, ronnie now has dynamic memory allocation, so there is no
requirement to run preronnie before you can run ronnie.
However, you may still find it useful to do so
in order to assess how much memory will be required to run the case
at hand, allowing you to determine
whether a particular problem can fit within the memory of the machine,
or to determine the appropriate queue in which to submit the job.
Since ronnie currently runs on only a single processor, there is no
need to specify num_procs as there is for precfl3d:
preronnie < ronnie.inp &
RONNIE
ronnie < ronnie.inp &
MAGGIE
maggie < maggie.inp &
NOTE: maggie currently does not have dynamic memory allocation,
and must be recompiled for each new case.
SPLITTER
Standard (real) block/input file splitter:
splitter < splitter.inp &
Complex-valued grid block/input file splitter:
splittercmplx < splitter.inp &
MOOVMAKER
A typical interactive run is shown; the code issues the following prompts,
to which the user responds in turn:
name of plot3d grid file to read
name of plot3d q file to read
stationary or moving grid?...0=stationary, 1=moving
3D or 2D grid?...0=3D, 1=2d
how many frames are in the plot3d file?
(code will then echo info for each time step, not shown here)
completed writing new plot3d grid file: g.bin
completed writing new plot3d q file: q.bin
the "number of frames are in the plot3d file" is given near the bottom
of the cfl3d.out file when the cfl3d input parameter movie is .ne. 0,
so be sure to check for this before you run moovmaker.
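One way to find that frame count is to search the tail of cfl3d.out. The exact wording of the message (and the sample file below) is an assumption here; substitute whatever string your cfl3d.out actually contains:

```shell
# Stand-in for the tail of a cfl3d.out run with movie .ne. 0; the
# wording of the frame-count message is ASSUMED for illustration.
printf 'final residual  1.2e-07\nnumber of movie frames written:   25\n' > cfl3d.out

# Look near the bottom of the file for the frame count moovmaker asks for.
tail -n 5 cfl3d.out | grep -i "frame"
```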
PLOT3DG_TO_CGNS
A typical interactive run is shown; the code issues the following prompts,
to which the user responds in turn:
What is the CFL3D input file name?
Does this case use a patch file (e.g., patch.bin)? (1=yes)
required array dimensions:
read a 0=formatted or 1=unformatted grid?
input unformatted grid filename to read:
type 0 = PLOT3D-type, 1 = CFL3D-type grid to read:
block# 1: id,jd,kd= 2 373 29
block# 2: id,jd,kd= 2 299 45
block# 3: id,jd,kd= 2 133 33
block# 4: id,jd,kd= 2 343 17
This program creates a grid file SEPARATE from the 1 file soln.cgns
(they are LINKED, eliminating redundancy
in having to keep multiple copies of the
grid for multiple restarts)
Input desired name for CGNS gridfile (e.g., gridname.cgns):
creating adf grid file name: 30P.cgns
Reading CFL3D input file with following title:
there are 8 1-to-1 interfaces being read
CGNS grid file written to: 30P.cgns
(additional informational output not shown)
GRID_PERTURB
Standard (real-valued) grid perturbation tool:
A typical interactive run is shown; the code issues the following prompts,
to which the user responds in turn:
input name of baseline grid file
unformatted or formatted (unform = 0; form=1)
input name of grid sensitivity file to use
unformatted or formatted (unform = 0; form=1)
input name of perturbed grid to create
unformatted or formatted (unform = 0; form=1)
input DV number to base perturbation on
input step size for design variable 1
required array dimensions:
Complex-valued grid perturbation tool:
A typical interactive run is similar to that shown above for the
real-valued case.
GET_FD
A typical interactive run is shown:
enter first restart file to extract history data from
this should be the "+" step file
enter second restart file to extract history data from
this should be the "-" step file
finite diffs to be calculated with central diffs
enter file name for output finite differences
enter 0 to output convergence of dcy/ddv,dcmy/ddv
enter 1 to output convergence of dcz/ddv,dcmz/ddv
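The central difference that get_fd forms from the "+" and "-" step restart files, (f_plus - f_minus)/(2*h), can be sketched with hypothetical values; the coefficient values and step size below are made up for illustration:

```shell
# Hypothetical lift coefficients from the "+" and "-" step restart
# files, and the design-variable step size h used to generate them.
cy_plus=0.5123
cy_minus=0.5101
h=0.001

# Central difference: dcy/ddv = (cy_plus - cy_minus) / (2*h)
awk -v p="$cy_plus" -v m="$cy_minus" -v h="$h" \
    'BEGIN { printf "%.4f\n", (p - m) / (2*h) }'
```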
Page Curator and NASA Responsible Official:
Christopher L. Rumsey
Last Updated: January 16, 2007