Page Contents
Quickstart guide:
   Atmospheric dynamical core
1. Introduction
   1.1 What is an atmospheric dynamical core
   1.2 The available cores
         1.2.1 Finite difference core (B-grid core)
         1.2.2 Spectral core
   1.3 Support, feedback, user contributions
   1.4 Portability
   1.5 FMS Licensing
2. Details of the code
   2.1 Where to find documentation
   2.2 Overview of the finite difference core
   2.3 Overview of the spectral core
   2.4 The dynamical core interface
   2.5 Shared components
3. Acquiring CVS source code
   3.1 How to acquire CVS source code
   3.2 What is CVS?
   3.3 What is GForge?
   3.4 CVS details
4. Compiling the source code
   4.1 The mkmf utility
   4.2 Creating the makefile
   4.3 Example: Compiling the source code
   4.4 Compiling without MPI
5. Preparing the runscript
   5.1 The runscript
   5.2 The diagnostic table
   5.3 The field table
   5.4 Namelist options
   5.5 Initial conditions and restart files
   5.6 mppnccombine
6. Examining output
   6.1 Model output
   6.2 Displaying the output
   6.3 ncview
7. Performance

The FMS Jakarta Atmospheric Dynamical Core User Guide



1. Introduction
   1.1 What is an atmospheric dynamical core?
   1.2 The available cores
         1.2.1 Finite difference core (B-grid core)
         1.2.2 Spectral core
   1.3 Support, feedback and user contributions
   1.4 Portability
   1.5 FMS Licensing

1.1 What is an atmospheric dynamical core?

We divide a global atmospheric model into a "dynamical core" and a set of "physics" modules, the combination of which is sufficient to integrate the state of the atmosphere forward in time for a given time interval. The dynamical core must be able to integrate the basic fluid equations for an inviscid, adiabatic ideal gas over an irregular rotating surface forward in time. Included in the dynamical core are the horizontal and vertical advection, the horizontal subgrid-scale mixing, and the time differencing.

Our operational definition of a global atmospheric dynamical core is a module, or set of modules, capable of integrating a particular benchmark calculation defined by Held and Suarez (1994) so as to obtain a statistically steady "climate". Model physics is replaced by a very simple linear relaxation of temperature to a specified spatial structure, and the winds near the surface are relaxed toward zero. Because the resulting flow is turbulent and cascades variance to small horizontal scales, either explicit or implicit horizontal mixing is required to obtain a solution; as stated above, this mixing is considered part of the dynamical core.
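Schematically, the Held-Suarez forcing consists of Newtonian relaxation of temperature toward a prescribed equilibrium structure and Rayleigh damping of the winds near the surface (a sketch only; the damping coefficients and the equilibrium temperature are the functions defined in Held and Suarez, 1994):

   \frac{\partial \mathbf{v}}{\partial t} = \cdots - k_v(\sigma)\,\mathbf{v}

   \frac{\partial T}{\partial t} = \cdots - k_T(\phi,\sigma)\,\bigl[\,T - T_{eq}(\phi,p)\,\bigr]

where k_v is nonzero only in the lowest model layers, k_T depends on latitude and height, and T_eq is the specified equilibrium temperature structure.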



1.2 The available cores

1.2.1 Finite difference core (B-grid core)

The global, hydrostatic finite difference dynamical core, also called the B-grid core, was developed from the models described in Mesinger et al. (1988) [1] and Wyman (1996) [2]. The horizontal grid is the Arakawa B-grid, and a hybrid sigma/pressure vertical coordinate is used. The step-mountain (eta coordinate) option is no longer supported.


1.2.2 Spectral core

The spectral dynamical core is a "plain vanilla" version of the classic Eulerian spectral model, with a spherical harmonic basis, for integrating the primitive equations on the sphere. The option of advecting tracers with a finite-volume gridpoint scheme is also available. Barotropic (2D non-divergent) and shallow water spherical models are also provided.




1.3 Support, feedback and user contributions

We will try our best to respond to your support requests and bug reports quickly, given the limits of our human resources devoted to this project. Please use the mailing lists (oar.gfdl.fms@majordomo.gfdl.noaa.gov and oar.gfdl.fms-atmos@majordomo.gfdl.noaa.gov) as the forum for support requests: browse the mailing lists for answers to your questions, and post new questions directly to the mailing list. We would also appreciate it if you could answer other users' questions, especially those related to site and platform configuration issues that you may have encountered. We will provide very limited support for platforms not listed in the portability section, and for modifications that you make to the released code.



1.4 Portability

Our commitment at any time extends only to those platforms to which we have adequate access for our own thorough testing. We will add supported platforms as we can.

The platforms we support at present are the following:
1) SGI
   Chipset: MIPS
   OS: Irix 6.5
   Compiler: MIPSPro version 7.3.1.2 or higher
   Libraries: Message Passing Toolkit (MPT) version 1.5.1.0 or higher
              netCDF version 3.4 or higher (64-bit version)

2) Linux
   Chipset: AMD
   OS: GNU/Linux
   Compiler: Intel Fortran Compiler version 7.1-030
   Libraries: MPI-1 (e.g., MPICH-1.2.5)
              netCDF version 3.4 or higher (64-bit version)


1.5 FMS Licensing

The Flexible Modeling System (FMS) is free software; you can redistribute it and/or modify it and are expected to follow the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

FMS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this release; if not, write to:

   Free Software Foundation, Inc.
   59 Temple Place, Suite 330
   Boston, MA 02111-1307
   USA

or see: http://www.gnu.org/licenses/gpl.html



[1] Mesinger, F., Z. I. Janjic, S. Nickovic, D. Gavrilov and D. G. Deaven, 1988: The step-mountain coordinate: Model description and performance for cases of Alpine lee cyclogenesis and for a case of an Appalachian redevelopment. Mon. Wea. Rev., 116, 1493--1518.
[2] Wyman, B. L., 1996: A step-mountain coordinate general circulation model: Description and validation of medium-range forecasts. Mon. Wea. Rev., 124, 102--121.



2. Details of the code
   2.1 Where to find documentation
   2.2 Overview of the finite difference core
   2.3 Overview of the spectral core
   2.4 The dynamical core interface
   2.5 Shared components

2.1 Where to find documentation

In addition to this page, documentation for the atmospheric dynamical cores may be found in the following documents:

2.2 Overview of the finite difference core

Key features of the finite difference dynamical core are:



General features are:



2.3 Overview of the spectral core

Key features of the spectral dynamical core are:



2.4 The dynamical core interface

We do not specify the precise interface expected of a dynamical core. Within FMS we do specify the precise interface for the atmospheric model as a whole, so as to allow it to communicate with land, ocean, and ice models. Atmospheric models are generally constructed for each core individually from the dynamical core and individual physics modules. Sets of physics modules are bundled into physics packages that can be used to more easily compare models with the same physics and different cores (the Held-Suarez forcing is the simplest example of such a package) but we also recognize that different cores may require the physics to be called in distinct ways.

The B-grid and spectral dynamical cores share high-level superstructure code and low-level FMS infrastructure codes. The dynamical core is sandwiched between these levels. For the simple test cases provided with this release the superstructure only consists of a main program, but in more realistic models, drivers for component models and coupling software may also be considered part of the superstructure. The FMS infrastructure codes, which include many commonly-used utilities, are used by both the dynamical core and the superstructure code. The following figure depicts this hierarchy of model codes.



The dynamical cores have the same user interface at the atmospheric driver level. Atmospheric drivers are constructed for each core for specific types of models. The drivers included with this public release are for running in a dynamical core only mode using simple forcing from a shared module (the Held-Suarez GCM benchmark model). Other drivers exist for running coupled models using full atmospheric physics in either AMIP mode or fully coupled to a realistic ocean, ice, and land model.

A user selects which dynamical core to use prior to compiling the model. A list of path names to the source code of a specific core and to the FMS shared code is supplied to a compilation script. Refer to Section 4 for details on compiling the source code.


2.5 Shared components

  1. FMS infrastructure

     parallelization tools
        Simple routines that provide a uniform interface to different message-passing libraries, perform domain decomposition and updates, and parallel I/O on distributed systems.
     I/O and restart file utilities
        Routines for performing commonly used functions and for reading and writing restart files in native or NetCDF format.
     time manager
        Simple interfaces for managing and manipulating time and dates.
     diagnostics manager
        Simple calls for parallel NetCDF diagnostics on distributed systems.
     field and tracer manager
        Code to read entries from a field table and manage the simple addition of tracers to the FMS code.
     fast Fourier transform
        Performs simultaneous FFTs between real grid space and complex Fourier space.
     topography
        Routines for creating land surface topography and land-water masks.
     constants
        Sets values of physical constants and pi.
     horiz_interp
        Performs spatial interpolation between grids.

  2. Atmospheric shared components

     vertical advection
        Computes a tendency due to vertical advection for an arbitrary quantity.
     forcing for Held-Suarez GCM benchmark
        Routines for computing heating and dissipation for the Held-Suarez GCM benchmark integration of a dry GCM.

  3. FMS superstructure

     main program
        For running a stand-alone atmospheric model.


[3] Simmons, A. J. and D. M. Burridge, 1981: An energy and angular-momentum conserving vertical finite-difference scheme and hybrid vertical coordinates. Mon. Wea. Rev., 109, 758--766.
[4] Lin, S.-J., 1997: A finite-volume integration method for computing pressure gradient force in general vertical coordinates. Quart. J. Roy. Meteor. Soc., 123, 1749--1762.
[5] Lin, S.-J. and R. B. Rood, 1996: Multidimensional flux-form semi-Lagrangian transport schemes. Mon. Wea. Rev., 124, 2046--2070.



3. Acquiring CVS source code
   3.1 How to acquire CVS source code
   3.2 What is CVS?
   3.3 What is GForge?
   3.4 CVS details

3.1 How to acquire CVS source code

The FMS development team uses a local implementation of GForge to serve FMS software, located at http://fms.gfdl.noaa.gov. In order to obtain the source code, you must register as an FMS user on our software server. After submitting the registration form on the software server, you should receive an automatically generated confirmation email within a few minutes. Clicking on the link in the email confirms the creation of your account.

After your account has been created, you should log in and request access to the Flexible Modeling System project. Once the FMS project administrator grants you access, you will receive a second e-mail notification. This email requires action on the part of the project administrator and thus may take longer to arrive. The email will contain a software access password along with instructions for obtaining the release package, which are described below.

To check out the release package containing source code, scripts, and documentation via CVS, type the following commands into a shell window. You might wish to first create a directory called fms in which to run these commands. You should enter the software access password when prompted by the cvs login command. At cvs login, the file ~/.cvspass is read. If this file does not already exist, an error message may display and the cvs login may fail. In this event, you should first create this file via touch ~/.cvspass.

cvs -z3 -d:pserver:cvs@fms.gfdl.noaa.gov:/cvsroot/atm-dycores login
cvs -z3 -d:pserver:cvs@fms.gfdl.noaa.gov:/cvsroot/atm-dycores co -r jakarta atm_dycores

This will create a directory called atm_dycores in your current working directory containing the release package.

If you prefer not to use CVS, you may download the tar file called atm_dycores.tar.gz from https://fms.gfdl.noaa.gov/projects/atm-dycores/. Sample output is also available there for download. See Section 6.1 for more information on the sample output.
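If you download the tar file, it can be unpacked with standard tools, for example:

   gunzip -c atm_dycores.tar.gz | tar xvf -

which extracts the release package under the current working directory.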



3.2 What is CVS?

FMS code is maintained under a software version control system, the Concurrent Versions System (CVS). CVS provides a set of tools for management of source codes with multiple users and developers distributed across a wide area network. It maintains all source code files as history files, in which different versions are stored as incremental differences from a base file. These history files exist in a directory tree called a repository. Normally, the files in the repository are never accessed directly. Instead, CVS commands are used to obtain a working copy of the files in a working directory. In CVS terminology, this is called checking out the code from the repository. After the source code has been edited, the user checks in (or commits) the files back into the repository. The repository then contains the changes, and they are accessible to other users.

For a comprehensive source of CVS features, commands and options, consult the CVS manual.



3.3 What is GForge?

GForge is an Open Source collaborative software development tool, which allows organization and management of any number of software development projects. It is designed for managing large teams of software engineers and/or teams scattered across multiple locations. GForge is available at http://www.gforge.org. General user documentation can be found at http://gforge.org/docman/?group_id=1.



3.4 CVS details

On UNIX systems, the command which cvs can be used to verify that CVS is installed. Please contact your system administrator if CVS is not installed on your system.

The FMS source code is checked out of the CVS repository through remote (:pserver:) login. At cvs login, the file ~/.cvspass is read. If this file does not exist, an error message may display and the login may fail. In this event, the user should first create a ~/.cvspass file using the UNIX touch command. Then the user should be able to access CVS via remote login.

The -d option to cvs commands specifies the location of the CVS repository. The user may choose to set the $CVSROOT environment variable to the string ":pserver:cvs@fms.gfdl.noaa.gov:/cvsroot/atm-dycores". This optional step will free the user from specifying the location of the repository each time a CVS command is entered.
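For example, in a csh-family shell this can be done as follows, after which the -d option may be omitted from the commands shown in Section 3.1:

   setenv CVSROOT ":pserver:cvs@fms.gfdl.noaa.gov:/cvsroot/atm-dycores"
   cvs login
   cvs -z3 co -r jakarta atm_dycores

(Bourne-shell users would use export CVSROOT=... instead.)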






4. Compiling the source code
   4.1 The mkmf utility
   4.2 Creating the makefile
   4.3 Example: Compiling the source code
   4.4 Compiling without MPI

4.1 The mkmf utility

The file *_pathnames.html, in the atm_dycores/input directory, is created for the user's convenience and contains links to all the documentation files that are checked out with the source code. A listing of relative path names to all checked-out source code files is included in the *_pathnames file, also located in the atm_dycores/input directory. This file is used in combination with the makefile utility mkmf, located in the atm_dycores/bin directory. Prior to compiling the FMS source code, a makefile must be created; the makefile determines the source code dependencies and the order in which source code files will be compiled. mkmf ("make-make-file" or "make-m-f") is a tool written in perl5 that constructs a makefile from distributed source. The makefile it produces builds a single executable program. Note that mkmf is required when executing the runscripts.

mkmf understands dependencies in f90 (modules and use statements), the FORTRAN include statement, and the cpp #include statement in any type of source. mkmf also places no restrictions on file names, module names, etc. The utility supports the concept of overlays, where source is maintained in layers of directories with a defined precedence. In addition, mkmf keeps track of changes to cpp flags and knows when to recompile the affected source, i.e., files containing #ifdefs that have changed since the last invocation.

The calling syntax and associated arguments are:

mkmf [-a abspath][-c cppdefs][-d][-f][-m makefile][-p program] \
   [-t template][-v][-x][args]

-a abspath    attaches the absolute path to the front of all relative paths to source files
-c cppdefs    list of cpp #defines to be passed to the source files
-d            debug flag
-f            formatting flag
-m makefile   name of the makefile written
-p program    final target name
-t template   file containing a list of make macros or commands
-v            verbosity flag
-x            executes the makefile immediately
args          list of directories and files to be searched for targets and dependencies

The debug flag is much more verbose than -v and is used only if you are modifying mkmf itself. The formatting flag restricts lines in the makefile to 256 characters; lines exceeding 256 characters are broken with continuation lines. If filenames are omitted for the options [-m makefile] and [-p program], the defaults Makefile and a.out are used. The list of make macros or commands contained in [-t template] is written to the beginning of the makefile.



4.2 Creating the makefile

When the mkmf utility is executed, it reads a template file containing a list of make macros, commands, and compiler settings. The template is a platform-specific file that contains standard compilation flags. Default template files are provided with the FMS source code in the atm_dycores/bin directory. It is recommended that users set up their own compilation template specific to their platform and compiler. The template file contains the following variables:

FC         compiler for FORTRAN files
LD         executable for the loader step
CPPFLAGS   cpp options that do not change between compilations
FFLAGS     flags to the compiler FC
LDFLAGS    flags to the loader step

For example, the template file for the SGI MIPSpro compiler contains:

FC = f90
LD = f90
LDFLAGS = -64 -mips4 -lexc -lscs -lmpi -L/usr/local/lib -lnetcdf
CPPFLAGS = -macro_expand -Dsgi_mipspro -Duse_netCDF -I/usr/local/include
FFLAGS = -OPT:Olimit=0 -O2 -r8 -d8 -i4 -64 -mips4 -expand_source -I/usr/local/include

An include file is any file with an include file suffix, such as .H, .fh, .h, .inc, which is recursively searched for embedded includes. mkmf first attempts to locate the include file in the same directory as the source file. If it is not found there, it looks in the directories listed as arguments, maintaining a left-to-right precedence. The argument list, args, is also treated sequentially from left to right. The default action for non-existent files is to create null files of that name in the current working directory via the UNIX touch command. There should be a single main program source among the arguments to mkmf , since all the object files are linked to a single executable.

The argument cppdefs should contain a comprehensive list of the cpp #defines to be preprocessed. This list is compared against the current "state", which is maintained in the file .cppdefs in the current working directory. If there are any changes to this state, mkmf will remove all object files affected by the change so that the subsequent make will recompile those files. The file .cppdefs is created if it does not exist. The cppdefs argument also sets the make macro CPPDEFS; if CPPDEFS is set both in a template file and via the -c flag to mkmf, the value given with -c takes precedence. Typically, you should set only $CPPFLAGS in the template file and set CPPDEFS via mkmf -c.
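As an illustration of this split (the defines shown are taken from the SGI example above and from Section 4.4; the template file name is a placeholder), the template might carry only preprocessor options that never change,

   CPPFLAGS = -macro_expand -Dsgi_mipspro -Duse_netCDF -I/usr/local/include

while defines that may differ from one build to the next are supplied on the command line:

   mkmf -c "-Duse_libMPI" -t template_file -p fms.exe bgrid_pathnames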

To execute the mkmf utility, the user must locate the appropriate *_pathnames file in the atm_dycores/input directory: barotropic_pathnames, bgrid_pathnames, shallow_pathnames, or spectral_pathnames. The script list_paths, in the atm_dycores/bin directory, can be used to create a new *_pathnames file if the original file has been modified or if additional source code files need to be compiled. In the directory containing the FMS source code, the user should set up the compilation template file and execute the mkmf utility. Next, create a compilation directory and copy the Makefile there; the executable is then built in the compilation directory by running the make command.



4.3 Example: Compiling the source code

The following example demonstrates how to create the makefile and compile the fms_atm_dycores module of the FMS Jakarta source code. This step is necessary when executing the runscript.
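A minimal sketch of this workflow for the B-grid core, assuming a template file named template_file, a compilation directory named exec_bgrid, and path names in bgrid_pathnames given relative to the top of the atm_dycores directory (all of these names and assumptions are illustrative), is:

   cd atm_dycores
   mkdir exec_bgrid ; cd exec_bgrid
   ../bin/mkmf -p fms.exe -t ../bin/template_file -c "-Duse_libMPI" \
       -a .. ../input/bgrid_pathnames
   make

The runscripts described in Section 5 perform an equivalent sequence of steps automatically.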

4.4 Compiling without MPI

Underlying FMS is a modular parallel computing infrastructure. MPI (Message Passing Interface) is a standard for writing message-passing programs for distributed computing across loosely-coupled systems. Incorporated in the FMS source code is the MPP (massively parallel processing) layer, which provides a uniform message-passing API on top of the different message-passing libraries. Together, MPI and MPP establish a practical, portable, efficient, and flexible standard for message passing.

There are a number of freely available implementations of MPI that run on a variety of platforms. The MPICH implementation, developed by researchers at Argonne National Lab and Mississippi State University, runs on many different platforms, from networks of workstations to MPPs. If MPICH (or another MPI library) is installed, the user can compile the source code with MPI. If no MPI library is available, the FMS source code can be compiled without MPI in one of two ways. If the makefile is created outside the runscript, omit the -c cppdefs flag from the mkmf command line. If compiling through the runscript, delete -Duse_libMPI from the cppflags variable.
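For example, if the runscript sets a cppflags variable along these lines (an illustrative value, not a verbatim excerpt from the scripts),

   set cppflags = "-Duse_libMPI -Duse_netCDF"

compiling without MPI simply means dropping -Duse_libMPI:

   set cppflags = "-Duse_netCDF"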

When MPI is not used, messages from the MPP library may still be displayed, but data is simply copied within MPP rather than passed between processes. Compiling the FMS source code without MPI has been tested; the results show that about 1 GB of memory is required to execute the code on a single PE, both with and without MPI.






5. Preparing the runscript
   5.1 The runscript
   5.2 The diagnostic table
   5.3 The field table
   5.4 Namelist options
   5.5 Initial conditions and restart files
   5.6 mppnccombine

5.1 The runscript

Simple (csh shell) scripts are provided for running the atmospheric dynamical cores. These runscripts perform the minimum required steps to compile and run the models and are intended as a starting point for the development of more practical run scripts. The scripts for all available dynamical core models are located in the scripts directory and are called run_bgrid, run_spectral, run_spectral_barotropic, and run_spectral_shallow.

Near the top of each script, variables are set to the full path names of the initial condition file (optional), the diagnostics table, the tracer field table (optional), the compilation directory, and the mppnccombine executable. The script proceeds to compile and link the source code, create a working directory, and copy the required input data into the working directory. The mpirun command is then used to run the model. The final step for multi-processor runs is to combine the domain-decomposed diagnostics files into global files.

By default the scripts are set to run for one or two days on a single processor. The number of processors used is controlled by a variable near the top of each script; users may want to increase the number of processors to decrease the wallclock time needed for a run. The run length (in days) and the atmospheric time step dt_atmos (in seconds) are controlled by the namelist &main_nml, which is set directly in the runscripts for convenience.
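For example, a two-day run with an 1800-second time step might be specified as follows (a sketch only; dt_atmos is named above, while the run-length variable name and the values shown are illustrative and should be checked against the namelist documentation):

    &main_nml
        days     = 2,
        dt_atmos = 1800
    /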

To compile and link the model code, a template file provides platform-dependent parameters, such as the location of the netCDF library on your system, to the compilation utility mkmf. Sample template files for various platforms are provided in the bin directory. More information on mkmf and compiling the source code can be found in Section 4.

The sample scripts compile the model code with the MPI library and execute using the mpirun command. Refer to Section 4.4 for issues related to the MPI implementation and for compiling without MPI. The mpirun command is specific to Silicon Graphics machines, and users may need to change this to run on other platforms.

The following sections will describe some of the steps needed to understand and modify the simple runscripts.



5.2 The diagnostic table

The FMS diagnostics manager is used to save diagnostic fields to netCDF files. Diagnostic output is controlled by specifying file and field names in an ASCII table called diag_table. The diagnostic tables supplied with the dynamical cores are found in the input directory and are called hs_diag_table, spectral_barotropic_diag_table, and spectral_shallow_diag_table. The B-grid and spectral cores use the same diagnostics table (hs_diag_table) for the Held-Suarez benchmark test case. In the runscript, the user specifies the full path to the appropriate diagnostics table, and it is copied to the file diag_table.

The diagnostic table consists of comma-separated ASCII values and may be edited by the user. The table is divided into three sections: a global section, a file section, and a field section. The first two lines of the table comprise the global section and contain the experiment title and the base date; the base date is the reference time used for the time axis (for the solo dynamical cores the base date is irrelevant and is typically set to all zeroes). Each line in the file section contains the file name, output frequency, output frequency units, file format (currently only netCDF), time units, and long name for the time axis. Each line in the field section contains the module name, field name, output field name, file name, time sampling for averaging (currently all time steps), time average (.true. or .false.), other operations (not implemented at present), and the packing value. The packing value defines the precision of the output: 1 for double precision, 2 for single precision, 4 for packed 16-bit integers, and 8 for packed 1-byte values. Any line that begins with a "#" is skipped.

A sample diagnostic table is displayed below.

      "Model results from the Held-Suarez benchmark"
      0 0 0 0 0 0
      #output files
      "atmos_daily",    24, "hours", 1, "days", "time", 
      "atmos_average",  -1, "hours", 1, "days", "time", 
      #diagnostic field entries
      "dynamics",   "ps",      "ps",      "atmos_daily",    "all", .false., "none", 2,
      "dynamics",   "bk",      "bk",      "atmos_average",  "all", .false., "none", 2,
      "dynamics",   "pk",      "pk",      "atmos_average",  "all", .false., "none", 2,
      "dynamics",   "zsurf",   "zsurf",   "atmos_average",  "all", .false., "none", 2,
      "dynamics",   "ps",      "ps",      "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "ucomp",   "ucomp",   "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "vcomp",   "vcomp",   "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "temp",    "temp",    "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "omega",   "omega",   "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "tracer1", "tracer1", "atmos_average",  "all", .true.,  "none", 2,
      "dynamics",   "tracer2", "tracer2", "atmos_average",  "all", .true.,  "none", 2,
      #"hs_forcing", "teq",     "teq",     "atmos_average",  "all", .true.,  "none", 2,
     



5.3 The field table

The FMS field and tracer managers are used to manage tracers and specify tracer options. All tracers used by the model must be registered in an ASCII table called field_table. The field tables supplied with the dynamical cores are found in the input directory. There are separate field tables for the B-grid and spectral cores, called bgrid_field_table and spectral_field_table. There are no field tables for the barotropic or shallow-water models. In the runscript, the user specifies the full path to the appropriate field table, and it is copied to the file field_table.

The field table consists of entries in the following format. The first line of an entry should consist of three quoted strings. The first quoted string tells the field manager what type of field it is; the string "tracer" is used to declare a tracer field entry. The second quoted string tells the field manager which model the field is being applied to; the supported types at present are "atmos_mod" for the atmosphere model, "ocean_mod" for the ocean model, "land_mod" for the land model, and "ice_mod" for the ice model. The third quoted string should be a unique tracer name that the model will recognize.

The second and following lines of each entry are called methods in this context. Methods can be developed within any module, and these modules can query the field manager to find any methods that are supplied in the field table. These lines consist of two or three quoted strings. The first string is an identifier that the querying module asks for. The second string is a name that the querying module can use to set up values for the module. The third string, if present, supplies parameters to the calling module that can be parsed and used to further modify values. An entry is terminated with a forward slash (/) as the final character in a row. Comments can be inserted in the field table by placing a # as the first character of the line.

Here is an example of a field table entry for an idealized tracer called "gunk".

       "TRACER",     "atmos_mod",               "gunk"
       "longname",   "really bad stuff" 
       "units",      "kg/kg"
       "advec_vert", "finite_volume_parabolic"
       "diff_horiz", "linear",                  "coeff=.30" / 
     

In this example, we have a simple declaration of a tracer called "gunk". Methods that are being applied to this tracer include setting the long name of the tracer to be "really bad stuff", the units to "kg/kg", declaring the vertical advection method to be "finite_volume_parabolic", and the horizontal diffusion method to be "linear" with a coefficient of "0.30".

A method is a way to allow a component module to alter the parameters it needs for various tracers. In essence, this is a way to modify a default value. A namelist can supply default parameters for all tracers and a method, as supplied through the field table, will allow the user to modify the default parameters on an individual tracer basis.

The following web-based documentation describes the available method_types for the dynamical cores.



5.4 Namelist options

Many model options are configurable at runtime using namelist input. All FMS modules read their namelist records from a file called input.nml. A module will read input.nml sequentially until the first occurrence of its namelist record is found. Only the first namelist record found is used. Most (if not all) namelist variables have default values, so it is not necessary to list all namelist records in the input.nml file. The runscripts provided set up the file input.nml by concatenating a core-specific namelist file to the namelists for several shared modules. The core-specific namelists provided with this release are found in the input directory and are called bgrid_namelist, spectral_namelist, spectral_barotropic_namelist, and spectral_shallow_namelist.
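Schematically, a runscript might assemble input.nml along these lines (the name shared_namelists is purely illustrative; in the released scripts the shared-module namelists may be embedded in the script itself):

   cat bgrid_namelist shared_namelists > input.nml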



5.5 Initial conditions and restart files

The FMS uses restart files to save the exact (bit-reproducible) state of the model at the end of a model run. The restart files are used as an initial condition to continue a model run from an earlier model integration. A module (or package of modules) will write data to a restart file if it is needed for a bit-reproducible restart. The input restart files (the initial conditions) are read from the INPUT subdirectory. The output restart files are written to the RESTART subdirectory. The simple runscripts create these two directories when setting up the model run.

The initial condition file specified in the simple runscripts is a cpio archive file that contains the restart files created by individual modules. The test cases provided with this release do not specify an initial condition file, but rather generate their initial states internally. The test case will however create output restart files, and a user may want to archive or move the output restart files so they can be used as an initial condition when continuing a model run.

To create a cpio archive file:

cd RESTART
/bin/ls *.res* | cpio -ov > OutputRestart.cpio

or, simply move the output files to the input directory:

rm INPUT/*.res*
mv RESTART/*.res* INPUT
mpirun -np 1 fms.exe # rerun the model

Because the restart file for the main program contains information about the current model time, there is no need to modify any namelist or input files before rerunning the model.

The restart file created by the B-grid core is called bgrid_prog_var.res.nc and the restart file created by the spectral core is called spectral_dynamics.res.nc. Here are some specific details about the restart file for each core.

Bgrid:

Spectral:


5.6 mppnccombine

Running the FMS source code in a parallel processing environment will produce one output netCDF diagnostic file per processor. mppnccombine joins together an arbitrary number of data files containing chunks of a decomposed domain into a unified netCDF file. If the user is running the source code on one processor, the domain is not decomposed and there is only one data file. mppnccombine will still copy the full contents of the data file, but this is inefficient and mppnccombine should not be used in this case. Executing mppnccombine is automated through the runscripts. The data files are netCDF format for now, but IEEE binary may be supported in the future.

mppnccombine requires decomposed dimensions in each file to have a domain_decomposition attribute. This attribute contains four integer values: starting value of the entire non-decomposed dimension range (usually 1), ending value of the entire non-decomposed dimension range, starting value of the current chunk's dimension range and ending value of the current chunk's dimension range. mppnccombine also requires that each file have a NumFilesInSet global attribute which contains a single integer value representing the total number of chunks (i.e., files) to combine.

The syntax and arguments of mppnccombine are as follows:

mppnccombine [-v] [-a] [-r] output.nc [input ...]

-v   print some progress information
-a   append to an existing netCDF file
-r   remove the '.####' decomposed files after a successful run

An output file must be specified and it is assumed to be the first filename argument. If the output file already exists, then it will not be modified unless the option is chosen to append to it. If no input files are specified, their names will be based on the name of the output file plus the extensions '.0000', '.0001', etc. If input files are specified, they are assumed to be absolute filenames. A value of 0 is returned if execution is completed successfully and a value of 1 indicates otherwise.
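For example, to combine the per-processor pieces of the daily output file from the Held-Suarez runs (the file name is taken from the sample diag_table in Section 5.2):

   mppnccombine -r atmos_daily.nc

With no input files listed, this reads atmos_daily.nc.0000, atmos_daily.nc.0001, etc., and, because of -r, removes those pieces after a successful combine.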

The source of mppnccombine is packaged with the FMS atm_dycores code in the postprocessing directory. mppnccombine.c should be compiled on the platform where the user intends to run the FMS Jakarta atmospheric dynamical core source code so the runscript can call it. A C compiler and netCDF library are required for compiling mppnccombine.c.
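A typical compile line, assuming the netCDF library and headers are installed under /usr/local (adjust the paths for your system), might be:

   cc -O -o mppnccombine -I/usr/local/include -L/usr/local/lib mppnccombine.c -lnetcdf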





6. Examining output
   6.1 Model output
   6.2 Displaying the output
   6.3 ncview

6.1 Model output

Output from an FMS model run will be written to the directory where the model was run. FMS models write output in ASCII, binary, and netCDF formats. ASCII (text) output files have the *.out suffix; for example, files of the form *integral.out contain global integrals, and logfile.out contains the namelist and revision number output. Note that the spectral model does not produce *integral.out files. Standard output and standard error messages created by the model may be directed to a file called fms.out. The diagnostics files, specified in the diagnostics table, are written as netCDF files with the *.nc suffix. The output restart files are written to the RESTART subdirectory and have the *.res.nc or *.res suffix.

You may download sample output data for comparison at https://fms.gfdl.noaa.gov/projects/fms/. Each tar file expands to a directory containing a readme file along with netcdf and ascii output. The files bgrid_output.tar.gz and spectral_output.tar.gz contain daily snapshots of surface pressure through the 200 day spinup period and time means of all fields over the 200 to 1200 day period. The files barotropic_output.tar.gz and shallow_output.tar.gz contain thirty days of diagnostic output for the spectral barotropic model and spectral shallow water model, respectively.



6.2 Displaying the output

There are several graphical packages available to display the model output. These packages vary widely in factors such as the number of dimensions supported, the amount and complexity of the options available, and the output data format. The data will first have to be put into a common format that all the packages can read. FMS requires the data to be stored in netCDF format, since it is so widely supported for scientific visualization. The choice of graphical package also depends on the computing environment. This section discusses ncview, a two-dimensional browser used on workstations. Please refer to GFDL's Scientific Visualization Guide for information on additional graphical packages.



6.3 ncview

ncview is a visual browser for netCDF data format files and displays a two-dimensional, color representation of single precision floating point data in a netCDF file. You can animate the data in time by creating simple movies, flip or enlarge the picture, scan through various axes, change colormaps, etc. ncview is not a plotting program or an analysis package. Thus, contour lines, vectors, legend/labelbar, axis/tic-marks and geography are not included. The user is unable to perform X-Z or Y-Z cross-sections, unless there is a fourth dimension, time. Rather, ncview's purpose is to view data stored in netCDF format files quickly, easily and simply.
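To browse one of the combined diagnostics files from Section 5, simply pass the file name on the command line, for example:

   ncview atmos_average.nc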

ncview offers a short user spin-up time, fast cross-sectioning, magnification, predefined color palettes that can be inverted and 'scrunched' to highlight high or low values, animation (with speed control) along the least quickly varying dimension only, and printing. It can also read a series of files as input, such as a sequence of snapshot history files. A time series graph of a variable pops up when the mouse is clicked at a specific point. Other options include mouse-selectable colormap endpoints with optional reset, map overlay, and a filter for one-dimensional variables.

In ncview, the user can <left-click> on any point in a plot to get a graph of the variable versus time at that point. <Ctrl><left-click> on a point sets the colormap minimum to the value at that point, while <Ctrl><right-click> sets the colormap maximum. Use the "Range" button to set (or reset) the colormap min/max. For additional information on ncview, refer to the ncview UNIX manual page (man ncview) or the ncview homepage.






7. Performance

The test cases provided with this release have been run on the SGI Origin 3800 large-scale cluster at the Geophysical Fluid Dynamics Laboratory. The table below summarizes the performance for each of the test cases.

Model        Resolution              Run length (days)   # PEs   Time (sec)
bgrid        N45 (144 x 90 x 20)     200                  30      3995
spectral     T42 (128 x 64 x 20)     200                  16      1760
barotropic   T85 (256 x 128)          30                  16        50
shallow      T85 (256 x 128)          30                  16        50




Last modified: 10-09-2003 15:12:14.  
Send comments/suggestions to Lori Thompson