Library of AMIP data Transmission Standards

			    LATS 1.0

		     The LATS Test Procedure -- 
   The Rules of LATS and how to use LATS in Real Applications


	     http://www-pcmdi.llnl.gov/software/lats
		       lats@pcmdi.llnl.gov

		  Mike Fiorino and Bob Drach(1)

     Program for Climate Model Diagnosis and Intercomparison
	     Lawrence Livermore National Laboratory
		    University of California
		       P.O. Box 808 L-264
		       Livermore, CA 94551
	     510-423-8505 (voice)  510-422-7675 (fax)


			  19 March, 1997
				


1.  Introduction
----------------

This is perhaps the most important document in the distribution.
It not only guides you through the testing, but gives you a
detailed description of how to build LATS fortran applications
and the LATS "rules of the road."  This document, and the lats 
man pages (lats.html), are worth printing out before proceeding.

The first step is to test the LATS distribution, i.e., make sure
it is working properly and in the same way as when it was built.

I have made this a one-step process:

test.lats.sh DDDD

where DDDD is the distribution name.  For example,

test.lats.sh irix5

on an SGI irix5 platform.

Review the contents section below and then run the test.  After
you have verified LATS, carefully read the sections on the
driver script and the fortran code before writing your own LATS
program.  These sections contain essential information on how
LATS operates.


2.  Contents
------------

$LATSHOME/LATS-V.V/test/
                        README              - this file
                        lats.sh             - management shell (sh) script
                        testlats.f          - fortran test program
                        test.lats.sh        - test management shell (sh) script
                        veri_grib.out.STND  - output to verify GRIB data
                        veri_nc.c           - C code to verify netCDF data
                        veri_nc.out.STND    - output from netCDF verification
                        psl.gif             - picture of test sea level pressure (t=1)
                        ta.500.gif          - picture of test 500 hPa T (t=2)

and after you run the test:

                        latsout.ctl         - GrADS data descriptor file 
                        latsout.gmp         - GrADS gribmap file
                        latsout.grb         - GRIB data file
                        latsout.nc          - netCDF data file
                        testlats            - fortran application binary  
                        veri_grib.out       - output from GRIB verification
                        veri_nc             - C application binary
                        veri_nc.out         - output from netCDF verification


key:
	$LATSHOME is the root or home directory where LATS-V.V was installed
	V.V is the version (1.0).


3.  Output from the test script
-------------------------------

On my SGI, this is the output I get:


lllllllllllllllllllllllllllllll
 
command to create the LATS application testlats:
 
f77 testlats.f -o testlats -L../lib -llats -lnetcdf -lm
 
command to create the netCDF verification C code:
 
cc veri_nc.c -o veri_nc -I../include -L../lib -lnetcdf
 
lllllllllllllllllllllllllllllll
RRRRRRR
RRRRRRR - running testlats with both GRIB and netCDF support
RRRRRRR
 LATS grib file id =            1

gribmap:  Scanning binary GRIB file(s):
gribmap:  Opening GRIB file latsout.grb
gribmap:  Reached EOF
gribmap:  Writing the map...

LATS_GRIB: SUCCESS -- gribmap for GrADS/VCS seems to have worked...

 LATS netcdf file id =            2

Undefined parameter table (center 100-2 table 128), using NCEP-2

Undefined parameter table (center 100-2 table 128), using NCEP-2
1:0:d=79010100:PRMSL:kpds5=2:kpds6=102:kpds7=0:TR=3:P1=0:P2=1:TimeU=3:MSL:0-1mon ave:NAve=0

Undefined parameter table (center 100-2 table 128), using NCEP-2
8:46956:d=79020100:TMP:kpds5=11:kpds6=100:kpds7=200:TR=3:P1=0:P2=1:TimeU=3:200 mb:0-1mon ave:NAve=0
VVVVVVV
VVVVVVV - verification by diff with known answers
VVVVVVV
 
GRIB  
GRIB  Congratulations!!!!
GRIB  
GRIB  LATS GRIB verification succeeded
GRIB  
 
 
netCDF 
netCDF Congratulations!!!!
netCDF 
netCDF LATS netCDF verification succeeded
netCDF 

Explanation:
------------

.
.
.

command to create the LATS application testlats:
 
f77 testlats.f -o testlats -L../lib -llats -lnetcdf -lm
 
command to create the netCDF verification C code:
 
cc veri_nc.c -o veri_nc -I../include -L../lib -lnetcdf
.
.
.

 
This output demonstrates how the applications were compiled.  It
will vary from platform to platform...

.
.
.

RRRRRRR
RRRRRRR - running testlats with both GRIB and netCDF support
RRRRRRR
 LATS grib file id =            1          - from the fortran code

gribmap:  Scanning binary GRIB file(s):    - from LATS
gribmap:  Opening GRIB file latsout.grb    - from LATS
gribmap:  Reached EOF                      - from LATS
gribmap:  Writing the map...               - from LATS 

LATS_GRIB: SUCCESS -- gribmap for GrADS/VCS seems to have worked... - from LATS

 LATS netcdf file id =            2        - from the fortran code

The "gribmap:" and "LATS_GRIB:" messages come from LATS and
indicate whether the internal GrADS .ctl generation and
gribmapping worked.

.
.
.

Undefined parameter table (center 100-2 table 128), using NCEP-2

Undefined parameter table (center 100-2 table 128), using NCEP-2
1:0:d=79010100:PRMSL:kpds5=2:kpds6=102:kpds7=0:TR=3:P1=0:P2=1:TimeU=3:MSL:0-1mon ave:NAve=0

Undefined parameter table (center 100-2 table 128), using NCEP-2
8:46956:d=79020100:TMP:kpds5=11:kpds6=100:kpds7=200:TR=3:P1=0:P2=1:TimeU=3:200 mb:0-1mon ave:NAve=0

.
.
.

This is output from the wgrib utility (see the ../bin/README).

The "Undefined par..."  messages come from wgrib and warn
that this GRIB data does NOT come from NCEP (true), and that the
NCEP-2 table (the standard WMO wgrib table) will be used
instead.  This is not an error in the data, and I expect the
AMIP II table to be built into future releases of wgrib
(http://wesley.wwb.noaa.gov).

The remaining output should be self-explanatory.  If you do not
get this result, you will need to contact Mike at
lats@pcmdi.llnl.gov...  But first consult the troubleshooting
section below.


4.  The management script lats.sh
---------------------------------

The test.lats.sh script is simply a driver for the LATS
application development script, lats.sh.  Please keep a copy of
lats.sh at hand while reading this document to follow the
discussion.  Only pieces of the script will be shown to
illustrate key points.

The shell variables in lats.sh,

.
.
.

LATSVER="1.0"
runmode="test"

.
.
.

define the version and the mode in which the script will run.
When porting, I set runmode to "development" so I can bring over
result files.  Thus, the only variable that you would have to
change with a new version is LATSVER, but more than likely you
would get a new script in updates.

The "if,then,else" sequence is where the platform dependent
variables are set, for example,

.
.
.

if [ $MACH = "sol2" ] ; then
#
#	sol2 (SunOS sas-sun 5.5.1 Generic_103640-03 sun4d sparc SUNW,SPARCcenter-2000)
#
  F77="f77"
  CC="cc"
  FFLAGS=""
  NCLIB="../lib"
  LATSLIB="../lib"
  LATSBIN="../bin"
  LDFLAGS="-lnsl"

elif [ $MACH = "sunos4" ] ; then
.
.
.

F77        - name of the fortran compiler,
CC         - name of the ANSI C compiler (might be gcc, acc, or c89)
FFLAGS     - fortran compile flags
NCLIB      - directory with the netCDF library libnetcdf.a
LATSBIN    - directory with the LATS utilities
LATSLIB    - directory with the LATS library liblats.a
LDFLAGS    - optional loader flags (e.g. special libraries)

The next section builds a LATS application,

.
.
.

ofile="latsout"

if [ $1 = "c" ] ; then

  create="$F77 $FFLAGS testlats.f -o testlats -L$LATSLIB -llats -lnetcdf -lm $LDFLAGS"
 
.
.
.

Note that I load the math library (-lm) even though it might not
be necessary.  All these settings are very platform/environment
dependent, but this set worked on 10 platforms, so it is probably
reliable.  Also note that the LATS library (-llats) is loaded
BEFORE netCDF (-lnetcdf).  This ordering may or may not be
required, depending on your machine.

I recommend creating a simple "make" script such as:

---- make.lats.sh -----
#!/bin/sh

f77 $1.f -o $1 -L/users/myhome/lats/lib -llats -lnetcdf -lm

exit
------ cut here ---

where /users/myhome/lats/lib is the directory you will be linking
the LATS and netCDF libraries from.  You would create the
application by, for example,

make.lats.sh mylats

where mylats.f is your LATS fortran application program.

Finally, we STRONGLY URGE YOU TO USE OUR NETCDF LIBRARY!  It is
up to date and has been verified to produce working results.

However, if you have a special version, you would change the f77
line to something like,

f77 $1.f -o $1 -L/users/myhome/lats/lib -llats -L/usr/local/mylibs -lnetcdf -lm

The rest of lats.sh simply runs the LATS application and uses
various unix utilities to do verification.


5.  The fortran program testlats.f
----------------------------------

The heart of the distribution is really here.

As with lats.sh, we will only discuss segments of the code, so
please have a copy of the code and the LATS man pages (e.g.,
../doc/lats.ps) handy when reading this section.  The code is
well documented and you might even understand how to use LATS
from the code alone.

Surprisingly, the most significant line in the whole code might
be:

      include "../include/lats.inc"

Including files in fortran is completely compiler/OS dependent.
This statement worked on 10 unix platforms, but there are no
guarantees...

While I could have used a precompiler (e.g., testlats.F), I
believe that straight code is simpler and it puts all the
"gotchas" (is it the precompiler, usually cpp, or f77 that is
giving you an error?) in one place.

YOU WILL HAVE TO MODIFY THIS LINE ACCORDING TO WHERE YOU
INSTALLED LATS (E.G., WHERE YOU UNTAR'D THE DISTRIBUTION FILE)
AND MODIFY IT IN ALL APPLICATIONS SHOULD YOU MOVE THE
DISTRIBUTION.

This is one reason why I'm not a big fan of fortran for data
work...
 
At the highest level, a LATS application has three sections:

1)	describe the data and open a data file
2)	write the data
3)	close the data file

THE PRIME RULE IN LATS IS THAT ALL DATA DESCRIPTION / DEFINITION
MUST BE DONE BEFORE ANY DATA IS WRITTEN.

There's a little story behind this in the web document
lats.amip2.html and at first glance it may seem a bit cumbersome.

This restriction results from putting a common interface over two
very different formats.  You end up with a least common
denominator to maintain simplicity (netCDF is the source of this
restriction).  Follow this rule and you will avoid most of the
errors I have seen people make with LATS.

From the code:

.
.
.

      parameter(ni=72,nj=46,nk=3,nv=2)

      character*20 center
      character*20 model
      character*9 var

      dimension var(nv),id_var(nv)
      double precision rlon(ni),rlat(nj),plev(nk),slev
      real ta(ni,nj,nk),psl(ni,nj)
C         
C         define the production center and the process (model)
C 
      data center/'PCMDI'/
      data model/'lats'/
C         
C         set the variable names (AMIP II convention)
C         
C         psl - sea level pressure (Pa)
C         ta - air temperature (degK)
C
      data var/'psl','ta'/ 
C         
C         define the pressure levels (hPa)
C
      data plev/850.0,500.0,200.0/
C         
.
.
.

This section sets the variable names, the center and model names,
and a level dimension.  I used data statements, but there are
many options, including reading a configuration file.

Be very careful to define all parameters passed to subroutines!
FORTRAN does a lousy job (or none at all) of verifying that an
argument passed to a subroutine is properly "typed"
(e.g., an integer array).

The key line in this section is:

      double precision rlon(ni),rlat(nj),plev(nk),slev

ALL GRID VARIABLES (longitude, latitude and levels) MUST BE
DOUBLE PRECISION!!!  I even pass a double precision variable to
latswrite (the routine that writes the data) even though the
parameter is not used.  The reason is that how constants
("literals") are passed (and are "cast") is compiler/machine
dependent.

A good rule is,

NEVER PASS A FLOAT CONSTANT TO A LATS SUBROUTINE.  ALWAYS DECLARE
A VARIABLE AND INITIALIZE IT!

The first LATS routine (optional) is to load an external table
that might be different than internal LATS parameter table, e.g.,

      id_parmtab=latsparmtab("../table/amip2.lats.table")
      if(id_parmtab.eq.0) stop 'latsparmtab error'

The table loaded is in fact the internal table, but I make the
call to demonstrate how it is done in an application.

FOR REAL APPLICATIONS, THE PATH AND NAME OF THE TABLE FILE WILL
HAVE TO BE MODIFIED!!!  Further, for AMIP II we will be supplying
table updates, so it is a good idea to ALWAYS call latsparmtab.

Note that I return the output from latsparmtab to an integer
variable and then check it.

This suggests another good LATS rule:

ALWAYS RETURN RESULTS OF LATS SUBROUTINE CALLS TO UNIQUE
(INTEGER) VARIABLES AND CHECK THAT THE VALUE IS NONZERO (ZERO
INDICATES FAILURE).

This is not critical, as LATS will generally abort if a call
fails, but it is good practice nonetheless and helps trap
errors.

The next step is to create, or open, a LATS data file.  We define
many characteristics of the data in this step.

From the code,

.
.
.

        if(iconv.eq.1) latsconv=LATS_GRADS_GRIB
        if(iconv.eq.2) latsconv=LATS_COARDS

          id_fil = latscreate('latsout',
     $         latsconv,
     $         LATS_STANDARD,
     $         LATS_MONTHLY,1,center,
     $         model,'LATS GRIB test')
          print*,'LATS grib file id = ',id_fil
.
.
.

Note that I have already violated the rule about checking the
"return code" (id_fil).  If latscreate fails, your application
will abort even before it gets to the print* line.

The LATS_* variables come from the lats.inc file and are defined
in the man pages (see ../man or ../doc/lats.html), but here is
what we are doing:

The output file will be latsout.XXX (the file extension .XXX is
set internally to .grb for GRIB or .nc for netCDF).

When iconv = 1, the application outputs data conforming to the
"LATS_GRADS_GRIB" convention, i.e., GRIB data plus an interface
to GrADS (and VCS).

For iconv = 2, netCDF data conforming to the COARDS convention
(see ../doc/lats.amip2.html) or "LATS_COARDS" is output.  These
data are also readable by GrADS and VCS.

"LATS_STANDARD" means we will be using the "standard" or Gregorian
calendar (an AMIP II requirement).

"LATS_MONTHLY" indicates the data will be monthly means (or
accumulations) and,

"1" is the number of months between outputs when writing a
sequence of times, i.e., the "delta t" in time increments.

"center" and "model" add descriptive information and, in the case
of GRIB, set the process id in the product definition section
(this value is specified in the center section of the LATS
parameter table).

Finally, 'LATS GRIB test' is a comment that will appear in the
title line of the GrADS .ctl file or as an attribute in the
netCDF file.

Now that we have opened a file, we next describe what will be
going into it.

First the lon/lat grid.  

From the code:

.
.
.

          do i=1,ni
            rlon(i)=0.0+(i-1)*360.0/ni
          end do

          do j=1,nj
            rlat(j)=-90.0+(j-1)*180.0/(nj-1)
          end do

          id_grd=latsgrid("u54",LATS_LINEAR, ni, rlon, nj, 
     $         rlat)

.
.
.

This is a uniform lon/lat grid with a grid spacing of 5 deg in
longitude and 4 deg in latitude with pole points.

The most significant line here is,

            rlon(i)=0.0+(i-1)*360.0/ni

i.e., the wrap point is NOT included or

i=1 -> 0
i=2 -> 5
.
.
.
i=72 -> 355

Also, the longitudes start at 0 deg and increase towards the
east.  The start point is only an AMIP II recommendation, but the
monotonic increase towards the east is a REQUIREMENT, ditto for
EXCLUDING the wrap point (i.e., no duplicate longitudes such as
lon=0 and lon=360).
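
To make the grid rules concrete, here is a small sketch of the
lon/lat grid that testlats.f computes.  It is written in Python
purely for illustration (the real code is the fortran above); the
names rlon/rlat and the values ni=72, nj=46 come from the test
program.

```python
ni, nj = 72, 46

# Longitudes: start at 0, increase eastward, wrap point EXCLUDED
# (i=1 -> 0, i=72 -> 355; no duplicate lon=0/lon=360).
rlon = [i * 360.0 / ni for i in range(ni)]

# Latitudes: -90 to +90 inclusive, i.e., pole points included.
rlat = [-90.0 + j * 180.0 / (nj - 1) for j in range(nj)]

assert rlon[0] == 0.0 and rlon[-1] == 355.0   # no wrap point
assert rlat[0] == -90.0 and rlat[-1] == 90.0  # poles present
assert len(set(rlon)) == ni                   # no duplicate longitudes
```

This gives the 5 deg longitude by 4 deg latitude spacing described
above.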

The call to latsgrid uses the include file variable
"LATS_LINEAR".  Other RECTILINEAR grids are supported such as
gaussian ("LATS_GAUSSIAN") or other ("LATS_GENERIC").

Another LATS rule (really a STRONG recommendation) is:

ONLY ONE LON/LAT GRID CAN BE SPECIFIED PER FILE!!!!

However, if you have multiple grids (AMIP II REQUIRES ALL DATA BE
ON A COMMON GRID), you can create and close multiple files from a
single application.

The "u54" becomes an attribute in a netCDF file and is the name
of the grid inside your LATS application.

The next piece of code,

.
.
.

        id_lev=latsvertdim('pressure', 'plev', nk, plev)
.
.
.

defines a vertical dimension variable:

'pressure' - dimension attribute for netCDF
'plev'     - THE NAME OF THE VERTICAL DIMENSION IN THE LATS TABLE
nk         - number of levels
plev       - double precision array with the pressure values.

The pressure level variable, or vertical dimension, in the LATS
table is defined as decreasing with increasing index, and LATS
will check that the pressures are ordered properly (e.g., 1000,
850, 500, ..., 100).
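
The ordering requirement amounts to a simple monotonicity check.
The sketch below (Python, illustration only -- NOT the actual
LATS internals) shows the kind of test LATS applies:

```python
def pressure_levels_ok(plev):
    """True if pressures strictly decrease with increasing index,
    as the LATS table requires (e.g., 1000, 850, 500, ..., 100)."""
    return all(a > b for a, b in zip(plev, plev[1:]))

assert pressure_levels_ok([850.0, 500.0, 200.0])      # order used in testlats.f
assert not pressure_levels_ok([200.0, 500.0, 850.0])  # would be rejected
```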

Further, while all POSSIBLE levels need to be defined, they do
not all have to be written out.  That is, latsvertdim merely
DESCRIBES a vertical dimension; it does not have to be used.

The key feature of the LATS description subroutines latsgrid and
latsvertdim is that they define descriptions which are completely
independent of any file or variable -- grids and vertical
dimensions are associated with LATS VARIABLES.

From the code,

.
.
.


        id_var(1)=latsvar(id_fil,var(1),
     $       LATS_FLOAT,LATS_AVERAGE,id_grd,
     $       0, 'sfc variable')
        if(id_var(1).eq.0) stop 'latsvar(1) error'

        id_var(2)=latsvar(id_fil,var(2),
     $       LATS_FLOAT,LATS_AVERAGE,id_grd,
     $       id_lev, 'ua variable')
        if(id_var(2).eq.0) stop 'latsvar(2) error'
.
.
.

where we create two LATS variables.

VARIABLES ARE ASSOCIATED WITH FILES AND GRID/LEVELS ASSOCIATED
WITH VARIABLES.

latsvar IS PERHAPS THE MOST SIGNIFICANT CALL IN A LATS
APPLICATION.  IT DEFINES WHAT WILL ACTUALLY GO INTO THE FILE!!!

Let's break down the first call,

id_fil         - the file id from latscreate, i.e., WHERE the variable will go
var(1)         - the NAME of the variable based on the LATS table = "psl"
                 for sea-level pressure
LATS_FLOAT     - floating point data (e.g., real psl(1,1))
LATS_AVERAGE   - this variable represents an average during the month 
                 (LATS_MONTHLY in latscreate).  The temporal property of the 
                 variable is defined here.
id_grd         - the GRID id from latsgrid
0              - the level id; for "surface" parameters such as sea-level
                 pressure ("psl") this is ignored
'sfc variable' - descriptive information only appearing in a netCDF file 

Note that the vertical dimension of this LATS variable is defined
in the LATS table.  If you attempt to associate a sfc variable
with a level, or fail to associate a vertical dimension with a
variable that varies in the vertical, latsvar will fail.

From the amip2.lats.table LATS table file, psl is defined by the line,

psl | 2 | Mean sea-level pressure | Pa | float | msl |  0 | -999 | 2,3  | g1 |

msl associates psl with the level "msl" in the table, i.e.,

msl | Mean Sea Level  |		| single |   up	|  102 | 0 |  0 | 0

the keyword "single" in the table means "psl" is a surface variable
and cannot be associated with a vertical dimension.
 
In the second call,

id_fil         - the file id from latscreate and 
                 the same as for the first variable
var(2)         - the NAME of the variable based on the LATS table = "ta"
                 or air temperature 
LATS_FLOAT     - floating point data (e.g., real ta(1,1,1))
LATS_AVERAGE   - this variable represents an average during the month 
                 (LATS_MONTHLY in latscreate).  The temporal property
                 of the variable is defined here.
id_grd         - the GRID id from latsgrid
id_lev         - the level ID from latsvertdim
'ua variable'  - descriptive information only appearing in a netCDF file 

Again from the table,

ta  | 11 | Air Temperature    | K   | float |        |  1   | -999 | 1,3  | g1 |

the field after "float" is blank, which means "ta" can, and MUST,
be associated with a vertical dimension.

In contrast, surface air temperature "tas" is defined by,

tas  | 11 | Surface (2m) air temperature  | K  | float | sfc2m  |  1   | -999 | 2,6  | g1 |

The keyword "sfc2m" means it is a "surface" variable defined as
two meters above the ground.

See the ../doc/parmfile.* documents for more information on the
LATS table file.
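
To show how little machinery a table entry needs, here is a
sketch (Python, not PCMDI code) of splitting the pipe-delimited
psl line shown above.  Only the fields discussed in this section
are labeled; the remaining columns are left unnamed because their
meanings are documented in ../doc/parmfile.*.

```python
line = "psl | 2 | Mean sea-level pressure | Pa | float | msl |  0 | -999 | 2,3  | g1 |"

# Split on "|", strip whitespace, and drop the empty field that
# follows the trailing delimiter.
fields = [f.strip() for f in line.split("|")][:-1]

# The first six fields, per the discussion above: variable name,
# GRIB parameter id (2 = PRMSL in the wgrib output), description,
# units, data type, and the associated level name.
name, grib_id, description, units, datatype, level = fields[:6]

assert name == "psl" and level == "msl"   # "msl" ties psl to the Mean Sea Level entry
assert datatype == "float"
```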

You may be wondering, "where do I define the actual pressure
level?"  That happens in latswrite.

First, we need to (optionally) set an undefined value, i.e., a
flag value used where a proper grid value is not defined.
Undefined points might occur in models where interpolation to
pressure surfaces is NOT made below the ground.  These points are
given a "flag" value very different from the real or "defined"
data (e.g., an undefined value of 1e20 for winds in m/s).

From the code,

.
.
.

CCCC      ierr=latsmissreal(id_fil,id_var(1),1e20,1e13)

.
.
.

I have commented it out because the test data have no undefined
values and LATS performs slightly better when there are no
undefined points.

In the call,

id_fil    - this is a FILE property
id_var(1) - the variable id
1e20      - the undefined value
1e13      - tolerance in defining a point as undefined, i.e., 
            if a value is +/- 0.5e13 around 1e20, it is undefined

While it is possible to set DIFFERENT undefined values for each
variable, it is strongly recommended that you do not.  GRIB does
not care, but netCDF might.

However, if your data DOES have undefined points, you MUST CALL
latsmissreal!!!  While it is NOT significant to netCDF (it is
simply an attribute), it is CRUCIAL to the GRIB options.

The reasons are twofold.  First, and most importantly, GRIB will
use the undefined value in calculating its internal compression
parameters.  If this value is very big, the resulting
representation of the numbers in the array will be extremely
poor.  Second, GRIB will create a mask and only pack the DEFINED
points resulting in potentially higher compression.
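
The masking idea can be sketched as follows (Python, illustration
only -- not the LATS or GRIB internals).  It uses the values from
the commented-out latsmissreal call above, and the half-tolerance
interpretation described there:

```python
missing, tol = 1e20, 1e13

def is_undefined(v):
    # A value within +/- 0.5 * tolerance of the missing value is
    # treated as undefined.
    return abs(v - missing) <= 0.5 * tol

# Only the DEFINED points would be packed into the GRIB record.
field = [1013.25, 1e20, 998.4, 1e20 + 0.4e13]
defined = [v for v in field if not is_undefined(v)]

assert defined == [1013.25, 998.4]
```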

Yet another LATS rule,

ALWAYS CALL latsmissreal OR latsmissint IF YOU HAVE UNDEFINED
DATA POINTS AND ONLY USE ONE VALUE!!!!

The grid, the levels, and the variables have been defined.  NOW,
we are ready to write data.

From the code,

.
.
.

C         
C         valid time 00UTC 1 Jan 1979
C         
        do imo=1,2
          iyr=1979	
          ida=1
          ihr=0
.
.
.

iyr, imo, ida and ihr set the valid time of the data.  We'll be
writing out two months of data starting in January of 1979 (the
do imo loop).
imo loop).

NOTE: BY AMIP CONVENTION, THE VALID TIME OF A MEAN OR ACCUMULATED
QUANTITY IS REQUIRED TO BE SET TO THE BEGINNING OF THE PERIOD.

LATS does NOT require this, but we strongly urge you to follow
this convention for consistency's sake.

In this case, the first valid time is,

00UTC 1 January, 1979 (iyr = 1979, imo= 1, ida = 1, ihr= 0)

.
.
.

C         
C         create sfc field and write
C
	
          call read_data(psl,var(1),ni,nj,0,imo)
.
.
.

read_data is a little routine to generate some quasi-realistic
meteorological fields.  Substitute your own data-reading code...

.
.
.


C
C	I know this is overkill, but if you don't do this the hp version
C	will core dump!
C
	  slev=0.0D0
          id_write1=latswrite(id_fil,id_var(1),slev,
     $         iyr,imo,ida,ihr,psl)
          if(id_write1.eq.0) stop 'latswrite error - sfc'
.
.
.

The comment above says it all about the slev=0.0D0 line.

Let's breakdown the first latswrite call:

id_fil      - the file id (i.e., the file "where")
id_var(1)   - the surface variable id (i.e., the "what")
slev        - the LEVEL which is ignored for this variable but wants
              a double precision value anyway (i.e., the "where")
iyr         - the integer year (i.e., the "when")
imo         - the integer month
ida         - the integer day
ihr         - the integer hour (UTC)
psl         - the array with the data (2-D psl(ni,nj))

.
.
.
C         
C         create multi-level field
C
          do k=1,nk
            call read_data(ta(1,1,k),var(2),ni,nj,k,imo)
            id_write2=latswrite(id_fil,id_var(2),plev(k),
     $           iyr,imo,ida,ihr,
     $           ta(1,1,k))
            if(id_write2.eq.0) stop 'latswrite error - ua'

.
.
.

For the multi-level variable:

id_fil      - the file id (i.e., the file "where")
id_var(2)   - the multi-level variable id (i.e., the "what")
plev(k)     - the VALUE of the level (i.e., the "where" in the vertical)
iyr         - the integer year (i.e., the "when")
imo         - the integer month
ida         - the integer day
ihr         - the integer hour (UTC)
ta(1,1,k)   - the first element of the array (a "pointer" in ta) to
              start writing from.

.
.
.

Another important point about this sequence: it demonstrates a
further LATS rule -- ALL VARIABLES are written at EACH time.

That is, LATS does NOT currently support writing multiple
variables as a sequence of time series.

If we consider a data file as a 5-D array, LATS currently
supports this structure ONLY:

data(lon,lat,lev,var,time) (time slowest varying)

and NOT,

data(lon,lat,lev,time,var) (variable slowest varying)

This restriction may be eased in a future release.  Its purpose
in AMIP II is to simplify the data structures that PCMDI and the
diagnostic subprojects have to work with.
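
The supported write ordering can be sketched as nested loops
(Python, illustration only; the variable and level names mirror
testlats.f, and the tuples stand in for latswrite calls):

```python
times = ["1979-01", "1979-02"]
variables = {"psl": [None], "ta": [850.0, 500.0, 200.0]}  # var -> levels

# Time varies SLOWEST: at each time, write every variable (and
# every level of each multi-level variable) before advancing.
order = []
for t in times:
    for var, levels in variables.items():
        for lev in levels:
            order.append((t, var, lev))

assert order[0] == ("1979-01", "psl", None)   # first write of month 1
assert order[4] == ("1979-02", "psl", None)   # month 2 begins after ALL of month 1
```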

FINALLY, the LAST and VERY IMPORTANT STEP, is to close the file.

From the code,
.
.
.

        id_close = latsclose(id_fil)
        if(id_close.eq.0) stop 'latsclose error'

.
.
.

This call builds the GrADS interface to the GRIB data and
properly closes the data files at the operating system level.
If you don't close the file you'll get an error; the data MAY
still be OK, but other side effects are possible.

Let's summarize the "Rules of LATS:"


1) THE PRIME RULE -- ALL DATA DESCRIPTION / DEFINITION MUST
HAPPEN BEFORE ANY DATA ARE WRITTEN.

2) ALL GRID VARIABLES (longitude, latitude and levels) MUST BE
DOUBLE PRECISION.

3) NEVER PASS A FLOAT CONSTANT TO A LATS SUBROUTINE.  ALWAYS
DECLARE A VARIABLE AND INITIALIZE IT.

4) USE ONLY ONE LON/LAT GRID PER FILE AND ALWAYS EXCLUDE WRAP
POINTS IN LONGITUDE.

5) ALWAYS CALL latsmissreal OR latsmissint IF YOU HAVE UNDEFINED
DATA POINTS AND ONLY USE ONE VALUE FOR THE UNDEFINED FLAG.

6) ALL VARIABLES ARE WRITTEN AT EACH TIME.  TIME SERIES OF
VARIABLES ARE NOT SUPPORTED.


6.  Output from the test
------------------------

Two GIF images based on the LATS data from the test program have
been included.  While not very realistic physically, the values
and range of the sea-level pressure field (psl.gif) and the 500
hPa temperature at least pass the "planet" check (is this the
Earth's atmosphere?).

The main web document (http://www-pcmdi.llnl.gov/software/lats)
links to the pictures as well.


7.  Applications
----------------

If you understand the test code, and the rules of LATS, you
should have little problem incorporating LATS routines into
existing code.  Be careful in setting the location of the include
file and the libraries in your applications.

Look to our web site (http://www-pcmdi.llnl.gov/software/lats) in
the future for some of our LATS applications.  We hope these will
give users other ideas on how LATS can be used.

I am also interested in seeing YOUR LATS applications.  I find
that seeing how other people use the code helps my LATS
development and LATS application programming.

One additional recommendation: before embedding LATS routines in
a complicated, compute-intensive code, either test LATS outside
the code or add a logical switch to skip the calculations, so
that you can track down LATS problems more quickly.

Finally, please contact me at lats@pcmdi.llnl.gov should you want
C versions of the test code or have other questions/complaints.

Good Luck!!


8.  Troubleshooting
-------------------

Here are some problems you may run into:

a)	Library table of contents missing or defective.  

There are two commands you can try depending on the flavor of
unix you are running:

ranlib lib*.a

or

ar ts lib*.a 

I suspect you'll run into this on sunos4.  Here is what I got
before running ranlib:

ld: ../lib/liblats.a: warning: table of contents for archive is out of date; rerun ranlib(1)
ld: ../lib/libnetcdf.a: warning: table of contents for archive is out of date; rerun ranlib(1)


b)	locating the include file in the fortran program

If the program cannot find the include file, the internal LATS
variables such as LATS_COARDS may be erroneously set.  Again, you
are at the mercy of the fortran compiler.

c)	LATS "blows up" - core dumps or segmentation faults

This is often symptomatic of problems with the parameters being
passed to LATS routines.  For example, passing the wrong type or
an insufficient number of arguments MAY cause core dumps,
segmentation faults and other operating system nasties.  Whereas
C has strong type checking, fortran does not and will happily
(sometimes) accept an integer where a float is required, or
accept 7 parameters when 6 are needed.

Again, we cannot predict how your compiler will behave (or
misbehave).  By checking the return code and being careful what
you pass to the subroutines, you should avoid these problems.

Another example of parameter passing problems is NOT defining
variables being passed to a subroutine (passing a bad "pointer").

For example,

          id_fil = latscreate('latsout',
     $         latsconv,
     $         LATS_STANDARD,
     $         LATS_MONTHLY,1,center,
     $         model,'LATS GRIB test')


where center and/or model are not defined as character and
initialized to a value, e.g., doing this

c      character*20 center
c      character*20 model

c      data center/'PCMDI'/
c      data model/'lats'/

instead of

       character*20 center
       character*20 model

       data center/'PCMDI'/
       data model/'lats'/


This kind of error caused a core dump on an SGI irix5 platform.

I'd like to test for such "bad pointers," but I'm not sure how
well I could detect such compiler/machine dependent "features."

However, please contact me at lats@pcmdi.llnl.gov should you have
problems.



-------------------------------------------------------------

(1) NOTE:

	Although Bob Drach has been the principal developer of
	LATS, Mike Fiorino has taken over maintenance and development
	until Bob returns from sabbatical this coming autumn.  
	The GRIB interface was written by Mike and the netCDF
	interface by Bob.